2. hopefully with time we'll have better approaches to engineer all things that are engineered
No, at the moment we go for the biggest and shiniest lens we can get our hands on and hope it's capable enough to tackle our problem. If it is, we can waste time designing a smaller, more constrained lens to ship to consumers.
I'm curious if you two are going to be talking past each other with the first point. Any chance I could get you both to explore what you mean by composable?
These days any method that uses gradient descent to optimize a computational graph gets branded as deep learning. It's a very general paradigm that allows for almost any composition of functions as long as it's differentiable. If that's not composable then I don't know what is.
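A minimal sketch of that kind of composability, in plain Python with no framework: two tiny functions composed into one computational graph, trained end-to-end with gradient descent, with gradients flowing back through the composition via the chain rule. The function shapes and constants here are arbitrary choices for illustration.

```python
# Compose two differentiable functions, g(x) = w1*x and f(h) = w2*h,
# and optimize both parameters jointly with plain gradient descent.
def fit(x, target, steps=200, lr=0.1):
    w1, w2 = 0.5, 0.5          # parameters of the two composed functions
    for _ in range(steps):
        h = w1 * x             # g(x): first function
        y = w2 * h             # f(g(x)): second function composed on top
        loss = (y - target) ** 2
        # Chain rule: the gradient of the loss flows back through both layers.
        dy = 2 * (y - target)
        dw2 = dy * h
        dw1 = dy * w2 * x
        w1 -= lr * dw1
        w2 -= lr * dw2
    return w1, w2, loss

w1, w2, loss = fit(x=1.0, target=3.0)
print(loss)  # ends up very close to 0: the composed graph fits the target
```

Nothing about the graph structure is special; any pipeline of differentiable pieces can be bolted together and trained the same way, which is the sense in which the paradigm is composable.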
My guess for what they meant was that you can't compose the trained models.
For example, a classifier that tells you "cat or not" can't be combined with one that says "running or not" to get a "running cat" detector.
The benefit would be that you could put "off the shelf" models together into products. Instead, you have to train pretty much everything from the ground up. And we compare against others doing the same.
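For concreteness, here is what that wished-for composition would look like at the interface level: combine two independent binary classifiers by multiplying their scores as if they were independent probabilities. The two models are hypothetical stand-in stubs; real trained models rarely compose this cleanly (mismatched training data and decision boundaries), which is the parent comment's point.

```python
def cat_model(image):
    # Stub for a pretrained "cat or not" classifier (hypothetical).
    return 0.9

def running_model(image):
    # Stub for a pretrained "running or not" classifier (hypothetical).
    return 0.8

def running_cat(image):
    # Naive composition: treat the two scores as independent probabilities.
    return cat_model(image) * running_model(image)

print(running_cat(None))  # ~0.72 with these stub scores
```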