The idea is that a certain simplification can work well enough to be valuable. It's similar to gravity, isn't it? We don't have a theory unifying gravity with quantum mechanics, but Newton's abstractions still serve us well in many situations. We don't know how gravity behaves at the micro level, but we can still observe the macro effects and build an abstraction out of that behavior.
And to be honest, it took us somewhere. Yes, we don't have AGI or anything close to it, but the products of machine learning are something we depend on every single day now.
The risk is that we've developed a formalism and ecosystem that works on entirely different principles from the brain's, even if it looks similar.
It still works, but it might not be a useful model to anyone studying the brain. I've yet to meet the neuroscientist who assumes it is, so perhaps that's not a problem.
> The risk is that we've developed a formalism and ecosystem that works on entirely different principles [...]
It's a possibility, but I sometimes have the feeling that people dismiss the idea that such a simple model of the brain could be enough to explain complex behaviors because they want to believe there is more to the brain.
I'm sure real neurons are very complex and difficult to model, but I also believe that the real challenge is to explain how neurons interact with each other, not how they behave individually.
You're right; I doubt that the current models are enough to capture the complexity of the brain.
But models based on incredibly simple neurons can already produce quite complex behaviors. They show how many simple computing units interacting with each other can lead to things like vision. And I do believe that this is a fundamental principle.
Maybe we should explore that idea and scale this model up, instead of rejecting it as "too simple" and hoping that the complexity of the brain will be fully explained by the discovery of some quantum effect in neurons.
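To make the "simple units, complex behavior" point concrete, here's a toy sketch: each unit is nothing but a weighted sum passed through a fixed threshold, yet two layers of them compute XOR, which no single such unit can. The weights are hand-picked for illustration, nothing is learned here.

```python
import numpy as np

def neuron(x, w, b):
    # The simplest possible unit: weighted sum of inputs, then a step function.
    return 1.0 if np.dot(w, x) + b > 0 else 0.0

def xor(x1, x2):
    x = np.array([x1, x2])
    h1 = neuron(x, np.array([1.0, 1.0]), -0.5)   # fires if x1 OR x2
    h2 = neuron(x, np.array([1.0, 1.0]), -1.5)   # fires if x1 AND x2
    # Output unit fires when OR is true but AND is not, i.e. XOR.
    return neuron(np.array([h1, h2]), np.array([1.0, -1.0]), -0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))  # 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0
```

Nothing about the individual unit predicts XOR; the behavior only exists in the interaction, which is the point about vision scaled down to two layers.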
> As it is, ML is running away rather fast from the integrator model by introducing explicit gating and nonlinearities in the neurons.
I think the idea of a non-linear activation function has always been around. But for the rest I agree.
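For what it's worth, here's a rough sketch of the contrast being drawn, assuming GLU-style gating (Dauphin et al., 2017) as one concrete example of "explicit gating"; the function and weight names are mine, not from any particular library:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def integrator_unit(x, w, b):
    # Classic "integrator" neuron: sum weighted inputs, squash once.
    return np.tanh(np.dot(w, x) + b)

def gated_unit(x, w_value, b_value, w_gate, b_gate):
    # GLU-style unit: a second, input-dependent path multiplicatively
    # gates the first, rather than a single fixed squashing function.
    value = np.dot(w_value, x) + b_value
    gate = sigmoid(np.dot(w_gate, x) + b_gate)
    return value * gate

rng = np.random.default_rng(0)
x = rng.normal(size=4)
print(integrator_unit(x, rng.normal(size=4), 0.0))
print(gated_unit(x, rng.normal(size=4), 0.0, rng.normal(size=4), 0.0))
```

The multiplicative interaction is the part that has no counterpart in the plain sum-and-squash picture, which is presumably what "running away from the integrator model" means.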