good luck with that. DeepMind should sponsor a B. F. Skinner award, to honor the father of the behaviorist theories of 'reward and punishment' they treat as a sort of all-encompassing theory of everything related to cognition. At least now they are torturing GPUs and not some poor lab animals.
on a serious note, the only positive outcome of all this shameless PR is that the heavy investment in ML/RL might trickle down to actual science labs and fundamental neuroscience research, which might move us forward towards understanding natural intelligence, a prerequisite for creating an artificial one.
> towards understanding natural intelligence, a prerequisite for creating an artificial one.
I've thought about this before, and I'm not convinced it's really a prerequisite. Naturally developed intelligence may, to my mind, actually be highly constrained and inefficient, because it was limited to what was biologically feasible; i.e., there may be simpler ways of achieving comparable results. Natural intelligence does have the benefit of being an actual working model, but deciphering that black box may be just as hard as developing a working theory from first principles.
yes, it's a recurring theme: "do we really need to mimic birds in order to build airplanes?", etc.
I think someone serious about AI should treat it not as an engineering problem but as a science, like physics, which starts with a model of nature and experiments to prove or disprove the theory. Nature provides the constraints within which a theory is developed, which radically limits the "search space" of theories. Otherwise it's a bit like throwing things at the wall and seeing what sticks, which is the primary method of current AI research.