The way in which the brain solves classification tasks - a reactionary process that doesn't draw on deliberate reasoning ability - seems similar to how we recreate such abilities in ANNs. So in that sense it seems our AI systems are following the path laid out for us by our own brains.
However, it seems to me (and I may be wrong) that the DNC tries to implement procedural, deliberate thinking in a way that diverges from how the human brain does it.
The brain is clearly capable of letting us navigate complex procedures (like interpreting subway maps), something beyond the abilities of modern AI, which mainly excels at more reactionary classification. So the question becomes: why is DeepMind diverging from the way the brain works in the case of the DNC?
Is it because we don't actually understand how the brain implements reasoning, or is it because in order to do so in the brain's own way it would exceed our technical capabilities? Or both?
> The way in which the brain solves classification tasks - a reactionary process that doesn't draw on deliberate reasoning ability
Does this mean that when I see an animal my brain reactively just knows "cat" instead of going "well, it's a small thing with fur and whiskers, fits cat"?
> Does this mean that when I see an animal my brain reactively just knows "cat" instead of going "well, it's a small thing with fur and whiskers, fits cat"?
Yeah – I mean, you would notice it if the process were conscious, right?
I'm mostly skeptical of brain/AI analogies. But there are quite a few examples where the experiences and mechanisms are eerily similar. Deep Dream was probably the first I saw – the similarity to hallucinations such as those induced by LSD is just too overwhelming to be dismissed.
Deep Dream is actually a good demonstration of pattern recognition, and as such it shows how similarly the brain operates in this domain. Face recognition is the most advanced of these systems, which is why we have the tendency to see faces everywhere (clouds, :) etc.). There are some nice visualisations of the intermediate layers of object recognition NNs, and they seem to operate similarly to the brain – starting with edge detection, having certain nodes specialise on very specific tasks (parallel lines, two small circles on a horizontal axis etc.), then aggregating these.
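To make the "early layers are edge detectors" point concrete, here's a minimal numpy sketch (my own toy example, not anyone's actual network): a hand-written Sobel filter convolved over a synthetic image responds only where a vertical edge sits, which is roughly what the first conv layer of a trained object-recognition net ends up learning on its own.

```python
import numpy as np

def conv2d(img, kernel):
    """Naive 2-D valid convolution (cross-correlation) - enough for a demo."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# A toy "image": left half dark, right half bright -> one vertical edge.
img = np.zeros((8, 8))
img[:, 4:] = 1.0

# Sobel kernel: the hand-crafted ancestor of what a first conv layer learns.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

response = conv2d(img, sobel_x)
# The filter fires only in the output columns straddling the edge;
# everywhere else the response is zero.
print(response.max(), response.min())  # 4.0 0.0
```

A trained network stacks many such learned filters, and the later layers aggregate their responses into the "two circles on a horizontal axis" style detectors mentioned above.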
That's vision, and probably music: layers of patterns. I'd speculate that consciousness and language operate with a somewhat different paradigm that we haven't cracked yet. NNs have made some impressive progress with text and speech, but I don't see the same sort of analogy to anything the brain does. But as far as I know we actually know much less about how these systems work in the brain, anyway.
I'd say that the classification of a cat as a cat is a fundamentally different process from, say, solving a complex math problem.
The main difference seems to be that the former is a subconscious process and the latter is conscious, but it's more than that: to solve the complex problem we're relying on explicit, selective recall from our long-term memory, then transferring that knowledge to our short-term memory for conscious processing. I don't think this transfer of information from long-term to short-term memory occurs when looking at a cat.
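That "selective recall" step is, interestingly, the one piece the DNC does model explicitly: its controller reads from external memory by content-based addressing. Here's a stripped-down numpy sketch of that idea (an illustration of the mechanism as I understand it from the paper, not DeepMind's code; `beta` is the sharpness parameter):

```python
import numpy as np

def content_read(memory, key, beta=10.0):
    """Content-based addressing: score every memory row by cosine
    similarity to a query key, softmax the scores, and return the
    attention weights plus the weighted read-out."""
    norms = np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
    sims = memory @ key / norms          # cosine similarity per row
    weights = np.exp(beta * sims)        # sharpened softmax
    weights /= weights.sum()
    return weights, weights @ memory

# Toy "long-term memory": three stored vectors.
memory = np.eye(3)

# A noisy cue close to the second row pulls that row back out -
# recall by partial match rather than by exact address.
w, readout = content_read(memory, np.array([0.1, 0.9, 0.0]))
print(w.argmax())  # -> 1
```

The analogy to cued recall is loose, but it's the closest thing in the architecture to "fetch the relevant fact from long-term store into working memory."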