> Does this mean that when I see an animal my brain reactivity just knows "cat" instead of going "well it a small thing with fur and whiskers, fits cat"?
Yeah – I mean, you would notice it if the process were conscious, right?
I'm mostly skeptical of brain/AI analogies. But there are quite a few examples where the experiences and mechanisms are eerily similar. Deep Dream was probably the first I saw – the similarity to hallucinations such as those induced by LSD is too striking to be dismissed.
Deep Dream is actually a good demonstration of pattern recognition, and as such it shows how similarly the brain operates in this domain. Face recognition is the most advanced of these systems, which is why we tend to see faces everywhere (clouds, ":)", etc.). There are some nice visualisations of the intermediate layers of object recognition NNs, and they seem to operate similarly to the brain – starting with edge detection, having certain nodes specialising in very specific tasks (parallel lines, two small circles on a horizontal axis, etc.), then aggregating these.
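To make the "edge detection in early layers" point concrete, here's a minimal sketch of how a first-layer-style filter responds to an edge. The Sobel-like kernel below is hand-written, not learned, but it's the same shape of filter that visualisations of trained first conv layers typically reveal; the `conv2d` helper is a naive illustrative implementation, not a real NN library:

```python
import numpy as np

def conv2d(image, kernel):
    """Naive valid-mode 2D convolution (cross-correlation, as NN conv layers use)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A toy image: black background with a bright vertical bar.
img = np.zeros((8, 8))
img[:, 3:5] = 1.0

# Sobel-style kernel: responds strongly to vertical edges,
# much like filters found in the first layer of a vision NN.
vertical_edge = np.array([[-1.0, 0.0, 1.0],
                          [-2.0, 0.0, 2.0],
                          [-1.0, 0.0, 1.0]])

response = conv2d(img, vertical_edge)
# The response peaks along the left and right borders of the bar
# (positive on one side, negative on the other) and is zero in
# uniform regions -- the filter only "fires" where its pattern appears.
print(np.abs(response).max())
```

Later layers then combine many such filter responses into detectors for progressively more abstract patterns, which is the layering the visualisations show.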
That's vision, and probably music: layers of patterns. I'd speculate that consciousness and language operate on a somewhat different paradigm that we haven't cracked yet. NNs have made some impressive progress with text and speech, but I don't see the same sort of analogy to anything the brain does. Then again, as far as I know we know much less about how these systems work in the brain anyway.