George Zarkadakis's In Our Own Image (2015) describes six metaphors people have used over the last two millennia to explain human intelligence. At first it was the gods infusing us with spirit. After that it has always been engineering: once the first water clocks and qanat hydraulics appeared, they seemed a good explanation of everything, and the flow of different fluids in the body, the "humors", explained physical and mental function. Later it was mechanical engineering. Some of the greatest thinkers of the 1500s and 1600s -- including Descartes and Hobbes -- assured us it was tiny machines, tiny mechanical motions. In the 1800s Hermann von Helmholtz compared the brain to the telegraph. So of course, after the invention of computers, came the metaphor of the brain as a computer. This became absolutely pervasive, and we now have a very hard time describing our thinking without falling back on this metaphor. But of course it's just a metaphor: much as our brain is not a tiny machine made of gears, it is also not "prima facie digital", despite John von Neumann claiming exactly that in 1958. It is, indeed, quite astonishing how readily everyone believes this without a shred of evidence. It's not as if von Neumann gained some sudden insight into the actual workings of the brain. Like his forefathers, he saw a resemblance between the perceived workings of the brain and the latest in engineering, and so he immediately declared that's what it is.
Our everyday lives should make it evident how little the working of our brain resembles that of our computers. Our experiences change our brains somehow, but exactly how, we don't have the faintest idea. We can re-live these experiences somewhat, which is what a memory is, but the mechanism is by no means perfect. There's the Mandela Effect https://pubmed.ncbi.nlm.nih.gov/36219739/ and of course the "tip of my tongue" phenomenon, where we almost remember a word and then, perhaps minutes or hours later, it just bursts into our consciousness. If the brain were a computer, why would learning be so hard? Read something and bam, it's written to your memory, right? Right? Instead, something incredibly complex is going on: in a 2016 fMRI study of the survivors of a plane crash, large swaths of the brain lit up upon recall. https://pubmed.ncbi.nlm.nih.gov/27158567/ Our current best guess is that somehow it's the connections among neurons which change, and some of these connections together form a memory. There are 100 trillion connections in there, so we certainly have our work cut out for us.
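To get a feel for the scale of those 100 trillion connections, here is a rough back-of-envelope sketch; the one-number-per-connection and 4-bytes-each figures are assumptions purely for illustration, not neuroscience claims:

```python
# Back-of-envelope scale check (illustrative assumptions, not neuroscience):
# if each of the ~100 trillion connections were summarized by a single
# 32-bit number, how much storage would a full "dump" take?
connections = 100e12           # ~100 trillion synaptic connections
bytes_per_connection = 4       # one 32-bit value each (assumed for scale)

terabytes = connections * bytes_per_connection / 1e12
print(f"{terabytes:.0f} TB")   # prints "400 TB"
```

And that is before accounting for the fact that a synapse is not a single number but a living structure whose behavior we don't fully understand.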
And so here we are: people believe they can copy human intelligence when they do not even know what they are trying to copy, falling for the latest metaphor for the workings of the human brain and believing it to be more than a metaphor.
Helmholtz didn't say the brain was like a telegraph; he was talking about the peripheral nervous system. And he was right: signals sent from visual receptors and pain receptors are the same stuff being interpreted differently, just as telegraphing "Blue" and "Ouch" would be. That and the spirit of the gods have no place on this list and strain the argument.
Hydraulics, gear systems, and computers are all Turing complete. If you're not a dualist, you have to believe that each of these substrates would, in principle, be capable of implementing a brain.
The history described here is one where humans invent a superior information processor, notice that it and humans both process information, and conclude that they must be the same physically. The last step is obviously flawed, but they were hardly going to conclude that the brain processes information with electricity and neurotransmitters when the height of technology was the gear.
Nowadays, we know the physical substrate that the brain uses. We compare brains to computers even though we know there are no silicon microchips or motherboards with RAM slots involved. We do that because we figured out that it doesn't matter what a machine uses to compute; if it is Turing complete, it can compute exactly as much as any other computer, no more, no less.
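To make the substrate-independence point concrete, here is a toy Turing machine simulator; the transition table, not the hardware, is the computation, so anything that can read a tape, consult the table, and write back -- gears, water, or silicon -- computes the same function. The increment machine below is my own illustration, not something from the book:

```python
# Minimal Turing machine simulator. The example machine increments a
# binary number by one; nothing about it depends on what the "tape"
# and "head" are physically made of.

def run_turing_machine(tape, transitions, state="start", blank=" "):
    """Run until the machine enters the 'halt' state; return the tape."""
    tape = list(tape)
    pos = 0
    while state != "halt":
        if pos < 0:                  # extend the tape with blanks as needed
            tape.insert(0, blank)
            pos = 0
        elif pos >= len(tape):
            tape.append(blank)
        symbol = tape[pos]
        state, write, move = transitions[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape)

# Scan right to the end of the number, then carry 1s back to the left.
INCREMENT = {
    ("start", "0"): ("start", "0", "R"),
    ("start", "1"): ("start", "1", "R"),
    ("start", " "): ("carry", " ", "L"),  # blank past the last digit
    ("carry", "1"): ("carry", "0", "L"),
    ("carry", "0"): ("halt", "1", "L"),
    ("carry", " "): ("halt", "1", "L"),   # carried past the left edge
}

print(run_turing_machine("1011 ", INCREMENT).strip())  # prints "1100" (11 + 1 = 12)
```

The same table could be "run" by a person with pencil and paper, which is exactly Turing's original framing.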
That's interesting, but technology has always been about augmenting or mimicking human intelligence. The Turing test is literally about computers being able to mimic humans so well that real humans wouldn't be able to tell them apart. We're now past that point in some areas, but we never really prioritized thinking about what intelligence _actually_ is, and how we can best reproduce it.
At the end of the day, does it matter? If humans can be fooled by artificial intelligence in pretty much all areas, and that intelligence surpasses ours by every possible measurement, does it really matter that it's not powered by biological brains? We haven't quite reached that stage yet, but I don't think this will matter when we do.
> If humans can be fooled by artificial intelligence in pretty much all areas,
This is just preposterous. You can be fooled if you have no knowledge of the area, but that's about it. With current tech there is not, and cannot be, anything novel. Guernica was novel. No matter how you train a probabilistic model on every piece of art produced before Guernica, it will never, ever create it.
There are novel novels (sorry for the pun) every few years. They delight us with genuinely new turns of prose, unexpected plot twists etc.
And yes, we have cars which move faster than a human can, but they don't compete in the high jump or climb rock walls. Even though we have a fairly good idea of the mechanical workings of the human body -- muscles, joints, and all that -- we can't make a "tin man", not by far. As impressive as the Boston Dynamics demos are, they are still very, very far from this.
> With current tech there is, there can not be anything novel.
I wasn't talking about current tech, which is obviously not at human levels of intelligence yet. I would still say that our progress in the last 100 years, and the last 50 in particular, has been astonishing. What's preposterous is expecting that we can crack a problem we've been thinking about for millennia in just 100 years.
Do you honestly think that once we're able to build AI that _fully_ mimics humans by every measurement we have, that we'll care whether or not it's biological? That was my question, and "no" was my answer. Whether we can do this without understanding how biological intelligence works is another matter.
Also, AI doesn't even need to fully mimic our intelligence to be useful, as we've seen with the current tech. Dismissing it because of this is throwing the baby out with the bath water.
> What made you think that is measurable and if it is then we can build something like that ever?
What makes you think it isn't, and that we can't? The Turing test was proposed 75 years ago, and we have many cognitive tests today which current gen AI also passes. So we clearly have ways of measuring intelligence by whatever criteria we deem important. Even if those measurements are flawed, and we can agree that current AI systems don't truly understand anything but are just regurgitation machines, this doesn't matter for practical purposes. The appearance of intelligence can be as useful as actual intelligence in many situations. Humans know this well.
Yes, I read the article. There's nothing novel about saying that current ML tech is bad at outliers, and showcasing hallucinations. We can argue about whether the current approaches will lead to AGI or not, but that is beside the point I was making originally, which you keep ignoring.
Again, the point is: if we can build AI that mimics biological intelligence it won't matter that it's not biological. And a sidenote of: even if we're not 100% there, it can still be very useful.
Again, the point is: you cannot build AI that mimics biological intelligence, because you do not even have any idea what biological intelligence is. Once again, what's Picasso's velocity of painting?
That's beside my point, but they augment it. Agtech enhances our ability to feed ourselves; cars enhance our locomotor skills; medicine enhances our self-preservation skills, etc.