He would be correct if the creation of AI depended on a thorough understanding of neuroscience. But I hope we needn't wait that long.
It's the old "Birds fly. To fly, man must fully understand bird flight." argument. Yet today we still don't completely understand bird flight, but planes _do_ fly.
The analogy is not complete: we have yet to find the "air", the "turbulence", the "Bernoulli principle", etc., of intelligence. That remains to be determined. But this approach is the only reasonable one.
As the author implies, waiting for neuroscience is like waiting for Godot.
Exactly, we've had airplanes for a century, but working ornithopters are still something of a black art. Like so many things in engineering, it is often easier, and better, not to pay naturally occurring phenomena undue attention. We are capable of engineering better.
The lack of AI progress in the last 30 years is not a good sign. You're also ignoring things like the materials that have come out of studying the pads on geckos' feet.
There has not been a lack of AI progress in the past 30 years, but rather some sort of 'No True Scotsman'-esque raised-expectations phenomenon. Natural language processing, computer vision, etc. have seen dramatic improvements in recent years, but every time there is an improvement people say "well, that's just standard stuff, not real AI". As long as a technology is real, people seem weirdly unwilling to accept that it is also an example of AI. The "real stuff" seems to include "fictional" as an integral part of its definition.
I'm specifically talking about general AI. I don't think we're any closer to that than when the problem was first posed, although I'm willing to change my mind if you make a good argument for it.
I guess that's a matter of definition. If humans are the only example of "intelligence" that we know of, then it seems natural that artificial intelligence would concern emulating humans.
I'm sure there are a number of bots that are able to fool people a lot of the time, and many have been around for a long time, but that isn't what people think of when you say AI.
Exactly, and he's talking about the human practice of neuroscience. When we manage to build a sufficiently advanced AI, we can set it to work on these types of problems; that's what's exciting to me.