At some point a consensus of AI researchers will decide that we have a generally intelligent system (one able to adapt to and learn many tasks, pass a legitimate Turing test, etc.). Right now, no AI researcher would claim that, even with our breakthroughs in specialized tasks.
The moving goalpost has been less an issue of "what is AI" and more an issue of "what are the difficult tasks at the edge of AI research". People with a passing interest (even most programmers) don't distinguish between the two. Of course "difficult tasks in AI research" is a moving goalpost, and it will keep moving until we achieve general intelligence and beyond. That is a requirement for progress in AI research. If those goalposts stop moving before we have general intelligence, then something is wrong in the field.
When researchers (not the general public) start arguing over whether the goalposts should be general intelligence or superintelligence, that is when we will know we have AI in the traditional sense. When we are trying to figure out how to get adult-human-level intelligence to train in hours or days on the top supercomputers rather than months or years, that is when we have AI. Even then, if training requires that much computational intensity, how many top supercomputers are going to be dedicated to hosting a single human-level intelligence?
You could train current algorithms used in AI research for decades and have nothing resembling general human intelligence.
I agree, but I guess my point (in this comment and others) would be that we should stop thinking of intelligence, consciousness, free will and other attributes as hard lines and instead treat them as gradients or quantities.
> 1) sounds like exactly what deep learning is...map more complex abstractions in each succeeding layer
Only at such a high level of abstraction as to be meaningless.
> 2) are computers that can understand speech, recognize faces, drive cars, beat humans at Jeopardy really 'nothing at all like human intelligence?'
They are not. Hundreds of man-years of engineering time go into each of those systems, and none of those systems generalizes to anything other than the task it was created for. That's nothing like human intelligence.
> Only at such a high level of abstraction as to be meaningless.
I'm not sure what this means, or how the abstractions are meaningless. From Gabor filters at the lowest layers up to concepts like "dog" at the highest, the abstractions are quite meaningful (in that they function well), even if not to us.
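To make that concrete, here is a minimal, purely illustrative sketch (assuming PyTorch; the layer sizes and label count are made up) of the kind of stack being discussed. When a network like this is trained on images, the first conv layer tends to end up with Gabor-like edge filters and the final layer with object-level concepts like "dog":

```python
# Illustrative only: each stage builds on the previous one's features,
# so the representations get more abstract at every layer.
import torch
import torch.nn as nn

layers_of_abstraction = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=5, padding=2),   # layer 1: edges / oriented (Gabor-like) filters
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=5, padding=2),  # layer 2: textures, simple shapes
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(64, 128, kernel_size=3, padding=1), # layer 3: object parts (eyes, ears, wheels)
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(128, 1000),                         # layer 4: whole-object concepts ("dog", ...)
)

scores = layers_of_abstraction(torch.randn(1, 3, 224, 224))  # one fake RGB image
print(scores.shape)  # torch.Size([1, 1000])
```

Nothing in that stack is hand-written per concept; the hierarchy of abstractions emerges from training.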
> They are not. Hundreds of man-years of engineering time go into each of those systems, and none of those systems generalizes to anything other than the task it was created for. That's nothing like human intelligence.
This isn't strictly true if we look at the ability to generalize as a sliding scale. The degree of generalization has increased significantly from expert systems to machine learning to deep learning. We have not reached human levels of generalization, but we are approaching them.
Consider that DL can identify objects, people, and animals in photos it has never seen before, and that, more generally, the success of modern machine learning is its ability to generalize from training time to test time rather than relying on hand engineering for each new case (a toy sketch of what that means is below). Newer work is even able to learn from just a few examples[0] and then generalize beyond that. Or take the Atari work from DeepMind, where the same network architecture and training procedure learn dozens of games; none of those networks were created specifically for Breakout or Pong.
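As that toy sketch (scikit-learn assumed, with its small digits dataset standing in for photos; the numbers are nothing special): the model below is scored only on images it never saw during training, so anything above chance is generalization rather than per-case hand engineering.

```python
# Minimal sketch of "generalizing from training to test time":
# train on one split, evaluate on images held out from training.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
model.fit(X_train, y_train)                                # learns from the training images only
print("held-out accuracy:", model.score(X_test, y_test))   # evaluated on digits it never saw
```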
It's also not entirely fair to hold those hundreds of man-years of engineering against these systems, considering most of them are trained from scratch (from random initialization). Humans, however, benefit from all the evolution that preceded them, which operates on a time scale far exceeding any human engineering effort. :)
What matters here is the concept itself (deep learning as a generic technique) and its scalability: not the specifics we have today, but the specifics we will have 20 years from now.
The concept is proven, all that matters now is time...
> The concept is proven, all that matters now is time...
This is a very naive point of view. You could deep-learn with a billion times more processing power and a billion times more data for 20 years and it would not produce a general artificial intelligence. Deep learning is a set of neural network tweaks that is insufficient to produce AGI. Within 20 years we may have enough additional tweaks to make an AGI, but I doubt that algorithm will look anything like the deep learning we have today.
1) I am aware of deep learning and the improvements made circa 2012 or so, but it still fails to draw advanced correlations between differing training sets, and it lacks a strong memory and a meaningful distinction between high-level abstractions and low-level inputs (although all of these are being addressed, in one way or another, with incremental improvements). It also lacks an effective way to share learned knowledge among separate entities or to reteach it.
2) These things are all very human-like, but they are still sub-problems IMHO :)
Yep - also, we have a strong tendency to use feedforward NNs right now, but I have a sneaking suspicion that the future lies in something closer to recurrent NNs. Or probably something with more complex, automata-like properties (IIRC there is also a network that adds a Turing-machine-style external memory, the Neural Turing Machine I think).
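For what it's worth, the feedforward-vs-recurrent distinction fits in a few lines (plain NumPy, made-up sizes): the feedforward map has no state between inputs, while the recurrent cell feeds its own hidden state back in; that hidden state is exactly the kind of memory the Turing-machine-style variants then make explicit and addressable.

```python
# Toy contrast between a stateless feedforward map and a recurrent cell.
import numpy as np

rng = np.random.default_rng(0)
W_in, W_h = rng.normal(size=(8, 4)), rng.normal(size=(8, 8))

def feedforward(x):
    return np.tanh(W_in @ x)             # output depends only on the current input

def recurrent(xs):
    h = np.zeros(8)                      # hidden state = the network's "memory"
    for x in xs:
        h = np.tanh(W_in @ x + W_h @ h)  # each step mixes new input with past state
    return h

sequence = [rng.normal(size=4) for _ in range(5)]
print(feedforward(sequence[-1]))  # ignores the first four inputs entirely
print(recurrent(sequence))        # summarizes the whole sequence
```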
Realize that the computer's advantages can also lead to weaknesses. By this I mean that a computer's powerful and precise memory allows it to work off raw correlations, with less need to abstract or to seek out causal models. While that might turn out okay for large, detailed, but ultimately simple (stationary) patterns, it will not be so advantageous in more dynamic settings, or in scenarios with multiple levels of organization that each have different dynamics.
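Here is a toy sketch of that worry (NumPy + scikit-learn, all numbers invented): at training time a "shortcut" feature is almost perfectly correlated with the label, so the model leans on the raw correlation; when the world changes and only the causal feature still matters, accuracy drops even though the causal relationship never changed.

```python
# Raw correlation vs causal structure under a shift in the data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, shortcut_works):
    y = rng.integers(0, 2, size=n)
    causal = (2 * y - 1) + rng.normal(0, 1.0, size=n)        # noisy, but truly tied to y
    if shortcut_works:
        shortcut = (2 * y - 1) + rng.normal(0, 0.1, size=n)  # clean, but only a correlation
    else:
        shortcut = rng.normal(0, 1.0, size=n)                # the correlation is gone
    return np.column_stack([causal, shortcut]), y

X_train, y_train = make_data(5000, shortcut_works=True)
X_shift, y_shift = make_data(5000, shortcut_works=False)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("train accuracy:  ", model.score(X_train, y_train))  # high: the model leans on the shortcut
print("shifted accuracy:", model.score(X_shift, y_shift))  # noticeably lower: the pattern was not causal
```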