Interesting to see the number of "winters" AI has gone through (analogous, to a lesser extent, to VR).

I see increasing compute power, an increased learning set (the internet, etc), and increasingly refined algorithms all pouring into making the stuff we had decades ago more accurate and faster. But we still have nothing at all like human intelligence. We can solve little sub-problems pretty well though.

I theorize that we are solving problems slightly the wrong way. For example, we often focus on totally abstract input like a set of pixels, but in reality our brains have a more gestalt / semantic approach that handles higher-level concepts rather than series of very small inputs (although we do preprocess those inputs, i.e. rays of light, to produce higher level concepts). In other words, we try to map input to output at too granular of a level.

I wonder though if there will be a radical rethinking of AI algorithms at some point? I tend to always be of the view that "X is a solved problem / no room for improvement in X" is BS, no matter how many people have refined a field over any period of time. That might be "naive" with regards to AI, but history has often shown that impossible is not a fact, just a challenge. :)




Human intelligence is basically the ability to solve a large collection of sub-problems. As models learn how to solve more and more sub-problems, they come closer and closer to human intelligence. Right now the focus is on solving important specific sub-problems better than humans, rather than on the ability to solve a much wider variety of sub-problems.

Machine learning and human learning happen in much the same way. We have a dataset of memories, and we have a training dataset of results. We then classify things based on pattern matching. The current human advantage is an ability to store, acquire and access certain kinds of data more efficiently, which helps in solving a wider variety of problems. For problems in which machines have found out how to store, acquire and access data more efficiently (such as chess) machines are far superior to humans.
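
To make the "classify by pattern matching against stored data" framing concrete, here is a minimal sketch: a 1-nearest-neighbour rule over toy, made-up data. Real systems are far more elaborate, but the shape is the same.

    import numpy as np

    # "Memories": stored examples and the labels learned for them (toy numbers).
    memories = np.array([[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]])
    labels = np.array(["cat", "dog", "cat", "dog"])

    def classify(x):
        # Match the new input against every stored memory and copy the label
        # of the closest one.
        distances = np.linalg.norm(memories - x, axis=1)
        return labels[np.argmin(distances)]

    print(classify(np.array([0.15, 0.15])))  # -> cat
    print(classify(np.array([0.85, 0.95])))  # -> dog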


Human intelligence is so much more than that. I feel like we vastly underestimate the problem when we make it sound so simple. "Oh well the machines are basically the same as us, so we just have to get them to be able to solve more sub-problems and then we've got it!"

> Right now the focus is on solving important specific sub-problems better than humans, rather than on the ability to solve a much wider variety of sub-problems.

The focus is there because there are business applications and money there. Do researchers really think that some version of a chess-bot or go-bot or cat-image-bot or jeopardy-bot will just "wake up" one day when it reaches some threshold? That this approach is truly the best path to AGI?

A machine can play chess better than a human because the human used its knowledge to build a chess-playing machine. That's all it can do. It takes chess inputs and produces chess outputs. It doesn't know why it's playing chess. It didn't choose to learn chess because it seemed interesting or valuable. No machine has ever displayed any form of "agency." A chatbot that learns from a corpus of text and rehashes it to produce "realistic" text doesn't count either.

You could argue many of the same things about humans themselves. Consciousness is an illusion, we don't have true agency either, we also just rehash words we've heard - I believe these things. But it seems clear to me that what is going on inside a human brain is so far beyond what we have gotten machines to do. And a lot of that has to do with the fact that we underwent a developmental process of billions of years, being molded and built up for the specific purpose of surviving in our environment. Computers have none of that. We built a toy that can do some tricks. Compared to the absolute insanity of biological life, it's a joke. I think it is such hubris to say that we're anywhere close to figuring out how to make something that rivals our own intelligence, which itself is well beyond our comprehension.


This is basically strong AI vs weak AI. I don't know what the ultimate solution is - it could be exactly as you describe. :) My theory is just that it will need to be generally applicable, on domains it is not trained on, if it is to reach human-level intelligence.


Human-level intelligence is not generally applicable on domains it is not trained on, so holding AI to this standard is ridiculous. Humans need to be taught just like machines do.


Yes, but there is cross-over of domains. For example, say you learned how to ride a bicycle. This might aid you in how fast you learn to ride a motorcycle, or vice versa. (Might be a bad example but I hope it illustrates the point)


AI can do the same thing. What you are talking about is, e.g., balance.

On top of that, once balance is learned it's instantly transferable to other "machines", whereas each human has to learn it.


You are referring to transfer learning.
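
For anyone unfamiliar with the term, here is a rough sketch of what that looks like in practice, assuming PyTorch/torchvision; the 10-class head is an arbitrary stand-in for whatever the new task is.

    import torch
    import torch.nn as nn
    import torchvision.models as models

    # Start from features learned on ImageNet (the "bicycle").
    model = models.resnet18(pretrained=True)  # older-style torchvision argument

    # Freeze the pretrained feature extractor so its weights stay fixed.
    for param in model.parameters():
        param.requires_grad = False

    # Swap in a fresh classifier head for the new task (the "motorcycle").
    model.fc = nn.Linear(model.fc.in_features, 10)  # 10 classes: arbitrary example

    # Only the new head's parameters get trained on the new dataset.
    optimizer = torch.optim.SGD(model.fc.parameters(), lr=0.01)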


1) sounds like exactly what deep learning is...map more complex abstractions in each succeeding layer

2) are computers that can understand speech, recognize faces, drive cars, beat humans at Jeopardy really 'nothing at all like human intelligence?'


For #2 you've touched on the "AI Effect"[0] or moving goal posts.

[0] https://en.wikipedia.org/wiki/AI_effect


At some point a consensus of AI researchers will decide that we have a generally intelligent system (able to adapt to and learn many tasks, pass a legitimate Turing test, etc.). Currently there are zero AI researchers who would claim such a thing, even with our current breakthroughs in specialized tasks.

The moving goalpost has been less an issue of "what is AI" and more an issue of "what are the difficult tasks at the edge of AI research". People with a passing interest (even most programmers) don't distinguish between the two. Of course "difficult tasks in AI research" is a moving goalpost, and it will keep moving until we achieve general intelligence and beyond. This is a requirement for progress in AI research. If those goalposts stop moving before we have a general intelligence, then something is wrong in the field.

When researchers (not the general public) start arguing about whether the goalposts should be general intelligence or superintelligence, that is when we know we have traditional AI. When we try to figure out how to get adult human-level intelligence to take hours or days to train on the top supercomputers rather than months or years -- that is when we have AI. Even then, if the training part requires that much computational intensity, how many top supercomputers are going to be dedicated to running a single human-level intelligence?

You could train current algorithms used in AI research for decades and have nothing resembling general human intelligence.


I agree, but I guess my point (in this comment and others) would be that we should stop thinking of intelligence, consciousness, free-will and other attributes as a hard line but rather gradients or quantities.


> 1) sounds like exactly what deep learning is...map more complex abstractions in each succeeding layer

Only at such a high level of abstraction as to be meaningless.

> 2) are computers that can understand speech, recognize faces, drive cars, beat humans at Jeopardy really 'nothing at all like human intelligence?'

They are not. Hundreds of man years worth of engineering time go into each of those systems, and none of those systems generalizes to anything other than the task it was created for. That's nothing like human intelligence.


> Only at such a high level of abstraction as to be meaningless.

I'm not sure what this means or how the abstractions are meaningless? From Gabor filters to concepts like "dog", the abstractions are quite meaningful (in that they function well), even if not to us.
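
As a toy illustration of that hierarchy (layer sizes are arbitrary, and the comments describe what such layers tend to learn in trained image networks, not what this untrained sketch does):

    import torch.nn as nn

    # Each block operates on the previous block's output, so the features it can
    # learn are compositions of the previous layer's features.
    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1),   # edge / Gabor-like filters
        nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1),  # textures, simple motifs
        nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(32, 64, kernel_size=3, padding=1),  # object parts (ears, wheels, ...)
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(64, 10),                            # object-level concepts ("dog", ...)
    )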

> They are not. Hundreds of man years worth of engineering time go into each of those systems, and none of those systems generalizes to anything other than the task it was created for. That's nothing like human intelligence.

This isn't strictly true if we look at the ability to generalize as a sliding scale. The level of generalization has actually increased significantly from expert systems to machine learning to deep learning. We have not reached human levels of generalization, but we are approaching them.

Consider that DL can identify objects, people, and animals in photos it has never seen before, and that, more generally, the success of modern machine learning lies in its ability to generalize from training to test time rather than hand-engineering for each new case. Newer work is even able to learn from just a few examples[0] and then generalize beyond that. Or the Atari work from DeepMind that can generalize to dozens/hundreds of games. None of those networks are created specifically for Breakout or Pong.

It's also not entirely fair to hold the hundreds of man-years of engineering against these systems, considering most of them are trained from scratch (random initialization). Humans, however, benefit from preceding evolution, whose time scale far exceeds any human engineering effort. :)

[0] https://arxiv.org/abs/1605.06065
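
To make the Atari point concrete, here is a hedged sketch of the setup, assuming PyTorch and the commonly published DQN-style architecture: nothing in it is specific to any one game except the number of actions.

    import torch.nn as nn

    def make_atari_policy(num_actions):
        # DQN-style convolutional network over stacks of four 84x84 greyscale frames.
        return nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, num_actions),
        )

    breakout_net = make_atari_policy(num_actions=4)  # same code, different game
    pong_net = make_atari_policy(num_actions=6)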


It beats humans on accuracy, which makes it more practical.

A computer doesn't need to be strong AI to replace a human.


We spend a lifetime building the skills that we use in our day to day lives.

And most of them don't transfer.


This is a very superficial point-of-view.

What matters here is the concept itself (deep learning as a generic technique), and also its scalability. Not the specifics we have today, but the specifics we will have 20 years from now.

The concept is proven, all that matters now is time...


> The concept is proven, all that matters now is time...

This is a very naive point of view. You could deep-learn with a billion times more processing power and a billion times more data for 20 years and it would not produce a general artificial intelligence. Deep learning is a set of neural network tweaks that is insufficient to produce AGI. Within 20 years we may have enough additional tweaks to make an AGI, but I doubt that algorithm will look anything like the deep learning we have today.


This is basically exactly what I was trying to say with my original comment; thanks for stating it in a clearer way.


1) I am aware of deep learning, and the improvements made circa 2012 or so, but it still ultimately lacks the ability to make advanced correlations across differing training sets, a strong memory, and a meaningful distinction between high-level abstractions and low-level inputs (although all of these are being addressed in one way or another with incremental improvements). It also lacks a way to effectively share learned data among separate entities or to reteach it.

2) These things are all very human-like, but they are still sub-problems IMHO :)


This reminded me that there are other NN variants that might help provide some clues in that direction.

Hopfield nets for example provide associative memory.

It may not all be groundbreakingly efficient, but it's very worthwhile.
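
A minimal sketch of that associative memory, using numpy and two arbitrary toy patterns: store them with a Hebbian rule, then recover one from a corrupted cue.

    import numpy as np

    # Two arbitrary 8-unit binary patterns to store.
    patterns = np.array([
        [1, -1, 1, -1, 1, -1, 1, -1],
        [1, 1, 1, 1, -1, -1, -1, -1],
    ])

    # Hebbian weights; zero the diagonal so units don't reinforce themselves.
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)

    def recall(state, steps=10):
        # Repeatedly set each unit to the sign of its weighted input; the state
        # settles into the nearest stored pattern.
        state = state.copy()
        for _ in range(steps):
            state = np.where(W @ state >= 0, 1, -1)
        return state

    noisy = np.array([1, -1, 1, -1, 1, -1, -1, 1])  # first pattern, 2 bits flipped
    print(recall(noisy))  # recovers the first stored pattern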


Yep - also we have a large tendency to use feedforward NNs right now, but I have a sneaking suspicion that the future lies in something closer to recurrent NNs. Or probably something more complex, like automata-ish properties (IIRC there is also some NN that uses Turing-like devices).
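
(The NN with Turing-like devices I'm half-remembering is presumably the Neural Turing Machine.) For the feedforward vs recurrent point, here is a minimal sketch, assuming PyTorch and arbitrary sizes: the recurrent net threads a hidden state through time, so its output depends on the whole input history rather than on one fixed-size input.

    import torch
    import torch.nn as nn

    rnn = nn.RNN(input_size=8, hidden_size=16, batch_first=True)
    readout = nn.Linear(16, 4)

    sequence = torch.randn(1, 20, 8)       # batch of 1, 20 time steps, 8 features each
    outputs, final_state = rnn(sequence)   # hidden state is carried across the steps
    prediction = readout(final_state[-1])  # decision based on the accumulated state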


Nah. If we want properly humanlike AI software, we're going to have to find a way to make inference in probabilistic programs a lot faster.
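
For a sense of why inference is the bottleneck, here is a tiny probabilistic-program sketch in plain Python: the generative model is a few lines, but even this toy posterior needs brute-force rejection sampling over hundreds of thousands of runs.

    import random

    def model():
        # Prior: unknown coin bias; likelihood: 10 flips of that coin.
        bias = random.random()
        heads = sum(random.random() < bias for _ in range(10))
        return bias, heads

    observed_heads = 8  # the data we condition on

    # Brute-force rejection sampling: run the program many times and keep only
    # the runs that happen to reproduce the observation.
    accepted = [bias for bias, heads in (model() for _ in range(200000))
                if heads == observed_heads]

    print(len(accepted), "accepted runs out of 200000")
    print("posterior mean bias ~", sum(accepted) / len(accepted))

Smarter inference schemes (MCMC, variational methods, etc.) exist, but making them fast and general enough is exactly the open problem.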


Realize that the computer's advantages can also lead to weaknesses. By this, I mean that a computer's powerful and precise memory means that it is better able to work off raw correlations, with less of a need to abstract or seek out causal models. While this might turn out okay for large, detailed, but ultimately simple (stationary) patterns, it will not be so advantageous in more dynamic settings, or in scenarios with multiple levels of organization that each have differing dynamics.
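
A rough synthetic illustration of the raw-correlation point (made-up numbers, numpy assumed): a correlational fit works while the training regime is stationary, then falls apart once the factor driving the correlation changes.

    import numpy as np

    rng = np.random.default_rng(0)

    # Training regime: x and y move together only because both are driven by z.
    z = rng.normal(size=1000)
    x_train = z + 0.1 * rng.normal(size=1000)
    y_train = z + 0.1 * rng.normal(size=1000)
    slope, intercept = np.polyfit(x_train, y_train, 1)  # raw correlational fit

    # New regime: x is driven independently, so the old correlation is gone.
    x_new = rng.normal(size=1000) + 3.0
    y_new = rng.normal(size=1000)

    print("training error:  ", np.mean((slope * x_train + intercept - y_train) ** 2))
    print("new-regime error:", np.mean((slope * x_new + intercept - y_new) ** 2))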



