Go, Marvin Minsky, and the Chasm That AI Hasn’t yet Crossed (medium.com/backchannel)
163 points by wslh on Feb 2, 2016 | 100 comments



> AlphaGo isn’t a pure neural net at all — it’s a hybrid, melding deep reinforcement learning with one of the foundational techniques of classical AI — tree-search

Most board game computer players use some sort of tree search followed by evaluation at the leaves of the tree. What we discovered in the 70s is that you don't need human-level evaluation to win at chess; it is enough to count material and piece activity, plus some heuristics (pawn structure, king safety...); computers more than compensate for this weakness with their superhuman tree exploration.
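
To make that concrete, the classic recipe is a dumb static evaluation plugged into a deep search. A toy sketch, with a hypothetical Position interface (real engines add pawn structure, king safety, mobility, and alpha-beta pruning):

    # Toy sketch: material-counting evaluation plugged into plain negamax.
    # The Position interface (pieces(), side_to_move, legal_moves(), play()) is hypothetical.
    PIECE_VALUES = {'P': 1, 'N': 3, 'B': 3, 'R': 5, 'Q': 9}

    def evaluate(position):
        """Material balance from the side to move's point of view."""
        score = 0
        for piece, owner in position.pieces():
            value = PIECE_VALUES.get(piece, 0)
            score += value if owner == position.side_to_move else -value
        return score

    def negamax(position, depth):
        """Brute-force search; the crude evaluation is only applied at the leaves."""
        if depth == 0 or position.is_terminal():
            return evaluate(position)
        return max(-negamax(position.play(move), depth - 1)
                   for move in position.legal_moves())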

This approach never worked so well for Go because evaluation was a mystery: which group is weak or strong? how much territory will their power yield? These are questions that professionals answer intuitively according to their experience. With so many parts of the board that depend on each other, we don't know how to solve the equation.

It looks like AlphaGo is the first one to get this evaluation right. At the end of the game, its groups are still alive and they control more territory. So Go evaluation is yet another task that used to be reserved for human experts and that computers now master. The fact that this is mixed with classical tree search does not make it less impressive.


I agree that the main strength of AlphaGo seems to be evaluation, using supervised learning + reinforcement learning.

What I found interesting about AlphaGo's final algorithm is that there are so many different methods being used at once:

0. there's the monte carlo tree search. while this is definitely a "classic" tree search, this particular tree search algorithm is a fairly recent development, and relies heavily on statistics, which is perhaps somewhat less classical

1. the policy function approximation they use in the final algorithm, aka the policy network, is based on supervised learning + a deep network model. but it is NOT the other policy network in the paper that was further tuned using reinforcement learning - that one made the overall system perform worse!

2. the value function approximation they use in the final algorithm isn't just a network. it's a linear combination of a network and a rollout approximation using a much weaker, faster, simpler evaluation function trained on different features. they find the system performs best when each is given an equal weight (see the sketch after this list).

3. from what i understand, the value network is trained (at huge computational cost, particularly in generating the data set required) to give similar accuracy to the value function one could define by using the reinforcement-learning policy network. the value network gives similar valuations but runs 1500x faster. in some sense this isn't terribly algorithmically interesting - it is just an implementation detail to give faster results at game-time at the cost of a ridiculous amount of offline computation.
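
As a rough sketch of the leaf-evaluation mix in point 2 (value_network and fast_rollout are hypothetical stand-ins; per the paper, an equal weighting reportedly worked best):

    # Sketch of mixing a slow value-network estimate with a fast rollout result.
    MIX = 0.5  # equal weight for the two estimates

    def evaluate_leaf(state, value_network, fast_rollout):
        v = value_network(state)   # learned evaluation of the position
        z = fast_rollout(state)    # outcome of a quick playout with a simple policy
        return (1 - MIX) * v + MIX * z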


Computer Go had actually advanced a long way by using Monte Carlo Tree Search in particular. The pre-AlphaGo programs that AlphaGo defeated were much stronger than computer Go programs from before the era of Monte Carlo tree search. Computer chess was not conquered instantly by applying generic "tree search" either; it required quite a bit of tweaking of the various algorithms that were applied.


The author of this post (Gary Marcus) is a huge proponent of hybrid systems, in fact he is using that technique for his current stealth startup: https://www.technologyreview.com/s/544606/can-this-man-make-...


Yep.

Not to be harsh, but Marcus has been critical of Neural Nets for a while now. His claims that there are issues around their provability are well made.

But.. there is a way to make people listen to you. It's called results. Deep Learning is getting them, in an increasing number of diverse fields.


> But.. there is a way to make people listen to you.

Hype, right?

Choosing problems for their theatrical effect, rather than utility. Writing articles and research papers as if they're marketing pamphlets. Claiming that incremental improvements are paradigm shifts. Treating arbitrary achievements as if they were commonly agreed upon milestones all along.

All of this is happening right now. Being skeptical in such an environment is the only right thing to do.

> Deep Learning is getting them, in an increasing number of diverse fields.

If you look for practical applications that give tangible benefits to people outside of academia, the achievements of applied deep learning so far aren't nearly as impressive as you make them out to be. This is despite insane levels of hype, huge investments in research, and the amount of computing power available.

Heck, if anything, the fact that AlphaGo needs to use a tree search to prop up its ANN components could be seen as a sign that ANNs have some serious practical limitations when it comes to "results". Which is kind of the point of the article.


No doubt there is plenty of hype. From where I sit though, a lot of it is justified (Not the general intelligence stuff of course).

> Choosing problems for their theatrical effect, rather than utility. Writing articles and research papers as if they're marketing pamphlets. Claiming that incremental improvements are paradigm shifts. Treating arbitrary achievements as if they were commonly agreed upon milestones all along.

I'm not sure what to say to this.

There are no "commonly agreed upon milestones". The closest things are the academic benchmarks/shared tasks that you seem to be critical of.

I guess the closest thing you'll find to a "commonly agreed upon milestone" is something like the Winograd schema[1]? Based on progress like "Teaching Machines to Read and Comprehend"[2], I wouldn't be betting against deep learning on that.

> If you look for practical applications that give tangible benefits to people outside of academia, the achievements of applied deep learning so far aren't nearly as impressive as you make them out to be.

Could you explain what you were expecting? Deep learning techniques aren't exactly widespread yet, and outside Google and a few other companies it takes time for things to migrate into products and have tangible benefits.

Nevertheless, Google Search, Pinterest, Facebook image tagging, Android Voice Search, etc. - these are all used by billions of people daily. I think it's hard to argue there aren't at least some practical applications.

[1] https://en.wikipedia.org/wiki/Winograd_Schema_Challenge

[2] http://arxiv.org/abs/1506.03340


Results are important, but don't worship short-term results at the expense of everything else.

Deep learning (and other machine learning techniques that are forced to call themselves deep learning to get attention) are getting great results right now, yes. It's important to follow these results, to use them, and to try to understand them.

But when this approach hits a local maximum, do you want AI Winter #3, or do you want there to be another approach that people have been working on?


Interesting article! However, Deep Blue, Watson, and AlphaGo are very different from one another. I don't think anyone deemed beating humans at chess or Jeopardy impossible at the time Deep Blue and Watson were built. On AlphaGo the point about generalizing AI is valid, yet I think the author doesn't fully appreciate the novelty of the approach described in the AlphaGo paper. Their work advances the field and has more general utility than Deep Blue's chess or Watson's Jeopardy programs. The AlphaGo paper specifically represents an advance in machine learning algorithms for games in general. As I understand it, Watson's new NLP algorithm is PRISMATIC [1]. PRISMATIC is a rule-based NLP system, while AlphaGo is more statistical inference/neural networks. Even if AlphaGo's 'policy network / value network' framework is not too generalizable, the philosophical implication is that we can build AIs that can mimic 'human intuition'. Jeopardy and chess have lesser components of 'human intuition' than Go. They are apples and oranges. So, I wonder if the author errs in bringing Deep Blue and Watson-Jeopardy into the picture.

In my opinion while Watson-Jeopardy and Deep Blue were 'over-fitting' for Jeopardy and chess respectively, AlphaGo algorithms are more general and over-fits for the larger category of 'games'.

[1] http://brenocon.com/watson_special_issue/05%20automatic%20kn...


I'll get hammered for this statement but.

Traditional AI suffers from a love of solving parlor tricks. Solving tic-tac-toe, checkers, chess, poker, Jeopardy(tm) - these are parlor tricks. They seem important because frankly humans just suck at parlor-trick-type problems, and other forms of intelligence, like say a cat brain, don't even get parlor tricks. So then we found that computers were really good at parlor tricks and it seemed like we were really onto something here. But nope.

On the other hand, playing Go is not really a parlor trick; it's actually hard: simple symbolic logic totally fails to grasp the problem at the first level.


If a problem is too difficult to solve, then we solve simpler related problems first. In doing so, we typically gain insight into the more challenging problem. For example, before Calculus was discovered, areas and volumes were all computed in ad hoc ways, via "parlor tricks". Only by generalizing the insights gleaned from some of these "tricks" did we stumble upon Calculus.

Problems such as chess certainly did not seem like "parlor tricks" at the time they were proposed. In fact, many thought of chess play as an ideal example of human intelligence. Just because we do not understand how to solve Go today doesn't mean that it won't be a trivial "parlor trick" in ten years.


Can you provide a firm definition of parlor trick?

Not saying I agree or disagree with you, but without a real definition of parlor trick you have a wide open no-true-Scotsman defense.


Well that's the crux of the problem - the corollary to your question is "can you provide a firm definition of intelligence?" Nobody can yet, so it's all speculation and subjective opinion.

The reason that I personally consider all these things parlor tricks (including, hypothetically, complete mastery of Go) is that I see no path from these particular types of systems to general intelligence. A human can take in arbitrary sensory data and make all sorts of conclusions and associations with it. Does this particular system have the capability to get to a point where it can see an apple falling and posit a theory of gravity? Will it ever be able to read subtle cues in facial/oral/bodily expression and combine them with all sorts of other data, instantaneously, to achieve compelling real-time social interaction? Will this system ever invent the game of Go, or anything else, because it felt like it? No, it has absolutely no framework to do any of those things, or countless other things humans can do. It's a machine built with a single purpose in mind, and it can only serve that purpose. It's a glorified function call. I don't think this type of machine will just wake up one day after digging deeper and deeper into these "hard" tasks. We need breadth, not depth.


You may find this paragraph on wikipedia interesting: https://en.wikipedia.org/wiki/Great_ape_language#Limitations...


Parlor trick is something that looks interesting only as long as you make incorrect assumptions about how it's being done. For example, a card that seems to appear out of nowhere while in reality it is held behind the hand between little and index finger. Or a program that seems to engage in planning when in reality it simply iterates over all possible solutions.


If AlphaGo eventually beats humans at go I think it says more about the game (or the way humans play the game) than about AI in general. Its strength comes from MCTS with better heuristics. Previous ConvNet work posted here https://news.ycombinator.com/item?id=10558831 was already impressive in generating human-like moves, and combining it with search was an obvious next step. Unbeknownst to all, Google's DeepMind team had already succeeded in doing so at that time.


> If AlphaGo eventually beats humans at go I think it says more about the game (or the way humans play the game) than about AI in general.

AI problems are always mysterious until they're solved. People say this sort of thing every time an AI achieves some task which had previously been deemed impossible.


> AI problems are always mysterious until they're solved.

There's truth there, but this line is over-used. Playing chess better than a human was never deemed impossible, and the ultimate solution doesn't seem to have any crazy insights that would stun a researcher from the 1960s. (You can read the source for Stockfish, which is state of the art and open source.) The improvements came in the form of more and more horsepower through hardware, more and more efficient innovations in the tree search very specific to the game of chess (bitboard representation, move ordering), and simplifying evaluation and tweaking parameters according to unsupervised learning. Correct me if I'm wrong?
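
To make "move ordering" concrete: alpha-beta prunes branches that provably can't change the result, and searching likely-good moves (captures, say) first makes those cutoffs happen much earlier. A toy negamax-style sketch with a hypothetical Position interface, nothing Stockfish-specific:

    # Toy alpha-beta with naive move ordering; the Position interface is hypothetical.
    def alphabeta(position, depth, alpha, beta, evaluate):
        if depth == 0 or position.is_terminal():
            return evaluate(position)
        # Searching likely-good moves first makes cutoffs trigger sooner.
        moves = sorted(position.legal_moves(), key=position.is_capture, reverse=True)
        for move in moves:
            score = -alphabeta(position.play(move), depth - 1, -beta, -alpha, evaluate)
            alpha = max(alpha, score)
            if alpha >= beta:
                break  # the opponent already has a better option elsewhere; prune
        return alpha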

At the end of the day, chess (and go) are discrete games with perfect information played one turn at a time according to a tree of simple and trivially determined possible moves with clear criteria for winning. I don't see why we'd put this on a pedestal as the example of something generally considered uniquely human, so we'd better expand our imaginations of what we can achieve with AI. As the parent said, the mistake may have been in supposing there wasn't a solvable evaluation function for Go positions when in fact, through some more human ingenuity, there is.


So what happens if (when?) we come up with a system that does everything a human can do, better than a human, but doesn't contain any 'crazy insights', just a bunch of incremental improvements on what we have now?

Does that mean we aren't intelligent? Or does it mean that the system isn't intelligent but that we are "because we do it differently"? Or do we accept at that point that intelligence is composed of simple building blocks interacting in complex ways (which we already know, if we eschew Cartesian dualism)?


We would declare it intelligent? Certainly the people of today would call it intelligent. What I suspect you are claiming is that the people of that future would not call it intelligent, and this is the basis for arguing why that objection is not valid today. But that extrapolation to the future is just your speculation.


Or, most likely in my view, that isn't in fact possible, and the system we eventually arrive at that does do that _will be_ extremely different from what we have now.

Although I actually think that we'll never make a system that does _everything_ a human can do at all, simply because that would be silly.

And of course, there also has never been a human who can do everything that humans can do, so this bar is way too high anyway.


Perfect information isn't true: you don't exactly know the opponent's next move. This broadens the search tree exponentially. Generally, with many hard problems, the size of the problem is itself a problem when memory is limited.


Playing chess better than humans was certainly deemed impossible by some people (not the ones who wrote chess programs, of course). See for example Hubert Dreyfus:

https://en.wikipedia.org/wiki/Hubert_Dreyfus%27s_views_on_ar...

While progress did not come as quickly as AI researchers (or their universities' publicity departments) had hoped, computers can now do a lot of the things that he wrote about, for example in "What Computers Can't Do".


Dreyfus does not appear to have claimed that computers would never be able to play chess well. At least, not in that book.

He reacted with skepticism when Newell and Simon said in 1957 that a computer would be world chess champion by 1967 and, well, he was right to.

He said that the computational techniques in use for computer chess in the 1970s wouldn't be capable of producing a world-class player, and he was probably wrong about that -- largely, I guess, because he didn't foresee how big an impact a performance improvement of ~10000x could have.

If he actually claimed that playing chess better than humans was impossible, can you say where?


Dreyfus was defeated at chess by MacHack in 1967: https://www.chess.com/article/view/machack-attack


There is another way of saying the same thing. A lot of seemingly groundbreaking progress in AI usually happens when people (people, not machines) discover a clever way of mapping a new unsolved problem to another, well-solved, problem. That's a valid and insightful observation that you shouldn't dismiss with such ease.


Why 'seemingly'? Why is that not 'actual' groundbreaking progress? The achievement of any level of AI will by necessity require the chaining together of processes which are not themselves 'intelligent'. It has to be bootstrapped somehow.

On the day they build a walking, talking AI that can converse fully with a human, write a symphony, design a building, feel and express emotions, and all the rest of the things we define as being essentially in the domain of human intelligence, everyone will say, "But of course, none of those things required intelligence at all. This is all an elaborate collection of illusions and ugly hacks."

And they'll be right, but I suspect that the brain is the same way.


> Why 'seemingly'? Why is that not 'actual' groundbreaking progress?

Because mapping problems to pre-existing algorithms is the bread and butter of computer science and software engineering. To be groundbreaking, a work needs to change our understanding of the underlying issues. In a lot of popularized cases that does not happen.


i tend to agree with that. it does seem like there's some general instinct in people that other things (animals, "artificially" intelligent machines) could possess human-like intelligence and subjective experience. i mean, we both seem to be tacitly agreeing to that here. there's also obviously a general instinct that's completely the opposite, and i'd guess that's probably the more prevalent instinct (in the population at large and in many conflicted individuals).

i think the opinion that humans are less special than we once thought, especially on expansive time scales, will only become more widespread.


The fact that people in the past have used the line "that's not really intelligence" doesn't at all invalidate the point that is being made, which is that games like Go/Chess/etc. have little to do with the real world. The real world is exponentially harder than those games (imperfect info, imperfect sensors, infinite state space, large - if not infinite - possible moves, infinite time horizon).


> If AlphaGo eventually beats humans at go I think it says more about the game (or the way humans play the game) than about AI in general.

I'd agree with the rest of your post, but this statement seems to imply there's some "general level" that can't be reached by the incremental approaches that yielded these results.

The progress that we see here is incremental progress in a defined area, but it's also progress towards a more general and easier-to-implement approach - it's not yet "point and solve" but it might be a step toward "point and solve". Given how little "general intelligence" is understood, no one can say for certain now that we won't arrive at it through a series of these advances.


Let me expand a little more on what I meant and see if it makes more sense to you:

Much about Go/Weiqi is still unknown even to the top players. For example Ke Jie (the current highest ranked player) feels that the current komi (compensation given to white for playing second) is too generous and he prefers white (last year he won almost all his white games). This is why professional players seem very excited about AlphaGo. If AlphaGo can consistently beat top players it may teach us more about the game. It may discover or settle questions about josekis (pattern conventions). It may tell us what komi is fair. It could settle questions about how a particular board position should be valued. On the other hand since it uses patterns learned from human plays, this could also motivate new theories of play that could defeat past strategies. Or if no one can come up with ways to defeat AlphaGo it may indicate the valuation function it produces is approaching ideal.

But AlphaGo is very much about combining the existing tools of ConvNets and MCTS, albeit in a highly innovative manner, to solve the search problem of Go. Its success or failure could teach us a lot about how amenable the game of Go is to such an approach (and potentially advance theories about the game), more than it could teach us about how problems in AI can be solved in general. That is, what is learned here is very specific, because Go is a very visual game and/or humans play it in a very visual way to take advantage of the massive parallelism of our vision system to quickly narrow down choices.


For those wondering about komi:

Standard komi is 6.5 points under the Japanese and Korean rules; under Chinese, Ing and AGA rules standard komi is 7.5 points

https://en.wikipedia.org/wiki/Komidashi


Really? I thought it was 5.5 back when I played go. I thought 5.5 was fair, so maybe he's right that 6.5 is too generous.

But of course the increase to 6.5 was done for good reasons. Perhaps the strategies for white just suit Ke Jie's style slightly better.


That's what I remembered it as, but it's been a while.


> I'd agree with the rest of your post but this statement seems to imply there's some "general level" that can't reached by the incremental approaches that yielded the results.

At this point in time, the belief in "incremental" general AI looks like a kind of pseudo-religion emerging in IT. People go way beyond postulating it as a possibility. They fervently defend it as some kind of obvious fact and flaunt this attitude as progressive.

What really disturbs me is the way these people bridge the gaping void between existing AI and biological brains. On one hand I see absolutely insane amounts of hype around artificial neural networks, exploding way beyond the optimism warranted by the actual research. On the other hand I see the insistence that biological brains are "nothing special". I wonder how deep that goes. Are these people ultimate sociopaths who truly believe that everyone around them is a mere pattern-matching device?

All this bullshit is actually detrimental to the usage of "AI" algorithms as programming techniques aimed at solving real-life problems. For one, many managers look at the hype and mysticism and conclude that AI is something that is too complex for mere mortals to handle. I've seen this on many occasions.


There are two different issues.

One is whether simple progress on neural nets is enough to close in on real intelligence, and I think those who really understand neural nets generally think not.

The other issue is whether incremental progress in general can achieve AI - there I don't think anyone can be sure, especially given the relative vagueness of "incremental", so I don't think one should dismiss incremental progress or uncritically assume it.


Go ranks are in "stones" that one player can give another as a handicap - except at the professional level, where there's more relative equality of play, the stone ranks are honorary, and Elo ratings are more accurate.

Monte Carlo tree search had advanced computer Go from 2-3 kyu to 6-7 dan, a gain of 8 stones. AlphaGo apparently has advanced that to 9-10 stones, reaching professional level, by using neural networks to enhance the position evaluation and search policy used in Monte Carlo tree search.

In many ways, it seems to me that Monte Carlo Tree Search was the primary answer to how computers could deal with Go [1].
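
For anyone who hasn't looked at MCTS: the whole loop is select (with a bandit rule), expand, play out randomly, back up the result. A compressed UCT sketch with a hypothetical GameState interface (it also glosses over flipping the playout result's perspective between players, which a real implementation must do):

    # Compressed UCT-style MCTS sketch; GameState methods are hypothetical.
    import math, random

    class Node:
        def __init__(self, state, parent=None):
            self.state, self.parent = state, parent
            self.untried = list(state.legal_moves())
            self.children = []
            self.visits, self.wins = 0, 0.0

    def uct_child(node, c=1.4):
        # Exploit high win rates, but keep exploring rarely-visited children.
        return max(node.children,
                   key=lambda ch: ch.wins / ch.visits
                                  + c * math.sqrt(math.log(node.visits) / ch.visits))

    def mcts(root_state, iterations=10000):
        root = Node(root_state)
        for _ in range(iterations):
            node = root
            # 1. Selection: walk down fully expanded nodes using the UCT rule.
            while not node.untried and node.children:
                node = uct_child(node)
            # 2. Expansion: try one unexplored move.
            if node.untried:
                move = node.untried.pop(random.randrange(len(node.untried)))
                node.children.append(Node(node.state.play(move), parent=node))
                node = node.children[-1]
            # 3. Simulation: finish the game with a fast random playout (1.0 = win).
            result = node.state.random_playout()
            # 4. Backpropagation: update statistics along the path to the root.
            while node is not None:
                node.visits += 1
                node.wins += result
                node = node.parent
        return max(root.children, key=lambda ch: ch.visits)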

Chess was essentially conquered through a combination of alpha-beta and other smart pruning algorithms, incremental advances in hand-tuned position evaluation, and improved hardware [2].

So it seems like the "conquest" of Go has involved a more generalized, self-learned version of the original approach to chess (tree-search strategies plus position-evaluation heuristics). That might be enough for just about any deterministic game one can find.

It should be noted that Arimaa, a game designed specifically to be hard for computers without having the large board of Go, was "conquered" last year, but without any neural net techniques (apparently) [3].

[1]https://en.wikipedia.org/wiki/Computer_Go

[2]https://en.wikipedia.org/wiki/Computer_chess

[3]https://en.wikipedia.org/wiki/Computer_Arimaa#Techniques_use...


I wonder if there is any work going on to go back to Chess and try these same techniques that were used in AlphaGo.

It would be interesting to see how well this more generalized approach fares against the hand-tuned evaluation code that you spoke about.


Is MCTS algorithmically interesting, or is it only powerful because it is embarrassingly parallel and so can leverage all the computing power you have available?


I really think Gary agrees with you, and you're just picking an unnecessary fight. Having advocated for this kind of approach decades ago, he fully appreciates the novelty of the approach. He points out the simplicity of Jeopardy and Chess as reasons why they haven't translated to the real world well, and maintains optimism about DeepMind. I don't read "the real question is whether the technology developed there can be taken out of the game world and into the real world" as a criticism, but praise; the question is worth considering, whereas so often it is not.


I don't intend to pick a fight. I am pointing out one deficiency I perceive in this widely shared article.

On the philosophical questions: I think even with the greatest technological advances, the question remains: will we adopt AI? I have seen too many situations where machine learning is not adopted even though algorithms can enable great functionality; usually the reason is a lack of financial incentives or simply entrenched human interests (low-grade Luddism). This resistance will lead to more (to use terminology from the AI debate) paperclip maximizers and fewer general AIs. This is already happening: the future of 2001 turned out to be not HAL 9000 but an ingenious model and linear algebra algorithm delivering better search results.


One of the main points of the article is the hybrid approach of AI against the "neural networks/deep learning solve everything" mantra.


I don't think Watson was over-fitting for Jeopardy. It took a lot of modification to even get it to play Jeopardy.


PRISMATIC isn't Watson. PRISMATIC is a single, rule-based component for deriving the LAT (lexical answer type) of a question.

Watson wasn't deep learning based, but it had a lot more in it than PRISMATIC.


How is that overfitting? That term is not applicable in this context.


It does, as an analogy: like an overfitted model, Deep Blue and Jeopardy-era Watson don't generalize well to other problems.


Overfitting concerns itself with generalization to out of training data on the same task.

I think the term you are looking for is that it's too domain-specific. Or "it uses too much feature engineering and domain-specific engineering."


I used quotes to signal that it's an analogy. I'm not claiming that their algorithm literally overfits. It is just another way of saying the system is too specific and optimized for chess to be generalizable.


I'm a bit disappointed in this piece. It doesn't say anything surprising to anyone who read a few words into the DeepMind paper, and it serves to settle some of Marcus's academic scores:

> two people ought to be really pleased by this result: Steven Pinker, and myself. Pinker and I spent the 1990’s lobbying — against enormous hostility from the field — for hybrid systems

Told-you-so's are almost always boring, especially when they are part of a larger campaign. Here, Marcus's campaign is that neural nets are not enough, which isn't really news to DeepMind or most other people working with NNs, and doesn't matter much to anyone not working with them.

His chief critique concerns their interpretability.

>In 2016 networks have gotten deeper and deeper, but there are still very few provable guarantees about how they work with real-world data.

But for most of the world, including the people using whatever Marcus's startup eventually makes, predictive accuracy trumps interpretability. Let's get to 99% accuracy and worry about why later. And that's what researchers have done for many problems with NNs. Of course it's nice to know why, but it's not a fatal flaw if you don't, most of the time.

IBM's struggles to market Watson are a bit of a straw man. If you had judged the PC market by IBM's moves a few decades ago, you might have reached the same conclusions, and you would have been dead wrong.


>But for most of the world, including the people using whatever Marcus's startup eventually makes, predictive accuracy trumps interpretability.

Does it? What if the network has 99% accuracy, but is equally confident about its correct and incorrect predictions? "Deep neural networks are easily fooled", after all.
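
One way to make that worry measurable, as a sketch: compare the model's average softmax confidence on the predictions it gets right versus the ones it gets wrong (logits and labels here are assumed to be numpy arrays you already have):

    # Sketch: is the model any less confident when it's wrong?
    import numpy as np

    def confidence_gap(logits, labels):
        probs = np.exp(logits - logits.max(axis=1, keepdims=True))
        probs /= probs.sum(axis=1, keepdims=True)          # softmax
        confidence = probs.max(axis=1)                     # per-example max probability
        correct = probs.argmax(axis=1) == labels
        return confidence[correct].mean(), confidence[~correct].mean()

    # If the two means are nearly equal, high aggregate accuracy tells you little
    # about how much to trust any individual prediction.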


Sure, that's a flaw. It's just not a fatal flaw. For the chief reason that there's probably nothing better. So you take your lumps and remember that it's wrong sometimes. It's something we can work on while still benefitting from these tools.


>For the chief reason that there's probably nothing better.

The paper "Deep Neural Networks are Easily Fooled" noted that generative models don't suffer from this flaw.


Interesting read & puts the Google Go bot in some needed perspective.

From the article:

> In the real world, the answer to any given question could be just about anything, and nobody has yet figured out how to scale AI to open-ended worlds at human levels of sophistication and flexibility.

One doesn't have to shoot for the moon in order to find useful applications for AI or Cognitive technology. If you can restrict the domain of knowledge of an expert system, it doesn't need to create 'open-ended worlds' in order to provide value. It just has to beat human effort, or be an augmentation to human cognition that enables scale, for it to be useful - or provide business value.


Maybe we should take a cue from John Searle and consider AI an extension of human intelligence? Often, what we call "AI" is really a codification, automation, and scaling of human intelligence. Machine translation is a good example of this.


But then why call it AI? Perhaps the work that it does is not actually intelligent?


Is what people do 'actually intelligent'? If you break down the processes of the brain to a low enough level, all of the 'intelligence' will disappear, just as it does in a computer neural network.

Intelligence is not some kind of Aristotelian substance that permeates brain matter. At some level, anything which is intelligent has to be built from parts which are not intelligent.


> Is what people do 'actually intelligent'?

Yes, by definition.

> If you break down the processes of the brain to a low enough level, all of the 'intelligence' will disappear, just as it does in a computer neural network.

If you break matter down to a low enough level, everything is just elementary particles. Now, would you please trade me some gold for an equal mass of aluminum?

A search tree will work better than an artificial neural network for a lot of domains. But you don't expect it to magically change behavior and become intelligent if you scale it up, do you?


>If you break down the processes of the brain to a low enough level, all of the 'intelligence' will disappear, just as it does in a computer neural network.

Well no. Free-energy minimizing, multi-information maximizing generative causal modeling is what appears when you break down the processes of the brain (at least as we best understand them right now).


That's right, yet "I think therefore I am." I can prove that I exist and have conscious thought beyond your observation that my actions are low-level survival instincts. I would define intelligence as knowing "I think therefore I am." Unfortunately, it's impossible (as far as I know) to prove for anyone outside myself :)


More people need to be aware of the distinction between "strong AI" and "weak AI."

Right now, all we have is weak AI.


Exactly. AI is such a loaded term with high expectations...


Also, doesn't each human require some 20-30 years of training from birth in order to be able to answer such questions? This fact seems to be constantly ignored.


A 5 year old can figure out an awful lot.

The 20 to 30 years is what it takes to get to where you are thinking about something new to humanity.


Humans do require training, they're slow, they sleep, rest, lose concentration, vary greatly in performance etc. All true and economically significant.

That is however distinct from no machine being able to do some of the things humans do as of now. It just means that most things machines can do, they do better than humans. But first they must be able to do them at all.


Shoot for the moon! Even if you miss you'll end among the cold, dead vacuum of space.


It's fascinating that nature has created human-level intelligence using blind randomness, albeit over a period of 1+ billion years.

My theory is that with renewed global focus on AI, we're going to have a lot of minds looking at this problem from various outside perspectives. I believe a breakthrough in AI will come about not from the computer science sphere, but a very unlikely area that will surprise many.


> It's fascinating that nature has created human-level intelligence using blind randomness

Blind randomness plus tons of feedback ("natural selection" etc). Randomness alone, without feedback loops, could not make anything interesting even in a trillion years - that's the argument of the creationists, actually.


What would your guess be for that area?


Biology would be the "obvious" answer, but more specifically, I could see a breakthrough coming from psychedelic research, which is making a huge comeback right now after decades of ridicule. It's amazing how little research has been conducted in this area, a lot of scientists are rediscovering and relearning things that were first explored back in the 60s, and there's already been lots of progress related to human psychology.


Deep Mind are working on the biology thing and more specifically study of the human brain. Demis Hassabis, the main guy at Deep Mind did a PhD in cognitive neuroscience and is focused on that stuff. Not so sure about psychedelic research - I don't know how you'd use that to build computer systems even if the Google dream pictures are pretty trippy looking.

(https://www.google.com/search?q=google+dreams&num=20&tbm=isc...)


Actually, you just brought up a great point. The Google dream pictures are very similar to what you might see during a psychedelic experience.

I admit I don't know how psychedelic research could help in building AI, I'm just saying that my hunch is that a breakthrough in AI will come about from left field somewhere.


An interesting article about the author, Gary Marcus, and his stealth startup[1]: https://www.technologyreview.com/s/544606/can-this-man-make-...

[1] http://geometric.ai/


I went through Stanford in the 1980s, just as it was becoming clear that logic-based AI had hit a wall. That was the beginning of the "AI Winter", which lasted about 15 years. Then came machine learning.

AI used to be a tiny field. In the heyday of McCarthy and Minsky, almost everything was at MIT, Stanford, and CMU, and the groups weren't that big. There were probably less than 100 people doing anything interesting. Also, the total compute power available to the Stanford AI lab in 1982 was about 5 MIPS.

Part of what makes machine learning go is sheer compute power. Training a neural net is an incredibly inefficient process. Many of the basic algorithms date from the 1980s or earlier, but nobody could hammer on them hard enough until recently. Back in the 1980s, John Koza's group at Stanford was trying to build a cluster out of a big pile of desktop PCs. Stanford got a used NCube Hypercube with 64 processors (1 MIPS, 128KB each). The NCube turned out to be useless. There was a suspicion that with a few more orders of magnitude in crunch power, something might work, but with the failure of AI, nobody was going to throw money at the problem.

At last, there are profitable AI applications, and thus the field is huge. Progress is much faster now, just because there's more effort going in. But understanding of why neural nets work is still poor. Things are getting better; the trick of using an image recognition neural net to generate canonical images of what it recognizes finally provided a tool to get some insight into what was going on. At last there was a debug tool for neural nets. Early attempts in that direction determined that the recognizer for "school bus" was recognizing "yellow with black stripe", and that some totally bogus images of noise would be mis-recognized. Now there are somewhat ad-hoc techniques for dealing with that class of problems.
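
For the curious, the "canonical image" trick is basically gradient ascent on the input rather than on the weights. A rough PyTorch-style sketch; the tiny untrained CNN here is just a stand-in, and the images only become meaningful with a real trained classifier:

    # Sketch: optimize an input image to maximize one class score of a classifier.
    import torch
    import torch.nn as nn

    model = nn.Sequential(                      # hypothetical stand-in classifier
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(16, 10),
    )
    model.eval()

    target_class = 3
    image = torch.zeros(1, 3, 64, 64, requires_grad=True)
    optimizer = torch.optim.Adam([image], lr=0.1)

    for step in range(200):
        optimizer.zero_grad()
        score = model(image)[0, target_class]
        # Maximize the class score (minimize its negative), with a mild L2
        # penalty so the optimized image doesn't blow up.
        loss = -score + 1e-4 * image.pow(2).sum()
        loss.backward()
        optimizer.step()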

The next big issue is to develop something that has more of an overview than a neural net, but isn't as structured as classic predicate-calculus AI. One school tries to do this by working with natural language; Gary Marcus, the author of the parent article, is from that group. There's a long tradition in this area, and it has ties to semantics and classical philosophy.

The Google self-driving car people are working on higher-level understanding out of necessity. They need to not just recognize other road users, but infer their intent and predict their behavior. They need "common sense" at an animal level. This may be more important than language. Most of the mammals have that level of common sense, enough to deal with their environment, and they do it without much language. It makes sense to get that problem solved before dealing with language. At last, there's a "killer app" for this technology and big money is being spent solving it.


There does seem to be a mismatch between common-sense aspects and the symbolic methods proposed to capture them. Common sense seems "fuzzy". It doesn't look like rules and symbols are rich enough to cover its essence.

On the other hand, context has to do with a slot-like thinking architecture. If I wake up on the weekend in the same bed, I'm acting differently than during the week. Moreover, I can easily think of a scenario in which I wake up in my bed on a raft on the ocean.

I'm not aware of any connectionist framework that comes close. 1) Polychronization doesn't tell how to connect groups. 2) Global workspace theory is "homogeneous" connectionist. As is Copycat. 3) Alternatives such as ARTMAP are not grammar like either. 4) Bayesian methods or dynamic field theories care even less about representation.

We really have a lot to figure out in the coming decades! :-)


> it was becoming clear that logic-based AI had hit a wall... Then came machine learning.

In that context, this advance is a sign that there's still momentum in the current ML paradigm - the hardest board game has at last been cracked! But as many have pointed out, there may come a time in the near future when this approach too stalls short of its ambitions. Will this lead to another winter?

For the algorithm ideologues, probably yes. The key to current interest was that NNs do vision extremely well, which can be built into image search, etc., which can be commercialized. So I think you are right that R&D for self-driving cars will be the sustaining investment in this field for the future, and this will drag AI research towards "animal common sense" as a goal. If they build a hell of a safe car and people buy into it, there could be a Cambrian explosion in AI. But its failure or rejection could break the field too.


> the trick of using an image recognition neural net to generate canonical images of what it recognizes

That sounds interesting. Where can I read more about it?


Some of us went through school in the 90s-2000s and were trained by folks who never let go of the dead-end pure-logic-based systems.


Ouch. That was a decade late to be doing that.

I finished a MSCS in 1985. Ed Feigenbaum was still influential then, but it was getting embarrassing. He'd been claiming that expert systems would yield strong AI Real Soon Now. He wrote a book, "The Fifth Generation", [1] which is a call to battle to win in AI. Against Japan, which at the time had a heavily funded effort to develop "fifth generation computers" that would run Prolog. (Anybody remember Prolog? Turbo Prolog?) He'd testified before Congress that the "US would become an agrarian nation" if Congress didn't fund a big AI lab headed by him.

I'd already been doing programming proof of correctness work (I went to Stanford grad school from industry, not right out of college), and so I was already using theorem provers and aware of what you could and couldn't do with inference engines. Some of the Stanford courses were just bad philosophy. (One exam question: "Does a rock have intentions?")

"Expert systems" turned out to just be another way of programming, and not a widely useful one. Today, we'd call it a domain-specific programming language. It's useful for some problems like troubleshooting and how-to guides, but you're mostly just encoding a flowchart. You get out what some human put in, no more.

One idea at the time was that if enough effort went into describing the real world in rules, AI would somehow emerge. The Cyc project[3] started to do this in 1984, struggling to encode common sense in predicate calculus. They're still at it, at some low level of effort.[4] They tried to make it relevant to the "semantic web", but that didn't seem to result in much.

Stanford at one time offered a 5-year "Knowledge Engineering" degree. This was to train people for the coming boom in expert systems, which would need people with both CS and psychology training. They would watch and learn how experts did things, as psychology researchers do, then manually codify that information into rules.[2] I wonder what happened to those people.

[1] http://www.amazon.com/The-Fifth-Generation-Artificial-Intell...

[2] https://saltworks.stanford.edu/assets/gx753nb0607.pdf

[3] https://en.wikipedia.org/wiki/Cyc

[4] http://www.businessinsider.com/cycorp-ai-2014-7


Interesting. I studied AI in the 1990s, and although I don't know this author, I've always felt that the real progress in AI would come from combining various techniques. I don't understand why there would be any hostility towards that idea. (Except that in research people can be very protective of their own pet projects.)


The challenge is to have grammar-like and pointer-like structures in a connectionist network. To ground such symbolic notions is the entire quest!!

MC tree search converges to minimax.

I'm really fond of Bayesian methods, but I think arguments about optimality should not be overstated. The brain is just an approximation at best.


So now that computers are getting good at Go, what's the next logical stepping stone for AI?


Deterministic games with limited information, for example Stratego.


AI chess that actually thinks like a human: http://www.popularmechanics.com/technology/robots/a17339/che...

If you could get a chess engine that can prune the tree as efficiently as a human, while calculating as fast as a computer, it would be phenomenal.


After reading the DeepMind paper, it does seem like their techniques could also be applied to chess and could possibly improve the state-of-art there as well. It is unclear, however, how much improvement remains in modern chess engines. They are already phenomenal.


Beating the world's 663rd-ranked go player is not mastering the game...

I guess AI has to purge humans from the earth to justify the statement "AI masters xxxx"...


The superhuman fallacy is the nemesis of all AI research: if it isn't better than the best human who has ever attempted a problem, it is worthless and derisory.

I've done a lot of work on artificial creativity, and am constantly thrilled when code generates something at the level of a creative 3rd grader. But show it to most people and you get "It's hardly a Michelangelo, is it?"

Frustrating.


Sounds less like a fallacy and more like a strawman argument by certain types of AI proponents. First they claim that efficiency is the only thing that matters in intelligence. Then they fail to demonstrate high enough efficiency in real-life applications and resort to rhetoric to explain it all away.


> they claim that efficiency is the only thing that matters in intelligence

Who, and where? I've not come across such a claim.

There is, of course, a question of whether a particular AI tool demonstrates value for money in a particular commercial niche. But not doing so doesn't mean it isn't 'real' AI or 'real' intelligence. There are plenty of situations in which it wouldn't be value for money for me to invest in you; I don't take this as meaning you aren't intelligent. It's a quite different question.


No, it's rather the idea that you cannot say "AI beats Go master" or "AI masters Go" if it didn't actually attain some sort of high ranking by itself. Beating a low-ranked player, while still interesting, is not necessarily proof of proficiency; it could be due to blind luck, for example.


Winning 5 games to 0 probably isn't blind luck.

There's no clear definition of "mastery." Honinbō Shusaku was a master. Honinbō Shūsai, Go Seigen, and Minoru Kitani would also be considered masters. As time marches on, average player skill increases. While Honinbō Shusaku was among the strongest players of his day, he would be hard pressed to hold his own against professionals in today's era.

I think it's fair to say that anyone who reaches shodan (1 dan professional) plays at the master level. The difference between 1 dan professional (1P) and 9 dan professional (9P) is three stones, or roughly 30 points. In amateur play, for comparison, the difference between a 1 dan and 3 dan is about 2 stones (roughly 20 points).

AlphaGo won 5 straight games against a 2 dan professional player. That puts AlphaGo around 3P, well into the master range.

In Go, the ranks are:

    30 kyu amateur (never played)
    1 kyu amateur (understands the game)
    1 dan amateur (mastered the basics)
    7 dan amateur (nearly professional strength)
    shodan (1 dan professional)
    9 dan (top ranking professional)
https://en.wikipedia.org/wiki/Go_ranks_and_ratings#Professio...

https://en.wikipedia.org/wiki/Go_professional#Discrepancies_...

"Traditionally it has been uncommon for a low professional dan to beat some of the highest pro dans. But since the late-90s it has slowly become more common."


There are about 1500 chess players with the title "Grandmaster", and a whole lot more with various other "Master" titles.

That is to say that a player ranking within the top 1000 of a game like Go is a master of the game. A human player will take 10 years and way north of 10000 hours to get to where AlphaGo is.


It did attain some high proficiency. It both beat a Go master, and played at a master level (where 'master' = professional dan rating). It wasn't assessed for a ranking, but beat a 2p-dan player 5-0. That's not a 'low ranked player' - I suspect you don't know much about Go.

We'll see in March how it does against a 9p.

Your comment was a perfect example of the superhuman fallacy - where a 3p+ AI becomes 'low ranked'.


You have a very high bar for proficiency.


Bear in mind, it didn't just beat him, it smashed him 5-0.


3-2 in the fast games.


If beating somebody better than 99.99995% of the people isn't mastering something, what is?


Indeed, if this software beat me soundly (and it would since I don't play Go), could I still claim that the AI hasn't mastered the game, because a handful of humans are better than it?


Framing questions about AI in terms of "mastering" some field is somewhat disingenuous in the first place.


it seems to me that while humans are built out of neural nets, we are capable of logic, and the sort of reasoning that seems to fall under the category of "tree search" in this conversation. in that sense, it would seem that we can do deductive logical reasoning, and we run that software on neural net hardware (slash firmware/software).

so, why the slavishness among some to neural nets and building everything on top of that? why emulate logic processing on top of associative processing and then mesh the two, when you could just do associative processing and mesh it with more "native" logic processing, since man-made computing hardware already does that easily?

also: this has been brought up here and elsewhere before, but as the article mentions: "Just yesterday, a few hours before the Go paper was made public, I went to a talk where a graduate student of a deep learning expert acknowledged that (a) people in that field still don't really understand why their models work as well as they do and (b) they still can't really guarantee much of anything if you test them in circumstances that differ significantly from the circumstances on which they were trained. To many neural network people, Minsky represents the evil empire. But almost half a century later they still haven't fully faced up to his challenges."

let's keep in mind that some of the biggest mistakes humans make come from unexamined associative reasoning. let's also keep in mind that this crowd tends to be particularly suspect of the sort of not-strictly-logical associative heuristics that we associate (haha) with more mainstream society (unexamined articles of faith, habits and social customs that have unintended deleterious effects, etc).

one of my favorite failures of loose associative reasoning is racism, so here's a link many of you have probably seen, but it's an important thing to keep in mind, so here it is anyway: http://www.nytimes.com/2015/07/10/upshot/when-algorithms-dis...

let's not reproduce some of our species' biggest flaws just because it let us expediently reach some narrowly defined (and likely immediately financially desirable) goal.

i realized i've sort of conflated my points here: it is entirely possible to implement all sorts of terrible things (like racism) using logic. there's certainly no shortage of spurious logic to justify all sorts of bad behavior. axioms need to be sound and nuance needs to be considered. but i think my general point is that humans aren't some sort of model of perfection, and should not be copied as such.



