Why Go Still Foils the Computers (wsj.com)
40 points by lxm on Dec 31, 2015 | hide | past | favorite | 30 comments


This article doesn't really explain what the title suggests it will. Instead, it tells us:

1. That computers have a hard time with Go.

2. That many people think Go is deeper than other board games.

3. That some people are working on making computers better at Go with neural networks, which more closely resemble the human brains that do so well at Go.


Foil is relative. The article doesn't provide much background on contemporary computer Go.

On KGS, where I play, the highest-rated computers are at 5 dan. That is better than basically 99% of the players on the server, which I think is the largest server in the English-speaking world. Computer Go has made considerable strides since 2009 [1]; the main impetus has been Monte Carlo tree search [2]. Computers have not yet reached the level of professional Go players, unlike in chess, but progress from existing methods seems quite possible.

[1] https://en.wikipedia.org/wiki/Computer_Go [2] https://en.wikipedia.org/wiki/Monte_Carlo_tree_search
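For readers unfamiliar with it, Monte Carlo tree search grows a game tree incrementally: it descends the tree with a bandit rule (UCB1), expands one new node, scores it with a random playout, and backs the result up. A minimal sketch of that loop on a toy game (single-pile Nim, not Go; all names here are illustrative, not from any Go engine):

```python
import math
import random

# Toy game for illustration: single-pile Nim. Players alternately
# remove 1-3 stones; whoever takes the last stone wins.

def moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

class Node:
    def __init__(self, stones, player, parent=None, move=None):
        self.stones, self.player = stones, player
        self.parent, self.move = parent, move
        self.children = []
        self.untried = moves(stones)
        self.wins = 0.0
        self.visits = 0

    def ucb_child(self, c=1.4):
        # UCB1: balance exploitation (win rate) against exploration.
        return max(self.children, key=lambda n: n.wins / n.visits
                   + c * math.sqrt(math.log(self.visits) / n.visits))

def rollout(stones, player):
    # Play uniformly random moves to the end; return the winner.
    while stones > 0:
        stones -= random.choice(moves(stones))
        player = 1 - player
    return 1 - player  # the player who just took the last stone

def mcts(stones, player, iterations=2000):
    root = Node(stones, player)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend via UCB1 while fully expanded.
        while not node.untried and node.children:
            node = node.ucb_child()
        # 2. Expansion: add one child for an untried move.
        if node.untried:
            m = node.untried.pop()
            node = Node(node.stones - m, 1 - node.player, node, m)
            node.parent.children.append(node)
        # 3. Simulation: random playout from the new node.
        winner = rollout(node.stones, node.player)
        # 4. Backpropagation: credit the player who moved into each node.
        while node:
            node.visits += 1
            if node.parent and node.parent.player == winner:
                node.wins += 1
            node = node.parent
    return max(root.children, key=lambda n: n.visits).move

print(mcts(5, 0))  # from 5 stones; optimal is to take 1, leaving a multiple of 4
```

The striking thing, and the reason it works on Go where handwritten evaluation functions failed, is that nothing here knows anything about the game beyond the move generator and who won a playout.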


I read http://arxiv.org/abs/1412.6564 earlier this year; tl;dr, they trained a deep-net bot that managed to hold about 1-2d on KGS. Since there's no search going on, that's pretty good, though not as good as CrazyStone or other MCTS bots. (I hadn't heard it went from 6d to 5d, but sure enough: http://www.gokgs.com/graphPage.jsp?user=CrazyStone . I wonder what happened?) I suspect combining the two methods will yield even stronger programs; I've heard folks at Facebook and Google are each working on their own Go bots based on deep learning. If some company spent a few years building dedicated hardware, as IBM did for Deep Blue, I'd even bet they could beat 9d pros with current methods. Go is close to falling.


A big, discontinuous jump in rank like that suggests a couple of things: 1) the bot was taken offline and updated with a different engine, hence the different rank, and/or 2) KGS recalibrated its relative kyu/dan ranks (which it does from time to time).


It's worth pointing out that Monte Carlo tree search is a relatively 'dumb' algorithm. It turns out that getting a computer to be good at Go doesn't reveal anything deep or interesting about humans, computers, intelligence, or Go.



Because combinatorial explosion?


Then how do human beings play Go? I've heard a lot of claims about human intelligence being non-replicable in silico, but I've never heard the claim that it can somehow "defeat" a combinatorial explosion of possibilities just to plan a sales trip or play a board game.


Heuristics. Really complex heuristics that defy simple codification: the feeling that a particular group of stones just isn't quite safe yet, the feeling that there is weakness in a structure on the other side of the board that can be exploited, the feeling that this corner is too hot right now, so you should definitely extend instead of the hane.

If you're not already a go player, and want to get started, come sign up on online-go.com and ping me, I'm pathogenix.


>Heuristics. Really complex heuristics that defy simple codification:

The whole point of a heuristic is that it's a simple rule that works reasonably well (one might even say 'admissibly', as they do in undergrad AI classes) for dealing with an unsolvably complex problem.

Saying "humans use complex heuristics" amounts to just saying, "Humans use some algorithm I don't know."

>the feeling that a particular group of stones just isn't quite safe yet, the feeling that there is weakness in a structure on the other side of the board that can be exploited, the feeling that this corner is too hot right now, so you should definitely extend instead of the hane.

This mostly just sounds like probabilistic, bounded-rational prediction and evaluation of positions, which is what we currently think human cognition is anyway, but hey.
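As an aside, the "admissible" sense invoked above has a precise meaning: a heuristic that never overestimates the true remaining cost, which is what lets best-first search keep its optimality guarantee. A minimal sketch (A* on a grid with Manhattan distance; the setup is illustrative, nothing to do with Go itself):

```python
import heapq

# Manhattan distance never overestimates the cost of a shortest path
# on a 4-connected grid, so it is an admissible heuristic for A*.

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def astar(start, goal, walls, width, height):
    # Frontier entries: (f = g + heuristic, g = cost so far, position).
    frontier = [(manhattan(start, goal), 0, start)]
    best = {start: 0}
    while frontier:
        _, g, pos = heapq.heappop(frontier)
        if pos == goal:
            return g  # admissibility guarantees this is optimal
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (pos[0] + dx, pos[1] + dy)
            if (0 <= nxt[0] < width and 0 <= nxt[1] < height
                    and nxt not in walls
                    and g + 1 < best.get(nxt, float("inf"))):
                best[nxt] = g + 1
                heapq.heappush(frontier,
                               (g + 1 + manhattan(nxt, goal), g + 1, nxt))
    return None  # goal unreachable

# A wall forces a detour: the true cost (8) exceeds the heuristic
# estimate (4), but the heuristic never overestimated, so A* still
# returns the optimum.
print(astar((0, 0), (4, 0), {(2, 0), (2, 1)}, 5, 3))
```

The contrast with the parent's point: heuristics like this are simple and codifiable; the Go-playing "feelings" described above are heuristics in spirit, but nobody has managed to write them down this cleanly.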


It's not that humans only _use_ heuristics, it's that humans _create_ heuristics, and seem to be able to optimize the speed of the heuristic with training and use. They're also introspectable to some level, and can be combined with rational observation and feedback.


Devil's advocate: ML is introspective at some level, and can certainly observe rationally (with Spock-like objectivity, almost defining the term): it can take the result of each move and grade it with some degree of confidence as a "good move" or "bad move" [and even contextualize the move, i.e., move: 'e4'; context: "opening" => evaluation: "great move"]. I agree with your first point, though, w/r/t heuristics and, more importantly, pattern recognition, which can be used to integrate more heuristic knowledge into your aggregate 'decision-making system' at a far more 'effective' rate (with respect to time, within the domain of the game of Go).


One possibility is that humans actually have more training time in Go to fine-tune things than computers do so far. 6 hours a day * 300 days * 5-20 years is a lot of training, but multiply that by millions of people, who don't all come up with great models.

The idea being that people with better heuristics end up as better players. So, more people effectively adds more training time backing up the best models.

PS: This also means each player is using a different algorithm while playing.
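For scale, the arithmetic above works out to roughly:

```python
# Rough training-time arithmetic from the figures in the comment above.
hours_per_day, days_per_year = 6, 300
for years in (5, 20):
    print(f"{years} years = {hours_per_day * days_per_year * years:,} hours")
```

That's 9,000 to 36,000 hours per serious player, before multiplying by the size of the player population.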


Sure, but there are a lot of them, they involve several layers of metaheuristics to balance different approaches, and nobody has been able to automatically grow something which matches the same success rate as highly trained human heuristic layering.

Which maybe takes some fun out of it, but this does seem to be an area where humans consistently out-perform AI: when local optimizations have to be balanced across many medium to medium-large optimization criteria as well. Similar things happen in language, at least metaphorically.


The answer is actually: we don't know. Some people suggest complex visual pattern matching is at play, but this seems really unlikely when you get to know the space, because:

1. Good Go players can often reconstruct an entire game just by looking at the board, if they have some idea how it started. Given that good Go players can substantially alter the board in the course of play (called "playing under the stones" in many books), this suggests more than simple visual recognition.

2. Some Go players can even play "one-color Go", which is pretty amazing to watch. It's basically a game of who can keep every move in their head. This isn't a silly stunt; some people really practice this.

Personally, I think Go is more like a contextual natural-language problem than a vision problem. The existence of things like "joseki", and the fact that small-board games play out in such a radically different way than big-board games, suggests that a variety of human cognitive shortcuts are at play.

It is absolutely the case that with just a few months of modest practice almost anyone can beat the pants off the best go playing computers.


> It is absolutely the case that with just a few months of modest practice almost anyone can beat the pants off the best go playing computers.

It sure isn't. The strongest computers are a few stones worse than professionals. You can count on your hands and toes the number of people in the United States that can beat the pants off the best go-playing computers.


https://en.m.wikipedia.org/wiki/Computer_Go#Performance:

"In 2009, the first such programs appeared which could reach and hold low dan-level ranks on the KGS Go Server also on the 19x19 board."

I know virtually nothing about what it takes to become a low-level dan ranked player, but I would think it would take more than "just a few months" to "beat the pants off" them.

Back to the subject at hand: I think we will solve mathematical Go before we solve chess (where 'solve' is used in the mathematical sense, so that, for example, we can prove "chess is a win for white, in 53 moves"; mathematical Go is as described in https://math.berkeley.edu/~berlek/cgt/gobook.html, and its difference from regular Go variants is the way half stones are counted).

The reason is that both games, even with extensive pruning, are too complex for a full search of their game trees, and Go has a simpler structure, making it easier to reason about without doing that exhaustive search.

[I also doubt I'll live to see either happen]


It has been demonstrated that Go (when generalized to an arbitrarily large board, and an arbitrary initial configuration of stones) is PSPACE hard.

This means that an analytical solution would either need to be an optimization of brute force, or exploit some additional structure that results from starting with an empty go board.

As an added wrinkle, such a solution may involve assuming that White [0] plays optimally. Ironically, this means it may still be possible to beat someone who has fully analyzed and "solved" Go, by making incorrect moves that put the board in a position where their analysis only says that a winning move exists.

[0] I am assuming that Black has the winning strategy.


This was my standard trick in chess when I still played a lot. If I played against someone strong on openings but a weak player otherwise, I'd make an irrational opening move, totally upsetting their apple cart. If they were strong players they'd take advantage, but if they were only strong on book knowledge you'd take them to the cleaners. Knowing your opponent in chess can be as important as knowing the game.


I'm not convinced proofs for arbitrarily large boards say much about solving the 19x19 board. There may be a generalization of tic-tac-toe to n-dimensional boards that is hard to analyze, but that does not imply that tic-tac-toe itself is particularly hard to analyze (to be fair, tic-tac-toe is an example where one basically has to do an exhaustive search to show it is a draw).

A full analysis of Go will have to include some "if N is less than X" clause for an X at least equal to 20, making it less beautiful, but I don't see that as a big hindrance (and X need not be optimal in a proof for a 19x19 board).

In particular, the proof I know of (http://www.cs.bu.edu/faculty/gacs/courses/cs535/papers/Licht...) uses building blocks so large that you can, maybe, fit 2 on a 19x19 board.

Also, I think it is clear white doesn't have a winning strategy, as black can pass at the start of the game.


Humans are much better at pattern recognition than computers. As a result, computers have to employ "brute force" algorithms that are vulnerable to combinatorial explosions.


That explains why there is such a focus on generating ever better and more efficient pattern-matching algorithms. It also helps to be able to utilize ever more powerful machines. But I think the well-performing ones are all task-specific algorithms; there is no general-purpose pattern-matching capability that I know of that you can plug into a computer today and have it perform untrained tasks. I think it is possible, and it will be a big step forward. I would love to have time to work on this type of problem...


100 trillion synapses isn't brute force?


Brute force = check every possible next move, then, assuming you've picked that move, check every possible opponent's move... you end up playing out a whole game's worth of turns for each turn.

Brute force has this particular definition of trying every possible outcome.

Humans pattern match pretty quickly and more or less guess based off prior experience and "intuition" -- it's not a brute force approach.


It may be the case that the hardware of your brain isn't essentially brute forcing behind the scenes, but it certainly isn't obvious to me that this is the case.


I think it's provable that it's not a brute-force process. Since combinatorial explosion means by definition you cannot search to the end, and humans can in fact play Go, there has to be something else going on.


You can still brute-force all game states in a tree of height n, with some heuristic to estimate the strength of the position at each leaf. I certainly don't think humans have solved Go, because I've never heard of any experts being undefeated.
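Concretely, that depth-limited brute force is just minimax with a heuristic at the frontier. A sketch on a toy game (single-pile Nim, not Go; the mod-4 evaluation is a known property of that toy game, not something transferable to Go):

```python
# Depth-limited minimax: enumerate every line of play to depth n,
# then score the frontier with a heuristic. Toy game: single-pile
# Nim, remove 1-3 stones, taking the last stone wins.

def minimax(stones, depth, maximizing):
    if stones == 0:
        # The previous player took the last stone and won.
        return -1 if maximizing else 1
    if depth == 0:
        # Heuristic frontier evaluation: in this toy game, a pile
        # that is a multiple of 4 is lost for the player to move.
        bad = stones % 4 == 0
        return (-1 if bad else 1) if maximizing else (1 if bad else -1)
    scores = [minimax(stones - m, depth - 1, not maximizing)
              for m in (1, 2, 3) if m <= stones]
    return max(scores) if maximizing else min(scores)

def best_move(stones, depth=4):
    # Pick the move whose subtree scores best for us.
    return max((m for m in (1, 2, 3) if m <= stones),
               key=lambda m: minimax(stones - m, depth - 1, False))

print(best_move(10))  # take 2, leaving the opponent a multiple of 4
```

The catch, and the parent's point, is that in Go nobody knows a frontier heuristic as sharp as the mod-4 rule is here, and the branching factor (~250 moves) makes even modest depths explosive.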


Most of that is irrelevant to the problem domain. Humans do a lot more with their brains than just play Go.


Humans use features that aren't used by computers.

I'm more familiar with Chess than Go, but in Chess people will often talk about things like "too crowded" or how pieces are exposed.

These are visible to humans quite easily, but hard to engineer well enough to be useful to computers. In chess, brute force is easier.

In Go, I suspect some deep-learning-style bots will develop similar features themselves in their hidden layers. It's worth noting that the Google DeepMind team is looking at tackling NP-hard problems (like traveling salesman) with their Neural Turing Machines [1].

[1] See for eg: https://medium.com/@alevitale/notes-from-deep-learning-summi...


Yes - Go is full of concepts like 'aji' (lit. "taste"), thickness, influence, and good and bad shape: features that can only really be evaluated in terms of actual points dozens of moves later.

I think Go is particularly difficult for AI because you need both fuzzy pattern matching and precise reading-out of positions (life-and-death problems), as well as the judgment to know when to use which.



