
One direction I would like to see more focus on is AI providing useful explanations of the decisions it takes. AlphaGo made quite a few unconventional moves, and they turned out to be good ones. What I would find really interesting is if an AI like AlphaGo could not only excel at playing games (solving problems) but also be used as a tool to gain understanding. Perhaps Lee Sedol will become even stronger now because of his experience with AlphaGo. Perhaps humans will learn to view the game in a slightly different way that closes the gap between where we are and where AlphaGo appears to be. Maybe humans will regain supremacy over AlphaGo? I think that would be a much more interesting outcome than moving on to another problem space.



To a large degree, I don't think this is possible.

As humans, we want an intuitive explanation. In chess, for instance: because it weakens certain squares, or because it allows a particular combination resulting in a pawn break.

Unfortunately, these notions arise from our human attempt to understand a complicated game by reasoning through abstractions over the game. Things like pawn structure, control over light and dark squares, and pressure on pinned pieces aren't fundamental components of chess — they're just patterns that help us understand and reason about complicated board positions.

An AI doesn't need or use these abstractions. At the end of the day, all of them can, will, and should be ignored for the sake of simply achieving a superior position on the board. And it seems very likely to me that moves at this deep a level can't be explained in terms of any abstraction more useful than "because it's better than all the other moves".


> What I would find really interesting is if an AI like AlphaGo can not only excel at playing games (solving problems) but also be used as a tool to gain understanding.

No, it cannot. It's generally thought that the only salient difference between 'machine learning' and 'statistics' is that 'statistics' attempts to find explanations, while 'machine learning' only attempts to find heuristics that give results.
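To make that distinction concrete, here is a toy sketch in Python (synthetic data and made-up numbers, purely illustrative): a fitted linear model hands you parameters you can read as an explanation, while a nearest-neighbour predictor answers queries just as well with no story attached.

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.uniform(0, 10, 200)
    y = 2.0 * x + 1.0 + rng.normal(0, 0.5, 200)  # synthetic: y is roughly 2x + 1

    # "Statistics": fit y = a*x + b and read the fitted parameters as the explanation.
    A = np.column_stack([x, np.ones_like(x)])
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    print(f"explanation: y ~ {a:.2f}*x + {b:.2f}")

    # "Machine learning" heuristic: predict by averaging the k nearest neighbours.
    # It answers queries, but offers no parameters to interpret.
    def knn_predict(q, k=5):
        idx = np.argsort(np.abs(x - q))[:k]
        return y[idx].mean()

    print("prediction at x=4:", knn_predict(4.0))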

Don't anthropomorphise AI. It's not artificial life; it's only probability theory plus lots of CPU power.


Knowing almost nothing about neural nets: is it possible for an NN-based AI to explain its decisions in some manner that we would understand?


What you can do is see which neurons were activated by a certain event, then figure out what other situations activate them and the patterns they match. That won't get you a coherent explanation straight away, especially for a very large network, but it's a start.
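A minimal sketch of that idea in Python, using a toy NumPy network (the random weights and inputs below are placeholders for a real trained model and its data): push a batch of inputs through, record the hidden activations, and rank inputs by how strongly they drive one chosen neuron.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy one-hidden-layer net with random weights (stand-in for a trained model).
    W1 = rng.standard_normal((64, 128))  # input dim 64 -> hidden dim 128
    W2 = rng.standard_normal((128, 1))

    def hidden(x):
        # ReLU hidden layer; these are the "neuron activations" we inspect.
        return np.maximum(0, x @ W1)

    # A batch of candidate inputs (in practice: real board positions, images, ...).
    X = rng.standard_normal((1000, 64))
    acts = hidden(X)  # shape (1000, 128)

    neuron = 42  # the unit whose behaviour we want to probe
    top = np.argsort(acts[:, neuron])[::-1][:10]
    print("inputs that most excite neuron", neuron, ":", top)
    # Studying what those top inputs have in common is one crude way to
    # guess what pattern the neuron has learned to detect.

On a real network you'd capture activations with the framework's hooks rather than recomputing them by hand, but the ranking step is the same.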


I bet humans will be able to gain supremacy over the version of AlphaGo that Lee Sedol has played (and is playing).

However, AlphaGo is still training and advancing. I am not sure people will be able to keep up with that moving target.



