Sometimes optimal solutions don't make sense to the human mind because they're not intuitive.
For instance, I developed a system that used machine learning and linear solver models to spit out a series of actions to take in response to certain events. The actions were to be carried out by humans who were experts in the field. In fact, they were the ones from whom we inferred the relevant initial heuristics.
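To make the setup concrete, here is a minimal sketch of the kind of linear program I mean: pick the cheapest mix of response actions whose combined effect covers each event's requirement. The costs, coverage matrix, and use of scipy.optimize.linprog are my own illustration and assumptions, not the actual system.

```python
# Hypothetical illustration, not the original system: choose a
# least-cost mix of response actions whose combined effect covers
# each event's requirement. All numbers are made up.
from scipy.optimize import linprog

costs = [4.0, 2.0, 5.0]      # cost of one unit of each candidate action

coverage = [                 # how much each action contributes...
    [1.0, 0.0, 2.0],         # ...toward handling event A
    [0.0, 1.0, 1.0],         # ...toward handling event B
]
required = [3.0, 2.0]        # coverage each event needs

# linprog minimizes c @ x subject to A_ub @ x <= b_ub, so negate the
# ">= required" coverage constraints to fit that form.
A_ub = [[-v for v in row] for row in coverage]
b_ub = [-r for r in required]

res = linprog(costs, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * len(costs))
print(res.x)  # the optimal action mix, which need not look intuitive
```

The optimizer is free to lean on whichever combination of actions is globally cheapest, which is exactly the kind of answer that looks wrong to someone used to handling each event on its own.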
Every day, I would get a support call from one of the users. They'd be like, 'This output is completely wrong. You have a bug in your code.'
I'd then have to spend several hours walking through each of the actions with them and recording the results. In every case, the machine would produce recommended actions that were optimal. However, they were rarely intuitive.
In the end, it took months of this back and forth until the experts began to trust the machine outputs.
This is the frightening thing about AI: not only can an AI outperform experts, but it often makes decisions that are incomprehensible to the very experts it outperforms.
What you said about the expert calling something a bug reminded me of how the commentator in the first game would see a move by AlphaGo and say that it was wrong. He did this multiple times for AlphaGo but never once questioned the human's moves. Yet even with all those "wrong" moves, AlphaGo won. I didn't watch the second game, so I'm not sure if he kept doing that.
The English-speaking human 9-dan only did this once for AlphaGo yesterday (when AlphaGo made an "overextension" which eventually won the AI the game), but did it maybe 3 or 4 times for Lee ("Hmm, that position looks a bit weak. I think AlphaGo will push his advantage here and... oh, look at that. AlphaGo moved here").
Later, he did admit that the "overextension" on the north side of the board was more solid than he originally thought, and called it a good move.
He never explicitly said that a move was "good" or "bad", and always emphasized that, as he was talking, his analysis of the game was relatively shallow compared to the players'. But in hindsight, whenever he pointed out a "bad-juju feel" about one of Lee's moves, AlphaGo managed to find a way to attack the position.
Overall, you knew when either player made a good move, because the commentator would stop talking and just stare at the board for minutes, at least until the other commentator (an amateur player) would force a conversation, so that the feed wouldn't be quiet.
The vast, vast majority of the time, the English-speaking 9-dan was predicting the moves of both players, in positions more complicated than I could read ("Oh, but it was obvious both players would move there"). There were clearly times when the commentator would veer off into a deep, distant conversation with the predicted moves still on the demonstration board, because he KNEW both players were going to play out a sequence of maybe 6 or 7 moves forward.
They really got a world-class commentator on the English live feed. If you've got 4 hours to spare, I suggest watching the game live.
Elsewhere in this thread, IvyMike pointed out [1]:
> I sense a change in the announcer's attitude towards AlphaGo. Yesterday there were a few strange moves from AlphaGo that were called mistakes; today, similar moves were called "interesting".
Or, maybe, there could have been bugs in the code.
If I'm an expert in some domain and a computer is telling me to do something completely different ("Trust me--just drive over the river!") I'm certainly going to question the result.