AlphaGo's first incarnation included an input scalar representing the level of human play to emulate (handily included in the dataset).
Thus AlphaGo plays as a human (before MCTS); its choices model a challenging opponent.
If the dataset also included the name of the player, then AlphaGo could play in the style of that player.
This offers a more human experience, and for teaching, one could show what experts at differing levels would choose in a given position.
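The idea above can be sketched as a tiny toy policy network where one extra input scalar encodes the skill level to emulate. This is a minimal illustration only: the feature sizes, move count, and flat linear layer are all assumptions for the sketch (AlphaGo's real network used convolutional planes over the board, not a flat vector).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for the sketch.
N_FEATURES = 16   # flattened board features (assumption)
N_MOVES = 9       # candidate moves (assumption)

# One extra weight row for the skill-level input.
W = rng.normal(size=(N_FEATURES + 1, N_MOVES))

def move_probabilities(board_features, skill_level):
    """Softmax over candidate moves, conditioned on a skill scalar."""
    x = np.append(board_features, skill_level)
    logits = x @ W
    e = np.exp(logits - logits.max())
    return e / e.sum()

board = rng.normal(size=N_FEATURES)
weak = move_probabilities(board, skill_level=0.0)
strong = move_probabilities(board, skill_level=1.0)
# The same position yields different move distributions depending on
# the emulated skill level; a per-player embedding in place of the
# scalar would condition on playing style the same way.
```

Swapping the scalar for a learned per-player embedding vector is the natural extension that would let the network imitate a named player's style.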
This (along with the Neural Algorithm for Artistic Style paper) opens some interesting questions about simulating specific humans and the relationship of such simulations to mind uploading. Obviously it's a huge distance away from a proper upload, but with enough input data maybe this could form the basis of Alastair Reynolds' "beta simulations".
The first time I ever encountered a game with AI superior to humans was Uniracers, back in the 90s. Great game, but the manual stated that they had to make the AI worse before shipping because no one could beat it.
At the time it seemed novel, and even rather hilarious. Now it seems somewhat standard (aimbots can win every time, for example).