If I leave Stockfish to analyse for longer, Qe1 does come up in the analysis, which makes me wonder whether SF actually gets weaker in some positions the longer it's left to think.
SF plays really odd moves when left to its own devices for a while, and so does this AI. So maybe chess just looks weird when the play is significantly better than the best humans'.
I think being able to play tactically perfect chess over 20 or so moves will often look weird to human strategic sensibilities. The computer sees every tiny exception to the patterns and heuristics you've incorporated into your gut feel about positions. In a way these moves are right just because they're right, and that's what's jarring - there's no _principle_ behind them that can be learned and generalised, which is something humans struggle with in all walks of life.
Except AlphaZero doesn't evaluate nearly as many positions as Stockfish (~80K nps vs ~70M nps), so in a sense it has done exactly that: generalised a principle (or more likely a whole lot of principles) that lets it evaluate positions far better than Stockfish can.
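To put that gap in perspective, here's a back-of-the-envelope sketch in Python using the nps figures quoted above; the one-minute think time per move is an assumption purely for illustration:

```python
# Rough per-move search budgets. The nps figures are the ones quoted above;
# the 60-second think time per move is an assumed value for illustration.
STOCKFISH_NPS = 70_000_000  # ~70M positions/sec (alpha-beta search)
ALPHAZERO_NPS = 80_000      # ~80K positions/sec (MCTS guided by a neural net)
THINK_TIME_S = 60           # assumed thinking time per move

sf_nodes = STOCKFISH_NPS * THINK_TIME_S
az_nodes = ALPHAZERO_NPS * THINK_TIME_S

print(f"Stockfish: ~{sf_nodes:.1e} positions per move")  # ~4.2e9
print(f"AlphaZero: ~{az_nodes:.1e} positions per move")  # ~4.8e6
print(f"Ratio: ~{sf_nodes / az_nodes:.0f}x")             # ~875x
```

So AlphaZero looks at roughly 875 times fewer positions per move, which means each evaluation has to carry far more insight. That's exactly where the learned "principles" come in.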
Of course you are right about perfect play, but the human-like aspect is part of what is exciting about these new Alpha engines.
There's definitely nothing fishy going on, although it'd be nice to see a fully loaded Stockfish on its full complement of 512 cores and a proper endgame tablebase to really slog it out with AlphaZero.