Randomized linear algebra and under-solving (mixed precision, or fp32 instead of fp64) seem to be taking off more than in the past, mostly on GPUs though (use of tensor cores, expensive fp64, memory-bandwidth limits).
And I wish Eigen had a larger spectrum of 'solvers' you could choose from, depending on what you want. But in general I agree with you, except there's always a cycle to eke out somewhere, right?