Innovation and better application of a relatively fixed amount of intelligence got us from wood spears to the moon.
So even if the plateau is real (which I doubt, given the pace of new releases and things like AlphaEvolve) and we should only expect small fundamental improvements, those "better applications" could still mean a lot of untapped potential.
The core models have plateaued. MoE and CoT are ways of using LLMs; agents are applications of LLMs. It's hard to say how far novel uses and applications will take us, but the fiery explosion at the core has turned into a smolder.
We'll continue to see incremental improvements as training data, model size, and compute improve. But they're incremental.