For an industry that spun off of a research field that basically revolves around gradient descent in one form or another, there's a pretty silly amount of willful ignorance about the basic principles of how learning and progress happen.
The default assumption should be that current approaches are climbing toward a local maximum, with evidence required to demonstrate that they're not. But the hype artists want us all to take the inevitability of LLMs for granted: "See the slope? Slopes lead up! All we have to do is climb the slope and we'll get to the moon! If you can't see that, you're obviously stupid or have your head in the sand!"
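To make the local-maximum point concrete, here's a minimal hill-climbing sketch (the toy function, step size, and starting points are all invented for illustration): the same greedy "just follow the slope up" procedure lands on a tall peak from one start and a much lower bump from another, and from the inside it can't tell the difference.

```python
import math

def bumpy(x: float) -> float:
    # A made-up 1-D landscape with two peaks: a tall one near
    # x = 0.52 and a much lower one near x = -1.5.
    return math.sin(3 * x) - 0.1 * (x - 1.0) ** 2

def hill_climb(x: float, step: float = 0.01, iters: int = 10_000) -> float:
    # Greedy ascent: step toward whichever neighbor is higher,
    # and stop as soon as neither neighbor improves on where we are.
    for _ in range(iters):
        left, right = x - step, x + step
        best = max((bumpy(left), left), (bumpy(x), x), (bumpy(right), right))
        if best[1] == x:
            break  # no uphill neighbor: we've "reached the top"
        x = best[1]
    return x

for start in (-2.0, 0.5):
    peak = hill_climb(start)
    print(f"start={start:+.1f} -> peak at x={peak:+.2f}, height={bumpy(peak):.3f}")
```

Both runs look identical from the climber's point of view: the slope went up until it didn't. Only by stepping outside the procedure can you see that one "top" is far below the other.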
I never said anything about usefulness, and it's frustrating that every time I criticize AGI hype, people move the goalposts and say "but it'll still be useful!"
I use GitHub Copilot every day. We already have useful "AI". That doesn't mean that the whole thing isn't super overhyped.
We haven't even climbed this slope to the top yet. Why don't we start there and first see whether it's high enough? If it's not, at the very least we can see what's on the other side and pick the next slope to climb.