That's true up to a certain threshold on 'probabilistically correct', right? At a certain number of 9s, it's fine. And increasingly I use AI to ask me questions, refine my understanding of problem spaces, and do deep research on existing patterns or trends in a space, then use the results as context for a planning session, which in turn provides context for architecture, and so on.
So, I don't know that the tools are inherently rightward-pushing
The problem is that, given the inherent limitations of natural language as a format to feed to an AI, it can never have enough information to solve your problem adequately. Often the constraints of what you're trying to solve only surface during the process of solving it, because it was unclear that they even existed beforehand.
An AI tool that could accept a specification precise enough to produce exactly the result you wanted, with no errors, would be a programming language.
I don't disagree at all that AI can be helpful, but there's a huge difference between using it as a research tool (which is very valid) and the folks trying to use it to replace programmers en masse. The latter is what's driving the bubble, not the former.