It still kinda sucks though. You can make it work, but you can also easily burn a huge amount of time trying to make it do something it's just incapable of. And there's no way to know upfront which one you're in for. It's more like gambling.
I personally think we've reached some kind of local maximum. I work 8 hours a day with Claude Code, so I'm very much attuned to even subtle changes in the model. Considering how much money has been thrown at it, I can't see much progress in the last few model iterations. Only the "benchmarks" are improving; the results I'm getting are not. If I care about a piece of work, I almost never use AI. I also watch a lot of people streaming online to pick up new workflows, and often they say something like "I don't care much about the UI, so I just let it do its thing". I think that tells you more about the current state of AI for coding than anything else. Far from _not sucking_ territory.