And people don’t like this. Something being logical doesn’t mean people have to accept it.
Also, AI has been basically useless every time I've tried it, except for converting some struct definitions across languages or similar tasks. It seems very unlikely that it would boost productivity by more than 10%, let alone 400%.
You’re assuming how I would respond before I even respond. Please allow inquiries to happen naturally without polluting the thread with meritless cynicism.
With all due respect, when the response to a complaint that AI tools just don't seem to be effective is "What AI coding tools/models have you been using?", what difference does the answer even make? If your experience makes you believe that certain tools are particularly good--or particularly bad--for the tasks at hand, you can just volunteer those specifics.
FWIW, my own experiences with AI have ranged from mediocre to downright abysmal. And, no, I don't know which models the tools were using. I'm rather annoyed that it seems impossible to express a negative opinion about the value of AI without a thoroughly documented experiment, one that inevitably invites the response that obviously some parameter was chosen incorrectly, while the people claiming how good it is get to be all offended when someone asks them to maybe show their work a little bit.
Some people complain about AI but are using the free version of ChatGPT. Others are using the best models without a middleman system but still see faults, and I think it’s valuable to ask in which domains they see no value from AI. There are too many people saying “I tried AI and it didn’t work at all” without clarifying what models, what tools, what they asked it to do, etc. Without that context it’s hard to gauge the worth of any judgement about AI.
It’s like saying “I drove a car and it was horrible, cars suck” without clarifying what car, the age, the make, how much experience that person had driving, etc. Of course it’s more difficult to provide specifics than to just say it was good or bad, but there is little value in claims that AI is altogether bad when you don’t offer any details about what it is specifically bad at and how.
> It’s like saying “I drove a car and it was horrible, cars suck” without clarifying what car, the age, the make, how much experience that person had driving, etc.
That's an interesting comparison. That kind of statement can reasonably be inferred to come from someone just learning to drive who doesn't like the experience of driving. And if I were a motorhead trying to convert that person into liking driving, my first move wouldn't be those questions, interrogating them on their exact scenario to invalidate their results, but instead asking what aspect of driving they don't like, to see if I could work out a fix that would meaningfully change their experience (and not being a motorhead, the only thing I can think of is maybe automatic versus manual transmission).
> there is little value in claims that AI is altogether bad when you don’t offer any details about what it is specifically bad at and how.
Also, do remember that this holds true when you s/bad/good/g.
We're still in the early days of LLMs. ChatGPT launched only three years ago. The difference details make is that without them, we don't know if someone's opinion is still relevant, because of how fast things have moved since ChatGPT's original GPT-3.5-based release. If someone half-assed an attempt to use the tools a year ago, hasn't touched them since, and is going around still commenting about the number of R's in strawberry, then we can just ignore them and move on, because they're just being loudmouths who need everyone else to know they don't like AI. If someone makes an honest attempt and there's some shortcoming, then that can be noted, and the next version coming out of the AI companies can be improved.
But if all we have to go on is "I used it and it sucked" or "I used it and it was great", like, okay, good for you?
> With all due respect, with a response like "What AI coding tools/models have you been using?" to a complaint that AI tools just don't seem to be effective, what difference does a reply to that even make?
"Damn, these relational databases really suck, I don't know why anyone would use them, some of the data by my users had emojis in them and it totally it! Furthermore, I have some bits of data that have about 100-200 columns and the database doesn't work well at all, that's horrible!"
In some cases knowing more details could help: in the database example, a person historically using MySQL 5.5 could have had a pretty bad experience (its default utf8 charset couldn't store 4-byte emoji, for instance), in which case telling them to use something more recent, or PostgreSQL, would have been pretty helpful.
In other cases, they're literally just holding it wrong, for example trying to use an RDBMS for something where a column store would be a bit better.
Replace the DB example with AI and the same principles are at play. It is just as annoying to hear people blame all of the tools when some are clearly better or worse than others, and make broad statements that cannot really be proven or disproven with the given information, as it is to hear people always asking for more details. I honestly believe that all of these AI discussions should be had with as much data present as possible - both the bad and the good experiences.
> If your experience makes you believe that certain tools are particularly good--or particularly bad--for the tasks at hand, you can just volunteer those specifics.
My personal experience:
* most self-hosted models kind of suck, use cloud ones unless you can get really beefy hardware (e.g. waste a lot of money on them)
* most free models also aren't very good, nor have that much context space
* some paid models also suck, the likes of Mistral (like what they're doing, just not very good at it), or most mini/flash models
* around Gemini 2.5 Pro and Claude Sonnet 4 they start getting somewhat decent, GPT 5 feels a bit slow and like it "thinks" too much
* regardless of what you do, you still have to babysit them a lot of the time; they might take some of the cognitive load off, but usually won't make you 10x faster - the gains mostly come from reduced development friction (esp. when starting new work items)
* regardless of what you do, they will still screw up quite a bit, much like a lot of human devs do out there - having a feedback loop of tests will be pretty much mandatory, e.g. scripts that run the compilation and the test suite and report failures back (see the sketch after this list)
* agentic tools like RooCode feel like they make them less useless, as do good descriptions of what you want to do - references to existing files and patterns, etc.; normally throwing some developer documentation and ADRs at them should be enough, but most places straight up don't have any of that, so feeding in a bunch of code is a must
* expect usage of around 100-200 USD per month for API calls if the rate limits of regular subscriptions are too limiting
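To make the test loop point concrete, here's a rough sketch of the kind of script I mean (the build/test commands and the exact way failures get fed back to the agent are placeholders I made up for illustration, not any specific tool's feature - swap in whatever your stack uses):

    #!/usr/bin/env python3
    """Minimal check loop: build, run tests, and collect failure output
    so it can be pasted (or piped) back to the coding agent."""
    import subprocess
    import sys

    # Placeholder commands - replace with your project's real build/test invocations.
    STEPS = [
        ("build", ["make", "build"]),
        ("tests", ["make", "test"]),
    ]

    def run_checks() -> list[str]:
        """Run each step; return one failure report per failing step."""
        failures = []
        for name, cmd in STEPS:
            result = subprocess.run(cmd, capture_output=True, text=True)
            if result.returncode != 0:
                # Keep only the tail of the output so the report stays small
                # enough to fit back into the model's context.
                tail = "\n".join((result.stdout + result.stderr).splitlines()[-40:])
                failures.append(f"--- {name} failed ---\n{tail}")
        return failures

    if __name__ == "__main__":
        reports = run_checks()
        if not reports:
            print("all checks passed")
            sys.exit(0)
        # The agent (or the human babysitting it) takes this output as the next prompt.
        print("\n\n".join(reports))
        sys.exit(1)

The point isn't this exact script - it's that the model's output gets checked mechanically on every iteration instead of being trusted on sight.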
Are they worth it? Depends. The more boilerplate and boring bullshit code you have to write, the better they'll do. Go off the beaten path (e.g. not your typical CRUD webapp) and they'll make a mess more often. That said, I still find them useful for the reduced boilerplate and cognitive load, as well as for being able to ingest and process information more quickly than I can - they have more working memory and can spot patterns when working on a change that impacts 20-30 files. All in all, the SOTA models are... kinda okay in general.