> holding out with the vague 'I tried it and it came up with crap'
Isn't that a perfectly reasonable metric? The topic has been dominated by hype for at least the past 5 if not 10 years. So when you encounter the latest in a long line of "the future is here the sky is falling" claims, where every past claim to date has been wrong, it's natural to try for yourself, observe a poor result, and report back "nope, just more BS as usual".
If the hyped future does ever arrive then anyone trying for themselves will get a workable result. It will be trivially easy to demonstrate that naysayers are full of shit. That does not currently appear to be the case.
Wasn't the transformer 2017? There's been constant AI hype since at least that far back and it's only gotten worse.
If I release a claim once a month that armageddon will happen next month, and then after 20 years it finally does, are all of my past claims vindicated? Or was I spewing nonsense the entire time? What if my claim was the next big pandemic? The next 9.0 earthquake?
The transformer was 2017, and it had real implications for translation (which were in no way overstated), but it took GPT-2 and GPT-3 to kick things off in earnest, and the real hype machine started with ChatGPT.
What you are doing, however, is dismissing the outrageous progress in NLP, and by extension code generation, over the last few years just because people overhype it.
People overhyped the Internet in the early 2000s, yet here we are.
Well I've been seeing an objectionable amount of what I consider to be hype since at least transformers.
I never dismissed the actual verifiable progress that has occurred. I objected specifically to the hype. Are you sure you're arguing with what I actually said as opposed to some position that you've imagined that I hold?
> People over hyped the Internet in the early 2000s, yet here we are.
And? Did you not read the comment you are replying to? If I make wild predictions and they eventually pan out does that vindicate me? Or was I just spewing nonsense and things happened to work out?
"LLMs will replace developers any day now" is such a claim. If it happens a month from now then you can say you were correct. If it doesn't then it was just hype and everyone forgets about it. Rinse and repeat once every few months and you have the current situation.
I don't dispute that the situation is rapidly evolving. It is certainly possible that we could achieve AGI in the near future. It is also entirely possible that we might not. Claims such as that AGI is close or that we will soon be replacing developers entirely are pure hype.
When someone says something to the effect of "LLMs are on the verge of replacing developers any day now" it is perfectly reasonable to respond "I tried it and it came up with crap". If we were actually near that point you wouldn't have gotten crap back when you tried it for yourself.
There's a big difference between "I tried it and it produced crap" and "it will replace developers entirely any day now".
People who use this stuff every day know that people who are still saying "I tried it and it produced crap" just don't know how to use it correctly. Those developers WILL get replaced - by ones who know how to use the tool.
> Those developers WILL get replaced - by ones who know how to use the tool.
Now _that_ I would believe. But note how different "those who fail to adapt to this new tool will be replaced" is from "the vast majority will be replaced by this tool itself".
If someone had said that six (give or take) months ago I would have dismissed it as hype. But there have been at least a few decently well-documented AI-assisted projects done by veteran developers that have made the front page recently. Importantly, they've shown clear and undeniable results as opposed to handwaving and empty aspirations. They've also been up front about the shortcomings of the new tool.
You probably mean antirez porting Flux to C. There were not too many shortcomings in his breakdown; the biggest one, as I saw it, was that his knowledge and experience building large C programs really was a requirement. But given one of these experts, don't you see how that person and Claude Code just replace a team? The less capable people on the team cannot do what he does, so before, they were just entering code and getting corrected in reviews or asking for help. Now the AI can do that, but on 10 projects in parallel. In a weekend you won't have time for that, but not everything has to be done in a weekend.