
The answer is that we're making it right now. AI didn't speed me up at all until agents got good enough, which was April/May of this year.

Just today I built a shovelware CLI that exports iMessage archives as a standalone website. It would have taken me weeks by hand. I'll probably have it out as a Homebrew formula in a day or two.
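For the curious, here's a minimal sketch of the kind of export I mean, in Python. The chat.db location, the message table schema, and the Apple-epoch timestamp handling are assumptions based on public documentation of macOS's iMessage store, not the actual tool:

    # Hypothetical sketch: read messages from macOS's iMessage database
    # (~/Library/Messages/chat.db) and dump them as one static HTML page.
    import html
    import sqlite3
    from datetime import datetime, timedelta, timezone
    from pathlib import Path

    DB_PATH = Path.home() / "Library" / "Messages" / "chat.db"
    APPLE_EPOCH = datetime(2001, 1, 1, tzinfo=timezone.utc)

    def apple_time(raw: int) -> datetime:
        # Newer macOS versions store nanoseconds since 2001-01-01;
        # older ones store seconds. Heuristic: very large values are ns.
        seconds = raw / 1e9 if raw > 1e12 else raw
        return APPLE_EPOCH + timedelta(seconds=seconds)

    def export(out_path: str = "imessages.html") -> None:
        # Open read-only so we never touch the live database.
        conn = sqlite3.connect(f"file:{DB_PATH}?mode=ro", uri=True)
        rows = conn.execute(
            "SELECT date, is_from_me, text FROM message "
            "WHERE text IS NOT NULL ORDER BY date"
        ).fetchall()
        parts = ["<!doctype html><meta charset='utf-8'><title>iMessage export</title>"]
        for raw_date, is_from_me, text in rows:
            who = "Me" if is_from_me else "Them"
            when = apple_time(raw_date).strftime("%Y-%m-%d %H:%M")
            parts.append(f"<p><b>{who}</b> <i>{when}</i><br>{html.escape(text)}</p>")
        Path(out_path).write_text("\n".join(parts), encoding="utf-8")

    if __name__ == "__main__":
        export()

The real thing handles attachments, threads, and contact names, but the core is just a SQLite query plus templating.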

I'm working on an iOS app as well that's MUCH further along than it would be if I hand-rolled it, but I'm intentionally taking my time with it.

Anyway, the post's data mostly ends in March/April, which is when generative AI started being useful for coding at all (and I've had Copilot enabled since Nov 2022).



It's amazing how, whenever criticisms pop up, the response for the last 3 years has been "well, you aren't using <insert latest>, it's finally good!"


Isn't this likely to be the case when a field is developing quickly and a large number of people hold different opinions on the subject?

E.g., I liked GitHub Copilot but didn't find it to be a game changer. Then I tried Cursor this year and started to see how useful AI can be today.


Indeed. The LLMs have been pretty useful for greenfield projects & one-off scripts for a while, but GPT-5 was the first time I've found a model to be quite helpful on large-scale legacy code (>1M LOC).


FWIW this closely matches my experience. I’m pretty late to the AI hype train, but my opinion changed specifically because of using combinations of models & tools released right before the cut-off date for the data here. My impression from friends is that it’s taken even longer for many companies to decide they’re OK with these tools being used at all, so I would expect a lot of hysteresis in outputs from that kind of adoption.

That said, I’ve had similar misgivings about the METR study, and I’m eager to see more aggregate studies of the productivity outcomes.


Yeah, I released a new version of a little open source project based almost entirely on vibe-coding with Claude/Codex. It was more fun than bashing out my own code, and despite all the problems others have mentioned (ignored instructions, not using libraries, etc.), it was probably faster than if I'd added the new features myself.


> was probably faster

That sure doesn't sound like 10x.


> AI didn't speed me up at all until agents got good enough, which was April/May of this year.

That was 5 months ago, which is 6 years in 10x time.


> That was 5 months ago, which is 6 years in 10x time.

That's some pretty bad math: 5 months at 10x is 50 months, closer to 4 years than 6.

But yes, it isn't making software get built 10x faster. Feel free to blow that straw man down (or that hype influencer, same thing).


Interested in this Homebrew formula. Share when it's ready?


Agreed. Agentic AI is a completely different tool from “traditional” AI.

I’m curious what the author’s data and experiment would look like a year from now.



