The argument here seems to be “you need AGI to write good code. Good code is required for… reasons. AGI is far away. Therefore code is not dead.”

First, I disagree that good code is required in any sense. We have decades of experience proving that bad code can be wildly successful.

Second, has the author not seen the METR plot? We went from "LLMs can write a function" to "agents can write working compilers" in less than a year. Anyone who thinks AGI is far away deserves to be blindsided.



I agree in principle, but the compiler is a terrible example given the amount of scaffolding afforded to the LLMs: literally hundreds of thousands of test cases covering all kinds of esoteric corners.

Also (and this is coming from someone who thinks it's quite close) "AGI" is not implied by the ability to implement very-long-horizon software tasks. That's not "general" at all.


You're moving the goalposts. A year ago, _no one_ thought it could write a working compiler. Yes, the compilers we've seen today are not great. Yes, they rely too much on existing implementations. But... if you can't see which way the wind is blowing, then I can't help you at this point.

AGI is a meaningless milestone. No one can actually define it. The best definition I've seen is the one that ARC is using: "AI that is as good as a human at every task".


What goalposts have I moved? You seem to be attributing arguments to me that I haven't made. I'm simply pointing out that the example you gave involves a level of scaffolding that most projects don't have, so the data point is exaggerated; and that it's possible (and quite reasonable) to have an agent that is extremely good at programming while not matching what most companies and people in the space have defined as "AGI". I do believe that we'll soon have agents that can achieve Claude C Compiler–level results in spaces with far less scaffolding.


That's not my argument at all! Though I can see why you took that away; my bad for not making my argument clearer.

I believe that even when we have AGI, code will still be super valuable because it'll be how we get precise abstractions into human heads, which is necessary for humans to be able to bring informed opinions to bear.


No, I think we just fundamentally disagree.

IMO, black boxes that "just work" will be fine provided they can produce intermediate artifacts and explanations that make sense. The people I know who use CoWork already don't care how the agent got the result, as long as the outputs look right and the process is explainable.


I don't disagree with anything you just said.



