
Yeah, people fixated on the meaning of "makes no sense" to avoid accepting that the proofs LLMs output are not useful at all.

In a similar fashion, almost all LLM-created tests have negative value. They are just easier to verify than proofs, but even the bias toward creating more tests (taken from the LLM firehose) is already harmful.
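
A minimal sketch of what I mean by a negative-value test (function name and code invented for illustration): the test just restates the implementation's own formula, so it passes by construction, catches nothing, and still costs review time:

    # Hypothetical example: a tautological test that mirrors the
    # implementation instead of checking an independent expectation.
    def apply_discount(price: float, rate: float) -> float:
        return price * (1 - rate)

    def test_apply_discount():
        # Re-derives "expected" with the same formula the code under
        # test uses, so a bug in the formula passes the test too.
        assert apply_discount(100.0, 0.2) == 100.0 * (1 - 0.2)

    # A useful test would pin an independently known value instead:
    #     assert apply_discount(100.0, 0.2) == 80.0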

I am almost confident enough to make a similarly broad claim about code, but I'm still collecting more data.



