
But isn't that most of programming, anyway?


No, most of programming is at least implicitly coming up with a human-language description of the problem and solution that isn't full of gaps and errors. LLM users often don't give themselves enough credit for how much thought goes into the prompt, likely because that kind of thinking comes easily to humans! But not necessarily to LLMs.

Sort of related to how you need to specify the level of LLM reasoning not just to control cost, but because a non-reasoning model just goes ahead and answers incorrectly, while a reasoning model will "overreason" on simple problems. Being able to estimate the reasoning-intensiveness of a problem before solving it is a big part of human intelligence (and IIRC is common to all great apes). I don't think LLMs can really do this, except via case-by-case RLHF whack-a-mole.
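
For concreteness, here's a minimal sketch of what "specifying the level of reasoning" looks like in practice, assuming the OpenAI Python SDK's reasoning_effort knob (the model name and prompt are placeholders). The point is that the caller has to estimate the problem's difficulty up front; the model doesn't do it for you:

    # Sketch only: caller picks reasoning effort before seeing the problem.
    # Assumes the OpenAI Python SDK; model/prompt are illustrative.
    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(
        model="o3-mini",                 # a reasoning-capable model
        reasoning_effort="low",          # "low" | "medium" | "high"
        messages=[{"role": "user", "content": "What is 17 * 24?"}],
    )
    print(response.choices[0].message.content)

Set it too low and a hard problem gets a confident wrong answer; set it too high and a trivial one burns tokens on unnecessary deliberation.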



