- could actually do useful work reliably with minimal supervision

That's the big problem: LLMs can't be allowed to do anything important without supervision. We're still seeing 5-10% totally bogus results.



I think a deeper issue is that they are essentially "attempt to extend this document based on patterns in documents you've already seen" engines.

Sometimes that's valuable and exactly what you need, but problems arise when people try to treat them as some sort of magical oracle that just needs to be primed with the right text.

Even "conversational LLMs are just updating a theater-style script where one of the characters happens to be described as a computer.



