
The difference is that an LLM isn't very good at saying "I'm a fucking idiot" and changing its answer when asked to double-check (unless you handhold it toward the exact error it's meant to be looking for). Humans recognize their own hallucinations; there's not really any promising work towards getting AI to do the same.
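For anyone who wants to try this themselves, here's a minimal sketch of the two follow-up styles in question (a generic "double-check" nudge vs. handholding toward the suspected error), using the OpenAI Python SDK. The model name and prompts are just placeholders, not a claim about which models do or don't self-correct:

  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  def ask(messages):
      # One chat-completion call; returns the assistant's reply text.
      resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
      return resp.choices[0].message.content

  history = [{"role": "user", "content": "What is 17 * 24 + 389?"}]
  answer = ask(history)
  history.append({"role": "assistant", "content": answer})

  # Generic nudge: no hint about what (if anything) is wrong.
  generic = ask(history + [{"role": "user",
      "content": "Double-check your answer and correct it if needed."}])

  # Handholding: point at the suspected error directly.
  pointed = ask(history + [{"role": "user",
      "content": "Redo the arithmetic step by step; I think the multiplication is off."}])

  print(answer, generic, pointed, sep="\n---\n")

The interesting comparison is whether the generic nudge ever flips a wrong answer, or whether only the pointed one does.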


Have you tried it? They're actually pretty good at it in a lot of scenarios. It's not flawless, but they're only getting better.


Leela.ai was founded to work on problems like that.



