
I think this is the big question lots of people are working on right now.

It's apparently really hard to objectively measure or report the "truthiness" of LLM results.

Allowing an LLM to "improvise" and be a bit fast-and-loose is unfortunately a necessary ingredient in how they currently work.
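For anyone unfamiliar with what "improvise" means mechanically: decoders typically sample the next token from a temperature-scaled distribution over logits rather than always picking the most likely token. Here's a toy sketch (not any particular model's actual decoder, just the standard softmax-with-temperature idea):

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=random):
    """Sample a token index from raw logits after temperature scaling.

    temperature == 0 means greedy decoding (always argmax, no improvisation);
    higher temperatures flatten the distribution, making less likely
    tokens (including wrong ones) more probable.
    """
    if temperature <= 0:
        # Greedy: deterministic, always the highest-logit token.
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max before exp for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling from the resulting distribution.
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1
```

At temperature 0 you get the same answer every time; crank it up and the model starts "guessing," which is exactly where both creativity and confabulation come from.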


