If I follow your design correctly, couldn't you also address hallucinations with a "fact checking" LLM (connected to search) that corrects the output of the core LLM? You would take the output of the core LLM and send it to the fact checker with a prompt like "evaluate this output for any potentially false statements, and perform an internet search to validate and correct them."
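
A minimal sketch of the verify-then-correct loop described above, in Python. The helpers call_llm and web_search are hypothetical placeholders standing in for whatever model API and search API you actually use; the prompt wording is illustrative, not prescriptive.

    def call_llm(prompt: str) -> str:
        """Placeholder: send a prompt to an LLM and return its text response."""
        raise NotImplementedError("wire up your model API here")

    def web_search(query: str, max_results: int = 5) -> list[str]:
        """Placeholder: return text snippets from a search engine."""
        raise NotImplementedError("wire up your search API here")

    FACT_CHECK_PROMPT = """You are a fact checker. Below is a draft answer and
    search results relevant to its claims. Identify any statements the search
    results contradict or fail to support, and return a corrected version of
    the answer. If everything checks out, return the draft unchanged.

    Draft answer:
    {draft}

    Search results:
    {evidence}
    """

    def fact_checked_answer(user_question: str) -> str:
        # 1. Core LLM produces a draft answer.
        draft = call_llm(user_question)

        # 2. Pull individual factual claims out of the draft so each can be searched.
        claims = call_llm(
            "List the factual claims in the following text, one per line:\n" + draft
        ).splitlines()

        # 3. Gather evidence for each claim from the web.
        evidence = []
        for claim in claims:
            if claim.strip():
                evidence.extend(web_search(claim.strip()))

        # 4. Fact-checking LLM revises the draft against the retrieved evidence.
        return call_llm(
            FACT_CHECK_PROMPT.format(draft=draft, evidence="\n".join(evidence))
        )

Splitting out a claim-extraction step keeps the search queries focused on checkable statements; a simpler variant could just search the draft's key terms directly.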
