
One workaround when using RAG, mentioned on a podcast I listened to, is to employ a second LLM agent to check the work of the first. The verifier agent requires the first LLM to cite sources for its claims, then tries to locate those sources and confirm they actually support the answer, flagging the response as a likely hallucination if they don't.
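A minimal sketch of that generate-then-verify pattern, assuming the OpenAI Python client. The model name, prompts, and the `retrieve` helper are illustrative stand-ins, not anything the comment specifies:

```python
# Generate-then-verify: a first LLM answers using retrieved passages and must
# cite them; a second LLM checks that each cited passage exists and supports
# the claim it is attached to.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def retrieve(question: str) -> dict[str, str]:
    """Hypothetical retriever: returns {source_id: passage}.
    A real system would query a vector store; stubbed here for the sketch."""
    return {
        "doc1": "The borrow checker enforces aliasing rules at compile time.",
        "doc2": "Garbage-collected languages reclaim memory at runtime.",
    }

def format_context(sources: dict[str, str]) -> str:
    return "\n".join(f"[{sid}] {text}" for sid, text in sources.items())

def generate_answer(question: str, sources: dict[str, str]) -> str:
    """First LLM: answer only from the passages, citing each claim."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[
            {"role": "system",
             "content": "Answer using only the passages provided. "
                        "Cite each claim with its [source_id]."},
            {"role": "user",
             "content": f"Passages:\n{format_context(sources)}\n\n"
                        f"Question: {question}"},
        ],
    )
    return resp.choices[0].message.content

def verify_answer(answer: str, sources: dict[str, str]) -> str:
    """Second LLM: confirm every cited passage exists and supports its claim."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[
            {"role": "system",
             "content": "You are a fact checker. For each cited claim in the "
                        "answer, verify the cited passage exists and supports "
                        "it. Reply SUPPORTED, or list the unsupported claims."},
            {"role": "user",
             "content": f"Passages:\n{format_context(sources)}\n\n"
                        f"Answer:\n{answer}"},
        ],
    )
    return resp.choices[0].message.content

question = "Is memory safety checked at compile time?"
sources = retrieve(question)
answer = generate_answer(question, sources)
print(verify_answer(answer, sources))
```

The key design point is that the verifier never trusts the first model's prose; it only checks citations against the same retrieved passages, so a fabricated or misattributed source is caught mechanically.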


