I follow your design, but couldn't you also address hallucinations with a "fact-checking" LLM (connected to search) that corrects the output of the core LLM? You would take the output of the core LLM and send it to the fact checker with a prompt like "evaluate this output for any potentially false statements, and perform an internet search to validate and correct them".
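
Something like this rough sketch is what I have in mind. `call_llm` and `web_search` are hypothetical stand-ins for whatever model API and search backend you'd actually wire in; it's only meant to show the two-stage shape, not a real implementation:

```python
# Rough sketch of the core-LLM -> fact-checker pipeline described above.
# call_llm(prompt) and web_search(query) are hypothetical helpers standing
# in for whatever model API and search tool you actually use.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model API here")

def web_search(query: str) -> str:
    raise NotImplementedError("plug in your search backend here")

def generate_with_fact_check(user_prompt: str) -> str:
    # Stage 1: the core LLM produces a draft answer.
    draft = call_llm(user_prompt)

    # Stage 2: the fact-checker LLM lists the checkable claims in the draft.
    claims_prompt = (
        "List the factual claims in the following text, one per line:\n\n" + draft
    )
    claims = [c.strip() for c in call_llm(claims_prompt).splitlines() if c.strip()]

    # Stage 3: gather search evidence for each claim.
    evidence = "\n\n".join(
        f"Claim: {claim}\nSearch results: {web_search(claim)}" for claim in claims
    )

    # Stage 4: the fact-checker rewrites the draft, correcting anything
    # the evidence contradicts.
    correction_prompt = (
        "Evaluate the draft below for false statements, using the search "
        "results as evidence, and return a corrected version.\n\n"
        f"Draft:\n{draft}\n\nEvidence:\n{evidence}"
    )
    return call_llm(correction_prompt)
```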