reasoning models know when they are close to hallucinating because they lack context or understanding, and they know they could resolve this by asking a question.
this is a streamlined implementation of an internally scrapped-together tool that I decided to open-source for people to either use or build off of.
> reasoning models know when they are close to hallucinating because they lack context or understanding, and they know they could resolve this by asking a question.
You've just described AGI.
If this were possible, you could create an MCP server that keeps a continually updated FAQ of everything the model doesn't know.
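Something like the sketch below would do it, assuming the official Python MCP SDK and its FastMCP helper; the in-memory `faq` dict and the two tool names are hypothetical, just to show the shape of the idea:

```python
# Minimal sketch of a "things the model doesn't know" FAQ server,
# assuming the official Python MCP SDK (`pip install mcp`) and FastMCP.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("faq-server")

# Hypothetical in-memory store of human-provided answers, keyed by question.
faq: dict[str, str] = {}

@mcp.tool()
def ask_faq(question: str) -> str:
    """Look up a previously recorded answer before guessing."""
    return faq.get(question, "unknown -- ask the user and record the answer")

@mcp.tool()
def record_answer(question: str, answer: str) -> str:
    """Store a human-provided answer so future runs don't re-ask the question."""
    faq[question] = answer
    return "recorded"

if __name__ == "__main__":
    # Runs over stdio by default so an MCP client can attach to it.
    mcp.run()
```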