
Reasoning models know when they are close to hallucinating because they lack context or understanding, and they know they could resolve this by asking a question.

This is a streamlined implementation of an internally scrapped-together tool that I decided to open-source for people to either use or build off of.



> Reasoning models know when they are close to hallucinating because they lack context or understanding, and they know they could resolve this by asking a question.

I’m interested. Where can I read more about this?


> Reasoning models know when they are close to hallucinating because they lack context or understanding, and they know they could resolve this by asking a question.

You've just described AGI.

If this were possible, you could create an MCP server that maintains a continually updated FAQ of everything the model doesn't know.

Over time it would learn everything.
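As a rough sketch of what that might look like (assuming the official `mcp` Python SDK's FastMCP interface and a plain JSON file as the FAQ store; the tool names `lookup_faq` and `record_answer` and the `faq.json` path are hypothetical, not anything from the linked project):

    # Hypothetical MCP server exposing a persistent FAQ of things the model
    # previously couldn't answer. Assumes the official `mcp` Python SDK.
    import json
    from pathlib import Path

    from mcp.server.fastmcp import FastMCP

    FAQ_PATH = Path("faq.json")  # hypothetical storage location
    mcp = FastMCP("faq-server")


    def _load() -> dict[str, str]:
        # Read the FAQ store, or start empty if it doesn't exist yet.
        return json.loads(FAQ_PATH.read_text()) if FAQ_PATH.exists() else {}


    @mcp.tool()
    def lookup_faq(question: str) -> str:
        """Return a stored answer for a question the model couldn't answer before."""
        return _load().get(question, "No stored answer; ask the user and record it.")


    @mcp.tool()
    def record_answer(question: str, answer: str) -> str:
        """Store the user's answer so future sessions don't have to ask again."""
        faq = _load()
        faq[question] = answer
        FAQ_PATH.write_text(json.dumps(faq, indent=2))
        return "stored"


    if __name__ == "__main__":
        mcp.run()

The hard part, of course, is the premise: the model has to reliably notice that it doesn't know something and call the lookup/record tools instead of answering anyway.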


Unless there is as yet insufficient data for a meaningful answer.




