> reasoning models know when they are close to hallucinating because they are lacking context or understanding and know that they could solve this with a question

You've just described AGI.

If this were possible, you could create an MCP server that maintains a continually updated FAQ of everything the model doesn't know.

Over time it would learn everything.
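Roughly, such a server could expose two tools: one the model calls when it knows it's missing something, and one that records the answer once a human (or another source) supplies it. A minimal sketch, assuming the MCP Python SDK's FastMCP helper and a made-up faq.json file for storage, not a real implementation:

    # Sketch of an "FAQ of unknowns" MCP server (assumes `pip install mcp`).
    import json
    from pathlib import Path
    from mcp.server.fastmcp import FastMCP

    FAQ_PATH = Path("faq.json")  # hypothetical on-disk store
    mcp = FastMCP("model-faq")

    def _load() -> dict[str, str]:
        # Read the stored question/answer pairs, or start empty.
        return json.loads(FAQ_PATH.read_text()) if FAQ_PATH.exists() else {}

    @mcp.tool()
    def lookup(question: str) -> str:
        """Return a previously recorded answer, or flag the gap."""
        return _load().get(question, "UNKNOWN: no recorded answer yet")

    @mcp.tool()
    def record(question: str, answer: str) -> str:
        """Store an answer so future lookups of this question succeed."""
        faq = _load()
        faq[question] = answer
        FAQ_PATH.write_text(json.dumps(faq, indent=2))
        return "recorded"

    if __name__ == "__main__":
        mcp.run()  # stdio transport by default

The hard part isn't the plumbing, it's the premise: the model has to reliably notice that it's about to hallucinate and call lookup instead of answering, which is exactly the capability being questioned above.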



Unless there is as yet insufficient data for a meaningful answer.



