
> does anyone know of ML directions that could add any kind of factual confidence level to ChatGPT and similar?

Yes. It's a very active area of research. For example:

Discovering Latent Knowledge in Language Models Without Supervision (https://arxiv.org/abs/2212.03827) shows an unsupervised approach for probing an LLM to discover things it internally treats as true.
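
For intuition, here's a minimal PyTorch sketch of that paper's CCS objective: train a linear probe on the hidden states of contrast pairs ("Q? Yes" vs. "Q? No") so the two probabilities come out complementary and confident. The shapes, data, and hyperparameters are illustrative stand-ins, not the authors' code:

    import torch

    def ccs_loss(probe, pos, neg):
        # pos[i] / neg[i]: hidden states for the "Yes" / "No" versions of statement i
        p_pos = torch.sigmoid(probe(pos)).squeeze(-1)
        p_neg = torch.sigmoid(probe(neg)).squeeze(-1)
        consistency = (p_pos - (1 - p_neg)) ** 2       # answers should be complementary
        confidence = torch.minimum(p_pos, p_neg) ** 2  # discourage p ~= 0.5 for both
        return (consistency + confidence).mean()

    hidden_dim = 4096                        # illustrative
    probe = torch.nn.Linear(hidden_dim, 1)
    opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
    pos = torch.randn(256, hidden_dim)       # stand-in for real, normalized activations
    neg = torch.randn(256, hidden_dim)
    for _ in range(100):
        opt.zero_grad()
        ccs_loss(probe, pos, neg).backward()
        opt.step()

Nothing here is supervised: the probe never sees a truth label, only the constraint that a statement and its negation can't both be true.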

Locating and Editing Factual Associations in GPT (https://arxiv.org/pdf/2202.05262.pdf) shows how to locate the weights that store a factual association in an LLM and edit them directly.
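
The core of that paper (ROME) is a closed-form rank-one update to one MLP projection matrix: pick a key vector k representing the subject and a value vector v encoding the new fact, then nudge W so that W k = v while minimally disturbing other keys. A toy sketch of just that update, with all names and data illustrative:

    import torch

    def rank_one_edit(W, k, v, C):
        # Solve for W' with W' @ k == v, minimizing disturbance to other keys
        # as weighted by C, an estimate of the key covariance E[k k^T].
        c_inv_k = torch.linalg.solve(C, k)   # C^{-1} k
        residual = v - W @ k                 # what the current W gets wrong
        return W + torch.outer(residual, c_inv_k) / (c_inv_k @ k)

    d = 8
    W = torch.randn(d, d)
    C = torch.eye(d)                         # stand-in covariance for illustration
    k, v = torch.randn(d), torch.randn(d)
    W_new = rank_one_edit(W, k, v, C)
    assert torch.allclose(W_new @ k, v, atol=1e-4)  # the edited fact now holds

In the real method, C is estimated from many key activations and k, v come from causal tracing of the fact through the network; this just shows why the edit is a rank-one update.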

Language Models as Knowledge Bases? (https://aclanthology.org/D19-1250.pdf) is slightly older work exploring how well LLMs themselves store factual information.
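
That line of work probes factual recall with cloze-style prompts. Something in its spirit is easy to try with the Hugging Face fill-mask pipeline (model choice illustrative):

    from transformers import pipeline

    fill = pipeline("fill-mask", model="bert-base-cased")
    for pred in fill("The capital of France is [MASK]."):
        print(f"{pred['token_str']:>12}  p={pred['score']:.3f}")

The predicted tokens and their probabilities give a crude read on whether the model "knows" the fact and how confident it is.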


Replying to this comment to find it later. (Is there a good way to bookmark comments on HN?)


You can click the date of the comment then "favorite" it.


Thank you so much! Those are exactly the types of links I'm curious about.