> does anyone know of ML directions that could add any kind of factual confidence level to ChatGPT and similar?
Yes. It's a very active area of research. For example:
Discovering Latent Knowledge in Language Models Without Supervision (https://arxiv.org/abs/2212.03827) shows an unsupervised approach for probing an LLM to discover what it internally treats as facts (first sketch below).
Locating and Editing Factual Associations in GPT (https://arxiv.org/pdf/2202.05262.pdf) shows how to locate where a fact is stored in an LLM and edit it directly (second sketch below).
Language Models as Knowledge Bases? (https://aclanthology.org/D19-1250.pdf) is slightly older work exploring how well LLMs themselves store factual information (third sketch below).
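To make the first paper concrete: the core idea (Contrast-Consistent Search) is to train an unsupervised probe on hidden states so that a statement and its negation get consistent, confident truth probabilities. Here's a minimal sketch of that loss; the hidden states are random placeholders (in practice you'd extract them from a real LLM for paired "X is true" / "X is false" prompts), and the dimensions are assumptions, not values from the paper.

```python
import torch

hidden_dim = 768   # assumed hidden size
n_pairs = 256      # placeholder dataset size
h_pos = torch.randn(n_pairs, hidden_dim)  # hidden states for affirmative statements
h_neg = torch.randn(n_pairs, hidden_dim)  # hidden states for the negated statements

# Probe mapping a hidden state to P(statement is true).
probe = torch.nn.Sequential(torch.nn.Linear(hidden_dim, 1), torch.nn.Sigmoid())
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)

for step in range(1000):
    p_pos = probe(h_pos).squeeze(-1)
    p_neg = probe(h_neg).squeeze(-1)
    # Consistency: P(true) for a statement should equal 1 - P(true) for its negation.
    consistency = (p_pos - (1 - p_neg)).pow(2).mean()
    # Confidence: penalize the degenerate solution where both sit at 0.5.
    confidence = torch.min(p_pos, p_neg).pow(2).mean()
    loss = consistency + confidence
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The probe's output then acts as exactly the kind of factual-confidence signal the question asks about, without ever training on labeled true/false data.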
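For the second paper, the real method (ROME) first uses causal tracing to find the MLP layer responsible for a fact; the editing step is then a closed-form rank-one update that rewrites one key-to-value association in that layer's weights. This is a heavily simplified sketch of just that update, with synthetic tensors and an identity key covariance standing in for the statistics the paper estimates from data:

```python
import torch

d_in, d_out = 768, 768
W = torch.randn(d_out, d_in) / d_in ** 0.5  # stand-in for the located MLP projection
C = torch.eye(d_in)                         # assumed key covariance (identity here)

k_star = torch.randn(d_in)    # key vector representing the subject (e.g. a lookup of "Eiffel Tower")
v_star = torch.randn(d_out)   # desired value vector encoding the new fact

Cinv_k = torch.linalg.solve(C, k_star)  # C^{-1} k*
residual = v_star - W @ k_star          # what the layer currently gets wrong
# Rank-one update: leaves other keys roughly alone, maps k* to v* exactly.
W_edited = W + torch.outer(residual, Cinv_k) / (k_star @ Cinv_k)

assert torch.allclose(W_edited @ k_star, v_star, atol=1e-4)
```

The relevance to the question is the "locating" half: if facts live in identifiable weights, you can in principle ask how strongly a given association is encoded there.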
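And the third paper's LAMA-style probing is easy to try yourself: phrase a fact as a fill-in-the-blank query and read the model's token probabilities as a rough confidence signal. A quick sketch using the standard Hugging Face fill-mask pipeline (the model choice here is mine, not the paper's exact setup):

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-cased")
for pred in fill("The capital of France is [MASK].", top_k=3):
    # 'score' is the softmax probability of the predicted token, which
    # LAMA-style probing treats as a proxy for how strongly the model
    # "believes" the completed statement.
    print(f"{pred['token_str']!r}: p={pred['score']:.3f}")
```

The caveat, which is why this is still an active research area, is that these raw token probabilities are often poorly calibrated, so they're a starting point for a confidence level rather than one you can take at face value.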