The main problem is that GPT-3 has the free will of a creative but delusional person: it will distort facts however it pleases, because its objective rewards plausible-sounding text, not accuracy.
Hmm, can you turn the GPT-3 API into a fact checker by priming it with (statement, true/false) tuples mixing correct and incorrect statements? And then maybe prime it to add explanations and references to its responses as well.
Yes, but it's only reliable for things from before its training cutoff, a few months prior to its creation; anything after that is just “sounds plausible to GPT-3”. Likewise, its explanations aren't very good unless it has solid domain knowledge of the subject, and if it doesn't know something, it won't say so unless you've primed it with not-knowing being an option.
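Here's a minimal sketch of what that priming might look like, using the era-appropriate `openai.Completion` interface. The example statements, labels, engine choice, and the Unknown verdict (covering the not-knowing option mentioned above) are all illustrative assumptions, not a tested recipe.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # assumed to be set by the caller

# Few-shot prompt: (statement, verdict, explanation) examples, including
# an "Unknown" example so the model learns that not-knowing is an option.
FEW_SHOT_PROMPT = """\
Statement: The Eiffel Tower is in Berlin.
Verdict: False
Explanation: The Eiffel Tower is in Paris, France.

Statement: Water boils at 100 degrees Celsius at sea level.
Verdict: True
Explanation: At standard atmospheric pressure, water boils at 100 C.

Statement: The 2024 Summer Olympics were held in Sydney.
Verdict: Unknown
Explanation: This is after my training data; I can't verify it.

Statement: {statement}
Verdict:"""

def fact_check(statement: str) -> str:
    """Ask GPT-3 for a True/False/Unknown verdict plus an explanation."""
    response = openai.Completion.create(
        engine="davinci",      # GPT-3 base model of that era
        prompt=FEW_SHOT_PROMPT.format(statement=statement),
        max_tokens=60,
        temperature=0.0,       # low temperature: we want checking, not creativity
        stop=["\n\n"],         # stop after this verdict/explanation pair
    )
    return response.choices[0].text.strip()

print(fact_check("The Great Wall of China is visible from the Moon."))
```

Even with this priming, the caveats above still apply: the verdicts are only as good as the model's domain knowledge, and any "references" it adds should be verified rather than trusted.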