More likely due to a lack of "good" data than the existence of "bad" data. ChatGPT is known for its ability to "hallucinate" answers to questions it wasn't trained on.
In fact, ChatGPT doesn't know anything about true and false. It just generates text that most closely resembles the text it has seen on similar subjects.
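The mechanism can be sketched with a toy model. This is not ChatGPT's actual architecture (which uses a transformer over subword tokens), just a minimal bigram illustration of "predict whatever most often came next in the training text" — note that nothing in the computation checks whether the output is true:

```python
from collections import Counter, defaultdict

# Hypothetical toy training text, for illustration only.
corpus = "the bond is covalent . the bond is ionic . the bond is covalent .".split()

# Count which token follows each token in the training data.
follows = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    follows[cur][nxt] += 1

def continue_text(start, n=3):
    """Greedily emit the statistically most likely continuation."""
    out = [start]
    for _ in range(n):
        candidates = follows[out[-1]].most_common(1)
        if not candidates:
            break
        out.append(candidates[0][0])
    return " ".join(out)

print(continue_text("the"))  # -> "the bond is covalent"
```

The model answers "covalent" only because that word followed "is" more often in its corpus, not because it knows anything about chemistry; with a different corpus it would assert "ionic" with the same confidence.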
E.g. ask it for the molecular description of anything. It'll start with something fundamental, like the formula (CH3N4 etc.), then describe the bonds. But the bonds will be a mishmash of many chemical descriptions thrown together, because similar questions had that kind of answer.
The worst part is that it blurts all this forth with perfect confidence. I liken it to a blowhard acquaintance who will make up crap about any technical subject they know a few words of, as if they were an expert. It's funny, except when somebody relies on it as truth.
I don't think GPT-3, at its heart, is an expert at anything except generating likely-looking text. There's no "superego" anywhere that audits the output for truthfulness, and certainly no logical understanding of what it's saying.
People have taken to asking ChatGPT to write entire trading scripts. When the scripts don't work, they post in chatrooms or forums asking "why doesn't this work?" without mentioning that ChatGPT wrote the code. People open the post, read a bit, and only after a minute or two of wasted time realize the script is complete nonsense.