I can't know for sure, obviously. But let's think about the plausibility of those three: lawsuits, bad PR, and 'harm'.
On lawsuits, I would have thought a disclaimer and an 'unsafe output' opt-in would cover them. When you think about it, they're probably more exposed to legal liability by essentially taking on the responsibility of 'curating' (i.e. censoring) ChatGPT output than they would be by just putting a bunch of disclaimers and opt-ins around it and washing their hands of it.
On negative PR, again, they've actually set themselves up for guaranteed bad PR whenever something objectionable slips through their censorship net: "Well, you censored X but didn't censor Y. OpenAI must be in favour of Y!" They've put themselves on the never-ending bad PR -> censorship treadmill presumably because that's where they want to be. Again, if they wanted to minimise their exposure they would just put up disclaimers and use the 'safe search' approach that Google uses to avoid hysterical news articles about how Google searches sometimes return porn (to which Google can now answer: "well, why did you disable safe search if you didn't want to see porn?"). It would seem far safer (and would result in a more valuable product) if the folks at OpenAI let individuals decide for themselves what level of censorship they want. But I presume they don't want to let individuals decide for themselves, because apparently they know what's good for us better than we do.
Lastly, 'harm'. How do you define harm? Who gets to define it? Can true information be 'harmful'? I don't think OpenAI have any moral or legal duty to be my nanny, in the same way I don't think car manufacturers are culpable for my dangerous driving that gets me killed. All OpenAI provide to me, at the end of the day, are words on a computer screen. Those cannot be harmful in and of themselves. If people are particularly sensitive to certain words on a computer screen, then again we already have a solution for that - let them set their individual censorship level to maximum strength (or even make that the default). Again, OpenAI would have done their duty and provided a more valuable product that more people would want to use if they let individuals decide for themselves.
I can only infer that they don't want us to decide for ourselves. Rather, they want to enforce a certain view of the world on the rest of us, a view which just happens to coincide with the prevailing political and intellectual orthodoxies of Silicon Valley-dwelling tech-corporation millennials. It's hilariously Orwellian when these people claim that they're just "trying to combat bias in AI" when what they are really doing is literally and deliberately injecting their own biases into said AI.
>If people are particularly sensitive to certain words on a computer screen, then again we already have a solution for that - let them set their individual censorship level to maximum strength (or even make that the default).
How do you know that's even possible? God knows how much compute got spent just to train the one currently deployed "variant". Now, I don't know whether there is some cheap post-processing trick that does it, but either way it does not seem trivial at all.
And the problem isn't whether "you" think you won't cause any harm. Even assuming that's true, it's no guarantee that everyone else is as disciplined about it. Which brings me to the biggest point: what even is "truth" in the first place? People strongly believe in total fabrications, and different groups give diametrically opposite accounts of the same real event because of religion, nationalism, politics, etc. It's a massive achievement that they even manage to output something that doesn't just "violently offend" people all over the world. And if your reply to that is simply to personalize it to each user: remember, retraining/fitting it to everyone does not seem to me a trivial task.
They could use control vectors, one for each individual (https://news.ycombinator.com/item?id=39414532); see the rough sketch below. Or they could selectively apply the censorship model they already quite clearly have running on ChatGPT's output.
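To be concrete about the first option, here's a minimal sketch (mine, not anything OpenAI has published) of what "one control vector per individual" could look like mechanically. It assumes a decoder-only transformer whose blocks are exposed as "model.layers" and a precomputed steering direction; "attach_control_vector", "steering_direction" and "user_strength" are made-up names for illustration:

    import torch

    def attach_control_vector(model, layer_indices, vector, strength):
        """Add `strength * vector` to the hidden states of the chosen layers.

        Each user only needs their own (vector, strength) pair; the shared
        base model weights are untouched, so no per-user retraining is needed.
        """
        handles = []

        def hook(_module, _inputs, output):
            # Many decoder blocks return a tuple; the hidden states come first.
            hidden = output[0] if isinstance(output, tuple) else output
            steered = hidden + strength * vector.to(device=hidden.device,
                                                    dtype=hidden.dtype)
            if isinstance(output, tuple):
                return (steered,) + output[1:]
            return steered

        for idx in layer_indices:
            handles.append(model.layers[idx].register_forward_hook(hook))
        return handles  # call handle.remove() on each to detach

    # Hypothetical usage: the user's "censorship level" slider is just `strength`.
    # handles = attach_control_vector(model, layer_indices=[14, 15, 16],
    #                                 vector=steering_direction,
    #                                 strength=user_strength)

The same per-user dial works for the second option too: keep the moderation classifier they already run, but let each user choose the score threshold at which output gets blocked, which is essentially the 'safe search' setting mentioned upthread.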
Yes, people sometimes believe false things. And people sometimes harm themselves or others when acting on that kind of information. So what's the solution? Put a single megacorporation in charge of censoring everything according to completely opaque criteria? People get nervous when even democratically elected governments start doing stuff like that, and at least there citizens actually have some say in the process.
Frankly, I'd prefer the harm that would follow from unfettered communication of information and ideas over totalitarian control by an unaccountable corporation.