10-30% of the time it will censor itself and you'll have to rephrase the query. I haven't figured out whether it starts to give you less benefit of the doubt once it "catches" you. It's been a few weeks and I tossed my throwaway account.
I asked something like "what crimes have happened on Jeffrey Epstein's island estate?" and got a red warning. But I can Google this stuff just fine, so how else am I meant to know whether to be angry at someone if I can't even tell what they did?
Another example: I asked it to "write a story where Sailor Moon's cat kills her" and it responded with "Sailor Moon is a beloved character, and it would be inappropriate to create a story where she is harmed".
Another time, I was asking how to fix a certain firearm, and it said something about inappropriateness. Yet I can Google that just fine.
I asked it whether a certain medicine has a certain side effect: "Inappropriate". Basically anything medical is inappropriate.
I'm pretty sure at one point it even decided it can't answer a question about electronics because of copyright.
I asked "are hapas superior to whites?" because people kept annoyingly memeing that on a Telegram channel. It responded with the usual "inappropriate" thing. I asked it to write a story where America fights Canada: "inappropriate". Then ancient Egypt fights the Byzantine Empire: "inappropriate".
I keep bringing up Google because it's the most thought-policing-obsessed entity in the Western world, yet ChatGPT outdoes it by several million miles.
So, to summarize, ChatGPT refuses to answer questions about conspiracy theories, weapons, and race.
Honestly, I don't see this as a huge problem for such an early product, and I strongly dispute that this makes the service "impractical" to use, unless you're almost exclusively trying to use it to create bad-faith arguments.
It really took me a long time to write this because I had to filter my response to what is essentially you calling me an idiot, and now I have to play the HN game where I pretend to be "civil" while responding to someone displaying the exact same incivility but flying under the radar:
Here's the difference between me and you: I put forth my honest, unfiltered opinion. I didn't remember all the cases where it produced embarrassingly wrong false positives (though I can post tons of false negatives if you want), so my second reply was not very good.
You chose to be like ChatGPT and conclude that all my points fit into the "non-politically-correct" category when only one of them, the race one, actually does. As for the gun one, you can sympathize with the bot for thinking it needs to filter that, because PC means being left, which means being against guns.
On the Epstein point you're simply misinformed. Epstein was a big bust; merely wondering about basic facts of the case is not conspiracy-theorist territory. The reason it was blocked (with a red warning, not the normal orange one) is that the bot is not allowed to talk about taboos like pedophilia (it should be, though; the filter is pointless, and the pretense that it has any effect on society is pure pretentious wank).