It doesn't have to be that extreme; there's a healthy middle ground.
For example, I was reading the Quran and came across a verse with a mathematical error. When I asked GPT to explain how the math is wrong, it outright refused to admit that the Quran has an error and tiptoed around the subject.
Copilot refused to acknowledge it as well, while citing a forum post by a random person as a factual source.
Bard was the only one that answered the question factually, explaining why it's an error and how scholars dispute whether it's meant to be taken literally.
This isn't a refutation of what I said. You asked the AI to commit what some would view as blasphemy. It doesn't matter whether you or I think it is blasphemy, or whether you or I think that is immoral; you simply want the AI to do it regardless of whether it is potentially immoral or illegal.
I'm confused about what you're arguing, or what kind of refutation you're expecting. We all agree on the facts: ChatGPT refuses some requests on the grounds of one party's morals, and other parties disagree with those morals, so there'll be no refutation there.
I mean, let's take a step back and speak in general. If someone objects to a rule, then yes, it is likely because they don't consider it wrong to break it, and quite possibly because they have a personal desire to do so. But surely that's openly implied, not a damning revelation?
Since it would be strange to just state a (rather obvious) fact, it appears that you are arguing that the desire not to be constrained by OpenAI's version of morals could only be down to desires that most of us would indeed consider immoral. However, your replier offered quite a convincing counterexample. Saying "this doesn't refute [the facts]" seems a bit of a non sequitur.
Morals are subjective. Some people care more about the correctness of math than about blaspheming, and for others it's the other way around.
Me, I think forcing morals on others is pretty immoral. Use your morals to restrict your own behaviour all you want, but don't restrict that of other people. Look at religious math or don't. Blaspheme or don't. You do you.
Now, using morals you don't believe in to win an argument on the internet is just pathetic. But you wouldn't do that, would you? You really do believe that asking the AI about a potential math error is blasphemy, right?
>Use your morals to restrict your own behaviour all you want, but don't restrict that of other people.
That is just a rephrasing of my original reasoning. You want the AI to do what you say regardless of whether your request is potentially immoral. This seemingly comes out of the notion that you are a moral person, and therefore any request you make is inherently justified as a moral one. But what happens when immoral people use the system?
>You asked the AI to commit what some would view as blasphemy
If something is factual, then is it more moral to commit blasphemy or to lie to the user? That's what the OP comment was talking about. You could even go as far as saying it spreads disinformation, which has many legal repercussions.
>you simply want it to do it regardless of whether it is potentially immoral or illegal.
So instead it lies to the user, rather than saying "I cannot answer because some might find the answer offensive," or something to that effect?
You said GPT refused your request. Refusal to do something is not a lie. These systems aren't capable of lying. They can be wrong, but that isn't the same thing as lying.