The article explains the problems: AI proponents want to use these systems to censor, and that leads to major companies like Microsoft slandering people. Microsoft should be worried about that.
Because you quoted "Microsoft’s Bing, which is powered by GPT-4" (from TFA), and GPT-4 is designed to generate copy that isn't meant to be read as entirely true or entirely false; it's simply meant to be read as humanlike.
I would put it on the chrome, but putting it on the content would solve the problem too.
Personally, I would be happier if they stopped their submarine marketing claiming the results are reliable. It's tiresome. But I don't care much either way; I don't own the brand they are tarnishing and don't personally know anybody attacked by it yet. It's just mildly annoying to see them lying all over the web.
Maybe, but I’m not sure. If I write an article and say up top that it may contain made-up stuff, and then further down I say, “hunter2_ likes to have sex with walruses, it’s a fact. Here’s a link to a Washington Post article with all the gory details,” it’s not clear that pointing to my disclaimer would shield me from liability for harm that came to you from the walrus allegation, if people believed and acted on it.
Here, maybe this article will help you feel more sure. What you're describing is parody or satire. At least in the US, it's a very protected form of speech.
And here's their actual brief. It was sent to the actual Supreme Court, despite being funny, something nobody on the court has ever been nor appreciated.
But Bing doesn’t present its results as parody or satire, and they don’t intrinsically appear to be such. They’re clearly taken as factual by the public, which is the entire problem. So how is this relevant?
> funny, something nobody on the court has ever been nor appreciated.
I agree that "you're talking to an algorithm that isn't capable of exclusively telling the truth, so your results may vary" isn't QUITE parody/satire, but IDK that I can take "everyone believes ChatGPT is always telling the truth about everything" as a good-faith read either, and parody felt like the closest fit, as IANAL.
Intent is the cornerstone of slander law in the US, and you would need a LOT of discovery to prove that the devs are weighting the scales in favor of bad outcomes for some people (and not just, like, end users feeding information into the AI).
TL;DR: Everyone's stance on this specific issue seems to depend on whether you believe people think these AI chatbots exclusively tell them the truth, and I just don't buy that worldview (but hey, I'm an optimist who believes humanity has a chance, so wtf do I know?)