
On what basis? How would you write it? Why don't existing laws cover this? Is there a law that covers incorrect information on Wikipedia? Or in a search result?



> Why don't existing laws cover this?

Machine-generated lies have only recently become consistently convincing enough that they create these types of problems.

In fact, that's the major innovation of ChatGPT: it's not that it creates "good" text, it's that it creates incredibly convincing lies. It's a scalable version of a Wikipedia vandal.


The blog post claims that a human-generated video with incorrect information was the source of this. So, why are we blaming GPT for this incorrect information?

What's more, the blog post claims that GPT was trained on video material (which it wasn't), which is itself incorrect information, and apparently convincing enough to get people up in arms about another company's product.

This is a combination of issues: (a) people are using a language model as a knowledge base, (b) incorrect information exists out there on the net, and (c) people assume the knowledge base is correct without reading the documentation before signing up.

Alternatively, would you say that humans posting incorrect information that falsely represents the capabilities of another company's product should be similarly covered by law?


> So, why are we blaming GPT for this incorrect information?

I didn't blame ChatGPT for anything. I just said that its only function is to generate lies.

> Alternatively, would you say that humans posting incorrect information that falsely represents the capabilities of another company's product should be similarly covered by law?

Machines shouldn't have the same rights to speech as humans. A single company controlling ChatGPT can flood society with billions of convincing lies per hour. There's no point in any automation if it's not more efficient than a human, and ChatGPT is far more efficient than humans at putting this stuff out.

The same straw man is always used with ChatGPT: a human can lie, so why not let this machine lie?

You might as well say that a human can punch someone to death, so why should we outlaw people owning rocket launchers?

The scale and the purpose matter.


Its function is to transform and classify language. To do that, the model contains an emergent theory of the world. People are interpreting the information that can be extracted from it as truth - which isn't its function, and it has never been claimed to be. I would urge you to look at https://platform.openai.com/examples and identify which of those are "generating lies".

My question is: "Why is a program that is being misused by people held to a higher standard than people posting information on blogs?" Can I hold a person with a YouTube channel whose video has millions of views to the same standard? Does a presenter on an entertainment media channel with a reach of millions get a free pass to say untruthful things framed as opinion?

Scale and purpose matter - yes, indeed they do. We need to be clear about what GPT's purpose is (and it is not to generate lies) and what its scale is, and then compare it to other human-run endeavors of similar scale and purpose.

If we say "presenting verifiably incorrect information as true and accessible to more than 10,000 people is against the law", then let's say that. The source of the material isn't at issue - it doesn't matter whether it was created by a program or a human; the damage is to the person reading it, and to that reader the source doesn't matter.



