
ChatGPT very convincingly recommends us for a service we don't provide.

Dozens of people are signing up to our site every day, then getting frustrated when "it doesn't work".

Please do NOT trust the nonsense ChatGPT spits out.




A new market opportunity for your company?


> This is not a service we provide. It is not a service we have ever provided, nor a service we have any plans to provide. Indeed, it is not a service we are technically capable of providing.


Heh: an unforeseen future where instead of making the AI more reliable, we instead change reality to accommodate its mistakes.


> It is not a service we have ever provided, nor a service we have any plans to provide. Indeed, it is not a service we are technically capable of providing.


So, companies start pivoting based on the BS these LLMs spout, and governments should start writing laws?


Great idea! Governments should start writing laws using LLMs.


If an AI is just a large language model without any ethical reasoning, what is a lawyer but the same thing with a smaller language model?


What do you mean? Lawyers have extensive ethical obligations.


On what basis? How would you write it? Why don't existing laws cover this? Is there a law that covers incorrect information in Wikipedia? Or in a search result?


> Why don't existing laws cover this?

Machine-generated lies have only recently become consistently convincing enough that they create these types of problems.

In fact, that's the major innovation of ChatGPT: it's not that it creates "good" text, it's that it creates incredibly convincing lies. It's a scalable version of a Wikipedia vandal.


The blog post claims that a human generated video with incorrect information was the source of this. So, why are we blaming GPT for this incorrect information?

What's more, the blog post claims that GPT was trained on video material (which it wasn't), which is itself incorrect information, and which is apparently convincing enough to cause people to get up in arms about the product of yet another company.

The real problem is the combination of (a) people using a language model as a knowledge base, (b) incorrect information existing out on the net, and (c) people assuming that the knowledge base is correct and not reading the documentation before signing up.

Alternatively, would you say that humans posting information that is incorrect and falsely represents the capabilities of another company's product should be similarly covered in laws?


> So, why are we blaming GPT for this incorrect information?

I didn't blame ChatGPT for anything. I just said that its only function is to generate lies.

> Alternatively, would you say that humans posting information that is incorrect and falsely represents the capabilities of another company's product should be similarly covered in laws?

Machines shouldn't have the same rights to speech as humans. A single company controlling ChatGPT can flood society with billions of convincing lies per hour. There's no point in any automation if it's not more efficient than a human is, and ChatGPT is far more efficient than humans at putting this stuff out.

The same straw man is always used with ChatGPT: a human can lie, so why not let this machine lie?

You might as well say that a human can punch someone to death, so why should we outlaw people owning rocket launchers?

The scale and purpose matters.


Its function is to transform and classify language. To do this, there is an emergent theory of the world contained within the model. People are interpreting the information that can be extracted from it as truth - which isn't its function and has never been claimed to be. I would urge you to look at https://platform.openai.com/examples and identify which of those are "generating lies".

My question is "why is a program that is being misused by people held to a higher standard than people posting information on blogs?" Can I hold a person who has a YouTube channel with a video with millions of views to the same standard? Does a presenter on an entertainment media channel with a reach of millions get a free pass to present untruthful things as opinion?

Scale and purpose matters - yes, indeed it does. We need to be clear about what the purpose of GPT is (and it is not to generate lies) and what its scale is, and compare it to other human-run endeavors of similar scale and purpose.

If we say "presenting verifiably incorrect information as true and accessible to more than 10,000 people is against the law" then let's say that. The source of the material isn't at issue - it doesn't matter whether it was created by a program or a human; the damage is to the person reading it, and to them the source doesn't matter.


No, as the article mentions, there already seem to be a bunch of posts and videos claiming one can use this feature. GPT has just been trained on them; it didn't invent anything itself.

If this were a new market opportunity, just publishing a falsehood would do the same job.


this seems like a game-changing opportunity actually. I'd be down to buy the domain


have you been able to contact OpenAI about this? It sounds like they're actively adding load to your CS ops


I think the key thing is for the AI company to actually let the user know that this is a language model, and that the information it spits out should not be trusted. Obviously, Microsoft is not going to do that, as they are trying to market the new Bing as an information search engine.


OpenAI does its best to make it clear that it is just a language model, but what can you do when you have users who just instantly click "Agree, Agree, Next, Next, Agree"


clearly its best isn't good enough


what are they going to do? add custom logic? where does it stop?

the malady is that LLMs can't absorb ad-hoc operational changes to correct these kinds of errors at scale


They absolutely do add custom logic for a lot of stuff. This has the side effect of neutering the functionality in some areas just to chastise the user for any perceived improper use of services.


Well, we can argue such changes are necessary. Just like Google Search is required to remove/hide some search results (based on regional jurisdictions). Is that similar to censorship, or copyright law, or spreading fake information? I do see the counter-argument, too, where AI tools should just be tools and users should learn how they work ("don't believe everything this tool outputs").


They've already added custom logic to prevent their LLM from e.g. praising Nazis or whatever restrictions people are upset about -- seems it'd be easy to configure the software to exclude references to known unavailable services.
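For illustration, here's a minimal sketch of what such a post-generation blocklist could look like (the service names, message wording, and function below are invented for the example; this says nothing about how OpenAI's actual guardrails work):

    import re

    # Hypothetical list of services a company is known not to offer.
    UNAVAILABLE_SERVICES = [
        "bulk PDF conversion",
        "free API access",
    ]

    def annotate_unavailable_claims(model_output: str) -> str:
        """Append a caution if the output mentions a known-unavailable service."""
        for service in UNAVAILABLE_SERVICES:
            if re.search(re.escape(service), model_output, flags=re.IGNORECASE):
                return model_output + "\n\n[Note: this service may not actually be offered.]"
        return model_output

    print(annotate_unavailable_claims("Just use their bulk PDF conversion feature."))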



