
I wouldn't want that. They can't be held liable.



The airline in Canada was held liable for what its support bot said... which I think is a great precedent, and would/should apply to banks too.


Generally when I see decisions like that, I just wonder how long before the companies either come up with some good case law to defend against such a suit in court, or get the legislatures to release them from liability for what their chatbots say.

Maybe I'm just a liiiiittle bit jaded, though.


“Great” precedent - sorta. You can make an LLM say nearly anything, even with safety rails, as jailbreaks prove. Hardly something liability can be pinned on.

Great in principle, perhaps, but it won't stand the test of time.


The stupidity of the LLM is completely irrelevant. If you hire a salesperson to negotiate deals and they offer a customer a massive discount you wouldn't normally approve, you're still liable for that discount. If you're dumb enough to let a bot that you cannot control negotiate with customers for you, you are liable for what it agrees to so long as it is within its apparent authority. That is, the salesperson bot does not have apparent authority to sell company equity, for example.


YES. This is exactly the point.

The institution using the LLM is exactly as responsible for its actions as when they hire a sales/support rep.

If either goes 'off the ranch', the company faces the consequences. And if they are smart, they will take corrective action so that the good off-the-ranch excursions are encouraged and the bad excursions prevented. If they don't, they may go out of business. And yes, just as a rogue salesperson who can't or won't improve gets fired, they can shitcan the LLM.

(The only difference might be some recourse against the provider of the LLM, depending on their contract)


>If either goes 'off the ranch', the company faces the consequences.

Does it though? Like if a crazy salesperson promises to deliver a million lambos for $1, is your expectation that this is binding and Lamborghini goes out of business?


The burden of proving apparent authority and reasonable consideration grows tougher as deals get larger and take place between merchants. Deals between merchants also carry different assumptions about how contracts get broken.


I doubt it plays out as simply as you describe, and there's a million ways it can play out depending on jurisdiction, etc.

Likely, someone signs the contract, comes to collect, Lambo says "Nonsense!", the buyer says "It's binding!", and then the mess starts. If everyone's sensible, they come to a reasonable agreement. If not, it's "see you in court!", everyone incurs the costs of a trial, and the judge & jury impose a reasonable settlement (values of "reasonable" can span quite a wide range and are often surprising). Lambo also likely sues and/or prosecutes the rogue salesperson, partly to recover some token amount and mostly to make an example so no idiot tries that again. The result is a costly mess for everyone, likely preventing it from repeating (e.g., Lambo puts in more checks & balances), and life goes on. So, yes, consequences, not trivial consequences, and we hope enough that corrective action is taken.

Just what we'd want done for LLMs also.


I think it'll come down to some "would a reasonable person believe this" test.

Like if you ask an official company chatbot a question and it blatantly lies to you, I'd say you have some recourse. For example, you ask about the return policy and the bot says 30 days when it's actually 15.

Versus looking at the chat log and seeing the customer tell the chatbot to repeat that the company is going to give them $10,000, and then suing said company to get the $10k.


I don't doubt something like that will come along, but I don't think it will help.

There will be more subtle ways to get the answer. Ask for a refund, ask if that's the highest refund they can give, ask if that's really the highest it can go, etc.

Once someone finds a way to trigger a response like that, it will spread like wildfire on the internet.

I fundamentally don't think it's a good idea to make an LLM an agent of the company. They lack the logical reasoning skills necessary to determine whether actions are a good idea or not.


Personally I think the bar will be very high. If you have a bot that is explicitly there to sell you something, you can probably demand that any sale it makes of that thing is legally binding, even if you're obviously gaming it, because the bot does have authority to negotiate a deal. It's not supposed to sell it for 90% off, but you'd have a very tough time arguing it lacks the apparent authority to do so.

The 10k example fails under other basic case law. There's no consideration on your end, so it's not a binding agreement. Nor does that sales rep bot likely have authority to be purchasing things from you (of course, that could change based on context).


The apparent authority part means that the agent has authority to negotiate on the company's behalf.

(In general, apparent authority is enough to bind principals; I won't get into the complex legal distinctions between apparent and actual authority and when each applies.)


They absolutely can. That is how apparent authority works.



