I'm still not understanding the point though, 6 hours later.
Why can't it just be a tool for assistance that is not legally binding?
Also, throughout this year I've thought about these problems, and it's always seemed strange to me how many issues people have with "hallucinations". I've imagined a chatbot exactly like the one Chevy used, and how useful it would be to have something like that myself for finding products.
To me the expectations of this having to be legally binding, etc just seem misguided.
AI tools increase my productivity enormously. People also make things up and lie, and it's even harder to tell when they do, since everyone is different and everyone lies differently.
>To me the expectations of this having to be legally binding, etc just seem misguided.
I think you're getting my point confused with a tangentially related one. Your point may be "chatbots shouldn't be legally binding" and I would tend to agree. But my point was that simply throwing a disclaimer on it may not be the best way to get there.
Consider if poison control uses a chatbot to answer phone calls and give advice. They can't waive their responsibility by just throwing a disclaimer on it; that doesn't meet the current strict liability standards regarding what kind of duty is required. There is such a thing in law as "duty creep," and there may be liability if a jury finds it a reasonable expectation that a chatbot provides accurate answers. To my point, the duty is going to be largely context-dependent, which means broad-brushed, superficial "solutions" probably aren't sufficient.
I used that analogy because it’s painfully clear how it can go off the rails. The common thread is that legality isn’t simply waived in all cases. Legality is determined by reasonableness and, in some cases, by an expectation of duty. I don’t believe the Chevy example constitutes a contract but not for the reasons you’ve presented. Thinking you can just say “lol nothing here is binding but thanks for the money!” without understanding broader context is indicative of a cavalier attitude and superficial understanding.