What I was getting at is: legal protections are good and necessary and all, but people presumably try these things because they sometimes work, and that fact bothers me. The idea that current generative AI tech - even if it were purpose-built for this - could fight for you in court, or output legal briefs that hold up to scrutiny and don't require review by a human expert, seems laughable to me. Law is definitely not a suitable field for an agent that frequently "hallucinates" and never questions or second-guesses your requests. So much would have to go into such an AI system, beyond the actual prose generation, for it to be reliable that I certainly wouldn't a priori expect it to exist in 2024.
If so many people are willing to take the claim at face value, that suggests to me a widespread naivete about and lack of understanding of AI that really needs to be fixed.
Aside from AI-related stuff, GGP mentioned "monthly subscriptions for services that most people need in a one-off manner". It's amazing to me that anyone would sign up for a monthly subscription to anything at all, without any consideration for whether they'd likely have a use for it every month.
I think that's called "consequences," and it's the subject of the article you're commenting on.