I think people love the idea of DoNotPay: A magical internet machine that saves people money and fights back against evil corporations.
Before ChatGPT, they were basically mad libs for finding and filling out the right form: helpful for people who couldn't figure out how to navigate situations by themselves. There is real value in this.
However, they've also been running the same growth hacking playbooks that people disdain: False advertising, monthly subscriptions for services that most people need in a one-off manner, claiming AI features to be more reliable than they are, releasing untested products on consumers. Once you look past the headline of the company you find they're not entirely altruistic, they're just another startup playing the growth hacking game and all that comes with it.
I think the same could be said of software engineers. I don't engage with the general public about writing people's next brilliant idea because it's a huge waste of my time when I could be making FAANG bucks talking to people who know I'm worth it. While I will always try to explain to my mom how the internet works, it's not economically justifiable to engage laymen to solve their problems, no matter how altruistic it may seem. I still have to pay the bills. How are lawyers any different?
Lawyers are the interface between the public and the justice system -- which would exist whether software did or not. It's an access and equity issue: people with money have access to the legal system. People without largely don't.
I don't think anyone would argue that lawyers shouldn't get paid for their time, but in many cases their rates are excessive. When people's rights and freedom are at risk the cost should never be a barrier that most people can't afford to clear.
Legal systems becoming complex predates the emergence of lawyers.
Lawyers have also led significant efforts to simplify the law. For example the American Bar Association has consistently created simple model statute frameworks that eventually are adopted.
> For example the American Bar Association has consistently created simple model statute frameworks that eventually are adopted.
"Simple" is not an accurate description of typical model legislation.
> Law is complex because society is complex.
Law is complex because it's an evolved system influenced by politics and corruption. The extent of its complexity is not intrinsic and much of it is specifically a defense mechanism against public understanding, because the public wouldn't support many things in the status quo if they understood the workings of them, and the people who do understand the workings but prefer the status quo use this to their advantage.
Your description is not consistent with history. Politics and corruption are not outsized drivers of law. Especially case law built through the courts. It’s all edge cases.
Try the example of drafting a standard apartment lease: over millions of transactions between landlords and tenants, lots of edge cases emerge. So over time leases get more complicated. And then the law around interpretation and enforcement gets complicated.
> Politics and corruption are not outsized drivers of law. Especially case law built through the courts. It’s all edge cases.
Case law is full of politics. How do you think courts resolve the ambiguities? If there was an objective standard for how to do it then judges could be replaced by computer programs. Judges are used instead because rigorous and consistent application of rules would lead to outcomes that are politically inexpedient, so judges only apply the rules as written when politics fails to require something different.
> Try the example of drafting a standard apartment lease, over millions of transactions between landlords and tenants lots of edge cases emerge. So over time leases get more complicated.
This is just a facet of how contracts and lawyers work. The law creates defaults that a contractual agreement can override, so each time the law establishes a default that landlords don't like but are allowed to change, they add a new clause to the lease to turn it back the other way. What they really want is a simple one-liner that says all disputes the law allows to be resolved in favor of the landlord, will be. But politics doesn't allow them to get away with that because what they're doing would be too clear to the public, so politics requires them to achieve the result they want through an opacifying layer of complexity.
"Politics and corruption" is exactly what is complex about society, and it is absolutely intrinsic to society. That's why we have laws in the first place.
> because the public wouldn't support many things in the status quo if they understood the workings of them
Personally, I've observed the opposite more often: somebody feeds the public a clickbait-y and manipulative "explanation" of how things work, and the public becomes enraged without any real understanding of the complexities and trade-offs of the system, or the unintended consequences of the proposed "fixes". It is the main reason why socialism is a thing.
> "Politics and corruption" is exactly what is complex about society, and it is absolutely intrinsic to society. That's why we have laws in the first place.
The reason we have laws is to facilitate corruption? That seems like something we ought not to want.
> Personally, I've observed the opposite more often: somebody feeds the public a clickbait-y and manipulative "explanation" of how things work, and the public becomes enraged without any real understanding of the complexities and trade-offs of the system, or the unintended consequences of the proposed "fixes".
That's the media. The government over-complicates things. The media over-simplifies things.
It has the same cause. People tune out when something becomes so complicated they can't understand it. So if they want people to pay attention to them, they over-simplify things. If they want people to ignore what they're doing, they over-complicate things.
> The reason we have laws is to facilitate corruption? That seems like something we ought not to want.
No, the reason we have laws is that politics and corruption and crime are intrinsic to society. They are realities of the human condition which can't go away and we can't ignore, so we have to deal with them.
> That's the media. The government over-complicates things. The media over-simplifies things.
I had in mind the part of the "media" which "rebels against the media": the Noam Chomskys and Michael Moores of the world.
> No, the reason we have laws is that politics and corruption and crime are intrinsic to society. They are realities of the human condition which can't go away and we can't ignore, so we have to deal with them.
Public corruption is intrinsic to government action, but the way you constrain it isn't by passing laws that limit the public, it's by limiting what laws can be passed by the government.
> I had in mind the part of the "media" which "rebels against the media": the Noam Chomskys and Michael Moores of the world.
Chomsky probably isn't a great example of over-simplifying things. Many of his criticisms are legitimate.
But having a legitimate criticism of the status quo is a different thing than having a viable solution.
There are a lot of cases (to the point where I expect your average person sees dozens of them every day) where the media isn't just "over-simplifying"; they're presenting things that are specifically crafted to both
- Be factually correct
- Make the reader leave with a false understanding of the situation
This exact same thing happens with political campaigns.
Over-simplifications are false. They don't even meet the bar of being factually correct, whether because the proponent is willfully leaving something out or because they're ignorant themselves. It's not impossible for it to happen innocently, because people selling simplistic narratives often build a following even when they're true believers.
The thing you're talking about is selection bias. It's the thing assholes do when they want to lie to people but don't want to get sued for defamation. Whenever you discover someone using this modus operandi, delete them from your feed.
Sure - when I was a kid, I was speeding in a neighborhood (think 40 in a 30) and an annoyed cop charged me with reckless driving. The public defender recommended I plead guilty, pay a large fine and be put on probation for a year. I think a more expensive lawyer would have had different advice.
The term "snake oil salesman" has been around since the 1800s, and that's effectively what most of these growth hackers are. I'm sure there are plenty of terms for the same practice of fraudulent marketing that predate it by centuries or even millennia. If you can hype people up enough about what you're selling and get them imagining how much better their lives will be using your product, a certain number will buy into anything (in DNP's case, people imagine how much time and money they'll save by not using a lawyer).
What I was getting at is: legal protections are good and necessary and all, but people try these things presumably because they work sometimes, and that fact bothers me. The idea that current generative AI tech - even if it were actually built to purpose - could actually fight for you in court, or output legal briefs that hold up to scrutiny and don't require review by a human expert, seems laughable to me. Law is definitely not a suitable field for an agent that frequently "hallucinates" and never questions or second-guesses your requests. There's so much that would have to go into such an AI system to be reliable, beyond the actual prose generation, that I certainly wouldn't a priori expect it to exist in 2024.
If so many people are willing to take the claim at face value, that suggests to me a general naivete and lack of understanding of AI out there that really needs to be fixed.
Aside from AI-related stuff, GGP mentioned "monthly subscriptions for services that most people need in a one-off manner". It's amazing to me that anyone would sign up for a monthly subscription to anything at all, without any consideration for whether they'd likely have a use for it every month.
Yep, need some way to image each new brain that comes online with some basics so it's not starting from 0 each time (and what basics to include would be a battle for the ages)
Oof, no thanks. Part of our resilience comes from each generation observing and learning what the world actually is without all of the dogma from the previous generation. Instilling a set of basics is probably the worst thing we could do to fight against gaming humanity.
Natural selection results in species succeeding that do some pretty brutal things. Natural selection also applies to religions, governments, and startups.
In America? Barely. EVERYTHING can be called "puffery", which apparently makes it perfectly legal to make outright lies about your product, and if you instead merely pay someone who makes outright lies, apparently that's fine too if you didn't explicitly tell them to make those specific lies!
In America, it is legal to call your uncarbonated soft drink "vitamin water"!
There's altruism, running a business, and unrestrained avarice. Sometimes libertarians are as prone to conflating the first two as leftists are the last two.
In your world, it seems the leftists have it figured out. The purpose of a business is always to maximize profit.
EDIT: yeah, yeah, I hear you. You can survive on VC money, and maximize share price instead of focusing on profit. You can also be a small business owned by good people just trying to make a living, but then you still have to not get drowned out by more ruthless competition. The purpose of a business is not always to maximize profit.
> The purpose of a business is always to maximize profit.
That's false. The purpose of a business is whatever the owners of that business decide. I've known a large number of business owners who chose less profit in exchange for any number of other attributes they valued more than max profit: more of their own time (working less), better serving a local community by donating a lot of resources / air time (media company), paying employees abnormally high wages (because said employees had been with them a long time and loyalty matters a lot to some people), and so on and so forth.
Max profit is one of a zillion possible attributes to optionally optimize for as the owner of a business. The larger the owner's stake, the more say they will obviously tend to have in the culture.
Facebook as a prominent public example, hasn't been optimized for max profit at any point in the past decade. They easily could have extracted far more profit than what they did. Zuckerberg, being the voting control shareholder, chose to invest hilariously vast amounts of money into eg the Metaverse / VR. He did that on a personal lark bet, with very little evidence to suggest it would assist in maximizing profit (and at the least he was very wrong in the closer-term 10-15 year span; maybe it'll pay off 20-30 years out, doubtful).
The pursuit of max profit is a cultural attribute, a choice, and that's all. It's generally neither a legal requirement nor a moral requirement of a business.
A majority of the shares in most publicly traded corporations are held by retirement funds, both "private" (BlackRock, Vanguard, State Street) and public (FERS, CalPERS, ...). These entities, generally, have no appreciable interest other than maximizing profit. They are all regulated financial entities, even the private ones are quasi-governmental (e.g., BlackRock has close business relationships with the Federal Reserve), and the public ones are just straight up government agencies.
So, in a pretty real way, there is a legal requirement, though like many such things in the United States today, it is not properly formalized.
But then you have to make a different statement. The purpose of a large, publicly traded for-profit corporation is to maximize profit.
This is quite an important distinction because it implies we may want to limit the prevalence of those things and increase the prevalence of small businesses and privately-held medium businesses that can advance other societal goals.
It is not clear to me that large and/or publicly traded corporations must maximize short-term profit. The seminal case of shareholders vs management concerning the Ford Motor Company in its early days shows that the objectives and incentives are not beyond debate and thus not intrinsically tied to the size or ownership model of a company.
The government has consolidated around the current set of incentives, both directly through its own arms and indirectly through legislation and policy, leading to the result we see today. Breaking these large businesses up may lead to some disruption for a while, but if the incentives stay the same, the same end result will likely be arrived at again before too long.
The purpose of a business is whatever its owners want it to be. Typically, this is maximizing profit, but it could be anything, like getting paid for what you like to do.
It's funny: the profit thing keeps being parroted here of all places, Y Combinator, when we know all too well that there are scads of businesses, especially today, that are bleeding millions and hemorrhaging cash, just to disrupt an industry sector, just to amass assets/user data, or just to amass a customer base and get sold off.
So no, profit is not a universal motive. But it's a popular one; if you have a conventional business and you expect to stay solvent year-over-year, then you make profits, you stay in the black, yes? Nobody can prognosticate when the lean years will come, and so you watch that bottom line and keep as much cushion as possible, to ride out a bad year or two.
Furthermore, if a business is competing with other businesses, that's going to moderate the profit motive with market share and other considerations. But I would say that publicly-traded companies have the strongest impetus to profit and satisfy shareholders. The publicly-traded space is far more constrained than other businesses or entities, such as charities, public interest groups, political action, NGOs, etc.
"they" seems to be referring to DoNotPay, the subject of this discussion.
"mad libs" is a game where you have a set of text with a bunch of blank spaces and then the group fills them in with words to come up with a funny end result.
Yes! I used this "AI" tool to help a friend write a letter to her landlord. It was not at all "generative AI" and seemed to just paste modules together based on her answers to its questionnaire.
To your second point, it's very funny how OpenAI seems to have soured the tech crowd on tech.
The race to the bottom in the ruthless and relentless pursuit of profit is what soured us on tech, and the AI hype train is but one in a long procession.
Actually, I think they are the opposite of contradictory. This tech is dumb, funny, new, and maybe it has potential in the right (low-stakes) applications.
Meanwhile, I dunno, I have some begrudging admiration for the folks getting rich selling premium GEMMs, but eventually they are going to piss off all their investors and cause a giant mess. Like good luck guys, get that money, but please don’t take us all down with you.
Pretty classic hacker behavior. "This sucks. Now if you'll excuse me, I'm going to go make it better because everyone else working on it is an idiot and I, alone, see the True Way Forward."
I'm not soured on tech. I'm soured on the tech industry. I think there's quite a difference between these two things.
Using OpenAI as an example. ChatGPT is wonderful for the things it's made for. It's a tool, and a great one but that's all it is.
But OpenAI itself is a terrible company and Sam Altman is a power hungry conman that borders on snakeoil salesman.
And I'm soured on people like the CEO of my company who wants to shove a GPT chatbot into our application to do things that it's not at all good at or made for because they see dollar signs.
It worked as well as any other eighteen dollar a month lawyer back when I tried it in 2017/2018.
It was actually free back then. I used it the one time and felt grateful enough for the help that I signed up for one cycle and then cancelled (since I didn't have a continued need for it).
I actually used the service in question during that time frame and did not feel deceived by their advertising. In fact, I felt good enough about the experience that I threw them a few bucks after the fact to compensate them for some of the value that they gave me.
You read an article about it 7 years later and are convinced they are crooks.
> You read an article about it 7 years later and are convinced they are crooks.
Actually, you read an article and assumed that your anecdotal experience from 7 years ago is more reflective of how a business operates than a current year investigation into that company by a federal agency.
Nobody is arguing that they deceived you personally 7 years ago.
This. Try being broke; being able to get some advice and consultation, even with trepidation, is valuable. I think it's fine as long as it's more clearly labeled as an experimental tool, not legal advice or counsel.
Basically every action the big tech companies (FAANG, MS, HP, etc.) have taken for the past decade+ has been detrimental to users. Oh sure, yes, I want ads in my operating system, and I want every browser to be Chrome with a mask, and oh right, I also want to pay a subscription to use a printer. Just absolutely bonkers brains in power at tech companies lately.
Actually, of all of these, every browser being Chrome with a mask has been kind of nice as a web developer. Never have I had to invest fewer resources in wrestling browser quirks to the ground.
> To your second point, it's very funny how OpenAI seems to have soured the tech crowd on tech.
They represent an amplifier for the enshittening that was already souring the tech crowd on tech.
LLMs used in this sort of way, which is exactly OpenAI's trillion-dollar bet, will just make products appear to have larger capabilities while simultaneously making many capabilities far less reliable.
Most of the "win" in cases like this is for the product vendor cutting capital costs of development while inflating their marketability in the short term, at the expense of making everything they let it touch get more unpredictable and inconsistent. Optimistic/naive users jump in for the market promise of new and more dynamic features, but aren't being coached to anticipate the tradeoff.
It's the same thing we've been seeing in digital products for the last 15 years, and manufactured products for the last 40, but cranked up by an order of magnitude.
Given the scope of what it handled in 2016 (parking tickets in two specific cities), I can see such a petty case being automated. The expansion in 2017 to seeking refuge seems like it'd be a bigger hurdle, but I wouldn't be surprised if a lot of that process could be automated as well.
Seems like the killing blow here was claiming it can outright replace legal advice. Wonder how much that lie made compared to the settlement.
But yes, HN in general is a lot more empathetic towards AI than what the average consensus seems to be based on surveys this year.
> It's something I personally find very bizarre, but I've definitely noticed that a lot of people have a very strong mental block about doing things on a computer, or even a browser.
It's interesting that many have expressed something similar in regards to the current LLMs, for programming for example: that even if their output isn't exactly ideal, they still lower the barrier of entry for trying to do certain things, like starting a project in Python from scratch in a stack that you aren't entirely familiar with yet.
Not sure about the history, I based my comment on this quote from the article:
>[...] DoNotPay's legal service [...] relying on an API with OpenAI's ChatGPT.
Perhaps they rolled their own chatbot then later switched to ChatGPT? Either way, they probably should have a lawyer involved at some point in the process.
Yes I think you are right about that. Someone else called it "mad libs" and that is very much what it felt like back in 2017/18.
Idk why they needed to have a lawyer involved though. Many processes in life just need an "official" sounding response: to get to the next phase, or open the gate to talk to a real human, or even to close the issue with a positive result.
Many people are not able to conjure up an "official" sounding response from nothing, so these chatbot/ChatGPTs are great ways for them to generate a response to use for their IRL need (parking ticket, letter to landlord, etc).
"Lawyer" is a regulated term that comes with a lot of expectations of the "lawyer" (liability for malpractice, a bunch of duties, etc). You can't just say that something is a lawyer any more than you could do the same with doctors or police officers.
Machines are also not allowed to be the "author" of court documents if they actually get to court (so far as I'm aware). A lawyer has to sign off and claim it as their own work, and doing so without reading it is pretty taboo (I think maybe sanctionable by the bar, but I could be wrong).
My understanding is that they had much more linear automation of very specific, narrow, high-frequency processes — basically form letters plus some process automation — before they got GPT and decided they could do a lot more “lawyer” things with it.
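For anyone curious what "form letters plus some process automation" looks like in practice, here's a minimal sketch in Python. This is purely illustrative — a guess at the general technique, not DoNotPay's actual code — and every template, dictionary key, and function name here is hypothetical:

```python
from string import Template

# Hypothetical "mad libs" form letter: a fixed template with blanks,
# filled in from a user's questionnaire answers.
PARKING_APPEAL = Template(
    "Dear $authority,\n\n"
    "I am writing to contest citation $citation_id, issued on $date.\n"
    "$reason_paragraph\n\n"
    "I respectfully request that the citation be dismissed.\n\n"
    "Sincerely,\n$name"
)

# Canned paragraphs keyed by a multiple-choice questionnaire answer —
# the "modules" that get pasted together.
REASONS = {
    "signage": "The posted signage at the location was missing or obscured, "
               "so the restriction was not reasonably visible.",
    "meter": "The parking meter at the space was out of order at the time.",
}

def draft_appeal(answers: dict) -> str:
    """Assemble a letter by substituting questionnaire answers into the template."""
    return PARKING_APPEAL.substitute(
        authority=answers["authority"],
        citation_id=answers["citation_id"],
        date=answers["date"],
        reason_paragraph=REASONS[answers["reason"]],
        name=answers["name"],
    )

letter = draft_appeal({
    "authority": "City Parking Authority",
    "citation_id": "A-12345",
    "date": "2017-03-01",
    "reason": "meter",
    "name": "J. Smith",
})
print(letter)
```

No generative AI needed: the "intelligence" is entirely in choosing which template and which canned paragraphs to use, which is why the early product felt like mad libs.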