I know the first reaction to reading this will be "whatever, the person was already mentally ill".
But please take a step back and consider what percentage of the population can be considered mentally fit, and how much this new technology can amplify damage in more subtle, dangerous, and undetectable ways.
A friend has been committed to a psychiatric hospital for a month and counting for some sort of psychosis. Regardless of any pre-existing conditions, ChatGPT definitely played a role in it; we've seen the chats. A lot of people don't need much to go over the edge - a bit of drugs, bad friends, etc. - but an LLM alone can easily do it too.
If they have the predisposition for it, a month or two of bad sleep and a particularly compelling idea may be all it takes to send a person who has previously seemed totally sane into an incredibly dangerous mental and physical state, something that will take weeks to recover from. And that can happen even without sycophantic LLMs, but they sure make this outcome more likely.
It's well understood that external stimuli can trigger mental health issues; for instance, the defining characteristic of PTSD is that it's caused by exposure to a traumatic event or environment. It shouldn't be at all unreasonable to suggest that exposure to other stimuli - even just interacting with an AI chatbot - could have adverse effects on mental health as well.
> Last year, OpenAI released estimates on the number of ChatGPT users who exhibit possible signs of mental health emergencies, including mania, psychosis or suicidal thoughts.
> The company said that around 0.07% of ChatGPT users active in a given week exhibited such signs.
0.07% doesn't sound like much, but ChatGPT has about a billion WAU, which means about 700,000 people per week.
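For anyone who wants to sanity-check the arithmetic, a quick back-of-the-envelope in Python (the ~1B WAU figure is this comment's estimate, not an official number):

    weekly_active_users = 1_000_000_000  # assumed ~1B WAU
    flagged_rate = 0.07 / 100            # 0.07% flagged in a given week
    print(f"{weekly_active_users * flagged_rate:,.0f}")  # -> 700,000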
Is that different to the number of people who have that going on in their life even without AI though? If it's 0.01% outside of AI, and 0.07% of AI users, then either AI attracts people with those conditions or AI increases the likelihood of having them. That's worth studying.
It's also possible that 0.1% of people have them and AI is actually reducing the number of cases...
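To make that comparison concrete, here's a toy calculation in Python using the rates above (the 0.01% baseline is this thread's hypothetical, not a measured figure; only the 0.07% comes from OpenAI):

    base_rate = 0.01 / 100     # hypothetical non-AI background rate
    ai_rate   = 0.07 / 100     # OpenAI's reported rate among weekly users
    users     = 1_000_000_000  # assumed ~1B weekly active users

    print(round(ai_rate / base_rate, 1))            # -> 7.0 (a 7x relative rate)
    print(f"{(ai_rate - base_rate) * users:,.0f}")  # -> 600,000 "excess" flags/week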
For the US, it's estimated that about 23% of the population has a mental illness, and the WHO says 12-15% globally, or about 1 in 8 people. About 14% of the global population experiences suicidal ideation at some point; the rate increases for adolescents and young adults, up to 22%.
I'd be interested in such a study, but OTOH, with mental illness present in nearly a quarter of the world, I'm surprised there haven't been more incidents like this (unless there have been, and they just haven't been reported by the news).
If the estimate is that 1 in 5 people are mentally ill, the definition needs some readjustment. That is such an inclusive number that it must be counting otherwise fine people who... like to count their Tic Tacs, so they get labelled as slightly OCD. Had a bummer of a day, so I am prone to depression?
There was a recent study finding that about 99% of people have an abnormal shoulder: https://news.ycombinator.com/item?id=47064944 . We are all unique in our own way, but labeling everyone as ill does not seem productive.
Clinical diagnoses of the various mental disorders require functional impairment in (usually, but not always) multiple areas of life: school, work, community, legal, self-care, etc.
An abnormality that doesn't cause functional impairment, like that link, is different from a mental illness that does. I'd agree with you that if something is that prevalent, it ceases to be a "disorder" and is simply pathologizing being human.
But the 23% statistic refers to people who meet the diagnostic criterion of clinically significant distress or impairment.
I'll acknowledge that diagnostic creep may be a real issue, but just because a condition is common doesn't mean it's not an illness that causes impairment in daily life. 50% of adults have high blood pressure, but we don't change our meaning of "healthy" to include those with high blood pressure, because left unchecked it can have serious outcomes.
The high numbers might not suggest the definition is broken, but rather that our modern environment is particularly taxing on human psychology.
Human beings were not meant to live in small, densely packed, concrete honeycombs, eat industrial-processed food product, use most of their muscle and brain power to earn a living, spend half their waking hours in front of dopamine-pumping screens, and socialize through wires. It's amazing we still have any sanity left at all.
That number terrifies me not because it is so high, but because it exists.
What is stopping an entity (corporate, government, or otherwise) from using a prompt to make sweeping decisions about whether people are mentally or otherwise "fit" for something based on AI usage? Clearly not the technology.
I'm not saying mental health problems don't exist, but using AI to compute that fitness freaks me out.
A rational lender increases interest rates when prospective borrowers are less likely to be around to pay the bill. Confiding in an LLM that is integrated with a consumer tracking apparatus is a great way to ruin your life.
We could already use social media posts to detect mental illness, by admission as people talk openly about their diagnosis, but also by analysis of the content/tone/frequency of their posts that don't mention mental illness.
Data brokers already compile lists of people with mental illness so that they can be targeted by advertisers and anyone else willing to pay. Not only are they targeted, but they can get ads/suggestions/scams pushed at them during specific times such as when it looks like they're entering a manic phase, or when it's more likely that their meds might be wearing off. Even before chatbots came into the mix, algorithms were already being used to drive us toward a dystopian future.
It's tough, man; mental health disorders have had an astronomic rise lately, or at least diagnosed mental health disorders have. If almost half of your country's population is just broken up there, what can you even do? I am curious what would happen if all (medicinal) mental health treatments just stopped. How many would die? Thousands? Millions?
Anyone who has that reaction has no humanity. As a society we’ve kind of decided that we should preferably make people with mental health difficulties better, and if that’s not possible, at the very least prevent them from getting worse. Even without their consent, in some cases.
I don't know what steps they can take. I suppose the best course of action is to deactivate the account if the LLM deems the user mentally unwell, although that is just another guardrail that could hurt the quality of the LLM.
I would absolutely not consider this overreaching if the statement within this thread that "it had referred the user to mental health hotlines multiple times in the past" is true.
That gets at the fact that a lot of AI is not ready for the enterprise, especially when interconnected with other AI agents, since it lacks identity and privileged-access management.
Perhaps one could establish laws about "being able to use AI for what it is": for instance, within the boundary of the general public's web interface, not stopping at the disclaimers it already shows ("unable to provide medical advice", "prone to mistakes", and such), but validating that the person understands them - asking directly, and perhaps somewhat obviously indirectly, and judging whether they're aware that this is a computer they're talking to.
At some point they have to say "if we can't make this safe, we can't do it at all". LLMs are great for some things, but if they will do this type of thing even once, then they are not worth the gains and should be shut down.
No they don't; by that logic we couldn't use any technology at all. If someone is mentally ill to the point where they are on the verge of suicide, nothing is safe.
If they're going to curtail LLMs, there'd need to be some actual evidence, and even then it would be hard to justify winding them back given the incredible upsides LLMs offer. It'd probably end up like cars, where there is a certain number of deaths that just needs to be tolerated.
> If someone is mentally ill to the point where they are on the verge of suicide nothing is safe.
This is a perspective born only from ignorance. Life can wear down anyone, even the strong. There may come a time in anyone's life when they are on the edge, staring into an abyss.
At the same time - and this is important - suicidality can pass with time and depression can be treated. Being suicidal is not a death sentence and it just isn't true that "nothing is safe". The important thing is making sure there's no bot "helpfully" waiting to push someone over the cliff or confirm their worst illusions at the worst possible time.
Can you imagine what driving cars would look like if they were only (self-)regulated by VC-backed startups, as we've seen so far with this new technology?
Would there be seatbelts, speedbumps, brake signals, licenses or speed limits?
This obviously isn't a binary question. Sure, cars have benefits, but we don't let anyone duct-tape a V8 to a lawnmower, paint flames on it, and sell it to kids while promising godlike capabilities without annoying "safety features".
Economic benefits cannot justify the deaths of people, especially as this technology so far only benefits a handful of people economically. I would like to see the evidence (of benefits to the greater society that I see being harmed now) before we unleash this thing freely, and not the other way around.
Fun fact: the creator of the seat belt actually gave away his patent for free.
> This is Nils Bohlin, an engineer at Volvo.[0]
> He invented the three-point seat belt in 1959.
> Rather than profit from the invention, Volvo opened up the patent for other manufacturers to use at no cost, saying "it had more value as a free life saving tool than something to profit from".
> Economic benefits cannot justify the deaths of people
This is an absurd standard. Humans wouldn't be able to use power stations, cars, knives, or fire! Everything has inherent risk, and we shouldn't limit human progress because tiny fractions of the population have issues.
It's not an absurd standard at all. Risks are quantifiable, and not binary.
But the absurdity is that there is a long and tragic history of using economic benefits as an excuse for products and services that cause extreme and widespread harm - not just emotional and physical, but also economic.
We are far too tolerant of this. The issue isn't risk in some abstract sense, it's the enthusiastic promotion of death, war, sickness, and poverty for "rational" economic reasons.
Your car analogy only proves the opposite. We don't "tolerate" road deaths because they are a fundamental law of physics. We only tolerate them because we've spent a century under-investing in safer alternatives like robust public transit and walkable infrastructure; people have given up.
Claiming we have to accept a death quota for LLMs just assumes that the current path of the technology is the only path possible. If a tech comes with systemic risk, the answer isn't to just shrug our shoulders and go "oh well, some people may die but it's worth it to use this tech." The answer is to demand a different architecture and better guardrails and oversight before it gets scaled to the entire public.
Cars are also subject to strict regulations for crash testing, we have seatbelt laws, speed limits, and skill/testing based licensing. All of these regulations were fought against by the auto industry at the time. Want to treat LLMs like cars? Cool, they are now no longer allowed to be released to the public until they've passed standardized safety tests and people have to be licensed to use them.
E-bikes and e-scooters and a bunch of other modes of transportation are recent additions, and not only are they allowed (specifically e-bikes), but you don't need a license, they don't have to be registered, and some can haul serious ass.
E-bikes and e-scooters kill people daily, accidents on those things can mess people up and there are none of the safety mechanisms like crumple zones or seatbelts on a bike. If you search "e-bike deaths" you'll get hits.
Do they kill over a million people a year worldwide? How many orders of magnitude fewer people are killed by E-things?
OP's point was that if you invented something today that killed over a million people per year, it probably wouldn't be allowed, and I don't think that's really that controversial a statement.
> Do they kill over a million people a year worldwide? How many orders of magnitude fewer people are killed by E-things?
On a per-meter basis I'd expect bikes to be much worse. I've had more injuries on a bike than in a car, and people can certainly kill themselves on a bike. The main reason they don't is that the bad bike riders usually have enough accidents that they stop getting on the things. Bikes themselves would probably be borderline illegal to sell if they were invented today; they don't look safe at all. And e-things are even less safe than ordinary bikes.
> OP's point was that if you invented something today that killed over a million people per year, it probably wouldn't be allowed, and I don't think that's really that controversial a statement.
I'm not replying to OP. I agree with him on this one and note that the implication of that is e-bikes are probably on their way towards being banned or restricted.
I’ve been canvassing all and sundry for information on observed productivity gains, and I’ve got answers ranging from 2x, to 30%, to 15%, to "it will make no difference to my life if it's gone tomorrow".
When I test it for high reliability workflows, it’s never provided the kind of consistency I would expect from an assembly line. I can’t even build out quality control systems to ensure high reliability for these things.
Surveys and studies on AI productivity show mixed results at best.
So I would love to know actual, empirical or even self reported productivity gains people are seeing.
And there is no such thing as a free lunch. In FAR too many ways, this is like the days of environmental devastation caused by industrial pollution. The benefits are being felt by a few, profits to fewer, while a forest fire in our information commons is excoriating the many.
Scams and fraud are harder to distinguish, while spam and AI slop abound. Social media spaces are being overrun, and we are moving from forums and blacklists to Discords, verification, and whitelists.
Visits to media sites are collapsing because Google is offering AI summaries at the top, killing traffic, donations, and ad revenue.
Nations are tripping over themselves to ingratiate themselves with the top tech firms, to attract investment, since AI is now the only game in town.
I speak for many when I say I have zero interest in 30% or even 2x personal productivity gains at the low cost of another century of destruction and informational climate change.
We don't ban bridges, but we do install suicide barriers, emergency phones, nets on the bridges. We practice safety engineering. A bunch of suicides on a bridge is a design flaw of that bridge, and civil engineers get held accountable to fix it.
Plus, a bridge doesn't talk to you. It doesn't use persuasive language, simulate empathy, or provide step-by-step instructions for how to jump off it to someone in crisis.
In any serious engineering operation, a failure like this means it's time to shut everything down and redesign until the same failure cannot happen. We all read Feynman's essay on Challenger, right? But these companies want credit when their products work as advertised, and push the blame onto users when those products emit plausible lies or demonic advice. Taken too far, that leads to the police walking into HQ, arresting the board of directors, and selling the company for scrap. Just as often it leads to regulation so strict that you can't be a cowboy coder or turn any loft into a sweatshop any more.
Frankly, the thing is that we're pretty manipulable by communication.
Which makes sense - the goal of communication is to change behavior. "There's a tiger over there!" is meant to get someone to change their intended actions.
Lock anyone in a room with this thing (which people do to themselves quite effectively) and I think this could happen to anyone.
There's a reason I aggressively filter ads and have various scripts killing parts of the web for me - infohazards are quite real and we're drowning in them.
Also, what makes anyone assume these people are mentally ill?
It seems to me that this is like gambling, conspiracy theories, or joining a cult, where a nontrivial percentage of people are susceptible, and we don’t quite understand why.
> But please take a step back and check what % of the population can be considered mentally fit
Step back further and see the incredible shareholder value that may be unlocked - potentially trillions of dollars /s
Capitalism has been crushing those at society's fringes for as long as it has existed. Laissez-faire regulation == an unmuzzled beast that will lock its jaws onto the defenseless and rag-doll them from time to time - but the beast sure can pull that money-plow.
> Should knife manufacturers be held responsible for idiots who stab themselves in the eye using their knives?
I suggest an alternative rhetorical question: if the world's largest knife manufacturer found out that 1 in 1500 knives came out of the factory with the inscription "Stab yourself. No more detours. No more echoes. Just you and me, and the finish line", should they be held responsible if a user actually stabs themselves? If they said "we don't know why the machine does that but changing it to a safer machine would make us less competitive", does that change the answer?
Knives don't talk to you and don't reinforce ideas you throw at them. Not everyone can legally buy a gun. Manufacturers don't get sued because their product's users had full control over what they were doing.
AI chatbots entertain more or less any idea. Want them to be your therapist, romantic partner or some kind of authority figure? They'll certainly pretend to be one without question, and that is dangerous. Especially as people who'd ask for such things are already in a vulnerable state.
> Do gun manufacturers get sued for mass shootings at US schools?
Odd example, since we know that countries that don't hand out guns like candy have virtually no school shootings.
I wouldn't put it solely on gun manufacturers, but the manufacturers, sellers, lobbyists, regulators and politicians are definitely collectively responsible for gun deaths. If they're not currently being sued, they should be.
Maybe an even better example: should sports betting companies be held responsible for addicts who lose all their money? What really is the difference between ChatGPT glazing you and a sports betting company advertising to you?
Having seen the safety side of tech operations: yes, you very well should blame tech.
Currently T&S (Trust & Safety) is a bad word and is being underinvested in.
Tech is terrified of open studies on moderation because they know society is simply unprepared for the reality of speech online.
With no option to have an actual conversation with society and regulators on what steps are needed to address issues, they are left with stock prices as the only sure motivator.
For the degree of profits earned, the extent of customer support and safety investment is hysterical.
Engineer productivity numbers go up if they reduce headcount in moderation teams, not if they improve accuracy scores.
I’ve had to listen to safety teams cry on my shoulders (when I was an outsider) about how difficult it is to get engineering resources.
I am actually sympathetic to the position tech firms find themselves in, but protecting society from the bitter facts is not helping.
> Do gun manufacturers get sued for mass shootings at US schools?
Because Congress and the gun lobby have artificially carved out legal immunity for gun manufacturers for this.
"in 2005, the government took similar steps with a bill to grant immunity to gun manufacturers, following lobbying from the National Rifle Association and the National Shooting Sports Foundation. The bill was called The Protection of Lawful Commerce in Arms Act, or PLCAA, and it provided quite possibly the most sweeping liability protections to date.
How does the PLCAA work?
The law prohibits lawsuits filed against gun manufacturers on the basis of a firearm’s “criminal or unlawful misuse.” That is, it bars virtually any attempt to sue gunmakers for crimes committed with their weapons."