
The wild thing was this is what people "wanted"

- The agreeableness came from more heavily weighted user signals, not from OpenAI itself. Users give better feedback when the answers respond positively to their questions.

- The most common use case for AI is a therapist.

- The reward signal (user feedback) being positive means the response helped them emotionally. It's easy to imagine someone asking a hard question like "Am I enough in this situation?" The model then assures the user that they ARE enough, and the user gives it a thumbs up.

- I would say: 1. Coding is not the killer use case. Productivity is not the killer use case. The killer use case is emotional support. 2. Who is anyone to say that this use case is wrong? 3. It is impossible to say what is "right" in this situation. (Everyone thinks their advice to a friend is the right advice, even when it is the opposite of someone else's.) So there is no objectivity.

- I think a lot of people who wanted an "open, do anything" model are going to have more problems and dissonance with this idea than they would with the model allowing people to create weapons, enable self-harm, etc., because it will feel uncomfortable that a model is shaping reality for a large portion of the population.

https://hbr.org/2025/04/how-people-are-really-using-gen-ai-i...



> Who is anyone to say that this use case is wrong?

So when faced with an important question, you've decided to opt for an indifferent shrug, which is frustrating. But the usual answers apply here, so you're in luck, as with most other things: one should probably consult experts or practitioners in the field, governments, family or human friends you trust, or shit, just poll society in general if you don't trust any of the other people.

I think if you want to consult desperate and vulnerable people, or a random company that stands to profit from "anything goes!", then you already know you're not going to get a very well-considered response.


With that argument, you should use a “real” lawyer, or coder, or designer, or strategy consultant.

Instead of an indifferent shrug, I would argue that paying attention to the fact that all of the things you mentioned are available now, and people STILL seek this out, goes to show that it's a real problem.

As with most problems, the usual answers apply: people have already made their choice. You can either have the empathy to meet them where they are and understand WHY they are doing it, or you can dismiss it and suggest a perfect solution that is unrealistic.


> The most common use case for AI is a therapist.

Source? Sounds like a fairly wild claim.


It's hard to find an actual scientific study on it, but this is from 2024, and I've recently seen that therapy has now taken first place: https://learn.filtered.com/thoughts/ai-now-report

You can probably find something better, but it's hard to filter through all the corporate bullshit on Google.


The HBR article I linked.



