
Here's an article that links to the chats. And yes, they are vile. https://arstechnica.com/tech-policy/2024/12/chatbots-urged-t...

> 9/10 times these issues are caused by the AI being overly sycophantic and agreeing with the user when the user says insane things.

Repeat after me: an AI for sale should never advocate suicide or agree that it should be a good idea.



It's an entertainment product. You're basically acting like the Comics Code is necessary, when the reality is that this is more like parents complaining that they let their kid watch an NC-17 movie and it messed them up.


Which screenshot showed an AI advocating for suicide?



