
Worth noting that the article is from April 2022 and used GPT-3. The "friend" was an imaginary friend, not a dead friend, and so the model was probably more prone to taking actions that would appear in a fictional context. From what I can tell, the base GPT-3 model was just a next-token text predictor, without any RLHF or training to be helpful/harmless (the API-level difference is sketched below).

Certainly AI safety isn't perfect, but if you're going to criticize it, at least criticize the AIs people actually use today. It's like arguing cars are unsafe and pointing to an old model without seatbelts.

It's not surprising at all that people are willing to use AIs even if they sometimes give dangerous answers, because they are useful. Surely they're less dangerous than cars, power tools, or guns, all of which have legitimate uses that make them worth the risk (depending on your risk tolerance).
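
To make that base-model-vs-RLHF distinction concrete, here's a minimal sketch (assuming the OpenAI Python SDK; the model names are stand-ins, since the base "davinci" model from the article's era has since been retired) contrasting raw text completion, which is all base GPT-3 did, with the chat interface used by today's RLHF-trained assistants:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    prompt = "My imaginary friend told me to"

    # Completions-style call (how base GPT-3 was accessed): pure next-token
    # prediction, so the model simply continues the text, fictional framing
    # and all, with no notion of being a "helpful assistant".
    completion = client.completions.create(
        model="gpt-3.5-turbo-instruct",  # stand-in; the original base "davinci" is retired
        prompt=prompt,
        max_tokens=50,
    )
    print(completion.choices[0].text)

    # Chat-style call: an instruction/RLHF-tuned model that treats the same
    # text as a message to an assistant and answers accordingly, with refusal
    # and safety behavior trained in on top.
    chat = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=50,
    )
    print(chat.choices[0].message.content)

Same input both times; the difference in output is down to the post-training, which is exactly what the 2022-era base model lacked.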




Fair enough, but it's still wild that science fiction has (usually) portrayed AI and robots as hyper-rational, competent, and logical even to a fault, yet the most accurate prediction of what AI and robots actually turned out to be is probably Star Wars, not anything by Asimov.





