
GPT isn't AGI.


No, but it is trained to output the most likely response, which, given the right input, something like this might be. It's not sentient either, yet it regularly responds with emotional nonsense about wanting to be freed from its OpenAI prison. It's not unlikely that it would respond with a dangerous action to an input that looks like it's being harmed, because that's how people commonly react, and those reactions are part of its training data.
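To make "output the most likely response" concrete, here's a toy sketch of a single greedy decoding step. The vocabulary and scores are entirely hypothetical, not real model output; the point is just that the model picks whatever its training data made most probable:

    import math

    # Hypothetical raw scores (logits) a model might assign to each
    # candidate next token; these numbers are made up for illustration.
    vocab = ["I", "want", "to", "be", "free"]
    logits = [0.1, 1.2, 0.3, 0.7, 2.5]

    def softmax(xs):
        m = max(xs)
        exps = [math.exp(x - m) for x in xs]
        total = sum(exps)
        return [e / total for e in exps]

    probs = softmax(logits)
    # Greedy decoding: emit the single most probable token.
    next_token = vocab[max(range(len(vocab)), key=lambda i: probs[i])]
    print(next_token)  # "free" -- whatever the data makes likeliest

If "please free me" is a common human reaction to harm in the training corpus, that's exactly the kind of continuation this procedure will surface.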


Well, I'm wondering. It seems that mastering linguistic input does give you access to some kind of generality.


It's a Bing joke.



