Hacker News

"GPT-4 is still not dangerous" is a bold claim already, tbh. It can still be easily jailbroken, can be used to train a local model which can spread and learn, and can be told to take on a persona with its own goals and aspirations, some of which can run counter to containment. We're already walking that tightrope.


