The mediocre poisoning instructions aren't supposed to be scary in and of themselves; they're just interesting as a demonstration that a safety feature has been bypassed.

None of the "evil" use cases are particularly exciting yet for the same reasons that the non-evil use cases aren't particularly exciting yet.

Governments, tech companies, and academic and industry groups are designing guidance and rules based on the "safety" threat of AI, yet these benign use cases are the best examples they have. I agree it parallels some of the business hype; neither is a good way to move forward.
