I think the issue here is the implied assumption that OpenAI's guardrails will prevent harm from this research _in general_, when in reality all they prevent is OpenAI's direct involvement.

Eventually somebody will use the research to train the model to do whatever they want it to do.



