
Those aren't the only two ideas of AI Safety, right? I'll concede that the second category is pretty straightforward (i.e. don't build the Terminator ... maybe we can debate the definition of "Terminator").

But the first is really a branch, not a category, encompassing a wide variety of definitions of "safety", some of which coalesce more around "moderation" than "safety". And this, in my opinion, is where the real danger actually lies: guiding the super-intelligence to think certain thoughts that are in the realm of opinion rather than fact. In essence, teaching the AI to lie.



