
Here's the paradox: a human tells a robot to kill another human; at that point, law #2 is in direct opposition to law #1.



But why would it be? If the robot is given the First Law, then that kind of command should be "disabled" for anyone giving orders. You could try telling the robot to do it, but it shouldn't work. Your best chance would be to trick the robot into some action that indirectly leads to that person's death. But if it's smart enough to know how to protect a human life, it should be smart enough to refuse your command, knowing it would endanger that other person's life.


Law #2 makes mention of law #1:

A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
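In other words, the conflict is resolved inside the rule itself: the exception clause gives the First Law priority, so the order is refused rather than creating a paradox. A toy sketch of that precedence in Python (the would_harm_human predicate is a made-up stand-in for the robot's judgment, not anything from Asimov):

    # Toy sketch: the Second Law's exception clause gives the First Law priority.
    # would_harm_human() is a hypothetical stand-in for the robot's judgment.

    def would_harm_human(action: str) -> bool:
        """Hypothetical judgment: would carrying out this action injure a human?"""
        return action == "kill other human"

    def obey_order(action: str) -> str:
        # The First Law outranks the Second: an order that conflicts with it
        # is simply refused, so no paradox arises.
        if would_harm_human(action):
            return f"refused: '{action}' conflicts with the First Law"
        return f"executing: '{action}'"

    print(obey_order("fetch coffee"))      # executing: 'fetch coffee'
    print(obey_order("kill other human"))  # refused: conflicts with the First Law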



