
An AI's motivation would be whatever it is programmed to pursue, which is why it is so important to get this right the first time. Human motivations are incredibly complex, human moral systems have not yet been formalized, and any attempt at formalizing them breaks down at the edge cases.

This is by no means a solved question, but there have been quite a few publications on it already. There are a hundred subtle mistakes that undermine naïve and anthropocentric reasoning about morality and AI safety. Have a look at the LessWrong blog, for instance. Lots of interesting reading.




If human motivations can ever be quantified and programmed into an AI system, my guess is that 'loyalty' or 'patriotism' will be among the first.

"It is lamentable, that to be a good patriot one must become the enemy of the rest of mankind." Voltaire


> "An AI's motivation would be whatever it is programmed to do."

Well, we don't know whether it is possible to build systems as complex as the human brain and, at the same time, give them specific goals.




