An AI's motivation would be whatever it is programmed to pursue, which is why it is so important to get this right the first time. Human motivations are incredibly complex, human moral systems have not yet been formalized, and any attempt to formalize them breaks down at the edge cases.
This is by no means a solved question, but there have been quite a few publications on it already. There are a hundred subtle errors that undermine naïve and anthropocentric reasoning about morality and AI safety. Have a look at the LessWrong blog, for instance; lots of interesting reading there.