An AI would not be 'smart enough' to figure out whether something is a good idea or not. That requires human context; that's the whole point. An AI is not good or evil, it's ultimately a loose cannon with no understood (or even understandable) motivations at all.



Sure, but if an AI isn't smart enough to figure out that "trying to murder corporate leaders is a bad plan that could lead to seizure of the portfolio I'm trying to manage, among other things," then how do you expect it to be smart enough to engage in the sort of open-ended planning necessary to pull off the described murder spree? :)


As someone pointed out, perhaps the example wasn't thought out well enough. So why go into the details of the example and skip the underlying point: that maybe the system WILL come up with a bad plan and execute it? Or are you saying:

"if the system is smart enough to plan murder, he is also smart enough to know murder is bad" ??

Doesn't that depend on what data the system has seen? Perhaps it doesn't have cases in its knowledge base that demonstrate the side effects of murdering business heads. Perhaps it can only see what happens when they die of natural causes. What if it simply correlates the wrong variables (and forgets that correlation isn't causation)?

I apologize for coming back to the same example again, but it would be really nice if you overlooked the flaws of the example and discussed the underlying point.



