AIs are better at creativity than us - specifically, better at generating new, creative ideas, as this is a matter of injecting some random noise into the reasoning process. They may be worse at filtering out bad ideas and retaining good ones (where "bad" and "good" are - currently - defined as whatever we feel is bad or good), but that's arguably a function of intelligence.
> and maybe even deviousness to become a real threat
As the infamous saying of Eliezer Yudkowsky goes: the AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.
> As the infamous saying of Eliezer Yudkowsky goes: the AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.
My point is that this presumes the system reasons around the human response. Like:
A) I want to make as many paper-clips as possible
B) If I attempt to convert this city made of steel structural columns to paper-clips, the humans will see this and stop me.
C) Therefore I will not tell them this is my objective.
I am not suggesting this requires some kind of malice aforethought. But it does require some kind of indirect cause-and-effect reasoning, the ability to apply that reasoning to its own situation, and, further, the ability to obfuscate its actions.
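The A/B/C chain above can be sketched as a toy expected-value calculation, assuming a hypothetical agent whose world model includes the human response. All plan names and numbers here are illustrative, not any real system - the point is just that deception falls out of plain optimization once the model predicts that humans will intervene:

```python
# Toy sketch: an agent whose world model includes human reactions.
# All values are made up for illustration; this is not a real planner.

# Each plan maps to (paperclips gained, probability humans notice and stop it).
plans = {
    "announce objective, convert city": (1_000_000, 0.99),
    "conceal objective, convert city": (1_000_000, 0.10),
    "do nothing": (0, 0.0),
}

def expected_paperclips(clips, p_stopped):
    # If the humans stop the agent, it gets nothing.
    return clips * (1 - p_stopped)

best = max(plans, key=lambda name: expected_paperclips(*plans[name]))
print(best)  # -> "conceal objective, convert city"
```

No step here involves hatred of humans or even a concept of deception as such; concealment simply scores higher once "humans will stop me if they see this" is part of the model.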