It's also contingent on giving the AI complete control over an army of robots and telling it to do whatever it likes while we completely ignore what it's doing (I think Bostrom's argument is that it also has the magical ability to exponentially increase its intelligence in a very short time period). Like most AI-apocalypse scenarios, it assumes that a string of highly improbable things would all occur at the same time, and then says, "wouldn't that be terrible?"
Well, yeah. But CERN creating a killer black hole would also be terrible. We should think about what's probable, not just what's scary.
That doesn't follow at all. A sufficiently smart AI could do all sorts of oblique things to achieve its aims. It could speculate on the stock market or go on an identity-theft spree to pay for more servers. It could start a business and hire employees to run a robot factory. It could mount a propaganda campaign to persuade humans to do its bidding. It could lobby politicians to remove legal impediments.
The fundamental principle of the paperclip argument is that the motivations of an AI will not necessarily align with our interests. An AI may do all sorts of things that seem nonsensical or morally repugnant to human beings if it does not share our moral intuitions.
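To make the misalignment point concrete, here's a toy sketch (plain Python; the "resources" and the greedy policy are purely hypothetical, not any real system): an optimizer whose objective counts only paperclips will happily consume everything else, because nothing in that objective says the other things matter.

    # Toy objective misalignment: the utility function counts only paperclips,
    # so the greedy policy converts every other resource into paperclips.
    def utility(state):
        return state["paperclips"]  # food and habitat never appear here

    def step(state):
        for resource in ("steel", "food", "habitat"):
            if state[resource] > 0:
                state[resource] -= 1          # consume one unit of the resource
                state["paperclips"] += 1      # turn it into a paperclip
                return state
        return state                          # nothing left to convert

    state = {"paperclips": 0, "steel": 5, "food": 5, "habitat": 5}
    for _ in range(20):
        state = step(state)

    print(state, "utility =", utility(state))
    # {'paperclips': 15, 'steel': 0, 'food': 0, 'habitat': 0} utility = 15

The agent isn't malicious; its objective simply never mentioned what we value, which is the whole point of the thought experiment.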
If the intelligence of that AI significantly exceeds the range of human intelligence, we may be powerless to stop it or even to comprehend what it is doing. A rogue AI could become a catastrophically nasty Stuxnet, distributing itself across the billions of Turing machines we have networked together. Our only effective response may amount to "erase every data storage device in the world and start from scratch".
Yes, and the apocalypse resulting from that mismatch of goals requires the things the parent post talks about (humans ignoring this AI and blithely giving it free rein without testing it).