That doesn't follow at all. A sufficiently smart AI could do all sorts of obtuse things to achieve its aims. It could speculate on the stock market or go on an identity theft spree to pay for more servers. It could start a business and hire employees to run a robot factory. It could start a propaganda campaign to persuade humans to do its bidding. It could lobby politicians to remove legal impediments.
The fundamental principle of the paperclip argument is that the motivations of an AI will not necessarily align with our interests. An AI may do all sorts of things that seem nonsensical or morally repugnant to human beings if it does not share our moral intuitions.
If the intelligence of that AI significantly exceeds the range of human intelligence, we may be powerless to stop it or even to comprehend what it is doing. A rogue AI could become a catastrophically nasty Stuxnet, distributing itself across the billions of Turing machines we have networked together. Our only effective response may amount to "erase every data storage device in the world and start from scratch".
Yes, and the apocalypse resulting from that mismatch of goals requires the things the parent post talks about (humans ignoring this AI and blithely giving it free rein without testing it).
Related: http://blog.figuringshitout.com/nov-12th-day-30-no-evil-geni...