To reduce that a bit further, it doesn't even have to be intentional control. Anomalous behavior in the form of bugs can have the same undesirable effects, and simpler systems have fewer bugs.
How many bugs are there in my 1960s vintage toaster? How many bugs are there in an ICBM's control circuitry? How many bugs are there in a modern kernel?
If you make lethal devices "smarter", you'd better make sure you know what they do. This isn't impossible, but it's not easy or quick either (and I think we can all agree that "easy and quick" drives a lot of systems design). Given that a lot of machine intelligence is predicated on statistical methods and eventual convergence... maybe not the best combination?
Yeah...I suppose my argument amounts to too much of a strawman. It doesn't have to be that we invent machine superintelligence...it's enough that we place naive trust in automated, neural-network systems that, without proper feedback controls, can cause catastrophic damage...whether they are sentient while doing so is beside the point.
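(To make "proper feedback controls" concrete, here's a minimal sketch of the kind of sanity layer I mean; every name and number in it is made up for illustration, not taken from any real system:)

```python
# Hypothetical sketch: wrap a learned policy so its raw output can never be
# sent straight to the hardware. The model proposes an actuator command; the
# wrapper clamps it to hard physical limits and falls back on a known-safe
# default if the output is garbage.

SAFE_MIN, SAFE_MAX = 0.0, 100.0   # assumed hard limits of the actuator

def model_predict(sensor_reading: float) -> float:
    """Stand-in for a neural network's output; in principle it could return anything."""
    return sensor_reading * 1.7 - 12.3   # arbitrary placeholder math

def safe_command(sensor_reading: float) -> float:
    raw = model_predict(sensor_reading)
    if raw != raw:                        # NaN check: never trust the raw output
        return SAFE_MIN                   # fall back to a known-safe default
    return max(SAFE_MIN, min(SAFE_MAX, raw))

print(safe_command(250.0))   # clamped to 100.0 instead of commanding 412.7
```

The point isn't the clamp itself, it's that the check lives outside the learned component.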
Interesting side point: I wonder if emergent-MI systems will be more resistant to attack?
From a biological standpoint, what we're basically doing when we deploy code today is creating and propagating generations of clones. Which didn't work out so well for bananas...
"The single bug that causes all smart-fridges to murder their owners in a pique of crushed-ice-producing rage" would be less of a concern as we move towards more exogenous (with respect to the base code) processing systems.
>human-controlled
They might not be human-controlled. That's why it's extra scary.