Yeah...I suppose my argument sets up too much of a strawman. It doesn't have to be that we invent machine superintelligence...it's enough that we naively trust our automated neural-network systems, which, without proper feedback controls, can cause catastrophic damage...whether they're sentient in doing so is beside the point.
Interesting side point: I wonder if emergent-MI systems will be more resistant to attack?
From a biological standpoint, what we're basically doing when we deploy code today is creating and propagating generations of clones. Which didn't work out so well for bananas...a single pathogen (Panama disease) nearly wiped out the Gros Michel precisely because every plant was genetically identical.
"The single bug that causes all smart-fridges to murder their owners in a pique of crushed-ice-producing rage" would be less of a concern as we move towards more exogenous (with respect to the base code) processing systems.