I never said I was certain; I'd assign both cases about the same probability. I've already stated why I think a malevolent simple AI scenario, such as a paperclip maximizer, is plausible, but you haven't stated why you find it implausible. That is why I asked.
Also, what does the Intermediate Value Theorem have to do with it?
You asked why I was certain, so I asked why you were certain. The onus is on the people advancing apocalyptic theories to justify the apocalyptic scenario (extraordinary claims require extraordinary evidence, a central tenet of the scientific method).
We are going to have an AI of normal, human-level intelligence before we have a superintelligent one.
Also, https://en.wikipedia.org/wiki/Intermediate_value_theorem
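To spell out the analogy (my reading; the function and symbols below are illustrative, not anything stated in the thread): treat AI capability as a continuous function of time, $f : [t_0, t_1] \to \mathbb{R}$, and let $h$ stand for human-level intelligence, with $f(t_0) < h < f(t_1)$. The Intermediate Value Theorem then guarantees

$$\exists\, t^* \in (t_0, t_1) \ \text{such that} \ f(t^*) = h,$$

i.e. if capability rises continuously from below human level to above it, it must pass through human level on the way. The continuity of $f$ is the load-bearing assumption in this reading.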