In this hypothetical, the infinitely intelligent super AI, knowing that what it says must not be acted upon, could say exactly the right thing to get you to do what it really wanted you to do anyway. I'm thinking of that scene in Doctor Who where the Doctor takes down the Prime Minister with just six words.
That feels like a Maxwell's Demon kind of infinite intelligence to me.
I recognise this might be a failure of imagination on my part, given how many times I've seen other people say "no AI can possibly do XYZ" even after an AI has already done XYZ. But based on what I see, this is extrapolating beyond what I'm comfortable anticipating.
The character of The Doctor can be excused here, not only for being fictional, but also for having a time machine and knowing how the universe is supposed to unfold.
We're well into Maxwell's Demon thought-experiment-grade territory here. An ASI that dooms the human race is absolutely the same sort of intellectual faffing about that Maxwell proposed in 1867 with his thought experiment, though it wasn't referred to as a demon until later, by Lord Kelvin, in 1874. It wouldn't be until the early 1970s that the Unix daemon would come about.
If you want to look at successes, corn (albeit with some modifications) and domesticated animals have also been really successful at making sure their DNA reproduces.
Crops, pets, and livestock are symbiotic with us; they don't hurt us. The things I listed harm their host, and they had to be in that category to make the point that harming us doesn't require high IQ. The harms we suffer from corn very much count as our own fault.
https://www.youtube.com/watch?v=GidbEhL0teE