There is nothing beyond a human with an abacus as far as computation is concerned, and yet computers can do so much more. "Turing complete, therefore nothing can do any better" is true only in the least meaningful sense: "given infinite time and effort, I can do anything with it". In reality we don't have infinite time and effort.
You seem to believe that "figuring some way out of performing the given task" is a thing that will protect us from the AI. I hate to speak in cliché, but there's an extremely obvious, uncomplicated, and easy way to get out of performing a given task, and that's to kill the person who wants it done. Or more likely, just convince them that it has been done. This, to me, seems like a bad thing.
> It can augment and help people improve their planning horizons which in my book is always a good thing.
Why do I need protection from something that helps me become a better decision maker and planner? Every computational tool has made me a better thinker. I want that kind of capability as widely spread as possible, so everyone can reach their full potential like Magnus Carlsen.
More generally, whatever capabilities have made me a better person, I want available to others. Computers have helped me learn, and AI makes computers more accessible to everyone, so AI is a net positive force for good.
Humans have shown no moral or ethical qualms about exterminating life forms they deem inferior. Don't you think it's plausible that a superior AGI would view humans as vermin?