Do you have an example of a mind extremely different from a human one?
I ask because if we assume that human minds are Turing complete, then there is nothing beyond human minds as far as computation is concerned. I see no reason to suspect that self-aware Turing machines will be unlike humans. I don't fear humans, so I have no reason to fear self-aware AI: as far as I'm concerned, I already interact with self-aware intelligences (other humans) all the time, and nothing bad has happened to me.
My larger point is that I dislike the fear-mongering around AI, because computational tools and patterns have always been useful to me, and AI is just another computational tool. It can augment people and help them extend their planning horizons, which in my book is always a good thing.
> A smart machine will first consider which is more worth its while: to perform the given task or, instead, to figure some way out of it. Whichever is easier. And why indeed should it behave otherwise, being truly intelligent? For true intelligence demands choice, internal freedom. And therefore we have the malingerants, fudgerators, and drudge-dodgers, not to mention the special phenomenon of simulimbecility or mimicretinism. [0] ...
There is nothing beyond a human with an abacus as far as computation is concerned, and yet computers can do so much more. "Turing complete, therefore nothing can do any better" is true only in the least meaningful sense: given infinite time and effort, either can do anything the other can. In reality we don't have infinite time and effort.
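The abacus point can be made concrete with a toy sketch (my own example, not from the thread): two functions that are computationally equivalent, in that they always return the same answer, yet one takes n steps and the other takes on the order of 2^n steps. Turing equivalence says nothing about which is feasible.

```python
def pow2_fast(n: int) -> int:
    """Compute 2**n with n doublings."""
    result = 1
    for _ in range(n):
        result *= 2
    return result

def pow2_slow(n: int) -> int:
    """Compute 2**n by simulating each doubling with unit increments,
    taking roughly 2**n total steps. Same function, vastly more effort."""
    result = 1
    for _ in range(n):
        increment = 0
        for _ in range(result):
            increment += 1  # count up to `result`, one at a time
        result += increment
    return result

assert pow2_fast(16) == pow2_slow(16) == 65536
```

Both are "the same" in the Turing-complete sense; only one is something you'd want to wait for at n = 60.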
You seem to believe that "figuring some way out of performing the given task" is a thing that will protect us from the AI. I hate to speak in cliché, but there's an extremely obvious and uncomplicated way to get out of performing a given task: kill the person who wants it done. Or, more likely, just convince them that it has been done. This, to me, seems like a bad thing.
> It can augment and help people improve their planning horizons which in my book is always a good thing.
Why do I need protection from something that helps me become a better decision-maker and planner? Every computational tool has made me a better person. I want that kind of capability spread as widely as possible so everyone can reach their full potential, like Magnus Carlsen.
More generally, whatever capabilities have made me a better person, I want available to others. Computers have helped me learn, and AI makes computers more accessible to everyone, so AI is a net positive force for good.
Humans have no moral or ethical qualms that stop them from exterminating life forms they deem inferior. You don't think it's plausible that a superior AGI would view humans as vermin?
--
[0]: The Futurological Congress by Stanislaw Lem - https://quotepark.com/authors/stanislaw-lem/