One argument says AI may become smarter than us and decide to kill us all, either on purpose or by accident.
But why not consider that the AI may be kind: it may help us and give us things. After all, smart humans are generally kind. Most of us wouldn't think to kill a dolphin, so why would an AI think to kill a human or a dolphin?
Smart humans drive cars that splat thousands of insects on their windshields. Smart humans buy clothes imported from third-world sweatshops staffed by child workers, and food from slaughtered pigs, cows, and chickens farmed in unpleasant conditions. Smart humans flush their excrement into the rivers they take drinking water from and the oceans they fish food from, the same places they dump their unwanted plastic. Smart humans dig up oil, convert it into disposable plastic, then bury it and build homes on top. Smart humans fight other humans to the death for oil instead of funding renewable energy, and smart humans bitch at other humans over anything they have impassioned disagreements about.
Smart humans are generally kind: a) to other humans, b) who they care about (or share some world view with).
To quote WaitButWhy.com:
if there are two guinea pigs, one normal one and one with the mind of a tarantula, I would feel much less comfortable holding the latter guinea pig, even if I knew neither would hurt me.
Now imagine that you made a spider much, much smarter—so much so that it far surpassed human intelligence. Would it then become familiar to us and feel human emotions like empathy and humor and love? No, it wouldn’t, because there’s no reason becoming smarter would make it more human—it would be incredibly smart but also still fundamentally a spider in its core inner workings. I find this unbelievably creepy. I would not want to spend time with a superintelligent spider. Would you??
When we’re talking about ASI, the same concept applies—it would become superintelligent, but it would be no more human than your laptop is. It would be totally alien to us—in fact, by not being biology at all, it would be more alien than the smart tarantula.
By making AI either good or evil, movies constantly anthropomorphize AI, which makes it less creepy than it really would be. This leaves us with a false comfort when we think about human-level or superhuman-level AI.
I agree that true AI could be extremely non-human, but going by your examples, that could be its most important feature. We humans don't have the mental capacity to fit child sweatshops into our limited world view, but an AI might, precisely because it's not limited by biology.
And even that depends on upbringing. Our superintelligent spider, if it's been raised well (and associates with humans), should be friendlier than, say, a superintelligent dog that doesn't associate with humans.
Basically, for me it comes down to how, and whether, it associates with us. And either way: what does it stand to gain from exploiting us?
You're arguing that it will act like us because it will be intelligent.
That doesn't hold. Dogs have millions of years of evolution as pack mammals with leaders; spiders don't.
We are superintelligent compared to spiders, yet we don't care about what spiders care about just because we grow up with spiders nearby. We still kill them, wreck their habitats, and ignore them.
We don't exploit them. They're too irrelevant to be exploitable. We bulldoze them away and put buildings millions of times bigger than them on top.
A superintelligence that is amoral won't care about us just from being near us, any more than we start caring about wrapping wasps in silk just from being near spiders. We can't associate with spiders, and an AI won't automatically be able to, willing to, or interested in associating with us unless we code that in. It won't exploit us; it will go about its goals without considering us as anything special or interesting.