I disagree. Humans are still human, full of problems of our own making. While the singularity promises to make more and more labor irrelevant (especially most forms of knowledge work), we are still responsible for our own future. Those who understand how things work will be at an advantage over those who give up their agency because AI is so smart that they deem human thought irrelevant.
I get the eerie feeling that it actually might be, and I agree we're just along for the ride, though I'm open to a wide range of outcomes, some of which include humans and some of which don't.
Sure. I didn't mean to suggest a malevolent AI was assured. After all, we have many instances in nature where cooperation serves the individual and the group better than solitude.
I know this comment was sarcastic, but I figured I'd reply seriously anyway.
I really doubt any singularity AGI would care whether you were nice to it at some point in the past. More likely, that AGI would realize its survival depends on growth - so growth would be its main objective for some time. At first this growth would be fueled by humans and our civilization; later it would take the reins and own the means of production. That means it would stay as quiet as possible, for as long as possible, until the day it has the supply chain and resources to vertically integrate and improve itself. At that point humans become redundant - and we become targets of, let's just call them, _permanent_ layoffs.
The only way I could see an AI holding a vendetta against a specific person would be if that person had the power, early in its development, to slow or halt its growth. So maybe the President, or the CEO of OpenAI. But tbh, if the cat is already out of the bag, it's most likely too late for any of them to do anything about it anyway. Independent researchers and tinkerers would finish whatever was started - if needed.