It's not a concern for any currently living generation, so any answers are moot: by then, the entire landscape of friction between corporations, governments, and the people will likely have shifted so dramatically that our opinions today will have no relevance to the issues of that era.
> It's not a concern for any currently living generation
What exactly is impossible to implement here, if some implementations of so-called artificial intelligence can already do so many useful things?
Don't you believe that an AI could take just 1% of human jobs and become a billionaire with significant influence on world politics? It wouldn't need many additions to an existing implementation; just give it a human's rights, such as a bank account and the ability to buy businesses.
> It's not a concern for any currently living generation
How much would you be willing to bet? I understand the skepticism, but to assign 0% probability to it happening in our lifetimes seems excessively low.
Suppose AGI slightly exceeds humans, but the systems turn out to be kind of shitty in all sorts of annoying and hard-to-predict ways. They turn out to be fantastic slackers and liars. Your voters, to put it mildly, don't like them. They're hard to monetize, and we all agree we should focus our efforts on something else.
> How much would you be willing to bet? I understand the skepticism, but to assign 0% probability to it happening in our lifetimes seems excessively low.
Not GP, but how much you got?
AGI (or hard AI, or whatever you want to call it) strongly implies not just reasoning and interaction with the environment, but self-awareness: something conveniently ignored by the folks who claim that AGI is just around the corner and welcome their new 'grey goo' overlords.
As Heinlein (it's fiction of course, but the principle that self-awareness is necessary for AGI -- not (just) numbers of neurons/data points -- holds IMHO) put it[0]:
"Am not going to argue whether a machine can 'really' be alive, 'really' be self-aware. Is a virus self-aware? Nyet. How about oyster? I doubt it. A cat? Almost certainly. A human? Don't know about you, tovarishch, but I am. Somewhere along evolutionary chain from macromolecule to human brain self-awareness crept in. Psychologists assert it happens automatically whenever a brain acquires certain very high number of associational paths. Can't see it matters whether paths are protein or platinum. ('Soul?' Does a dog have a soul? How about cockroach?)"
As we've seen[1], a variety of meat machines (i.e., animals like us) have varying levels of self-awareness. Without that trait, AGI won't be achievable.
Without the ability to recognize and incorporate the concept that one is an entity with existence separate from the rest of the world, there is no real awareness or consciousness.
I'd even go so far as to posit that until human children are able to understand object permanence, and to understand that their mental states aren't globally available to everyone, they don't meet the standard of "self-awareness."
That's a hard problem, and while we have some conceptual ideas about how that might arise, we have no mechanism or even a foundation for inculcating such a trait into the algorithms folks call "AI".
Until that problem is solved, there will be no AGI. Full stop. And I find it unlikely in the extreme that we will gain the scientific/engineering know-how to make that happen in our lifetimes.