They'll be decided by AI, not by governments, corporations, or individuals.
There's still a kind of wilful blindness about what AI really means. Essentially it's a machine that can mimic human behaviours more convincingly than humans can.
This seems like a paradox, but it really isn't. It's the inevitable end point of automatons like Eliza, chess, go, and LLMs.
Once you have a machine that can automate and mimic social, political, cultural, and personal interactions, that's it - that's the political singularity.
And that's true even if the machine isn't completely reliable and bug-free.
Because neither are humans. In fact, humans seem predisposed to follow flawed, sociopathic, charismatic leaders, as long as those leaders trigger the right kinds of emotional reactions.
Automate that, and you have a serious problem.
And of course you don't need sentience or intent for this. Emergent programmed automated behaviour will do the job just fine.
> Essentially it's a machine that can mimic human behaviours more convincingly than humans can.
Isn't this impossible by definition? Nothing can be more convincingly human than humans, or else it would be something else.
Perhaps you mean mimic charismatic or intelligent humans more than most, or the average, human?
> Once you have a machine that can automate and mimic social, political, cultural, and personal interactions, that's it - that's the political singularity.
Do you mean online only? Because I imagine we're still quite a ways from physical machines convincingly mimicking humans IRL.
> And that's true even if the machine isn't completely reliable and bug-free.
If the machine makes inhuman mistakes the humans will likely notice and adapt.
> Isn't this impossible by definition? Nothing can be more convincingly human than humans, or else it would be something else.
Why isn’t it theoretically possible for an AI to pass the Turing test so hard that more than 50% of the time humans think the AI is the real human? That would effectively be more convincingly human (to humans) than humans are.
> Once you have a machine that can automate and mimic social, political, cultural, and personal interactions, that's it - that's the political singularity.
The machine can mimic, but it is still completely reactive in nature. ChatGPT doesn't do anything of its own accord; it merely responds. It has no opinion, no agenda, and no real knowledge or understanding. All it can do is attempt to respond like a human would, but it cannot reason on its own. Ask it about its views on a political issue, and it won't think about its stance and the reasons for taking it; it will just produce what its training suggests an answer to that question ought to look like.
The panic about a machine takeover is completely overblown, driven by people who don't really understand what these machines are and how they work. We are still far, far away from the point where AI would be capable of actually making political decisions.