Democracy (in the general sense) works just fine. It relies on groups of people in a given geographic area physically getting out and interacting. Democracy isn't some guy living in his closet, plugged into the web, making life choices based on AI-selected video streams. That's some kind of transhumanist nightmare.
AI is an extinction-level threat to humanity, but when you say that, people think you're talking about Terminator robots laying waste to the landscape. The scenario this article talks about is much more realistic: hundreds of little stupid AIs tearing apart the inner workings of our humanity by giving us exactly what we want (but not what we need).
Your comment made me do a double-take at how succinctly you put an ancient and profound sentiment. The ancient Greeks did not know about computers, but a horde of powerful yet incompetent demigods accidentally destroying humanity by helpfully giving us everything we ask for? I think they'd have no problem at all with that concept!
I must disagree about democracy - abstracted out, it relies only on the distribution of power and dependencies.
I keep hearing AI claimed as an extinction-level threat, yet never has an actual mechanism been given - just a pile of tropes taken as dogma.
Let alone the fact that if humanity does something stupid enough with basically anything, it could be an extinction-level threat. A comet and a sufficiently large number of people reenacting Heaven's Gate would be an extinction-level threat with no inherent technology involved, not even communications.
Mass adoption of plutonium codpieces/IUDs, plus a deliberate refusal to recognize the effects of radiation poisoning for fear of the impact on commerce or pride, could be an extinction-level event.
AI won't save us from being a pile of complete idiots unless it is vastly superhuman, but it cannot be blamed for death by stupidity.
"I keep on hearing AI claimed as an extinction level threat yet never has an actual mechanism been given - just a pile of tropes taken as dogma."
I believe that is the point I'm making: by digitizing the human experience and optimizing certain easily-optimized chains of thought, you will never see the mechanism; nor will I.
There is no mechanism in the sense you seem to be asking for.
This is a preponderance-of-the-evidence argument, not a geometric one, so even if we completely understand one another, it's perfectly fine for you to feel the point hasn't been made and for me to feel it has.
We all know various situations where people are given what they ask for and it ends up destroying them. People who suddenly win the lottery don't generally have a bright future ahead of them. People with paranoia issues who spend a lot of time off their medications, alone, researching things usually don't end up in a good place. Rebellious youths experimenting with opiates are in a dangerous place. Isolated social groups with tight moral strictures have problems in a larger secular society.
I don't know how many of these situations you'd like listed, but there are easily dozens, and that's speaking in a generic sense. Once you start individually customizing the scenarios, say an isolated youth with some tendencies towards paranoia living in a tightly-controlled social group, the scenarios expand without limitations.
And that's what current AI promises: customized experiences in various situations, based on all sorts of variables you and I may never have considered. Do this with every person, in more and more situations, and the impact is undecidable. Yes, you don't get it; the reasoning doesn't seem to hold up. That's because if I could make a specific case about one particular scenario, it wouldn't be applicable to the general argument I'm making.
I wish I could say we're performing a wide-scale social experiment that we've never seen before. But the word "experiment" implies a lot of agency that isn't there. We're just mucking around with millions of variables simultaneously across a population of billions and telling people that because there's nothing obviously bad to be seen, nothing bad must be there. Then we end up reading these vague studies about how teens who use their cell phones more are unhappier than those who don't -- and we're unable to process that information in any reasonable context. We're expecting to be able to reason about AI, but if we could do that, we wouldn't need the AI in the first place.
What you're spouting is just religion all over again.
There will always be a certain segment of the population that is into drugs, into religion, into politics, into the mindless entertainment provided by YouTube, ad nauseam.
But it will never be all of humanity, or even most of humanity. The trash still needs to be collected, the electricity still needs to be generated, the food still needs to be made and distributed. And the country still needs to be run.
The people doing these things are going to understand reality well enough to value doing them.
The danger of AI isn't in a long slow destruction of humanity, it's in a flash event that wrests control from us such that we can never regain it.
Now, whether or not that can, or will, happen is up for debate.
But this argument about how AI is slowly going to destroy us because we're all going to slowly start valuing what it tells us over "real life" is just the same old morality argument surrounding religion, reskinned. It just means you understand their perspective, or their need to impose their world vision on others.
No, I'm not. Religion is a formalized system of causality about things we do not understand: the sky god wants us to eat grapes, we do not eat grapes, there are floods, therefore we must eat more grapes. It's not wrong or right, it's non-rational.
Religions know how things work; you just can't reason with them. I'm arguing from ignorance: we cannot know. My only additional point is that not only can we not know, we cannot know in a billion different scenarios. Odds are many of those scenarios will work out poorly. That's the only "point of faith" my argument calls for. It seems to me a reasonable thing to believe.
You seem to feel that this will be a disastrous thing. It's interesting to me how people who don't see problems with AI keep insisting that there must be some huge, horrible result. If there were, as you point out, people wouldn't do it.
You also seem to assume that I'm making some sort of moral value judgment. That's interesting to me as a drug-legalization, open-borders libertarian. I wonder what sorts of morals I am supposed to be having?
No morals or religion are required to understand my argument. We humans work as best we can in various-sized social groups, based on each of our understandings of cause and effect, flawed as they all are. If we change that in a massive way, the obvious conclusion is that we can no longer reason about the results, not that they would be morally good or bad. It then logically follows that, for whatever definition of good or bad you hold (moral, utilitarian, whatnot), a lot of bad things are going to happen for which our society has no prior experience. That doesn't seem workable to me.
We gotta stop expecting these arguments to play out in some grand fashion. Boundless optimism vs. religious fear might be a great plot for a movie, but it's highly doubtful the future is going to play out like that at all.
Do you have a blog? If so you should consider writing a post that synthesizes your last few comments in this thread - AI, democracy, and religion. I say that, selfishly, because I would quite like to read it.
1. You're tilting at windmills here; I gave no opinion on what I think the result of AI will be.
2. You didn't understand the comparison to religion.
You could literally take your arguments and reskin them as religious points.
One could even imagine this exact discussion happening when humanity first discovered drugs. Because they feel good, all of humanity will eventually be hooked on them, yada yada yada. Only that presupposes there's no value in procuring the drugs themselves, because the second you need a certain segment of the population procuring those drugs, you have people who: 1) have a lot of power, and 2) have a reason for existing beyond simply taking drugs. In other words, the argument contradicts itself.
Now, if an external force had been able to get all of humanity hooked on drugs in a very short amount of time (and took care of the procurement), then the predictions would be possible, because procuring the drugs would no longer be valuable for humanity.
The dangers of AI are not that we're slowly going to lose ourselves as we all become mindless zombies watching entertainment. The danger is that, like the drug example, those who procure AI are going to have a lot of power, and if AI itself ever becomes independent of humanity then we could lose all control over our own destiny.
And to loop this back to the religion comparison, there are always people in this world wanting to impose their worldview on others. Which is why your arguments can be reskinned so easily as religious arguments - they use the same techniques you're using here.
Very interesting point about Democracy. It suggests that Democracy has a natural scale, for example a city or town of a certain size. The city-states of classical antiquity, the middle ages, and the renaissance seem to support this. Of course not all the city-states were democracies, still, I think your point about physicality and democracy is grossly under-appreciated today.
I used to ask people what the right software was for a task. (I still do at times.) The answers explained what people liked about the software they used, but an objective A-vs-B comparison was rare, and it didn't do justice to the rest of the set.
Out of curiosity, I decided to install every IRC client I could find, connect to a few servers, open a good number of channels, and learn to use them one by one while watching memory and CPU usage.
Like many, I have deep thoughts. Mine are as hard for their potential audience to find as everyone else's are for me. We do, however, pay a lot of attention to people who make a lot of noise.
I had this hypothesis that people copy other people's political ideology, which is itself copied from others, in long chains that, rather than starting from anyone's deep, original thoughts, are just connected into loops long enough for us not to notice - with the number of genuinely thinking nodes insignificant to the result. [Let's call those thinking nodes dictator nodes, for laughs.] A toy sketch of the idea in code follows below.
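Here's that sketch - a toy model I made up purely for illustration; the population size, the five-thinker count, and the one-source copying rule are all invented assumptions, not measurements:

    # Toy model of the copy-chain hypothesis (all numbers invented).
    # Each person copies their political ideology from exactly one other
    # person; a handful of "thinking nodes" copy no one. Follow each chain
    # to see whether it ends at a thinker or closes into a loop.
    import random
    from collections import Counter

    random.seed(0)
    N = 10_000       # population size, made up
    THINKERS = 5     # genuinely original "dictator nodes", made up

    # copies_from[i] is whom person i copies; thinkers copy no one.
    copies_from = [None if i < THINKERS else random.randrange(N)
                   for i in range(N)]

    def chain_end(i):
        """Follow i's copy chain until it hits a thinker or revisits a node."""
        seen = set()
        while copies_from[i] is not None:
            if i in seen:
                return "loop"           # the chain closed on itself
            seen.add(i)
            i = copies_from[i]
        return "thinker %d" % i         # the chain reached an original thought

    print(Counter(chain_end(i) for i in range(N)))

With these made-up numbers, nearly every chain should terminate in a loop rather than at one of the five thinkers: opinions circulating with no original source, which is exactly the shape of the hypothesis.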
In order to test this rather absurd hypothesis, I took the entire list of US presidential candidates and looked at their social media.
This quickly confirmed the loops exist - fuck, my hypothesis was optimistic compared to reality. I found that close to 100% had Facebook pages and YouTube channels that didn't enjoy enough traffic to account for even the friends and close relatives of the candidate.
Eventually I worked my way up to the Green Party; they had 250 views on their YouTube channel. I wondered about the meaning of it... what does it mean?
I think it means even journalists didn't bother to look at it. The huge apparatus of international journalism did not bother to look beyond the top 5 candidates most screamed about, while I took a look at a really large number of them.
For democracy to work, we ALL need to look at the menu, then make up our own minds. Instead, what we got is NO ONE looking at the ideas. If 99% of the population had exactly the same opinion about everything and one of us wrote it into a political program, no one would vote for it.
The point I'm trying to get to is this: we've already built the machine that contains us. Small groups of people who cared about something gathered and implemented their ideas. Those things are now bolted down so firmly that unmaking their actions takes such an absurdly unrealistic amount of effort that we can at best imagine doing it. We have millions of implementations like that, and they are here to stay. It doesn't even need to be stupid; the idea could have been brilliant 200 years ago.
The stupidity in artificial stupidity will not be in the AI. The system will continue to "liberate" humans from having to think deeply, which will move us further from a position of influence. If it does a bad job, that will actually help bring about the end result.