(The deliberate construction of an AI that pays people to produce lies to damage other people's health and welfare sounds like a farfetched science fiction plot. Accidentally building one, and then trying to deny responsibility for it, is a far more plausible thing for humans to do to ourselves. Just as Schneier talked about the "Exxon Valdez of privacy", what we have now is the tetraethyl lead of video entertainment.)
There is a lot to be said about the stupidity of the YouTube algorithm, which is basically designed to give you more of the same. However, the NYT's nostalgia for the times when it could simply deplatform whoever it did not see fit to host is in a league of its own.
I think it is worth pointing out that a lot of this 'artificial stupidity' is simply a problem of scale. People may think of Google as a leading AI company, and in many respects that is true, but in so many others it is not. When Google deploys a model it has to work across billions of users, where the margins on each user are tiny, so the models have to be damn simple. The industry has far better than what Google offers in most cases, but because of Google's scale, stupidity is all they can afford.
I would add to this line of reasoning that companies don't particularly care about a given model one way or the other, as long as it makes them money and is easy to manage. Not all these models will be good for users, however, nor will the damage a model later causes in other systems be obvious.
I'm not sure YouTube's impact on politics is really as concerning as someone being labeled a criminal just because of their color, or someone's insurance claim not getting paid because of some spurious reason which the model has identified as an important variable.
I think that, as a bunch of humans who understand this, we have to really start working on saying "no" to the grey area[1] of these predictions.
Oh wow, that's almost a real-life "Sort By Controversial", Scott Alexander's short story about what happens when an algorithm optimizes for controversial content so warping that each side thinks the other must have misunderstood it.
They should make YouTube great again. It used to be that I could watch 9/11 videos on the anniversary and there'd be real heartfelt personal videos from actual people involved; now most of that's been crowded out by dramatic conspiratorial junk.
It'd also be great if, for every video I watch, it didn't recommend Jordan Peterson on one of his social-justice sprees, complaining about the newest thing he's offended by this week. That got old last year.
No, it did not. By some takes, society is in its most dire crisis ever and he's telling people they can do something other than panic. That's popular, get over it.
Out of all the crises society has ever faced, you're talking about one of the most peaceful times in mankind's history. This is why people are getting annoyed at YouTube's recommendations, and it's part of the problem: people like you and Peterson trying to fight over people's minds and telling them you need to be offended by this or you need to do that. No, I just want to watch my nature docs, I don't need people trying to recruit me to fight in their own personal culture wars.
> Out of all the crises society has ever faced, you're talking about one of the most peaceful times in mankind's history.
The crisis is beneath your nose.
> No, I just want to watch my nature docs, I don't need people trying to recruit me to fight in their own personal culture wars.
That is a healthy attitude, good for you! Jordan Peterson's not reaching out to people who are well adjusted and have something going on in their lives. He's presenting an alternative to resentment, the particular kind of resentment that lends itself all too well to scapegoating.
If you are not put upon by his ideological opponents, that's great; but many people are, and I like personal responsibility as a message a heck of a lot better than blaming a merchant class, a race, or an unfalsifiable conspiracy, which seem to be the popular alternatives to a narrative of self-ownership.
The fact that you, a particularly unusual person, are not interested in his work is no indication of whether it is popular enough to warrant promotion by YouTube's algorithm. Because people YouTube considers similar to the natural audience of that work will tend to watch it at length when given the choice, YouTube figures it ought to promote it. That seems like the way a recommendation engine ought to work. If not that way, then how?
Added: to sum up, I think it is at least morally acceptable that YouTube has a recommendation system based largely on how much time you are likely to spend watching the content. If that content is monetized, it is not pleasant that they would prefer monetized content, but it is at least defensible. The fact that you do not like the particular content mentioned by the parent comment is neither here nor there.
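To make that concrete, here's a minimal sketch (in Python) of the kind of watch-time-weighted ranking I'm describing. It is a toy guess, not YouTube's actual system; every function name and weight in it is a hypothetical stand-in:

```python
def score_candidates(user, candidates, predicted_watch_time, is_monetized):
    """Rank candidate videos by expected watch time, nudged toward monetized ones.

    Toy sketch only: the watch-time predictor and the 1.1x monetization bump
    are hypothetical stand-ins, not anything YouTube has documented.
    """
    scores = {}
    for video in candidates:
        score = predicted_watch_time(user, video)  # e.g. output of a trained model
        if is_monetized(video):
            score *= 1.1  # hypothetical preference for monetized content
        scores[video] = score
    return sorted(candidates, key=lambda v: scores[v], reverse=True)

# Example: a user whose lookalikes binge long lecture videos.
recs = score_candidates(
    user="u123",
    candidates=["nature_doc", "peterson_lecture"],
    predicted_watch_time=lambda u, v: {"nature_doc": 12.0, "peterson_lecture": 95.0}[v],
    is_monetized=lambda v: v == "peterson_lecture",
)
print(recs)  # ['peterson_lecture', 'nature_doc']: predicted watch time dominates
```

Note how the user's stated preferences never appear anywhere; only predicted watch time does, which is the whole complaint upthread.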
What is he telling people they can do? Other than stand up straight and ignore the crisis until they’ve cleaned their rooms? (I’ve already read the book.)
The NYTimes is a very biased, liberal source. Every time an election doesn't go as planned, as in right-wing parties winning, media such as the NYTimes or the Washington Post blame it on YouTube/Facebook/AI and everything else, as if a person couldn't rationally vote for right-wing policies and political parties.
YouTube recommendations feel like a direct impediment to democracy. When your recommendations consist of the same crap all the time, it may seem as though that stuff carries more weight than it really does.
I wonder to what degree YouTube is responsible for the current political climate.
The same with Twitter, really. When you only follow and see posts from people you choose, how are you supposed to give important issues a fair shake and debate viewpoints that differ from yours?
Which calls into question the whole idea that democracy even works and that people can actually think for themselves and aren't just controlled by whatever propaganda happens to show up in front of their faces. I don't even argue politics with people any more because it's almost 100% what news sources they trust and find credible.
It's all just rumor and innuendo these days. The news sources people trust and find credible cite "anonymous sources" that say, for example, that Trump colluded with Russia or that the Clintons are doing various unspeakable things, and then you're arguing over these supposedly credible anonymous sources that each person reads and trusts. Since the sources are anonymous and nobody actually goes on the record, it's just a bunch of back and forth with no basis in facts, except which news sources people trust and find authoritative.
So are you suggesting that if someone was mistaken in one line of reasoning or idea, that they are necessarily wrong on all?
If not... what's your point, caller? All I can see in your statement is someone trying to avoid challenging an idea they don't like on its merits.
Actually, I'm surprised at how many participants in public discourse today, especially amongst the so-called "educated" classes, are actually willing to make such arguments with a straight face and even give them credence. All it is is ad hominem... a logical fallacy. Shameful state of affairs. (...and maybe it was never different and I'm just noticing it more now...)
I'm saying that he might not be right all the time, not that he is definitely wrong all of the time. It is possible that his "democracy is fundamentally broken" beliefs are correct, but we shouldn't just say "ooh, Aristotle was smart and believed this, so it's a reasonable claim" and move on.
I think he is wrong about democracy and I bring up other cases where he is wrong so people will think twice about his beliefs about democracy.
It is of course true that arguments should be evaluated on their own merits. However, we rarely have time to do that, and so we trust authoritative thinkers based on their record of producing good reasoning. The commenter who quoted Aristotle invoked his authority to lend credibility to the quoted statement. You would have discarded this statement unless it was shown to be a quote of Aristotle. The next commenter reduced the authority of Aristotle by calling his record of good reasoning into question.
When appeal to authority is invoked, it is perfectly reasonable to question the validity of said authority.
Everybody is allowed to vote, as long as they are white: felt like a real democracy, but it's a fake.
Everybody is allowed to vote, except women: felt like "the best democracy" once upon a time, but it's also fake. Not even half a democracy.
Many of the things that are normal today could (and will) be deeply embarrassing in the future, and about as similar to a real democracy as a glitter-coated plastic imitation of a turd.
A real democracy is difficult to find, but really easy to spot. It must fulfill these three rules: 1) one adult citizen, one vote; 2) all citizens are equal, and therefore all of their (legally issued) votes score the same. No vote can count double or x100; the only vote scoring less than the others is the x0 blank vote. AND 3) everybody can apply to be a candidate if they so desire, and participate on an equal footing.
If one of these three rules doesn't hold, you have spotted a fake. Everybody can vote freely and is encouraged to vote, but only one party is allowed to stand? That's rule 3: you have found a dictatorship.
If there is a group of people legally allowed to vote, who cast a valid vote, but their votes are then zeroed out and removed in post-production, then it's rule 2.
I assume those are supposed to be necessary rather than sufficient conditions, because my immediate thoughts are:
1: Why adults? Why citizen? How do you define each?
2: Why make them equal when you just excluded minors and non-citizens entirely — e.g. you could make it so 13-18 year olds get counted as (years less than 18)/5 of a vote, why is that bad?
3: why is that important, why not require passing a basic civics qualification?
Meta: when non-white people were disenfranchised, were they also treated as “not people”? If so, what other categories currently considered “not people” should in future be treated as people? (If any)
> 2: Why make them equal when you just excluded minors and non-citizens entirely
There is a worldwide consensus that people from other nations are not allowed to participate in elections for the government of a country. We would need to change the legal definition of a sovereign country to allow it, and the benefits are unclear (for a start, the president of every small and mid-sized country would be Chinese).
There is also a worldwide consensus that laws have to treat minors differently from adults: more indulgently. They have different rights and duties. Minors are expected to make mistakes.
Allowing a toddler to vote for a party would imply first explaining what terms like "left", "my left, not your left", "right", "communism", "liberalism", "populism" or "fascism" mean, when they should be smashing windows with baseballs instead. This would come close to indoctrinating boys and girls and forcing them to choose a side (and to deal with the unavoidable consequences during an already confusing and complex phase of their lives).
> 3: why not require passing a basic civics qualification?
Let people decide with their vote whether someone is a politician or a clown. That is the purpose of voting. If the semi-automatic lonely boy is democratically elected by the majority, the people have spoken, and I'm fine with that.
It should be understood that rule 2) means neither the US nor the EU is a "real democracy".
The US fails due to the Electoral College issue. And the EU fails on the fact that smaller countries have more representatives per citizen in the EU Parliament than larger countries do.
It would be interesting to know if there actually is a democracy that fulfills those rules...
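Back-of-the-envelope, on the EU point above: the MEP counts and populations below are rough approximations, but they show the size of the rule 2 violation.

```python
# Rough illustration of rule 2 failing in the EU Parliament.
# Seat counts and populations are approximate, for illustration only.
seats = {"Germany": 96, "Malta": 6}
population = {"Germany": 83_000_000, "Malta": 500_000}

for country in seats:
    per_seat = population[country] / seats[country]
    print(f"{country}: ~{per_seat:,.0f} citizens per MEP")

# Germany: ~864,583 citizens per MEP
# Malta:   ~83,333 citizens per MEP
# So a Maltese vote carries roughly ten times the weight of a German one.
```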
Democracy (in the general sense) works just fine. It relies on groups of people of a certain geographic area physically getting out and interacting. Democracy isn't some guy living in his closet plugged into the web and making life choices based on AI-selected video streams. That's some kind of transhumanist nightmare.
AI is an extinction-level threat to humanity, but when you say that, people think you're talking about Terminator robots laying waste to the landscape. The scenario this article talks about is much more realistic: hundreds of little stupid AIs tearing apart the inner workings of our humanity by giving us exactly what we want (but not what we need).
Your comment made me double-take by how succinctly you put an ancient and profound sentiment. The ancient Greeks did not know about computers, but a horde of powerful yet incompetent demigods accidentally destroying humanity by helpfully giving us everything we ask for? I think they'd have no problem at all with that concept!
I must disagree about democracy: abstracted out, it relies only upon the distribution of power and dependencies.
I keep on hearing AI claimed as an extinction level threat yet never has an actual mechanism been given - just a pile of tropes taken as dogma.
Let alone the fact that if humanity does something stupid enough with basically anything, it could be an extinction-level threat. A comet and a sufficiently large number of people reenacting Heaven's Gate would be an extinction-level threat with no inherent technology, not even communications.
Mass adoption of plutonium codpieces/IUDs and deliberate refusal to recognize the effects of radiation poisoning for fear of effect on commerce or pride could be an extinction level event.
AI won't save us from being a pile of complete idiots unless it is vastly superhuman but it cannot be blamed for death by stupidity.
"I keep on hearing AI claimed as an extinction level threat yet never has an actual mechanism been given - just a pile of tropes taken as dogma."
I believe that is the point I'm making: by digitizing the human experience and optimizing certain easily-optimized chains of thought, you will never see the mechanism; nor will I.
There is no mechanism in the sense you seem to be asking for.
This is a preponderance of the evidence argument, not a geometric one, so even if we completely understand one another, it's perfectly fine for you to feel the point hasn't been made and I to feel like it has.
We all know various situations where people are given what they ask for and it ends up destroying them. People who suddenly win the lottery don't generally have a bright future ahead of them. People with paranoia issues who spend a lot of time off medications alone researching things usually don't end up in a good place. Rebellious youths experimenting with opiates are in a dangerous place. Isolated social groups with tight moral strictures have problems in a larger secular society.
I don't know how many of these situations you'd like listed, but there are easily dozens, and that's speaking in a generic sense. Once you start individually customizing the scenarios, say an isolated youth with some tendencies towards paranoia living in a tightly-controlled social group, the scenarios expand without limitations.
And that's what current AI promises, customized experiences in various situations based on all sorts of variables you and I may never have considered. You do this with every person, in more and more situations, and the impact is undecidable. Yes, you don't get it. The reasoning doesn't hold up. That's because if I could make a specific case about one particular scenario, it wouldn't be applicable to the argument I'm making.
I wish I could say we're performing a wide scale social experiment that we've never seen before. But the word "experiment" implies a lot of agency that isn't there. We're just mucking around with millions of variables simultaneously across a population of billions and telling people that because there's nothing obviously bad to be seen, nothing bad must be there. Then we end up reading these vague studies about how teens who use their cell phones more are more unhappy than those who don't -- and we're unable to process that information in any reasonable context. We're expecting to be able to reason about AI, but if we could do that, we wouldn't need the AI in the first place.
What you're spouting is just religion all over again.
There will always be a certain segment of the population that is into drugs, into religion, into politics, into the mindless entertainment provided by YouTube, ad nauseam.
But it will never be all of humanity, or even most of humanity. The trash still needs to be collected, the electricity still needs to be generated, the food still needs to be made and distributed. And the country still needs to be run.
The ones doing these things are going to understand the reality enough to value doing them.
The danger of AI isn't in a long slow destruction of humanity, it's in a flash event that wrests control from us such that we can never regain it.
Now, whether or not that can, or will, happen is up for debate.
But this argument about how AI is slowly going to destroy us because we're all going to slowly start valuing what it tells us over "real life" is just the same old morality arguments surrounding religion, reskinned. It just means you share their perspective, or their need to enforce their world vision on others.
No, I'm not. Religion is a formalized system of causality about things we do not understand: the sky god wants us to eat grapes, we do not eat grapes, there are floods, therefore we must eat more grapes. It's not wrong or right, it's non-rational.
Religions know how things work, you just can't reason with them. I'm arguing from ignorance: we cannot know. My only additional point is that not only can we not know, we can not know in a billion different scenarios. Odds are many of these scenarios will work out poorly. That's the only "point of faith" my argument calls for. It seems to me to be a reasonable thing to believe.
You seem to feel that this will be a disastrous thing. It's interesting to me how people who don't see problems with AI keep insisting that there must be some huge, horrible result. If there were, as you point out, people wouldn't do it.
You also seem to assume that I'm making some sort of moral value judgment. That's interesting to me as a drug-legalization, open-borders libertarian. I wonder what sorts of morals I am supposed to be having?
No morals or religion is required to understand my argument. We humans work as best we can in various-sized social groups based on each of our understandings of cause and effect, as flawed as it all is. If we change that in a massive way, the obvious conclusion is that we cannot continue to reason about the results, not that they would be morally good or bad. It then logically follows that, for whatever definition of good or bad you hold (moral, utilitarian, whatnot), a lot of bad things are going to happen for which our society has no prior experience. That doesn't seem workable to me.
We gotta stop expecting these arguments to play out in some grand fashion. Boundless optimism vs. religious fear might be a great plot for a movie, but it's highly doubtful the future is going to play out like that at all.
Do you have a blog? If so you should consider writing a post that synthesizes your last few comments in this thread - AI, democracy, and religion. I say that, selfishly, because I would quite like to read it.
1. You're tilting at windmills here; I gave no opinion on what I think the result of AI will be.
2. You didn't understand the comparison to religion.
You could literally take your arguments and reskin them as religious points.
One could even imagine this exact discussion happening when humanity first discovered drugs. Because they feel good, all of humanity will eventually be hooked on them, yada yada yada. Only that presupposes there's no value in procuring the drugs themselves, because the second you have a certain segment of the population procuring those drugs, you have people who 1) have a lot of power, and 2) have a reason for existing beyond simply taking drugs. In other words, the argument contradicts itself.
Now, if an external force had been able to get all of humanity hooked on drugs in a very short amount of time (and takes care of the procurement), then the predictions would be possible because procuring the drugs is no longer valuable for humanity.
The dangers of AI are not that we're slowly going to lose ourselves as we all become mindless zombies watching entertainment. The danger is that, like the drug example, those who procure AI are going to have a lot of power, and if AI itself ever becomes independent of humanity then we could lose all control over our own destiny.
And to loop this back to the religion comparison, there are always people in this world wanting to impress their worldview on others. Which is why your arguments can be reskinned so easily as religious arguments. They use the same techniques you're using here.
Very interesting point about Democracy. It suggests that Democracy has a natural scale, for example a city or town of a certain size. The city-states of classical antiquity, the middle ages, and the renaissance seem to support this. Of course not all the city-states were democracies, still, I think your point about physicality and democracy is grossly under-appreciated today.
I used to ask people what the right software was for a task. (I still do at times.) The answers explained what people liked about the software they used, but an objective A-vs-B comparison was rare, and it didn't do justice to the rest of the set.
Out of curiosity, I decided to install every IRC client I could find, connect to a few servers, open a good number of channels, and learn to use them one by one while looking at memory and CPU usage.
Like many, I have deep thoughts. Mine are as hard for their potential audience to find as those of the many are for me. We do, however, pay a lot of attention to people who make a lot of noise.
I had this hypothesis that people copy other people's political ideology, which is itself copied from others, in long chains that, rather than starting with a person's deep objective thoughts, are just connected in loops long enough for us not to notice, with a number of original thinking nodes insignificant to the result. [Let's call those dictator nodes, for laughs.]
In order to test this rather absurd hypothesis I took the entire list of US presidential candidates and looked at their social media.
This quickly confirmed the loops exist. Fuck, my hypothesis was optimistic compared to reality. I found that close to 100% had Facebook pages and YouTube channels that didn't enjoy enough traffic to account for even the friends and close relatives of the candidate.
Eventually I worked my way up to the Green Party; they had 250 views on their YouTube channel. I wondered about the meaning of it... what does it mean?
I think it means even journalists didn't bother to look at it. The huge apparatus of international journalism did not bother to look past the top five most-screamed-about candidates, while I took a look at a really large number of them.
For democracy to work, we ALL need to look at the menu and then make up our own minds. Instead, what we got is NO ONE looking at the ideas. If 99% of the population had exactly the same opinion about everything and one of us wrote it into a political program, no one would vote for it.
The point I'm trying to get to is this: we've already built the machine that contains us. Small groups of people who cared about something gathered and implemented their ideas. These things are now bolted down so firmly that unmaking their actions takes such an absurdly unrealistic amount of effort that we can at best imagine doing it. We have millions of implementations like that, and they are here to stay. It doesn't even need to be stupid; the idea could have been brilliant 200 years ago.
The stupidity in artificial stupidity will not be in the AI. The system will continue to "liberate" humans from having to think deeply, which will move us further from a position of influence. If it does a bad job, that will actually be beneficial to the end result.
It seems to illustrate our attunement to story and the effect of dominant stories on our worldviews.
I'd suggest it's less a challenge to democracy and more an affirmation of the criticality of a fact-based, unbiased 4th estate, and the scary effects of anyone with a comb-over and green screen being able to make their own "news" show.
So do the politicians in power only give out media licenses to news sources that feed from those "anonymous sources" that support their agenda? The problem is not who is allowed to report; it's that the mainstream news has started citing anonymous sources with ridiculous frequency, and the credibility of those sources is never called into question because nobody knows who they are. When those anonymous sources turn out to be wrong, nobody cares and nobody's reputation is damaged for reporting totally made-up stuff that's likely propaganda from someone with an axe to grind.
10,000 Alex Joneses, Ben Shapiros, and Andy Ngos peddling conspiracy or ideology under the guise of journalism is not better.
You’re right as well of course about anonymous sources being an issue, but your “politicians in power” comment is an unnecessary straw man to the conversation.
I think that's a really good point. When I see 'customize your feed' on sites that report the news, what I actually read is 'don't show me stuff that might challenge my preconceptions'. That's exactly what I don't want.
Absolutely not. Consuming unfiltered news is like staring at the sun. Not only is there too much of it, and too much of it is unreliable or malicious, but too much of it is both irrelevant and horrifying. I'm not in America, so I don't need a live feed of mass shootings. I have already mentally blacklisted most of the UK print media for various reasons, so I don't want their stuff appearing in my feed either.
I've even blocked people on Twitter who I've never interacted with simply because people I follow keep retweeting their US politics stuff and I don't want to read it.
I do follow a few financial and international relations analysis people who I trust for reliable news. A manually, not automatically, curated feed. Otherwise I'm mostly reverse-engineering the news from the jokes made about it. And I get zero of it from Youtube or Facebook.
Democracy works just fine. It might not be what you want, but what if the current YouTube recommendation system is what I want? In a democracy, people like me should still get equal power to decide. There are no right or wrong outcomes in democracy.
No system of decision-making can survive a high level of misinformation. In the case of democracy that means you might be happy and placated, but you won’t have any control no matter how often you vote.
That’s not what I mean. When you have enough disinformation, democracy has no more effect than a video game — vote for the party that you think will lower taxes, they get in, taxes go up because that’s what they actually promised and you didn’t know.
In a democracy, each party is free to influence and convince its voters. If you vote for a lying politician, then that's your failure, and the failure of the other politicians to better convince their constituents.
“Your failure”? Do you believe you can spot disinformation with any reliability? When that misinformation constitutes the majority of all that you think you know about the issues you base your votes on?
But that is not a limitation of democracy. Whatever people end up voting for is not the concern of democracy. By definition, democracy is merely a system where people exercise power by voting; it does not concern itself with the outcome. As long as people have the right to vote, it is a democracy. It's not the concern of democracy if the people vote for the "wrong" politician.
There is no power to be exercised in this scenario. They can vote, but it is literally mere coincidence if anything related to their intentions occurs. It’s like typing onto a keyboard that isn’t plugged in. Or prayers to a non-existent deity. Voting ceases having anything to do with reality the moment reality stops having anything to do with the decision behind the vote. It is action without a discernible causal connection to the result.
(Is it just me, or does our disagreement sound like two people arguing if a tree falling alone in a forest makes a noise, because one thinks “noise” is the vibration and the other thinks it is the qualia?)
Mass media feels like a direct impediment to democracy. When what they publish consist of the same crap all the time, it may seem as though that stuff carries more weight than it really does.
I wonder to what degree the mass media are responsible for the current political climate.
The article generalizes and remains superficial in what it asserts.
"AI that fails" currently will improve in the future and is currently being mitigated in use.
Then there's the toothless "dumbed down" section:
"Some artificial intelligence machines and programs are deliberately ‘dumbed-down’. This marks an entirely different take on the term artificial stupidity. By putting spelling errors in typed messages, not adhering to strict grammar and so on, AI seems less intelligent. These (fully intentional) errors are coded into the system with the goal of creating AI that appears human."
The writer then asserts this is actually a good thing because it provides more human-likeness in customer interaction, for instance.
Attributing "incapability" to AI because it has the technology has no moral compass is also a platitude. Technology can always be used for bad.
Fact remains, the current AI paradigm provides tremendous economic value and is merely an extension of the economic striving for automation. Harping on AI has become a bit of a trend and the author calls for cynicism but ends up with a pretty toothless analysis.
A lot of what we call AI these days (especially neural network stuff) should really be called "artificial knee-jerk reactions". A neural network trained to detect traffic lights is 'intelligent' only insofar as your leg is 'intelligent' when it involuntarily kicks in response to a tap on the knee. Of course, automating a knee-jerk reaction in response to traffic lights can still be extremely useful (say, for self-driving cars). But there's no contemplation involved, no thinking, no deliberation---no intelligence.
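To make the knee-jerk analogy concrete, here's a toy sketch. The weights are random placeholders standing in for whatever training would have produced; the point is that inference is one fixed forward pass, with no step where anything could deliberate:

```python
# A trained classifier at inference time is a fixed reflex arc: input in,
# label out. Weights below are random stand-ins for learned parameters.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(64, 16)), np.zeros(16)  # stand-in "learned" weights
W2, b2 = rng.normal(size=(16, 3)), np.zeros(3)
LABELS = ["red", "yellow", "green"]

def detect_traffic_light(pixels):
    """One fixed forward pass: the 'knee-jerk'. Nothing here can reconsider."""
    h = np.maximum(pixels @ W1 + b1, 0)   # ReLU hidden layer
    logits = h @ W2 + b2
    return LABELS[int(np.argmax(logits))]

print(detect_traffic_light(rng.normal(size=64)))  # reacts, never reflects
```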
Agree. I would argue context makes all the difference in real intelligence, and we can never define context because it's multi-level, slightly different for everyone, and constantly changing.
On top of all that, forcing a certain context across the board can have countless unintended consequences.
Yes we do: it's called humans. This continual bashing of AI is so funny. We never measure the baseline of human activity, which is chock full of stupidity, illogic and unreasonableness. Of course, AI is human-directed, but it will likely be many times more predictable.
I used Google Translate yesterday to translate a French phrase to English and got something like: "It's better to be alone than to be in bad company".
I looked around for a Duolingo-style "report bad translation", "rate" or "give feedback" action but couldn't find one anywhere on that screen.
Artificial stupidity seems like a natural step on the way to artificial intelligence, but we would likely benefit from building in some training wheels.
Possibly irrelevant to this discussion (you decide): the level of stupidity in any human undertaking is almost always a) higher than you'd think, and b) pretty constant across all cross-sections of humanity.
Despite widespread attempts by humans to eradicate it, it exists everywhere; ignoring race/class/education/gender/orientation/profession or any other sub-division you can think of. You'll find roughly the same amount of stupidity in a gaggle of university professors as you will in a humdrum of railway attendants (I made up those collective terms, in case it wasn't obvious).
Not the parent, but sometimes ignoring the rational choice leads to interesting discoveries or results. For creative endeavors it's also good not to overthink things and to let the flow take over.
Stupid people tend to die and set an example, or make bombastic plans that later explode in their faces, so we, the rest of the stupid people, can learn something from the experience. Don't gobble these berries or those shiny mushrooms, don't pet this nice crocodile, don't jump over this small canyon with too short a pole... the list is endless.
If you're a caveman, it would be stupid to play with fire: you don't have any medical care to treat burns, and it's dangerous. But because we did, we started cooking, then smelting metals. So what is stupid at one point becomes knowledge at another point.
Ugh, human stupidity is more than enough for me to worry about right now... global warming will flood my city... US-vs-Russia jingoism will get us all killed in a nuclear war...
And real weapons-grade stupidity, at least in my experience, tends to be wielded by people who are actually very intelligent in general, but having a bad day or an attack of hubris...
I suspect we'll end up with something like the "artificial inanity" described in Anathem, so that the reputation of authors becomes a vital part of the infrastructure of their network.
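Something like this toy sketch, which is purely hypothetical and not from the novel or any real system: posts are ranked by their author's standing, and standing drifts with the author's verified track record.

```python
# Hypothetical reputation plumbing: rank by author standing, update standing
# as an author's claims are later verified or debunked.

def rank(posts, reputation):
    """Order posts by the standing of their authors (unknown authors get 0.5)."""
    return sorted(posts, key=lambda p: reputation.get(p["author"], 0.5), reverse=True)

def settle(author, claim_was_true, reputation, step=0.1):
    """Nudge an author's reputation toward their track record, clamped to [0, 1]."""
    r = reputation.get(author, 0.5)
    reputation[author] = min(1.0, r + step) if claim_was_true else max(0.0, r - step)

reputation = {"careful_reporter": 0.9, "inanity_bot": 0.1}
posts = [{"author": "inanity_bot"}, {"author": "careful_reporter"}]
print([p["author"] for p in rank(posts, reputation)])  # careful_reporter first
settle("inanity_bot", claim_was_true=False, reputation=reputation)  # debunked again
```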
I can't blame YouTube, or any social media, here at all. The enemy here is ignorance, which is the root enabler for spreading this garbage. The only way to fix that is via education. Good luck. A well-educated populace that thinks for itself is less profitable, because you can't sell them a line of bullshit. And you wonder why education budgets are under attack.
Another issue is how we humans emotionally tie education to self-worth in society. Ignorant people don't like to be called ignorant. Educated people telling them they need to be smarter is insulting. So there is a tendency to distrust educated people, because the ignorant person's perception is that educated people think they are bad or broken and need to be fixed. If they don't feel bad or broken, then there must be something more to this; why are we being manipulated... (conspiracy can then take over).
So perhaps we should rethink how to approach the problem and accept that ignorant, impressionable people are a given. Instead of trying to fight propaganda with education, we should disguise education as easy-to-digest propaganda. Don't fight fiction with facts, because you're always going to lose. Just start your own propaganda machine and hope you can outsmart the stupid. Sure, it's manipulation, but how much longer can educated people stay righteous while the world is being burned down around them by the ignorant?
"Instead of trying to fight propaganda with education we should disguise education as easy to digest propaganda."
How is this different from what we are already doing?
I do not just mean that as mindless snark. There's a selection effect: children can only handle even more radically simplified versions of reality than our feeble adult minds can, so by necessity it's already been processed down to that level, because nothing else would stick even a little. A common example is "The Civil War was caused by slavery", which gets mocked, but it's not like there's any explanation you can fit into a middle-school textbook that isn't just as simplified and loaded with the biases of the simplifier. (Which themselves are the result of the bias-loaded simplifications that they were themselves fed...)