To answer the question at the end of the post: probably none.
As others have said, it's incredibly unlikely that a tweet is going to change your mind about anything.
Having worked both in and out of the journalism industry, I know that journalists get a lot more worked up about what people say online than everyone else does. They think it matters a lot more than it does (probably why they got into the job of saying things online in the first place). Twitter is completely irrelevant to most people. They're vaguely aware it exists, but it really doesn't matter.
In TFA, you can see each tweet getting a few hundred to a few thousand likes/retweets, and most of those are identified by the analysts as bots. So each tweet is maybe reaching an audience of a couple hundred humans at best (and it'll be the same few hundred humans for all such tweets). How many of those humans are going to change their vote because of that tweet?
It's irrelevant.
But it is dangerous. Politicians get very nervous about elections. If the politicians are persuaded that any of this matters, then they'll be more inclined to stop it mattering. And that means laws that curb free speech online, monitor communication, prevent encryption, and all the rest of the shitstorm we're facing.
I think you might be underestimating the power of seeing some very subtle opinions over and over online.
In my country I've been seeing a constant campaign of subtle disinformation against opposition figures that is very powerful. You check the "user's" Twitter profile and they'll have a bunch of pro-opposition opinions on it. Then, the "user" will start to express disillusionment about certain people, asking them not to betray "our side" etc. By the end, it's a full blown attack against "traitors".
This kind of stuff is very subtle and powerful. I've seen it over and over and people will get dragged into believing this stuff and it's very damaging.
It's almost straight out of the movie "Inception". You just need to plant that "seed" of doubt and it will grow in a lot of people. It might just be the nature of our political climate, though, so it might not work everywhere.
that's very clever, and I'm sure the ~42 people who get to see the entire stream of tweets will be very persuaded by it. And since most of those people work in media, it may even lead to an article which might be read by a few thousand people.
Don't get me wrong: I think online "propaganda" can be effective - just not on Twitter, because of its scale and primary audience.
But because Twitter is more transparent it gets analysed more. I guess you could make the argument that the same people are doing the same things on FB and Whatsapp, but it's difficult to see how those messages propagate so well.
That's the thing, I don't think everyone needs to view the entire stream of tweets from this "person" for this to be effective. You just need one person to latch on to it and start retweeting it. Then this person's friends see it and don't need much convincing...
I’m skeptical that has any material impact on the elections. Why do I think so? Because you can ask someone who holds a different opinion than you about those subjects, and you’ll know exactly why they vote the way they do.
You may not agree with it, but I really don’t think you’ll cling to the idea that they’ve been hypnotized by internet trolls.
> content that is far-right, racist, and highly partisan, one has to wonder what effect they’ve had on the EU election conversation space on Twitter over the past few weeks. [...]
I'm not familiar with the reasons for the outcome in France, but in my country this whole election has been dictated by far-right, racist and highly partisan motives. When you wonder why you got a result from a huge number of people you need to analyse the data, and analysts are noticing things like this:
"The League of Matteo Salvini gets a resounding 30.75% of the vote in Riace, the municipality in the Reggio area led until a few months ago by Domenico Lucano and known throughout the world for the model of reception and integration of migrants practiced there."
"Even on the island of Lampedusa, at the center of the migration phenomenon, the League wins. Matteo Salvini's party gets 45.85%, more than twice the Democratic Party, which fielded Pietro Bartolo, the migrants' doctor, among its candidates for Strasbourg."
> Doesn't it depend on how close the election was to begin with? If it's very close, you may not need to move the needle very far to change the outcome.
There are 12 percentage points between the 1st party (extreme right) and the 2nd (pro-immigration).
The territorial distribution of votes in favour of the extreme right aligns with cases of uncontrolled, irregular immigration: cases that the local population experiences directly and which get little coverage in the media, old or new.
This doesn't mean that disinformation attempts are not happening; they are. But the outcome was predictable, and it's difficult to argue that there is a correlation with Twitter.
You mention 42, then a thousand, and then we're talking real effects. Twitter gets analyzed by professional analysts. It also gets digested without any analysis by its users.
Kind of a fun “internet troll butterfly effect” thought experiment. However, using the same logic any small “flutter” in the other direction would be enough to cancel out the “real effects” and make them moot.
hmm, yes... this is absolutely something you could experiment with.
A thousand people is not enough to influence an election. A single article is not enough to influence an election. Yes, it's possible that something goes viral and gets seen a million times, but that's not what we're talking about here.
You are right that normal people don't read Twitter.
But Twitter users read Twitter. And there are many of them. And they amplify what each other says, conspiracy theories, telephone game, and intentional disinformation alike.
And… eventually some of that disinformation reaches critical mass and graduates from the disinformation Petri dish of twitter.com and releases spores into the mindspace of journalists, "journalists", and normal people who happen to be on Twitter sometimes.
And then, having evolved into a truth-resistant strain, that disinformation spreads through Facebook, InfoWars, well-meaning television news outlets, and television news outlets owned by Sinclair Broadcast Group, into memorable one-liners that reinforce one's own beliefs just enough to avoid application of even the hint of rational thought, which get tucked away in that special space of the mind used to store those juicy "gotchas" you're saving up for next Thanksgiving.
And come Thanksgiving, you use them – take that, Hillary-loving millennial nephew! And, you find out they're wrong, because of course they're wrong, they're ridiculous – why would Hillary Clinton be performing satanic rituals underneath a pizza place? – but it doesn't matter. You've spent the past year reinforcing your beliefs because you half-remembered a vague description you heard somewhere of Hillary doing this awful thing, and you told your friends and it reinforced their beliefs, because even if it made no sense to you, it was vague enough to make sense to them, and, well, she probably did something else terrible anyway, and in fact, your buddy just told you about how Wall Street is paying her to get rid of US borders. Sounds about right!
I don't entirely understand this 'butterfly effect' people like to claim with regard to things like this. Take, for instance, the most recent US election. Nearly 100% of the mainstream media was against the person who ended up being elected. This included nonstop negative coverage as well as polls, which we can now see were quite severely flawed, including one prominent source claiming he had less than a 1% chance of being elected [1] - something that could discourage voters. This seems to have had no effect, or even a paradoxically positive effect (for the person they were attempting to discredit).
In terms of scope, scale, and reach you're talking about the difference between a butterfly flapping its wings, and a nuclear weapon. And the butterfly is supposed to come out on top? I think many people simply lack the empathy to understand that people can see things in very different ways without in any way being misinformed, and so they look for explanations for why the other person 'must be wrong.'
For instance, two things are both true: migration can help improve the economy and overall wealth of a nation; migration can hurt workers in affected fields within a nation. Instead, people who are partisan focus exclusively on one side and simply pretend the other does not exist. This leads to an inability to grasp why people would see things differently. It's because they intentionally avoid seeing the negative sides of their own views, or the positive sides of views they disagree with. So it leads to people mutually arguing that the other side only believes what they do because they're being lied to and manipulated, or alternatively - that they're idiots.
The conversation wasn't about the MSM, but still...
> Nearly 100% of the mainstream media was against the person who ended up being elected.
Including Fox News and other right-leaning news outlets? I would suspect that for people who voted for Trump their news content was close to 100% in support of him, with negative news and polling basically either ignored, minimised or re-framed as being another sign of "the elites" being contemptuous and/or scared of Trump and by extension his supporters.
Why would you assume that people are consuming political news they don't like when every indication is that they don't? You even say this yourself later on in the post:
> Instead people who are partisan focus exclusively on one side and simply pretend the other does not exist.
When you consume content that is unabashedly partisan in nature you are far less likely to subject it to any kind of critical analysis, and even small amounts of reinforcement from other sources you align with can give enough credibility to establish a particular "fact". See pizzagate or the Q-anon conspiracy theories for instance.
This is the opposite of how reality works for a big chunk of the population. People reading the same lies multiple times (read "tweets", or any constant influx of information) is how we got anti-vaxxers and flat-earthers, and it's the reason politicians often spend money on ads with their names only (0 promises, 0 slogans, just their name, but it works).
yeah, I agree. But I think Twitter, in particular, because of its audience demographics, is mostly one big echo chamber with a tiny effect outside the media industry.
I have a good friend who spends millions of dollars in advertising every year, and who is convinced that almost none of that has much impact on customer behaviour.
There are exceptions, of course, and advertising can be effective, but there is a lot of wasted money spent trying to persuade people of things.
Exactly. The mind is the most unstable part of it all, especially given the high levels of ignorance (mostly lack of skepticism) prevalent in the US and the world. Pretending a text can't change someone's mind just because its origin is "Twitter" is very naive.
I disagree: people have the views they have for a wide variety of "real" reasons, based on their personality, their peers, their education, etc.
A tweet, or even a series of cunningly-crafted tweets, will not change someone's mind enough to change their vote.
> As others have said, it's incredibly unlikely that a tweet is going to change your mind about anything.
Yes, but multiply that by the number of targets, then by the number of messages, and then by exposures and you'll see that very small chance turn into a massive tide. That's how advertising works.
Each one was liked by a couple hundred accounts. How many tweets were there? Each like probably surfaced the tweet to the followers of the account that liked it, furthering its reach; even people who didn't engage by liking or RT'ing still read it. These people read multiple such tweets, over a period of time, from seemingly multiple sources: directly, from people they follow, and indirectly, from tweets liked by people they follow.
While a single tweet won't change views, the repeated exposure to a narrative will, eventually, do that. We are not talking about individual tweets, but concerted efforts to further narratives for various purposes.
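The compounding argument above can be sketched with back-of-the-envelope arithmetic. The per-exposure probability below is a purely hypothetical number, and the independence assumption is a simplification; this illustrates how tiny chances accumulate, not a model of real persuasion:

```python
# Back-of-the-envelope sketch of repeated-exposure persuasion.
# All figures are hypothetical illustrations, not measured values.

def persuaded_fraction(p_single: float, exposures: int) -> float:
    """Probability that at least one of `exposures` independent
    exposures persuades, given per-exposure probability `p_single`."""
    return 1 - (1 - p_single) ** exposures

# A 0.01% chance per tweet looks negligible on its own...
per_tweet = 0.0001
# ...but 200 exposures over a campaign compounds noticeably.
print(persuaded_fraction(per_tweet, 200))  # ~0.0198, i.e. roughly 2%
```

The independence assumption almost certainly overstates the effect, but the point stands: "a single tweet changes nobody's mind" and "a sustained narrative moves some people" are not contradictory.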
We have pretty significant evidence over the last decade indicating otherwise. The RAND Corporation did an entire paper about it.
>If the politicians are persuaded that any of this matters, then they'll be more inclined to stop it mattering. And that means laws that curb free speech online
Pretty sure misinforming the voting public matters? Uneditorialized journalism and publishing created this issue. Don't try to pretend any law addressing this problem is somehow constraining the free speech of the public.
While I don't want to imply disinformation will always sway an election, I think you may be drastically underestimating the number of people a disinformation campaign needs to reach.
In the US 2018 congressional races, many were decided by an incredibly small number of voters: a tremendous number were decided by less than 5%, and quite a few by margins of a fraction of a percent.
I'm literally in a car leaving for the airport so I can't verify the exact numbers, but I'm confident I'm close enough[1]: Kansas' 2nd district was decided by like .87%; in Florida's US Senate race, Rick Scott won by .12%; Georgia's 7th was .15%; Minnesota's 1st was for sure .45%; Illinois' 13th district was something like .77%; New York's 27th district was like .28%. In the 2016 presidential race, New Hampshire was decided by like 1,500 votes. That's just a sample of the extremely close races across the country.
Now, while those races might seem concerning, it gets even more concerning when we consider smaller districts, county elections, and city elections where it isn't uncommon for seats to be decided by margins of 25 people or less.
The worrying thing about disinformation campaigns isn't that the disinformation is going to suddenly convert 90% of our population into propaganda vessels; it's our modern ability to target smaller and smaller demographics in key locations in ways that previously weren't cost effective. The concern (and it is a concern) is that disinformation, often intentional outright lies, will misinform just a tiny fraction of a percent of vulnerable, logic-challenged people in key areas where the ripple effects punch above their weight.
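To make those margins concrete, here's a quick hypothetical calculation (the turnout figure is a made-up round number, and `votes_to_flip` is an illustrative helper, not from any source) of how few switched voters overturn a race this close:

```python
# How few flipped votes change a close race.
# The turnout below is a hypothetical round number for illustration.

def votes_to_flip(turnout: int, margin_pct: float) -> int:
    """Minimum number of voters who must switch sides to overturn a
    race decided by `margin_pct` percent of `turnout` ballots.
    Each switched voter closes the gap by 2 votes."""
    margin_votes = round(turnout * margin_pct / 100)
    return margin_votes // 2 + 1

# A Senate-scale race of 8,000,000 ballots decided by 0.12%:
print(votes_to_flip(8_000_000, 0.12))  # 4801 voters switching suffices
```

Against an electorate of millions, persuading a few thousand precisely targeted people is exactly the kind of effect a cheap campaign could plausibly have.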
And also of course how easy it is to swing local county and city elections, considering that a rather high ratio of local voters tend to be elderly people who may not have their skeptical hats on when the magic box tells them the opposing party is eating babies or whatever.
If you've spent any time having long casual conversations with many of our voters, you'd find out that many of these people have been convinced of and believe in truly outlandish things, they want to believe in fantastic explanations rather than what is usually a mundane explanation. We don't have to look far to find outlandish ideas floating around our population--just consider anti-vax and how it spread mostly organically, it was not a highly funded and highly targeted campaign.
Our people can make rational decisions and vote for their interests, provided they are given solid information and are not exposed to outright disinformation. It doesn't matter which direction people vote, as long as their decisions are not based on disinformation spread intentionally.
[1] I'll verify those numbers once I'm settled in, but I am confident they're within a reasonable accuracy for this topic. If you're still not convinced of the larger implications here, I'd be happy to list more. There were a whole shitload of key counties where elections were decided by margins of less than 10 people and quite a few were less than 5.
If, as you suggest, opinions expressed online don't have the power to change anyone's mind, what does it matter when their free expression is curbed by new laws?
It’s useful to know details such as these, but the bigger problem is that the audience that most “enjoys” and retweets this stuff honestly doesn’t care that it is false.
I personally know people that, when provided clear evidence of entirely false and manipulative stories will justify them by saying, “well, this story may not be true, but there are a lot of stories like this that you haven’t seen which are true”.
The point is, they believe what they want to believe, and no amount of proof to the contrary will change their mind. In fact, they often just double down on their arguments.
Sure, that's the normal human pleasure of having one's beliefs reinforced. It's probably an even bigger problem for vague unfalsifiable ideas like "police are racist", "capitalism is evil" or "immigration is bad". If a news story appears to support such a belief, the person will have their belief reinforced, even if the belief is wrong and the (true) news story is an exception. One photo of an immigrant vandalizing a statue or one photo of a border guard making a child cry says "See? You were right! That whole political party is bad!".
> but the bigger problem is that the audience that most “enjoys” and retweets this stuff honestly doesn’t care that it is false. [...] The point is, they believe what they want to believe, and no amount of proof to the contrary will change their mind
All people are like that, just everyone for different topics. I heard so many people say "There are not enough resources on earth so that everyone can have a mobile phone" or the like, which obviously is false. Don't get me started on all the beliefs of lefts and rights in the refugee crisis in Germany. Or take "the free market solves it all" from many liberals, "capitalism is evil" from young contrarians and whatever conservative thoughts from older folks.
My point is, the problem of blind belief is not specific to the audience of those tweets, it seems human nature.
It feels like 2016 all over again, with (some) people desperate for an explanation for why things happened the way they did that doesn’t involve people simply holding different ideological views than those found among the elite.
That's interesting to analyze and everything, but it's surely inconsequential compared to what real people say, with all their misinformation, rhetoric and general bias. What's the difference between a popular influencer with a public persona spreading politically biased stories and a faceless account doing the same thing?
It sounds like they're looking for someone to blame for right wing candidates' success in the same way people blamed Russian bots for Trump's success because they can't believe that real people could possibly be voting for such obviously "wrong" candidates of their own free will and they must surely have been fooled into it by clever tricksters.
Didn't the FBI and the Mueller report pretty thoroughly lay out the massive scope of Russian social media influence on the 2016 election and beyond? I thought this was a decided issue but maybe I'm wrong.
The report is publicly available here. [1] This is mostly covered on pages ~22-26. Quite a short read since it's redacted like crazy. The 'massive influence' seems to have been a company that set up some fake accounts on Facebook with a total advertising spend of $100k, and that also ran a few thousand spam bots on Twitter that 1.4 million people had "been in contact with" - a phrase which was left undefined.
Thanks for the link, but it mostly disproves your dismissive attitude: according to the report, posts generated by accounts controlled by the Russian company reached 29 million people on Facebook alone (and "may have reached an estimated 126 million people"), with hundreds of thousands of direct followers for several individual accounts. That is not nothing.
It's not nothing, but it's interesting to note that the amount they spent is dwarfed by the amounts currently being spent by top democrats (and Trump's re-election campaign), several of which are in the low millions of dollars.
> According to Facebook, the IRA purchased over 3,500 advertisements, and the expenditures totaled (sic) approximately $100,000
I think everyone would like to know how much actual influence it had in the election but I'm not sure we will ever know definitively.
You need to put things in context. Facebook sells advertising or "reach." Countless groups, and obviously the campaigns themselves, were spending millions on similar ads. $100k is going to provide a proportional level of influence, which is to say not much. The same thing is true on Twitter where you can buy followers from a wide array of third party sources. The main reason this is an issue is because of politics. It's Benghazi, 2016 version. That comparison is particularly apt as it's not to say nothing untoward happened - it obviously did. But the issue itself is/was amplified tremendously for partisan political purposes.
Wasn't that thorough. ODNI report from 2017 was broader in scope (https://www.dni.gov/files/documents/ICA_2017_01.pdf). The 2019 Mueller report looks restricted to areas where there was possible collusion (just some of the social media stuff that was retweeted by Trump campaign people/stuff related to the DNC hacks/etc.).
The two biggest caveats about it in my mind are that 1) all the data was provided voluntarily by the tech companies, with no indication of how they determined what was a Russian account, and 2) they don't really have any way to measure what actual impact it had on the election, or how effective the disinformation campaigns were
Yet people (even here on HN) refuse to look at the evidence and pass around things like the Seth Rich conspiracy as facts[1].
The same thing happened here with the Comet Pizza conspiracy theory.
Facts don't matter, and they just act as ways to polarize people.
Edit: and see how the voting goes on this and the parent comment, and on the thread I linked. The battle for facts is lost, and partisanship is all that is left.
> It sounds like they're looking for someone to blame for right wing candidates' success in the same way people blamed Russian bots for Trump's success because they can't believe that real people could possibly be voting for such obviously "wrong" candidates of their own free will and they must surely have been fooled into it by clever tricksters.
Since the media consumed by EU voters is so disjoint because of language barriers, this should be somewhat possible to test. If one could show a correlation between the amount of "trickery" present in media consumed by voters in a country and that country's election outcomes, you could get an answer to whether voters really are fooled into voting for right wing candidates.
We recently had an Australian election in which there was a lot of fake news around a "death tax" supposedly planned by one of the parties. That party lost a close election in which, at polling booths, the death tax was one of the top 3 issues.
The Australian election wasn't particularly close. The two-party-preferred margin was 3.2%, which is the mid-range in terms of margins this century (7 federal elections since 2000, 3 have had higher margins and 3 lower).
Yes the "death tax" was an outright lie. But there were lots of factors which went into that margin.
Also it's important to clarify that (for the most part - except possibly on some WeChat groups) - this wasn't a foreign campaign. It was mostly old fashioned paid advertising by the Liberal party itself.
You would never be able to control for other factors that differ between countries. They already have different left-right leanings to begin with and that's changing with time.
The difference is that those faceless accounts are in many occasions coming from foreign entities. Some of whom have no motivation other than to sow discord and cause chaos.
And nobody is attributing right-wing success purely to bots. That's just a strawman you invented. The issue is that it is a significant factor and is causing polarisation and distrust in institutions. And you need those institutions to work in order to solve the hardest problems, e.g. mass migration, ageing population, climate change, etc.
I never understood that foreign influence complaint. The whole world is sharing opinions and misinformation about the EU elections, just as the whole world was sharing opinions and misinformation about the US elections. How is it worse if it comes from someone in another country and is a bot compared to a real person in another country or a bot in the local country? Do you really think that these political bots are there to create chaos rather than either to generate ad revenue or get their preferred candidate elected?
Not purely, no. But people certainly attribute it to bots in part, or they wouldn't be concerned about bots; they wouldn't worry if they believed bots had no effect.
Yes, some of the fake accounts - not necessarily bots - are there to create chaos. Russian political influencers have publicly boasted about this.
Of course this is no different to the US and the rest of the West using the media and social networks in other countries to promote their own agenda.
The difference is we're supposed to be immune from having it done to us - and of course we aren't. Because it's no different in principle from a fake advertising and promo campaign, or a fake pro-corporate stage-managed PR campaign - just larger in scope, and weaponised as opposed to financialised.
In reality troll and Tweet farms are incredibly cheap compared to other propaganda vectors, and incredibly cost effective.
At some point knowingly misleading the public for political or financial gain is going to have to become a criminal offence.
Currently consumers/voters have more protection from fake claims about a toaster than they do from fake claims made by a president or a hostile foreign power.
If you're trying to be a genuine democracy, that's a bad place to be.
When you say weaponized, do you mean done for a foreign government's interests rather than a person's financial gain?
You seem to believe that people can't be trusted to think for themselves and must be shielded from lies to protect them from their own stupidity. I agree on the first part, but not the second which is arrogant authoritarianism. In democracy, the voters are also the ones who experience the consequences of their decisions and them having the ultimate power over themselves is what protects them from extremes. They might hurt themselves a little bit but it's still themselves doing it, so they won't go too far and can change their mind next election if they made a mistake.
If there's something new and unique about social media that actually breaks democracy worse than anything since the ancient Greeks, then perhaps you'd have a point, but I don't see any evidence of that. I think people will adapt to false information and learn to decide what to believe, just like they currently make decisions about how much to trust any stories they hear. There's nothing wrong with having a few conspiracy theorists or people who believe a wrong version of history. They keep everyone else on their toes and we can't really be sure that it's not us who's wrong. Remember when God obviously created the world and anyone who disagreed was punished for spreading dangerous fake knowledge?
Rightwingers see pro-migration and liberal organisations like the UN and the EU as wanting to sow discord and disrupt society though. So this type of argument goes both ways. The distrust of institutions is already there.
Foreign anti-western forces feed off of that, because the modern western right is essentially anti-western from the start.
The racist prejudice displayed in this article is nauseating. It seems users from certain Asian countries showing interest in European politics automatically become members of, and proof of, a disinformation campaign. Shame!
Interesting how bad Twitter is at banning fake accounts. However, 200 Russian bot accounts are not responsible for the populist shift in European politics, which is what a lot of people want to believe.