Hacker News
Signs you're following a fake Twitter account (nixintel.info)
277 points by floatingatoll on March 12, 2020 | 149 comments



The bot panic is IMHO overblown.

This comment is however NOT a defense of bots. I think bot detection and deletion of bot accounts would be a good thing.

But the concern seems out of proportion.

Society is full of disingenuous message amplification systems: a large subset of advertising (TV, magazine, billboards, direct marketing, etc), biased news media, spam, and now twitter bots.

Twitter bots are a bit worse than the others because we can no longer tell what opinions are actually popular and which ones are manipulated... via Twitter. But the ability to make such judgements via Twitter has only been with us since 2006. The basic problem is not a novel one.

The bot problem cuts both ways, because people will overcompensate. I've been accused of being a bot, signalling to everyone that it's perfectly OK to ignore the content of my argument. I fear many other right-leaning people are getting the same treatment. This treatment further divides society (actual bots do too, which I am NOT defending; I'm just showing that the issue cuts both ways). Other commenters in this thread have been flagged and their comments are dead, apparently because (if you believe surrounding comments) they are seen as bots.

The day that Twitter becomes a voting platform, or voting platforms allow bots... that is the day that I will panic about bots.

All that being said, carry on. Let's squash the bot problem. Let's also fix advertising. Let's also fix news media bias. And spam.


You say that the problem is overblown but then go on to talk about the dangerous effects the problem has on discourse. Not only are the bots spewing out propaganda, but as you say they destroy the trust we have in each other which further builds the mindset of "Anything I believe in is true and anything else must be bots". In my mind this makes the bot problem very bad.

The problem is made even worse when you have non-bot accounts that spam out low-effort bad-faith posts like "If climate change is real then explain x". It takes 2 seconds to post that but 10 minutes to properly reply. And when you look into the post history of these accounts you can see they have been given the same answer repeatedly but ignore it, because their objective is not to have a discussion or to learn; their objective is to waste people's time and create doubt.


Or maybe they've never gotten what they feel to be an acceptable or compelling answer.


There never will be an acceptable or compelling answer for these people, though. They refuse to accept new information or anything that is different to the talking point they have latched on to. It's especially true for bad news: we all want climate change to be a hoax; it's much more convenient to believe this than to accept reality.


Someone just has to put 10 minutes in, the rest is copy paste.


Then a casual observer will see two sides apparently debating an issue, and infer a false equivalency. The media is especially guilty of assuming that people who disagree with equal passion must be equally correct. I don't think anyone would claim that bots are improving the quality of discourse.


I don't think it's at all possible to have something resembling a discourse on Twitter.


It's ironic that it's often the lone opposing voice that is accused of being a bot. Claiming bots is a valid argument against votes, or against a mass of similar comments and opinions. Claiming bots is not a valid argument against the logic of a single comment or opinion. Yet too often it is the lone contrary opinion, often with supporting evidence and logic (since the person is aware they have a contrary opinion), that is dismissed as simply being a bot.

If a solid logical argument comes from a bot it is still a solid logical argument.


The difference I see between Twitter bots and the other classical examples you mentioned is that anyone and their grandmother can make a destructive Twitter bot on a given day, with comparatively little effort, negligible expense, high anonymity, and hardly any traceability. That is basically the foundation of Twitter: it's based on the premise that everyone's speech should have worldwide reach with absolutely minimal effort. Whether the larger picture is a good thing or a bad thing, it does mean the bot problem is very much a novel problem, and you can't just pretend it is similar to the classical ones and deal with it the same way.


Maybe it's because I don't use Twitter, but I don't see much bot panic personally. This article certainly doesn't have anything to do with panic, it's just an investigation of one example.


> His two front teeth (red box) are different shapes and sizes.

Which would be entirely normal for a guy from Manchester. My two front teeth are different shapes and sizes too. We don't really do "cosmetic dentistry" here in the UK, certainly not to the extent it's practised in the US.


> Which would be entirely normal for a guy from Manchester. My two front teeth are different shapes and sizes too. We don't really do "cosmetic dentistry" here in the UK, certainly not to the extent it's practised in the US.

Is your face perfectly conformant as well? And are your pupils inconsistently sized?

The point of the article isn't that any one of these features is unusual or worth calling out, it's that few people share all the characteristics of an AI generated face.

FWIW, my teeth are a bit irregular and my face might match the eye/mouth placement from an AI-generated face, but my ears and pupils are fine.


And my ears are asymmetrical. It does show up in photos from front on (though perhaps not to the extent in this article).


> And my ears are asymmetrical. It does show up in photos from front on (though perhaps not to the extent in this article).

Yeah, I was surprised the article made such a big deal about that. My ears are also a little asymmetrical, and Stephen Colbert's [1] are very obviously so.

[1] A famous American comedian with a nightly talk show.


Well sure, but that's ignoring all the other evidence. Just because someone is holding a smoking gun doesn't mean they've murdered someone, but in certain contexts it might.


This person does not exist.


The biggest sign mentioned in the article is the insane number of tweets since the inception of the account - surely this must set off some alerts on Twitter's backend?

I know that this is an extremely difficult problem to solve and that they probably ban accounts in waves but with something as obvious as this you'd think that the ban could/should come more swiftly...


IIRC someone did some analysis on political accounts that did nothing but high volume retweets supportive of a particular cause. Problem is, quite a few of the identified probable bot accounts turned out to be real people that had just become infatuated with the cause...

[Related phenomenon: the UK PM gets lots of obviously bot-like, almost identical 'good job Boris' comments on his Facebook account. Trouble is, it appears that it's actually mostly real people with otherwise perfectly normal accounts and verifiable identities: the whole thing probably started off as a forum campaign to rally the appearance of support in an election buildup, but ended up being mostly people who think it's a funny meme. https://www.bbc.co.uk/news/blogs-trending-50218615]


> Trouble is, it appears that it's actually mostly real people with otherwise perfectly normal accounts and verifiable identities

Just hypothetical: how do they come up with ground truth when studying this kind of thing? Seems difficult unless you witness the person interacting in the real world.


The BBC actually exchanged messages with some of them. People on Facebook in particular use real names and have profiles on other websites, including places where they and their family work and can be contacted using real contact details. And there's an extent of life-history faking that bot creators aren't going to go to just to be one of dozens of people posting 'Good job Boris' a decade after the account was first formed.

Obviously it's more difficult on Twitter when accounts tend to be anonymous, connections are one-way, a lot of activity is retweeting and very little of it is personal photos, and many real accounts have been around for a lot less time.


From what I’ve understood, the wave ban is partly because they don’t want the spammers to be able to figure out exactly what triggered the ban.


Given the pressure that Elliott is putting on Dorsey to aggressively grow Twitter, I don't think it is likely that things will improve.

https://www.vox.com/recode/2020/3/9/21171482/twitter-jack-do...


They got 3 seats on the board and agreed to create a committee literally to search for Dorsey's successor. It's far too late for Dorsey. Realistically they're just letting him go out on his own terms at this point. In a few months Dorsey will announce that he's stepping down because he wants to focus his attention on his other job and that'll be that.


Do you have a source for the claim that this committee is searching for Dorsey's successor? I found this from the WSJ about three days ago, but the gist of it seems to be that two new board members were added, they're searching for a 'new independent director' and a deal was struck that allowed Dorsey to maintain his position.

https://www.wsj.com/articles/twitter-elliott-strike-truce-th...


https://uk.reuters.com/article/us-twitter-elliott/twitter-el...

>The board will also create a temporary committee to evaluate Twitter’s leadership structure and CEO succession plan that will share the results publicly before the end of the year.


I expect so, but I don't expect Elliott to scale back its expectations significantly after that happens.


I think it could be a relatively easy problem to solve: give any user the option to verify their identity.

In this way, as verified accounts increase, the unverified accounts become more obvious and I imagine we will start to view them with more scrutiny.

It would allow anonymity to remain, just make it so those accounts are more obviously anonymous or pseudonymous.


What do you think the marginal cost of verification is, and how many users will Twitter have to verify in order to make this work?


The marginal cost of verification could be zero. All Twitter has to do is to get together with Facebook, Google et al and agree to support a number of approved independent identity systems such as Yoti [1]. That uses a combination of government documents (eg passports) and biometrics.

Basically, users would be able to sign up for a sort of "digital passport" ID that would, in the long term, be accepted by governments and most websites. Enough people might find it convenient enough to sign up.

Twitter already allows you to block all users who have not registered a phone number and not uploaded a photo, and so on. I use these settings. They actually do remove a lot of bots and bad actors. Again, in the long term, Twitter would give people the option to see only verified accounts.

I really wouldn't care if it meant I only saw tweets from a relatively small number of verified accounts. It might be more like Twitter in its first half dozen years, when it was much more engaging and fun than it is today.

[1] https://www.yoti.com/

Yoti also works offline. For example, you can use it to prove that you are over 18 without giving away your name or other irrelevant information.


Verify how?


Not exactly sure which method they'd use, but it seems like if they wanted to do it, they could put a lot more effort into figuring it out. I mean, Bumble, a dating app, does a simple (maybe too-easy-to-game) verification; and with so many banks having to abide by know-your-customer (KYC) requirements, I think social networks could figure it out.


You should try signing up on Nextdoor. This isn't a hard problem, nor would anyone be forced to verify themselves if they don't want to.


I don't know if Nextdoor's method of mailing a card to someone's mailbox would work for Twitter, but I like your example of a social network that has taken verification much more seriously.


Nextdoor doesn’t think I’m using my real name and told me to contact support instead. Yeah F that...


There is a general problem with platforms and fake entities of various kinds posing as friendly, relatable individuals. This problem surfaces at far shallower depths than you have to dig to find a well-crafted fake portrait.

Is that really a crafty artistic person turning out handmade items on Etsy, or is it a front for a factory in China with dozens of pseudonymous identities on Etsy? They take on multiple fake identities in order to appear to be humans handcrafting their items in small batches. If one identity gets their cover blown, it is shut down and replaced.

Is your AirBnB host with a woman's first name really renting out some family-owned cottages, or is "she" a BPO messaging contact center who won't talk to you on the phone? AirBnB's UI makes you sift through tens of pages of reviews to glean hints. And that usually happens after you have suspicions due to a foul-up with a rental.

I haven't yet tried one of those "sharing economy" car rentals. But I suspect they, too, have attracted aggregators posing behind multiple fake identities to escape bad reputations if need be.

Social media and all kinds of other platforms need to look for and mitigate damage to their brands from fake identities. These fakes are made, in part, to game the platforms, and, in doing so, reduce their value.


> Is that really a crafty artistic person turning out handmade items on Etsy, or is it a front for a factory in China with dozens of pseudonymous identities on Etsy?

Ran into this when my fiancée was browsing Etsy to get an idea of how to price some of her stained glass work. There was a particular listing that was ridiculously cheap, which raised my eyebrows. I looked at their sales history, and they had sold tens of thousands of this item. Clearly factory production in China.


It's been a problem for Etsy for 10 years. Rather than keep trying to stamp it out, they instead tried to find a way to roll those people into their platform.


Etsy does take their identity verification for sellers fairly seriously, though it's hard to combat if you have a workforce of people whose ID you can use.


Got out of Twitter after seven years of using it like a breaking-news website from a few vetted sources: bots, venom, ads and shameless plugs just made the experience more and more insufferable, so good riddance. Also unfollowed every contact on LinkedIn; the amount of motivational fluff and plain gibberish is just astounding, something like Facebook for professionals. Am I happier? Nope, but the void will somehow be filled by something more productive or just interesting.


I've been using Twitter for news. IMO it's not too hard to get things "right":

- start from a (small) curated list, like https://twitter.com/i/lists/1014747092554866689 for national politics
- add in news publications you read
- add in reporters from those news publications (they usually link their account in their bio)
- add in accounts of people who appear in the stories
- continue for 2-3 years

At the moment I'm following 18k news-ish accounts spread across 4 lists. The main Twitter interface doesn't work for that many accounts, but Tweetdeck does. I open it up in the morning and get a 10-minute dose of "reality", leaving me with 5-6 interesting news stories to read later from opening the links in background tabs. Even with semi-curated lists there's a lot of fluff in the feeds, but since it scrolls by in less than 10 seconds I don't mind as much.

Is this the best setup? Probably not, Nassim Taleb has written some good articles arguing that nobody should read any news. But it's enough to cover most of the popular trends and satisfy that "in the loop" craving.


I am incredibly close to getting rid of my twitter account. Already most of my meaningful interactions have moved to private group chats on Slack (etc).


I unfollowed all accounts, deleted all my tweets and just login periodically to see what's trending or to follow a hashtag associated with a sporting event I'm interested in. And since I do that from home, I have the twitter app on my iPad but not my phone.

Nice way to keep it but throttle it.


Easiest way I've found to spot bots on Twitter is username patterns where the handle and display name are similar or the same: one or two words (often a first and last name) and a random number, for example JohnDoe127474. For a more realistic example: I just went to the first trending semi-political topic and this was the first bot-ish account I found (the tweet itself is pretty benign):

https://twitter.com/Elsie43733196/status/1235724530716377088...
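As a rough sketch, that heuristic (one or two name-like words followed by a long run of digits) can be expressed as a regex. The exact pattern and the five-digit threshold here are my own assumptions, not anything Twitter documents:

```python
import re

# Heuristic from the comment above: a default-style handle looks like one or
# two capitalized words followed by a long run of digits, e.g. "JohnDoe127474".
# Requiring 5+ digits is an arbitrary threshold to avoid flagging short
# vanity numbers like "John42".
DEFAULT_STYLE = re.compile(r"^[A-Z][a-z]+(?:[A-Z][a-z]+)?\d{5,}$")

def looks_autogenerated(handle: str) -> bool:
    """Return True if the handle matches the default-style pattern."""
    return bool(DEFAULT_STYLE.match(handle))

print(looks_autogenerated("JohnDoe127474"))   # True
print(looks_autogenerated("Elsie43733196"))   # True
print(looks_autogenerated("floatingatoll"))   # False
```

Of course, any heuristic like this will also catch real people who simply accepted the autogenerated handle, as other comments in this thread point out.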


This is the automated handle style that Twitter gives you if you don't specify or if your first choice is not available and you click through without realising this.

So that can also apply to naïve users.


Congrats, you've found someone who accepted the autogenerated username!


Is someone who accepts the automated username likely to be contributing anything of value? Human or bot, block and move on seems the best strategy.


I'm curious what you think makes this particular account a bot? To me it looks like someone who might not be particularly tech-savvy who signed up for Twitter to find out about and discuss the ongoing global pandemic, which I guess you could call a "semi-political topic". To me it looks like Twitter's attempts to reach out to people who aren't already constantly online might occasionally work, which could result in a screen name or behavior that seems "strange" to people who are already very familiar with the platform.


There's one real person in my Twitter environment (I don't follow him, but we follow and interact with a lot of the same people so the algorithm shows me his tweets a fair amount) who has a handle something like John415141. I think he chose that handle ironically.


I'm now wondering if people would think my Twitter profile is a bot and report me, because the handle has numbers representing my name in leetspeak.


It's the same problem eBay had 20 years ago.

User growth is always a KPI that people care about, so there's a bias against getting aggressive about purging fake accounts. Being aggressive will also flag unsophisticated real people.


Talking semi-seriously here, but I wonder if the next evolution in bot accounts would be using names and avatars generated from KnowYourMeme content in order to defeat casual verification.


Most of the "normal" people I know IRL have usernames like that.


It's such an oddly obvious tell.

I'm left wondering if it's like the advance-fee scams: making it blindingly obvious means you only attract the truly gullible, limiting the number of skeptical followers who'd reply publicly to your bullshit tweets.


It's the default username style.


I use This Person Does Not Exist for my shitposting avatar, but I'm certainly not botting. I just can't have my spicy tweets related to my main account because people will cancel me.


>His two front teeth (red box) are different shapes and sizes. The teeth on the right side of his mouth have not been rendered properly at all. An analysis of Max’s facial features suggests he has more in common with an AI-generated fake than with a real person.

Or a real person who didn't have braces?


It's actually a very interesting contrast with what "fake" used to mean in digital media. You see models with flawless skin, perfect teeth, digitally sculpted curves, and the original photo didn't look that flawless.

Now somebody has asymmetric ears like I do, and that's a hint they're fake?


I think it's interesting that Twitter has the capability to automatically and instantly tag tweets or accounts as being written by third-party apps.

Yet they choose not to.


I don't understand your comment. All tweets show, next to the timestamp, the client they were posted from. If you tap it, it brings you to a help article that explains what the field means. Unless you mean something else...


At least the web interface shows the client used to create the tweet?


Yes, I am perplexed by this. I have noted an insane number of bots commenting positively on Donald Trump's twitter account. It's obvious they're bots because of the username, lack of other activity, spelling and grammatical mistakes (due to phrases being directly translated from Russian).

P.S. Whenever I mention the above on here, I am subject to a barrage of down votes. Almost as if bots are searching HN comments and trying to down vote anything that is negative towards Trump. If this gets down voted heavily, it'll confirm my suspicions yet again...


I've seen that in some comments I made on another story. Not sure whether it's Russians or Russian bots or just HN-reading Trump fans.


Downvoted already. On further reflection, I think it must be the last category since it would be a challenge for a Russian [bot] account to get the karma to be able to downvote.


You'd be surprised. I can imagine they've got a bit of link submission karma, which allows them to downvote.

For what it's worth, I'm upvoting you!


The down voting has begun! @dang, I wonder if this is something you can address somehow?!


Did you ever consider that people don't agree with your opinion? I am anti-Trump, but not everybody is. Also, there are bots on all sides of the political spectrum, as evidenced by the article you are commenting on, which dealt with an account pumping Bernie.


Yes, I have considered that, however the downvote patterns on those topics are unusual for HN. I've been here a few years, and know when something feels a bit off. Also, I'm not saying something that is wildly incorrect - just go on Twitter onto DT's account - what I'm reporting is self-evident. Do you disagree?

Edit: also, if I don't mention DT's name, but refer to him in other more cryptic ways, I (and other HN users I know) get upvoted. Basically if you mention his name, it seems to get flagged up to bots/users to investigate...


I didn't realize I could potentially get free karma by saying Donald Trump is a legend. Good to know.


It's a well-reasoned, respectful comment though. It has been the HN way that downvoting is really to punish overtly bad comments; boring comments simply don't get upvoted.


It's also the HN way to downvote comments which include complaints about the comment being downvoted, though [more so if they speculate the comment is being downvoted due to bots and follow up with a request for moderator intervention]. I suspect that's the wider issue here.

More generally, political comments are bound to attract some downvotes from people who either support the other side or don't like politics on HN, although you'd normally expect more upvotes on a comment which is on-topic, only tangentially related to the figure's politics, and well-reasoned and civil.


> don't like politics on HN

For me this is it. Now, I generally don't downvote unless I see the same person going on and on. But, regardless of how I feel about a particular issue, I just don't think it belongs on HN. I come here to get away from all of that. There are plenty of other places to go if I'm interested in discussing politics. I come here to talk about tech and science.

I'm not perfect and I'm sure if you trawl through my comment history you can find contradictory examples, but, I hope anyway, they are few and far between and limited to one or two at most in the same story.

My rule of being a good HN user is don't complain about downvotes, don't talk too much, if at all, about politics and read and reread my comment before I hit reply. Am I really making a contribution? If not, hit the back button!


>It has been the HN way that downvoting is really to punish overtly bad comments; boring comments simply don't get upvoted.

That's the way people want HN to be, but not the way it is.


Perhaps I should say "That's how it used to be".


Here it is okay to downvote if you disagree.


The part where the article tries to "prove" that the photo was not of a real person feels like phrenology more than anything. Real people can have asymmetric ears and teeth, and most photographers deliberately try to frame photos in a specific way, so some random headshot having the same composition as generated photos isn't much proof. Honestly it looks like this type of psychotic paranoia over "bots" does more harm than actual bots themselves.


This is worrying. I already get called a bot or a shill on the regular, and now you can be called a bot for having crooked teeth or "asymmetric ears"? Now I can look forward to internet sleuths scanning my profile picture for imperfections.


Now it just takes a 2nd picture of yourself to prove them wrong, so I wouldn't worry about that (yet?).


Systems like TPDNE can generate random fake people but can they generate multiple different images of the same fake person? If that is not the case then the existence of different photos should prove a person is real.


This was directly addressed in the article, and no, they cannot.


> TPDNE only creates a single image of a person

That doesn't state that it can't, only that it doesn't. Also, I am wondering about other GANs/image generation systems and not just TPDNE specifically.


Seems like the only reason Twitter created verified users was that they got sued by Tony La Russa over his account being impersonated.

What I'm wondering is when there will be a class-action lawsuit on behalf of all the everyday people who have had their accounts impersonated?

I hope it doesn't have to happen that way, I hope Twitter gives everyone the option to verify their profile, and yet I haven't seen much progress on this front since 2016 when I started looking at it.


I feel like the article went way too much over aspects like TPDNE and the non-existent lawyer, which - while being useful to know - are somewhat poor indicators of fake profiles especially in a tech circle. I personally know a few people who use TPDNE profile pictures and many who use fake names, although none do both (a wise thing in this day and age). The rest was quite lacking.


I've been writing about this for years but it's insane to me that Twitter is not stopping this. They are contributing both to group psychosis and the destruction of democracy through the spread of disinformation.


Likely, they don't really care to stop it. Real people still use it enough that the advertisers don't really mind.

I'd suppose that they cannot really tell the difference between a bot and a real person, at least at scale.

I follow a few non-English speaking hashtags that act as local news sources for their areas. These hashtags have been active for a few years. Though Twitter has a lot of policies for English twitter, these languages really have no policies. By that I mean they live-stream executions, have drug sales, solicit sex-work, etc. Illegal isn't really a thing in these places to begin with. That said, these cases are somewhat rare and these hashtags are mostly used as genuine news sources. Granted, these are 'edge case' languages, but still, it's a free for all that Twitter doesn't care to dive into.

Based on these 'extreme' cases and their long-lived nonconformance with Twitter policy, it's not hard to conclude that Twitter really does not care to enforce policies unless forced to.


Geoff Golberg has been analyzing and writing about this a lot [0]; he is worth following on Medium. Ironically, Twitter suspended him [1].

[0] https://medium.com/@geoffgolberg

[1] https://medium.com/@geoffgolberg/your-twitter-account-has-be...


He was harassing users over his dubious research. Suspecting someone to be a "bot" doesn't justify demeaning and dehumanizing these people.


The point of these bots isn't to spread disinformation or to amplify political content (the article is wrong about this).

The point of these bots is to build up an account with credible behavior that you can use to make money (usually by scamming advertisers into buying paid tweets/followers, or by directly promoting scam products).

Political twitter is so formulaic and high volume that bot/sweatshop posts don't seem out of place compared to the usual traffic from real accounts.

If there were no politics on twitter they'd just send formulaic messages about pop culture or sports or something.


You make a bold claim; do you have any supporting evidence?


Wasn't this somewhat well documented during 2016, where most of it wasn't some coordinated attack from St. Petersburg but teens from south-eastern Europe making money from ads, and therefore driving clickbait to its limit because they literally just wanted clicks?


>most of it wasn't some coordinated attack from St. Petersburg

This is completely false. A cursory search will yield results explaining Russia's massive effort on social media (and Twitter in particular) to interfere in the election.

https://blog.twitter.com/en_us/topics/company/2018/2016-elec...

https://blog.twitter.com/en_us/topics/company/2017/Update-Ru...

https://www.theguardian.com/technology/2018/jan/19/twitter-a...

etc....


Not sure if "completely false" is appropriate. There was some Russian activity. The scale of which appears somewhat underwhelming:

> Through our supplemental analysis, we have identified 13,512 additional accounts, for a total of 50,258 automated accounts that we identified as Russian-linked and Tweeting election-related content during the election period, representing approximately two one-hundredths of a percent (0.016%) of the total accounts on Twitter at the time.

If that small of an effort can make that much of an impact on the core democratic process, then we need to consider a significant scale back of XXI century globalism to allow for second half of XX century democracy to survive. Not a popular opinion on a tech forum, where more users means more money, and there are always significantly more users outside one's national boundaries. Sigh.


>Through our supplemental analysis, we have identified 13,512 additional accounts, for a total of 50,258 automated accounts that we identified as Russian-linked and Tweeting election-related content during the election period, representing approximately two one-hundredths of a percent (0.016%) of the total accounts on Twitter at the time.

The comparison of "50,258 automated accounts that we identified as Russian-linked and Tweeting election-related content" to "the total accounts on Twitter at the time" makes no sense at all. It seems intended to mislead.

It would be more informative to know how many total accounts tweeted about the election, or how many politically active twitter accounts that bot army reached.

And in general, the election was decided by ~80,000 people [1]. To think that ~50,000 bots spamming Twitter with non-stop propaganda had no impact on that is extremely naive (especially when you consider that Twitter was just one piece in the overall interference).

[1] https://www.washingtonpost.com/news/the-fix/wp/2016/12/01/do...


Using a very rough ballpark estimate, it costs $1 / month to run a twitter bot using readily available technology, see, for example, https://jarvee.com/get-now pricing list. To think that for $50k / month anyone in the world can have a [significant?] impact on US elections is bizarre.

In practice, there is a negligible probability that posting a tweet from a random bot will end up in the feeds for a target demographic. The feed space of the 80k people WP claims decide the election is finite, and it costs significantly more money to reach them than setting up small scale botnets. Ultimately, Twitter is in control of the feeds, and it's their business model to happily direct any message from the highest bidder to the critical demographic, regardless of who or where the highest bidder is.
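As a back-of-the-envelope check on that ballpark (the per-bot cost, bot count, and time frame are the rough assumptions from this thread, not measured values), the arithmetic works out to:

```python
# Rough bot-farm economics, using the ballpark figures from the comment above:
# ~50,000 bots at ~$1/month each, run for roughly an election year.
bots = 50_000                  # approx. Russian-linked accounts Twitter reported
cost_per_bot_per_month = 1.00  # USD/month, very rough JARVEE-style estimate
months = 12                    # roughly an election-year campaign

total_cost = bots * cost_per_bot_per_month * months
print(f"Yearly cost to run the whole botnet: ${total_cost:,.0f}")
# Yearly cost to run the whole botnet: $600,000
```

Even a full year at that rate comes in well under a million dollars, which is the scale being questioned here.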


Most of those 80K voters weren't active Twitter users so your analysis is invalid.


In comparison, Mike Bloomberg totally failed to take over a US presidential primary, and he spent almost half a billion dollars.


That article says 50,000 accounts were "linked to Russia". Do you know who linked them to Russia and by what criteria? They weren't actually controlled by the Russian government because the article cites that figure at only 3,000.

The hysteria over Russia as an excuse to ignore the concerns of Americans who support populist positions is pretty transparent at this point.


>The hysteria over Russia as an excuse to ignore the concerns of Americans who support populist positions is pretty transparent at this point.

You're calling it "hysteria" to dismiss facts you don't like. "The Russian Hoax!", where have I heard that before?

But, to your second point, I completely agree that many on the left want to pretend it was _mostly_ because of foreign interference. I tend to think that had a pretty small impact, and the fact you pointed out is probably much more consequential: Americans are more receptive to a populist message than people think.


I 'like' it so much (not really) when people from the USA say or write "left" when they have absolutely no idea what the left, or communism, actually is, apart from what they HEAR. I don't think more than a million people in the USA have lived in the USSR bloc or the countries under its influence, or have studied communism thoroughly.

And you can recognize the trolls when someone says or writes "Left" about the most capitalistic country on this planet. By "Left" they mean the most minimal, basic social services, such as not letting people lose two legs but only one because the $1k-per-month insurance doesn't cover it. I wonder how badly the USA has failed when countries in the EU have achieved that with $100 monthly contributions. Miracle!!!

Americans live with the dream that they are all billionaires but somebody is blocking them (the commies, the socialists, the left, the Cubans, the anarchists, and in general "they", someone without a name).

Russia always plays the looooong game; they try, and succeed, to sabotage things a little bit at every step of the way (antivaxers, election interference, oil prices). You name it, they're in it.

I think I've vented/ranted enough. I sometimes feel sorry for the 30-40% of US citizens who are confused and haven't read more than 10 books in their lives. It's a pity for such a prosperous country to suffer like this.


>And by "Left" they mean the very minimal/basic social services, such as don't let people lose two legs but only one because the $1k insurance per month doesn't cover that. I wonder how failed is the USA when countries in EU have achieved that with $100 contributions per month. Miracle!!!

This isn't really true. You're going to get medical help with an urgent medical issue in any developed country, even if you don't have a penny to your name. Doctors/hospitals cannot turn you away for not having money.

There are also very few countries in the EU where insurance payments are $100. The only country I could think of would be Bulgaria and that's because they're the poorest EU country. Virtually everywhere else you're going to pay more than that and everybody has to pay it, including the poor. Depending on the country, if you don't pay that then you won't get medical care (other than the urgent kind) in those countries either.

The US has a lousy health insurance system. The cost is too high, insurance paperwork is ridiculous, and insurance doesn't always even cover you. But healthcare really isn't as amazing in most EU countries as you want to think. Many of them have similar problems.


In the EU, free (at point of use) healthcare is available to all residents, including the unemployed. In several countries, you can get free treatment immediately even if you are not a resident.

That's a lot different from the USA, where tens of thousands of people die from lack of health insurance, and probably hundreds of thousands are bankrupted by the healthcare they get.

A quarter or more Americans put off seeking medical treatment because of the cost. By the time they seek treatment, it may be too late. I'm reminded of a carpenter who won $1 million and said he could finally go to see a doctor. He died a few weeks later from cancer.

A lot of Americans are one accident or illness away from financial ruin and poverty.

For all their problems, Europeans are a lot better off than this.

More than 26,000 Americans die each year because of lack of health insurance https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2323087/

New study finds 45,000 deaths annually linked to lack of health coverage https://news.harvard.edu/gazette/story/2009/09/new-study-fin...

The Americans dying because they can't afford medical care https://www.theguardian.com/us-news/2020/jan/07/americans-he...

More Americans Delaying Medical Treatment Due to Cost https://news.gallup.com/poll/269138/americans-delaying-medic...

Medical Bankruptcy: Still Common Despite the Affordable Care Act https://ajph.aphapublications.org/doi/10.2105/AJPH.2018.3049...

Medical Bankruptcy Is Killing The American Middle Class https://www.nasdaq.com/articles/medical-bankruptcy-killing-a...

New York carpenter who won $1 million lottery prize dies of stage-4 cancer weeks later https://eu.usatoday.com/story/news/nation/2018/02/02/lottery...

N.Y. man dies from cancer 3 weeks after winning $1M lottery https://www.cbc.ca/radio/asithappens/as-it-happens-thursday-...


Did you reply to the wrong comment?


I purposely left spaces between the different "sections" of my semi-long post, just to separate the items/topics. The post I was replying to was too 'dense' (imho) and I was just disentangling that spaghetti (Russia, the left, caring for one's fellow human / social reforms, illiteracy).


Not to be too pedantic, and correct me if I'm wrong, but I think "The Russian Hoax" more specifically refers to the beliefs/claims by some that Trump or his campaign was colluding with Russia directly to interfere with the election rather than simply the belief/claim/fact that Russia (or Russians) interfered in the election of their own volition.


They have conflicting incentives; it's the same problem all social networks have. Bot accounts qualitatively detract from the real user experience, but substantially inflate quantitative activity metrics, even if artificially.

Policing bot accounts improves the quality of the site, but it hands Elliott Management and others more ammunition that social isn't growing to expectations. Unfortunately, these sites continue to optimize for the latter rather than the former.


> but hands Elliott Management and others more ammunition that social isn't growing to expectations.

If the growth is largely from bots, then perhaps Elliott Management is right, even if you disagree with their business dealings.


There's a good argument to be made that Elliott Management is right simply based on the absence of stock growth over the past few years. Facebook has grown 200% while Twitter has had incredibly modest gains. That much of Twitter's little growth might be bot-related just compounds the validity of their argument.

Twitter can't even fall back on being a public good, as it has supposedly been a large vector for foreign political interference, and they haven't taken nearly the drastic manual moderation steps that Facebook has to combat this (hiring 15k manual content reviewers [1]). Facebook received huge pushback from investors over this decision, but it looks pretty savvy in retrospect.

[1] https://www.theverge.com/2019/2/25/18229714/cognizant-facebo...


It might be awesome, if the signature is somewhat obvious, to have an independent entity flag suspected bots and maintain a database of them.

_Maybe_ that would put some pressure on Twitter?

    https://osome.iuni.iu.edu/tools/botslayer
    https://duo.com/assets/pdf/Duo-Labs-Dont-At-Me-Twitter-Bots.pdf 
    https://sparktoro.com/tools/sparkscore


But they're making money from it. And that's all they (and their shareholders) care about. Same reason FakeBook won't ban false political advertising on their platform even though it's really obvious it's destructive to society.


And how exactly do you propose to stop this on scale? The problem is harder than it looks.


I wouldn't begin to claim I have a solution, but I think there are steps Twitter can take to show they are acting in good faith. I believe Twitter often flags accounts for suspicious activity; for those accounts to be reactivated, a phone number has to be provided. If Twitter wanted to, couldn't they deploy an algorithm that serves as a dragnet, catching and flagging accounts that tick multiple check-boxes for bot activity? The account would not be immediately banned or shadow-banned, but Twitter could provide some kind of visual indicator to other users that the account in question has recently been flagged for bot-like activity (on the account's comments, retweets, etc.).
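A minimal sketch of the kind of checkbox dragnet described above. All signals and thresholds here are hypothetical illustrations, not Twitter's actual criteria:

```python
# Hypothetical heuristic bot scorer: each signal that trips adds to a score,
# and accounts above a threshold get flagged for review (not banned).

def bot_score(account: dict) -> int:
    score = 0
    if account.get("tweets_per_day", 0) > 100:        # inhuman posting volume
        score += 2
    if account.get("followers", 0) < 10 and account.get("following", 0) > 1000:
        score += 1                                    # mass-follow pattern
    if not account.get("has_bio", True):              # empty profile
        score += 1
    if account.get("account_age_days", 9999) < 30:    # brand-new account
        score += 1
    return score

def should_flag(account: dict, threshold: int = 3) -> bool:
    """Flag for a visual indicator when several checkboxes trip at once."""
    return bot_score(account) >= threshold

# Example: a new, prolific, bio-less account trips multiple checkboxes.
suspicious = {"tweets_per_day": 150, "followers": 3,
              "following": 2000, "has_bio": False, "account_age_days": 5}
print(should_flag(suspicious))  # True
```

Requiring several signals before flagging is what keeps the false-positive rate down; any single heuristic alone would catch plenty of real humans.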


There are many real humans who have no phone number.


Or would never give it to Twitter.


Because the number of users is a nice PR metric. Just pretend the 15% that are bots are real, and pretend the 15% that are suspended or abandoned accounts are active.


I understand your frustration and share it, but I personally believe that we are at the point where this behavior (i.e., failure to act) should be expected of tech companies. The economic incentives of the companies who run the most popular social networks seem to be negatively aligned against robust responses to the propagation of bots and disinformation, and that doesn't appear likely to change.

I also wonder whether the mitigation strategy for this type of societal threat can be found through strictly technological means at all, since the line between bot and human actor is blurring more and more as technology improves (artificial face generation, improved NLP, statistically informed posting and behavioral modeling, etc.). Imagine how complicated discriminating between automated and human-controlled accounts might become ten or twenty years from now. It seems there will have to be some sort of public-education component to reduce the credulity of human users in online environments: an adjustment of weights in the network of information sources among the general population, if you will, in a direction that de-emphasizes online content.


> I've been writing about this for years but it's insane to me that Twitter is not stopping this.

If it doesn't impact shareholders' profits, it goes to the bottom of the to-do list.


How else will the populist cults survive?! /s


Gotta keep those MAU counts up!


[flagged]


Do you believe the same thing about email spam detection? Various authorities decide on the validity of email and webspam.


If people choose companies to do filtering for them, that is their choice. GMail does cause some problems. In general, most people opt to have suspicious mail sorted into a Spam folder, not deleted entirely.


Available evidence doesn't support your claims.


[flagged]


It's a big topic and I am reluctant to invest a great deal of time trying to persuade a new arrival. Try following the links in this article for an overview of recent research.

https://www.scientificamerican.com/article/how-twitter-bots-...


[flagged]



I've been on HN for 13 years, I just don't care for the voting system.


The analysis of the generated profile images is all well and good, but a much better indicator of a bot account[0] is the incessant tweeting about certain issues with little to no commentary or context.

0: a bot account or a friend not worth continuing to follow


Today I had someone reply to a comment of mine saying that I'm a bot and to report me.

Am I not in on a joke? Or am I doing something bot like? Or do people use this to exploit mindless masses to abuse the reporting system?


I scrubbed my personal info on Twitter due to aggressive people and the potential of doxing me. No birthday, no bio, no profile picture. Twitter won't even let me follow anyone until I confirm my birthday.


Another sign is if they accidentally repeat themselves; that's an indicator the account could be a bot.


Hopefully this is not too off-topic, but - I like that the article focuses on a bot spamming far-left propaganda. I have tried to encourage my far-right relatives to stop knee-jerk sharing political articles and memes and to diversify their news sources, but for some reason they think that anyone who is worried about fake news, bots, or manipulating elections must be a Satan-worshipping communist. I assume this is because the media has tried their absolute hardest to use these facts to slander Trump.

How can we get people to stop focusing so much on the politics and see this as kind of data integrity or trust issue? Showing that "it's not your tribe's fault" is more emotional than rational but people may actually respond to that.


I have a much easier rule set for fake Twitter account detection:

Are they saying something I want to hear? Yes? Fake.

That's the complete ruleset. I do not follow anyone on Twitter as a result.


Twitter does absolutely nothing to exterminate old fake accounts, probably because the last time they did so and saw how many real accounts there were, Twitter's stock plummeted.


I think it was in a video with Tom Scott where a woman from Twitter talked about this: yes, false positives are a big problem when detecting fakes. But Twitter is working hard to keep bad bots / fake accounts away. There was no "last time"; right now people are working on banning fakes. It's a 24/7 game.


Many people use fake avatars, names and locations. Those are not good indicators for bots.

Presumably it is more obvious by the stuff being posted.

However, the whole question is mostly irrelevant. If you don't care what an account tweets about, don't follow it.

The big issue with the whole bot manipulation narrative is that while it is easy to create fake accounts, it is not easy to also get followers for those accounts. Without followers (with voting rights, not some other bots you bought on the dark net), the bot accounts are irrelevant.


Why would I follow someone with only 73 followers in the first place?

I wish Twitter would let me choose to filter out any information from accounts with fewer than 10,000 followers, with the exception of my own followings.


Why would you limit yourself to hearing only people who have large followings?


Because there is already too much info on Twitter. No?


There is, but popularity doesn't seem like a good filter to determine the value of a given feed.


> Why would I follow someone with only 73 followers in the first place?

Because not all your friends are rich famous celebrities? (Or maybe they are, I don't know you, I'm just spitballing here..)


I meant random accounts. Friends are definitely not bots, no?


This article is full of shit. At least up to the point where I stopped reading, they were discussing physical features that don't matter one bit for Twitter accounts.


Most of this article is talking about fake photos, not fake Twitter accounts. He goes on to say that if you use your Twitter account for politics, then you are fake.

The only thing I agree with is 766 posts in a week is extreme and is an indicator of bot-like activity.


Tweeting at 4-6am and 1-3pm NY time is pretty unusual, and showing up as a dead lawyer in the registry is another quirk. That said, while interesting, I don't know if this article has offered any practical advice. They essentially said, become a private investigator to find out if they are fake.


> Tweeting at 4-6am and 1-3pm NY time is pretty unusual

I dunno - if you're a (purported) lawyer in NY, I could imagine 4-6am would be "getting up, gym, breakfast" and 1-3pm would be "lunch, gym, whatever". I'd be more suspicious about the lack of evening tweets, myself.

> showing up as a dead lawyer

I would probably consider that quite suspicious, yeah.


Only tweeting politics is a flag too; real humans have more than one interest.

Another non-obvious indicator is the posting-time distribution. Humans can't post 24 hours a day. On the other hand, if all their posts fall inside an 8-hour window, that person is posting from work; are these posts their job?
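The window heuristic above is easy to sketch. Everything here (the timestamps, the 8-hour cutoff) is illustrative, and the window check is deliberately crude (it doesn't handle windows wrapping midnight):

```python
# Rough sketch: does an account post only inside a narrow daily window?
# A tight window suggests scheduled/work-hours posting; round-the-clock
# coverage suggests automation or a team behind one account.

from datetime import datetime

def active_hours(timestamps):
    """Distinct hours of the day (0-23) in which the account posted."""
    return {t.hour for t in timestamps}

def posts_in_narrow_window(timestamps, max_hours=8):
    hours = sorted(active_hours(timestamps))
    if not hours:
        return True
    span = hours[-1] - hours[0]  # crude: ignores windows wrapping midnight
    return span < max_hours

# Illustrative data: an office-hours poster vs. a round-the-clock poster.
office_poster = [datetime(2020, 3, d, h) for d in range(1, 8) for h in (9, 11, 14, 16)]
round_the_clock = [datetime(2020, 3, d, h) for d in range(1, 8) for h in range(0, 24, 2)]

print(posts_in_narrow_window(office_poster))    # True: everything lands in 9-16
print(posts_in_narrow_window(round_the_clock))  # False: posts at all hours
```

Note that, as commenters point out below, neither result is conclusive on its own: real people with desk jobs also post in a narrow window.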


It's a flag, but not necessarily a sure indicator. For example, I have a separate twitter account I only use for political discussions, so that the replies I get don't overwhelm the other things I'm interested in.


> real humans have more than one interest.

Sure, but plenty of social accounts only focus on a few of them -- people like to segment themselves into spaces. I'm a real person with multiple interests (or am I?) but at the moment my personal blog only has technical articles on it.

Which, to be fair, a blog is not Twitter. But I could easily see myself or someone similar to me picking up a Twitter/Facebook account and saying, "I'm just going to use this space to complain about the government. I need a space to rant."


>But on the other hand if they're all in a 8h window, that person is posting from work - are these posts their job?

Eh, lots of people have desk jobs with enough downtime to post throughout their work day while they're bored... then maybe they go home and do something else.

Although I agree it would be suspect if they never really posted outside that window.


You're all describing Donald Trump, who is all too real.


Twitter has become so low quality that I don't even bother looking at it anymore. The interactions tend to be extremely negative, the ads are annoying and irrelevant, and most of the people who get famous tend to be celebs or narcissistic pseudo-intellectuals who just repeat whatever the populist opinions are for their audience. I've found/met some interesting people though Twitter before, but these days I find it's not worth the noise to find that signal.

Twitter seems to want to resist any changes that would make the metrics look bad, but those changes are necessary to make the platform good. For example, they could offer a paid tier with more options to block/filter low quality content. They should also make a decision to either allow all bots, or ban them outright, and also make it easy for end users to tell whether an account is a bot or not. They should scrap the blue check marks and just offer a "twitter gold" badge for anyone willing to pay $5 a month. I don't really care if Twitter has checked anyone's identity, I just want to know if the person tweeting is a human or not.


> more options to block/filter low quality content

Over and above what you can already do with curating your followers, blocking people, muting people, muting hashtags, etc.? That's quite the range of options for buffing up the quality content of your timeline, no?

