There are multiple huge problems with Twitter taking this approach to rules enforcement, even if they're genuinely trying to be honest and unbiased.
1) Every account on Twitter has "bot activity associated with this account", if for no other reason than that random advertising bots retweet, follow, or like random people and posts to blend in as genuine accounts and hide their true purpose. This reasoning gives Twitter essentially carte blanche to ban anyone, even assuming they were willing to be fully transparent about the evidence behind their bans (which they're not).
2) Bad actors can easily sabotage accounts they dislike. Here's just one way it can be done: there are 10,001 shady fly-by-night firms in India, Pakistan, Nigeria, Bangladesh, Vietnam, and other countries that offer services to "buy followers" or "buy retweets". It's trivial for somebody with no programming skill and only a few hundred dollars to buy mass spam and shill the hell out of an account (the lower quality the better) until somebody notices and the account gets banned.
You don't even have to assign bad motives to Twitter to see the obvious problems here.
I suspect a key difference here is “all of the user's other Twitter accounts”. If a bunch of accounts registered to the same email address all start boosting each other, that's probably pretty easy to detect, and distinct from someone else trying to do it to you.
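To make that concrete, here's a toy sketch of the kind of check I mean. The field names (email, id, boosted) are invented stand-ins for whatever identity signals the platform actually has:

```python
from collections import defaultdict

def same_owner_boost_rings(accounts):
    """Group accounts by registration email and flag groups whose
    members mostly retweet/like each other."""
    by_email = defaultdict(list)
    for acct in accounts:
        by_email[acct["email"]].append(acct)

    flagged = []
    for email, group in by_email.items():
        if len(group) < 2:
            continue
        ids = {a["id"] for a in group}
        # Average fraction of each account's boosts aimed at its siblings.
        internal = sum(
            sum(1 for target in a["boosted"] if target in ids)
            / max(len(a["boosted"]), 1)
            for a in group
        ) / len(group)
        if internal > 0.5:
            flagged.append(email)
    return flagged
```

Crucially, an outside attacker buying spam for your account produces none of these shared-identity signals, which is what separates sabotage from self-promotion.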
You appear to be assuming that the entire team of people at Twitter who work on these problems either haven't considered these issues or aren't able to address them. While there are often challenges with moderation and rules enforcement, this kind of stuff is pretty easy to address.
But if you look at that profile, he says he works at Google. Think about that for a second. He claims this is an easy problem, yet Google's own spam-detection algorithms produce so many false positives that innocent people get screwed out of their ad accounts and livelihoods. If this kind of stuff were easy, why hasn't anybody informed Google?
I'm not trying to pick on anybody here, but I believe there's a level of arrogance in thinking that this form of sabotage is a solvable problem unless you somehow have access to all communications on the planet.
I very specifically said that there were hard problems, just that this wasn't one ;)
In this case, a fairly straightforward account-age rule would get you like 95% of the way there. My Twitter account, which is 7 years old, has a few hundred followers, and tweets once a week, isn't going to benefit from this. You don't ban it. Similarly, you don't ban a 7-year-old account with tens of thousands of followers that's been doing the same kind of tweeting for 2 years and recently got a bunch of bot follows.
On the other hand, a 3-week-old account with a ton of bot follows is suspicious. A pure age rule will miss older accounts that sat unused and were then repurposed (e.g. if my account suddenly started tweeting 17x a day about divisive political content and then got a bunch of bot follows), so you need slightly fancier approaches for those.
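Concretely, the kind of first-pass rule I mean looks something like this. Every field name and threshold is an invented illustration, not anything Twitter actually does:

```python
def is_suspicious(acct):
    """Crude first pass: flag bot-follow bursts that are likely
    self-inflicted rather than sabotage. `acct` is a plain dict
    with hypothetical precomputed fields."""
    young = acct["age_days"] < 90
    burst = acct["bot_follows_last_30d"] > 500

    # Brand-new account + sudden wave of bot follows: almost
    # certainly bought the followers itself.
    if young and burst:
        return True

    # The fancier case: an old account that sat dormant and then
    # abruptly became prolific. Its age alone shouldn't vouch for it.
    dormant_then_active = (acct["lifetime_tweets_per_day"] < 0.5
                           and acct["tweets_per_day_last_30d"] > 10)
    if dormant_then_active and burst:
        return True

    # Established accounts with a consistent history get the benefit
    # of the doubt: a bot-follow wave is treated as possible sabotage.
    return False
```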
You could probably improve this further with some kind of multifaceted account "realness" score based on a PageRank-like analysis of the follow graph (following bots lowers your score, as does being followed by them). And of course you still ban the bots; you just don't punish any account above a certain threshold of "realness".
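A toy version of that propagation, in the spirit of PageRank/TrustRank. The seed labels, damping factor, and one-directional propagation are simplifications for the sake of a short sketch; a fuller version would also propagate penalties from bot followers, not just bot followees:

```python
def realness_scores(follows, seeds, iterations=20, damping=0.85):
    """follows: dict mapping account -> set of accounts it follows.
    seeds: known labels (1.0 = verified real, 0.0 = confirmed bot).
    Unlabeled accounts start at a neutral 0.5 prior."""
    accounts = set(follows) | {a for fs in follows.values() for a in fs}
    score = {a: seeds.get(a, 0.5) for a in accounts}
    for _ in range(iterations):
        new = {}
        for a in accounts:
            followed = follows.get(a, set())
            # Following bots drags your score toward 0; following
            # real accounts pulls it toward 1.
            avg = (sum(score[b] for b in followed) / len(followed)
                   if followed else 0.5)
            # Blend the neighborhood signal with the account's prior.
            new[a] = damping * avg + (1 - damping) * seeds.get(a, 0.5)
        score = new
    return score

# Tiny usage example: "spammy" follows a real account but is seeded
# as a bot, so its score stays pinned low while alice/bob stay high.
follows = {"alice": {"bob"}, "bob": {"alice"}, "spammy": {"alice"}}
print(realness_scores(follows, {"alice": 1.0, "spammy": 0.0}))
```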
The other part of it is that randomly sabotaging Twitter accounts isn't going to happen very often: it costs money, and unless there's an ROI (e.g. extortion), no one is going to do it except a handful of people in it for teh lulz or whatever. And importantly, most of those people will be bad at executing this kind of attack convincingly.
There are lots of difficult problems, but OP chose one that doesn't happen and is straightforward to address.
Your proposal is quite naive and displays the type of aggressive ignorance that's all too common among software developers. Aged Twitter accounts with thousands of followers are available for sale if you know where to look. They don't even cost very much. Brigading attacks by flagging content in bad faith are also very common for political and ideological reasons.
>Aged Twitter accounts with thousands of followers are available for sale
Yes, though this is mostly irrelevant to what we were discussing (and goes to the "change in behavior" thing I mentioned, which, while more challenging, is still often detectable).
> Brigading attacks by flagging content in bad faith are also very common for political and ideological reasons.
Yes, but this is distinct from trying to frame someone for purchasing/using bot follows, and so is mostly irrelevant to what we were discussing. I agree that it's a challenging problem to solve; it's just not the one that was mentioned.
I'll reiterate:
There are challenging problems in abuse detection. Someone abusively using fake accounts to frame a legitimate user for using fake accounts isn't a particularly challenging (or relevant) one.
a) There's a difference between addressing a problem and solving it, and I'd say you're doing the former, not the latter. Your general algorithmic approach will likely block some abuse, of course. But it also opens the door to numerous false positives, especially on a platform like Twitter, where an account devoted to noteworthy news or viral content can go from creation to tens or hundreds of thousands of followers in days or weeks.
b) For your PageRank analysis, how do you decide a priori who is a bot? Sure, you can look at a bunch of data (posting frequency, location, keywords, links posted, and so on) and try to create some clever formula that estimates the likelihood that an account is a bot. But at the end of the day, even the best possible approach is going to miss bots created by even halfway clever people, and wrongly screw over some innocent people.
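To make the false-positive worry concrete, here's a deliberately naive bot score of the kind being described. The features and weights are invented, which is exactly the problem: somebody has to pick them, and real humans land on the wrong side of them.

```python
def bot_likelihood(acct):
    """Hand-tuned toy score; any real system would be trained,
    but the failure mode below survives training too."""
    score = 0.0
    if acct["tweets_per_day"] > 50:
        score += 0.3   # high volume
    if acct["links_per_tweet"] > 0.8:
        score += 0.3   # mostly link posts
    if acct["followers"] < 10:
        score += 0.2   # nobody follows it
    if acct["default_profile_image"]:
        score += 0.2   # no customization
    return min(score, 1.0)

# The false-positive problem in one example: a human news junkie who
# tweets 60 links a day with a default avatar scores like a bot,
# while a halfway clever bot that mimics casual posting scores
# like a human.
news_junkie = {"tweets_per_day": 60, "links_per_tweet": 0.9,
               "followers": 40, "default_profile_image": True}
assert bot_likelihood(news_junkie) >= 0.8
```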
c) What about transparency? How can the public trust any of Twitter's claims that "bots did X, therefore we had to ban account Y"? Where's the proof? Given that bots on Twitter are endemic, the accusation would always be technically true to some extent.
d) The amount of money and effort thrown around to influence social media is staggering. Even an average person who can't program can buy tens of thousands of followers for what amounts to pocket change. You think the obsessive Twitter activists who spend 18 hours a day on the platform wouldn't spend a few days' worth of minimum-wage pay to take out an arch-nemesis? That a shady political operative with access to millions in funding, who wants to sway politics or bury damaging information, couldn't arrange this? That a nation state looking to put its thumb on the scale of American public opinion couldn't spare the tiniest fraction of its budget for a concerted effort?
e) I don't want to be insulting, but your stated approach and mentality here are exactly why Big Tech algorithms screw over so many innocent people.