In separate discussions verified by Motherboard, that employee said Twitter hasn’t taken the same aggressive approach to white supremacist content because the collateral accounts that are impacted can, in some instances, be Republican politicians.
As we all know, algorithmic classification has zero issues [0] and is 100% accurate. Even Twitter's anti-ISIS classifier banned a number of ISIS watchdog accounts, but that collateral damage was deemed worth it. In this case, the potential collateral damage of banning people who are neither racists nor white supremacists is too damaging to Twitter's reputation among a large portion of its users.
The non-politically-charged answer to this is that the algorithm isn't (and likely never could be) perfect, so the statement that aggressive enforcement would be politically costly because of collateral damage is accurate, exactly as the Twitter employee said, without all the "nudge nudge, wink wink, Racists=Right, amirite?" rhetoric layered on by the media outlets that reported on it.
Going back to the algorithm: how would one even train it? Tuned conservatively it would catch so little as to be useless; tuned aggressively it would be far too inaccurate and sweep up innocent accounts.
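To put rough numbers on the collateral damage problem, here's a back-of-the-envelope sketch in Python (every figure below is hypothetical, just to show the base-rate effect):

    # All numbers hypothetical: illustrating why even a "highly accurate"
    # classifier generates heavy collateral damage when the target class
    # is rare relative to the whole user base.
    total_accounts = 300_000_000       # assumed active accounts
    actual_rate = 0.001                # assume 0.1% are actually in the target class
    true_positive_rate = 0.95          # assumed classifier sensitivity
    false_positive_rate = 0.01         # assume 1% of innocent accounts get flagged

    actual = total_accounts * actual_rate
    innocent = total_accounts - actual

    correct_bans = actual * true_positive_rate           # ~285,000
    collateral_bans = innocent * false_positive_rate     # ~3,000,000

    precision = correct_bans / (correct_bans + collateral_bans)
    print(f"correct bans:    {correct_bans:,.0f}")
    print(f"collateral bans: {collateral_bans:,.0f}")
    print(f"share of bans that hit the target class: {precision:.1%}")

Under those made-up assumptions, fewer than one in ten bans actually lands on a white supremacist; the other ~3 million are exactly the collateral damage the employee is talking about.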
Would it ban for use of the OK-hand emoji? For saying "it's okay to be white"? Would it judge someone a white supremacist for pro-white speech, or only for anti-POC speech? What about people who tweet about or quote Hitler, and how would it know the context? Since "retweets aren't necessarily endorsements," would it ban for retweeting white supremacist speech in order to call it out? Would it classify by association and ban anyone who is mostly followed by, or mostly interacts with, accounts already classified as white supremacist? Would it ban for racial rhetoric? For opposing immigration?
Even assuming it could classify accurately, it quickly becomes a question of "whose definition are you using?" and "how far is far enough, and how far is too far?"
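As a toy illustration (the scores and account labels are entirely made up), the same model output yields very different ban lists depending on whose definition, i.e. which threshold, you pick:

    # Hypothetical risk scores from some classifier; the only thing that
    # changes below is where the ban threshold is drawn.
    accounts = {
        "posts explicit supremacist content":   0.97,
        "quotes Hitler in a history thread":    0.62,
        "retweets hate speech to call it out":  0.55,
        "anti-immigration politician":          0.48,
        "replied with the OK-hand emoji":       0.21,
    }

    for threshold in (0.90, 0.60, 0.40):
        banned = [who for who, score in accounts.items() if score >= threshold]
        print(f"threshold {threshold}: bans {banned}")

Set the bar high and only the blatant cases go; lower it and watchdogs, historians, and mainstream politicians get swept in, which is exactly the "how far is too far" problem.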
[0] https://washingtonmonthly.com/2019/04/25/twitter-cant-ban-ra...