
In separate discussions verified by Motherboard, that employee said Twitter hasn’t taken the same aggressive approach to white supremacist content because the collateral accounts that are impacted can, in some instances, be Republican politicians.

https://washingtonmonthly.com/2019/04/25/twitter-cant-ban-ra...




As we all know, algorithmic classification has zero issues [0] and is 100% accurate. Even Twitter's anti-ISIS classifier banned a number of ISIS watchdog accounts, but that collateral damage was deemed worth it. In this case, the collateral damage of banning non-racist, non-white-supremacist accounts would be too costly to Twitter's reputation among a large portion of its users.

The non-politically-charged answer is that the algorithm isn't perfect (and likely never could be), so the statement that deploying it would be politically bad due to collateral damage, exactly as the Twitter employee said, is accurate, without all the "nudge nudge, wink wink, Racists=Right amiright?" rhetoric supplied by the media outlets that reported on it.

Going back to the algorithm, how would one even train it? Tuned strictly it would likely be useless, catching almost nothing; tuned loosely it would sweep up far too many false positives.

Would it ban for use of the OK-hand emoji? For saying "it's okay to be white"? Would it judge someone a white supremacist for pro-white speech, or only for anti-POC speech? What about people who tweet about or quote Hitler, and how would it know the context? Since "retweets aren't necessarily endorsement," would it ban for retweeting white supremacist speech in order to call it out? Would it classify by association and ban anyone who is mostly followed by, or mostly interacts with, accounts already classified as white supremacist? Would it ban for racial rhetoric? For being against immigration?

Even assuming it could classify accurately, it quickly becomes a question of "whose definition are you using" and "how far is far enough, and how far is too far?"
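To make the trade-off concrete, here is a minimal Python sketch of how a decision threshold trades precision against recall. The scores and labels are invented for illustration and have nothing to do with Twitter's actual classifier:

    # Toy sketch: a content classifier outputs a score in [0, 1]; the ban
    # decision is just "score >= threshold". Labels: 1 = actually violating,
    # 0 = benign (e.g. a watchdog account quoting content to call it out).
    # All numbers below are made up for illustration.
    samples = [
        (0.95, 1), (0.90, 1), (0.85, 0),   # high score but benign: a callout account
        (0.70, 1), (0.60, 0), (0.55, 1),
        (0.40, 0), (0.30, 1), (0.20, 0),   # low score but violating: a dog whistle
        (0.10, 0),
    ]

    def precision_recall(threshold):
        tp = sum(1 for score, label in samples if score >= threshold and label == 1)
        fp = sum(1 for score, label in samples if score >= threshold and label == 0)
        fn = sum(1 for score, label in samples if score < threshold and label == 1)
        precision = tp / (tp + fp) if (tp + fp) else 1.0  # banned accounts that deserved it
        recall = tp / (tp + fn) if (tp + fn) else 0.0     # violating accounts actually caught
        return precision, recall

    for t in (0.2, 0.5, 0.8):
        p, r = precision_recall(t)
        print(f"threshold={t:.1f}  precision={p:.2f}  recall={r:.2f}")

On imperfect scores, every threshold either misses real violations or bans benign accounts; there is no setting where both numbers reach 1.0. That's the collateral-damage problem in miniature.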

[0] https://www.theverge.com/2015/7/1/8880363/google-apologizes-...


That's what happens when "our country and people first" gets classified as white supremacism.


Phrases like "our country and people first" get classified as white supremacy because they are used as dog whistles by white supremacists.


How would you phrase the sentiment differently? Or is it inherently white supremacist?

What if the country isn't majority white?


But that's as silly as classifying the "OK hand" emoji as white supremacist.


The Streisand effect that kicks in when people pile onto these things as some kind of secret Masonic handshake is insane.

I don't think actual white supremacists could ever come up with propaganda anywhere near as effective.


The phrase "America First" has some... historical baggage.

https://duckduckgo.com/?q=dr+suess+america+first+comic&iax=i...

If you're aware of this and comfortable with it, more power to you, but it seems that not everyone saying it knows the history.

Normally I wouldn't touch this with a ten foot pole, but in this case I genuinely wish to be helpful. Best.





