Facebook to ban white nationalist content (fb.com)
890 points by anigbrowl on March 27, 2019 | 1525 comments



At what point do such things change into a public discourse problem?

Do we wait until it can be shown that companies such as Facebook, Twitter, and Google influenced elections because they refused to serve information from a candidate they didn't like? If we aren't already there, it won't be long before we are.

What do you think would happen if these "private companies" with such deep hooks into our communication infrastructure suddenly decided to remove all data associated to the Republican Party? For that matter, the Democratic Party?

It seems to me that Facebook and Twitter are trying to have it both ways. They can choose to police the content provided by their users but can't be held responsible for said content? Are they a publisher or a platform?

I don't think old thinking built around old methods of communication applies to what we have today; it requires new thinking. These aren't like newspapers sold by kids on the corner in a city that can have dozens of newspapers countering each other. Imagine if there were only three newspapers in the entire country, soon the world, controlled by a small group of people who wish to use their publishing power for their own agendas.

Tim Pool is right: at this rate, sooner or later, the Feds will come knocking and shut that party down.


> What do you think would happen if these "private companies" with such deep hooks into our communication infrastructure suddenly decided to remove all data associated to the Republican Party? For that matter, the Democratic Party?

They wouldn't do that, because there would be justified public outcry. Luckily, as moral agents, we humans are capable of differentiating "general political party" from "white nationalists", and can target the latter and not the former.


I can give you hundreds of examples of people being accused of being white nationalists without any merit to it, to the point that the term has almost lost its meaning.

edit: Tim Pool, for example, has already been accused. This will clearly be used to shut down unwelcome dissent.


A while back, my leftist circles kept repeating ad nauseam that Peter Thiel (who is already in a minority, being gay) is a white nationalist. I suspected bullshit and decided to look into it.

All I could find was this: Peter Thiel once spoke at a libertarian conference. Libertarians, being libertarians, permit racism (read: not the same as "support racism"). THEREFORE, PETER THIEL IS A WHITE NATIONALIST.

Are you fucking serious? I pushed back on my leftist friends with what I found. "Duhhhhh, errrrrr, ummmmm, well... everyone just knows he is!" Oh, really.

Bullshit, repeated often enough as truth, becomes evidence for itself.


Honest question, why is this getting downvoted?


Because it's a straw man argument. GP (you) is talking about some argument their friends made, not something anyone in this thread has said.


But it was a related example of how echo chambers can corrupt the truth.


Ok but have any of these falsely accused actually had their account removed?


> I can give you hundreds of examples of people being accused of being a white nationalist without any merit to it.

Feel free to do so whenever. Be sure to indicate when that has resulted in serious consequences for the "victim".

> Tim Pool for example was already accused.

TBF Tim Pool pulled the whole "I'm so neutral" act when dealing with a bunch of actual white supremacists. What "dissent" was unwelcome there? That he was pretending like these were totally fine people who just happened to think that genocide was okay?


[flagged]


> just out of interest... such as?

Enjoy!

https://www.youtube.com/watch?v=NQF2-F-GG_o

And here he is with a bunch of others!

https://i.imgur.com/SXqc8PX.jpg

> You could be really helpful here, sugar.

And you could tone down the condescension, sweetie.


I don’t know the facts behind either argument, but now a minimum-wage content moderator has to make this decision 1000 times a day.

Good luck, kid.


Before we make this into 'conservatives being silenced', it's worth pointing out that the censorship problem also exists for leftist outlets that sit to the left of the neoliberal centrist view of the D.C. Democratic Party.


Taking a stand against content moderation was a fundamental leftist position some years ago...

That said, I absolutely agree that leftist positions do get censored. Even if I currently like to underline my dissociation from its authoritarian excesses and general dishonesty on certain subjects, the problem is a general one.


Unrestricted free speech was always a thorny issue. Hitchens very eloquently presented how control of speech was used to suppress many groups; he knew about the dangers of radicalization, incitement, and hate speech, but considered those negatives acceptable.

However, not everyone thinks it's so great.

I don't know, and it's probably not something we can just settle easily.

It's usually true that a society cannot simply rely on "laws" to save it, and when the power imbalance gets too extreme, laws won't save anyone anyway. Still, it seems having some kind of policy and publicly agreed way to stop serial inciters is not a bad idea. After all, information censorship generally happens through claims of potential national security hazards and other gag orders, not through an overly wide interpretation of hate speech laws. (But of course human creativity is pretty limitless when it comes to suppressing others.)


> the censorship problem also exists for leftists outlets that are left of the neoliberal centrist view of the D.C. Democratic Party.

Which ones, specifically?


such as...


Jimmy Dore, Max Blumenthal, Rania Khalek, Aaron Maté, Michael Tracey, Abby Martin, Matt Taibbi, Katie Halper, Ben Norton, Lee Camp, Ilhan Omar, Mark Ames etc.

There are many, just some off the top of my head.


I'm only familiar with a couple of those names.

Who/what de-platformed Jimmy Dore (here's his YouTube channel [1])? Or the rest?

[1] https://www.youtube.com/channel/UC3M7l8ved_rYQ45AVzS0RGA


He's been smeared in CNN and Washington Post articles as a crank and conspiracy theorist who runs 'an extremist channel' and should face consequences, and he constantly has videos demonetized and de-ranked.

His channel is not 'deleted' (as is the case with the majority of right-wing commentators as well, btw), but it is very much 'shadowbanned'/blacklisted (economic ruin).

For example, I regularly notice the recommendation algorithm skews a lot more towards the centre and right even when I am specifically searching for Jimmy Dore videos.

People like Rania Khalek, Abby Martin... were even more explicitly interrogated by the likes of CNN, had their Facebook pages taken down etc.

On Twitter, a lot of them don't even come up in the search results, effectively shadowbanned.

Of course, none of them would be invited on mainstream TV because their viewpoint is not allowed in 'polite circles'.

This is why I have a huge issue with the right simplistically equating D.C. Democrats with 'the left'. I suppose the division comes down to what you care about: social leftism (identity politics), which is easy and lazy and what many on the right focus on, versus economic leftism, which is what many of those I named discuss and which is not really allowed on mainstream media.


For me, Twitter's results for "Dore" show Jimmy Dore as the #3 result [1], following two handles with many more followers.

Contrast that with "Limbaugh", for which Rush Limbaugh is the (buried) #11 result [2], despite having 2x the followers of anything else.

[1] https://twitter.com/search?q=Dore&src=typed_query

[2] https://twitter.com/search?q=limbaugh&src=typed_query


He's the very first Twitter account suggested when I type 'limbaugh' into the search [1].

In Moments, there are a lot of other people talking about him, so that's what's shown (there are also a lot more notable Limbaughs, as that's a pretty common surname), whereas Dore is pretty much just him; not that many other people are talking. That's how Moments has always worked, nothing shady there. The same holds if you put, e.g., 'Taylor Swift' into Moments: her actual Twitter account is fairly buried, because there's lots of other buzz the algo deemed more relevant.

If anything, it speaks to his popularity.

And that's despite him not having a verified account; unverified accounts are de-ranked below verified ones for everyone. Type in, e.g., 'Sean Hannity' and you'd see his verified account right at the top.

Perhaps it's time to admit that the simplistic narrative of 'left censoring the right' is not really true and it's more complex than that. It's really the establishment silencing alternative voices.

The right is more than happy to censor the left on BDS, for example (with the help of the Democrats, even!), and cement that into law. I don't see any of the right-wing 'free speech worriers', like Ben Shapiro, talk about how wrong that is. In fact, they very much support it.

On the other hand, you have left-wing channels like Dore and Secular Talk constantly bringing up how wrong it is to censor right-wing voices, even doing long rants on specific cases.

1 - https://imgur.com/a/UfCf9vF


> His channel is not 'deleted', (as is the case with the majority of right-wing commentators as well btw), but it is very much 'shadowbanned'/blacklisted, (economic ruin).

And, as we all know, nobody ever made money off of extremist political commentary before YouTube, right kids?

FFS he has his own website and show. Just because YouTube doesn't give him money doesn't mean he's persecuted. He can feel free to host his videos elsewhere.

> (economic ruin).

Hyperbole alert!

> People like Rania Khalek, Abby Martin... were even more explicitly interrogated by the likes of CNN, had their Facebook pages taken down etc.

Please cite some sources. I'm not really finding any info.


By definition, this arrangement favors viewpoints that generate more public outcry, which sets up any marginalized group to be steamrolled. Do you seriously think that's going to be restricted to white nationalism? I mean, there's already plenty of other stuff Facebook is censoring that the same people cheering this news have pushed back against before.

https://www.aclu.org/blog/free-speech/internet-speech/facebo...


...and yet white nationalists are still free to set up their own social network or use networks that don't care, like Voat for example.

That's called freedom of association: the right for an organization or individual to not wish to associate with someone.


What happens when all ISPs in the country combine forces to block Voat, even though there's no law banning it?

https://en.wikipedia.org/wiki/Internet_censorship_in_Austral...

Are you going to suggest that people start their own ISP?

At some point, private actors can carry so much economic power that their private rules effectively become laws. Much of Jim Crow was implemented in this manner, in addition to actual legislation.


ISPs as a group blocking IP-level access to portions of the web is a far different thing from a private company, or a group of private companies, refusing to host content on their servers that they do not want to host. Your attempt to conflate the two moves the goalposts and argues a slippery slope.

But even then, a VPN or Tor? The internet is built to route around damage. Have Mastodon, IRC, or ICQ ever been blocked?

Again, just because white nationalists don't have the platforms they want doesn't mean they can't get their message out. But what they want is mainstream acceptance, and that is most certainly not going to happen.


The same Voat that just got banned in New Zealand for hosting stuff the government didn't approve of whilst Facebook, which also hosted it, was left alone?

That Voat?

Somehow I'm skeptical this is a viable approach short of Tor-ifying the entire internet.


> The same Voat that just got banned in New Zealand for hosting stuff the government didn't approve of whilst Facebook, which also hosted it, was left alone?

Take it up with New Zealand, which has very strong laws about hate speech and promoting extremist communities.

> whilst Facebook, which also hosted it, was left alone?

That content makes up a microscopic amount of the content on Facebook, and it was removed when reported. That content makes up the vast majority of the traffic on Voat, however.

It seems like you're moving the goalposts here, though. We're not talking about governments banning websites, we're talking about the government forcing websites to host content that they don't want to host. That's what's at play here.

And the fact is that you are free to start a public or Tor-based community of your own, unless you're in countries where Tor is blocked, in which case I think you're in far deeper shit than this discussion is focused on.


So this problem probably requires some fragile and "temporary" complex solution.

Letting paranoid xenophobes roam, recruit and incite violence on Facebook is bad. And its ill effects are already felt, and it has significant potential for doing a lot more harm in the future.

Whereas allowing a quasi-public forum to be controlled and effectively censored by an unaccountable entity (FB) is problematic, especially if that control is co-opted by the very ideology the control mechanism was supposed to, well, control.

It feels like the problem is that hate speech and other kinds of populist nonsense are currently the tools that most easily lead to more and more authoritarianism, with more and more xenophobes and isolationists getting into power. But censorship itself is also a great tool for power consolidation.


> They wouldn't do that, because there would be justified public outcry

Where would the outcry happen?

There aren't that many public platforms with any reach, and they're all adopting similar policies.

In [blocked] no one can hear you scream!


You have misused a slippery slope argument to turn the other position into a strawman. There are risks to this FB ban, but silencing outcry about the ban is not one of them.

In practice, the distinction between banning a subject and banning conversation about the subject's ban is easy to make. For example, Germany bans racist speech, but does not ban speech about whether the racist-speech ban should continue (and this particular debate is definitely alive and well in Germany). Even in the US we've banned certain words from broadcast TV for many years, but this has never limited people from discussing whether the ban should continue!


>In practice, the distinction between banning a subject and banning a conversation about a subject ban, is easy to make

And equally easy to abuse, something which has happened time and again:

Complain about the "ban of X"? Discussion shut down as supporting X.


Germany and the US are constitutional republics with independent judiciaries that adjudicate these things.

At FB, Twitter, Snapchat, etc., whoever happens to work in the subject-banning division makes these decisions arbitrarily, unless overridden by top management.


> In practice, the distinction between banning a subject and banning a conversation about a subject ban, is easy to make

Apparently not for Facebook’s workforce of moderators/censors. Just the other day: https://mobile.twitter.com/OzraeliAvi/status/111040092879067... The day before that, they banned a local satirical comic, and this keeps popping up regularly.


> There aren't that many public platforms with any reach

Okay? Private organizations are under no obligation to provide you with a platform to spread your message.

If you want, you can start your own social network or host your own blog.


This is a great argument against things I didn't say.


Then perhaps if you could be a little clearer about what you are saying, we could have a discussion about that.


> They wouldn't do that, because there would be justified public outcry.

This is an argument for mob rule. I'm not sure that's as comforting as you intended.


>They wouldn't do that, because there would be justified public outcry.

Somehow saying "yes, these multinational corporations could exert undue influence over a political system, but they just wouldn't" does not seem sufficient. I feel that such an attitude is like saying "The US government would not spy on its own citizens -- they wouldn't do that, just imagine the public outcry!" Perhaps that is a bad analogy, but the issue here is that we are nearing the point where "oh, they wouldn't do such a thing" becomes untenable.

The CFO of Google said, in the leaked video[1] of the TGIF immediately following Trump's election, that they would use "the great strength and resources and reach we have to continue to advance really important values." Going by the reactions of everyone in that meeting, their efforts are certainly not impartial or apolitical.

[1] https://www.youtube.com/watch?v=FRf9UxsM-NE


What's the alternative? Governments dictating what kind of speech private communications platforms can and cannot allow? Is that better?


> They wouldn't do that, because there would be justified public outcry.

If Facebook and Twitter suddenly decided to de-platform and ban any discussion praising, defending, or favoring a political party, where would the outcry happen? On Facebook? Sorry, it’s banned!


Traditional media, competing social media, forums, blogs, posters, demonstrations, sticker campaigns.

Journalists would be absolutely salivating at the thought of writing about Facebook's new policy, especially if it was that blatant.

FB and Twitter are not the entire world. It's good praxis to be involved with people in the real world.


No they wouldn't, because this already happens and you hear nothing.

For instance, in the UK, Facebook banned the pages of a political party and very little was said, because the same kinds of people who control Facebook also control the mainstream media, so they're almost always in agreement. If anything, journalists would really love Facebook to ban even more stuff to achieve the 'right outcomes', as they see it.

https://www.pcmag.com/news/359847/facebook-bans-far-right-uk...


> the same kinds of people who control Facebook also control the mainstream media

There's that conspiracy language again!

And then you go on to link a piece of "mainstream media" that writes extensively about it! Did you mean to prove yourself wrong?


PC Mag is mainstream media, now? How many readers do you think it has compared to a national newspaper?

It's hardly a conspiracy - the worldviews of these people are formed in the same crucibles and result in the same outcomes. They want to manipulate the narrative to ensure the right outcomes, in their view.


> PC Mag is mainstream media, now? How many readers do you think it has compared to a national newspaper?

You're comparing apples to oranges. What's its ranking among tech/PC websites?

How about the Guardian?

https://www.theguardian.com/world/2018/mar/14/facebook-bans-...

How about BBC?

https://www.bbc.com/news/technology-46746601

https://www.bbc.com/news/technology-43398417

How about Wired?

https://www.wired.co.uk/article/facebook-britain-first-far-r...

Is that mainstream enough for you?

> They want to manipulate the narrative to ensure the right outcomes, in their view.

Prove it.


I said very little was said, not literally nothing. We see articles in these outlets decrying tech firms for not doing enough to combat 'extremism' nearly every day. How often do we see mention of political parties being banned? It's not discussed anywhere near as much.

Look, the original comment I was taking issue with said this:

"Journalists would be absolutely salivating at the thought of writing about Facebook's new policy [of de-platforming and banning any discussion praising, defending, or favoring a political party], especially if it was that blatant."

That clearly isn't the case because it's happened already and journalists didn't salivate over it - they reported the event once and then it was never brought up again.


> I said very little was said

Really? Because from the citations I've given, it looks like a lot was said.

> That clearly isn't the case because it's happened already and journalists didn't salivate over it - they reported the event once and then it was never brought up again.

Because it was uneventful. It was a universally reviled thing that got banned, and rightfully so. There doesn't have to be "another side" to a story when that "other side" is filled with nothing but hate.


Britain First (the political party in question) is a fascist white supremacist group, known for harassment actions and violence, particularly against muslims.

They're the British equivalent of the NSDAP.


> Luckily, as moral agents, we humans are capable of differentiating "general political party" from "white nationalists", and can target the latter and not the former.

This is highly debatable. There are plenty of examples of benign movements and opinions that have been stifled violently by society and the state.

You can find, with little effort, plenty of people today calling for censoring or even violence towards harmless liberal or conservative people because they are "communists" or "fascists". People are not good at judgement and measured response.


Luckily, in civilized societies we have a system of checks and balances in place to ensure that doesn't happen.


Do we anymore?

Facebook and Twitter have become a digital public commons for discourse today. The check and balance is "what Facebook decides". That doesn't exactly seem like an adversarial check and balance system to me.


> Facebook and Twitter have become a digital public commons for discourse today

Absolutely correct; however, they aren’t the only places for public discourse. People have never been able to demand that a newspaper print their article or that a magazine include their story. People have always had the choice to start their own newspaper or magazine and build their own audience, and this is still true today; in fact, it’s much easier than it’s ever been.

People whose business is access to an audience, whether they are newspapers, music venues, theaters, magazines, etc., have almost always had the freedom to set their own standards, and it isn’t clear to me why business owners shouldn’t have this freedom anymore.


>Absolutely correct, however, they aren’t the only places for public discourse

Small comfort if they are the main places for public discourse; cutting people and ideas off there essentially relegates them to far less reach.

Strange how when some foreign state censors FB or Twitter it's an outrage, but when FB or Twitter censor people directly, "there are other places".

Not to mention monetary deplatforming (e.g. Mastercard, PayPal, Patreon, and co. not allowing funding), in which case there are no "other places" (not many, anyway, and none reputable enough for someone to go pay there).

>People have never been able to demand a newspaper print their article or that a magazine must include their story

Which is irrelevant, since newspapers and magazines were always top-down affairs, written and curated by a specific team. Social media platforms were supposed to be open to society (hence "social"), not only for a select team of journalists to have accounts there.


> Strange how when some foreign state censors FB or Twitter it's an outrage, but when FB or Twitter sensor people directly "there are other places".

Because a government has a monopoly on violence, while a private company has freedom of association. You're conflating two different situations that are only superficially similar.

> Social media and platforms were supposed to be open to society (hence "social")

Yep, and that didn't work out so well. Hence, the bans.


The litmus test I think we should use is "We should honor the intent of the user".

If a group of users explicitly wants access to white nationalist content, they should be able to get it. So I would oppose blogs, webhosts, and Cloudflare deplatforming anyone for any reason besides the outright illegal.

Facebook and Twitter are not just about serving content to those who have the intent to view it; in fact, the whole point of these social networks is that they expose content to NEW people who didn't initially have any intent to view it. This is promotion, not access, and I have no problem with private entities choosing what they want to promote.

I would apply this same test to payments. Users who have the explicit intent to financially contribute to objectionable content creators such as Alex Jones should still have a way of doing so. When Patreon, Mastercard, etc. deplatform him, it closes the door on those who already have intent. Of course, I'm all for FB and Twitter shutting down the campaign so the word wouldn't spread nearly as far.

As a moderate liberal who finds sexual content over-censored, yet is disgusted by right-wing and anti-vax (anti-vax is often leftist!) conspiracy theorists, I think this "honor their intent" test is a great way to keep the internet relatively sex-positive, and extremist content relatively niche.


This seems like a relatively moderate view, and I like how it breaks down the individual freedom of intentioned users vs. the freedom of users who have no intention of seeing said content.

But at the end of the day, aren't the companies who are providing payment processing or website hosting profiting off of extremism and hate? If you'll recall, the reason why they started deplatforming individuals to begin with was that large swaths of people boycotted their services until they chose to no longer do business with said extremists.

Isn't that voting with our dollars? Isn't that the Free Market of Ideas in action?


Oh, was that a thing? For example was there a lot of outrage and pressure on payment processors to deplatform FetLife, or for Patreon to remove cam girls? I don't recall anything along those lines.


An organization choosing not to publish someone is not censorship. People choosing not to listen is not censorship.

A government choosing what information you have access to IS censorship.

There are many organizations, anyone can start one. There is only one government and you can't escape it.


That wouldn't be a problem if these organizations hadn't captured 95% of the discourse. There is nothing in the definition of censorship that requires it to be done by the government.

The same sort of power brokers that would drive censorship in a place like China are the ones who fund political campaigns, found think tanks, control media empires, and choose advertising spend, and they use this leverage to drive censorship on social media.

In the end, if the rich and powerful have effectively squelched dissent, does it matter whether it was done through government mandate or through some more complex mechanism of private means?


While true by the dictionary definition, the commonly-understood colloquial definition of 'censorship' is government censorship.


I don't believe that to be true, as evidenced by this debate itself and the proliferation of this same debate across the internet.


'a problem' and 'censorship' are two different things.


The censorship by private organizations would not be a problem if...


Luckily, "civilized societies" are some of the most deluded about this point.

From McCarthyism, to J. Edgar Hoover, to MLK, to Gary Webb, to WMDs, to the Patriot Act, to Snowden, to the "collusion" BS, to today's de-platforming, the establishment and the media easily stomp on whoever they don't like, with impunity.


So, can we as humans learn from history? Can we establish better, more thoughtful, and more balanced societies as time and our understanding progresses?

Or have we already built the pinnacle of society at some past point, and everything we ever do in the future is doomed to be as bad or worse than what we already have?

I'd like to choose optimism here, personally.


Well, starting with speech platforms free for everybody, and letting people make up their own minds, would be a good start.

Banning ads would also be another good start, but I don't see the idea getting very popular.


No, it would not be a good start. How do we know this? Because that's what the actual start was. And it led to Facebook becoming the carrier of all the hate people could convince each other to accept. And people targeted each other, conditioned each other, to normalize this hate and acceptance of violence against 'others'. And so we have the situation we have today, where Facebook was forced to acknowledge that they became a platform for hate.


How about instead of a one-stop-shop social network, we go back to the random topic-specific forums of yesteryear? It decentralizes discussion, allows individuals to freely associate among themselves, and doesn't result in a "one size fits all" mentality when it comes to moderation.


Indeed, this is a decentralised problem in need of a decentralised solution. Also relevant are https://hypothes.is/ and IPFS.


>They wouldn't do that, because there would be justified public outcry. Luckily, as moral agents, we humans are capable of differentiating "general political party" from "white nationalists", and can target the latter and not the former.

Why white? The CCP is running concentration camps for hundreds of thousands of Muslims right now, yet I see no ban on the only party that can be called National Socialist today, just because the nation they are supporting is yellow.


Which banned party (on Facebook) are you talking about? You said, '... yet I see no ban on the only party that...'. Which banned party are you comparing it to?


The CCP obviously (Chinese Communist Party, aka CPC), which he already mentioned.

Not sure if CCP has a FB page, but that far is obvious.


Is it though? I can't find anything that explicitly says that Facebook has banned the CCP. Could you provide any sources which say this? Also, the post I was replying to said:

> The CCP is running concentration camps for hundreds of thousands of Muslims right now, yet I see no ban on the only party that can be called National Socialist today, just because the nation they are supporting is yellow.

I don't see any way to infer this as saying that Facebook has banned the CCP–what am I missing here?

Also, if Facebook has indeed banned the CCP, well, turnabout is just fair play–after all, the CCP has banned Facebook from China.


This discussion is about FB and, more broadly, speech on private platforms. Not the Chinese Communist Party or any governments.


Ignoring the obvious attempt to paint fascism as somehow not right-wing, which is all too common on the right today: FB, Twitter, etc. make exceptions for governments and public figures, including Donald Trump and the DoD.

I don't think they should, but I also know that if they didn't, Trump and others would get banned, and the cries from the fake free speech worriers (who are quiet as heck on, e.g., BDS) would be a lot louder.


>Luckily, as moral agents, we humans are capable of differentiating "general political party" from "white nationalists", and can target the latter and not the former.

Really? How about plain nationalists? How about communists? How about separatists? How about traditionalists? How about "Occupy Wall Street"? How about "nationalists" in e.g Iceland, a country where nationalists would be predominantly if not exclusively white in the first place?

If your idea of a "general political party" is Democrats and Republicans and the occasional third candidate, i.e. the bland two-party consensus that agrees on almost everything (foreign policy, more money to big money, etc.) but disagrees on token issues (and increasingly less even there), then sure.

But the movements and parties that change things up historically were never welcomed as "general political parties" by the establishment and the "good people" of the 10%.

I'm from a generally center-left-leaning country, where e.g. Reagan would be considered the epitome of nationalist and/or imperialist. Would it be OK for Facebook to censor Reagan (or some modern politician with the same ideas) on those grounds?


> Really? How about plain nationalists? How about communists? How about separatists? How about traditionalists? How about "Occupy Wall Street"? How about "nationalists" in e.g Iceland, a country where nationalists would be predominantly if not exclusively white in the first place?

Are they talking about racial genocide? No? Then they're a-okay.

I don't know why this is such a difficult concept for people to understand: If you're advocating genocide or violence against a people, you're gonna get banned. Pure and simple.


Yes, they wouldn’t do that. But no, checks and balances cannot be based on something that would or would not happen.


> we humans are capable of differentiating "general political party" from "white nationalists", and can target the latter and not the former.

We’re also good at being boiled slowly, like frogs.


Which is debunked.


Regarding frogs, maybe. Regarding people, it has never been debunked, and it has been proven time and again to be exactly right.


This is correct. Frogs just aren't that dumb; it's been proven. Humans, on the other hand...


You say that as if we didn't just have two years of progressives and media smearing people they disagree with as alt-right.

Doesn't matter what they believe or say: Sam Harris? Alt-right. Tim Pool? Alt-right. Jordan Peterson? Alt-right. Dave Rubin? Alt-right. And that's just the leftists!


Are you sure? Do we even know, for sure, what "white nationalist" means? I'm white, and I favor a government that's principally oriented toward advancing this nation's interests. Am I a white nationalist?


You're right that some of these terms have ambiguity, but your example is silly - it's well-accepted that the term "white nationalist" stands on its own and means something different from "white and a nationalist."

But even setting aside your specific example, I don't believe that ambiguity should paralyze us into inaction. I think it's fair to say "OK, we'll ban anyone who advocates distributing political power based on race, with the white 'race' getting the most power... when people cross that line won't always be clear, but we'll do our best." For private action especially, we should not let the perfect be the enemy of the good.


> You're right that some of these terms have ambiguity, but your example is silly - it's well-accepted that the term "white nationalist" stands on its own and means something different from "white and a nationalist."

Stephen Colbert would disagree: https://www.youtube.com/watch?v=4nk0dUjYUNI

"You know why you're not supposed to use that word [nationalist]? Because it's the second half of 'white nationalist'. Chopping off the first word doesn't change what it means in our minds."


Stephen Colbert is also a comedian and not a linguist.


Good thing Facebook moderators are linguists. It was high time they got a real job.


> it's well-accepted that the term "white nationalist" stands on its own and means something different from "white and a nationalist."

Is it?

Or is this a deliberately conflated term promoted as an 'official' label in public discourse in order to dissuade association with those favoring the more benign meaning?

Agreed that ambiguity shouldn't create inaction - but it just as well shouldn't promote incorrect action either.


> Or is this a deliberately conflated term promoted as an 'official' label in public discourse in order to dissuade association with those favoring the more benign meaning?

Why would you use the terms 'white' and 'nationalist' together? Being white has very little to do with being a nationalist, unless you believe it has everything to do with it, in which case you would be racist.


Maybe you wouldn't, but if you happen to be white and hold an (inclusive, not race-based) nationalist ideology, you can now conveniently be smeared just by describing those two facts...

Oh her? Don't listen to her, she's a 'white nationalist'.

Also, assuming this confusion is real, with the term 'white nationalist' existing in the discourse as a negative, those who are not aware of the nuance between 'whites who happen to be nationalists' and 'those promoting a white nation' are pre-biased by faulty discourse to discount the words of the former.

Any popular terminology which deliberately overlooks subtlety and dismisses it when it is pointed out in the discourse is problematic. It's effectively a subtle smear campaign against the non-problematic nuances.

See also the strangely similar situation with the term 'skinhead' -

Initially this was a mostly apolitical working-class subculture; most listened to soul and reggae and smoked pot, and many were apolitical or left/socialist leaning. Generally mildly populist, mostly white, but yes somewhat 'dangerous' in that it was a popular social movement of unconventional rowdy people of all stripes. (Much like the 'disenfranchised trumpians' that the media is happy to highlight as contributing to the rise of the so-called 'white nationalism' we're talking about here.)

Cue one politically motivated overtly racist subgroup acting up and stealing all the headlines, and now the entire term/culture is essentially taboo..

One can argue that this group just got the press and 'messed up the term', but at some point editorial bias is a factor.

For God's sake, this is 'hacker news', I shouldn't need to explain this.


> it's well-accepted that the term "white nationalist" stands on its own and means something different from "white and a nationalist."

Many right-wing people claim that in practice, there's not: they're accused of being "white nationalists" for being "white and a nationalist".

Edit:

If people downvoting me think I'm wrong, explain why people like Jordan Peterson, who merely espouse non-leftist positions and happen to be white, routinely get accused of supporting the alt-right and neo-Nazis.

For the people who doubt what I'm saying -- here's a video of Jordan Peterson. This is who the leftists routinely call "neo-Nazis" or "alt-right": moderates who refute their positions and calmly assert values like personal responsibility over collectivist victimhood culture.

https://www.youtube.com/watch?v=o2bFzK2EdIo

Edit 2:

I'd originally written "conservative"; I'm not sure Mr Peterson would describe himself in such terms. I've made it more neutral.


I concur that terms like "Nazi", "white nationalist", "alt-right" and similar have essentially morphed into "person I don't like".

And this applies to both ends of the political spectrum: https://www.youtube.com/watch?v=nYlZiWK2Iy8


> have essentially morphed into "person I don't like"

I’d say it’s even closer to “people I disagree with”.


For many people the latter implies the former.


"White nationalism" is an overloaded term and until tonight I'd never heard the official definition of white nationalism until GPs comments.

I've just assumed it was a vague insult aimed at non-nominal conservatives.


Not sure how you reached the conclusion that white nationalism doesn't have a real meaning. Googling "white nationalism" gives plenty of results that indicate "white nationalism" is an ideology that promotes white supremacy and/or a whites-only nation/racial segregation.

To quote Merriam Webster:

"one of a group of militant whites who espouse white supremacy and advocate enforced racial segregation"


To be clear, I stated that I'd never heard the official definition and used context clues to come up with a running (incorrect) definition.

I am not of the opinion that white nationalism has no real meaning.


Words have meaning, and people know what those meanings are. "White nationalist" does not mean "white people who like their country" and it's disingenuous to pretend it does.


It's much more disingenuous to pretend that words and phrases have universal meanings.


Universal isn't necessary. Well-understood in the American political context is sufficient.


There is an infinite number of definitions for “like their country”.

Nazis in Greece like their country or what they perceive as “their country”.


Wrong. It's a deliberate attempt to language engineer the idea that loving your country is bad. This is not new, the power structures that benefit from centralization (correctly) see national sovereignty as an obstacle.

Note how you will never hear the term "ethnic supremacists" used by conventional media. It's too accurate and does not push the borderless agenda.


I don’t think anyone’s mentioned policing patriotism. You can be as patriotic as you want. America’s great, I love our nation, it’s people, culture, and ideals.

That’s different from saying things like “get that Spanish off the menu, this is America”, “go home foreigners”, and “immigrants are criminals”.

I’m sure you can see the difference.


There will always be some junk quotes people can find (or make up!) to support their agenda.

I don't really care what FB does, I prefer it to have all the rope it needs, but this normalization of taking and modifying the meaning of terms to fit the anti-borders narrative is dishonest and manipulative. Again, it's not even remotely a new thing, the anti-borders crowd has been gunning against nationalism time eternal.


"Loving your country" is not "white nationalism." You're the one trying to redefine terms if you believe it is.


It's an inherently dishonest frame. I cant imagine running around talking about "(insert color) nationalists" when actually referring to people promoting segregation of citizens.


“White nationalism” has been synonymous with segregation and genocide essentially since the inception of the term. I guess if you care really hard about that particular phrase this is a tragedy. But there are ample other, less fraught ways to express that you’re an American patriot. Bemoaning that you can’t say “white nationalist” to mean that seems a strange (or disingenuous) stance to take.


I called the phrase inherently dishonest... and somehow that indicates I wanted to use it for some other context? Like I had positive use for it?

Your comment suggests you have internalized the idea that nationalist and racist are the same thing. Or were you really thinking I wanted to use "white nationalist" (or any color) in some other context?


Isn't the phrase intentionally self selected by these groups? Then what makes it dishonest to use it to refer to their beliefs?


So totally ignore what "inherently dishonest frame" means, and attempt to change the subject. Fine.

Let's take your question as true for the sake of discussion. Do you take your language cues from these people? I don't. I don't see why you would let people you strongly disagree with decide what language to use. Are you concerned you might offend them by not using their preferred terms?

Might you be opening yourself up to some rather trivial social engineering opportunities?

Anyway... def don't talk about frames.


Are you implying outsiders won't find it completely obvious what they really are because of the name? Because it is obvious.


Why take language cues from people so eager to describe things in terms of skin color?


“White nationalism” is an inherently racist concept. Not all nationalism is though. You seem confused that adding the word “white” to nationalism makes the whole phrase mean something else entirely. Maybe consider the context of who uses that phrase now and how it’s been used historically to understand why that particular construction is broadly (and correctly) considered racist and genocidal. Or why other uses of nationalism with other nationalities or colors aren’t.


First you assumed I was "Bemoaning that you can’t say “white nationalist”" and now you are assuming I am confused and don't know what the colloquial use of the term is.

What do you think "it's an inherently dishonest frame" means?

Frames are important, it's why I asked why you thought I was bemoaning the loss of a phrase when I was really describing how the phrase itself is dishonest.


[flagged]


That depends. When you say 'Keep Poland Polish', what exactly do you mean?


Yes. Because history demonstrates our ability to do that - not to allow factions like the KKK to triumph.

We’ve done this kind of thing before, and we’ll do it again. Facebook responding like this IS the marketplace of ideas reacting.


I've noticed this tendency among a lot of right-wingers. Lately I've seen it a lot in metalhead circles, particularly black metal, which does have a bit of a Nazi problem in some parts.

They'll crow and gloat about how the scene isn't a safe space, that it's founded in hate and intolerance. But as soon as well-known nazi/NSBM-related bands or members of the scene are deplatformed and antagonized for their views, the tone goes straight to "get these leftists out, this is a right wing scene! This is censorship!" and so on.

Apparently only right wing radicals should be allowed to have safe spaces... /s


We know, for sure, the dictionary definition. Consider looking it up & assessing whether you wish to apply the label to yourself.


>Consider looking it up & assessing whether you wish to apply the label to yourself.

But this isn't what will happen. Expressing skepticism about immigration - a fairly normal and mainstream opinion in the 1990s and early 2000s - can easily be construed today as "white nationalism" by many on the political left. It's not you who gets to define what your views are on these platforms, it's the "moderators".


Openly questioning the validity of the capitalist system has been construed as hardcore hunger and famine bread lines planned economy USSR-style communism for decades.

Welcome to not having a safe space anymore.


Openly questioning the validity of the capitalist system is fine. Facebook shouldn't be banning people who don't like capitalism from their platform either.


You are complaining about your ideology being ridiculed because it's bad and leads to awful results; your interlocutor complains about not being able to voice his opinion without actual persecution.


Will neo-Nazis be able to sidestep appropriate regulation by framing hate statements as questions?


Yeah it means neo-Nazis who believe in the innate supremacy of the white race. If you’re one of them I want you to be deplatformed immediately and without sentimental hand-wringing about liberal platitudes.

This will be an ongoing project and will require constant tweaking and adjustment, but I’m all for their removal from the public sphere.

There are underlying economic and social issues leading to the reemergence of this worldview that need to be dealt with urgently and with peacemaking intention. Still, in the meantime this kind of thought if unchecked leads to genocide and must be stopped.



In Sweden, perhaps not, since 'white' and 'the nation' largely coincide there, but in the US you are going to have a problem.


You think so? Republican stuff is already regularly banned from various platforms, mainly because tech companies lean left.

I don't consider myself Republican, but the slippery slope seems all too real in this case.


I'm curious, which platforms ban Republicans?


OP said "Republican stuff", which if he meant that various conservative/Republican leaning stuff has been censored on Facebook, that's definitely true. Search around; most cases typically range from "persecution complex/not actually censorship" to "automated processes have implicit bias" to "neutral on the surface but always seem to hurt conservative causes". With FB you tend to see automated things catch conservative stuff, then get overturned on appeal rather than permabanning people.

Facebook banning outside ads for the Ireland abortion referendum comes to mind, as well as the case where a Christian satire site was threatened with a ban for spreading false information because Snopes had fact-checked one of their articles as false. Susan B. Anthony List had a bunch of pro-life ads banned; I think most of them were restored.

https://stream.org/328684-2/ this is a more recent case that follows a familiar pattern: Flagged, rejected appeal, then mysteriously overturned appeal some days later.


I'm sure that tons of conservative/Republican leaning stuff has been censored on Facebook then overturned on appeal.

Hopefully, you also don't doubt that there are also many examples of Facebook censoring liberal/left/non-Republican whatever stuff, and then overturning on appeal, many times mysteriously so, after initially rejecting the appeal. (This post presents like a dozen examples in the genre of black activists being banned for discussion of racism, such as uploading screenshots of racist and sexist harassment they received. You might also want a warning that these examples are interspersed with the author's strongly opinionated and bellicose comments. https://medium.com/@thedididelgado/mark-zuckerberg-hates-bla... )

Due to their scale and the shittiness of their algorithms and processes at this kind of thing, surely you're not surprised there are tons of examples on every side. (This article documents more examples on all sides, including exemptions for a prominent Republican: https://www.propublica.org/article/facebook-hate-speech-cens... )

What reason do you have to believe these mistakes always seems to hurt conservative causes or that you tend to see conservative stuff get caught, disproportionately more than non-conservative stuff?


Can't know for sure, of course. If someone wants to start keeping score, I'd be interested in the results.

If you want to convince me that it's more of a "establishment/fringe" divide than "left/right", that ProPublica piece goes a long way towards making that case.


It's generally considered a "conservative" or "republican" position to contend that males and females differ biologically and that sex is an immutable characteristic. On twitter, if you express this idea towards someone who is trans or advocating trans-issues you will very likely get banned.


You know, those world renowned leftists that run Twitter and Reddit...


> Tim Pool is right, at this rate, sooner or later, the Feds will come knocking and will shut that party down.

What party? The government is gonna come into a private organization and tell them they are obligated to spend money to preserve, host, and broadcast hate speech that doesn't align with the company's values?

Please tell me how this differs from telling small business owners they can't refuse service.


The safe harbor protections from copyright violations for user uploads require neutrality in content hosting, because pre-approval implies you must also vet copyright.

They're welcome to censor to editorialize, but they lose their protections from copyright suits for relaying copyright material uploaded by their users.

For Facebook to be immune from copyright liability for my uploads, when they display them to others publicly for profit, they cannot exercise prior restraint over my upload. Such commercial copyright violations carry hefty penalties, in the thousands of dollars per view: Facebook and Google can't operate in an environment where they're liable to such a degree for uploads.

The government recognized this about internet services, and granted them immunity to copyright related suits in exchange for supporting the American value of free speech on their platforms. (This is a law.) They're free to not accept that deal, but they're liable for their commercial copyright infringement in that case.


Are you a lawyer? If not, can you point to a legal expert explaining why a court would interpret the law as requiring neutrality, or as waiving the protection in the case of pre-approval or vetting copyright, or as requiring that the immunity is only provided in exchange for supporting free speech?

I read and re-read both the DMCA 17 USC § 512 and Section 230 of the CDA, and as far as I can tell, the DMCA only requires responding "expeditiously" to DMCA takedown notices and court orders, and the CDA has no conditions at all but doesn't protect from copyright liability in the first place.

In fact, the CDA explicitly states that its liability protection DOESN'T require neutrality, and extends to “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected”. See https://www.eff.org/issues/bloggers/legal/liability/230


There's a question of whether a service which pre-filters (to editorialize) content from a user is actually qualified under 512(c) at all -- since the content is no longer at the direction of the user, but at the editorial approval of the service.

An anthology is not exempted under 512(c).


I see. So am I understanding correctly that when you had asserted, in the comment I was replying to, that protections from copyright violations require neutrality in content hosting, you were not referring to settled case law, but rather an untested legal theory that you personally support but that has not been tested in the courts?


I am a lawyer, and while I know nothing about the law in question (not even close to my specialty), I can tell you that statutes are only half of the story. The other half would be court decisions involving those statutes. We live in a common law country which means that court decisions & precedents act as law themselves. They can't just overturn a statute or ignore it, but every court decision on a statute acts as a further refinement to the law in question.

So we'd have to pull relevant case history. You can't just look at the words in a statute. Each single term of art could have a chain of cases arguing over the specific meaning of that term.


> The government recognized this about internet services, and granted them immunity to copyright related suits in exchange for supporting the American value of free speech on their platforms. (This is a law.)

No, it's not. You seem to be mixing an inverted understanding of CDA Section 230 (which was instituted to avoid discouraging host moderation of user content, because prior to 230 exerting any such editorial control risked a host being treated as a publisher rather than distributor, with greater liability exposure for the user content) with the DMCA safe harbor (which, unlike CDA 230, applies to copyright claims.)


> The safe harbor protections from copyright violations for user uploads require neutrality in content hosting, because pre-approval implies you must also vet copyright.

Is this an 'A therefore B' thing due to the way the laws are written, or is this explicitly called out someplace?

> The government recognized this about internet services, and granted them immunity to copyright related suits in exchange for supporting the American value of free speech on their platforms. (This is a law.)

I did a bit of looking (though not a lot) to try and find some sources on this but I wasn't able to really uncover anything that supports this.

Do you have any source available? I'm interested in reading into this idea further, I find it rather fascinating.


Look up Communications Decency Act, Section 230. This is a good primer: https://www.eff.org/issues/cda230


I read the entirety of the page you linked to. Nowhere I found does it say that neutrality is required for the protection to apply, nor that pre-approval or vetting copyright would waive the protection, nor that the protection is in exchange for supporting free speech. Could you point that out to me?

In fact, it links to another page which states:

    Wow, is there anything Section 230 can't do?
    Yes. It does not apply to [...] intellectual property law
https://www.eff.org/issues/bloggers/legal/liability/230

Which directly contradicts your implication that Section 230 is the law that grandparent was referring to when they stated:

    The government [...] granted [internet services] immunity
    to copyright related suits [...] (This is a law.)
Furthermore, Section 230 explicitly states that its liability protection DOESN'T require neutrality, and extends to:

    any action voluntarily taken in good faith to restrict
    access to or availability of material that the provider or
    user considers to be obscene, lewd, lascivious, filthy,
    excessively violent, harassing, or otherwise objectionable,
    whether or not such material is constitutionally protected
(same link)


> Look up Communications Decency Act, Section 230

Sure, but it (1) doesn't apply to copyright—that's the DMCA safe harbor not the CDA one, and (2) was specifically created to eliminate the added liability web hosts were then subject to if they engaged in content moderation, not to require them to abstain from moderation to secure the safe harbor.



No neutrality is required, only best effort removal of illegal material

https://www.lawfareblog.com/ted-cruz-vs-section-230-misrepre...


The US government forces companies to spend money on many things they don’t want to do. Regulatory authority is extremely broad.


The government tells power and phone companies that they can't discriminatorily cut off service to people.

I'm pretty libertarian with respect to market regulation, but when it comes to monopolistic industries you need regulation.


> Do we wait until it can be shown that companies such as Facebook, Twitter, Google, etc. influenced elections because they refused to serve information from a candidate they didn't like? If we aren't already there, it's not long before it is.

That's an interesting question. What if Google and Facebook were to secretly demote and hide articles about one political party and promote articles about another? Theoretically they are absolutely free to do so, but do you think it would trigger a Congressional investigation or not? Those people made noise about much smaller things.

Also, what if Google would start demoting its competitors and everyone who deals with those "untouchable" companies? That would be interesting to see.

I think people don't realise what power these new media have. The monopoly of Facebook or Google is a serious issue and we should not believe that they will always stay neutral.


Well, sure, they could do all sorts of bad things. Normally we don't go after people until they actually do those bad things though, instead of removing their ability to act because they might.


This already happens. "Mainstream Media" doesn't give equal coverage to 3rd party candidates.


Or even DNC candidates, if they're actual progressives: https://decisiondata.org/news/political-media-blackouts-pres...


[flagged]


Here's a recent quote from Alan Dershowitz that's illustrative:

> I received off-the-record information that an order had come from the very top: CNN executive Jeff Zucker didn’t want me on CNN any more. My centrist, nuanced perspective was anathema to CNN’s emerging brand as the anti-Trump network.

That should make it clear that a very small number of media executives ultimately decide what we see on the news.

https://thehill.com/opinion/white-house/436059-alan-dershowi...


Not to go way off topic here, but Alan Dershowitz is the type of person to say anything as long as there’s benefit for him (see: extremely highly paid defense attorney for billionaire child rapist Jeffrey Epstein, and last year Harvey Weinstein), and while he may not be wrong, your comment sort of holds him up as credible on the inner workings of the media world. Nobody should take that man at his word.


As a lawyer, it's his job to defend people accused of horrible things. If we aren't going to celebrate the fact that accused people can get a good defense, we might as well just do away with trials altogether.


I wasn’t arguing about the merits of having aggressive attorneys; I was just indicating that this is a person whose entire career is built around distorting the truth in his (or his clients') favor.


I agree that it's important to have lawyers defend people accused of horrible things, because they could be innocent of those charges. Where I disagree: when a single lawyer or group of lawyers constantly defends a group of people connected through power and money, it becomes a lot less clear that they're performing the moral duty of defending all accused, rather than defending the accused who will line their pockets the most. The problem here is the one observed in the justice system as a whole: routinely, people with less money are disadvantaged due to the lack of money to "convince" these lawyers, who shield their actions behind the high-minded ideal of defending all accused individuals.

In theory: all accused get representation regardless of accusation. In practice: only the rich who are accused of vile things get a proper defense.


Why should we trust Alan Dershowitz without proof? Giuliani is a lawyer and he goes on TV all the time to lie about easily verifiable facts.


Perhaps the other way around. The media was Hillary’s tool. She asked for increased coverage of Donald Trump because she thought she could beat him more easily than anyone else (if you don’t like Trump, thank Hillary), according to the "pied piper" strategy from the Podesta emails. She was funding the DNC so she could deliver or withhold media access.

Small point: while I agree overall, I’m not sure the media ‘selected’ Hillary. I think the Obama Admin did, and after all, it was her turn.


Neutral might not exist.

> It seems to me that Facebook and Twitter are trying to have it both ways.

Of course Facebook and Twitter are trying to have it both ways, and, indeed, all ways — there are many more ways for people to communicate, or stances to take, than “both”.

Platforms like FB and Twitter hope to be the communications backbone of the world. The problem is, the world has opinions on what kinds of communications are acceptable. These platforms try to stay neutral, but the people are not.

Neutral does not exist. It's all relative.

In a polarized world, a neutral platform will die because neither side will like it. In a more interesting world, it might still die because people don't like people who don't think like them.


I like your idea that "neutral might not exist." FB is going about this in the obvious control-oriented strategy: we have a problem, ok, we'll make a rule against it. This doesn't work, and can only lead FB to having lots of rules and everyone unhappy with them.

The problem with FB is that they have built a system that rewards polarizing opinions. W. Edwards Deming said that your system is perfectly set up to give you the results you are getting, so if you want different ones, you need to change your system. Incentivize quality, disincentivize "viral-ness". Maybe limit viral-ness. Optimize for something besides addictiveness^Wengagement. Admit that people think, say, and do harmful things and build a system that is robust to it. Add some kind of negative feedback for posts.


Here's another "way" that's ignored: Nearly all services give users little to no control over the content they see. They can't self-moderate or filter the content coming their way. Instead, users have to "Appeal To Authority" (whether Facebook or the Feds) in order to make changes. It's incredibly disempowering in both cases, and doesn't need to be this way. "Mods" on Reddit help. Page Owners on FB pages help. However these are still "Authorities" that must be appealed to. Even resorting to contacting advertisers to pressure them to not sponsor "bad" content is still an Appeal To Authority.

Essentially, you have no control so the only solution is to not participate or appeal to a higher authority. Both are terrible.

In a weird tangential side-thought: The Internet is to Western Capitalism what Glasnost/Perestroika was to the Soviet Union. The opening of information, while allowing many great things through, also removed the filters that kept harmful content on the margins. In the Soviet state, it was the authority of the State that dictated content. In the Western world, it's mostly those who own/control large media platforms. Since liberalization, each situation found The Authority under acerbic attack from these new wellsprings of content, both legitimate and illegitimate.


"Imagine if there were only three newspapers in the entire country, soon the world, controlled by a small group of people who wish to use their publishing for their own agendas."

Come over to Australia, that is what our traditional media has been like for years!


They're protected by the US First Amendment, unconditionally so, given that online websites by design work through publishing. You can't compel them to host unwanted speech.

As for liability, here's why the laws intentionally shield them:

https://www.lawfareblog.com/ted-cruz-vs-section-230-misrepre...

And finally, there's nothing forcing you to use them. Convenience (or lack of it) is not a sufficient argument for regulation.


>Imagine if there were only three newspapers in the entire country, soon the world, controlled by a small group of people who wish to use their publishing for their own agendas.

Could you explain how this is any different than the handful of television companies for the past 70 years?

As best as I can tell, the older demographics seem to be the ones primarily watching the news, and it shows in polling data. Their opinions and narratives are easily manipulated by the "news" networks they are faithful to. Companies run by a "small group of people."


>At what point do such things change into a public discourse problem?

When they censor opinions that the top 10% agrees with, since those are the ones who control the media and decide what's acceptable.


I don't see Germany tolerating any support of National Socialism in their public sphere, and rightly so ... some ideologies are just too bloody abhorrent and against the public good.


For now. The passing of time and new generations will result in the loss of the lessons of the past.


Contrary to popular belief, we already had laws against hate speech in the middle of the 19th century. The authoritarians used that regularly to underline their alleged persecution. And it worked, because they were right to a degree.

You can only remember the lessons of the past if you understood them in the first place. And frankly, arguing for more content controls by authorities is an insult to anyone actually having an understanding of those times.


Slippery slope. At the moment there is no dialogue. Commenting on white supremacist content on Facebook and calling it out will currently get you banned. This statement suggests that they will be more balanced from now on.


> influenced elections

In fairness to Google and Facebook, etc. TV networks have always been super biased and had far more reach than internet platforms - and it was actually conservatives who shut down (with good reason) the "fairness doctrine" that would have forced them to carry content they didn't necessarily agree with.


You listen to Tim Pool? He’s a right wing psycho that hides behind fake journalistic integrity to cherry pick stories for his target audience of white supremacists, xenophobes, and other anti-social nut jobs.


I, for one, listen to everybody I'm allowed to listen to.


We're already there.

Repeating a previous study, which had shown bias in the 2016 election, Dr. Robert Epstein shows that bias in the 2018 election pushed voters towards Democrats. Approximately 4.6 million undecided voters are likely to have flipped. Numerous districts, particularly CA-45, are likely to have been flipped by Google's weaponized bias.

There have also been an awful lot of bans of beginner politicians, all on one side, often reversed after the election (once the damage is done) with a lame excuse about algorithms making mistakes.


Do you have citations for any of this?



These are both written by the same person, and amount to spreading FUD about liberal bogeymen and painting conservatives as victims.

For example, he quotes the Hillary Clinton instant-search debacle as an example of bias, which was debunked as technical illiteracy:

https://www.snopes.com/fact-check/google-manipulate-hillary-...


I'm guessing not.


I don't believe these slippery slope arguments are sound. You can apply this kind of argument to almost every alleged free speech issue. Just make it an argument against banning <insert whatever content you find absolutely reprehensible and clearly worth banning>.


In principle, who could be opposed?

In practice, how will this actually work out?

True story. I have a friend who got a temporary ban from Facebook (I think 90 days?) for commenting on a story about a Texas billionaire paying to hunt endangered animals, "How much would it cost to hunt Texas billionaires?" That was considered a violation of their anti-bullying stance. Knowing her, it wasn't. It was sarcasm.

Moving on to this policy, I'm happy to see neonazis and the KKK have a hard time. But there are people in the UK and EU who would see Brexit as being a separatist cause motivated by racial animosity. Will we see discussion of a future such measure banned by Facebook on such grounds? And remaining banned even for people who support it on economic grounds because they think that having to follow EU policies on GDPR, copyright, and so on will be a net negative for the UK?

Facebook is going to have some difficult conversations ahead. And the more of these lines that they draw, the more difficult boundary cases they will run into.


In principle, I am opposed. I don't think hiding unwanted dialogue is very helpful in the long run. It just makes it fester somewhere else that isn't as visible. If Facebook feels they have to do something about it, then I'd suggest just flagging it. That gives people more information that they can use how ever they see fit. There is also the thought of collateral damage, like your friend ran into.

I also oppose it on general grounds in that I'm a big proponent of freedom of expression. Facebook can do what they want, we're not talking about the government here and I understand that, but I would rather they didn't go this route. I don't want to live in a society where what can and can't be said is strictly regulated either by force or by general consent. I seem to be one of the few that feels that way, though, any more.

There is also the practical concern of what happens if the ideas of what is acceptable and what isn't changes over time? Who knows which opinions that you hold now might become anathema at a later date. For example, the opinion I'm expressing now used to be a lot more common than it is today.


> In principle, I am opposed. I don't think hiding unwanted dialogue is very helpful in the long run. It just makes it fester somewhere else that isn't as visible.

Let's prod at this:

- Should ISIS recruitment videos and propaganda be allowed (nay, encouraged) because deplatforming them will make them put their recruitment pamphlets elsewhere? Where else do they put their recruiting materials?

- Should we publicly and loudly encourage people to self harm, ideate suicide, etc. because if we don't, they'll just find secret places to do it?

- Should we consider white supremacist recruiting materials "dialogue" at all? If we're talking about dialogue, as in a formal debate between Richard Spencer and pretty much anyone else, one on one, that's potentially interesting (also embarrassing for Spencer). On the other hand, a video posted by a white supremacist isn't dialogue. It's even less dialogue when they can delete comments they can't aptly respond to, and when the people there are already interested (Richard Spencer videos weren't ever going to cross my feed). You're calling this dialogue, but it's really a one-sided dog and pony show with maybe some unlucky sacrifices. Perhaps we shouldn't ban dialogue between white supremacists and normal people, but you're not advocating for dialogue, you're advocating for Facebook (and whoever else) to support (distribute, platform, etc.) white supremacist propaganda and theater.

>I don't want to live in a society where what can and can't be said is strictly regulated either by force or by general consent.

How do you propose to create a society where people aren't allowed to dislike you? That's what you're asking for, essentially. "Freedom from consequences" is the common way of putting this, but really what you're asking for is an infringement on my freedom of association. If you piss enough people off, that'll come back to bite you. What's the alternative? That people can't hold you accountable for your previous words? That quickly devolves into a society of 4chan, which, well, I'm not sure why you'd want to live in that.

>For example, the opinion I'm expressing now used to be a lot more common than it is today.

How certain of this are you? Did black people or women have the freedoms you suggest 100-150 years ago in the US? Was anyone advocating for that?


> Should we consider white supremacist recruiting materials

How do we define "white supremacist recruiting materials"? A significant number of people have said to me, un-ironically, that opposition to immigration is an instance of white supremacist speech. Same with opposition to affirmative action (they even called a crowd of mostly Asians opposing affirmative action white supremacy in action). And I live in the Bay Area, the same place where Facebook is headquartered. That's the problem with trying to define ideological blacklists. Once you put a category onto the blacklist, everyone will try to push their political opponents into that category. This is how you get things like "learn to code" becoming a ban-worthy offense (but only when directed at journalists).

Nowhere on the page linked in the original post does Facebook define "white nationalism" or "white separatism".


>Nowhere on the page linked in the original post does Facebook define "white nationalism" or "white separatism".

Do they define ISIS? That didn't seem to cause as much concern.

As for the rest of your post, I'm baffled that you somehow think that "preventing people from trying to kill people" is somehow a negative thing that should be criticized.

I also think that if I were consistently being mistaken for a white supremacist, I'd think about why that were happening, and probably try to distance myself from the things that were causing those mistakes. Your view appears to be that it's better to just prevent people from voicing their confusion.


> Do they define ISIS?

ISIS is (or was) an explicit political group, an organized proto-state. It literally called itself a State. It was very clearly defined.

> As for the rest of your post, I'm baffled that you somehow think that "preventing people from trying to kill people" is somehow a negative thing that should be criticized.

"Preventing people from trying to kill people" was already against their terms of service. The whole point of this announcement is to announce the fact that Facebook is expanding their prohibited categories beyond "preventing people from trying to kill people".

> I also think that if I were consistently being mistaken for a white supremacist, I'd think about why that were happening, and probably try to distance myself from the things that were causing those mistakes. Your view appears to be that it's better to just prevent people from voicing their confusion.

As I stated earlier, multiple people have told me that desiring stronger border security and building the border wall is white nationalist. Others have told me that supporting any expansions of immigration restrictions is white nationalist. A few have even told me that opposition to affirmative action (even among groups in which Asians are the ones primarily opposing it) is white nationalism. I did not get the sense that they were saying these things ironically or in jest. Are these views tantamount to "preventing people from trying to kill people"? I live in San Francisco, which while not exactly the same environment as Menlo Park, is still in the same metro area as Facebook's HQ. There's a significant possibility that folks with similarly liberal definitions of white nationalism exist at Facebook.

Also, the way you say you would "probably try to distance myself from the things that were causing those mistakes" really makes it sound like the chilling effect this has on discussion is a feature, not a bug. A significant number of people suspect that tech companies' expansions of prohibited speech are becoming a means of partisan manipulation. Statements such as yours likely reinforce this belief.

> Your view appears to be that it's better to just prevent people from voicing their confusion.

I am not trying to prevent anyone from voicing anything. The issue is that a significant number of people do confuse (or deliberately label) mainstream political views with "white nationalism" and "white separatism". Thus, Facebook's banning of these things is very likely to be seen as - and perhaps actually be implemented as - a means of suppressing legitimate political discussion. It probably would have been better to keep their prohibited categories the same, perhaps more aggressively police certain circles, and keep their policy focused on - as you put it - "preventing people from trying to kill people".

There's already enough suspicion that Facebook is acting in a partisan manner, and more stuff like this is going to inspire ever greater calls to enforce stiffer regulation on tech companies and perhaps even breaking them up. This announcement seems like a shot in the foot for Facebook.


>ISIS is (or was) an explicit political group, an organized proto-state. It literally called itself a State. It was very clearly defined.

Are you suggesting that groups like the Daily Stormer, the National Policy Institute, etc. are not organized political groups? The National Policy Institute is quite literally a lobbying organization.

>As I stated earlier, multiple people have told me that desiring stronger border security and building the border wall is white nationalist.

Like I said, if I were being confused with a white nationalist with any regularity, I would take steps to correct that perception, perhaps by trying to understand why people think such things. Through conversation, directly, with those people, or on my own with research. Not by trying to appeal to some higher ethics that the people who think I might be a white nationalist are wrong.

Something about everywhere you walk smelling like shit and all.


> Are you suggesting that groups like the daily stormer, the national policy institute, etc. are not organized political groups? The national policy institute is quite literally a lobbying organization.

Facebook's announcements do not specify these groups in particular, and do not seem to indicate that its definition of "white nationalism" and "white separatism" will be nearly as clearly defined as banning recruitment to the Caliphate.

> Like I said, if I were being confused with a white nationalist with any regularity, I would take steps to correct that perception, perhaps by trying to understand why people think such things. Through conversation, directly, with those people, or on my own with research. Not by trying to appeal to some higher ethics that the people who think I might be a white nationalist are wrong.

> Something about everywhere you walk smelling like shit and all.

Sorry to burst your bubble, but calling people white nationalists almost certainly isn't going to change their views. Quite the opposite: all it accomplishes is lessening the severity of these terms, making people roll their eyes when they see the label thrown around. And it further alienates people like me, who do support immigration and affirmative action etc., but are increasingly turned off by the ever-diminishing threshold at which terms like these get thrown around. Furthermore, it makes it even harder to distinguish between actual white nationalists and legitimate views that people are trying to make socially unacceptable by painting them with the white nationalist brush.

Something about crying wolf and all.


[flagged]


> So your concern is literally just "I don't know how this will be enforced." If so, why not just...wait, and voice your concern when you have evidence of facebook abusing this, instead of doing what you're doing now, which looks to me a lot like what you disdainfully refer to as "crying wolf".

After that, the damage has been done and trust in Facebook will have been diminished. And there is a very big difference between "crying wolf" (as in, actually mislabeling something benign as something dangerous) and voicing concern based on previously observed behavior. Calling opposition to immigration white supremacy is the former, saying that there's a distinct probability that Facebook's enforcement of these rules will impact non-white-supremacist views is the latter.

> Sure, but there are few enough white nationalists now that I don't particularly need to worry about them. The issue is if their mindshare grows. And calling them out keeps it that way.

You seem to be missing the core point I've been making. The issue is not with actual white nationalists which, as you point out, are few and far between. The issue is with mainstream views getting consistently labeled as white nationalist which introduces several issues. One, it makes distinguishing between the former and the latter more difficult making it very likely that mainstream views get banned under the label of white nationalist. And two, it makes it so that people are less concerned with claims of white nationalism thus making it more acceptable for the actual white nationalists to operate openly.

> See, I know a lot of people, and the only ones who ever say things like this are ones on the internet. I've never met a real actual human who is so concerned at the thought of someone calling someone else a name that they're going to stop supporting immigration reform or affirmative action.

You're right, but you're refuting a straw man. Alienation doesn't mean ceasing support for particular issues. More often than not it means refusing to identify with political groups. This isn't speculation, this is backed up by evidence. Record numbers of people don't identify with either Democrats or Republicans [1].

> Like I said, you seem really, really concerned about not being called a white nationalist. If you really do support things like affirmative action and reasonable immigration policy, I'm not sure why you have such a concern. Either no one is calling you a white nationalist, in which case, again, why do you care about the nonexistent strawperson who might do so? Or, the one person who is is so fringe as to be easily ignored.

Yet again, you're talking about things I never wrote. I do not get called white nationalist and I don't think I ever have been called as such. But I do see co-workers and former classmates call mainstream views white nationalists, and I do see how it makes political discussion toxic and non-productive. These aren't strawmen, these are people I go to work with every day that are adopting stances that make it impossible for them to engage with people with opposing political views beyond hurling insults. Even though they aren't calling me a white nationalist, they're still calling my conservative family members and friends white nationalists when they say things like supporting the border wall makes someone a white nationalist. This isn't healthy for a democracy and it makes me not want to identify with the groups that they are a part of.

And to circle back to the original point that was made, the prevalence of mainstream getting called white nationalist makes it a real possibility that Facebook will start banning mainstream political views. If "build the wall" starts getting banned as white nationalist as many of my co-workers want it to be then Republicans are going to be very eager to bring down the regulatory hammer on Facebook and perhaps big tech companies in general.

> https://news.gallup.com/poll/225056/americans-identification...


>Calling opposition to immigration white supremacy is the former, saying that there's a distinct probability that Facebook's enforcement of these rules will impact non-white-supremacist views is the latter.

So, you have historical evidence of facebook mislabeling things that are clearly not white supremacist as white supremacist?

If not, then yes, you're absolutely crying wolf, because you're ascribing the behavior of some entity to some unrelated entity, apparently based on geographic location (fun fact, the people enforcing FB's policies are probably in Austin or Phoenix[1], not MPK).

Also, your poll is outdated[2]. The numbers are back up. And looking at broader trends, more people identify as liberal than ever before[3], so all of this stuff that you think is causing this shift towards conservatism, or at least away from identifying as a democrat, isn't, since people are more willing to identify as Democratic and liberal now than in 2016 or 2012. I don't think these are related, but since apparently you do, I hope that this helps you understand that your reaction appears to be the minority reaction, and that this public shaming that you so despise is working.

[1]: https://www.theverge.com/2019/2/25/18229714/cognizant-facebo...

[2]: https://news.gallup.com/poll/15370/party-affiliation.aspx

[3]: https://news.gallup.com/poll/245813/leans-conservative-liber...


Allow me to summarize the story of the boy who cried wolf, because you seem to not have the correct understanding of the story:

In a village there exists a boy that is tasked with protecting a flock of sheep by shouting "wolf!" if he sees a wolf, to alert the townsfolk to come to his aid. Out of boredom (or self-satisfaction of his ability to get a reaction out of the townsfolk, depending on the variation) he shouts "wolf!" despite not seeing any wolf. After a couple instances of false alarm, the townsfolk no longer heed the boy's alarm and do not come to his aid when a wolf really does attack the flock.

The boy knew that there was no wolf attack, but claimed that a wolf was attacking the sheep anyway. I am doing no such thing.

Facebook has only just announced this policy, so no one has any observation of how they are enforcing it. This is obvious. The post itself states that the policy will only go into effect next week. That you are asking me for historical evidence regarding a policy that has yet to go into effect does not indicate that the original post was read in much detail (though it does make your previous statement that this announcement is about "preventing people from trying to kill people" a lot less surprising).

I am not claiming, and have never claimed, that Facebook is using the guise of white supremacy to ban mainstream politics. Again, this policy isn't even in effect yet. I am, however, highlighting the fact that a significant segment of Facebook's workforce (tech workers in the Bay Area) espouses a view of white supremacy that does categorize things like opposition to affirmative action and immigration as white supremacy. Will this impact the enforcement of these rules? I don't know, but my take on this situation is that it's a significant risk.

Also, I'm not sure why you're claiming that my data is outdated. My data was from 2018, at which point independents were at 44%. The latest figure on your linked Gallup poll is 42% - not very far off. It's still significantly above the historical average of the mid 30s.


If you want an analogy, then a white nationalist terrorist organisation like, say, Atomwaffen or Combat 18, would be the equivalent to ISIS. There's no problem banning those - indeed, we already have laws for that.

On the other hand, banning white nationalism as a whole would be more like banning Salafi Islam as a whole, instead of specifically ISIS, al-Qaida, al-Shabab, Boko Haram etc.


>There's no problem banning those - indeed, we already have laws for that.

This is not correct. It is not illegal to be a member of, or recruit for, Atomwaffen. It's illegal to commit crimes. But associating with a group isn't a crime (for good reason!). That doesn't mean that we shouldn't take steps to keep people from associating with people who will cause them to commit crimes.

Or, like, should we just go whole hog and encourage MS-13 to post recruitment videos on YouTube too?

As for salafism, no that's akin to something like the WBC, which while reasonably considered a hate group, hasn't been banned from anywhere as far as I know.


My understanding is that membership of terrorist or rebel groups is illegal (and in fact can be grounds for US forces to kill US citizens without trial, as with several US citizens killed while members of al-Qaeda*). But membership of groups that are... let's just say "unsavory" is not itself illegal. I don't know enough about groups like Atomwaffen to determine whether they belong to the former or the latter.

* Which did stir up some controversy, but is not all that surprising. The precedent for this dates back to the US Civil War; it would be ridiculous to claim that the Union Army criminally murdered hundreds of thousands of US citizens on the battlefield without trial when the latter were fighting for the Confederacy.


What about this for an idea (and I'm not sure I love it, but it's just spitballing): We allow expression of any _ideas_, but we put a limit on _recruitment_. I.e., anybody gets to say "White people need a home country" or "the Caliphate shall reign supreme", but nobody gets to say "come to the meeting at 10th and Main 5 o'clock". That way we at least know what's on people's mind so we can engage them, but we're putting a limit on helping them grow the movement (until and unless we decide they have some sort of valid point, which is part of the point of letting them speak).


The recruitment rarely happens on the platform. The "normal" process is

1. Get exposed to $radical_group on social media

2. Go searching for $radical_group because their ideas are intriguing/edgy/whatever

3. Find the actual site of $radical_group, and now they control the entire process.

Think of it this way: you see a poster for some cool band that says "RadicalPandas's music will change your life. We have concerts, but they're secret and we can't tell you where they are because the man wants to keep us down". If anything, that sounds even more edgy and intriguing. So instead you just have to tear all of the posters down.


People with mental problems are always going to exist. Using them as props to keep chopping away at the boundaries of free speech is foolish.


The really radical stuff doesn't happen on open platforms, they're more of a tool for early exposure.

The actual planning and such happens on private sites with secret forums.

As a real-world example, a Danish right-wing nationalist network called ORG was exposed in 2011, after having allegedly been active since the mid-80s. Its members counted a number of well-to-do individuals, as well as members of the police and known violent white nationalists. The group ran and had control over several sports clubs, including martial arts and gun clubs, which they used for training members in street fighting and tactics.

They ran a database of basically every politician and public figure who ever espoused left-wing views, and thousands of ostensibly left-wing citizens as well, labeling them as "traitors" to be "dealt with".

At least one of their members was from the armed forces in some security-related capacity, and they had reasonably good opsec procedures in place.

I have no doubt that even after being exposed and having its members publicly named, the group or a new equivalent still exists, with new secret sites and a heightened level of paranoia towards possible infiltrators.

By deplatforming these people and driving them further underground and paranoid, we limit their ability to attract people through social media, for fear of exposure.


Do you have a way of going about this that isn't authoritarian? (Extending the definition of "authoritarian" to Facebook acting within its own platform). EDIT: Sorry this was really vague. By authoritarian in this case I mean that some authority has to decide what is and isn't an acceptable opinion (as opposed to, say, deciding on an impartial process that somehow weeds out bad opinions).

And somewhat related, does this mean you have as dismal a view of humanity as it seems, in that you don't want people exposed to dangerous ideas? This all seems necessarily paternalistic. EDIT: I can't imagine, for instance, trusting in democracy with such a view.


I agree with KozmoNau7 here; the many notions of constructing some system that will somehow weed out the "bad" ideas seem misguided in my opinion. The reason is that these systems necessarily treat all ideas as equally valid inputs by requiring that anything be allowed. This is done despite bountiful evidence to the contrary, evidence that fascism and racism lead to horrendous consequences if left to spread through disingenuous tactics and deception.

In fact, we humans already form a decent system for weeding out bad ideas/opinions, we just don't listen to our own past experiences on the matter. We found out that fascism is putrid and yet now we try to come up with a new system that will weed it out instead of just chucking it into the garbage bin and moving on.

It's not paternalistic or a dismal view of humanity or that "humanity cannot be trusted with such a dangerous idea", it's because the people advocating for it constantly lie about it and intentionally trick/indoctrinate others into following. Under the right circumstances of me growing up, I fully believe someone could have deceived me into believing it, so I don't think I'm better than anybody who got sucked in. We can trust in democracy as long as we take proper precaution against things that prey on the freedom of expression and association in order to remove those rights from others.

Put another way, what real benefit is there to "freedom of speech, no exceptions" as opposed to "freedom of speech, except for advocating fascism"? The US already doesn't have pure unrestricted free speech, because there are exceptions made for outlier situations where speech is not protected in order to secure the safety of others. That hasn't led to total collapse or censorship.


I completely agree with what you wrote. I think too many people erroneously presume that free speech somehow guarantees an audience and acceptance, and when neither one happens then they feel their rights are being violated.

Personally, I feel there should be legal protections for all speech, but not protections from social ramifications. By this logic, I am totally fine with the idea of punching fascists. Which itself could be hit with assault charges, but then, how many juries would disagree with the reasoning for the violence? Let the masses decide for themselves, basically. It's what bothered me about the firing of James Gunn and Roseanne both despite their polar opposite politics. The companies that employed them cared so much about what the public thinks or feels that they couldn't be bothered to let the public decide for themselves to support either celeb or not. (And I know Gunn has since been rehired, but this happens so often, my point stands.)


In the case of ORG, it was exposed by a leftist research network that is primarily anarchist or non-authoritarian communist in nature. As far as I know this research network has no central authority, but rather a horizontal democratic structure.

I lean towards anarcho-communism myself, generally close to the original communist definition of libertarianism. So my answer to which authority should run things is "none". Facebook runs their own ship and are free to choose which content they will platform. While I would prefer a complete absence of hierarchy in all aspects of life, I think we can agree that this is probably not going to happen at FB.

I also strongly support anarchistic deplatforming and generally hindering fascists, nazis, racists and other bigots from spreading their noxious views. We have to realize at some point that some views simply aren't worth spreading.


> By deplatforming these people and driving them further underground and paranoid, we limit their ability to attract people through social media, for fear of exposure.

Do you have any evidence to support that statement? Would you expand on why you think so? I would estimate that by driving them more underground, you make them seem cooler in the eyes of angry teenagers. You also make them more radical, because underground their opinions are not exposed to contradicting views. In my opinion, we should do exactly the opposite: we should give them a platform to speak and oppose them. Isn't that what an open society is about?


The number of people being radicalized by ISIS decreased and their social media reach disappeared.

There are a few studies in this thread that conclude similar.

The whole "sunlight is the best disinfectant" trope is only a trope, not a truth.

See also any conspiracy theory and the anti-vax movement. When groups appeal to fear, not logic, as a recruiting tool, you can't logic people away.


Who's "we"?


For exploratory purposes, just imagining we can make an agreement. Don't worry, I don't like the concept of collective decisions either.


Just to put this out there from the beginning, I don't know what the exact appropriate balance between censorship and freedom of expression is. Also, FB can do what they like and don't have to listen to me.

With regards to ISIS, if their recruitment videos are publicly available, then people can contest them publicly too. They can explain why this viewpoint or whatever might be appealing to certain demographics, but look at this evidence for what happens to those people once they join.

The self-harm / suicide question is harder for me. I still think it's a good idea to have this happen in the public sphere so people can expose those who encourage such things in other people. However, if an individual is being targeted specifically then maybe there should be a line there. I don't know.

As for white supremacy, yes, I said "dialogue" when what I really meant was "speech". I don't know who Richard Spencer is, but if he's blocking comments then people can still post rebuttals in other videos.

I guess my main point is that it's better to confront such things openly than to try to sweep them under the rug.

There also seems to be this idea that ISIS and white supremacists and whoever can magically infect others with their dogma as a form of mind control or something. The danger, in my opinion, lies in trying to bury it. Then when people stumble upon them, or are recruited and given a login somewhere, they have access to only one side of the argument and are effectively in a bubble. In the public sphere where this plays out openly, they have ready access to conflicting information on the same platform.

I don't want a society where people aren't allowed to dislike me. I don't quite get where you're coming from. I want a society where people can say what they think and not get censored for it for the most part. It requires that we all come to terms with the fact that some people hold different opinions that we really hate.

I wasn't alive 100 to 150 years ago. However, growing up 40-ish years ago, I remember hearing such things as "I don't agree with what you say but I will defend to the death your right to say it". Not something you hear much, any more.


> There also seems to be this idea that ISIS and white supremacists and whoever can magically infect others with their dogma as a form of mind control or something.

At scale, this is exactly the case. If the probability of the average person becoming radicalized is greater than the probability of the average radical de-radicalizing, then open discourse will, on average, increase the number of radicals until those probabilities equalize.

So now I ask you: how successful have you seen discourse at deradicalizing Jihadis, white nationalists, westboro baptist church members, anti-vax parents, or flat-earthers? Is it more, or less, successful than those groups at radicalizing new people?

>I want a society where people can say what they think and not get censored for it for the most part. It requires that we all come to terms with the fact that some people hold different opinions that we really hate.

Someone says "I want to diddle your kid". You tell them to never speak to you again. They say it to someone else. That person blocks them too. The same person keeps telling everyone that they're interested in child molestation. Free association says that we should all be able to shun that person because yikes.

But you say something different. That we shouldn't shun the potential kid-diddler. We should continue embracing them and their views, because shunning them censors them. And all views should be cherished and protected.

Or maybe you aren't saying that, but then it's very tricky, because if everyone blocks them on facebook, they've been deplatformed. So why is that okay if facebook banning the person isn't? Or maybe it's that proclaiming support for child molestation isn't okay, but proclaiming support for genocide is. In which case we're right back where we are now, just that your definition of "for the most part" is "expression of child molestation is bad, but white supremacy is fine". This brings me to my final point:

>I don't agree with what you say but I will defend to the death your right to say it

This only works when what you're saying isn't a threat to me. See how willing people 30-40 years ago were to defend to the death the right of black power groups to call for black empowerment. (Hint: people got so scared that the Republicans banned guns.)

If I start saying "we should kill all free-speech absolutists", how long are you going to defend my ability to say that? Only as long as I don't have power. Once I have the ability to actually carry out my threats, say, if I'm the president, do you really want me saying I'm going to kill some subset of society? Are you going to defend my right to say that? What if you think I might actually carry out what I'm claiming?

As long as the speech isn't a threat, people are okay with defending it. This is why no one really cares about flat-earthers, they aren't killing anyone, they're just the easy butt of jokes. Same with, until very recently, anti-vaxxers. Jenny McCarthy is kooky and their stuff is nonsense, but what's the worry? Well, measles, the loss of herd immunity, etc -> hmm, maybe we should stop this kind of thing.

Sometimes it's possible to make the fix reactively. Kicking unvaccinated children out of school stops much of the harm that unvaccinated children cause, and in many cases forces parents to vaccinate their kids despite their views.

Unfortunately for explicitly violent groups, there isn't an easy reactive solution. You can't un-shoot someone.

As soon as people are threatened, they stop. "Expression of white supremacy is acceptable, but child molestation isn't" is just an expression of what you find threatening. What you're seeing is actually a shift toward more people speaking. Those who were voiceless before are speaking out and saying hey, this was really shitty before, let's not go back, and all the people saying otherwise are actually, truly threatening to me, so perhaps lets have them not say these things.

And on the other side you're seeing people threatened by this change in how people are considering speech, and feeling threatened, and saying "hey actually let's legislate these platforms so that I can keep spewing my garbage". White supremacists feel threatened by censorship, black people don't. That should tell you all you need.


The problem is who defines what hate and a hate group are. It is usually the people you don't want to do so, people who want to tell you what to think. E.g. to some, Ben Shapiro, Jordan Peterson and Thomas Sowell are considered white supremacists.


That is a general problem, but it doesn't address the argument. We already find ISIS recruitment videos and suicide ideation groups bad, but where do we draw the line? If it's so difficult to define what should not be allowed in the open, then I don't know what can be disallowed, including ISIS recruitment videos, in the name of freedom of speech.


Talking about ISIS is a red herring. Are they considered white supremacists now?

If we are to judge by social media companies past behavior [1] it’s regular conservative and liberal opinions that will be considered hate speech or white supremacy

[1] https://www.google.com/amp/s/www.foxbusiness.com/technology/...


The comment I originally responded to wasn't about white supremacy, but about censoring discourse. ISIS propaganda is as much "discourse" as white nationalist propaganda is, so why didn't you defend ISIS when it was deplatformed years ago? I realize that's kind of a gotcha, so I'll ask a similar question: given that it appears you feel that any deplatforming based on ideology is bad, do you criticize facebook for having deplatformed ISIS?

To pre-address your other comment, neither white nationalist propaganda nor ISIS propaganda necessarily includes calls to violence. So the incitement of violence standard applies to neither.

If not, what differentiates ISIS from White nationalists? Both are violent groups that wish to kill people different from them. As far as I can tell, one just looks a lot more like me. They're both dangerous, and we shouldn't let either recruit people on facebook. What is the flaw in that line of reasoning?

As for that case, it was dismissed earlier this month[1], apparently because the plaintiffs didn't actually have any examples of wrongdoing on the part of companies, and just filed the suit to raise awareness of freedom watch's advocacy[2].

[1]: https://www.pacermonitor.com/public/case/25495855/FREEDOM_WA...

[2]: https://law.justia.com/cases/federal/district-courts/distric...


ISIS has a clearly defined religious teaching, and a core part of their message is using power as well as violence to push their religious teachings onto others to create a world Islamic caliphate.

On the other hand, what is being called alt right and white supremacists by mainstream organizations is bullshit:

- The economist called Ben Shapiro alt right today https://mobile.twitter.com/TheEconomist/status/1111248348114...

- The guardian says Jordan Peterson is supposedly alt right https://www.google.com/amp/s/amp.theguardian.com/science/201...

- Bret Weinstein is supposedly alt right

And ridiculously enough, Ben Shapiro is a Jew, the biggest hate target of the alt right, and Jordan Peterson is hated by alt-right figures such as Richard Spencer and Vox Day.


You're including the alt right in an argument about white supremacists. Could you perhaps answer my questions without talking about the alt right, who aren't being discussed by anyone but you?

White supremacists also advocate violence against other races, often to subjugate them. This is highly similar to ISIS.

Or are you claiming that all members of the alt right are white supremacists?


ADL defines alt right and white supremacy to be the same: https://www.adl.org/resources/backgrounders/alt-right-a-prim...

SPLC also says they are the same: https://www.splcenter.org/fighting-hate/extremist-files/ideo...

The opinions of the SPLC and ADL are especially relevant because mainstream media and social media companies such as Facebook seem to have relationships with them, as well as with similar orgs, to define actions against people.

So no, we are talking about the same poorly defined term, because these orgs are an authority to Facebook.

The reason it's bullshit is that these orgs don't relate it to truth and evidence; they use their power to slap this label onto anyone who disagrees with them, and punish people for things they didn't do.


>This vague term actually encompasses a range of people on the extreme right who reject mainstream conservatism in favor of forms of conservatism that embrace implicit or explicit racism or white supremacy.

Unless you're of the opinion that "racism" and "white supremacy" are synonymous, the ADL does not define them as the same.

>So no, we are talking about the same poorly defined term.

To be clear, the "alt right" isn't white nationalist, it is, however, a white nationalist recruiting tool. Sort of like how you don't just start out as a random person and then the next day you become Jihadi Jane. You're slowly radicalized. The alt right to full-on white supremacist pipeline is the same way.

You start off interested in self help, so you read 12 rules for life; then get caught up in Peterson's weird ideas about western culture; then pretty soon you're seeing not just his youtube lectures, but other lectures about western culture; and then videos about the decline of white/western culture; and then you're watching Richard Spencer; then you shoot up a church.

Now absolutely, granted, not everyone follows the entire path, very few people end up radicalized, but while I'll absolutely agree with you that Peterson isn't a white supremacist, he's absolutely a useful idiot for them.

But again, let's stick to people who have been accused of being "White Supremacists" since you're the one using the term alt-right, not Facebook. Which innocent people are getting accidentally confused for "white supremacists" specifically?


From the same ADL source on white supremacy:

> White supremacist Richard Spencer, who runs the National Policy Institute, a tiny white supremacist think tank, coined the term “Alternative Right”

Or from the SPLC source:

> The racist so-called “alt-right,” which came to prominence in late 2015, is white nationalism’s most recent formulation

Both clearly use them in the same breath.

We can argue semantics about one being the tool for the other, etc. But that doesn't change the fact that they are being tightly related by organizations that are an authority to Facebook.

The SPLC and ADL are very aware of their power, and have related many of the prominent dissenters of wildly different ideologies to what they claim is the alt-right: Maajid (an Arab Muslim; they lost a lawsuit on this one and apologized), Jordan Peterson (an individualist liberal), Ben Shapiro (a pretty normal conservative), Dave Rubin (a classical liberal), etc.

The SPLC and ADL are, by this evidence, too often bullshitters who don't relate what they say to truth, but instead to an ideology that is not preoccupied with truth-seeking. It is really a shame their views are elevated like this as a justification for Facebook's and other organizations' abuse of power.


You still haven't given an example of someone being called a "white supremacist". Would you please? Or admit that you are fearmongering.

Again, Facebook didn't use the term "alt-right". They're talking about white supremacists. You're arguing that these are the same thing. So if that's the case, you should have ample examples of people calling Jordan Peterson a white supremacist as well.

If not, then maybe they aren't the same thing, and your equivocation is unwarranted and you should stop attempting to confuse the subject.


With regards to examples I can do better than showing how people use it in the same breath. Here [1, 2] are examples of where SPLC specifically uses alt-right and white supremacy to describe the viewpoints related to Jordan Peterson, Ben Shapiro, Dave Rubin etc in the same article.

I've also shown that even the authoritative sources Facebook trusts use alt-right and white supremacist in the same breath when defining the terms. These articles are no accident; this is what they believe.

This is not fearmongering. These are people who don't mind using their power to accuse viewpoint opponents of things they didn't do, claiming with no evidence that they hold reprehensible viewpoints.

Edit: I can't reply due to the comment depth limit, but the debate here is about Facebook suppressing people like Jordan Peterson, Ben Shapiro and Dave Rubin using arguments made by the SPLC and ADL such as the ones in [1, 2]. Why do they have the right to suppress other people's viewpoints on dubious grounds with no recourse?

It is pretty clear from your arguments that you agree that they are related, or as you say, "a white nationalist recruiting tool", regardless of how you otherwise view them. The process Facebook institutes will therefore suppress these viewpoints based upon no evidence and by subjectively mischaracterizing them, with no recourse for this abuse of power.

[1] https://www.splcenter.org/hatewatch/2018/06/07/prageru%E2%80...

[2] https://www.splcenter.org/hatewatch/2016/08/25/whose-alt-rig...


Right, so both of those articles said exactly what I said: there's a process of radicalization, and people like Rubin (and Peterson) feed into that process.

Once more: please give an example of someone calling Jordan Peterson a white supremacist, or alternatively admit that no one has done so and you were fearmongering.

Saying jbp associates with white supremacists, or serves as a useful idiot for white supremacists, is not calling him one; that's all well known. You claimed that people called Jordan Peterson a white supremacist. You still haven't justified that claim, and that's because, quite simply, it's false.

You're fearmongering. No one has called Jordan Peterson a white supremacist. That's just a factually incorrect statement. They've said he unintentionally helps white supremacists, he occasionally associates with them, take selfies with them sure, but for all your trying, you still haven't been able to find someone actually call jbp a white supremacist.

So perhaps, just maybe, your worry is misplaced.


I'd like to more clearly find out where and if we disagree where it matters most. Do you think it is justified to ban or suppress content on social media from people whom the SPLC/ADL view as part of the radicalization process? E.g. Jordan Peterson, Ben Shapiro and Dave Rubin.


>Do you think it is justified to ban or suppress content on social media from people that SPLC/ADL view as part of the radicalization process? Eg jordan Peterson, Ben Shapiro and Dave Rubin.

I certainly do, but evidence (e.g. historic enforcement patterns) indicates that Facebook does not.

Note that my opinion on this isn't global and without nuance. There are things that Jordan Peterson does and says that aren't, at all, controversial and are even occasionally interesting and thought provoking. Not everything he does radicalizes people. Same with, I assume, Rubin and Shapiro, although I don't pay them enough attention to know or care either way.

However, that's all beside the point. Facebook doesn't appear to think that they're worth censoring. I'm not sure why my opinion is at all relevant.

You still haven't given me that example, by the way.


Huh? How is it a red herring? Their propaganda is considered speech that is worth censoring. Maybe specifically naming ISIS makes this too easy. How about generally censoring radical Islamist propaganda that doesn't name ISIS specifically? The point that you're not really replying to is that people are apparently willing to censor speech for various reasons, one of them linked to a particular ideology. Is this okay or is it not?


Free speech doesn't cover incitement to violence; that is already illegal, and that is why it's a red herring. Advocating violence against innocent people is not primarily ideology, but someone being a bad human being who should go to jail for breaking the law.

You really don't need hate speech and white supremacy policies to prohibit speech that is already illegal, but you do need them to suppress speech that some people might disagree with, when they want to abuse their power to suppress it.


Regarding the first paragraph, I know, but that isn't under discussion. What is under discussion is censorship of speech due to ideology, perhaps because it leads to violence even though there are no direct calls for violence. White supremacy and radical Islamism both need not call for violence, but the ultimate conclusion of each ideology is violence. It sounds like you don't want to censor either, which I think is consistent.


You are still assuming a clear definition of what white supremacy is, which Facebook doesn’t provide.

How do you think about the false positives?


ADL defines alt right and white supremacy to be the same: https://www.adl.org/resources/backgrounders/alt-right-a-prim...

SPLC also says they are the same: https://www.splcenter.org/fighting-hate/extremist-files/ideo...

These orgs provide the best definition we can get right now, because Facebook relies on others to define it instead of providing clear definitions of its own, and these orgs are an authority to Facebook. If there was ever an attempt at using power while abdicating responsibility, this is it.

The reason why any action built upon this is bullshit is that these orgs don't relate it to truth and evidence; they use their power to slap this label onto anyone who disagrees with them, and punish people for things they didn't do.


You were given a definition of white supremacy below and elsewhere so I won't repeat others. It just seems like under this sense of freedom of speech we shouldn't censor radical Islamism or white supremacy if we can't ban speech merely due to ideology leading to violence. That is a consistent and fair sense of morality I think.


That is a bit extreme from my worldview, and not necessary to be more balanced. I am arguing that Facebook's process is biased, and Facebook's authoritative sources are unaccountable for fixing their own mistakes and ideologically biased in a way that is not evidence based. A better process, with an adversarial appeal involving authoritative viewpoint opponents, would have compensated for some of this.

To figure out if we disagree where it matters most: do you think it is justified to ban or suppress content on social media from people whom the SPLC/ADL view as part of the radicalization process? E.g. Jordan Peterson, Ben Shapiro and Dave Rubin.


Free speech in the US does actually cover incitement to violence; in Europe it doesn't.


Incitement to imminent violent action is illegal https://en.m.wikipedia.org/wiki/Imminent_lawless_action


Well, ISIS is a criminal organization for reasons unrelated to their speech. Once they have become one, it's not unreasonable to make their recruitment videos illegal to distribute to other people, on the basis that doing so amounts to conspiracy to recruit people to commit crimes (but mere possession and watching them should still be legal, because in and of itself that doesn't help them).


What happened to Heather Heyer was also not speech. What happened to Clementa Pinckney and 8 other African-Americans at the Charleston church shooting was not speech. What happened at the Tree of Life Pittsburgh synagogue shooting was not speech.

Shouldn't the same argument apply to the videos that radicalized those murderers?


No. Why should it?


My parent comment had said that conspiracy to recruit people to commit crimes is not an unreasonable basis for making it illegal to distribute recruitment videos for ISIS, because they are a criminal organization due to their violence, not their speech. That implies that that should also not be an unreasonable basis for making it illegal to distribute recruitment videos for violent white nationalist organizations.

Does that answer your question?


Yes, thank you for explaining. I misunderstood the context of your question.


If you can prove that any particular video did so, sure, I don't see a problem with that.


Are you blind? White supremacists kill more people in the West than ISIS does.


You can improve this comment by removing the question and providing a source for your claim.


Rubbish. You're claiming that "white supremacists" kill more people than the Bataclan murders, the bombing of the Ariana Grande concert, and a host of other atrocities (Sweden, etc.) carried out by self-proclaimed ISIS supporters.

That's not only wrong it's highly offensive.


You are offended by an unsupported claim, and proceed to make an opposite unsupported claim. Now I'm offended not by your assertion, but by your lack of substance.

I googled for "extremism murder statistics": this covers only the US [1], and I have not looked at the methodology, but it supports GP's claim, not yours. Please discuss, but do so using substance, not master suppression techniques.

[1] https://www.adl.org/murder-and-extremism-2018


Well, I think the problem lies in that all his examples were from outside the US, and not only during 2018, which is when and where your example is from. Here's another US-only source that seems to agree with Wildgoose [1] (pdf), covering 2010-2016. This source certainly doesn't count as many deaths (note your link said there were 313 deaths from right-wing extremists from 2009-2018); it only covers 2010-2016, reporting 140 deaths (note your link also says 50 deaths were caused by right-wing extremists in 2018 alone). I'm not sure what the discrepancy is in not accounting for all the deaths from your link. This link states that from 2010-2016, 68 deaths were caused by Jihadist-inspired terrorists while 18 deaths were caused by white nationalists and extremists. Yet again, it would be nice to see data from the whole of the West.

Just to give you another source from the same people, which seems to agree with Wildgoose's parent, at least in its reasoning (albeit by making a few qualifiers on the dataset that bend it in favor of that reasoning) [2], covering 2001-2016 in the US.

[1] (pdf): https://www.start.umd.edu/pubs/START_IdeologicalMotivationsO...

[2]: https://www.start.umd.edu/pubs/START_ECDB_IslamistFarRightHo...


Wait, are you claiming they’re not? Lol


I do not have to claim that an untruth is such. Rather, it is those peddling untruth, and statements not relating to truth at all (bullshit), who have to confuse everyone else to a sufficient degree to twist their worldviews for their purpose.

Unfortunately, trying to control others' viewpoints hurts you as much as anyone else, because you are making yourself, and what you can learn from others, less adaptive to reality.


Is it trying to control others’ viewpoints to say that the KKK is a racist, white supremacist group? I thought we’re all in the marketplace of ideas here. You are literally telling me what to think, which is what your original comment purported to be against.


Show me one example of a white nationalist action by Jordan Peterson or Ben Shapiro, and I’ll be right there beside you fighting it. It’s on you to justify your claim.

The marketplace of ideas is not about throwing out unfounded claims and, with high likelihood, having others treat them like propositions founded in truth seeking. Rather, they will be treated as the bullshit they most likely are unless you supply evidence for the claim.


> It just makes it fester somewhere else that isn't as visible.

That's a win in this case. It already festers somewhere else. White nationalists would like to spread their ideology outside of their normal bubble. They often talk about "redpilling normies." It's harder for them to do that if "normal" people have to explicitly seek out that kind of speech.


>It already festers somewhere else

Excluding people from a conversation doesn't help them learn why or how their thinking is wrong. Those "hidden" places that let the wrongthink fester only create more actions driven by wrongthink (guess what they talk about? wrongthink).

I don't understand the logic of removing someone's ability to converse if they don't understand why they're wrong; they can't ask questions.

Someone explain to me how banning content/people = deradicalizing.


White supremacists use Facebook for the purpose of recruiting and radicalizing people who are not yet white supremacists. The point of a ban like this is not to deradicalize people who are already white supremacists, it is to make it harder for people to become radicalized in the first place. People do not typically seek out white supremacists; white supremacists are always looking for people who are vulnerable to their message. Facebook seems to have come to the conclusion that their platform is just too useful to such terrorist groups and have decided to ban them before the problem gets worse.


>it is to make it harder for people to become radicalized in the first place

I've been in FB group chats with holocaust deniers etc. (one guy literally inboxed me a YouTube video; I never bothered clicking the links he sent, but it was funny).

"banning white nationalist content" is a cute headline but doesn't hit the problem of seemingly benign normal users that would never "reveal their power level" slowly radicalizing their friends with content not on their site lol


I said "harder" not "impossible." Yes, white supremacists are still going to use Facebook and are still going to try to radicalize people, but it will be harder.


"harder" not "impossible."

The Rhetoric Tricks, Traps, and Tactics of White Nationalism https://medium.com/@DeoTasDevil/the-rhetoric-tricks-traps-an...

The overt white nationalist content you're thinking of doesn't really exist anymore; I guess maybe if you're a boomer it still lingers? idk


I sure saw a lot of overtly antisemitic, racist, white supremacist imagery in that article...


Those memes are really old; there's more subtle content that radicalizes people.

It's less orchestrated, and just friendly until a point.


>i dont understand the logic of removing someones ability to converse if they dont understand why they're wrong.. they cant ask questions.

I don't think I've ever seen someone already radicalized reason their way out of it through conversing on the internet. I'm guessing the calculus is that exposing this content to people makes it easy for them to be suckered in, but we don't see the opposite effect.


You should read this awesome New Yorker article from several years ago: https://www.newyorker.com/magazine/2015/11/23/conversion-via...

That said, I think I generally agree that conversion via social media is unlikely. Still, I’m also not sure about my position on banning people. But the above article is a great read.


The Phelps people are disgusting, but they are not terrorists. The thing that is missing from most conversations about white supremacists is that their hate goes beyond words. These are terrorist organizations whose members have committed one violent, murderous act after another, year after year. Banning white supremacists is no different from banning ISIS.


Makes sense. But Westboro is radical, and the above poster had been talking about radicalization, not terrorism.


Most people do not critically evaluate content, especially if their own ego or group identity is involved.


I was waiting for the "only idiots get suckered into wrongthink" comment


Then I'm an idiot too. There are a whole slew of cognitive biases that cause us to prefer our in-group.


welcome to the club!


Actually, their point isn't "only idiots" more than it is "everyone." Cognitive biases are real and hard to overcome and we all can succumb to them.


1 hour later

that's how quickly people can be gradually exposed to this content, and how useless it is to "ban white nationalist content"

very cute headline, though


outbound content is dangerous; ban outbound content to make it "harder" but not "impossible" to expose "idiots" to "radical content": https://imgur.com/o14rjegrhk21.png


> It just makes it fester somewhere else that isn't as visible.

Is that true? Would we have the anti-vax movement if it weren't for Facebook & co.?


We had the anti-vax movement before Facebook.

And if it were banned, they'd just spin it as the pharma companies paying to suppress the truth. It would only further confirm their beliefs.


That "sunlight is the best disinfectant" is a very convenient narrative. I do realize in the end we all want to believe whatever is convenient for us (notably including anti-vaxxers). But is there actual scientific evidence that this particular narrative works?

Through all of mankind we had strong mechanisms to form consensus, including social repercussions. Those don't work anymore, since those who would have become outcasts in earlier generations can now easily (for example on Facebook) find like-minded communities and fulfill their social needs, get approval, etc. It'll be interesting to see how many different realities people can believe in before society breaks apart. I'd prefer we don't let that happen.

The question is what actually works. Censoring them might reinforce their beliefs, but I can accept giving up on some of them if it effectively stops the spread.


> Through all of mankind we had strong mechanisms to form consensus, including social repercussions.

This is a dangerous, dangerous path to go down if you belong to any kind of Enlightenment-inspired ideology. What kinds of things were suppressed the hardest? Sexual deviancy. Questioning authority. Questioning religion. Do you really want that kind of society? I think maybe we can stand some anti-vaxxers ...


> > Through all of mankind we had strong mechanisms to form consensus, including social repercussions.

> This is a dangerous, dangerous path to go down to if you belong to any kind of enlightment inspired ideology.

Mechanisms to form consensus does not necessarily mean rule by mob, quite the opposite. Positive consensus mechanisms can be trust in the scientific method, institutional credibility, and acceptance of reason. These mechanisms can support an enlightenment ideology, not prevent it.


I wasn't talking about mob rule. I was talking about an orthodoxy that's ruthlessly enforced by those in power. Cause that's what it was before.

"Freedom of speech" is not an accident. Enlightenment thinkers have pondered this for 200 years and more, and objections have been successfully addressed over and over again. It's as much of the type of consensus you describe as we will ever have. 'But computers' is not sufficient to just do away with it.


Pretty naive if you apply this conclusion to many countries in the world.


> The question is what actually works.

Make them pay a material cost: link vaccination to welfare benefits/family tax rebates, works well in Australia (search no jab, no pay). Also make an up to date vaccination record a requirement for enrolment into schools.

They will complain and might keep on spouting shit, but at the cost of ~$10k/child/year they'll change their actions pretty quick.


This is what the free market would do, but I thought we didn't want that because 'everyone should have affordable healthcare'. Isn't an anti-vaxxer part of 'everyone'? It's kind of spooky: progressives convince themselves they have no in-group bias by making their in-group unreasonably large, but the price for that seems to be that they dehumanize people who hold a different view.


> Through all of mankind['s history,] we had strong mechanisms to form consensus, including social repercussions.

So, you're saying this mechanism is a good thing? Because mechanisms that reinforce the current social consensus, whatever it might be for that era, tend to maintain the status quo. And a desire to maintain the status quo is a big chunk of the philosophy of, yep, that's right, ... conservatives.

I've often said progressives aren't liberals because they don't agree with liberal philosophical values of the Enlightenment. Their "maintain the current societal consensus" argument is the strongest evidence of that yet.


> Through all of mankind we had strong mechanisms to form consensus, including social repercussions.

And throughout history people lived to be 30 and died of plague, wallowing in their own excrement, thinking the devil did it.

If your ideology needs thought control to work it needs to be taken out at the back of the sheds and shot. The second a society stops discussing ideas openly is the second it starts sliding towards a new dark age.


In fact, the anti-vaxx movement is more than a century old:

https://www.historyofvaccines.org/content/articles/history-a...


Maybe it wouldn't be as big, but I can assure you nothing gives a conspiracy theory more credence among its followers than being deplatformed.


This is what people said about Alex Jones and his network. When he was deplatformed, there was a surge of downloads of his app from white supremacists trying to show support, and people made your exact argument that he had now been legitimized.

But what actually happened was that those numbers were inflated by his followers, and without the echo chamber of the larger base he had on YouTube his numbers eventually plummeted. Instead of being a regular in the news, he occupies a small corner of the internet without access to the large, impressionable, ever-refreshing base he once had.

He has now resorted to trying to sneak videos back onto YouTube.


[flagged]


MLK was heavily persecuted in the sixties for his views and message by private and public entities. What point are you trying to make.


> MLK was heavily persecuted in the sixties for his views and message by private and public entities.

And people should be okay with his persecution by the people who had the political upper hand in that era? No? Then why should anyone be okay with deplatforming Alex Jones (as bad as he is) by whoever has the political upper hand at the moment?


>And people should be okay with his persecution by the people who had the political upper hand in that era? No? Then why should anyone be okay with deplatforming Alex Jones (as bad as he is) by whoever has the political upper hand at the moment?

Because one was an indispensable paragon of civil rights and an avatar of anti-racism, and the other is a shrieking whackjob entertainer peddling freeze-dried survivalist food while spreading lies, mental illness and grief to the families of murdered kids.

Hey, you're the one who brought up history.


And when the next generation's MLK shows up and gets persecuted again by the powers that be, except now there are even better deplatforming tools to silence him? Whoops, game over.

Hey, you're the one who failed to think things through. The tired old "b-but deplatforming will only ever be used against _bad_ people" argument is exactly a failure to appreciate the lessons of history, of which MLK is an excellent example.


>Hey, you're the one who failed to think things through.

No, I'm the one who corrected an utterly insane equivalence drawn between a commercial figure who drives grieving fathers of murdered kids to suicide and a man who was literally killed for demanding equal treatment for all people. Again: if you bring up history, you are now forced to deal with historical results and categories. Deal with them. Don't just pretend you are.


I don't see a valid counterargument in your response?

It's ironic that, on a site where the UNIX philosophy is widely appreciated, so many fail to appreciate the wisdom of the old quote "UNIX was not designed to stop you from doing stupid things, because that would also stop you from doing clever things.". So too, with Alex Jones, MLK, and freedom of speech.


I upvoted this comment because I saw people were downvoting it.


I wonder what would happen if they flagged it the same way my thermostat flags my monthly energy usage.

“Most of your peers consider this post racist neonazi propaganda.”


> Facebook can do what they want, we're not talking about the government here and I understand that

That isn't so clear: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=139661


Scott Alexander's theory is that having an open forum has become incredibly difficult, maybe tending towards impossible. All the incentives line up to make moderation steadily more costly or steadily more draconian over time, or both:

https://slatestarcodex.com/2019/02/22/rip-culture-war-thread...

It's not clear how to preserve a forum for free expression while spammers or trolls seek to suck up all the available bandwidth for their messages. You can end up with a sort of spectrum allocation problem, where if everyone is allowed to crowd the same frequency then communication on it becomes impossible. But once you start moderating content, it's really hard to find a satisfying balance. And on the margins, some trolls are indistinguishable from some people with really unbelievable opinions.

I don't know what the answer is, it seems like a really complicated problem.


From the release:

> Our own review of hate figures and organizations – as defined by our Dangerous Individuals & Organizations policy – further revealed the overlap between white nationalism and separatism and white supremacy.

It doesn't appear that this content is being banned because Facebook surrogates find the politics in question abhorrent (though they likely do so find). It is being banned because it is acting as part of a recruitment funnel for an ideology that openly advocates violence. They collected evidence of this before acting.

The line is somewhat blurry, but it does exist. On one side, we have a sensible use of editorial discretion; on the other, we have censorship. Is Facebook going to cross the line at some point? Almost certainly. Basically every individual or entity to have editorial discretion has both made mistakes and abused that power.

It remains to be seen to what extent that will happen here. If anything, they've erred on the side of permissiveness so far (e.g. how long it took to ban ISIS). If that changes, we should call them out at that time.


> It is being banned because it is acting as part of a recruitment funnel for an ideology that openly advocates violence.

That would be believable, if it was applied as such across the board - e.g. also removing Christian Dominionists, Stalinists, Salafi Muslims, NoI etc.


Facebook already deplatforms radical Islamic preachers on a regular basis.

I doubt they have enough data to show the links for the other groups.


Are you sure about that? Try searching for "الموت للغرب" on facebook (Death to the west)



This is "terror content" though, not the ideology behind it. I tried looking around, and there are plenty of pages preaching the ideology. It sounds like they're okay so long as they don't directly call for violence, showcase violence (execution videos etc), or try to recruit for either of those, even though the end goals are inherently violent. Most white supremacist content is similar in that regard.


I'd note that "Death to the West" has a very mixed set of meanings.

Often it isn't meant as a threat, but instead is a political statement calling for the end of the hegemony of the West. However, sometimes it is meant as hate speech (often when associated with calls for the destruction of Israel).


I don't know about Salafi Muslims specifically, but Facebook already does censor radical Islamist groups like ISIS. This move is trying to be more across-the-board by including white supremacy under more of its aliases.

As for the others--when's the last time a Stalinist drove a car into a crowd or committed a mass shooting?


[flagged]


The argument being pushed here (whether they themselves understand it or not) is basically that "seeing something a little bit to the right > leads a little more to the right > then even more to the right > NAZI!"

But that's not it at all! Viewing racism as a left/right thing is wrong altogether. While it isn't obvious in US politics today, the left has often had a problem with race too.

Racism should be abhorrent to both left and right, and that should be independent of immigration views.

Which, hilariously, is a concept born almost exclusively out of modern leftist and current "_blank_ studies" thought/ideology.


Well actually Facebook has data showing this, so I don't think that's correct.


Because Facebook can misapply its policies does not mean it should have no policies, it merely means it should implement them better.

Obviously there are other kinds of content that Facebook could ban, but merely because those things exist doesn't mean this shouldn't also be banned.


> it merely means it should implement them better

This is the theory of every government committee, ever. In practice, generally, _it doesn't happen_.

I believe private companies are usually better, but still highly dependent on size and a number of other factors.

I definitely don't have faith that FB will wield this well, and the worst part is it will probably be pretty hard to find out from the silenced parties when they f!@# up until much later.


Why not just have their policy be the First Amendment, abiding by the same rules as the country that created them? Why should corporations take on censorship powers that are prohibited to the government? Why should a private for-profit corporation controlled by one man have more power than a transparent institution controlled by democracy?


Governments have vastly more power over their citizens than any corporation. Facebook cannot imprison or kill users that violate their rules, and being banned from Facebook is not all that big of a deal (I have not had a Facebook account for more than a decade, and honestly I feel no particular need to go back). Moreover, if you required private for-profit corporations to adhere to the first amendment, you would quickly run into problems. Scientific journals could not exist -- the first amendment comes with no requirement for scientific rigor. Newspapers could not exist -- the first amendment has no requirement for journalistic quality (hence the legality of fake news).

The framers of the constitution could certainly have included a requirement that private institutions follow the same rules as the government. There were certainly big and powerful non-governmental organizations at the time -- the Catholic Church, for example.


I think the standard answer is that corporations are less restricted because they are not able to enact those policies through force. The government can fine you, imprison you, etc. regardless of what you think; however, you don't have to use Facebook in the first place.


That is the default answer, but there comes a point when you have to reconsider it in light of the facts of a particular situation.

Take email, for instance: first class mail is protected by the Fourth Amendment, and a warrant is required to open and read it. Move communications to a private digital platform and the presumption of privacy is completely flipped into a presumption that all your emails will be read and stored for future reading. When all communications go online, that leads to a very significant change in how the law applies to private communications, and that is exploited by the government just as much as by the corporations. After many years of a free-for-all in which law enforcement and intelligence claimed they could view anything online without a warrant, there was finally some pushback by courts to reassert the rule of law, but email still has substantially less privacy than first class mail.

Speech on public forums is a similar case. The 1st amendment applies to the town square but if a corporation creates a digital town square then it gets the right of censorship. Then the government applies pressure on the corporation to censor on their behalf and the government begins to exercise a power it did not previously have. That was not a problem when online platforms were small and inconsequential, but when they grow to billions of users globally, then the power to censor speech on those platforms becomes quite influential on the exercise of real world power.

Corporations do not have police powers but they are subjects of governments that do, so once they get power they can always be made tools of the people that hold power over them.


No, it means that while you can cheer this decision today, how do you know it won't be used against you in the future? What if the decision maker misreads your intentions or sarcasm, or you don't believe your speech to be hateful but someone else does?

The whole point of defending speech you don't agree with is the promise that arbitrary rules won't be used against you when YOUR speech runs afoul of someone else's feelings.

Let me phrase it like this: if Mark Zuckerberg, who you may generally agree with, leaves Facebook and is replaced by, let's say, Donald Trump with his assistant Mike Pence, do you still think it should be Facebook's policy, as a service provider, to gauge what offensive speech is?

Maybe it’s not Facebook’s job to ban legal things you personally don’t agree with.


"Maybe it’s not Facebook’s job to ban legal things you personally don’t agree with."

The problem is not that I personally disagree with white supremacy. The problem is that white supremacist organizations are a significant and growing terrorist threat, and they are using Facebook to recruit new members. This is no different from the various platforms that banned ISIS.


Actually, while it may arguably be growing (kind of hard to tell with yearly numbers in the single digits or zero), it isn't all that much of an issue overall, or even relatively speaking.

https://www.bbc.com/news/uk-47626859


Did you read your own link?

3 of the 4 graphs against time showed right-wing extremism increasing, in Western Europe and North America, the UK, and the USA, respectively. The only graph that didn't show that right-wing extremism is a growing problem instead showed that it has been consistently bad in Germany, worse than the other two categories combined (in that graph, the other two categories were left-wing attacks and "Not Identified").

What's arguable about it?


It rises mostly through animosity. It had been on the net for years without any significant effect. Rumors have it they tried to recruit some weeaboos, but they were too smart.

The poster is pretty much correct.


Did you mean to reply to someone else's comment? This doesn't appear to have anything to do with my comment.


> No, it means that while you can cheer this decision today, how do you know it won’t be used against you in the future?

Because I'm not a white nationalist? Facebook already bans child pornography, pro-ISIS content, and doxxing. Does anyone seriously believe that Facebook is slippery sloping to banning all speech?

But let's say that Trump and Pence do indeed take over Facebook and make it so they ban all users that don't loudly praise Trump. If they do, honestly, so what? Facebook can't imprison you or legally remove your property. The worst they can do is ban you from their platform. Which, well, that sucks, but there's lots of sites on the Internet. In fact, you could go ahead and legally make a non-censorship Facebook.

...of course, sites like that already exist, in this world, not in the Trump/Pence Facebook world. And they're dominated by child pornography, white nationalism, and doxxing. No one at large uses them because they're horribly toxic and disgusting. Almost like some rules about content are actually helpful for sites on the Internet.


It seems you and I go to different websites; and I’m quite happy about that.

But I believe my comment said legal things. Having a different opinion is still legal, for now. You jumped to all sorts of illegal things.

And my context was: what if "the bad people" are suddenly running these services or are in positions of power, and relative to them, you are the one with hate speech? Perhaps it shouldn't be up to them, or you, or anyone to police what someone else MIGHT find offensive.


Pro-ISIS speech isn't illegal but it's still banned. Pornography isn't illegal but it's still banned. So the legality argument here isn't particularly compelling.

> Perhaps it shouldn’t be up to them, or you, or anyone to police what someone else MIGHT find offensive.

You have got to be kidding. If it's on a platform I control I can make any kind of legal judgments about the content I find acceptable. And if you're banned, again, so what? Facebook is not a government or even a government-adjacent entity: I have no right to participate on the site, and they have no obligation to platform my speech.


Now I'm curious about what non-censorship sites you are using? Because the only ones that I am aware of are full of terrible people. If you have one that isn't over-run by the bad guys, please share it because others here might want to use it too.


> No, it means that while you can cheer this decision today, how do you know it won’t be used against you in the future?

I don't.

My view on this action is a view on this action, not on every potential future action Facebook might take that is loosely analogous.

> if Mark Zuckerberg who you may generally agree with leaves Facebook and is replaced by let’s say Donald Trump and his assistant Mike Pence; do you still think it should be Facebook as a service provider’s policy to gauge what offensive speech is.

Yes.

I also think that they would do a horrible job of it, and the board of Facebook should not choose them for that job. I also think Donald Trump does a horrible job directing the policy of the executive branch of the US government, but I don't think that fact means that the President of the United States should not direct executive policy.

If I thought that no one should have any responsibility or authority if Donald Trump would fail in the responsibility or misuse the authority, then, well, I wouldn't allow anyone to do anything.

> Maybe it’s not Facebook’s job to ban legal things you personally don’t agree with.

It's absolutely Facebook's job to decide what messages they are willing to relay on their platform. That's a direct consequence of the First Amendment.


The part where you insist you would still be OK with it if people you strongly disagree with were making the rules is intellectually dishonest.


> how do you know it won’t be used against you in the future

I think it's foolish to wring your hands over the arbitrary machinations of a multibillion-dollar corporation's online platform. They don't care about you or me, and frankly they have no obligation to do so. This logic is akin to complaining that McDonalds puts too much lard in the fries: maybe that's true, but if that's the case, then don't give McDonalds your business. The fact that there is a McDonalds on every street corner doesn't mean they are obligated to modify their business process to be within the parameters of your approval.

> if Mark Zuckerburg who you may generally agree with leaves Facebook and is replaced by let’s say Donald Trump and his assistant Mike Pence; do you still think it should be Facebook as a service provider’s policy to gauge what offensive speech is?

That's Facebook's prerogative. I don't really use Facebook, but if I felt like Facebook was unfairly targeting me I'd stop using their platform, if they elected Trump to the board of directors I'd stop using their platform. The solution to all these Facebook problems is very simple: stop using it.


Sarcasm and bullying aren't mutually exclusive.


Indeed. In fact, it's a rare bully who won't protest that they're "only kidding!" whenever they're called on their bullying. And in their mind, they probably were only kidding.


> That was considered a violation of their anti-bullying stance. Knowing her, it wasn't. It was sarcasm.

That line is grey and I'd argue that your friend's comment, while obviously hyperbole, does not respect human life.

There was an article posted about cyclists that points out the danger of dehumanizing others, and how comedy and hyperbole help groupthink achieve it.


So we are banning jokes now?


I think it's pretty reasonable to ban jokes about killing people. Maybe it was an overreaction here, but I certainly try to avoid even jokingly saying things that could be interpreted as a death threat if taken seriously, especially on the internet where it's easy for tone and context to get lost.


Just the unfunny ones


Of course we are. Where have you been the last 5 years?


> That was considered a violation of their anti-bullying stance. Knowing her, it wasn't. It was sarcasm.

That's what bullies usually say when you confront them.


A one off joke, or even mean remark, is not bullying.


Why should we tolerate any mean remarks at all?


How much would it cost to hunt [insert group of people here]?

Even if someone commented that as "sarcasm", it would look a lot like bullying to me.


Nope. You misunderstand what bullying is. You can't bully an abstract group.


In principle, who could be opposed? In practice, how will this actually work out?

We've had Nazis and Klansmen effectively shut out of public discourse for decades. They aren't on TV. Their ads don't run in newspapers. If you go shopping in a pointy white hood and a swastika armband, the store (or the shoppers, more insistently) will likely ask you to leave.

Facebook (and similar) are the ones with the odd new 'principle' thing they tried which is 'it's ok for the hyper-overt bigots to shit everywhere with no consequences'. In practice, it has not worked out well so they are belatedly changing it.


Blegh, the worst part is coming in here and reading people criticizing and outright banning the use of sarcasm, or implying that they know your friend better than you do and that she definitely, literally meant what she said.

This is the end of freedom and, along with Europe's recent decisions, the beginning of all the worst dystopias we have already listened to, read, and watched stories about.


I don't like recent trends either, but you really need to read more.


In principle, Facebook has never branded itself a bastion of free speech.

In practice, we've seen a lot of examples across YouTube, Facebook, and Twitter where things are blanket deplatformed under the guise of policies like these, and it's concerning.

One thing is for certain - Facebook can't possibly do a worse job than YouTube has at policing content.


If only Facebook focused its products on encrypted, private conversations, they wouldn't have to worry about / could claim to be incapable of policing content...


I promise you, people would still want to police communications like the kind that are being discussed here.


Corporations and organizations have always had the ability to police their content to their own desires. A more interesting question is when that becomes ethically wrong: are people against the big companies policing content _in general_ (and thus likely against the first statement), or are people against the big companies policing content that violates the popular definition of free speech once they reach a certain size?

I mean, we're discussing this topic on a site that highly polices content, albeit in different ways -- such different ways that it attracts the very commenters in this thread to read and respond to content here rather than on other sites, or any at all.


Until there are objective, rational ways to determine what content to ban and what not to ban, I'd prefer that we steer clear of filters based on "I know it when I see it".


If we did that with pornography we'd still be swimming in porno ads.


> Knowing her, it wasn't. It was sarcasm.

Did she know that the person she directed that comment to did not know her?


You don't have to know anyone to read that comment as satire. I suppose Facebook would also have banned Jonathan Swift for his "A Modest Proposal". https://en.m.wikipedia.org/wiki/A_Modest_Proposal


But people do literally post on Facebook that they're going to kill someone and then follow up and do it. Remember Brenton Harrison Tarrant?


A tiny minority... who would've done it anyway even without Facebook.


Surely you see there's a difference between a Modest Proposal and posting a Facebook comment suggesting someone should die with no other context informing people it's a joke?


Doesn't the context of a billionaire hunting endangered animals inform us that it may possibly be not-serious?

Surely?

On the other hand, there's bound to be people who believe fair-trial-judicially-decided capital punishment would be fair retribution for anyone who intentionally kills endangered animals.

I'm going to say something like: I'm against capital punishment because it tends to kill innocent people at least occasionally, not because it isn't often deserved.


The point of a comparison is that there is a parallel, not a difference. There is always a difference.

What is the parallel? In both cases, people who did not recognize the author's point took offense. (And yes, a lot of people were seriously horrified by Swift.) And in both cases the author's actual point was close to the direct opposite of the one those people thought. Some people found it obvious, others didn't. To people who found it obvious, it can be surprising that it wasn't obvious to others. People who didn't find it obvious think it a horrible thing to say.

In this case the comment is directed against the justification the billionaire offered, that it is OK for him to kill the animal because he paid lots of money to do so. And her point is that just because you pay lots of money to do a wrong thing doesn't make it OK to do that wrong thing. To see it, put the billionaire in the animal's shoes. How much money would it take to make hunting him OK? Obviously no amount of money would suffice! Just as the billionaire's having paid lots of money didn't make his hunting OK either.

That said, this one actually gets complicated. The money from these hunts goes to anti-poaching efforts. So the billionaire kills one animal, and his money saves others. Which still makes the billionaire a shitty person, but there is a utilitarian argument for allowing it.

That said, how would you feel if we were talking about hunting children dying in a famine instead of black rhinos on a preserve? The same utilitarian argument applies, but I think most would be for putting the billionaire in jail. How you feel about that is likely close to how that friend feels about what actually happened.


Ultimately I'd prefer that nobody was threatening to kill anyone in any context on a social media site, no matter how witty they think they're being. It doesn't seem like it should be such a high bar to set, but apparently it is.


The thing that makes it an obvious joke is that it's a simple reversal of a previous statement. That is a very common pattern for jokes.


Define "directed that comment to".

I am quite sure that Texas billionaire Lacy Harber does not know her.

I am also quite sure that the person who shared the post she replied to about his hunting an endangered black rhino did know her.

It would also be a safe bet that some friends of friends who saw that comment did not know her.

I would say that she "directed that comment to" the second. It is impossible to tell who reported her, or what they thought.


> It is impossible to tell who reported her, or what they thought.

Wouldn't Facebook know these things?

Under what circumstances do we have a right to confront our accusers?


The whole point of an anonymous complaint system is to allow accusers who might be intimidated by the idea of confronting you to have a way to get their concerns met.

Which means that anyone who has created an anonymous complaint system has traded off your right to confront an accuser with the accuser's right to not be intimidated and decided in favor of the accuser.

However there is usually another counterbalance, such as having the accusation silently disappear unless a neutral third party thinks that there is a point to the accusation.


> In principle, who could be opposed?

Ends vs. means, buddy. Ends vs. means. Every single political topic on HN seems to be arguments between people who can't separate the two and those who can.


Or take some Spanish politicians' comments re Gibraltar: basically right-wing nationalists who want to use Brexit as an excuse to annex Gibraltar.


Seriously tho, how much would it cost to hunt Texas billionaires? And world billionaires?


> Moving on to this policy, I'm happy to see neonazis and the KKK have a hard time. But there are people in the UK and EU who would see Brexit as being a separatist cause motivated by racial animosity.

No one is confused about the difference between "I want to kill all Jews" and "I think the British economy would be better off with fewer Polish plumbers".

> And remaining banned even for people who support it on economic grounds because they think that having to follow EU policies on GDPR, copyright, and so on will be a net negative for the UK?

This is a non-issue and just scaremongering, not all that different from "they'll allow sex with children next!" from the anti-gay idiots (no one is confused between gay rights and "pedo rights").


>No one is confused about the difference between "I want to kill all Jews" and "I think the British economy would be better off with fewer Polish plumbers".

You clearly haven't been paying attention to modern-day discourse over issues like illegal immigration. If you listen to some people (unironically, the same people pushing for stuff like this), having an issue with illegal immigration is basically treated like a confession of guilt. And that's even ignoring some of the newer types of arguments they're making: that merely holding an opinion that could be seen as problematic or a "dog whistle" (with them as the sole judges, and in effect covering anything to the right of left-of-center or non-PC) is a "gateway to extremism", and so "hate speech" needs to be banned next.


I have been paying a great deal of attention, and it turns out that there is more than an insignificant overlap between actual "kill all Jews" Nazis and the regular anti-immigration crowd.

Furthermore, as soon as the literal neo-Nazis get criticized, the anti-immigration crowd either outright jumps to their defence or shrugs any concerns off. Trump's "very fine people" remark is a good example of that, as is a recent article which "proved" anti-conservative bias on Twitter by pointing out that people like David Duke (former KKK grand wizard) or Richard Spencer (literal neo-Nazi who wants to forcibly eject all Jews and Blacks from the US) got banned from Twitter.[1]

So, if you don't want to be treated like a duck, then don't walk and quack like one. How else am I supposed to interpret an article defending literal neo-Nazis as "Conservatives treated harshly on Twitter"? Look at the data that is presented[2]; it literally includes the American Nazi Party. Now, if you want to make the argument that we should allow these people because "muh free speech" then okay, but read that article again, it just talks about "Conservatives" and "Trump supporters" (aside: there are other problems with this "study" as well, such as not including various Liberal accounts that were banned for unstated reasons).

I also agree that sometimes anti-immigration views are brushed aside as "racist" far too quickly, and it annoys me as well. But this kind of confusion is a bed of your own making.

I am not as pro-immigration as you might think based on the above; I think there are some real problems caused by both legal and illegal immigration, and that they should be addressed. But it's plenty evident that the anti-immigration right has long since been infested by some very nasty people, which is doing a great disservice to everyone else, especially those with anti-immigration views.

I suggest you first take some effort to purge the toxicity before complaining about "PC".

[1]: https://quillette.com/2019/02/12/it-isnt-your-imagination-tw... [2]: https://docs.wixstatic.com/ugd/88a74d_d231bdbfb13c4b9ab77422...


I don't even need to write up a full reply, because you went and proved my point perfectly.

Idiots like David Duke and Richard Spencer make weak strawmen here. And i'm not even going to play into that game.

The problem is that paying better attention to whom you... defend(?), works both ways. Certain groups and people are literally believed and "protected" by the mainstream media by default, when they are clearly coming from a pretty strong POV.

>your own making

No it's a silencing tactic, no more and no less.

When one side stops calling everybody else nazis, then maybe we can all come together and have an adult talk about "toxicity" but that's not going to happen as long as some keep acting like little children on the playground calling others names to shut down debate.


I'm not shutting down any debate, I am pointing out there are literal neo-Nazis being defended under the banner of Conservatism. Be angry at Quillette, not me. There are plenty more examples. You can side-step that all you want with vague accusations against "mainstream media".

And "one side calling everybody Nazis"? Really? Did you forget that Dinesh D'Souza wrote an entire book calling the left "Nazis"? That there were ACA protesters with Obama defaced as Hitler? Never mind shining examples of "adult talk" such as "Assume the Left Lies, and You Will Discover the Truth"[1].

[1]: https://www.nationalreview.com/2019/03/left-lies-trump-russi...


“Joking” about murder or gun violence should be banned. To use satire to critique hunting when gun violence is a bigger issue is tone deaf.


That's really a perfect time to use satire.


If people weren’t shooting up schools and places of worship every other week... still no. It’s still callous.


> In principle, who could be opposed?

I am opposed in principle.

White nationalism (or black nationalism, or yellow nationalism) should be able to speak on a social platform like Facebook.


I suspect there is some value in pushing such content off of large sites to reduce the surface area available for recruitment.

I used to think that pushing such content off a site or forum was futile and that calling it for what it is had value; I'm not sure that is the case anymore.

If you look at platforms like reddit, where you have groups in subreddits calling for race wars, general hatred, and violence (and of course the lesser effort of just normalizing hate), there's no discussion anyway. In their own subs any hint of anything less than absolute commitment to the talking points of the day, the ever-changing theories, and the general hatred is an instant ban from their subreddit(s). Then they venture out, spam their fake news articles, take over other subs, and push their message with endless streams of new accounts. There is no discussion, just constant spam, memes, etc.

We've seen topics like Gamergate, incels, and such (not that tasteful to begin with) quickly pulled to the extreme, going from something about "gaming journalism" or general angst to identity movements steeped in fear, hate, harassment, and, for some, actual violence.

I suspect there may be some value in ushering these people away, if only to prevent the recruitment of others who might otherwise be drawn in by the message that fear, dissatisfaction with life, and other problems can be explained with hate.


>calling it for what it is on a site had value,

The terrifying thought is that this might be a product of scale more than anything else.

Because of the bullshit asymmetry principle, it's easier to produce propaganda than debunk it and "call it what it is," so any sufficiently large and scaled platform will almost certainly eventually be flooded with propaganda bullshit.


Has anyone touched on a sustainable solution to this?

Minimum wage human shit-sponges with limited mental health benefits are the current industry standard.


I spend most of my time on Reddit, and you’re spot on here. Deplatforming is the most effective way to fight the spread of hate. It’s literally a cancer, and it’s going to take the chemotherapy of stepping on the toes of free speech to protect our country and society.

Sucks, but that’s what it takes - especially when they wrap themselves up in the flag and hide behind the first amendment to spew their hate.


You are not forced to look at subreddits that offend you.

> hide behind the first amendment

I think people need to be able to hide behind the First Amendment; you misunderstand what it is.


Not if your speech is advocating for the erosion of the rights of others’ speech. Just as there is a paradox of tolerance, there is a paradox of free speech, and the hate groups use it to great effect in this country.


>You are not forced to look at subreddits that offend you.

I think this is the greatest misconception about the push for de-platforming. The goal is to keep the platform from becoming an echo chamber for racism/hate/violence, not to stop people from being offended by content they dislike. By removing mainstream sites as options to promote hatred/racism/violence, it's likely there will be a measurable reduction in people joining these groups, as there will be less traffic.


>Deplatforming is the most effective way to fight the spread of hate.

As evidenced by what?

From everything I see, deplatforming creates filter bubbles, which lead to radicalization, which lead to hate.

After the rise of deplatforming tactics in the last 5 years, American society is more radicalized than before.


There have been a few informal studies that have shown its effectiveness. Additionally, the mechanism for action is clear - deplatforming actually breaks up the very filter bubbles that allow hate to become organized. It makes it harder for those seeking to spread hate, and gives more of an opportunity to consider alternatives for those who are perhaps caught in the drain of that bubble.


Have to agree here. Deplatforming is definitely not the answer. To use a crude but fundamental analogy: if you want to reduce the concentration of something, you need to dilute it, not separate it from the rest.


I think it’s the best tool we have, and we can either use that tool or twiddle our thumbs while this problem gets worse.

As for your dilution analogy, deplatforming does have that effect because it breaks up the filter bubbles of the hate groups. It makes the media feedback loop of hate more difficult for those caught in that. From the perspective of those considering the hate, their input has been diluted.


I think the current discourse is far more polarized than it was maybe 5 years ago. I think your assessment is delusional if you think your country and society got saved to any degree. On the contrary.


Describing groups of humans as a cancer is stage 4 of the ten stages of genocide:

http://genocidewatch.net/genocide-2/8-stages-of-genocide/

You’re engaging in behavior with truly horrifying antecedents.


I’m describing the spread of hateful ideas as a cancer, not the people themselves. We use the description of “virality” to describe the meme or idea, not that the people sharing are viruses. Same thing here.


In that case, we should ban /r/politics, the_don, shitredditsays, latestagecapitalism, news and worldnews, and every other obviously partisan subreddit, because by the time they're obviously partisan, they're downvoting and banning any dissent shared within their community, and thus are spreading hate of everyone who doesn't conform to their extremist bubble standards. We should also perma-shadowban all users who contributed any posts or comments to any of those communities, because they are agents of hate. We should only leave the most abstract or concrete communities focused on real things with nothing but objective curiosity, like gaming, or DIY, or buildapc.


>Gamergate <went> from something about "gaming journalism" or general angst to an identity movement steeped in fear / hate, harassment, and for some actual violence.

https://www.youtube.com/watch?v=c6TrKkkVEhs

According to ^this video, gamergate actually started out as the latter and only later claimed to be the former.


It blended very quickly from the start, but the claims about "gaming journalism", while hollow as far as their actual activity went, were IMO semi-legitimate as far as some of the initial outrage... but really it moved on quickly and that didn't matter.


All those patriarchal forums are the top-end of a stochastic terrorism funnel.

1. [The Pewdiepipeline](https://youtu.be/pnmRYRRDbuw)
2. [Kill All Normies: Online Culture Wars from 4chan and Tumblr to Trump and the Alt-Right](http://www.goodreads.com/book/show/34858587-kill-all-normies)


>Our efforts to combat hate don’t stop here. As part of today’s announcement, we’ll also start connecting people who search for terms associated with white supremacy to resources focused on helping people leave behind hate groups. People searching for these terms will be directed to Life After Hate, an organization founded by former violent extremists that provides crisis intervention, education, support groups and outreach.

The fact that they're readily going down the slippery slope of propaganda by identifying people with Wrong(TM) beliefs and directing them at the Right(TM) beliefs does not sit well with me. Censorship aside, this is a different issue and a very dangerous precedent for a "platform" to be setting IMO.


Also, by your merely searching for specific terms, Facebook gladly assumes your motives to be a particular way. This seems misguided and not well thought through.

Finding a book in the library about frogs doesn't make you a frog.


I find that specific part to be terrifying.

You could be searching for those terms for a good reason, maybe I'm writing an essay about the spread of white nationalism and want to see how many groups exist online, or whatever.

It's very dangerous to label people and infer their intention automatically.


So make a specific account for research purposes like everyone else does.


> like everyone else does.

Some people who research white nationalism don't make specific accounts for research purposes [1]. In general, and based on my own usage patterns, I think it's probably dangerous and definitely scary to assume that googling for "X" on your primary account implies that you support "X"

[1] Source: Currently sitting next to someone who is literally a full time white supremacy researcher.


Sounds like pretty bad opsec to me, but to each their own.


The whole point of natural rights is that normal law-abiding people shouldn't need "opsec" against their own government or domestic corporations.


What exactly makes you uncomfortable with calling genocidal racism "Wrong(TM)", and helping people leave it? Are you also uncomfortable with them banning child porn, or is that an example of punishing wrongthink too?

EDIT: I invite downvoters to tell me why they disagree?


I find it worrying that a) you’re being downvoted and b) that there are actually sympathies for white nationalism and fascist thought on HN. All of us who love freedom, and love our friends whatever background they may have, should be working to not let these kinds of people obtain more power.


> I find it worrying that a) you’re being downvoted and b) there actually being sympathies for white nationalism and fascist thought on HN.

Ugh... I knew this day would come. The slashdot crowd has found out about HN.


I don't know what to make of it. Has this always been here? Has it surged recently? I really don't know what to think.


I don’t know what slashdot is but thanks for the ad hom. Btw you’re not some kind of special person only because you have an account on HN.


I had the same interpretation as you but after rereading and looking at their comment history I think they are actually quoting you to concur with you, i.e. the white nationalists are the slashdot crowd


Yeah, I was getting way too defensive in this thread. You're probably right, and it's terrible that the fascist threat makes me react totally unjustly to fellow commenters who mean well.


yes. It is incredibly upsetting and scary for me. I'm so glad there are other sane people out here. I hope we get through this.


> genocidal racism

Not allowing people of European descent to have their own state, country, etc. IS genocidal. Is it "supremacy" and "racism" to simply advocate for the preservation of a culture and a people?

Whether you agree, they should at least be able to talk about it without being labeled "genocidal racists".

(Not talking about actual white supremacists. Don't believe I've ever met one.)


One of the best parts about having the freedom of speech is that I’m free to label those people genocidal racists. Freedom of speech doesn’t protect you in any way from being shamed for holding disgusting beliefs.


> Whether you agree, they should at least be able to talk about it without being de-platformed for being labeled "genocidal racists".

Fixed.

>One of the best parts about having the freedom of speech is that I’m free to label those people genocidal racists

And now you can do so on facebook in peace with all the other genocidal racists.


I think you have a misunderstanding of the term "white nationalism". I'd recommend looking it up to understand what is happening here. But in short it means the belief that there is a white race and that it has a right to preserve itself and its culture.

Genocidal racism was never allowed on facebook. After this move they will censor a lot of legitimate things.


I just don't believe that you neutrally believe "there's a white race" without also believing in its superiority over other races, whatever you euphemize it as: culture, IQ, or something else.

And this idea of "preserving itself"... what exactly does that mean? Does that mean getting to kick other people out of their homes? Threaten their livelihood? Threaten their social safety nets? What exactly happens in this supposedly benign process of "preserving culture"?


White nationalism is not a legitimate thing and I implore you to get in contact with networks that help individuals get out of hate groups.


Your jumping to conclusions exemplifies why this move by facebook is so incredibly dangerous.

You jump from "censoring white nationalism will censor legitimate opinions" (such as "there is a white race" - a question on which there are good arguments for both sides, and which it's reasonable to debate), to my holding such opinions. Then you jump from a few of those opinions to hatred. And then you jump from there to being part of hate groups.

Calm down and think.


There is no slippery slope argument when you're talking about literal Nazis.

We went to war on the subject.


This comment perfectly encompasses why it's a slippery slope. The definition of "literal Nazi" is used so broadly in political discourse. Politicians have been called Hitler and Nazi since Hitler and Nazis became a thing.

The massive danger is going to be the countless false positives this thinking generates. Just like the craziness of arresting the guy whose dog made a nazi salute in the video.

He would be denied access to social media because of stupid interpretations of a joke.

This isn't about defending or persecuting Nazis, it's about protecting the public from systems of abuse.

Not to mention the ineffectiveness of online censorship. The only thing worse than Nazis organizing on public platforms is forcing them to congregate only with other Nazis on radical websites where no one will challenge their worldview. But that's a totally different discussion.


> The only thing worse than Nazis organizing on public platforms is forcing them to congregate only with other Nazis on radical websites where no one will challenge their worldview.

Sorry, don't agree. Forcing them to congregate on unpopular websites is great because those websites have a terrible reputation and that massively hinders recruitment. Sure, they're echo chambers, but extremists already use private fora for more underground organizing activity and likewise there are people who specialize in monitoring and infiltrating those websites.

If your argument is that they just need to be talked around with reason and logic, you can always go to one of their websites and give it a try. Sure, deradicalization can work and it's great when it does, but generally the odds are poor and it's extremely labor intensive.


That's a lot of words to type to say, "I support Nazis being able to openly operate and recruit on Facebook."


Yes, that's exactly what my comment said. Better ban my FB account! (/s)

If they promote violence, or brigade, harass, or attack other members directly on FB, then yes, they should have their pages shut down. That is more than sufficient, rather than saying any language that can fit into a very broad and vague category will be banned.

The simple fact alone: if you asked various people in America to define a "literal Nazi", you'd get a hundred different answers about who fits into that category.


Being a Nazi is literally promoting violence, and in any case amounts to an end game involving genocide. I agree there should be no censorship. But then you should be very vigilant not to let Nazis control the discourse as they do now. Nazis are not to be talked with; they need to be scared out of the public space so that they either reevaluate their positions, change, and can be reaccepted into society, or live out pitiful lives in solitude.

And for definitions: I think fascist thought is very easy to identify. Favorable views on ethnic cleansing, and unprovable stories about mighty groups of people, should never be tolerated.


> We went to war on the subject.

This seems historically inaccurate, unless from the perspective of German resistance.

Britain waited on the war to come to them, and the U.S. didn't join until Pearl Harbor.


Your comment implies that Nazi Germany had absolutely ZERO redeeming qualities, which no credible historian would agree with. (e.g. no other country since has had the degree of focus on conservationism and environmentalism)

And we went to war because of Japan, whom we put into concentration camps.


> Your comment implies that Nazi Germany had absolutely ZERO redeeming qualities, which no credible historian would agree with. (e.g. no other country since has had the degree of focus on conservationism and environmentalism)

There are plenty of nations, both before and since, that have so much higher of a focus on conservation and environmentalism that they don't compromise those in the interest of conquest and genocide; the Nazi campaigns directed at those aims, leaving aside the moral dimension of their more direct effects on humans, were fairly disastrous from an environmental and conservationist perspective.


Are you really going with, "But think of all the GOOD things Nazis do?"


> But think of all the GOOD things Nazis do?

I was talking about the National Socialist German nation. Not modern "Nazis". Take the good, reject the bad. Since when is it shameful to be a history buff?


You are REALLY going with, "But think of all the GOOD things Nazis (regardless of the era) did."

I mean, there's literally no response to that. I'll pray for you.


I am also nearly a free speech absolutist, but I feel that platforms like YouTube and FB are nearly in-scope. I can't specifically say what the criteria is, but there's some notion of how much responsibility they take for the content.

Something like the notion of a DMCA shield, where a platform can expose third-party content without consequences so long as it responds to take-down requests, means that the platform does not get to do filtering. If it gets to do filtering and curation, then it is also directly responsible for copyright violations or libelous use of its platform -- the shield shouldn't apply unless it cuts both ways.

That is, by accepting some immunity from the consequences of free speech, they are then subject to free speech restrictions on their enforcement of their own opinions. A newspaper, for example, is not subject to the same restriction -- they are responsible for all content they publish, so can choose to publish or not publish whatever they want.

This isn't a precise framing, but I'm trying to capture some of my general thoughts on why I find it unappealing that YouTube and Facebook can filter out content that they deem offensive.

Facebook has a natural filtering mechanism through the ability for a user to restrict what they see. If you find content offensive, unfriend or block the person and you're done. Youtube can exercise some discretion in auto-play and recommendations, but they shouldn't be able to ban content unless they are required to do so by inheriting legal requirements.


Should anyone be forced to pay for someone else's speech that you don't agree with? Because that is what you're asking these companies to do.

I think the explosion of hate speech on the Internet is entirely because someone else is basically paying for people to post whatever they want. They don't need to create their own newspaper and get advertisers. They don't need to print signs. It's all free for them. You can consume as much services as you possibly can given your free time and those with more free time, therefore, get more free (as in beer) speech.


> Should anyone be forced to pay for someone else's speech that you don't agree with? Because that is what you're asking these companies to do.

If they are going to get the benefit of _not paying_ for speech that they would ordinarily be penalized for (libel, copyright infringement) then that's the cost, yes.

I think they should be able to impose arbitrary limits on the cost of serving content, either the size of the content or the bandwidth consumed by serving it, but by invoking any sort of copyright/libel safe harbor, they should lose their ability to curate based on content rather than metadata.

Sort of like being able to opt in to being a common carrier -- it carries both benefits (immunity from most criminal and civil prosecution for content served) and obligations (complying with takedowns and not discriminating based on content).


> If they are going to get the benefit of _not paying_ for speech that they would ordinarily be penalized for (libel, copyright infringement) then that's the cost, yes.

Why? That's an interesting statement, but I don't see how one follows from the other.

If you create a community online, should you not have some control over that community? Hacker News frequently censors and bans people in order to maintain a certain community. What if you created EverybodyBeNice.com that wants only "nice" posts? Is it not that fair that these companies can enforce their own standards while also getting protection from libel and copyright infringement made by their users? I don't think there is a good argument why one should be tied to the other.

I think arguing that every single website should be a common carrier takes that concept way too far.


> Why? That's an interesting statement, but I don't see how one follows from the other.

You're right, of course, I don't think there's a direct implication between the two. There's a difference of degree, not kind, to some extent. My intuition is that the value that YouTube derives from being a place where people can post videos (and, somewhat importantly, monetize them) is much higher than the value that HN derives from people posting content.

EverybodyBeNice.com is an interesting thought experiment. It wouldn't be very convincing, though, as a shield, if there was no effort put into active moderation not only for "niceness" but also for copyrighted or libelous content. The idea lives in that grey area between "completely curated content" and "just a place where people can post content". I guess my feeling, roughly, is that at some point during EverybodyBeNice.com's rise to serving billions of videos, they can no longer credibly claim that they are preserving the "niceness" unless there is some sort of review process that by its nature must also police for other kinds of illegitimate content.


> My intuition is that the value that YouTube derives from being a place where people can post videos (and, somewhat importantly, monetize them) is much higher than the value that HN derives from people posting content.

I don't see how "value" contributes to the argument. YouTube makes money showing videos to people that other people have posted. Hacker News makes no money linking to articles that other people have written and showing comments other people have written. But in neither case is there a moral or legal imperative to allow any video or post on either platform based on any concept of value. Value doesn't (and shouldn't) matter.

> It wouldn't be very convincing, though, as a shield, if there was no effort put into active moderation not only for "niceness" but also for copyrighted or libelous content.

Whether such a site did or did not moderate for niceness, copyright, and libelous content, I don't think it matters. You can't assume perfect moderation, and some mean comment, copyrighted content, or libel might get through. As long as the site makes an effort to remove copyrighted and libelous content when it's discovered, it should be shielded by the law. Furthermore, I think that's morally acceptable. It's incredibly difficult to filter for niceness or copyright.

> The idea lives in that grey area between "completely curated content" and "just a place where people can post content".

There is no such thing as "just a place where people can post content" in the sense that you are going for. Every place is owned and controlled by someone. They have no moral obligation to allow any content. It's also perfectly reasonable for them to also be shielded from copyright and libel if they make a good faith attempt to remove it.


> There is no such thing as "just a place where people can post content" in the sense that you are going for.

It's a bit contrived, but the internet itself is such a place. I think it's widely accepted that the internet itself, and to some extent DNS, is a common carrier in the sense that it shouldn't be doing discrimination based on content, just on bandwidth. I mean, I don't want to start a whole net neutrality thing here, but a router or the operator of a router does not face any consequences for routing traffic, no matter how offensive or illegal that traffic is.

Web sites that are principally public repositories, where people can upload their content, seem to be a borderline case of that -- if they want to offload the risk of hosting the content, then they can do so, but it seems to me that they have to provide a service to society in return, namely, some sort of neutrality about the content posted.

I think it's equally reasonable to say that they should have no shielding at all -- society has no vested interest, necessarily, in allowing such a site to exist that has absolutely no say over what is posted on it. Allowing users to contribute to a site is fine so long as the site, in the end, bears the responsibilities for societal negatives associated with posting content (primarily libel and copyright, the two unambiguous restrictions on free speech).


> It's a bit contrived, but the internet itself is such a place.

The Internet isn't a place and that's what makes it a carrier. Just like the telephone system, the Internet connects you to other services.

Websites are not principally public repositories; they are privately owned services. If they allow you to post, that doesn't make them public in an ownership sense, or make them a carrier.

> If they want to offload the risk of hosting the content.

But they aren't. The law doesn't allow companies or individuals to get away with hosting copyright infringement. It simply makes it possible for companies that accept user uploaded content to function in any way as the alternative would make this impossible. That's all. Implying this requires a moral imperative to allow any content is a stretch at best.


> Should anyone be forced to pay for someone else's speech that you don't agree with?

Facebook isn't an "anyone" - it's a thing.

We can decide to do whatever we want to these things - they exist only because we permit them to. They have no natural rights.


I think that unless they are in a market-dominant position, or there's collusion between several players to effectively create a cartel that has that position, then there's more leeway.

But the ones that are - where their private rules have an extremely broad stifling effect that in practice makes it comparable to government censorship - should have very little leeway. I'd rather not allow such dominant positions in the first place (i.e. cap the size directly), but if we are doing that, it should come with strings attached - e.g. smaller companies get a choice between common carrier and curated platform with responsibility for everything, but something as big as Facebook is forced to be a common carrier.


DMCA is a United States law, isn't it? So what about mixed-region content? What about content from a region where it is legal, but it's not legal in the region of the person making the complaint?


I believe the DMCA's safe harbor provisions were adopted independently by most other jurisdictions, and there may even have been international treaties establishing some of the rules. Most sites are unlikely to contest take-down notices regardless of jurisdiction; that is incumbent on the party posting the content.


Article 13 is going to change this...


I consider myself a free speech absolutist. Or very nearly so.

But that runs both ways. Not only do I think that almost all (to borrow a mathematical term) speech should be permitted, but also that the onus is on the speaker to make themselves heard or to find an audience. No company or platform owes them anything.

In earlier times, that meant that no newspaper or printing press was required to print everything sent their way. Today it's the centralized websites that also don't have this obligation.

If FB decides to ban bigots, or YouTube wants to kick anti-vaxxers off their site, so be it. You can host your own video if you'd like, or maintain a personal blog site.* You are not entitled to widespread distribution. Never have been.

* Which leads to a place where I'll agree on obligations: root infrastructure like DNS or network connectivity. Those are the common carriers of our Internet.


I'm going to respond to this comment because my response is just kind of in general to the whole conversation this spawned.

In what way is facebook more of a public space than hacker news, reddit or any public forum, message board or website where users can sign up and post content?

You couldn't post a bunch of white nationalist comments here or on most other sites without getting banned.

Why would and should facebook be treated differently than other websites? Having a large number of users and a lot of money doesn't suddenly make them a public entity. The fact that people use their real names and identities doesn't (they're not the only site that does this). Them tracking you across multiple websites and aggregating and selling your data doesn't either.

What is it about facebook that causes people to have this misconception?

If this were ISPs, DNS providers, or even hosting providers, they really shouldn't be able to (as far as I'm concerned, but that's a different discussion, and it does happen), but facebook doing this is the same as any other website that allows user discussions and user content setting rules.

There's an internet outside of facebook. People making the choice to reside entirely there doesn't automatically make them any different than other websites.

It's the users that choose to give them this power by using their service. Though... I know, not entirely... pre-installs on phones and things like the free-data facebook messaging in the Philippines put things outside individual users' control sometimes. But that's on poor government regulation and on phone manufacturers.

And... I'm not a big fan of facebook. I find them shady and greatly distrust them. But part of that is because I understand they're a shady private corporation with zero obligation to the public. Pretending otherwise is unhelpful.


One thing that I've been thinking about, and I'm curious on your thoughts on, is Facebook's "Internet.org" project.

If you're not familiar, Facebook is offering low-cost (or free) internet access in less developed countries to a hand-picked selection of sites [0] and has roughly 100m users. [1]

It's been argued that they're essentially acting as an ISP and are significantly violating the principles of net neutrality. [2]

You say "people [are] making the choice to reside entirely there", but for millions of people around the world, Facebook is the only social media platform available to them at a reasonable cost.

Should operating this platform be enough to treat them differently? I could see arguments both ways.

[0] https://en.wikipedia.org/wiki/Internet.org

[1] https://techcrunch.com/2018/04/25/internet-org-100-million/

[2] https://www.theverge.com/2017/7/27/16050446/facebook-net-neu...


I've never heard of that specific program. The only one like it I really knew about was the free data stuff in the Philippines, because of a friend of mine there. I know in that specific instance it comes down to basically non-existent regulation of carriers and ISPs and a lack of any real choice.

If Facebook is becoming a de facto ISP in some countries, it should be subject to the same regulations and really should be required to follow net neutrality rules, though I don't think those exist in a lot of developing countries (or, I guess, in America either...).

In the end though, it's those country's governments allowing facebook to do this.

These are my realistic thoughts on the matter.

My own personal thoughts...

That shit's downright evil. Governments should absolutely not be allowing facebook to do this. But I feel the same way about ISPs in general owning content providers and favouring their own services.

This practice would be the same as Ford or GM owning a bunch of gas stations and giving preferential prices to drivers of Ford or GM vehicles. I honestly don't understand why it's looked at differently when it comes to internet companies or technology in general.

If a social media company wants to be an ISP, or an ISP wants to be a content provider or host, they really should not be able to give preferential treatment to their own content, and companies that do this should be scrutinized heavily for bullshit.


I think it's because Facebook is both more visible and personal than other platforms. Horrible, racist, violent, and outright evil content has existed on the internet since its inception. The difference is that journalists don't browse 4Chan and I imagine when people see content on anonymous forums they're better able to rationalize it because they can come up with a multitude of excuses for the poster. It's a little more surreal to see "real" people openly and publicly calling for the extermination of non-Aryan people.

I think that's why Facebook has been getting more criticism and attention in the media than Reddit or Twitter, despite wielding comparable social influence. On the flipside, I also think that's why people are more outraged about the "censorship" issue. If you ban a Reddit user you're just banning some username. If you ban someone on Facebook, you're banishing a person and stifling their voice.

The slippery slope arguments are sensationalist. Facebook had many motivations for implementing this policy, some of them undoubtedly motivated by money, but ultimately I would argue they implemented it because American and western society pushed them to. To the extent that western society has unequivocally decided that racism is very bad and not good, this policy change will also not push us over the edge into total censorship because that's not what we want (and I think it would be difficult to take the stance that that's the direction we're heading in.)


> Why would and should facebook be treated differently than other websites? Having a large amount users and money doesn't suddenly make them a public entity.

Because FB is not targeted at specific communities of people like HN, and less so reddit, it's welcome to everyone. Everyone is in fact actively encouraged to join. That's their mission. Private spaces that are open to the public often have first amendment protections:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=139661


I would think it has to do with the invisible/nonexistent/impersonal nature of Facebook moderation.

If there’s no face, if the site is designed so you never feel like a guest on someone else’s website, you'll assume you’re in control of the social rules.


The question here isn't whether Facebook/Twitter/etc. have a right to control what is on their platform, they absolutely do. The question is whether there should be safe harbor provisions that protect them from legal repercussions of what they do choose to allow on their sites.

A case can be made to protect someone who allows all comers, who increases the reach of the public to communicate with each other, and who extends the scope of public discourse to be protected from the legal consequences of what their users say. The same case is a lot harder to apply to an institution that curates the content they choose to publish. By way of comparison, telephone carriers, an industry that exerts no editorial control, are largely not legally liable for what customers say over the phone network. Newspaper publishers, who have absolute editorial control, on the other hand are legally responsible for what they publish.

Which of these are Facebook and Twitter? My answer is that the more they limit speech that is not legally criminal then the more liability they should have for what they allow.


The problem is that, for a paper or a TV network, control scales. You can pump your network TV signal out twice as many antennas, and still control every frame of video.

For the telephone network, control does not scale. There is no way for the phone company to censor what happens in one billion daily phone calls. It can't be done (until we get adequate speech recognition, but that's not an answer that I want to see applied in this way).

Facebook, YouTube, and Twitter are like the phone company, not like newspapers and TV. Censorship does not scale.

Twitter, being text, could think about trying to automate censorship. The problem is that people do all kinds of typos, circumlocutions, innuendo, and dog whistles to get around such things. Words are really flexible, so it's a hard problem.

Facebook and YouTube are worse. They can try to have automatic assists, but pretty much the only way it can be done is via user reporting. Even that doesn't scale all that well.


I think the interesting case is the presence of a monopolist (and I take no position here as to whether any particular company is a monopoly or not). Other [classically] liberal principles like freedom of contract have been abrogated in such cases, largely out of the realization that competition and choice are what generally make such private exercises of power acceptable.


A popular argument for Net Neutrality posited that allowing non-government monopolies to exercise free rein over what they've built will result in free speech being infringed. Here is an article and call-to-action from the EFF:

https://www.eff.org/deeplinks/2017/06/attack-net-neutrality-...


While I agree that Net Neutrality is necessary, the argument given in this EFF article seems to contradict itself with regard to FB, Twitter, et al. The article notes that social networking sites have been used by people to coordinate in order to get their viewpoints heard. But then it says:

"What does this have to do with net neutrality? Simple: all of these services depend the existence of open communications protocols that let us innovate without having to ask permission from any company or government."

But anyone who uses FB or Twitter does have to "ask permission" from those companies: the companies own the sites, not the users. And the idea that FB, Twitter, et al are based on "open communications protocols" is obviously wrong.

If the argument is that FB, Twitter, et al should have to use open communications protocols, let anyone use them without asking permission, etc., then that is an argument for ending FB, Twitter, et al as private companies and making them public utilities run by the government. I'm not sure that's a good solution. It seems to me that a much better idea would be to encourage competition in free speech platforms on the Internet, so that people do not have to depend on FB, Twitter, et al to coordinate and get their viewpoints heard.


There's a fundamental difference between an ISP and an edge provider. The service provider is like a system of roads whereas the edge providers are destinations.

As long as the roads are free, it should be fairly easy to buy a plot of land and build your own destination. However, if the road company won't allow you access, it stops being feasible.

Now in this analogy, large edge providers are campuses with internal roads that they control and many entrances and exits. They too can become a problem when they grow large enough.

So therefore, I would think keeping the road system neutral insofar as the connections it allows would be the first priority. Making sure edge providers don't subvert the system either by growing too large or by organizing to control the main system of roads would be of similar but lesser importance.

Moreover, if the main roads aren't neutral, there's no reason to make campus roads neutral.


Like a newspaper deciding what to publish?


There was a time in history where everyone could start a newspaper. Pre civil war in the US, the modern day price of a printing press was $10k and almost everyone in a town would buy papers and take them on a journey across America. That's how information spread back then.

By WWI, the cost in today's dollars for a newspaper grew to several million. People can't live off small papers today. They live off Patreon and YouTube. They depend on distribution platforms in a way we didn't before.

If Amazon bans your books, it's effective censorship for people who only read eBooks. Sure you can distribute the epub/mobi DRM free, but few people are going to bother with that (or know what to do with the file).

I've written about this before:

https://fightthefuture.org/article/the-new-era-of-corporate-...


What? The reason starting a newspaper became so expensive is because of a much larger, national distribution model. You're still free to buy a $10k printer and sell newspapers in your hometown. If it doesn't contain hateful, violent rhetoric, you can even advertise it on Facebook!

If Amazon bans your books, you can still buy the book elsewhere, or get it from the author's site. Just because a piece of information isn't in the medium that you prefer to consume doesn't make it "censorship." It's not on Amazon that "few people are going to bother with that" when they ban you for violating their ToS.


You can make your own website, though.


But Google and Facebook can ban your website and you would not receive any visitors. And you would probably lose old visitors who don't use bookmarks and instead type your site's name into an address bar.


I must have missed the part where Google and Facebook are obligated to ensure people can find your site via their services...

All of this seems to be something along the lines of "it's inconvenient, so it's bad, and [they] shouldn't be allowed to make it inconvenient on _their_ platform".


Your example of the Daily Stormer is interesting, because it illustrates two ways in which the Internet as it is set up today allows corporate censorship. The first is the one you describe: hosting companies and domain registrars can refuse to do business with you based on content. The second is one you don't mention: ISPs can (and virtually all of them do) prevent you from hosting content on your own servers at your own IP address. This wasn't the case in the early days of the web: most early websites were hosted on personal machines in people's homes or offices, connected to the Internet with their own personal or small business connections.

A truly libertarian Internet would still have the first issue, because of freedom of contract. But it wouldn't have the second, because having an IP address reachable from anywhere on the Internet and hosting any content you chose would be a basic right. (One could argue that having a DNS routable name for your IP address should be part of that right too, but the absolute requirement is the IP address: anyone can reach you with that even if there is no domain name corresponding to it.)


By all means Facebook can decide to be a publisher if they wish. That does mean though that they shouldn't be able to hide behind DMCA safeharbor. If they want to prioritize content they need to accept the responsibility that comes along with it.


"Goebbels was in favor of free speech for views he liked. So was Stalin. If you're really in favor of free speech, then you're in favor of freedom of speech for precisely the views you despise. Otherwise, you're not in favor of free speech." - Noam Chomsky


Thank you. Free speech isn't just an amendment for the government, it's a principle for society.

If Facebook were a television station with limited air time for a single track, it would be perfectly valid for them to decide (reasonably) who gets air time. However, Facebook, Twitter, and YouTube have infinite tracks and time. I don't think the OPs argument is as strong, with that in mind.


How does the amount of bandwidth capable for carrying channels relate to whether something should be carried or not in a legal or constitutional sense?

Carrying white supremacist content has a cost to both Comcast and YouTube/Facebook. Nobody expects Comcast to carry the KKK channel, and yet they have hundreds of channels and clearly have the space to carry it. So why aren't they compelled to do so if Facebook is?


To answer your first question, the constitutional response would be "it doesn't". Legally, I'm unaware of any edge cases, except potentially some net neutrality related things. (I'm not very certain about NN.)

My point, however, is that this is neither a legal nor constitutional obligation, but a social and voluntary obligation to a free society.


Agreed. Just wanted to see if there was some precedent I was missing :)


Comcast is few-to-many model, Facebook is many-to-many.

A few content producers create the content that Comcast could potentially carry. Comcast selects what they want to provide to their many customers. Comcast has never marketed themselves as a place where anyone can have their content provided to subscribers. Facebook has, in fact that is the essence of what they do.


Well now you're violating my free speech by telling me I can't decide what content I host on my website.


If your website's registered userbase is 1/4 of the population of the entire planet, then yes, that's what we're telling you.

It's no longer _your_ free speech.


What's the exact threshold? Just curious so I can make sure to stifle my business growth so as not to give up my free speech to not host racist content.


It's the publisher vs platform debate. If you're curating content based off of promoting your company's vision of acceptable free speech, then you become a publisher and a whole different set of rules apply to you. You're liable for much more in the eyes of the law.

You can't cherry pick the advantages of both and hide under the guise of being an open platform.


I think the terminology we have here for publisher and platform don't exactly fit this scenario. I think it'd be wise to find a new label and set of rules that better fits this type of company.


If you want to take that tactic it's probably just better to reframe the argument.

Most of the problems that come from being a massive social media juggernaut like facebook/twitter/etc come from those services being free.

Being free is what makes them massive monopolies and gives them such broad reach. I think 99% of the problem cases AND the lack of competition in the market could be plugged by breaking the ad-supported/VC-supported model and forcing these companies to charge something to their customers.


Absolutely not! I'd be interested in knowing how you came to the conclusion that was my argument.

Rather, it is important for society to support free speech, not just the government. While society is under no obligation to, it's unhealthy to censor speech.

It's even worse when society self censors free speech, and then pretends to still advocate it.


I was taking the point to an extreme. You're saying a society should support free speech so Facebook should let the white nationalist content stay on their site (by their own good will, not forced by government). Is that not accurate?

Deciding what I allow on my site is an exercise of my free speech. Why is hosting white nationalist free speech more important than my free speech to not host racist content?


The difference is that youtube/facebook are not publications themselves, but platforms that allow you to publish. So you already aren't deciding what to host; as a user-content driven site, you by default allow anything, and then (hopefully for moral reasons, but necessarily for legal ones) you ban illegal content. Beyond that, it still is a privately hosted thing, so I'd still agree that your self-run user-content site may ban whatever it likes. However, when you reach a general-use size, where people rely on your platform for free speech, things certainly feel different. I'm not sure what the right move is, but I don't think it's as simple as "my private site is my private site, that's the end of it".


>I'm not sure what the right move is, but I don't think its as simple as "my private site is my private site, thats the end of it"

Well get back to me when you've worked out exactly when a private entity shouldn't be able to decide what content they allow. This ambiguity is really too difficult to figure out.


I don't know, or I'd know what the right move is! For this, it's sort of a "you know it when you see it" thing: facebook/youtube/twitter are major platforms used by everyone, and so they feel a lot more like an online town square than a private space. So they are definitely past the line, and ought to have different rules on what content is allowed/disallowed than smaller, lower-population platforms.


I can agree with you. It's something I've been thinking about lately and I keep coming back to how you'd define this in legal terms. It's tough.


The problem is, if they are forced to allow "any legal content", they'll be overrun with spammers and other no-goods, and the users who just want to keep up with their family will all leave.


What about the legal risk associated with letting the Frog Brigade post their terrorist cookbook on your platform? Does Zuckerberg have to come to their house and load the bullets into the gun for them so they can make a statement about how angry they are? Because he made an app for people to cheat? I fail to see the connection.


>What about the legal risk associated with letting the Frog Brigade post their terrorist cookbook on your platform?

Like I said, banning illegal content is the first step that anyone ought to take. Generally this is done via the site's moderation staff plus user-submitted reports. If you're providing a free platform and terrorists start using it as their communication channel, then you're only at legal risk if you do nothing to stop them. However, I'm imagining things like the NZ shooter's 8chan post, where a direct threat is made or violent acts are directly implied. Posting a "terrorist cookbook" actually is a good example, as it's sort of right on the line. It seems to be the sort of information that only has negative uses; on the other hand, it's just information, as opposed to an actual threat. I'm not really sure if such things should be allowed... it could be helpful to terrorists, which is bad, but censoring information is nearly impossible and almost always seems the wrong way to go.

>Does Zuckerberg have to come to their house and load the bullets into the gun for them so they can make a statement about how angry they are? Because he made an app for people to cheat? I fail to see the connection.

As do I...I really don't get what you're trying to say here, can you restate?


That's property infringement, not violating speech.


There is a huge difference between your website in which you post in your name, and Facebook which was created as a platform to allow billions of people to post in their name.


Facebook, YouTube, and Twitter also have as their primary purpose the propagation of information and messages by and among their users. It's not some incidental or secondary feature. It's what they do. And they promote this to the public with almost zero barrier to entry. Yes they are still private entities, but like a hotel that is open to the public, they become constrained in the restrictions they can impose since they have chosen to be in that business.


What is the alternative to a private company making their own decisions on what to allow on their platform? A blanket order to not ban any content whatsoever?


Free speech isn't just an amendment for the government, it's a principle for society.

Please point out a single society that has upheld absolute Free Speech as a principle for any significant period of time.

The United States has even killed its own citizens when their free speech collided with national security: https://www.nytimes.com/2015/08/30/magazine/the-lessons-of-a...


Yelling fire in a crowded theater and calling for the killing of Americans are not protected by the First Amendment and never have been. They are calls to violence. So no, his free speech didn't collide with national security; his call to violence collided with the full weight of America's call to violence against him.


"Shouting fire in a crowded theater" is a quote from a Supreme Court decision that put American citizens in prison for the crime of protesting against the draft during WW1.

That decision has since been overturned, thankfully. But it would be good if people who keep coming back to this phrase remembered where it came from, and what it was already used to justify.


Good old Chomsky. I really like his views on this topic. However the question leads down a rabbithole.

Absolute freedom is a contradictory concept. How can you kick an unwanted visitor out of your house because you don't like what he's saying? Aren't you infringing on his rights? Someone needs to draw a line.

So what about a social media oligopoly? How dominant does Google need to be before you have to tell them to allow freedom of speech.

What about network neutrality?

Etc etc...

It's not so easy to answer this question.


It's a centuries old debate initially had out by Hobbes and Rousseau.

Hobbes' famous phrase from Leviathan, "bellum omnium contra omnes" (the war of all against all), describes absolute freedom as being problematic indeed. Rousseau provides an answer in The Social Contract: you can't have a war of all against all; you instead need the state to be responsible for being able to imprison people and collect taxes from them, and this alleviates (generally speaking, to a large degree) the problems absolute freedom presents.


He's not talking about absolute freedom in general, though. Only about absolute freedom of speech.

And there's no paradox of tolerance with speech - you can tolerate intolerant speech just fine, so long as you police the actions.


> How can you kick an unwanted visitor out of your house because you don't like what he's saying?

This is unrelated. If it is your home, you are free to protect it by any means.


There's a difference between supporting someone's right to speech and handing them a microphone and gathering up an audience for them.

There is some room for human judgment and give and take here, at this time there is absolutely no shortage of platforms that allow anyone to broadcast any awful thing they can dream up. Email, web sites, street corners, I believe there can be different standards for different places.


There's no shortage of street corners to protest on. Why are you mad that we pepper spray you when you try to protest in front of the bankers' offices?


That's all fine and good until somebody burns a cross on your lawn. Pretty sure you're not going to be happy with their free speech rights at that point.

Free speech absolutism is naturally at odds with property rights absolutism. The question then becomes under which circumstances should one be given precedence over the other.


This.

A big problem is oligarchic in nature: if you believe that ISPs should be classified as common carrier utilities that should not be permitted to limit or censor the data you receive within reason, then the oligarchic content hubs should similarly be common carriers of speech.

I don't have a rock solid argument on this, this is just an equivalence I'm asserting.


ISPs are utilities because you only have one option for ISP (running a wire to your house). Content hubs take less than a second to switch, there is no lock in.

The lack of ability to switch providers is what makes it important to have regulations, not just the fact that it is labeled a common carrier.


There is a public lock in. You can switch to another content hub like you can move to a new town for a different ISP. I think that's at the heart of disagreement here: do these content hubs count as public spaces for unregulated communication, or as private spaces, regulated at the will of the owner of the space? For many, due to the population present, other content hubs are not the same thing as facebook/youtube/twitter.


What lock in? There is no friction in typing in a different website's address. Just because other people use Facebook doesn't mean you're locked into that. People watch different TV shows, different movies, read different books, this is no different.

No one is stopping you from making your own forum or website.


> What lock in? There is no friction in typing in a different website's address.

The fact that all your photos, content and friends are on one platform and not the others, obviously.


That you chose to put there, on someone else’s computer. Other than you and your friends’ and family’s willingness to do work, there is no lock in, and I don’t believe that’s a useful way to define lock in.


So the fact that in the 90s you put your sentences into Microsoft's proprietary .doc format, or that you decided to run your business on Microsoft Windows computers is not lock-in either? What definition of lock-in would you propose that would include the above which is widely regarded as lock-in, but exclude FB as lock-in?


The utility of the forum comes from other people. Social graph is absolutely a means of lock-in, and it's long overdue for us to recognize that.


Facebook and YouTube are private corporations and have no obligation to platform any idea. They have the freedom to remove any content on their platforms they like, and your freedom of speech doesn't trump theirs. Of course, you can continue saying whatever it is you like -- you just have to find a platform for it yourself.


Essentially. A person's freedom of speech doesn't imply curtailment of another person's freedom of the press.


... and yet, child pornography possession and distribution is illegal. Do we think Chomsky would find this unacceptable?


I suspect that Chomsky would consider child porn - or at least the part of it that is illegal - not speech in this sense.

Speech is the process of communication ideas. Porn is the process of sexually arousing someone remotely. Criminalizing the latter is not criminalizing the former.


You do end up criminalizing numbers though.


There has to be a point where "criminalizing a number" doesn't really work.

An HD-DVD master key is one thing, a huge binary file is another. The HD-DVD master key requires specific "numberness" to work, while a binary file can still convey similar information yet be a completely different set of numbers. If I take two pictures in rapid succession of a still life, the two pictures' jpeg data and checksum hashes aren't going to be even close to matching.
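For illustration, here's a minimal Python sketch of the avalanche effect this comment is describing: two byte strings differing in a single character (standing in, hypothetically, for the JPEG data of two near-identical photos) produce completely unrelated checksum hashes.

```python
import hashlib

# Two byte strings that differ only in their final character, standing in
# (hypothetically) for the JPEG data of two near-identical photos.
frame1 = b"still life photo, frame 1"
frame2 = b"still life photo, frame 2"

h1 = hashlib.sha256(frame1).hexdigest()
h2 = hashlib.sha256(frame2).hexdigest()

# Count how many hex positions happen to agree; for unrelated 64-char
# digests this hovers around 4 of 64 purely by chance.
matching = sum(a == b for a, b in zip(h1, h2))

print(h1)
print(h2)
print(f"hex positions in common: {matching} of {len(h1)}")
```

This is also why hash-based blocklists only catch exact copies: re-shooting or re-encoding an image yields an entirely new hash, which is the distinction being drawn between a master key's specific "numberness" and arbitrary binary data.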


Chomsky’s position is that the burden of proof is on state violence to justify itself, and I think he would agree it’s clearly justified in that case.

However, that’s a very narrowly scoped restriction (and it’s also not clear that such distribution is a type of “speech”). A more appropriate question would be: would Chomsky support free speech for an advocacy group such as NAMBLA, or for creating mangas that depict drawings similar to CP? Those are exactly the type of speech/expression that are clearly vile, yet should still be protected from interference by state violence, from the civil-libertarian perspective.


>No company or platform owes them anything.

This is discrimination of opinion, strange as that sounds.

There is a reason we don't see signs up like 'No Blacks', 'No Gays', 'No Jews' in private establishments. It's because it's not allowed; we either have a list of protected groups, or a general rule that forbids banning anyone because of their 'group'.

If we don't fairly quickly extend free speech into these platforms, then we will have to eventually legislate a list of protected opinions.


It's the Civil Rights Act of 1964 and it has its limits just like everything else. Namely employment and "places of public accommodation". If you read the SCOTUS opinion on Masterpiece Cake Shop you will see "public accommodation" mentioned a LOT.

So, I'm certain you will find legally hung signs with racist and discriminatory content in certain places.

> If we don't fairly quickly extend free speech into these platforms

That's outside the scope of the First Amendment. It may not even be possible to pass such a law without infringing on those platforms' First Amendment rights... I would expect the SCOTUS to strike any such law down.


> There is a reason we don't see signs up like; 'No Blacks', 'No Gays', 'No Jews' in private establishments.

Could you imagine the immediate public backlash against this type of behavior by a private establishment? I don't think the law is doing the heavy lifting here.


The supreme court heard a court case in 2014 in which a business refused to serve people because they were gay: https://en.wikipedia.org/wiki/Masterpiece_Cakeshop_v._Colora...

A large number of people are still alive back from when Jim Crow laws were still actively in effect. In 1957 the president had to send the 101st airborne to Arkansas to escort black children to school and seized control of the state's national guard because they were being used to block desegregation of schools: https://en.wikipedia.org/wiki/Little_Rock_Nine

A non-trivial percentage of Americans would love to have these signs up and are kept from doing so by the law. The women's movement, LGBT rights, and minority rights are actually a fairly new thing, and are still not without controversy in the US.


The cakeshop very much did not "refuse to serve gays", and really there is absolutely no self-consistent way one could agree with FB deciding that certain content is "racist" and banning it while insisting that Masterpiece Cakeshop is in any way obligated to make wedding cakes for someone whose wedding plans they do not agree with.


I'm not speaking about Facebook at all, never mentioned Facebook anywhere in my comment or in this thread, and have not made any statements about the validity of speech restriction in any context so I'm not sure why you're mentioning that. I'm speaking to a general issue brought up by the parent.

As for your comment, the cake shop has a product they are letting people buy (in effect, serving them). They are willing to produce that product for a certain group of people for a certain activity (celebration of a state recognized marriage) based on their identity. They are not willing to produce that product for another group of people for the same activity (celebration of a state recognized marriage) based on that second group's identity. Now you can argue that the shop is within their rights to do that for religious reasons and the Supreme Court would agree with you, but it's still refusing service based on identity and you know it. If you want to split hairs and say they "refused to provide COMPARABLE service because they were gay" as the buyers may have been able to buy other products then so be it.


This is a thread about Facebook, isn't it? And the parent, or GP by now, very much alludes to it, or to platforms in general.

In the bakeshop case, though, I do not think it has ever been alleged that they refused to serve anyone cakes just for being gay. I think the whole case hinged on the question of whether making a wedding cake is significantly more than just selling a cake off the shelf to anyone who comes in (I am really not sure if, as a matter of principle, private entities in a free society should be compelled to do business with anyone they do not want to, be that marrying gays or cynical coders who go by "Fins"). So no, I don't think saying that they "refuse to serve gays" is a proper characterization.


May I suggest reviewing the history of the civil rights movement? That sort of discrimination used to be normal in large parts of the US and was overcome through a combination of civil disobedience and legal challenges.


There's plenty of places where there would be zero public backlash for those types of signs, if they were permitted by law.


Lots of country clubs are still white exclusive. Private christian schools still exclude gay teachers.


Saw a plumber's vehicle driving around recently with a blatant anti-Chinese sticker on it.


> Its because its not allowed, we either have a list of protected groups - or a general rule that forbids banning anyone because of their 'group'.

It's the former. There is no law, at least in the US, that forbids banning people from private property for group membership in general.


Based on the evolution of the web, indeed, it looks like new businesses will be made out of this.

With this banned group, I am concerned about what social scientists call the 'backfire effect', where beliefs turn even stronger and bolder. For instance, this happens with groups of people who hold conspiracy theories; when society rejects them, the conspiracy theorists double-down on their beliefs.


The key thing you're missing is recruitment. White supremacists need access to the larger platforms in order to recruit new adherents to their bigoted ideology, which is significantly harder to do if they're forced into "underground" spaces. De-platforming works -- if we make it clear that hateful ideology isn't tolerated in large platforms, then people have to actively seek out such ideologies. If such ideologies are already frowned upon in the public sphere, it makes their recruitment much more difficult.


They existed long before the internet and will exist even if they’re banned from it. When it comes to racism, a lot is spread through family and friends. Get to know certain people long enough and they’ll do the old ‘look both ways and say “they don’t want you to know this, but the truth about (x group of people) is...”’ routine.

Ban what these clowns say on the internet and it might end up giving them more power, since it lends to the “they don’t want you to know” aspect.


All true, but it's a fact that they recruit through the internet and have had a digital organizing strategy since the 1980s. The armored car robberies mentioned herein were used to finance a nationwide chain of bulletin boards known as Aryan Nations Liberty Net.

Recruitment matters because groups that don't recruit successfully eventually lose momentum, are penetrated by spies, or dissolve into infighting over everything from leadership status to money to women. In fact, the biggest mortality risk for RW extremists seems to come from their own peer group, though I don't have stats for this offhand.

Certainly those attitudes and ideologies will continue to exist in the face of deplatforming, but raising the costs of recruitment, retention, and operation seems to be an effective strategy.

https://en.wikipedia.org/wiki/The_Order_(white_supremacist_g...

https://www.adl.org/sites/default/files/documents/assets/pdf...


Totally agree; recruitment will be much more difficult. It's what happens to many conspiracy theorists over time: their recruitment takes a hit in society.


The purveyors of such ideologies simply do a wink-wink-nudge-nudge dance.

Look at Germany and its Streitbare Demokratie policies. Then look at how AfD polls.


Right – but then consider what its popularity might be today if AfD-aligned content had been banned by Facebook and other platforms from the start. Parties like the NPD have been around for ages, but unlike the AfD they were kept out of the Overton window and never managed to grow to significant numbers.


And how exactly would you define it in order to ban it? Germany already has some of the most extensive hate speech / anti-extremism laws in Europe. I mean, it's not just censorship, but e.g. outright bans on parties expressing certain views in their platform, or even on parties that are just insufficiently democratic internally, with various specialized institutions that enforce all that proactively. And all that wasn't enough to prevent AfD from operating legally.

Facebook can just ban them by name without any explanation, of course. Are you suggesting that it's a good thing - that, because we seemingly can't write fair laws on this, we should delegate both the laws and their enforcement to private industry, where it can be done without any checks and balances?


I think you're right. Every time you censor something you create market pressures for a way to host content that's more difficult to be taken down or traced back to a source.

It's why I think censorship is self defeating, it will eventually lead to a populace being competent enough with certain tools to side step the censor, and you can't walk that back.


Good luck to them


I tend to agree.

I think the main argument that can be made against this is that the Internet is now so complex that forbidden speech can easily be silenced entirely, even if you start your own blog independent of FB. With DDoS attacks especially, it can become prohibitively expensive to stay online, and the core/root internet infrastructure simply doesn't stop them. Mitigation is basically a paid add-on, and if you want to do it yourself you will need funding and/or technical expertise.

Either way, I’m still in support of Facebook and others removing this type of speech as I think it’s a net negative to society to amplify it in this day of discovery algorithms.

Seems like we also need to do more to improve the core infrastructure so that you don’t need 5Tbps of bandwidth to keep your blog online.


"But that runs both ways. Not only do I think that almost all (to borrow a mathematical term) speech should be permitted, but also that the onus is on the speaker to make themselves heard or to find an audience. No company or platform owes them anything."

Thank you - that is very succinctly put and is a reasonable and actionable standard that I hope more people will subscribe to.

That being said, I can't imagine how difficult and frustrating it will be for facebook (and others) to police any kind of definition for "white nationalism" which seems very vague and open to all manner of misinterpretation.


The paradox of tolerance is that we must always be intolerant of the intolerant.

You are always FOR free speech until the speech starts hurting you. I.e., would you be tolerant of ISIS posting on FB?


Yes, so long as they aren't breaking the incitement to violence clause.


The problem is, by being intolerant (even of the intolerant), you're intolerant. So if the "paradox of tolerance" is accurate, then we must not tolerate you.

"But", you say, "they were intolerant first, so they are the ones who should not be tolerated!" From your perspective, sure. Maybe not from theirs, though. That gets you to a cycle of each side being sure that the other is intolerant, and therefore that their own intolerance is justified. Maybe this "paradox of tolerance" isn't the right way forward, even in the face of intolerance by others. You should be better than them, not imitate them.

Worse, if each side can think that the other side started it, well, what if the other side is right? What if you actually are the problem? Using the "paradox of tolerance" as your justification is a handy way to justify your own intolerance, and never have to examine yourself.


Neo-nazis and their ilk tend to be pretty explicit about wanting to kill specific groups of people as part of their ideology, something which your hypothesis seems to overlook.


[flagged]


Those who are actually observant rather than just snarky might notice that I said no such thing.

What I actually said is, first, when you don't tolerate intolerance, you become (at least in technique) similar to the people you are fighting. And second, don't be so sure that you aren't the bad guys. And the more absolutely sure you are that you're the good guys, the more I want you to back off and take a really hard look at whether that is actually true, because the bad guys can use exactly your logic (or at least gameskrishnan's) to convince themselves that they're the good guys.


[flagged]


> What utter garbage

This opening, along with your straw man in the second sentence, shows that you're not debating this topic in good faith.

It would be nice to think that every horrific event in history was simply a case of bad people knowingly doing bad things.

Sure, sometimes it is, when we're talking about a lone actor or despotic leader who just wants to bring about destruction and misery for their own pathological reasons.

But at least as often, everyone thinks they're the good ones fighting for a just cause. As has often been said here before, one person's terrorist is another's freedom fighter. One person's liberating saviour is another's tyrannical occupying force.

It might be easy for us to look at 1930s-40s Europe and judge which side was right and which was wrong, but to an ordinary German or Italian citizen or soldier who didn't have a full understanding of what was going on, and just knew that their country had been economically depressed for too long, it wasn't so clear.

And in plenty of other major conflicts since then, the rightness or wrongness of each side is very hard to discern.

This is not suggesting that racism or bigotry is ever to be endorsed or validated.

But bigotry is usually an expression of a deeper distress that has nothing to do with the target of the bigotry. It's often just scapegoating for a largely unrelated problem.

By paying attention to what the real source of that distress is (e.g., post-WW1 misery), it's possible to respond to the real problem and prevent the bigotry and consequent horrors from developing at all.


No, it's objectively garbage to take yourself out of the equation and accept fascism like it's not going to hurt anyone. I'm not going retrospective here and saying my judgement would have been perfect 80 years ago. I say we need to learn from past developments and mistakes made over the course of history. Stop abstracting away the problem at hand, which is White Nationalism and other fascist thought becoming more and more acceptable.

> By paying attention to what the real source of that distress is (e.g., post-WW1 misery)

You think you have it all figured out and you 'get it' now. That's naive in my opinion. And I'm not talking about good and bad, some kind of universalist dualism that informs my every action; I'm talking about being concerned about my friends that may or may not fit certain views on human existence that fascist thought provides. At least try to get where I'm coming from instead of throwing out useless internet jargon ('strawman') without even understanding why I'm reacting so harshly.


> You think you have it all figured out and you 'get it' now

I've spent over 7 years undertaking a variety of formal approaches to deep personal introspection, and I've never felt less like I can, or anyone can, "have it all figured out".

But one thing I've learned a thing or two about is the tendency to ascribe the greatest threats and wrongs to some easily-identified "other" without paying adequate attention to the failings and risks in our own group and within our own self.

For what it's worth I spend vast amounts of time thinking about what kinds of approaches to eradicating bigotry and oppression might actually be effective, and what role a humble individual as myself might be able to play in moving the needle over the long term.

To suggest that this is a matter I believe is "not going to hurt anyone" is about as wrong as you could possibly be.


This alleged "paradox" is just a convenient sledgehammer with which those in power can hit the powerless over the head to keep them quiet.


No, it’s a solution to the problem that if you are tolerant of everything someone will eventually whack you with a sledgehammer. To avoid that you don’t tolerate those who are violently intolerant, which in the US takes the form of the “Imminent lawless action” doctrine.


You only need to be intolerant of people whacking others with sledgehammers to avoid that issue. There's no problem with tolerating speech in your example. In general, the problem with anybody who is "violently intolerant" is the "violently" part. So when speech is a direct call to commission of a crime, or a conversation in conspiracy to commit a crime, we do not treat it as protected - but it's not because of violence of the speech itself.


That, actually, has nothing to do with that BS "paradox" and well predates it. Indeed, you can't do illegal things, and inflicting violence on others is generally illegal. What the "paradox" is used for is to silence those with inconvenient opinions -- if you've followed discussions of the Damore affair, the "paradox" has been trotted out as justification for "punching him in the face" multiple times. But on the other hand, to use the simplest example, the US is quite tolerant of speech (in the 1st amendment sense) and hasn't turned into a communist or other dictatorship. The USSR, China, and Nazi Germany, on the other hand, did not tolerate any "intolerance" -- that did not turn out too well.


> Indeed, you can't do illegal things, and inflicting violence on others is generally illegal.

If someone is consistently threatening violence at some point you need to make the determination about how to deal with that. You can of course do nothing about it but if the person eventually goes ahead and commits violence you now have a bunch of injured or dead people, which seems a worse ill to me than deplatforming.


You think deplatforming is going to stop him from killing people? It might make it more likely - he may convince himself that the deplatforming is part of the great conspiracy against normal people, and he has to strike a blow against it. (On the other hand, it makes it harder for him to convince others to join him.)

Don't deplatform him. Investigate him. Get a warrant and find out who's listening to him. Find out what's going on in their private communications - are they making actual plans, or are they just talking?

Deplatforming is about the laziest and least useful possible response. As Fins said, deal with the violence (including plans to commit it).


Deplatforming and investigation aren't mutually exclusive, not sure why you'd think so.


If someone is threatening to commit violence, there are already perfectly effective legal ways of dealing with it. If anything, if you do deplatform them , noticing that speech becomes significantly harder, while they become more radicalized (and if Zuckerberg did not exist, anti-semites would have had to invent him; but he does exist).

To go back to the original article, though, it is interesting that they are banning white nationalist content. Does it mean that any other nationalist content is just fine. And can I start an Even[0] separatist movement on FB, calling for eradication of the whitey? We'll start with Russians, of course, but after them we're coming for you! /s

If I didn't have better things to do, a conspiracy theory of FB (they did elect Trump, as everybody clearly knows) being in cahoots with Stormfront and specifically calling out whites to stir resentment almost writes itself here.

[0] https://en.wikipedia.org/wiki/Evens


Are you advocating for Facebook being legally barred from banning any content on their platform that isn't explicitly illegal? That's the only alternative I can think of.


Actually, I am for Facebook having the right to ban anything and anyone from their platform. But that does not mean that doing so would necessarily be a good, or right, or moral thing to do.


I think your attitude is still extremely mild. It doesn't matter if we believe ourselves to be neutral, there are groups that consider themselves our opponents, and will work against everything we stand for. And, unless we decide to fight back or at least stop helping them to promote their agenda (which is often incompatible with values widely shared among secular / educated communities), they will prevail in some manner. See also: 2016 U.S. election.


> to promote their agenda
> ...
> they will prevail in some manner.

As it takes two to tango, so does it take two sides to promote something: one that is doing the promotion and the other that is following the promoter. If the argument is that people are so fickle they will eat up whatever is being directed at them, join the ranks of the promoters, who will thus prevail, then woe upon us. I prefer to think of ourselves as somewhat more capable of resisting insane and malicious messages.


I see what you mean, but it does seem different, because FB or YouTube are not publications but public forums, public communication spaces where you publish as a person. Things published on YouTube are not published on behalf of YouTube. Basically, FB and YouTube and things like them form a space that content fills in.


> Things published on youtube are not on behalf of youtube

Tell that to people using YouTube. YouTube will end up being inextricably tied to whatever content is uploaded using their platform as long as they are the ones who manage its display and recommendations.


I think a better retort would be "tell that to 4chan".


I agree with the spirit of this totally, if I run a coffee shop I sure as heck would kick out white nationalists and would support other coffee shops having that ability.

There’s a weird side to this, though: if you want to host a video, YouTube is the place for it. If you want to organize a group, Facebook is THE place for it. Well, what if your group is about shutting down Facebook? Facebook could close it. Heck, FB took down Elizabeth Warren’s post critiquing FB.

I think these mega corps need to be broken up so they don’t get so much power in deciding what is and isn’t allowed on the internet. Sure they make good choices like this one but they also make tons of other choices that are only because it benefits them.


I like your comment; I only disagree with the word "spirit". I believe this policy is a slippery slope. No one likes these groups, but alienating them creates a precedent/policy that legitimizes removal of the one element of our existence that allows for our individuality (spirit). Silencing that which we loathe is failure to embrace our own capacity for loathsome behavior. We all like and dislike different elements in our lives. I am a married man; is this not discriminatory to the rest of society? When a corporate entity profits this much off the indoctrination of the public, it should be obligated to enforce the tenets of said republic.

“It's an universal law-- intolerance is the first sign of an inadequate education. An ill-educated person behaves with arrogant impatience, whereas truly profound education breeds humility.”

“Gradually it was disclosed to me that the line separating good and evil passes not through states, nor between classes, nor between political parties either -- but right through every human heart -- and through all human hearts. This line shifts. Inside us, it oscillates with the years.”

― Aleksandr I. Solzhenitsyn,


  In earlier times, that meant that no newspaper
  or printing press was required to print 
  everything sent their way. Today it's the 
  centralized websites that also don't have 
  this obligation.

  ...

  You are not entitled to widespread 
  distribution. Never have been.
This sort of pointing-to-the-past-to-pave-the-future is disturbing to say the least. We don't apply the received wisdom of the past blindly to issues of the present much less the future.

All else being equal, the history of what one was allowed to say (and what one was not) in newspapers, radio, or television is rife with examples of heavy-handed editorializing, gatekeeping, outright bias and favoritism, and misguided attempts to selectively allow only those truths judged fit to be heard by the masses.

Those were the dark ages for the dissemination of information.

Do we really want to go back to those ages or even a shade resembling what was considered acceptable practices back then?

Heck, most of that stuff happens unimpeded in large parts of the world today, as I write this.

Do we want to tell those countries and regions that it's okay to sieve and favor what's allowed and what isn't in their public discourse, if a substantial majority of their populace finds it morally repugnant?

Won't those places and people, then apply their own standards of morality in that judgment, which could be diametrically opposite to ours?

Is this the example we want to set for the world to follow?


So what are you advocating? That we force private entities to publish speech, and allow no recourse to enforce social norms through private action, only through government? That seems far more dystopian than the past that you are speaking out against so strongly.


How about we force private entities in a dominant market position in any kind of information-distribution market to be common carriers?


Counterpoint - if every media outlet, platform, or provider blocks the content, and the only audience you're allowed to speak freely to is yourself - is it still "free speech"?

I'm not saying we're there yet, but we could be at some point.


You literally have the right to go stand on an actual soapbox outside your government offices and yell at people. That's what's constitutionally protected.

No one has to pay attention though.


There are many arguments that can be had either way, but the one that always resonates with me is this:

Once you ban speech on any ground that isnt directly putting people in danger, even real nazis for example, they DO NOT disappear. They still recruit. They still operate. You just don't see it.

The people they recruit are suddenly in a bubble where they DO NOT see the opposite view point. Their opinions become unbalanced and they become radicalized. You now have more hard-core nazis.

In this case, I believe FB does this for 3 reasons:

1. Many people inside FB believe in censorship, and that forbidding people to speak or think makes the problem disappear. Many people in tech actually believe this.

2. FB sees this as good marketing, and good for retaining eyeballs on their platform, especially vs. the "we reached out and FB refused to ban nazis, FB is siding with nazis" narrative.

3. FB wants to control speech so that they can ask for money when you want to be heard. Effectively, this means "media company X pays money so that their message appears more often". The more you control speech on the platform, the more you can push for sources of content that will actually pay you money.

Of course, these are just my opinions, and stuff.


> 1. many people inside FB believe in censorship and that you can forbid people to speak or think and that it makes the problem disappear. Many people in tech actually believe this.

That sounds like you're making many assumptions and implications that aren't borne out by evidence. Why do you leap to this conclusion rather than:

"Many people inside FB believe that they don't want the platform they've built and created to be a platform for views they feel abhorrent, and if this means that those views need to feel an alternate home, that's not their problem"?


I am no longer a free speech absolutist. And the reason I am not is the sort of abstract, angels-on-a-pinhead bullshit arguments that fill the responses to the parent post. The more it can be abstracted to "free speech", the less oxygen there is for discussion of the real harm caused by white supremacist thought and white supremacist actions.

I put the safety of society at large first. And if you think that's oppressing you, maybe you should ask yourself why.


Because the safety of society at large, especially if we look at the perception of it rather than the real goals, can translate directly into oppression of those who are perceived to be a threat. The whole "build the wall" thing is pitched as being about safety of society, for example. There are people in this country - including prominent elected politicians representing party that dominates state legislatures, and that has broad popular support in most those states - who sincerely believe that BLM and CAIR are terrorist organizations. There's that whole "BDS is anti-Semitism" thing, and some countries already have laws on the books that make it a felony to participate.

Would you say that the political and judicial system of our societies is fair, or is it biased against minorities? If it's biased, why do you expect it to enforce any such "safety of society" laws in a way that is not similarly biased towards the groups that already are the beneficiaries of that society?


Who do you propose gets to decide what speech is dangerous?


That's a disingenuous question. Let me come back with some questions about your question.

Are you proposing that nobody can decide what speech is dangerous?

Are you suggesting there is no such thing as dangerous speech?

Are you suggesting that having someone decide what speech is dangerous is actually worse than the dangerous speech itself?

If you're not suggesting any of these things, then how do you get out of the proposition that someone has to decide?


It's not disingenuous - it's the most fundamental question that needs to be answered if you propose speech be controlled.

> how do you get out of the proposition that someone has to decide

I agree ultimately some subset of people would need to decide. My question is who?


You clearly have an answer in mind, since you've already conceded the core argument. Who do you think? I'm good with the discourse between the public and Facebook leadership insofar as it targets widespread social norms and ignores the few loudmouths.


> Are you suggesting there is no such thing as dangerous speech? Are you suggesting that having someone decide what speech is dangerous is actually worse than the dangerous speech itself?

How is that working out in the People's Republic of China? They have a very concrete definition of what dangerous speech is, and legions of people to decide whether individual instances are dangerous and to remove them.

Clearly, their system illustrates the hazard of your viewpoint.


Ah, but does it? China is currently going through a period of peace and incredible growth (per capita income has grown about 130x since 1960, in constant dollars). The people are safe, secure, and enjoying opportunities their grandparents couldn't even imagine.

And I suspect that if you polled in China, you'd find a majority agree with the regulations on speech, and sincerely so (not saying yes out of fear). Meanwhile, things are much freer than they were. I'm currently reading a Chinese novel, The Three-Body Problem, set partly in the Cultural Revolution, and it pulls no punches about how awful it was.


> And I suspect that if you polled in China, you'd find a majority agree with the regulations on speech

Of course you would, polls like all speech are tightly controlled.


Even here, on an American-centric discussion full of grossly privileged white American men, you'll find most agree on some regulation of speech. It's just a question of degree.


I guess I am remembering it wrong, but I thought sticks and stones were dangerous but words could never hurt me. (Calls to action don't hurt me either, but I can see how one could categorize them as dangerous due to the resultant action.)


Words inspire action. That's why it matters. To paraphrase the Supreme Court, the problem with shouting fire in a crowded theater isn't the shout, it's the dangerous stampede that follows.

And those who deliberately inspire violent behavior with their speech are very, very careful to make sure it's not an explicit call to action. This is not new. "Will no one rid me of this meddlesome priest?" has been around for a long time.


We do, collectively, as a society.

Sometimes we'll get it wrong, and need to correct it. That's fine. We're capable of that.

What's not fine is throwing up our hands at even the concept of performing that judgment, because it's subjective or whatever. That cedes the public square to extremists, which results in a shittier society.


And observational data suggests that American society (and global society) has gotten better, not worse, at this judgment.

When we started this game, we had slaves, who were defined as three fifths of a human being for purposes of political representation of slave owners. Voting was restricted to white land-owning men. It's getting better.


We as a society decided homosexual advocacy was dangerous speech just decades ago. We were so enlightened then, right?


'Let's do gay sex' vs 'let's kill people in group X' seems like one of those apples and oranges comparisons people are always talking about.


This comment goes against HN guidelines.

"Be civil. Don't say things you wouldn't say face-to-face. Don't be snarky. Comments should get more civil and substantive, not less, as a topic gets more divisive."

"Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."

Consider this a warning, as we eventually ban accounts of repeat offenders.

https://news.ycombinator.com/newsguidelines.html


No, we got that one wrong. But, now we're on the path to correcting it. This is good and just and how things should work.


What's wrong and what's right? Who defines it? When? And why?


So, popular vote? Let our representatives decide? Something else?


Our representatives. Our judicial system. Our public institutions. Our private businesses.

You're WAY too hung up on this "who decides?" question, ignoring the reality that either someone has to decide, or that no one should decide and society should simply not regulate speech in any way, ever.

Good luck with the latter if you ever have kids.


If we do get there, I hope we can ban the most dangerous speech of all - religion.


Ah, but the devil is in the details.


And you're still dodging the fundamental questions I asked you a while back. You're just playing devil's advocate in the details. It's a lame and boring argument.


All of the above, and more. Organizations like Facebook can decide to deplatform white nationalists, and we can vote with our clicks or whatever in response to that. You and I can get on message boards and advocate for what we think is just. Over time we all shape our society in our image.


You're right. The correct solution is for everyone to stop using the services provided by companies when they begin the downward censorship slide. Unfortunately most people value using Facebook in the >$1000/year range.


Similarly, Fox News isn't required to carry content they don't want to, and their political reach is enormous.


And CNN, MSNBC, Vox, HuffPost, Slate, The Guardian etc. are not required to carry content that doesn't align with their political agenda. I'm not sure I understand your point?

With media outlets there are plenty of alternatives. There are no direct alternatives to Facebook that provide the same services at that scale and reach.


Why do scale and reach matter?


I agree. I've always been in the camp favoring freedom of speech and the marketplace of ideas. My thinking became more nuanced when I read the book Twitter and Tear Gas by Zeynep Tufekci [0].

Changes in technology alter the playing field for culture—and we've seen this in practice as the Internet and social networking have caused a fundamental shift in what it means to censor information. Censorship used to mean putting the newspaper under state control or blockading the radio station to prevent the message getting out.

Today it's much harder to censor a message by completely suppressing it. But it's also much easier to swamp the signal in noise, by flooding the communication channel with disinformation and distractions [1]. A group doesn't need to be large or powerful to muddle the truth of an issue, especially after accounting for the effects of filter bubbles and confirmation bias.

So to preserve the marketplace of ideas, we need more nuanced approaches in thinking about what communication is permitted. One approach that has been gaining support is making a distinction between "freedom of speech" and "freedom of reach" [2].

[0] https://www.twitterandteargas.org/

[1] https://www.wired.com/story/free-speech-issue-tech-turmoil-n...

[2] https://www.wired.com/story/free-speech-is-not-the-same-as-f...


The town library is not required to carry every book. Librarians make curation decisions; that is what we pay them for. It is still a tragedy when small groups of activist Puritanical community members get enough political leverage to remove the books they don’t like. The reason it’s a tragedy is not unrelated to “free speech.”


One might argue that the wide reach of platforms like youtube or FB or twitter makes them almost like root infrastructure.


"If FB decides to ban bigots, or YouTube wants to kick anti-vaxxers off their site, so be it"

Would you take a similar stance if Facebook banned LGBT content or YouTube kicked atheist videos off their site? I am not saying you wouldn't, but both your examples seem to be a bit one sided.


"also that the onus is on the speaker to make themselves heard or to find an audience"

On social media, don't you need to find your own audience?


"... like DNS or network connectivity."

If I recall correctly, early in the history of the commercialisation of domain names (1990s), for a time the sole appointed registrar Network Solutions would not accept domain name registrations that contained certain "offensive" words.


> You can host your own video if you'd like, or maintain a personal blog site.

Making that argument would require Facebook to admit there's an internet beyond their walled garden. So far they seem happier to be grilled on national TV in front of congress than make that admission.


> * Which leads to a place where I'll agree on obligations: root infrastructure like DNS or network connectivity. Those are the common carriers of our Internet.

And payment processors, and credit card companies, and mobile carriers, and instant messaging providers.


I’d agree with all of those except instant messaging providers. The others involve massive capital investments to replace them at all; a small IM server can be run on a $5 VPS.


Do you think that a website like the daily stormer should be forced off the web? And should payment processors be required to allow them an account? (for server donations, etc).


> No company or platform owes them anything.

Precisely. I worked on a college newspaper. It was common, and had been going on for years and years in fact, that a specific hate group would send an order for advertisements, and threaten to sue if we didn't print them. They were usually Holocaust denial ads with anti-semitic conspiracy theories. We never printed them, but some schools inevitably did, failing to understand this aspect of free speech. Everyone gets to speak. But if you want a microphone, you have to buy or build it yourself.


Bad example. The college newspaper acts as an editor and has to review what content to publish before it's being published. The college will be held responsible if there's misinformation or something of criminal nature being shared.

FB presents itself as a medium of information for anyone, and they don't moderate what is being published before it's published. They do, however, have an obligation to take down content after the fact if it breaks the ToS or the law.

Here the controversy, as I see it, is about preventing a group from publishing once it's been labeled as "hate" or "white supremacist" content, where the labeling is very subjective and arbitrary.

Who's to decide the labeling?


> The college will be held responsible if there's misinformation or something of criminal nature being shared.

This is precisely the dynamic that is driving facebook. I don't think the comparison to a newspaper is perfect, but I do think it's apt. In both cases, neither venue is obligated to give anyone their own soapbox to stand on. That was my point.


The problem with FB is they want all the benefits of a platform without any of the drawbacks.

Or rather, it's the public perception of what FB should be and FB is stuck in the middle juggling with politics.

Obviously if you're a shareholder then you want to play it cool with the medias and portray a good image of the company.

I just think it's bad policy.


What if one company builds a loudspeaker that basically drowns out everybody else, just by virtue of scale alone?


I think based on the outcome of this action, it has the potential to do real harm to the internet.

If the current executive branch decides to get tough, they could withhold section 230 exemptions from the private companies.

The net effect would be to destroy these companies' platforms, as no company in its right mind would operate with that potential liability.


>But that runs both ways. Not only do I think that almost all (to borrow a mathematical term) speech should be permitted, but also that the onus is on the speaker to make themselves heard or to find an audience. No company or platform owes them anything...

This!

A thousand times THIS!

People seem to think that free speech absolutists were saying that Facebook should be forced to host extremist content. For my part, I wasn't saying anything of the sort. Facebook has rights too, and as a free speech absolutist, I will jealously defend Facebook's rights as vigorously as I defend the rights of the Nazi/Confederate/whatever you want to call them.

(I think I agree with you on the "almost all" part though, because I do think pedophilia should be stamped out wherever we find it. But that's just MY line in the sand. Others may draw the line somewhere else.)


You're misunderstanding (or deliberately misrepresenting) what free-speech absolutists are saying. Nobody is saying that Facebook should be "forced" to host any particular content, any more than anybody is saying that homedepot.com should be forced to host any particular content. What we are saying is that Facebook is no longer a platform for free and open exchange of ideas and is engaging in censorship - and it's open season to point out that everything that they _don't_ censor (e.g. calls for violence against politically unprotected groups) is something that they explicitly condone.


==What we are saying is that Facebook is no longer a platform for free and open exchange of ideas and is engaging in censorship==

When was this ever not the case? Violent imagery and sexual nudity are already not allowed.


>What we are saying is that Facebook is no longer a platform for free and open exchange of ideas...

???

It never was.

I couldn't watch porn movies on Facebook.


Yet you probably can find a public FB page for some porn distributor publishing updates about their new movies and stars.

Point is, even if you find the content questionable, it's still allowed to be shared, as long as it's within the laws obviously.


s/explicitly/implicitly ?

I mean this was the approach they were taking to White supremacist material until now, which was why people were giving them a hard time.


Facebook and others are not responsible for what their users upload and enjoy the legal protection that comes from it.

If they decide that they are going to be curators of content on their platforms, they are no longer a mere service provider, but a publisher. They might find those protections vanish, which would more or less destroy their platform.


Would you be similarly comfortable with ISPs or cell service providers censoring views they disagree with?


Why do they have the right to use their power to suppress viewpoint opponents, with 100% flexibility to claim an individual has wronged a group, with no evidence, no clear definition of the terms, and no recourse?

It seems like terms like this and hate speech often boil down to “people that are effective at calling out my bullshit”.

It is unethical to call republicans or classical liberals Nazis with no evidence, and then use that as a justification to abuse your power to shut them up. Why not make an argument instead?

On this topic I really recommend Frankfurt's [1] terse one-hour read arguing that there are three types of statements: attempts at truth, attempts at lies, and attempts to just bullshit people with no regard for truth. This kind of attempt falls in the third category.

[1] https://www.amazon.com/Bullshit-Harry-G-Frankfurt/dp/0691122...


It's quite dangerous to be allowed to be judge, jury and executioner.

FB has become too large and ubiquitous, they shouldn't be all of those.


Exactly, and I am surprised they don't know they should have a better process, one that feels fairer to their customers. This seems to be treating customers too logistically, without enough regard for your own potential mistakes and biases.


It is quite telling how this is getting downvoted with no argument. Always judge people on their actions, and especially how their actions impose upon others.

Edit: and then this is also downvoted with no argument. A better demonstration of why this facebook policy is a terrible idea could have not happened.


It's not a novel argument, it begs the question from the outset (boil down to “people that are effective at calling out my bullshit”), and ignores objective evidence about the content and aims of the speech in question.


Bullshitting is what many talking heads do on TV, YouTube and Tumblr to generate the content required for their job. Basically, truth-seeking has not done its job on it, so the person doesn't know if it's true or not.

People are right now most often in a biased content bubble, and emotionally mistaking what they learn from possibly bullshit content for objective fact might lead to injustice.

Most of us agree that people might say bad things and that it would be great if they didn’t, but my claims was that facebooks judgement of the content is unlikely to be objective because:

1) no clear definition of terms like white nationalism and hate speech is a recipe for bias. What liberal or republican hasn't been called alt-right or a Nazi at this point?

2) because of #1 there is a risk that a company that has a clear ideological tilt will most likely end up suppressing viewpoint opponents when the definitions are unclear, because too much is up to subjective judgement and no due process protects against these biases

3) no recourse is bad when things go wrong

In due process it’s the process that reduce bias so that we don’t get the mistakes of mobs of the past (eg people that think they are objective while not having an accurate mental model of what they judge), although it can’t eliminate it.

How do you think facebook has ameliorated these concerns so that they can be objective?


I'm just telling you why your earlier posts got downvoted. I'm not interested in how Facebook has ameliorated these concerns until I see how they do at achieving their stated objective.


It is the stated objective that is highly problematic, because of the lack of truth seeking and humility in how they pursue their objective.

I think another outcome for Facebook is more likely because they seem to be forgetting that they are not the ones creating Facebook, rather they are capitalizing on part of the activity in the relationships amongst their users as long as they can express something relevant to them on Facebook.

They are effectively attempting logical rationing with a lack of emotional meaning to their relationships, and overvaluing what one can learn from subjectivity without truth seeking.

As they said:

I met a traveller from an antique land,
Who said—"Two vast and trunkless legs of stone
Stand in the desert. . . . Near them, on the sand,
Half sunk a shattered visage lies, whose frown,
And wrinkled lip, and sneer of cold command,
Tell that its sculptor well those passions read
Which yet survive, stamped on these lifeless things,
The hand that mocked them, and the heart that fed;
And on the pedestal, these words appear:
My name is Ozymandias, King of Kings;
Look on my Works, ye Mighty, and despair!
Nothing beside remains. Round the decay
Of that colossal Wreck, boundless and bare
The lone and level sands stretch far away."


What about the post office refusing to deliver letters? Or the phone company? How about no heat for nazis from the gas company?

Is using the gas company's platform to heat your racist meeting place allowed?

Facebook is a Privately Owned Public Place more than it is a platform.


Thats fine, but how does the line blur when Facebook takes government money in the form of subsidies and political ads?


I think if they allow public signups then they should not have the ability to censor anybody, except by court order or to comply with the laws of the users region.

Because of course "anything I don't agree with is Hitler" is a problem and an inevitability. It's fine if they do not allow public signups and everyone whose content they publish is an employee or under contract with the company, but Facebook, Twitter, etc. are different animals, and I really don't see how anybody supporting free speech can support Facebook inhibiting the speech of certain people.


By doing that you're effectively turning these websites into 4chan. Can you imagine a website like Hacker News if no moderation was possible, nobody could get banned and nothing could be removed without a court order? You'd just have every thread spammed by bots.

That's the problem with unmoderated online discourse, cheap inflammatory posts are easy to make and draw a lot of attention while in-depth analysis takes time to write and to read. Why bother coming up with a clever retort if you can just spam "no u" and "fuck off" until those people are driven away?

In my experience no moderation doesn't result in more varied opinions, instead it turns into an echo chamber where the bullies take over. Again, look at 4chan or some of the most popular, less moderated subreddits. It's pretty crap.


The ideal state of affairs is when you can have multiple platforms, moderated in different ways as they compete for customers in a free (i.e. competitive) market, where such moderation, or lack thereof, is a part of their pitch to new customers.

But we let companies monopolize whole segments of the market. And then every such segment becomes a vast echo chamber of whatever the prevailing public opinion at the moment is.


I don't understand your position. The 1A is fulfilled—in word and spirit—to your satisfaction so long as government puts up no barriers to speech, even if private, quasi-monopolistic firms filter speech to their own ends?


His position was that government should not stifle speech, but that a private company should not be forced to re-print it. There's a big difference between prohibition/regulation and coercion of speech.


I think you do understand it, yeah. The only caveat to that is the private control of what I believe to be public infrastructure. The wires that connect all of us.


What makes the wire public infrastructure and facebook not? What would facebook have to do to become public infrastructure, or comcast have to do to become not?


Clearly not a definitive answer but some elements to think about:

Facebook's business consists in promoting some of the content their users put online (the content that earns them money), so they cannot claim to be a neutral carrier.

Comcast cannot be asked to filter content since what they carry is encapsulated at least and encrypted most of the time.


Comcast can't do anything to no longer be public infrastructure. Rather, municipalities would have to rework their regulations to allow anyone that follows a defined process to install the same infrastructure that Comcast has.

Likewise, Facebook would become public infrastructure when the government puts regulations in place that make it impossible or prohibitive to start a new social media site.


That's an interesting perspective!

How do patents interact with it? If facebook got a patent on some new VR technology that allowed face to face interaction at a distance, and didn't license it out, would that turn the relevant portion of facebook into public infrastructure? What if they licensed it out but only to other people who also won't serve people saying <x>?


One is contracted by the US government to provide public infrastructure (and AFAIK still hasn't fulfilled their end of the agreement), and the other is an entity that operated and grew organically to provide a service that the government didn't ask for.


Is google fiber (which, afaik, was not contracted by the government) private infrastructure in your model then?

(Both yes and no are certainly valid answers, I'm just curious about how you view this)


I don't know what Google's contracts and agreements are with local municipality, so the short version of my answer is that I don't know how i'd classify Google Fiber myself.

For a more general point of view, i'd take into account a lot of factors such as:

- who "owns" the service, both from an "on paper" POV and who builds and maintains it

- what does the contracts between the entity and the government say

- did the government pay for a service to be built by the entity, vs the entity asked for permission to build the service, vs the entity built the service using non-government-permission-required resources

- what was the spirit of the relationship between the entity and the government

- who will be using the service

- is the service a government-allowed monopoly

- etc.

Facebook does pass some of those tests (eg. the "public at large" is the target userbase for it), but fail a lot of the other tests at least in how they operate in the United States. Comcast's general laying out of wire + offering services over that wire passes a lot more of those tests and puts the service into public infrastructure in most folks' eyes.

edit: formatting


You could compare it to the street vs a private hall - everyone has the right to walk on the street, but not to automatic admittance to private venues.

Now the problem is that social networks in many respects fill the old function of the 'town square', which was a common area accessible to everyone. That commons area has effectively been divided up in the name of capitalism, in parallel to the Enclosure Acts which destroyed the British commons, so now we have a large number of (private) platforms instead.

There are of course protocols that don't suffer from this problem of private capital and monopolistic control, but (like land for the commons) protocols without infrastructure can't compete effectively against platforms' huge network effects. The practical upshot of this is that extremist groups will usually just head for other less discriminating platforms, e.g. there's a lot of white supremacists on VKontakte (though I'm not sure how this will pan out if Russia rearranges its internet connectivity with the rest of the world).


"Free Speech" != 1A


Exactly. Freedom of speech isn't just freedom from government censorship. Other entities can infringe on it.


And when you're in my house around my children you absolutely do not have it.


Right. Freedom of speech logically cannot be unlimited for everyone at all times. That's why government protections of freedom of speech tend to be limited in scope.


I am in agreement with you on this point. FB isn't obligated to give anyone a publishing platform.

People with hateful, racist rhetoric can certainly attempt to fully self-host their shit content on their own servers. Essentially all they need to do is figure out how to run their own authoritative DNS servers and their own HTTP daemons. Various publishing CMSes that work on a common LAMP-stack server are GPL/BSD licensed and require no external connections to any platform services.

Hopefully, ethical ISPs will refuse to host their VMs or dedicated servers, so they'll have a bit of a harder task to find a publishing platform. But if you spend 30 seconds googling "russia VPS" you can find VM services which don't even disconnect customers for hosting virus/worm/trojan/botnet command and control sites, so the racists can go there.


This seems absurd and unreasonable once you look at it more closely. If any group of people say they want to go live apart from others (white, gay, men, linux users, whatever), I'm fine with that. I wouldn't want to live in a racial separatist region myself, but I have no problem with other people choosing to do so. Treating black nationalists as heroes (which in a historical sense they were) while making white nationalists a target of scorn seems...unsustainable.

If FB wanted to redirect funding from "bad" things they don't like to a decent group, Life After Hate is about as bad as SPLC. There have to be better groups focused on helping people of various types.


The problem is when you say you want to live apart from others, but you want that to be right here where there happen to currently be other people. When you say that, even if you claim to not be advocating for violence or forceful removal, that is effectively what you’re advocating. There are white nationalist groups that actually use the phrase “peaceful ethnic cleansing.”


There are also Muslim groups who say as much. "We will just out breed them" is a common tactic used to great effect in Kosovo and Western Macedonia by Albanians.

The shooting in Christchurch referenced a video from the Mosque in question that spelled the same thing out. Unfortunately I can't share the video due to laws which would put me in jail. [0]

Is your objection to them saying it out loud, or saying it in English?

[0] Isn't censorship grand? Now instead of seeing the video yourself you have to trust someone else to tell you what's in it.


> There are also Muslim groups who say as much.

My criticism applies to them as well, and as far as I can tell, they are also banned from Facebook.

> Is your objection to them saying it out loud, or saying it in English?

English very clearly has nothing to do with my objection.


>My criticism applies to them as well, and as far as I can tell, they are also banned from Facebook.

Again, I can't link to either of the documents because censorship, but the Facebook group of the mosque that was shot up in Christchurch absolutely did have a video explaining why the Christian west was doomed due to having a birthrate of 1.x children per woman and why Muslims were going to inherit the countries because their birthrate was 5 children per woman.

That was left up long enough for the terrorist to find it, put it in his manifesto and me look at it an hour after the shooting had happened.

Good luck banning the FB page of that mosque after the terrorist attack against them though.


If you can't link the document, why don't you link the video he purportedly referenced? There is nothing stopping you from sharing said video, you just can't share the doc.

So I'm going to call baloney on your claim unless you can provide evidence for it.


Those identity categories are not all equivalent. Wanting to live among only "white" people is not a coherent or rational preference. It can really only be grounded in racism.


I agree with you.

But let's say you have some group which opposes certain policies/bills regarding immigration at the moment. Some people will automatically label them as racists, which is a broad accusation that cannot be proven or disproven. What do you do with that?

Where do you draw the line and who does?


Do you realise that any person has a right to have a racist point of view, exchange it in conversations with like-minded people and discuss it with others, as long as it doesn't result in real discrimination?


> as long as it doesn't result in real discrimination?

It's incredibly naive to think that "hav[ing] a racist point of view" wouldn't result in "real discrimination" out in the world.

Here's something relevant from William Clifford's "The Ethics of Belief":

> I shall surround myself with a thick atmosphere of falsehood and fraud, and in that I must live. It may matter little to me, in my cloud-castle of sweet illusions and darling lies; but it matters much to Man that I have made my neighbours ready to deceive. No belief held by one man, however seemingly trivial the belief, and however obscure the believer, is ever actually insignificant or without its effect on the fate of mankind, we have no choice but to extend our judgment to all cases of belief whatever. Belief, that sacred faculty which prompts the decisions of our will, and knits into harmonious working all the compacted energies of our being, is ours not for ourselves, but for humanity.


Sorry, I don't think a misused quote can justify a thought crime. It would be funny and educative if we could temporarily reanimate Mr. Clifford to give him a chance to appreciate the twisted logic that is required to apply this particular quote in this particular context. But we can't, so I suggest we leave the dead in peace.

I never said that such points of view don't have consequences. But there is law, and means to enforce it, when discrimination is realised. And you can indeed define some reasonable limits on speech. But as soon as you go after personal opinions, you do more harm than good.

By doing so, we steal from people the chance to make up their own minds and choose their own actions. It's an important part of the human experience for many. And when that freedom is threatened, they often resist as if it were an existential threat to them.

We know for a fact now that humans are evolutionarily wired in favour of tribalism and prejudice, and overcoming our own nature is a hard and delicate process. And a slow one: conflicts can last for hundreds or thousands of years. History shows that even if you manage to keep them dormant using state force, it's not forever. The state will eventually fail and people will continue killing each other with the same enthusiasm, if not more. Just look at the pogroms that took place in the brand-new nation states the moment the USSR collapsed, an excellent illustration.

There must be a solution to these problems that relies on force as little as possible; otherwise it's not very sustainable. But instead I see justification of mob justice, thought policing and global censorship. The last one is especially infuriating. Tools that are required to implement such censorship should not exist and will certainly be exploited.

After all, the people we disagree with are just regular people, not monsters. Likely troubled, needing a group they can identify with, lost in a rapidly changing world, feeling unneeded, susceptible to the human flaws we all have to a certain degree. And yet it's not uncommon to believe it's a good idea to just dismiss and alienate them while somehow pretending to hold the higher moral ground. I like those opinions no more than you do. But I see hate, arrogance and cruelty just shapeshifting, and nothing good coming out of it.


The difference being that white nationalists generally don't want to run away to a large private homestead and live out their lives privately and whitely - they want to convert the existing multiracial society they live in into a uniracial one.


My question is this: Are they also going to ban other ethno-nationalist movements? Last I checked, the Hebrew Israelites (a black nationalist and anti-semitic group) are on FB. They believe that the people who currently identify as Jews are evil impostors.


Are any of them communicating and chest-beating to organize violent acts? I realize it's tough to tell with some of these groups. I think Facebook in this situation is looking at net effects (esp. given the timing of it all).

Note German, Dutch and French Heritage celebration isn't banned. Neither is Palestinian or Syrian.

But Hezbollah is banned [1], not sure why White Nationalists shouldn't be.

[1] https://www.timesofisrael.com/facebook-twitter-pages-of-hezb...


Even the extremely left-wing SPLC has warned about the Return of the Violent Black Nationalist.

https://www.splcenter.org/fighting-hate/intelligence-report/...


Yes, yes they are. Plenty of videos on YT of Black Israelites attacking innocent bystanders, yelling death threats and racial slurs, etc.


I wonder about this too — the black Israelites are a fringe group, but what worries me are ethno/religio-nationalist groups that hold political power in the nation they reside in. Lots of examples in the Middle East and Africa; and some would say in the US as well (and the “some would say” is kind of the point here).

The more I think about these kinds of questions, the more inevitable the total Balkanization of the Internet feels. Countries are finally waking up to the fact that “individual privacy == national security” but they all have their own notions of how much privacy the individual is afforded and from whom.


And the last time there was a “Hebrew Israelite”-motivated mass killing was... when, exactly?


Since when was triggering "mass killing" the standard for censorship on the internet?


At least since Facebook's current community standards were written, probably earlier:

We aim to prevent potential real-world harm that may be related to content on Facebook. [...] We remove content, disable accounts, and work with law enforcement when we believe there is a genuine risk of physical harm or direct threats to public safety.

In an effort to prevent and disrupt real-world harm, we do not allow any organizations or individuals that proclaim a violent mission or are engaged in violence, from having a presence on Facebook. This includes organizations or individuals involved in the following:

- Terrorist activity

- Organized hate

- Mass or serial murder

- Human trafficking

- Organized violence or criminal activity

We also remove content that expresses support or praise for groups, leaders, or individuals involved in these activities.

That seems like a pretty clear statement that people who are engaged in / advocating for / recruiting for groups that engage in mass killing will be censored.

It would surprise me if Facebook were the first internet platform with rules like this.


Dunno about that one but the last anti-Jewish mass killing wasn't that long ago.


> the last anti-Jewish mass killing wasn't that long ago.

Indeed. And it was motivated by white nationalism, not the Hebrew Israelite movement, so it's kind of a weird thing to bring up if you are trying to make the case that deplatforming the latter has as strong a case as the former.


Racism is racism. Is there a difference?


Some racists are more kill-y than others.


Not for Deontologists, but I'm a Consequentialist myself.


The man who shot the 9 cops in Dallas had a lot of their literature.


That isn't true.


Irrelevant? Violent hate speech doesn’t have to result in a mass killing before you ban it.


> Violent hate speech doesn’t have to result in a mass killing before you ban it.

I would say that, where there is debate over the level of danger and need to take action to restrain hate speech, whether or not that particular kind of hate speech is motivating violent acts and on what scale is a quite legitimate factor to consider.


[flagged]


[flagged]


Dang, communists killed 100 billion people, let's keep repeating that whenever people ask for affordable healthcare or economic democracy. Also important is that we never talk about the mass incarceration of black people, Native American genocide, or Bengal famines.


> Are they also going to ban other ethno nationalist movements?

Yes, at least sometimes?

I think they did ban the collection of accounts in Myanmar which advocated for genocide of the Rohingya, but only a few years after the genocide was underway, so yes https://www.nytimes.com/2018/10/15/technology/myanmar-facebo... .


It was odd they are only going after white nationalism.

I have the same question. What about all the other nationalist movements?


> It was odd they are only going after white nationalism.

They aren't. They went after ISIS years ago, and they've recently been trying to go after accounts contributing to the Rohingya genocide in Myanmar, as a sibling comment noted and provided a link for.

If Hebrew Israelites start driving cars into crowds or shooting up churches and synagogues, I'd hope Facebook goes after them, too.


As I understand it though, Facebook is going after white nationalist groups that don't drive into crowds and shoot up churches.

(Also, those are odd examples... AFAIK each time it was a single person operating alone.)


Is it your understanding that it's pure coincidence that an avowed white supremacist chose to drive 100 miles to Charlottesville and then into a crowd, by happenstance on the same day in the same city as the day of direct action by those white nationalist groups?

It is your understanding that no one else in the world contributed to the Tree of Life shooter's belief that murdering Jewish people was necessary and important? He, operating alone, picked a synagogue purely by chance?


?

> by happenstance

It was deliberate choice.

> picked a synagogue purely by chance

Also deliberate.

But though the deliberate choice of targets is clear, I'm not advocating banning Congregate Charlottesville or closing synagogues. Right to assembly is a basic human right.


Agreed. No one is talking about closing synagogues. Not sure why you brought that up.

And I wasn't suggesting you thought he rolled a dice when choosing his target. I'm asking what you mean by "operating alone": do you think it's happenstance that his deliberate choice of target, and his ideology, coincided with white nationalists'? That the lone Orlando Pulse nightclub shooter's ideology coincided with ISIS'?


Well, "white" isn't a heritage at all unlike Irish, English, German, Scandinavian.. etc.


The rabbit hole of whataboutism goes nowhere.

These people have been shooting up places of worship and running people over with their cars, all the while their supporters cheer them on.

They can get right the fuck out. Black Israelites start doing the same then by all means do the same to them.


"White" isn't an ethnicity. It's not a culture.

Irish, German, Roma, Italian, Russian, are all ethnicities. All of them have particularly Irish, German, Roma, etc cultures. But there is no shared "White" culture.

White nationalism defines itself not by a shared ethnicity or culture, but by making it crystal-clear who it excludes. A white nationalist can't tell you what being white actually means - but he can tell you exactly who isn't.


By that definition, "Black" and "Hispanic" and "Asian" aren't ethnicities or cultures either. Each of those groups has numerous distinct and often warring subgroups.

But if Hispanic can be an ethnicity, so can European, AKA "white".


This is purely an argument about semantics, but the more technical term here is the US classification of race.

You have Asian covering disparate people from India, East Asia, and Southeast Asia with completely different cultures and economic states.

Relatedly, there's African-American, which covers both black Americans and black Africans, but people don't really consider white South Africans to be African-American.

And then we get to white Americans, which in modern times covers most Europeans, but at various points in history didn't cover Italians, Germans, or Irish.


African-American not only excludes the descendants of immigrants like white South Africans, but also Arabic North Africans whose ancestors have lived in Africa for as long as we have historical records. It's a terrible misnomer.


This is nonsense. Well meaning nonsense perhaps, but nonsense nevertheless.

If some measure of "white culture" does not exist, then what is it that people are code switching to in the workplace? What is it that comedians often make fun of, and why do audiences (including white members) laugh in recognition?


Are you sure they aren't using it as shorthand for a particular class of people? Hipsters, rednecks, or oblivious middle class bumpkins, depending on who they are making fun of?

If you don't agree, how would you describe white culture?


As a non-white person living in America, this is scary to see. It's a policy targeting a specific race of people's content... How is that not racist? It will only foment more resentment and retaliatory sentiment while serving to make "certain kinds of racism" socially acceptable as long as it's the "right race" to discriminate against. Ban ethno-nationalists if you feel so inclined, not {{insertRaceHere}} nationalism. Otherwise this is a slippery slope to head down


I don't see why facebook would be obliged to host content they don't agree with. These people will always find a platform on the internet. Facebook doesn't need to make their recruitment more convenient.


I get what you're saying but if they're going to curate what content that they put forth, then they cease to be a platform and they become a publisher. Publishers are liable for much more under the eyes of the law. They can't cherry pick the advantages from the two systems without any of the drawbacks.


That's quite an interesting question. Filtering is not publishing; the content is user-generated. However, they are taking on an editorial role. Also, they are not a newspaper, so terminology from that kind of medium might not be useful.

Any platform that offers freedom is going to have problems at the head and tail of its distribution. I feel Facebook is trying to adjust where it makes that slice and is justified in doing so, both on a moral basis and according to the wishes of its shareholders.


Platforms choose what to remove, and Publishers choose what to include. No matter how much a Platform removes, it never becomes a Publisher.


This seems a rather pedantic difference -- "let it be posted and remove it later" versus "reject it upon submission."


I honestly wonder why you feel the need to defend a business worth 138 billion with more than 2.23 billion monthly active users, 210 million in the U.S. alone.

Isn't it obvious that they are not just ANY business, like a mom-and-pop store? They are THE place where public discourse is happening, with massive influence on culture, society and politics.

Yet people feel the need to jump in and defend their right to do whatever they want, because the laws we came up with when people were selling bread didn't mention anything about global corporations that control elections.


No need to defend a large business, but the idea that they can regulate content on their own platform is what I defend. If I were to do the opposite and pile on them for being large and profitable, I'd have just as weak of a case as for defending them based on that.


In America, white nationalism almost always means some form of hate against non-white groups. It's associated with the KKK and Neo-Nazis. Whether or not it's possible to have a non-hateful "white ethnic interest group," that's currently out of the question in the US and will be for some time.


[flagged]


Personal attacks are not ok here. Please don't do this again.

https://news.ycombinator.com/newsguidelines.html


I am not a ‘white person’ (whatever that means) ...but I don’t want to see ‘white nationalist content’ banned.

When people communicate on a public platform unselfconsciously, they reveal themselves openly and freely. There is no greater protection for vulnerable populations than transparency. Freedom of speech is a gift and a valuable tool. This move is dumb beyond words.


> There is no greater protection to vulnerable populations than transparency

This is somewhat true, but your argument has a pretty big flaw.

> When people communicate on a public platform unself consciously, they reveal themselves openly and freely

This simply isn't true. You aren't revealing that you read this content. The authors aren't always revealing themselves. Those who do reveal themselves don't really do themselves any harm, because they tend to be well established as white supremacists already.

Richard Spencer doesn't care if someone calls him racist and evil.

This is why people take photos and plaster them online when white supremacists march. People get recognized, that's the point where they can be called out.

> Freedom of speech is a gift and a valuable tool.

These people are radicalizing young men into committing violent acts. They should be banned the same way terrorist content has been removed from other platforms.


I think terrorist content is different from nationalism/racism. The latter is still a deeply held belief. Regardless of how we feel about it, everyone has the right to feel love or hate or superiority.

This is really a failure of the state to install controls and protections against out-of-control elements. This is a breakdown of state measures to deter/punish unlawful acts and a failure to protect WHEN actions occur, rather than a breakdown of society.

One cannot make an entire society conform. To me, that’s terrorism too.


> The latter is still a deeply held belief

And "I need to kill the <undesirable group>" isn't sincerely held?

I think you have to believe pretty sincerely to be radicalized to the point of committing murder.

> Regardless of how we feel about it, everyone has the right to feel love or hate or superiority.

Why? I was perfectly happy living in a society where that was not the case. Where freedom of expression is guaranteed, rather than absolute freedom of speech. It's not flawless, but it's better than nazis murdering people all the damn time.


This action doesn’t have an impact. If one thinks racism will erode because Facebook redirects one to a page imploring one to not hate, then I don’t think one has been on the receiving end of racism at all.

If anything... driving free speech, however hateful, to dark recesses with only a willing or curious audience, and without the judgmental and opposing view in public, will only make the situation worse. Nasties multiply faster when they are not witnessed.

On a simple level, think of an OCD monologue about whether your malaise is flu... vs checking with a doctor who declares you ill (or well). Being witnessed is a huge deal. It affects people's opinion of themselves.

It is well known that we behave better when we know we are being watched.


> The latter is still a deeply held belief.

And Salafi Jihadism isn't? I have some Saudis and Pakistanis I should introduce you to...


> This is a breakdown of state measures to deter/punish unlawful acts and failure to protect WHEN actions occur rather than breakdown of society.

You could say that Facebook has effectively privatized prior restraint, given the "town square" arguments made elsewhere in the thread. Great when we happen to agree with the views being restrained, I suppose.


> This is why people take photos and plaster them online when white supremacists march. People get recognized, that's the point where they can be called out.

I've found it interesting that the supposed "bad guys" are willing to openly espouse their views while the opposing side frequently utilizes masks to hide their identity.


> I've found it interesting that the supposed "bad guys"

Friend, if you aren't willing to condemn the white supremacists as the bad guys I seriously question your decision making.

There is zero value in maintaining this veneer of both sides being in any way the same.


I've talked to some people that didn't seem so much white supremacists as pro-white. They were mostly concerned about the willingness for society to ignore attacks on white people while vilifying the smallest transgressions against other groups.

I've also seen the violence from the masked, hard-left types while never any initiated from the other side. At least in some parts of the US. I can't speak for what occurs in Europe.


> I've also seen the violence from the masked, hard-left types while never any initiated from the other side.

In Europe and the US the main sources of terrorism are i) the far right, ii) Islamic extremists, and then a long way behind iii) animal rights and environmental extremists.

If you haven't seen far-right violence it's because you're not looking. It's pervasive.


I'm sure it (far-right violence) exists. But most of the videos I have seen online show antifa initiating violence. As far as I can tell, antifa seem to be the soldiers for the extreme left. Are we just more willing to accept their behavior because they are fighting for the "correct" side?


> I'm sure it (far-right violence) exists.

Given multiple, recent mass killings (as well as additional disrupted plots) with explicit far-right motivation that have gotten widespread, in some cases global, news coverage, it is more than just something that vaguely exists.

> But most of the videos I have seen online show antifa initiating violence.

That really says a lot more about your selection of information sources than ground truth.

> Are we just more willing to accept their behavior because they are fighting for the "correct" side?

I think we're more willing to accept their behavior because there aren't bodies being piled up in job lots due to antifa-initiated or -motivated violence.


This is whataboutism taken to its extreme.

The entire motivation for this change at Facebook is that someone just finished killing 50 people in the name of white supremacy and streamed it. You would have to bury your head pretty deep not to have heard anything about this.


How do you transparently protect those in Christchurch NZ or at one of the sites of nationalist, extremist violence? Transparency does nothing to deter the mentally ill and psychotic individuals who perpetrate mass murder.


Just going to add a caveat that most terrorists are not, in fact, mentally ill, and arguments of the form 'you'd have to be mentally ill to commit this sort of violence' stigmatize mentally ill people without having any foundation in fact. They're often advanced by people who are unfamiliar with political violence and don't like discussing it.

Here's a decent and current introductory article on the topic: https://medicalxpress.com/news/2018-11-link-terrorism-mental...


Then let me amend my statement: I was not intending to state that those who are mentally ill are more likely to be violent; rather, I'm interested in whether those who do commit mass murder have problems with mental health - would you disagree with that interest? Is it the act of a healthy mind to kill dozens of people? If we allow that someone who can willfully kill 48 peaceful individuals may have something unhealthy in their mind, then maybe we should look into the treatment of that specific subset of mental illness.

I'm not part of the group who says the tail wags the dog here. I don't think mental illness leads to violence, but that violent people may be more likely to have certain kinds of mental illness.


The thing is that empirical inquiry already suggests that mental illness is a much less important factor than personal associations. Considering that psychiatry has a severe etiological problem already, I don't think it's such a productive line of inquiry.

I mean, a pacifist could argue that choosing to join the military, or to obey orders that might result in the deaths of civilians, is pathological, or that states themselves suffer a sort of institutional pathology. But if one accepts a casus belli as a valid premise for military action, such questions immediately collapse into a cost-benefit analysis - which is, to some extent, how military forces actually operate.

https://psycnet.apa.org/fulltext/2014-33751-001.html


Being aware of people and surroundings in their true form is better than gagging some of the people around us, yes?

We have been hating on others not like ourselves since the caveman days. I honestly can’t think of any time in history when we haven’t been doing horrible things to each other. It’s like cancer. We are able to diagnose it and hear more about it now.

It’s like saying ...we are going to install a ban on sharing that you have cancer. “If we don’t know about it, we don’t have to acknowledge that there is a terminal illness amongst some of us. And all will be fine and dandy.”


Yes and no. It's definitely better if you're in the business of tracking down violent extremists and interrupting their recruitment or action plans. On the other hand, not everyone is cut out for that and it's unreasonable to expect everyone to do that all the time for themselves.

Rather than gagging certain people, anti-racism and CVE focuses on raising the cost of certain kinds of speech acts, specifically ones that are less about intellectual inquiry than about intimidation and coercion, which behavior may endorse or actually escalate into violent physical acts.

Despite the aphorism 'sticks and stones may break my bones but words can never hurt me', hate speech imposes a significant burden on the unwilling recipient, from general anxiety up to full-on fight-or-flight reactions, and the infliction of this burden is deliberate and strategic. It seems entirely reasonable to me that the people who want to engage in such speech should experience for themselves how their speech acts feel to the recipients.


There is a certain honesty in being openly racist. I kind of appreciate that because I can walk away.

I can walk away!! That is an enormous relief and I am in control. I got a cue..information and I am free to act upon it.

If I don’t have cues, I will walk INTO unknown circumstances and be clueless with those who don’t want me in their midst.

True story: the most vicious racism I have experienced is the covert, passive-aggressive kind in the heart of liberal Berkeley. And I have lived in the South. Yes. The South. Some were absolutely lovely and others... they don't want you eating corn and grits with them at their Waffle House. And that's fine. Because I knew to stay out of certain Waffle Houses.

But Berkeley tho'... I needed therapy. It was a serious mindfuck.


> I can walk away!!

And yet quite a lot of people get beaten up or murdered by racists, why didn't they just walk away? Because, in many cases, they didn't have the option. Some racists just sit there and say hateful things, others are more active. I'm guessing you do not have much experience dealing with the latter variety.


You are missing the point of the original article. It’s about Facebook censoring content based on words. That is not going to stop people from acting in reprehensible and violent ways.


And you're ignoring the points I made above that don't support your argument, so it appears we've reached a conversational impasse.

> That is not going to stop people from acting in reprehensible and violent ways.

It's going to make it harder for them to get the audience they desire, and harder to recruit, radicalize, and organize. Contrary to the 'lone wolf' trope, violent actors almost always have identifiable and often leaky support networks.


I understand what you are saying, but for good intelligence, information must be able to be traceable and visible.

Transparency is helpful. And non transparency hinders detection and pursuit.

There is a lot more damage prevention and damage control you can do if everything is above board.


I don't know as much about hate as I should, perhaps, but it seems arbitrary to make rules about what's hate and what's not.

This part, especially, from the article: "We also need to get better and faster at finding and removing hate from our platforms. Over the past few years we have improved our ability to use machine learning and artificial intelligence to find material from terrorist groups. Last fall, we started using similar tools to extend our efforts to a range of hate groups globally, including white supremacists. We're making progress, but we know we have a lot more work to do."

My question is, how are you going to know when you're done? What would Facebook look like if they didn't have any more work to do on hate? I'm honestly asking this.


The definition of hate speech and its boundaries has been thoroughly explored by the US supreme court. A taste of it:

https://en.wikipedia.org/wiki/Hate_speech_in_the_United_Stat...


THANK YOU! That was helpful to read.

This is my favorite bit, from right at the top: "Hate speech in the United States is not regulated, in contrast to that of most other liberal democracies.[1] The U.S. Supreme Court has repeatedly ruled that hate speech is legally protected free speech under the First Amendment. The most recent Supreme Court case on the issue was in 2017, when the justices unanimously reaffirmed that there is effectively no "hate speech" exception to the free speech rights protected by the First Amendment."


IIUC, US law takes notice when someone acts on violent rhetoric. Then, the violent rhetoric can be used as evidence that the actions were part of a hate crime.


I'm 99% certain this is wrong. The boundary is "imminent" "lawless" action.

If your speech inspires someone to shoot someone an hour from now, it's legal. It wasn't imminent. If your speech inspires someone to commit a non-violent but illegal action immediately, such as buying some drugs, it's potentially illegal (outside of First Amendment protection).


I agree with you. Apologies; I was unclear. Scenario in my head: actor (a) spreads hate speech, actor (b) goes and kills someone named as the hated party by (a).

Unless (a) directly incited (b) (via the boundary test you cited), (a) isn't legally guilty of anything.

But if the law discovers (b) heard (a) and acted because they believed (a)'s rhetoric, (b)'s crime rises past just violent crime to hate crime (because their reason to commit violence was a hate-crime motivation).


I like that part too, because it also brings up the notion that the US version of near-absolute free speech is not the only one that exists. Other countries have banned hate speech without falling into totalitarianism, banning legitimate political discourse, or falling into the other traps that a "slippery slope" argument predicts.


Most of the governments of those countries have only been around since the end of WWII.

What happens long-term when the people whose memories of the horrors of the early 20th century die off? That is still an open question.


==What happens long-term when the people whose memories of the horrors of the early 20th century die off?==

Surely you could think of race/religious/class based horrors that have occurred since WWII. There have been genocides in places like Sri Lanka, Myanmar and Yemen this century. There is an ongoing one in Darfur, Sudan [1]. Rwanda [2] and Bosnia and Herzegovina [3] had genocides in the 90s.

[1] https://www.cbsnews.com/news/witnessing-genocide-in-sudan-08...

[2] https://www.history.com/topics/africa/rwandan-genocide

[3] https://www.history.com/topics/1990s/bosnian-genocide


Curiously, some of those horrors are actually counterexamples to the way we think about these things. For example, the Rwandan Hutus were historically oppressed by the Tutsis as well as European imperialists (who reinforced the dominant position of the Tutsis). Prior to the genocide, it would have been relatively easy to write off Hutu nationalism as sympathetic and understandable while being more concerned with Tutsi nationalism (especially since there were Tutsi militias roaming the countryside.)

Likewise, I don't know about Bosnia, but it would be pretty understandable to sympathize with the Serbian attitude towards Croatia declaring independence from Yugoslavia because the last time Croatia declared independence from Yugoslavia, it was to literally collaborate with the Nazis. So Croatian nationalism would have seemed vaguely fascistic and threatening, but Serbian nationalism, not so much.

The reason censorship doesn't actually prevent genocide is because any ideology that's popular enough to actually inspire a genocide is too popular to be censored.


==The reason censorship doesn't actually prevent genocide is because any ideology that's popular enough to actually inspire a genocide is too popular to be censored.==

Do you have a source for this claim?

It seems to me that in every case where "ideology" plays a large role in people doing horrible things to each other, one of the common threads is propaganda (often built around de-humanizing the opposing side). Post-WWII Europe might be the best example of what a society with "censored" or regulated propaganda looks like.


> Do you have a source for this claim?

It's my own opinion.

> It seems to me that in every case where "ideology" plays a large role in people doing horrible things to each other, one of the common threads is propaganda (often built around de-humanizing the opposing side). Post-WWII Europe might be the best example of what a society with "censored" or regulated propaganda looks like.

My argument is that you cannot trust anyone to apply a fair standard for which propaganda could lead to people doing horrible things to each other and which propaganda could not. The far greater risk is for such a mechanism to be hijacked by the very demagogues it is intended to disarm.

It's easy. First, you pick out the most extreme and unpalatable examples of your opposition. Then you scapegoat them as a public menace worthy of censorship. Then you exploit the powers of censorship you've received and apply them more broadly. The rise of Hitler[1], for example, followed this literal pattern--he convinced the people that the Communist Party was enough of a public threat that they should be banned, received the executive powers necessary to do so, and then banned the Social Democrats too, for good measure.

It's understandable that post-WWII Europe has the particular set of scar tissue necessary to prevent the rise of another Hitler. The only problem is that Hitler himself rose to power on the back of the scar tissue that post-WWI Europe developed to prevent the rise of another Lenin.


> other countries have banned hate speech without ... banning legitimate political discourse

This is, of course, entirely a matter of opinion.


The US has plenty of history of censorship. The "imminent lawless action" standard was only established in 1969. Prior to that, for example during and shortly after World War I, teaching people how to evade the draft or advocating for them to do so was commonly prosecuted as "sedition". Eugene V. Debs ran for President on a Socialist ticket a number of times, many of which were from prison because he had been convicted of sedition.

This seems absurd to us today, but it was justified the same way people justify banning hate speech today. Even the cliche about "crying fire in a crowded theater" was coined in one of the Supreme Court cases that upheld the laws against sedition, as were the more general arguments about whether the enemies of open, tolerant societies should be able to weaponize that openness and tolerance against them. If you squint, a lot of the rationale for banning hate speech looks a lot like they're just trying to make the world safe for democracy.

(Was the German Empire really an existential threat to the safety of democratic, open societies around the world? Is white nationalism?)


I was thinking yesterday of the EU copyright law. Wouldn't something like that be impossible in the US due to the first amendment?


>I don't know as much about hate as I should, perhaps, but it seems arbitrary to make rules about what's hate and what's not.

It is arbitrary. This is a closed for-profit platform, run by decree. In the interest of shareholders, it can (and ought to) make whatever rules it wants, as long as they don't violate any laws.


Yes, well said. That's a fair point. If I think about my response to an announcement that said, "This kind of content is being banned because it reduces our shareholder value," I can't argue with that. I think, "Sure, you're running a business. It's a business decision. Good on ya." It's the moralist explanation that confuses me, and your comment helped me see the real decision free of the spin and justification that came with it.


I don't see what's so complicated about it. Facebook will set some standards and they will mete out moderation action as they feel is necessary. Will it be selectively applied in some cases? Yes. Will they get it wrong sometimes? Yes. So what? People who don't like Facebook's approach to moderation can quit, protest or otherwise boycott Facebook. This is the same problem of moderation that has always existed in online communities, the only difference between today and the past is that today the masses feel like they are entitled to use these platforms.


the other difference today is that 'the masses' used to mean ~millions of Internet-connected people staring at screens on their desks, now it means the whole damned planet, at all times.

as these networks have grown to global scale we need to assess them for what they are, not for abstractions that may be less relevant now that these systems represent unprecedented "public squares"


> millions of Internet-connected people staring at screens on their desks, now it means the whole damned planet, at all times.

So what? Those people use Facebook because they like to do so. If you feel like Facebook treats you unfairly then stop using it. It is permissible to communicate with people that use Facebook even if you don't use it. If you fundamentally disagree with the way Facebook operates then stop using it. Just stop using it.


I deactivated my account years ago. My point wasn't about me, but about the fact that these platforms are intertwined enough into society they are blurring the lines between optional service customers can switch off of, and global-scale utilities that are necessary to operate in society.


> these platforms are intertwined enough into society they are blurring the lines between optional service customers can switch off of, and global-scale utilities that are necessary to operate in society.

That supposed blurring is illusory. There are no impactful consequences for not using Facebook; it is wholly unimportant and an entirely superficial distraction. In fact, less use of Facebook correlates positively with mental health. You most certainly do not need Facebook to operate in society.


>There are no impactful consequences for not using Facebook

Yes there are. I decided not to use Facebook years ago. I also don't use any of their other services. Consequences range from being an outsider socially to being outright excluded from university/school/work events, because they're organized solely on Facebook and nobody told you. Everybody just assumes you're on there.

When you enter a new peer group of 20+ people you can't just tell them to completely mix up their habits because you happen to not like what they're doing. You might be able to do that when a network is reasonably small. When a third of the world population is on there, that doesn't work anymore. The thesis you're defending is wrong.


Sorry but Facebook is not a social necessity, period.

> When you enter a new peer group of 20+ people you can't just tell them to completely mix up their habits because you happen to not like what they're doing.

You're constructing an image of reality that isn't correct in order to support your position. If you encounter a gathering of your peers and someone says, "hey, you on Facebook?" and you say "no, I don't really use Facebook" in reality, the conversation just moves forward with a discussion of a different communication medium like SMS or snapchat or twitter or e-mail or whatever. The suggestion that lacking a Facebook is social suicide is obviously wrong.

Perhaps in your particular friend group people only communicate through Facebook, but that is definitely not typical and it still doesn't mean you need Facebook, it means that you find the Facebook network convenient for communicating with a particular group of people. Participating in a group-chat is not a necessity. What if your entire friend group communicates on Discord, does that mean you need discord? Obviously you don't need it, you just find it convenient. If you refuse to use Facebook and none of your friends care to adjust their habits to communicate with you that really has nothing to do with Facebook and certainly doesn't imply that Facebook is now a social need.

> The thesis you're defending is wrong.

My thesis is just common sense. The idea that Facebook is a necessity is laughably absurd.


The overarching point is arguable even if you don't think Facebook is necessary -- this is subjective and is why I used the term "blurring" and not "blurred." It's certainly the case that platforms like Facebook and Twitter provide examples of what could be on track, if they are not currently, towards becoming privately owned, unregulated, Internet-based, global scale communication networks that anchor our daily lives.

Even if those two platforms die off before they reach the tipping point where most would agree membership on the platforms are necessary to operate in society, we should still realize this is a potential eventuality. If you disagree with the core point of this potential outcome, then I'd be interested in the argument, but it certainly seems possible (and I'd argue, likely) that Facebook or a site like Facebook could quickly evolve into something that you need to be connected on to conduct business, interface with the government, get a job, get into school, or in general be connected to society in a way that is not making you "sub human" in a future where the whole world is continually connected via the Internet at all times.


You are arguing from theory. That's not how people work. YOU might work this way, but that doesn't mean this applies to people in general.

If you make something addictive, people still hold personal responsibility and can decide to quit, HOWEVER the majority of people will still find it hard to do so and therein lies the issue.


I'm not understanding your point. Are you suggesting that there are people who hate Facebook but they're so addicted that they can't quit? So what? They should seek counseling for addiction. If someone hates Facebook but can't muster up enough self control to quit that's entirely their own fault, their addiction to it certainly doesn't imply that they are entitled to it. If you lose your wedding ring in a bad beat at the casino, that is entirely your own fault, you don't get to blame the casino for being addictive.


There is merit to maintaining an open environment. Censorship makes the censored content seem temptingly dangerous; it looks like the powers that be are afraid of what some people are saying. I think it is best to let bad arguments fail on their own merit. The problem is that most people are unable to recognize a bad argument.


I should preface this by saying that I'm kind of a "strict Constitutionalist" so to speak.

In my view all these arguments people are making about "open environments", or "bad faith actors", or extremism or what have you are completely irrelevant. A privately owned, NON-governmental, environment is allowed to be as "open" or as "closed" as they please. If a "bad faith actor" wants to act in "bad faith", (whatever that is?), then that's their prerogative. That's their Constitutional liberty. (You ever consider this? Who, exactly, is defining what "bad faith" is anyway?) And extremism in content is fine, you don't like it, don't look at it. (Or in Facebook's case, don't have it on your system. That's fine. That's FB's Constitutional liberty.)

The Constitution really does provide room for obvious solutions to all of the arguments people are making. There's really no need to get bent out of shape about a lot of these things.


The trouble is that today, these companies have a monopoly on distribution. People who are banned go to other sites that capture extremists views (Gab/Voat/4chan).

At some point, platforms need to realize there is a moral value in allowing different viewpoints so that they can be discussed and seen. Otherwise, people leave for more "open" platforms that breed exclusive extremism, which only deepens the divide.

The other issue is that these platforms are so big that they literally dictate narrative in many cases, or at least to their audiences.


>The trouble is that today, these companies have a monopoly on distribution. People who are banned go to other sites that capture extremists views (Gab/Voat/4chan)...

If the banned go to Gab and 4chan, then there is no "monopoly" on distribution. The Constitution is what allows for the alternative options to even exist. It's GOOD that we have the alternative options. That's not a "bad" thing.

And what is wrong with people going to "open" platforms that "breed exclusive extremism"? I don't see anything Constitutionally wrong with that. (Keep in mind, you or I not liking something does not mean it should be Constitutionally restricted.)


What is the moral value in allowing group X to threaten the very existence of group Y? I don't mean saying that group Y should be kicked off Platform.com, I mean threatening to kill group Y IRL.


The problem is that an open environment cannot work when there are bad faith interlocutors actively seeking to undermine that open environment. There's a famous Sartre quote that deals with the paradox of tolerance in regards to anti-semites.


Either they are presenting valid arguments or they aren't. If you have the mental tools to analyze and quickly dismiss arguments it doesn't matter if they aren't acting in good faith.


> Either they are presenting valid arguments or they aren't.

I agree. But one problem with bad-faith actors is they can bring productive discussion to a near standstill, due to the time / attention / emotional-energy costs.


[flagged]



Except they are not censored. If these groups feel like they want to espouse these dangerous ideas they still can. They don't need Facebook to do so, and the rest of us can ignore them. But at the moment Facebook's and Youtube's algorithms put some of this crap front and center in your browser. To the unsuspecting this can seem like validating and endorsing those ideas.


Are you really saying that these groups aren't going to be censored by Facebook? Censorship does not require a government.


These groups aren't censored, full stop. Corporations have a TOS. If you don't like it you can take your speech elsewhere. Governments are different. You are free to choose a private platform other than Facebook, Apple, Google. You are not free to choose a different government.


The definition of "censorship" applies to contexts other than government so it's not clear what argument you are making.


They are free to express this content on any other platform, including their own.


Does that include platforms like Andrew Anglin's Daily Stormer, which:

>can not accept cash donations via any payment processor

>has had their domains revoked from multiple providers

>is only barely able to be protected from (numerous, expected) DDoS attacks after being kicked off Cloudflare, etc.

There comes a point when a person is not free to express their content on any platform on the web despite a lack of government censorship.


I want to be sympathetic to this sort of thing in the general sense. I really do.

But... I'm not at all sad that Daily Stormer has trouble staying online.

Free speech means you're allowed to say whatever you want. It doesn't mean that anyone has to give you a platform to say your piece. It doesn't mean that people can't react negatively to what you say. It doesn't mean that there are no consequences to what you say.

Assuming there is reasonable diversity in your options for platforms[0], and yet you still can't find a place for your speech, maybe you're just an irredeemable asshole and should change your views[1].

[0] The current availability of this is certainly debatable.

[1] Maybe you're not and you actually are being unjustly oppressed and censored. But I don't believe that's the case with the Daily Stormer.


Free speech is an interesting ideal but historically there's always been "acceptable speech" and "discouraged speech", where discouragement comes on a spectrum between "outright banned and prison" to "people don't invite you to parties".

In hunter-gatherer days if you had extremely antisocial ideology your tribe probably just left you in the desert and you died.


You are proving the OP's point. No government compelled Cloudflare to stop providing services for the Daily Stormer. They stopped because most people don't want services from your neighborhood white supremacy DNS provider, and that is what Cloudflare would be known as if they did.

Should the government compel private internet service firms to service anyone?


An argument can be made that if the ability to use the web is a human right[1] then yes, governments have a responsibility to ensure their citizens can actually use it. You can make an analogy to (private) providers of healthcare denying coverage to patients they find disagreeable.

[1] https://www.theverge.com/2016/7/4/12092740/un-resolution-con...


In what ways are domain name services analogous to healthcare?


Both can be true. These platforms are censoring and people are free to go to other platforms.


Of course! But when China says you're free to discuss the Free Tibet movement in any other country but China, we rightly call it censorship. What makes this any different? Or, I suppose, what makes the differences one of kind and not of degree?

In a lot of ways, Facebook is as much of a country as the Vatican is.


The analogy holds no water, it is infinitely easier to broadcast your speech on another platform (or make your own platform) than to deal with state level censorship. Publishing them on a company's dime and expecting platform neutrality is preposterous.


It's not easy to make a new platform. It's insanely difficult. These big social media companies have huge moats at this point, due to network effects and scale.

When other things have such powerful network effects we force them, through the state, to treat people fairly. For instance, utilities like water and electricity are natural monopolies because of the limited physical space for the infrastructure. Social networks aren't quite the same but there are powerful similarities.


They are indeed differences of degree, not kind. I don't know of any culture that has embodied 100% free speech through-and-through, it's all degrees of what you can say and what you can't.

The key difference is the penalty: is it going to jail or some people roll their eyes and stop being your friend?

I don't think it's ever going to be "people enthusiastically try to understand your positions and respect them". That has been the ideal in the US, but historically I can't find the pages in a history book where it's been largely true.


> Of course! But when China says you're free to discuss the Free Tibet movement in any other country but China, we rightly call it censorship. What makes this any different?

If you talk about a Free Tibet in China, you risk having your legs broken and your family put in camps.

If you violate Facebook's ToS, you might get banned from their website. Then you can just type literally any other website into your address bar and be on your way.


I would really appreciate it if people making terminology arguments could cluster in the same subthread instead of (likely inadvertently) resetting numerous individual subthreads with the same point.


You don't solve nazi ideology with reasoned debate. You don't solve it with deplatforming either but I doubt facebook is going to start throwing bricks so I'll take it.


And of course as long as you agree with FB's determination of who is a Nazi and who isn't, it's all fine and peachy?


If they overreach then we circle back, basic iterative process. I see nothing in their statement to indicate that they're going to overreach.


"Move fast and break things" doesn't quite work even in the SV bubble. Letting somebody as demonstrably immoral as Zuck do it in real life does not look like a good idea.

Better not let him iterate the overreach in the first place.


Not if 'not iterating' means letting literal Nazis have a soapbox in the tech equivalent of the town square.


Even assuming that this premise were true, FB, Google, or any other tech company deciding who is a "literal Nazi" is a terrible idea.

And of all the ways FB is making the world worse, "letting Nazis have a soapbox" is least of their problems.


> You don't solve nazi ideology with reasoned debate.

How did you reach that conclusion?


1940-1945


Total non sequitur, low effort post. The mods are really showing their bias in this thread.

People respond with their best arguments when they are confronted with bad ideas.

People respond with censorship attempts and violence when they are scared that their own positions are weak.


There was reasoned debate from ballpark 1932-1938, when the Nazis had roughly 40% of the vote, went into a coalition government, then somehow leveraged that into total control of the German state, but that's more evidence for your argument.


Yes, exactly my point.

History repeats itself and people on the left want to pretend you solve this problem with the debate club.


Are you advocating for violence against "nazis?"


Yes, it's like book burning. It makes the banned content more enticing. This is the big issue with New Zealand banning the recent shooter's manifesto. What if news organizations want to read it to report on it (even if it is just to say, this is why he's a nutjob)? Banning his manifesto grants it more importance. It shows the government is afraid of it for some reason.

Mein Kampf is still available for purchase in NZ. I've personally looked through his manifesto and think it should be discussed in schools, so students and teachers can explore his breaks in logic and contradictory statements. If you understand people like that, maybe you can identify them early on, or encourage attitudes in general that would prevent those xenophobic ideas from developing.


You can get a waiver from the NZ censor for having copies of that material if you're a scholar or researcher: https://www.classificationoffice.govt.nz/news/latest-news/ch...

> it should be discussed in schools, so students and teachers can explore his breaks in logic and contradictory statements

How about the explicit calls for assassination of specific individuals, among other direct incitements to terrorism?


> You can get a waiver from the NZ censor

The fact that NZ has a censor is abhorrent.


US has censored fundamentalist Muslim extremist material in concert with Twitter and Google since at least 2009.

Where were the free speech advocates back then? And why are they so vocal now that the extremists are a different skin color?


Twitter and Google are not government entities and can remove any content they want. That does not make the content illegal.


Copycat violence is a thing. When you offer people the chance to make a political statement and then be immortalized, history has proven that people will take you up on that.

Secondly regarding the most recent NZ manifesto, it isn't a well argued thesis as much as a dump of memes and shitposting.


Marsh v. Alabama, 326 U.S. 501 (1946), was a case decided by the United States Supreme Court, in which it ruled that a state trespassing statute could not be used to prevent the distribution of religious materials on a town's sidewalk, even though the sidewalk was part of a privately owned company town. The Court based its ruling on the provisions of the First Amendment and Fourteenth Amendment.

https://en.wikipedia.org/wiki/Marsh_v._Alabama

> "Ownership does not always mean absolute dominion. The more an owner, for his advantage, opens up his property for use by the public in general, the more do his rights become circumscribed by the statutory and constitutional rights of those who use it."


This subthread started as a reply to https://news.ycombinator.com/item?id=19503270.


This case was about a "company town", where there was very much a blending of public and private.

> The Court initially noted that it would be an easy case if the town were a more traditional, publicly administered, municipality. Then, there would be a clear violation of the right to free speech for the government to bar the sidewalk distribution of such material. The question became, therefore, whether or not constitutional freedom of speech protections could be denied simply because a single company held title to the town.

It's also not a good idea to bring up this case without bringing up Cyber Promotions v. America Online, in which America Online put up spam filters and Cyber Promotions tried to use the above precedent to say America Online was blocking their right to free speech. You can probably guess by the fact that we still use spam filters how that case came out.

As a final note, even that original case has had limitations based off of the type of property-

> In Lloyd Corp. v. Tanner, the Supreme Court distinguished a private shopping mall from the company town in Marsh and held that the mall had not been sufficiently dedicated to public use for First Amendment free speech rights to apply within it.


This is a nuanced topic, but I think the subsequent cases you mentioned all make sense and are consistent with the proposition that certain social media sites actually should obey the "company town" precedent.

Malls aren't and have never been the primary space used for public social discourse and commentary. Today, such discourse increasingly happens online, and it's increasingly true that sites like Facebook and Twitter actually are the primary space for public discourse.

It's really never before happened in history where a private company has become the mediator and facilitator of discourse involving millions of people simultaneously. Previous forms of media had extremely restricted participation roles in comparison. Newspapers, TV, and radio can all only publish a relatively tiny amount of content, so the idea that millions of people could simultaneously publish content to those platforms was never viable; they were only capable of pushing out content from a few folks to many, but they were never many-to-many. This almost infinitely scalable many-to-many communication property of modern social media makes it feel much more like a public space than newspapers, TV, and radio ever did.


>the proposition that certain social media sites actually should obey the "company town" precedent.

This only follows if you believe you're under an obligation to use social media.

>It's really never before happened in history where a private company has become the mediator and facilitator of discourse involving millions of people simultaneously.

This depends on what you mean. On the one hand, up until ~30 years ago, discourse involving millions of people was impossible. So in one way you're correct. On the other hand, media companies of all kinds have historically had a huge level of control over what people see. When the only form of communication was the town crier, he controlled what you heard. When we added printed paper, the people with the press had outsized control over the media. And the barriers to entry were much higher then: I can spin up my own website in an hour. Your average person couldn't afford a printing press.

>This almost infinitely scalable many-to-many communication property of modern social media makes it feel much more like a public space than newspapers, TV, and radio ever did.

What public space offers infinitely scalable many-to-many communication? Public space is distance limited, these platforms aren't. Just because these platforms chose not to heavily regulate the content doesn't preclude their ability to ever do so in the future, nor does it make a comparison to a theoretical public space that never existed any more valid.


> This only follows if you believe you're under an obligation to use social media.

Were the town residents in Marsh v Alabama required to inhabit the company town? I think the fact that they did is what made it a public space, regardless of whether they were required to.


I don't think that's quite right. The issue there is that the company town wanted to be an actual town- they were providing services, inviting people to move in (including people who didn't directly work for them), and generally saying "anyone who wants to can be here"- they were very explicitly trying to create "public spaces".

Facebook is not the same thing- people aren't moving there and people don't live there. I know we want to pretend that digital and physical are similar, but this particular case that everyone is citing was very specifically about company towns, and precedent afterwards has been pretty clear.


> What public space offers infinitely scalable many-to-many communication? Public space is distance limited, these platforms aren't. Just because these platforms chose not to heavily regulate the content doesn't preclude their ability to ever do so in the future, nor does it make a comparison to a theoretical public space that never existed any more valid.

You're still extremely limited on Facebook and Twitter by natural space and time constraints. Just because these take a slightly different form and aren't based on physical distance between communicating parties doesn't mean that communication on the platform is just completely unfettered.

So "infinitely scalable" does not mean that each person can actually consume all content produced by all other people. That's not possible in all current iterations of social media and presumably is fundamentally unachievable. Rather, it means the system can scale to an arbitrary number of producers and consumers, each of whom is still subjected to natural physical constraints, whether those be determined by physical location, screen space, the loudness of your voice, the speed at which you read, the speed at which you type, etc.


I believe that with network effects far more pervasive as things move online, monopoly/competition law needs to play a greater role. The basic idea is that any party that controls more than (TBD) percent of a significant market/distribution channel/medium/etc. (significance by total $, total # of users, etc., TBD) loses the right of a small, private party to unilaterally ban users.

If you create a FaceBook competitor, identical in every way except dozens of other users instead of hundreds of millions, it would not be a realistic alternative. Likewise for Twitter, or real estate listing services, and a growing number of other things where it's not the tech so much as the other users.

When network effects dominate, as is so common online, and users gradually lose the realistic ability to just switch to a different service with terms they like more, service providers must also lose the ability to impose any terms THEY like and to refuse service to users. What they can charge, who they can ban, etc., should become matters governed by regulators.


There's no objective, consistent way to define a "significant market" for highly differentiated products. What market is Facebook in? Is it all web sites? All mobile apps? All media? All social media? All communication tools? Those categories are totally subjective and arbitrary.


Legal status is frequently based on reasonableness. When you have a service with hundreds of millions of users, you already have a reasonable argument for "significant" even if the lower bound of significance is fuzzy. If you then survey the users of a giant system regarding practical alternatives, and the answer is overwhelmingly "even if there were one, I couldn't switch to an alternative offering the same features and price, because it would lack the critically important feature that everyone uses this one," you have a reasonable, practical, realistic means of identifying some of the entities that are too powerful, relative to their users, to be allowed to just do whatever they want with prices, who they will do business with, and any number of other policies.

If you want complete freedom to refuse to do business with users for whatever reason you want, charge anyone whatever you want and change as you like, and so on, you can have it as long as your users can reasonably exercise similar freedoms by doing business with alternatives. If network effects remove those options from your users, they should remove them from you, too.


As someone who refuses to use Facebook etc., I still see the free speech issue in this.

Imagine it from this perspective: to whom can a telephone company deny service?

If they have stepped beyond merely disagreeable speech (and I agree, it's often very disagreeable), where was that line crossed, so that our court system might process their transgressions?


Telephony is a commodity service with a legally defined market. The rules for such services are irrelevant when deciding how to regulate non-commodity Internet services.


>it's increasingly true that sites like Facebook and Twitter actually are the primary space for public discourse.

That's probably true, but in my opinion the overall problem is the centralization of the web (and the internet at large), not the policies of these websites.

You won't convince me to play devil's advocate for white nationalist content on Facebook, I would however gladly embrace any effort to help make the web more decentralized.

And it's not like it's an unsolvable problem, bandwidth and hardware have never been cheaper and yet hardly anybody hosts their own content anymore. That's what needs fixing IMO.


"And it's not like it's an unsolvable problem"

No, it is not. But it is not a solved problem either. Self-publishing content has been tried. For years. And the reason we are where we are is not because hosting your own content is hard or expensive; it is because it has unsolved problems that social networks solve. How do you consume the content you want to consume? How do you share your content with the people you want to share with? How do you build and manage an online community?

You might say that aggregators solve the first problem, blogs solve the second, and forums solve the third, but all of these things existed before Facebook and other social networks. Having a unified platform for sharing content with people you choose and consuming content from creators you choose won the market against multiple single-purpose services like blogs, forums, and news aggregators. And Facebook wasn't the first: the "original" social network LiveJournal started out as a sort of blog hosting service, yet simply by adding the ability to have "friends" and read a selective news feed from people you "friended," it became more popular than any other blog service of its time. Facebook and other social networks do exactly the same thing, and they are successful because they provide functionality that does not exist otherwise.

Could this problem be solved? Could we have a decentralized content hosting, yet keep all the benefits of a centralized service like Facebook? Most likely. But it is not a problem that already has a solution.


Considering how "corporate friendly" SCOTUS has been, and how often they think that corporations are people who have their own free speech rights (particularly with regard to spending money), I'm guessing they are not going to side with "free speech advocates" on this one.


The problem is someone can screenshot a Nazi Facebook post with the Facebook logo right there in bold blue and spam it across the internet as evidence that Facebook implicitly supports white nationalism (by giving it a platform).

I don't think there was ever an outcome other than the one we've seen - Nazis getting banned. The company town precedent doesn't make a good comparison in my head - this is Facebook content stored on their servers and printed on their timelines. I believe they should be allowed to control what appears in their timelines.


How is that a problem? It's true regardless of any policy of Facebook's.

You can get a screenshot of anything in a web browser. Screenshots of a browser have no evidentiary value at all.


“The problem is someone can screenshot a Nazi Facebook post with the Facebook logo right there in bold blue and spam it across the internet as evidence that Facebook implicitly supports white nationalism (by giving it a platform).”

I don’t think this is a real problem unless Facebook follows its current path, where it acts as though it endorses the content on its network (because it doesn’t ban it).

A Nazi can take a picture holding a Coke bottle and spam it everywhere. That’s not Coke endorsing them. But if Coke decided to only sell to “cool people who aren’t Nazis,” and then NAMBLA sent out Coke pictures, people might get confused.


An interesting conclusion of your suggestion: people may have to start capping user sign-ups to their site in order to maintain free speech rights:

    if (users.count.toString().length > 7) {
      return response.send(noMoreSignupsAtThisTime);
    }


Is this not a highly inefficient implementation of an integer comparison?


Sure. I was going for explanatory, not efficient.


    if (users.count > 9999999) {


  if (Math.log(users.count) / Math.log(10) > 7) { ... }


    if (users.count > 1e7){ ... }


  ... >= 1e7 ...
If we want to preserve (users.count > 9999999).
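For what it's worth, the variants in this sub-thread agree for non-negative integer counts, except that the strict `> 1e7` one differs at exactly 10,000,000. A quick sketch (the function names and the boundary check are just for illustration):

```javascript
// Hypothetical signup-cap predicates from this thread, compared at the boundary.
const byDigits = (n) => n.toString().length > 7; // original: more than 7 digits
const byValue  = (n) => n > 9999999;             // plain integer comparison
const bySciGt  = (n) => n > 1e7;                 // off by one: excludes exactly 10,000,000
const bySciGte = (n) => n >= 1e7;                // matches the original

const boundary = 10000000; // first count with 8 digits
console.log(byDigits(boundary)); // true
console.log(byValue(boundary));  // true
console.log(bySciGt(boundary));  // false
console.log(bySciGte(boundary)); // true
```

(The `Math.log`-based version is also equivalent in principle, but floating-point rounding makes it unreliable right at the boundary, so it's left out of the comparison.)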


How about AT&T, Verizon, or T-Mobile/Sprint if they banned certain content from private text messages? On one end, nearly everybody will agree to allow anti-spam measures, denying speech to certain unwanted, annoying people. On the other end, many would object to disallowing private messages based on morality scores of the content.


Are you asking about my personal opinion, or about what the law currently says?

Telecoms are regulated, which is why they can't do things like that. If we want to have the same thing apply to social media then we need to regulate social media to do so. Depending on a court precedent that has already been deemed by other cases not to apply to tech companies isn't going to work- legislation needs to happen.


Is there an established assumption that Facebook will extend their censorship to private messages also? Otherwise this analogy doesn't hold much ground.


I'm convinced a lot of people do not understand "free speech" as it applies to the First Amendment and as a fundamental principle of democratic systems of government.

The First Amendment puts limits on congress. It's a check against government oppression, full stop. The Fourteenth Amendment extends that all the way down to local governments.

I feel like many people are then trying to bend its scope past the breaking point to fit their worldview.


It's also a cultural principle, right? The whole idea of freedom of speech precedes its enshrinement in the Constitution. It's very important to put it there. But it would be meaningless words on paper if our society didn't generally tolerate and respect the idea of free speech. Book stores tend to carry both religious and anti-religious material, for example. Nobody tends to boycott the book store and force their hand, because we believe in that sort of discussion and debate about things.


> But it would be meaningless words on paper if our society didn't generally tolerate and respect the idea

Yes, but the idea is that the government shouldn't be able to suppress the spread of information and ideas, or persecute on the basis of ideas. It would only be meaningless words on paper if the courts didn't uphold the rule of law, and the First Amendment specifically, and the executive didn't enforce it.


Not if we had a culture that didn't condone it. There's the law and then there's cultural values. And the purpose of freedom of speech is to recognize that a healthy marketplace of ideas and debate is good for our society. That's a cultural value as much as it's a legal value.


This is true, that the First Amendment only refers to government action. But its underlying spirit is often taken to be that no entity with power should seek to use it to suppress free discourse.


This would invalidate all speech policies provided by corporations, and corporate personhood. The underlying spirit is irrelevant when there's a clear reference in the Constitution. It's not ambiguous in a way where "spirit" can come into play, i.e., like the Second Amendment's "right to bear arms."


AFAIK its underlying spirit has pretty much always been in regards to government, with origins in ancient Greece as it pertains to democracy (the system of government).


> In Lloyd Corp. v. Tanner, the Supreme Court distinguished a private shopping mall from the company town in Marsh and held that the mall had not been sufficiently dedicated to public use for First Amendment free speech rights to apply within it.

So if a site has a monopoly on video commentary, that would seem to be "sufficiently dedicated to public use" to me.


These are separate issues though- having a monopoly doesn't automatically make you a "public use" company.

The fact is that Facebook already has rules that restrict it far more than "public use" would. Facebook blocks people for bullying, nudity, discussions about drugs and violence, age, and probably many other things. You can't even access much of the site unless you register. It's very much different from a town that allows everyone access to its streets (which is the original case we're all talking about here).


having a monopoly doesn't automatically make you a "public use" company.

Having a monopoly on a form of speech does.


Facebook does not have a monopoly on internet speech. Facebook does not even have a monopoly on internet speech for its users; people who have Facebook accounts can also have Twitters or Tumblrs or blogs or what have you.


Facebook does not have a monopoly on internet speech.

Being able to dole out the potent combination of virality/discovery+monetization to only one particular ideological group basically is a monopoly on Internet speech in 2019. Facebook doesn't control all of it, but a rather small group of people in the circles adjacent to SV CEOs basically does control all of it for the entire English speaking world. That is way too much unaccountable power in too few hands, particularly because it effectively drowns out the 1990's and prior conception of Freedom of Speech.


Social networks don't exist in a vacuum. You can leave if you don't like what's happening.

Myspace collapsed after user dissatisfaction. So did Digg. If someone in Wichita is unhappy with the current situation they can always do something on their own. Which is why Voat or Stormfront exist.


If someone in Wichita is unhappy with the current situation they can always do something on their own. Which is why Voat or Stormfront

Do you realize how arrogant and dismissive this can sound to people who happen to live in Wichita?

What is happening in 2019, is there is an active program to label all dissent even quite reasonable dissent like that of Steven Pinker, as something unsavory. Basically, what we are seeing in 2019, is that everyone who isn't a diehard adherent of the party line is to be relegated to Voat or Stormfront.

Korean American journalist from the wrong side of the tracks in Chicago? Call him "Alt Right" and get Antifa goons to go and beat him up. (Literally happened. You can find the videos online.) Left leaning Professors who are widely respected and cited in their fields are called "Alt Right" and "White Supremacists." Professors no longer feel free to say exactly what they think! There is no due process for people falsely accused this way. It's a form of power which can instantly exile people to a de-powered, effectively pre-Internet society. It's stacking the deck for a particular group of people (who happen to have been demonstrably wildly enmeshed in groupthink, and dramatically wrong about matters of public import -- again and again) to the tune of orders of magnitude greater effectiveness at media.

What if a cabal of corporate oligarchs decided only the people they liked got to have virality and discoverability over the Internet? That would be a competitive skewing of the odds far worse than a 1990's-style monopoly. But instead of startups and business, apply that to the most potent forms of discourse and free speech.

That's a recipe for mass groupthink and oppression through the control of speech and thought. It's one which is being enacted right now.


I would think a massive online platform spanning multiple types of media would be analogous to a digital "company town".


In a town, the only way for people to physically hear your speech is to be present in the town itself.

Facebook users are not restricted to only using Facebook. If they want to go to your blog they can.


Twitter, Facebook, and YouTube all banned Alex Jones at the same time. You can't just say "oh, go host it yourself" if all the tech companies conspire together on the same damn day. Plus, they are the only ones who can truly allow your words to be heard. You can't DDoS YouTube, but you damn well can get DDoS'd on your puny server.


Freedom of speech and freedom to publish are not the same thing. The latter does not exist; newspapers and book publishers are not required to publish every opinion.


> This case was about a "company town", where there was very much a blending of public and private.

With the pervasiveness of regulation in the modern United States, pretty much everything worth mentioning now is "a blending of public and private". Any business has restrictions on how it can hire and choose its clientele. A person with a minarchist political bent could consider this wrong, but it is undeniably the reality today.

> You can probably guess by the fact that we still use spam filters how that case came out.

I don't think this decision means what you think it means. Of course AOL was found to be within its rights to block spam. But that was because spam is repugnant, not because AOL's rights are absolute. Consider what would happen if AOL blocked communications from a competitor, or, say, political speech by a popular group. I cannot guarantee it, but I ascribe a very high probability to the outcome being very different. If so, the conclusion is not that the provider can do whatever they want, but that they can do whatever they want as long as they serve the public interest. Which means the current politics defines what they can and cannot do.


Consider what would happen if AOL blocked communications by a competitor, or say political speech by a popular group? I can not guarantee it, but I ascribe very high probability to the outcome being very different. If so, the conclusion is not that the provider can do whatever they want, but they can do whatever they want as long as they serve public interest. Which means, the current politics defines what they can do and what they can't.

AOL would have still won. That case didn't turn on the content of the spam. It absolutely was predicated on AOL's rights to control its own software/community.

Companies block competitor ads all the time. Companies frequently refuse to air or display political ads.


In the article you linked, Cyber Promotions v. America Online seems far more relevant given that it pertains to an individual's right to use an online, non-state-owned platform as a host for arbitrary speech.

In the decision[0], they cite three distinct tests used in previous cases to determine if the company acted as the state by asking if its activities are "state action" (i.e. meaning that removal of speech constitutes a violation of first amendment rights)

1. Is Facebook exercising the exclusive prerogative of the state in its activities?

2. Is Facebook acting with the help of or in concert with a state official?

3. Did the government insinuate itself into a position of interdependence with Facebook to assert itself as a joint participant?

All of these hold for the Marsh v. Alabama ruling, but the argument that these tests pass for Facebook's daily business activities is less convincing.

[0] https://scholarship.law.berkeley.edu/cgi/viewcontent.cgi?art...


I consider the sidewalk to be the last-mile connection to the Internet. The businesses lining that sidewalk may still trespass annoying pamphleteers from within their building.


Company towns were a bizarre (and, arguably, pathological) special case in American history.


Company towns were a bizarre (and, arguably, pathological) special case in American history.

Social media in the late 20-teens were a bizarre (and, arguably, pathological) special case in world history.

https://www.youtube.com/watch?v=wAIP6fI0NAI


No doubt, but that doesn't imply the lessons gleaned from how law applied to company towns are immediately portable to social media.


Given the potential for the company to absolutely control discourse within a community, it looks very portable to me.


No matter how much the definition of monopoly is twisted, I’m sorry, but Facebook isn’t one. Having the biggest audience and being monopolistic are not the same thing. I’ve never used Facebook, yet here we are talking through a similar medium. I don’t use Twitter, but I can still publish a blog. It’s seriously problematic when people try to redefine words to suit the argument of the day, because the erosion of the language persists and communication breaks down.

Nothing about FB is equivalent to living in a town owned by your employer, complete with public services, law enforcement, and grocery stores. YouTube is dominant because people make a totally free choice to use it, not because competition can’t exist. Nothing about being dominant in a market or network effects directly implies a monopoly. It’s not as though we’re dealing with Standard Oil, where if you want heat or a running vehicle you do business with them or get bent!


No matter how much the definition of monopoly is twisted, I’m sorry, but FaceBook isn’t one.

Monopoly isn't important. It's the control of discourse within an entire community which is key. Hell, it's not one entire community these companies control. It's hordes of communities across entire forms of online discourse for huge fractions and even the majority of populations online.

YouTube is dominant, because people make a totally free choice to use it, not because competition can’t exist.

Due to network effects, effective competition can't exist. This gives YouTube the potential to nearly absolutely control the very broad category of video discourse online.

Nothing about being dominant in a market or network effects directly implies a monopoly.

In 2019, this just seems ridiculous. This has got to change.

It’s not as though we’re dealing with Standard Oil, and if you want heat or a running vehicle you do business with them, or get bent!

It's as though we're dealing with Standard Oil, and if you want to be a full time political vlogger, you hew to YouTube's rules and politics, or get bent.


First you say that “monopoly isn’t important” and then go on to claim:

Due to network effects, effective competition can't exist.

And

>Nothing about being dominant in a market or network effects directly implies a monopoly.

In 2019, this just seems ridiculous. This has got to change.

And then

It's as though we're dealing with Standard Oil, and if you want to be a full time political vlogger, you hew to YouTube's rules and politics, or get bent.

Now, first of all, your last point is conflating the right to express yourself with the right to make a career on someone’s platform expressing yourself. The latter is not protected, but as you live in a free country you are welcome to try to change the law, I guess. You can, after all, still be a full-time political vlogger without YouTube; Alex Jones seems to manage it (unfortunately). It will be harder to find a mass audience and make money, but that isn’t protected, and neither does it imply monopolistic practices.

As far as the need for network effects to be classified as monopolistic, I disagree, but again, if you feel otherwise then by all means, try to get that passed into law. What isn’t acceptable is pretending or arguing as though it already is. I suspect that your attempt to pass such a law would be met with scorn and incredulity, because after all you’d be arguing for the inalienable right to make a living spouting views online. As it stands, no one is stopping you from attempting to do so, and no one owes it to you either.


First you say that “monopoly isn’t important” and then go on to claim:

The 20th century concept of "monopoly" is entirely insufficient. This is the underlying cause of the pseudo-paradox.

Now first of all your last point is conflating the right to express yourself with the right to make a career on someone’s platform expressing yourself.

Controlling monetization+virality/discovery is too potent a combination to let a single entity control for censorship purposes.

As far as the need for network effects to be classified as monopolistic, I disagree

It's hard to convince someone of a fact, if their bread and butter depend on their not acknowledging it.

As it stands no one is stopping you from attempting to do so, and no one owes it to you either.

If a bunch of people want to hear, and others are blocking, then the right to Free Speech is being abrogated. If people want to pay, and others are artificially blocking, then the right to Free Speech is also being abrogated.


How about this analogy:

3 newspapers dominate the news industry. Their circulation reaches the majority of people in a country. They refuse to publish editorials / letters from a certain point of view.

That fits all of the tests that you have outlined above, yet I think most would agree the newspapers have the right to editorialize and selectively publish what they choose.


3 newspapers dominate the news industry

In that case, these newspapers have a tremendous capability to manipulate the public. Don't have to take that from me: https://en.wikipedia.org/wiki/Manufacturing_Consent


You can definitely political vlog by renting space on a CDN to host your own webm's. Or BitTorrenting them and sharing magnet links. Google can't stop you.


Google can't stop you.

In 2019, the potent combination is virality/discovery+monetization. Google along with the rest of the SV CEOs and other people of equivalent status can indeed withhold that, despite the wishes of large numbers of people. Just "putting things online" is so 1990's. In 2019, the real power comes from that dynamite combination of virality/discovery+monetization. A 2019 formulation of Free Speech needs to account for that.

We saw this before in the 1980's with Manufacturing Consent. The western papers technically weren't censoring like Pravda. But in effect it amounted to the same thing.

The litmus test should be: Are there people who want to hear, yet a few are using infrastructure control to stop them? Then Free Speech is being blocked. Are there people who want to pay, yet a few using infrastructure control to stop them? Then Free Speech is being blocked.


Again, you're conflating free speech with indexing. You're as entitled to be listed in a newspaper's classifieds as you are entitled to be indexed on a search engine or hosted on a private website. Freedom of speech is not the freedom to be listed by someone else through force.


Again, you're conflating free speech with indexing.

In 2019, indexing gives you the capability to define reality. People riding on the coattails of giant SV corporations have been using this capability to basically put words into the mouths of people and ascribe positions to them that have no real bearing on the truth.

In 2019, indexing is part and parcel of Free Speech. In 2019, you don't have Freedom of Speech if you aren't indexed, and you can't trend. In 2019, trying to exercise Freedom of Speech like that is like giving a 19th century speech with a tracheotomy. It's not the same by literal orders of magnitude. It's as disingenuous as giving one man a club and your political crony a rifle, then declaring trial by combat and saying it's a fair fight.


And in 2019, Freedom of speech is not freedom to be listed by someone else through force. Reality is not the internet, cable tv, radio, or the printing press.


"No" and "No," then. "Strangers don't find out about my podcast" is nothing like "nobody can hear me." "PayPal won't middle-man my transaction" is nothing like "I can't get paid" (mailing checks or cash is a thing).


That's technically true in the same way that western media technically didn't censor news favorable to the Soviets. It did appear, but buried in the back, expressed in bland language.

https://en.wikipedia.org/wiki/Manufacturing_Consent

The salient truth, is that corporate oligarchs are trying to game the system so that their own side has several orders of magnitude more voice, and everyone they don't like has several orders of magnitude less voice -- then try and claim it's "Free Speech" and "fair." No. The system is rigged, and the disingenuous arguments are made simply because the law and cultural norms haven't caught up with technical reality yet.


A 15 second google shows the top social media sites based off monthly active users. Tell me an equivalent, viable platform to rival Facebook/Instagram that doesn't share the same political values.

https://www.statista.com/statistics/247597/global-traffic-to...


Having the biggest audience and being monopolistic are not the same thing.

Heinz Ketchup is the utterly dominant brand of ketchup in America by an order of magnitude. They sell more ketchup than every other brand combined in the U.S. and it is not a monopoly. Every attempt to make some new upstart ketchup has failed, and generations are totally sold on just Heinz with a degree of brand loyalty FB would kill to get... and yet they are not a monopoly nor has it been seriously alleged that they are.


Right, because it's extremely easy to find other brands that are effective in the same way. A social network's purpose and efficacy is directly related to the number of users in that network. Not at all analogous


It's not about an analogy; it's just illustrating the point that market share and monopoly are not synonyms. As to a social network being effective in direct proportion to the number of users, it would really depend on what you want to use it for. Eternal September theory certainly shoots down the general case of your claim, though.


Never claimed they were.

"it would really depend on what you want to use it for" It was my understanding that social media platforms were for connecting with people and sharing content/ideas with them.

If you're trying to rebut my claim, you're going to have to actually do it, not point to something else and claim that it does.


What claim is there to rebut?

A 15 second google shows the top social media sites based off monthly active users. Tell me an equivalent, viable platform to rival Facebook/Instagram that doesn't share the same political values.

I’m not seeing a claim there, and what you did bring up (marketshare) I’ve responded to. If you want an alternative to mainstream social media, they exist and aren’t as popular, but since we’ve agreed that popularity isn’t monopolistic... who cares?


The 1990's notion of "monopoly" is as outdated as farmers owning the sky above their farms. The power in media in 2019 comes from the combination of virality/discovery+monetization. A small number of people controlling virality/discovery+monetization can seek to effectively control all speech that matters and seek to control public thought. Doling out virality/discovery+monetization on an ideological basis, contrary to the wishes of large numbers of people in public basically nullifies meaningful Free Speech in 2019.


Nobody's stopping these "large numbers of people" from swapping URLs.

They may get stopped from having their message show up in the "recommended videos" column on YouTube, and that's fine. Nobody has a right to be in that column.

(When the man said "The revolution will not be televised?" This is what he was talking about. ;) )


Nobody's stopping these "large numbers of people" from swapping URLs.

In 2019, you might as well restrict those guilty of wrongthink to standing on soapboxes and distributing pamphlets. Everyone you don't like will be restricted to the technology of 20 years ago.

(When the man said "The revolution will not be televised?" This is what he was talking about. ;) )

Basically, in 2019, if you don't have virality+monetization, you don't have the franchise.


In 2019, you might as well restrict those guilty of wrongthink to standing on soapboxes and distributing pamphlets.

Putting aside the hysterical Orwellian reference, the world would be a better place if everyone from ISIS to Nazis had to go back to shouting on street corners. This implicit argument you keep making, about parity of amplification being equivalent to free speech, is ludicrous. Given how often people in this thread alone have confronted you on the issues with it, and the cagey nature of your responses, it’s increasingly difficult to assume you make this argument in good faith.

You can believe that people are owed a Youtube channel and a Facebook page, but at least recognize that no law or principle in law supports that belief.


First, this is cherry-picking US-only data. Social networks are not regionally-locked monopolies, and QQ and others have decent marketshare:

https://www.statista.com/statistics/272014/global-social-net...

To further Pharmakon's point - being a popular product on the market doesn't mean monopoly or lack of competition. A monopoly is a company with dominant marketshare, but not all companies with dominant marketshare are monopolies. Facebook has competition.


"Monopoly" as formulated back before the 1990's is outdated in terms of the issue of Free Speech in 2019. Mere dominant marketshare enabled by network effects can be used to enact effective censorship. Censorship doesn't need to take the form of absolute censorship -- and hasn't since the 1980's. (See Manufacturing Consent)

The power of viral trending and discovery, combined with the synergistic economic advantage of monetization amount to an insurmountable advantage in 2019. A small group of Silicon Valley oligarchs reserving this power for only the opinions they like is absolutely dangerous. Calling the situation "fair" would be like calling "fair" a race between someone in a Tesla and someone on horseback.

Opinions guided by such power have nothing to do with debate or seeking truth. That's all just power controlling discourse, and through that, controlling society's thought. That's like declaring a vigilante group to have "won" the public mandate, because they have all the guns, and no one speaks up against them. If SV's CEOs really want the right ideas to win in the marketplace of ideas, they need to create fair stages and level playing fields. They need to fully acknowledge the human right of Free Speech. They need to stop coercing.

It's the bias and use of power (essentially corruption) which fuels the horrible voices.


I don’t disagree; but, it could be argued that social media giants are equivalently bizarre institutions, that also blur the line between private property and public Commons.


How is it much different from monopolistic digital 'company towns'?


For starters: you live in the company town, and your continued employment at the company is deeply tied, both geographically and practically, to that relationship.

Competing digital empires are always a click away.


You could move between company towns just like you can move between digital platforms. In fact, moving from Facebook to another platform is probably more difficult than moving to a different company town, given the fact alone that there are fewer than a handful of competitors, if any at all.

Facebook really is nothing other than a company town of the digital space, or to put it even more strongly, Facebook is almost the equivalent of a digital nation state, with its own market, its own rules of conduct, its own laws and contracts, and so on.

To circumvent the speech issue by acting as if Facebook is your neighbour's mom-and-pop business completely distorts the nature and influence it has over everyone's life, and the amount of hardship you encounter if you're forbidden from speaking on its platform.


We're in the context of https://en.wikipedia.org/wiki/Marsh_v._Alabama right? If I understood correctly, Marsh didn't live in Chickasaw.

The ruling was about the monopoly over the receivers, not over the sender.

In other words, my friends/receivers aren't a click away from migrating from one digital monopoly to another 'competing digital empire'.


Except when Google tried it, they failed.

Not even size will save you.


Of course, the follow-ups are important, too:

https://en.wikipedia.org/wiki/Lloyd_Corp._v._Tanner

I'd put Facebook more in the "mall" example than the "company town" one.


The mall got out of it because there were many alternate places to distribute messages, so banning mall speech didn't significantly stifle overall speech. That probably still also applies to Facebook today, but not by a wide margin.


Except the primary purpose of a mall is retail business, not message distribution. Facebook's primary purpose (as far as its users are concerned) is to facilitate social interaction. It's almost exactly analogous to the village commons or town square. Or maybe, to a "place of public accommodation." I was originally sympathetic to the "it's their platform, they can do what they want" argument but this has given me something to think about.


I thought Facebook was a living room now, not a town square.


One could argue that this implies an antitrust issue, no?


If places like Twitter didn't exist, sure.


The group of SV CEOs has a highly disturbing combination of 1) too much uniformity of viewpoint, 2) too much power over discourse, and 3) too much collusive inter-communication.

EDIT: Given their uniformity of viewpoint and their power, I would expect moral, intellectually honest, and highly self-aware people in such positions to act with forbearance and not attempt to use their power to coerce the public. They should be seeking the consent of the weaker. They should be concerned about the consent of the governed.

Instead, they are revealing just what they think of the rest of us.

Manipulation is not a substitute for consent!


If someone's a douchebag and gets banned from all the malls in a city, that doesn't necessarily mean the malls colluded.


SV CEOs go on interview shows and talk about getting advice from each other. We're not only going by results here, but by self testimony. They also demonstrate the same idiosyncratic biases -- which very often involve the mislabeling of beliefs they don't like. They behave in concert, often taking similar actions on the same target, all on the same day.


> SV CEOs go on interview shows and talk about getting advice from each other.

This describes basically every big company CEO in existence, and hardly proves the idea of collusion.

> They behave in concert, often taking similar actions on the same target, all on the same day.

That's hardly surprising. Media attention has a habit of snowballing, even absent active collusion. It's pretty easy to mistake the two.


I agree with you that they probably aren't explicitly conspiring or coordinating with each other, any more than millennials explicitly conspired or coordinated with each other to all start wearing skinny jeans. But, to stdcredzero's point, there doesn't necessarily have to be an explicit conspiracy for there to be a legitimate concern here.


This describes basically every big company CEO in existence, and hardly proves the idea of collusion.

I highly doubt that excuse would hold up under discovery.


I highly doubt Zuckerberg and Dorsey are emailing around conspiratorial "let's ban the frogs-are-gay guy" missives.


I don't doubt something interesting would come up with PayPal, Patreon, and MasterCard.


One prominent reporter calling a couple payment processors and asking the same question - "I'm doing a series on why you help <bad person> fund their operations" does the trick just as easily as a big (extremely risky) conspiracy.


Groupthink which has such an effect should be considered scarier than a conspiracy. In due time, history will find both at play.


This is an excellent point about collusion. The Alex Jones debacle proves conclusively that the media giants act in concert to censor political dissidents.


https://www.lawfareblog.com/ted-cruz-vs-section-230-misrepre...

There are good legal arguments for why online publishing is excluded.


Large privately-owned platforms carry so much of today's societal discourse that censorship and deplatforming in those spaces has the same impact as governmental censorship, for most intents and purposes. Even if these corporations do not constitute what we might traditionally call a "monopoly", they control a large enough share of traffic to have significant impact when they take artificial actions. That sizable impact is exactly why they are being targeted (not just on this topic but others) by activists or other agents pushing for deplatforming/censorship favorable to their causes.

The big risk is this: when only a few entities funnel so much societal discourse, control our communication infrastructure, or process payments, those entities making arbitrary decisions about whom they serve has impacts and risks similar to the government imposing the same restrictions through law. These companies should not act as thought police and should not impose their own personal governance above what is minimally required by the law. Nor should they rely on the judgment of an angry mob to make decisions.


"The same impact as governmental censorship, for most intents and purposes." False and dangerous. Government censorship means imprisonment and other forms of punishment, sometimes including death.

This doesn't mean your point is wrong (I don't think it is), but it's more effective to make your argument on its own merits rather than making a false equivalency.


In practice, it's going to be an arbitrary mess. They draw a series of vaguely defined lines across highly nuanced issues, make their fancy press release, and then send it off to be implemented by underpaid and undertrained workers (in 3rd world sweatshops, last time I checked).


>3rd world sweatshops

Jeez man, Arizona isn't that bad, I wouldn't call it third world [1]

Jokes aside, you are correct that Facebook does have lots of third-party contractors that do moderation located in underdeveloped countries and Facebook seems to exploit a lot of their contractor labor, regardless of where that labor is located.

[1]: https://www.theverge.com/2019/2/25/18229714/cognizant-facebo...


Why not ban all racist nationalism entirely? What is unique about white nationalism versus Chinese nationalism in China, or black nationalism in South Africa? There are many racist movements all around the world.

Only banning white nationalism is giving fuel to the white nationalist belief that whites are being unfairly targeted. That's not something we should encourage IMO.


The notion is that the history of European colonialism and imperialism "justifies" black and Chinese nationalism as a countermeasure to white supremacy.

There's a "punching up" vs. "punching down" model that pretty much explains most "hypocrisy" on this issue. For instance, even subtle hints of anti-Semitism expressed by white Westerners (e.g. being anti-George Soros in particular) are condemned far more vehemently than anti-Semitism expressed by black people or Muslims, because of the perception that Jews, being perceived as predominantly white and Westernized themselves, are more privileged than blacks and Muslims but less privileged than white Gentiles.

I don't agree with this mentality myself because the tides can turn a lot faster than your ideological model of privilege and oppression can change to reflect it, but it's similar to the ideas that are commonly expressed on the left and serves as a decent predictive model of their attitudes.


Facebook found specifically that such power dynamics are exceedingly contextual, though the role of Western European culture as effectively globally predominant puts an additional burden on anyone arguing for Western European superiority.

Radiolab covered this recently:

Post No Evil | Radiolab (published Fri, 17 Aug 2018): http://www.wnycstudios.org/story/post-no-evil/

Back in 2008 Facebook began writing a document. It was a constitution of sorts, laying out what could and what couldn’t be posted on the site. Back then, the rules were simple, outlawing nudity and...


Just because white nationalists are white, doesn't mean that they've received any kind of largesse because of it. If you look at the demographics of white nationalists, it will be largely overrepresented by poor whites from marginalized, backwater communities.


Oh, for sure.

Let's talk about the word "cracker". In the US, it's a derogatory term for white people that was coined by black people. The etymology is that a "cracker" was a white man who was hired by the plantation owner to crack the whip at the black slaves. Even in white society, these "crackers" were pretty much what we'd call "white trash"--they had effectively zero social status, received no largesse, and lived in poverty themselves. Literally the only thing they had going for them was that they were white, which meant they were the ones cracking the whips and not having the whips cracked at them. Which only made them all the more eager to crack the whips and do the dirty work of oppressing black people. Being white was literally the only thing these people had to be "proud" of. And so, the most evil people weren't necessarily the plantation owners (who were evil in a detached, hypocritical way), but rather the crackers (who were evil in a directly hateful, sadistic sense). A plantation owner could come around to the idea that slavery was wrong, as Washington and Jefferson did, and merely live in a state of complicated hypocrisy. A cracker is a pure mass of utter racial animus.

The same dynamic remained after the abolition of slavery, and it's the same dynamic that you describe.


I don't think you understand the concept of "privilege" very well.

Privilege is basically the logical inverse of discrimination. If driving while black means getting pulled over more (discrimination), then driving while white means not getting pulled over more (privilege).

Hope this clarifies how even poor whites are "privileged".


>The notion is that the history of European colonialism and imperialism "justifies" black and Chinese nationalism as a countermeasure to white supremacy.

One could equally well point to the anti-colonial resistance in the Balkans and the Caucasus against Ottoman imperialism, the result of which was massacres and genocides up until 1925. You could still buy white sex slaves in Istanbul in 1915: https://en.wikipedia.org/wiki/Slavery_in_the_Ottoman_Empire#...


Agreed. There isn’t necessarily a strong black or Chinese nationalism movement in the US, but it does make sense to target the behavior of ethnic nationalism as opposed to any specific brand of it. It’s:

‘We don’t like it when individual ethnic groups in our multi-ethnic empire presume to have privileged rights over all the others’

Vs.

‘We don’t like it when just white people do it.’


Chinese nationalism probably should be banned in China (but not the US) for the same reason that white nationalism should be banned in the US: the racial designations "white" and "black" are completely arbitrary terms, not grounded in any scientific fact, and are social constructs based around an in-group (whites) wielding political, economic, and cultural power over an out-group (blacks). "Irish" is an ethnicity. "Lithuanian" is an ethnicity. "White", on the other hand, is most definitely not an ethnicity, and identifying as such implicitly supports the present racist status quo, because the white in-group already wields overwhelming power in American society[1]. White nationalism asserts that only those Americans who identify as white are "real Americans", and therefore explicitly advocates for the exercise of white cultural and political dominance at the expense of all other out-groups. And at its most extreme, it advocates genocide as a means of accomplishing its vision.

1. http://www.mdcbowen.org/p2/rm/reports/cerd.pdf


That seems incredibly arbitrary.

1. a Chinese nationalist can post about China while living in the USA.

2. You don't need institutional power in order to do harm to other people.

3. How do you handle cases where power is evenly split? 60/40? 70/30? Who gets to be racist?


Try to stop thinking like an engineer and read my comment as the elucidation of a philosophical position on race rather than a Facebook policy rule.


Why can't China be for the Chinese? Sure, don't kill anybody. Don't hate them. But why can't Germans want Germany to reflect their German ancestry?

The USA is a unique case in the world and should be the exception, not the rule.


Chinese nationalism is behind the cultural genocide of the Uyghur people (over 1m in camps)[1], brutal suppression of Tibetan political movements[2], and mass murder of political prisoners for the purpose of organ-harvesting[3]. The CCP uses cultural nationalism to get away with brutal suppression of non-Han ethnic minorities. This should not be overlooked, but it should also be obvious that this doesn't exonerate or excuse American white nationalist movements in any way.

   1. https://theglobepost.com/2019/01/17/cultural-genocide-xinjiang/
   2. https://newint.org/features/web-exclusive/2016/02/04/chinas-oppression-of-tibetans-has-dramatically-increased/
   3. https://www.news.com.au/lifestyle/real-life/true-stories/the-reality-of-human-organ-harvesting-in-china/news-story/14d3aa5751c39d6639a1cc5b39f223b7


It's taxonomically arbitrary but not completely arbitrary, and in fact is grounded in scientific fact.


The terms attempt to classify people based on an arbitrary collection of visually distinct phenotypes rather than fundamental genotypes -- however, the One-drop Rule (https://en.wikipedia.org/wiki/One-drop_rule) works only on the basis of the latter. This inconsistency alone makes the terms scientifically irreconcilable.


Well, that straw man is obviously useless, which is why the bad old days had an intense vocabulary for all the different proportions of mixed race. Research today shows that people's classifications generally line up with genetic ancestry. So, for example, doctors are well served to judge racial/genetic makeup by appearance when diagnosing and treating patients. Likewise, if you want to make an ethnostate or the opposite, it's certainly a useful concept. (American immigration laws by design do the opposite, using national quotas as a proxy for race.)


One step at a time. Facebook is based in the US so it makes sense to tackle problems occurring in the US first.


>Only banning white nationalism is giving fuel to the white nationalist belief that whites are being unfairly targeted. That's not something we should encourage IMO.

Well, it's easy to believe in something that is simply true.


At a high level this is fine but it doesn't address the issue of what kinds of nationalist content are ban-worthy.


"White nationalism" is just a rebranded, softer term for "white supremacy". In this phrase, "nationalism" doesn't mean mere nationalism; it signals literal Nazi ideology that might not be present in other forms of nationalism.

I'm personally a fan of them taking a more fine-comb approach to banning certain forms of expression on the platform, rather than just saying blanket "The idea of nationalism is banned".


The obvious questions are and always have been "what constitutes 'harmful speech'?" and "who gets to decide?". Sure, we can agree that "white supremacy" is harmful, but in the last 5 years we've seen the definition of white supremacy slide from "people who espouse views that whites are better than nonwhites" to "people who espouse views that are not in line with progressive orthodoxy".


And what about more complex things? Twitter and Twitch bans people who question some of the modern beliefs on gender ideology, even if they bring up the discussion respectfully.

Not to mention, it just means extremists go to platforms that welcome them (Voat, Gab and others). That means platforms that used to have a big mix of views, like Reddit, are now horribly one-sided, and those other platforms are all extremist.

Keep in mind, we're not seeing the "right" or "moral" view. We're seeing Twitter/Facebook/Reddit's CEOs/Mods/deciders views.


Twitter now considers telling journalists to "learn to code" to be hate speech - we have already slipped down that slope even on the simple things.


Twitter said that was part of a "targeted harassment campaign", not hate speech.


Does it matter what section they use to classify it? The fact is that they ban it, and much more besides. When important topics are debated, even the most respectfully formulated argument can be deeply hurtful to the opposing side. For example, if you respectfully tell a religious person that their religion is bunk and they are wasting their time on a bunch of fairy tales, it could be deeply hurtful. Should we ban all atheists from social media?

Banning such discussion very soon precludes any disagreement with an increasingly narrow set of orthodox doctrines. Of course, Twitter has a fully legal right to do exactly that - but what's the use of such a platform? And if all major platforms do that - what happens to a society where important questions cannot be publicly discussed?


Yes it matters, because it isn't banned based on the content, but on the context. The context being the repeated harassment of a single person by many.

In your example no one is going to get banned for that, but they will get banned if after getting blocked they create a harassment campaign of hundreds of people to repeatedly tell the person their religion is bunk. There is no "discussion" there, it is purely for harassment reasons.


In the case of "learn to code" the context was the content, not harassment.


Do you have a citation for that? All of the reports which I saw were that specific people harassing specific users were flagged as violating the abusiveness clause of the terms of service. Nobody said anything about hate speech or even non-targeted rude remarks before the usual right-wing grievance machine started whining about a private company enforcing its contractual terms.


> we've seen the definition of white supremacy slide from "people who espouse views that whites are better than nonwhites" to "people who espouse views that are not in line with progressive orthodoxy".

No we haven't. We've seen white supremacy being legitimized at the highest levels, and as a result white supremacists claiming to be "victims" openly because they feel emboldened by the election of white supremacist politicians.


you've watched too much tv


[flagged]


Please define "progressive orthodoxy" and how it relates to white supremacy.


Not my term. I wouldn't have a clue how to define it.


[flagged]


"If you need to debate the definition of 'witch' to make sure it excludes you, you're a witch."


Was your friend Franz Kafka?


>"people who espouse views that are not in line with progressive orthodoxy".

Nah disagree there. We'll see if Facebook starts arbitrarily banning libertarian content or whatever, but I don't think this spells the doom that conservatives think it will for the bits of their ideology that aren't racist.


You're proving the exact point of the comment you're replying to. You've proceeded to qualify whichever parts of a particular ideology you decide upon as "racist", and exempt everything else.


I mean, white nationalism is racist. Do you disagree?


Of course it is, provided “white nationalism” means “white nationalism” and “racism” means “racism”. The problem, of course, is that lots of people use these terms to refer to moderate political views in a sort of motte and bailey equivocation.


Well call them out if you see that, I guess. Or if you see Facebook doing that. Seems like a non-issue for me.

This is the world Muslims deal with every day - radical Islamic terrorism gets shouted about, and suddenly Muslims have to say "so er, we're worried about being labeled as terrorists just arbitrarily." Way I see it this will help, because another class of people will now be aware of that distinction issue and perhaps be more sensitive to the differences between, say, a Muslim and a radical Muslim terrorist.


> seen the definition of white supremacy slide from "people who espouse views that whites are better than nonwhites"

And even there it is debatable - does saying "people with pale skin are better at producing vitamin D in low-light conditions than people with dark skin" make me a white supremacist?


If you're saying it in a conversation about whether or not we should allow non-white people to be in a country with low light, yeah. There is no objective, neutral statement.


And if I'm not saying it in a conversation about creating an ethnostate, but perhaps one about seasonal affective disorder, or about vitamin D deficiencies? My point was that the GGP's definition, even the most restrictive one, was arguably still too broad.

(Or perhaps people need to become more accepting of a distinction between better at x and better. Michael Phelps is certainly better at swimming than I am, but that is mostly orthogonal to whether he is a better person than I am)


You're re-contextualizing the parent's statement and thereby changing its meaning. That's unfair and dishonest.


That's not what is meant by white supremacy and you damn well know it.

Stop making bad imaginary corner cases and stick to the actual problem.


> That's not what is meant by white supremacy ... stick to the actual problem.

That is precisely the actual problem--the term is loosely defined and thus lends itself to partisan equivocation. For example, lots of people assert that the MAGA hat is a symbol of white supremacy (and indeed it is, albeit only for those people who sincerely believe it to be).


So what about black nationalism? The Dallas mass-shooter, Micah Xavier Johnson, was reportedly radicalized in part by various black power Facebook groups:

https://www.latimes.com/nation/la-na-dallas-police-shooting-...


Well... the FBI has a specific initiative targeting them (BIE). And while they are a problem, they aren't nearly as big of a threat.


Many are rightfully making the argument that they are a common carrier because of their monopoly in the space. Right now there really isn't a viable competitor to them. The same is true with Twitter etc. because they are all substantially different products.

IMHO I would support Facebook's right to do this if: 1. They gave up the safe harbor protections and were subject to the same liabilities as news organizations. 2. Or they were broken up.

Otherwise, this is akin to kicking someone out of the town square without the ability to move to another one... and that is not free speech.


I'd argue that there are lots of alternatives. But the smaller and less exclusive ones which tolerate extreme content often have reputations like Mos Eisley spaceport, so while people can gather and organize there relatively freely it seriously limits their fundraising and recruitment outreach.


It's OK to say you believe that you are pro free-speech only in cases of government/military/police censorship. It's not OK to redefine free speech to mean "government censorship", especially not to call your position "very nearly absolutist".

You believe that private companies have the moral right to censor and filter user-generated content. That's your opinion/morality. But it's not "free speech absolutism" in the common English language.


> It's not OK to redefine free speech to mean "government censorship"

I beg to disagree, and in fact I believe you are the one who is redefining free speech. "Free speech" as I have known it has always referred just to government restrictions, because the government is the only entity that has the very unique punitive powers that it does. Indeed, a strong corollary to free speech, freedom of the press, by definition means that a (private) journalism organization is free to publish (or not publish) whatever it wants.

I do agree that the relatively new, immense power of large technology companies with respect to how we communicate may require a sense of updating what we mean by "free speech", but this would be an update - it is not inherent to the original meaning of free speech.


You are referring to the most common definition. There are, in fact, many.

The parent comment is referring to absolutist FoS. This form of FoS implies that you do not have the right to walk away from a discussion: the speaker has "the right to be heard" and you do not have the "right to ignore." If someone were to post involuntary pornography of you online, that would be their FoS and you wouldn't be able to issue a take-down. This type of FoS is ludicrous because it sacrifices so many other freedoms to achieve purity in a single freedom.

The common definition of FoS deals mainly with government censorship and fighting words. The government must step in to protect the speech of someone if they are not violating the rights of others; conversely the government must stop speech that might result in the rights of others being violated (e.g. promoting violence towards a group). This approximates the UDHR definition, which is basically the "dictionary definition" of FoS (even this definition has been amended).


Free speech means freedom from censorship. And from https://en.wikipedia.org/wiki/Censorship

> Censorship is the suppression of speech, public communication, or other information, on the basis that such material is considered objectionable, harmful, sensitive, or "inconvenient". Censorship can be conducted by a government, private institutions, and corporations.

From Merriam-Webster (https://www.merriam-webster.com/dictionary/censorship), where there is no mention of 'censorship' being a government-specific concept:

> the institution, system, or practice of censoring

From Oxford (https://en.oxforddictionaries.com/definition/censorship), where again there is no mention of 'censorship' being a government-specific concept:

> The suppression or prohibition of any parts of books, films, news, etc. that are considered obscene, politically unacceptable, or a threat to security.


> Free speech means freedom from censorship.

As I understand it, it means freedom only from prosecution, but not persecution or censorship. I agree with GP in that this is their platform and their rules.


If you take all of this out of context, sure I see where you are coming from. But in context this is simply being pedantic.

Facebook is a US based company. When talking about Freedom of Speech in America, it's almost exclusively used to refer to the Constitutional protections around Freedom of Speech, as that is the only codified Free Speech parameters for the country at large. So in context, OP would be saying that they are near absolutist on Freedom of Speech as it relates to the Constitution.

I don't know many people who would interpret Freedom of Speech as a more generalized concept unless it was defined beforehand.


The definition of free speech has historically also included the right not to be compelled to produce speech against your will.

It could be argued that requiring companies to reproduce speech against their will is more a violation of "free speech absolutism" than defending their right not to do so.


No, it is your redefinition that is the new one. And the right-wing is not standing on consistent principles when they try to self-servingly redefine freedom as the individual's freedom to appropriate a company's private property only for the case of popular social networks and nothing else.


And the right-wing is not standing on consistent principles when they try to self-servingly redefine freedom as the individual's freedom to appropriate a company's private property only for the case of popular social networks and nothing else.

False. As the Marsh case demonstrates, Free Speech takes priority over property rights. No inconsistency there.


> As the Marsh case demonstrates, Free Speech takes priority over property rights.

As the "subsequent history" section of the Marsh v. Alabama Wikipedia entry indicates, not always.

https://en.wikipedia.org/wiki/Lloyd_Corp._v._Tanner


A mall is just one subset of the shops in a town. It's somewhat secluded from the open streets. That's a far cry from trying to reserve an entire viral/exponential mechanism for your own political side. That's more akin to only one political party getting to use television and radio, then claiming things are "fair" and free speech is upheld because the other side can still distribute leaflets. This is exactly what the SV oligarch class is seeking to do. They are seeking to reserve all the power of virality and discovery by search, in synergy with monetization -- all for their own political side.

That's like telling the part of the population you don't like they don't need cars and mass transit, they can still just walk. They want to rig the media landscape, such that their side has 1000 times the online voice as the other, then declare they've "won" because they can't hear the others. All the while, huge numbers of people find their comments and reviews are erased, while the content creators they want to hear from are de-platformed, kicked off sites, and demonetized.

No, that's not winning in the marketplace of ideas. That's just forcing people with economic power. It's as disingenuous as anything ever propagandized in history.


> A mall is just one subset of the shops in a town.

As is Facebook a subset of social networks on the Internet.

> That's like telling the part of the population you don't like they don't need cars and mass transit, they can still just walk.

We do that. License suspensions are a thing.

> No, that's not winning in the marketplace of ideas.

Sure it is. The idea that "Nazis are harmful" has won a bit of a battle here. May it continue to.


Great! Then you have no legal grounds to oppose me when I hold a political rally on your front lawn tomorrow. See you then!


I fully support scrubbing hateful content from any place where it appears.

I think people should take note though, that this is the exact point in time when the machinery for censoring and filtering the internet is being implemented. Although hateful speech is a great use case for this technology I can't help but wonder what it will get pointed at next. This isn't the kind of development that can sit back idly after performing its primary function.


The arguments for censorship and "deplatforming" follow the fashions of the times. In one era, the notion of screenwriters inserting subtle communist propaganda into Hollywood films was terrifying enough to motivate an industry blacklist. In an earlier era, advocating resistance to the draft was considered dangerously treasonous. Somehow these threats always seem less severe in hindsight than they seemed at the time, though.


I think that's an interesting argument, but I also think you need to consider the type of tooling and the reach of content in today's world. The apparatus that we have for tracking and identifying individuals and their media consumption habits gives unprecedented control over perception at a very granular level.

It also sounds like the events that you mention were dealt with quickly and severely, maybe they would have been more severe threats given the space and time to form more completely.

My fear is that we'll get to a point where people won't know about the suffering or injustice that is occurring in the world without physically going to the places where it's happening. Or that people will have to go to extreme lengths or break the law simply to escape their filter bubble.

/tinfoil hat


> It also sounds like the events that you mention were dealt with quickly and severely, maybe they would have been more severe threats given the space and time to form more completely.

In the case of draft resistance in the world wars, perhaps. We did win those wars within 2 and 4 years of the US entering them, respectively.

In the case of the Hollywood blacklist, the Communist screenwriters became a cause celebre and, once the anti-Communists went overboard, were rehabilitated within Hollywood. This is despite the fact that they were members of an organization under the control of the USSR under Stalin. To some extent there was even an informal counter-purge of Hollywood conservatives who supported the blacklists and investigations.

> My fear is that we'll get to a point where people won't know about the suffering or injustice that is occurring in the world without physically going to the places where it's happening.

Ah, back to the last hundred thousand years of human history!


The problem is the platform might one day decide you are the witch.


"This isn't the kind of development that can sit back idly after performing its primary function."

To me, this sounds like you _don't_ fully support this development. And rightly so. It'll soon be technology in search of additional applications.


They've left a lot of room for the details to be stupid.

1) In particular, what about black nationalists, Chinese nationalists, etc, etc?

2) One of their main justifications is "conversations with members of civil society and academics". It seems unlikely that these conversations are with politically neutral groups, particularly since academics focused on race relations tend to self select for slightly off-beat views of the world.

Also, to poke a touchy subject, white nationalism includes some much milder ideas than white supremacy does, like 'white people should maintain their majority in majority-white countries'. A reasonable and rational person could hold that view after reviewing what happens to minorities in any country. Look at the Jews in Germany or the white farmers in Zimbabwe. Whites don't have a magical pass to be free from racial discrimination and ethnic cleansing.

Whites should be allowed to organise around race like everyone else, as long as they keep it civil and nonviolent.

[0] https://en.wikipedia.org/wiki/White_nationalism


And when Richard Spencer asks if we should maybe just send every black person to Africa and create an all-white America, he's being civil and nonviolent.

Which is the problem.


When I say that we should send every non-Indian person back to where they came from and create an all-Indian North America, I'm also being civil and nonviolent. Yet I don't see Facebook banning that type of speech.


Because you haven't done that, can't point to any groups which have done that, and haven't bothered to file any complaints about these nonexistent groups with Facebook moderation.


I absolutely have during the Dakota pipeline protests.


There was a time when the FB newsfeed was chronological, and just showed what your friends posted as they posted it.

It seems like less blame for the world's problems could be put on FB if it was just treated as a dumb bulletin board. Why not go back to Facebook being a "dumb pipe?"


How does that change Facebook's dilemma? Facebook faces PR pressure because they host the content at all, serving it chronologically or non-chronologically makes no difference. Who is complaining that the problem is really that Facebook serves white nationalist content in the wrong order?


Facebook went public and now has an obligation to shareholders to make money, so that means replacing the dumb pipe with highly targeted ads and articles.


Yes, FB is a private company and all that. But they have clearly been censoring speech and cannot be protected by common carrier rules, as they are a publisher.

This change of designation, of course, would be a significant financial blow, once they started losing lawsuits. But you can bet that if one considers their gov't influence and status as a money-spigot for leftist politicians, they have no fear of this actually happening.

It's only the little people who have to worry about running afoul of the law---not HRC and not FB. Welcome to the late Roman Empire.


Hmm, I don't even know what "white nationalist" means. Nationalism is often confused with chauvinism. The difference is rather important. Nationalism is a natural and mostly positive attitude: a person lives in some place (country, city, village), surrounded by people with whom that person spends most of their time, talks, and trades. So it is natural to be more eager to help those people around than someone living on the other side of the globe.

On the other hand, chauvinism is a negative attitude, this is misunderstood nationalism. The good example is the following.

Let's say I have kids and my neighbour too. Obviously I want the best for my kids, so I take care of them, send them to a good school, so they have the best possible education and perspectives - this is "nationalism". It does not mean that I wish anything bad for my neighbour and his kids.

Now, chauvinism means that I am going to hurt my neighbour's kids, since they are competition for mine, and this is obviously bad.

So, coming back to the FB declaration, "white nationalist" does not make much sense. The colour of the skin is hardly a predictor of common interests. I assume FB means chauvinism, otherwise they will have to ban all international sport competition news (fans are naturally nationalistic since they support their national team), and they will have to also ban all national holidays, including the 4th of July, as they are nationalistic by their very nature.


> So, coming back to the FB declaration, "white nationalist" does not make much sense

That's because you are trying to reason out what it might mean as if it were a simple combination of two words whose meaning should be transparent from the constituents, rather than a name for adherents to the ideology named “white nationalism”, which isn't “white” modifying “nationalism” but rather a construct with (very loosely; it was originally a bit of a PR term for those selling it) the same relationship to the idea of a (putative) “white nation” with specific traits that “nationalism” has to “nation”.

But, you know, you don't have to guess about things like that, there are search engines and decent general references at hand any time you have access to the internet, so even if you didn't know the meaning, you could easily look it up and find out. [0]

[0] https://en.m.wikipedia.org/wiki/White_nationalism


Does this mean Facebook can now be considered a publishing platform, and thus liable for content put on the site?


It should.


"Let’s go all the way back to 1996 and talk about Section 230. I think historians are completely in agreement that this is the law that made the internet what it is today.

----

We thought it was going to be helpful. We never realized it was going to be the linchpin to generating investment in social media. We envisioned that the law would be both a sword and a shield. A shield so that you could have this opportunity, for particularly small and enterprising operations to secure capital, and then a sword [by allowing them to moderate without facing liability over the practice], which said you’ve got to police your platforms. And what was clear during the 2016 election and the succeeding events surrounding Facebook, is that technology companies used one part of what we envisioned, the shield, but really sat on their hands with respect to the sword, and wouldn’t police their platforms."

- Ron Wyden on Section 230

https://www.theverge.com/2018/7/24/17606974/oregon-senator-r...

I see a lot of people in this thread expressing the view that by moderating their content Facebook is becoming responsible for it. To the extent that's true under the law, it's not necessarily a great idea. Certainly the intent from a lawmaking perspective when some of the underlying framework was written is that this wouldn't be the case. You could moderate and remove bad actors from your platform without fearing legal persecution for it.


This ethnocentric approach to censorship is terrifying. Somewhere in Facebook is a small group of people who have the hubris to say they can decide what's right and what's wrong, and are enforcing these views on some 1/9th of the world's population. By outright banning white nationalism, they've essentially shown us that they're willing to play ideological kingmaker if anything enters the current news cycle that is viewed as evil.

How are people ok with this? Whatever your opinions on the subject matter that was banned today are, are you honestly ok with the idea that people are now deciding what billions are allowed to see in broad strokes? History is filled with examples of controversial viewpoints being viewed as societal cancer only to be later accepted in some form as sane. Are we really going to now decide that we've reached final evolution and can start judging wrongthink?

I view this as part of an alarming trend of people simply getting tired of seeing people they disagree with continue to exist and wishing they and their speech were just gone. It terrifies me that this is starting to get political will and we’re shifting to censorship as a solution.


> With outright banning white nationalism, they’re essentially shown us that they’re willing to play ideological kingmaker if anything enters the current news cycle that is viewed as evil.

They have been doing this since they launched. You could not post semi-nudity even for educational purposes. Remember the breast-cancer debacle? You couldn't post anything that incites violence. The list is fairly long and is often referred to as their "Community Guidelines".

Twitter has rules. Reddit has rules. Most online communities do in fact have rules of some sort and in the end it's up to humans to patrol it.


If white nationalism is 'later accepted as sane', I don't want to be alive in that world. But chances are, I won't have a choice in the matter.


This seems very America-centric. I wonder what it means for the rest of the world.

I guess a Finnish person living in Finland can now no longer use Facebook to express a wish that the demographics of his home country stay unchanged?


"I guess a Finnish person living in Finland can now no longer use Facebook to express a wish that the demographics of his home country stay unchanged?"

Your guess is probably correct.


Curious that a post about eliminating white nationalist content actually includes an image with white nationalist content.

https://imgur.com/a/fSm1pHa

It's perhaps an unintentional example of the difference between advocacy (banned) and education or discussion (not banned).

Very interested in how Facebook will distinguish between them when both use identical words.


Did you read the sentence right below that conveniently cropped screenshot?

| Searches for terms associated with white supremacy will surface a link to Life After Hate’s Page, where people can find support in the form of education, interventions, academic research and outreach.


I'd encourage people to please read the post before they comment. It seems that this isn't actually Facebook targeting a specific ideology, it is removing a previous rule that excluded that ideology:

> We didn’t originally apply the same rationale to expressions of white nationalism and separatism because we were thinking about broader concepts of nationalism and separatism – things like American pride and Basque separatism, which are an important part of people’s identity.

I've been plenty critical of Facebook in the past, but this post is refreshingly honest. I hope it's followed up by some discernible results.


I don't think this is correct. They seem to be saying that they allow posting about nationalism and separatism, and white nationalism content was allowed under this umbrella, but now they specifically disallow white nationalism and separatism.

Honestly though, posting about any serious separatist movement has a good chance of violating their "terrorism" community standard: "Any non-governmental organization that engages in premeditated acts of violence against persons or property ... in order to achieve a political, religious, or ideological aim"

You can't tell me with a straight face that if Facebook existed 50 years ago, Sinn Fein would be allowed to post on Facebook, because they were not the IRA.


"Free speech" comes up so fast in this conversation it stifles any other aspects, like what counts as "public". We have to figure out exactly what it means that Facebook is a massive platform with billions of users centrally controlled by a corporation and how that affects us, and free speech issues are only one aspect.

I find it so strange how people on the left and right swap argument styles on the issue of free speech, though. As pointed out by Freddie DeBoer, people on the left want to strictly define free speech as government intervention and criticism, contradicting their usual tendency to interpret law more broadly. Then the right, contradicting their own approach to law interpretation, want a more expansive interpretation of free speech to include things like corporate censorship on platforms that aren't necessarily "public" as we'd traditionally understand it, but have public-y qualities nonetheless.

Anyway, good riddance to white supremacy, but this is a transparent bid by Facebook for PR repair. Their public image was trashed by the 2016 election on several fronts, and this hardly rectifies it. It seems like cynical bullshit to me.

EDIT: It also makes me think of Freddie DeBoer's essay "Planet of Cops": https://medium.com/@jesse.singal/planet-of-cops-50889004904d


The great thing about the Internet is that anyone can publish a website and any other Internet user can access it.

The web is pleasant over here in the blogosphere. People write what they want, and readers read what they want. There’s no clamor about ads, algorithms, and censorship. Try it sometime!


As an anti-racist conservative, the most fun I have on the internet these days is arguing racists out of their beliefs on the armpits of the internet. I relish my downvotes there. I would be bummed about these beliefs being banned everywhere because I wouldn't get to exercise my Internet debate skills to make a positive impact on the world without worrying about trampling on somebody's right to an echo chamber.


What does it mean to be an anti-racist conservative, and what do you think about the conservatism movement in that you have to use that extra qualifier?


The vast majority of both conservatives and liberals are passive about the vast majority of issues. It would make sense that if you are actively seeking out specific conflicts, you can add an anti- prefix. If the parent specifically seeks out and engages with racists then they are much more anti-racist than the average person irrespective of political alignment. The conservative side definitely does not include "arguing with people on voat" as a platform.


> The vast majority of both conservatives and liberals are passive about the vast majority of issues.

While this might be true, it doesn't necessarily mean that the views held by conservatives and liberals as a whole are comparable.

I cannot for the life of me imagine someone being called a pro-racist liberal, it just doesn't compute. But someone who looks the other way and claims to be conservative? Sure, plenty of those folks to go around.

> The conservative side definitely does not include "arguing with people on voat" as a platform.

While true, I think the author recognizes that sadly conservatism doesn't often include anti-racism in its platform which is what my comment was trying to highlight.


I strongly suspect neither 4chan nor 8chan are going anywhere anytime soon.


[flagged]


Not everyone has equal abilities. Societies function best when they are set up to find a place for everyone in a color blind way where they can live up to their full potential doing whatever they're adequately capable to do to contribute to society.

On these boards many people believe all members of a given race have the qualities of their worst members. I try to dissuade them from this by showing many counterexamples, and argue that they should really focus on convincing all members of society to call out bad behavior in whatever groups they associate and have influence with, to improve society. My beef with progressives is there is too much special treatment given to one's own group just because they belong to that group by virtue of their birth, and this allows bad behavior to go unpunished and uncondemned.


> and this allows bad behavior to go unpunished and uncondemned.

This is interesting, I haven't quite heard this before. What examples of bad behavior go unpunished or un-condemned because of special treatment to a minority group?


A minority female music performer recently bragged about drugging and robbing men [1]. If a white man did this, his career and freedom would be over, but since she's a minority woman she may get a pass. Other minority women should call her out and criticize her and demand that she be brought to justice, and many conservative ones do, but there's a risk of the whole thing just blowing over because certain members of her affiliated groups are willing to give her a pass because of "her circumstances."

[1] http://archive.fo/gzsKg


No white man has ever been a black sex worker.

Robbing johns is heroic.

Edit: Upon re-reading your comment, I realize you give no account to such things as "circumstances," which means you're no empiricist. If circumstances count for nothing in your estimation, under what framework do you pass judgement?


Hmm, maybe this is a matter of vocabulary. Are you including supporting policies with unintentionally disproportionate impact in your definition of racism?

If so, you're going to find a lot of misunderstanding with conservatives when discussing the topic.


[flagged]


If you are unable or unwilling to extend civility and assumption of common good intent to a negotiating partner, a successful outcome is unlikely.


> Going forward, while people will still be able to demonstrate pride in their ethnic heritage, we will not tolerate praise or support for white nationalism and separatism.

So you can praise and support your ethnic nationality, as long as you're not white.


White is not an ethnic nationality. A person from Ireland has no ethnic or national ties to a person from Italy. The fact that the citizens of those two countries were not considered "white" 100 years ago demonstrates the invalidity of "white" as an ethnic or national group.


The way I see race actually agrees with you, but the issue is that "white nationalists" DO see themselves as a race and ethnicity:

https://en.m.wikipedia.org/wiki/White_nationalism

It's in the first sentence of their description:

> White nationalism is a type of nationalism or pan-nationalism which espouses the belief that white people are a race and seeks to develop and maintain a white national identity.


You can't understand what White Nationalism and Separatism are if you're happy to conflate them with "praise for your ethnic identity".


The problem is that there is no objective distinction. If I don't understand what it means to you, it likely means something else to me.

For example, if someone praised their white race, a lot of people would consider that white nationalism. Especially in the context of, say, a national news story.


"Only white people can be racist because they got the best color. The unfortunate need pride to feel good about themselves so we are keeping them"


I don't think that private censorship is necessarily a problem, but I do think that corporations having the authority to exercise what is a pretty effective form of muting is a problem. This is of course due to the size and prevalence of Facebook as a platform. I don't think the solution (or best solution, at least) is to break up the tech giants as Elizabeth Warren would suggest. I would instead prefer to see some form of socialized media platforms that take the market question out of the equation (does white supremacy content hurt Facebook more than removing it would?). This is of course, because, the market is not always right (even though in this case I would make a strong argument that it is).

I'm not sure what the best social media model for such a platform would be -- preferably something mostly non-anonymous like Facebook, but self-moderated (within the bounds of the law) like Reddit used to be.


In the case of Facebook: if they removed one side from debating, that side would leave, creating a smaller, more similar group of people. That would go against the goal of connecting the world and change the goal to connecting the left or the right. There are plenty of those sites. Facebook's ideal strategy is not to get involved.


Except deplatforming is shown to work against alt-right hate speech: https://motherboard.vice.com/en_us/article/bjbp9d/do-social-...

Anti-semitic speech has traditionally relied on soap boxes to spread their message, for decades before the internet even existed. That's why most academics won't engage them, because these groups are not interested in genuine discussion but in broadcasting their prejudice.


The original post referenced what would happen if one party was removed. Either one, take your pick.

The alt-right or alt-left is a tiny group. An entire party is 35-50% of the population. Rep or Dem.


VICE is trash content, not journalism. And the current debate obviously stems from the tragic results of increased alt-right hate speech, despite all the "deplatforming".


Ad Hominem.

If you want the data, check out these studies from the quoted researchers:

1: http://comp.social.gatech.edu/papers/cscw18-chand-hate.pdf

2: https://datasociety.net/output/oxygen-of-amplification/


1. Is a study on how effectively Reddit banned some users 2. Is some woman’s opinion on how people should write about Internet-related news

Neither of those sources demonstrate that banning “alt-right” content “works” unless you mean that banning results in bans.


From page 2 of the first study:

> r/CoonTown was a racist subreddit dedicated to violent hate speech against African Americans. It contained “a buffet of crude jokes and racial slurs, complaints about the liberal media, links to news stories that highlight black-on-white crime or Confederate pride, and discussions of black people appropriating white culture” [28]. Their banner featured a cartoon of a black man hanging, with a Klansman in the background [20]. It had over 20,000 subscribers at the time of banning.3 The following is a representative, highly-upvoted comment from the subreddit:

> “It would be so much easier if this [n-word] was taken outside and shot. Then rasslle up his eight or nine [kids] and shoot them so we can terminate that line of genes.”

And from the abstract:

> We find that the ban worked for Reddit. More accounts than expected discontinued using the site; those that stayed drastically decreased their hate speech usage—by at least 80%.

Would you like to try again?


“The ban worked” means something very specific in the context of one website, that does not generalize to reasonable public policy.

And yes, it borders on the tautology I pointed out before - banning content results in banned content. This fails to address the question: is banning content a good idea?


Why not read the study?

> 1.2 Research Questions & Findings

> We analyze the effects of the ban at two levels: the user level and the community level.

> RQ1: What effect did Reddit’s ban have on the contributors to banned subreddits?

> RQ1a: How were their activity levels affected?

> RQ1b: How did their hate speech usage change, if at all?

> RQ2: What effect did the ban have on subreddits that saw an influx of banned subreddit users?

> RQ2a: To which subreddits did the contributors to banned subreddits migrate after the ban?

> RQ2b: How did hate speech usage by migrants change in these subreddits, if at all?

> RQ2c: How did hate speech usage by preexisting users change in these subreddits, if at all?

>And yes, it borders on the tautology

It's the study of the before-and-after effects of banning two subreddits, particularly on users of those subreddits as they migrated to others. Where is the tautology?

> that does not generalize to reasonable public policy.

The second paper, from the person you dismissed with an ad hominem as "some woman’s opinion on how people should write about Internet-related news," directly talks about how you can generalize these ideas into combatting attacks on the Internet to prevent their amplification.

EDIT: That "some woman": Whitney Phillips; PhD 2012, English with an emphasis on folklore, University of Oregon

Would you like to try again?


Usually when people say "<news organization> is trash" when presented with an article it's an attempt to poison the well and/or resist having to read it for themselves - i.e. confirmation bias.

If VICE is trash, then it should be easy to point out where the article is wrong, right? Which studies did you disagree with?

What about Tech Crunch [1]? Mashable [2]?

[1] https://techcrunch.com/2017/09/11/study-finds-reddits-contro...

[2] https://motherboard.vice.com/en_us/article/594kq5/alex-jones...


> If VICE is trash, then it should be easy to point out where the article is wrong, right?

Yes, although there is no need to defend my opinion every time after having bothered to read this kind of junk often enough. And the well is already poisoned by past articles.

The trashy VICE article has a broad claim in its headline that suggests (and is used to claim by the post here) that deplatforming works as a general tool against alt-right hate speech. The study itself only found that within a few weeks of shutting down certain subreddits, hate speech in a few other subreddits did not compensate. They did not investigate whether the hate speech went to other platforms, decreased in general or even increased due to radicalization from this. Nor did it examine long-term effects. And to be honest, it's not particularly convincing since it's heavy on ideology (example: "These constitute our invaded subreddits. Using this method, we identify 1201 subreddits invaded by r/fatpeoplehate migrants and 275 subreddits invaded by r/CoonTown migrants.").


Alt-right hate speech is increasing because you've got a lot of people angry at the current state of economics, and a political party that is half-in-bed with the alt-right, which keeps making promises that it will fix all those economic problems.

Without deplatforming, the situation would be far worse. With a better economy, the situation would be far better.

You're in a sinking ship, and there's a few people desperately using bailing buckets to get water out. You point at the bucket brigade, and say: "You are the reason the ship is sinking."

No, they are the reason the ship hasn't sunk, yet.


Except by the standards of the mainstream media, the economy is ROCKING. Even Twitler himself says so! So if the economy is actually bad, that points to a problem with how we're even discussing the economy.

(I'm not saying it's not. Read Andrew Yang's The War on Normal People for a superb analysis of how we're talking about the wrong things, economically.)


[flagged]


Personal attacks are not ok here. We ban accounts that do this (see https://news.ycombinator.com/item?id=19505358). Please review https://news.ycombinator.com/newsguidelines.html and stick to the rules when posting to HN.


So you hate free speech. Got it.


HN is a site for speech which lead to interesting conversation. People clubbing each over the head is uninteresting.

Since you don't seem to want to use this site as intended—based on your account history—I've banned the account. If you don't want to be banned, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future.


If that side has a large percentage of people who actively advocate for violence against my people or regularly try to make it so me and mine can't work to live I'm fine with kicking them to the curb.

No one ever starved because they couldn't call me a f-----. Not too long ago the fact that I am a f----- would have prevented me from working in my chosen or almost any field and gotten me harassed or assaulted on a regular basis.


I'm glad. Now we can stop using it as a scapegoat for all of the worlds problems.


People will never stop blaming human issues on businesses and devices. It's much easier to blame Facebook and demand they change that it is to get people to change.


Attitudes towards free speech have changed a lot in recent times. I believe this is related to the rise of social media. The advent of social media has made it too easy to spread hate online. This has caused an upheaval in attitudes towards free speech. For example, consider that UC Berkeley, which gave birth to the Free Speech Movement, is now making news for banning controversial/harmful speech, such as that by Ann Coulter. The people (as opposed to governments) have decided that some censorship is in order. This is a natural evolution of societal norms. This particular evolution was caused by social media, and it is befitting to see that a social media company is now in the news for censoring harmful speech. This type of censorship, as opposed to absolute free speech, will be the new normal.


I'm seeing comments mainly focusing on "free speech" and the idea that no company or monopoly should control it. I think we are steering away from the actual GOOD that comes from banning such content. Why are we allowing people to influence others to do harmful things to minorities? Why should we give people with harmful ideas a platform to begin with? If they feel like they need to be heard, sure, let them develop their own platform and spread it among themselves, rather than influence young people who don't know any better. That's just my opinion on the matter.


There was a good Intelligence Squared debate about this recently “Constitutional Free Speech Principles Can Save Social Media Companies from Themselves”: https://www.intelligencesquaredus.org/debates/constitutional...


What would happen if Facebook introduced friction during sharing, making the decision to share more salient? For instance, click "share" and get prompted with questions about whether the source is trustworthy and whether you personally would accept responsibility if the content were misleading or misinforming, etc.


As an anarcho-capitalist, I feel very happy that Facebook is banning nationalist content. They should have done it ages ago.

However, banning all separatists?

Well. Not all separatists are created equally. Sometimes it is a question of getting out of a union for the sake of reducing the state, and this is in no way vile. Quite the opposite.


> Going forward, while people will still be able to demonstrate pride in their ethnic heritage, we will not tolerate praise or support for white nationalism and separatism.

Can anyone elaborate on this? Speaking as one of them, I'd venture to say that the majority of 'white people' in the United States are so far distant from their 'ethnic heritage' that we'd just define it as 'American'.

Mexican-Americans celebrate their Mexican heritage, Japanese-Americans celebrate their Japanese heritage, etc. What does your average 'white person' celebrate? We don't know a heck of a lot about my mom's side other than that her father is French Canadian and her mother is Polish American. Dad's side has been here since the 17th century (according to my grandmother anyways).


>What does your average 'white person' celebrate?

Holidays they imported from their European past. Saint Patrick's Day isn't originally a drinking holiday, it's a Christian religious holiday. My friend with Serbian ancestry celebrates 'Serbian Christmas' or Julian Orthodox Christmas.

Lots and lots of Christianity.


Who decides what white nationalist content is? This is just another move towards wrongthink.


Facebook, Instagram, Twitter, Youtube, etc.. are user-generated content publishers. They have the right to refuse whatever content they feel like.

This is the same as a newspaper refusing to publish an op-ed they don't agree with.


I'm in favor of values-based governance. At some point you have to say "these are our values" and if you don't share them then don't use our platform. They do not necessarily teach a specific solution to a specific problem, but they can guide you toward an internally consistent position on one. As a practical matter, you must be willing to rank-order your values. Many emerging rights issues, particularly those involving speech and religion, simply cannot be dealt with if you treat all rights as equal and inviolable. This is government 101.


Regardless of anyone's political affiliation and/or fear or trust in moderation, FB is a company operating a private service. Private, as opposed to general elections, where you and I have a constitutional right to participate. FB's service is FB's home, not our collective's. So I don't get why their new rule (long overdue IMHO, especially after the NZ shootings) is a cause for concern, criticism, or outrage.

I should also remind you that similar (equally sane IMHO) bylaws apply in the current forum - at least from what I've seen as long as I'm here.


Have you tried reading more of the discourse before proclaiming to the world that you don't understand it? Or did you skip that step?


I think it's fine if Facebook wants to ban white nationalist content. However, I don't trust their judgement in deciding what is or isn't "white nationalist" content, because as we've seen, the MSM will label anything it doesn't like as racist, and Facebook definitely falls in with that crowd. This could just be an excuse to block things they don't want because they go against their political agenda, with "but muh white nationalism" as the cover.


Blame the white nationalists for deliberately trying to turn everything into a dogwhistle, with stuff like "Operation O-KKK", where they flooded social media with fake outrage at nonexistent people saying that (HN doesn't support emoji; the OK symbol should go here) is racist, thereby turning it into a racist symbol through the medium of trolling. Nazis around the world now use it to identify themselves while calling it """ironic""" to anyone who asks.

Turning benign stuff racist is how modern racists thrive and spread their ideas, and pretending they don't exist doesn't help.


Well what can they do really?

If they leave content up after big incidents they are enabling. If they take it all down, they are promoting certain points of view over others. Both will have folks screaming.

Probably they were smart enough to do the math and determined which action likely results in less blowback. That's all.

Then again, Facebook like many public areas always was a cesspool in my opinion. Not a fan of drinking from cesspools. So they can do whatever brings in the most money and I'm sure they will.


Not everyone on the orange website is a ‘free speech absolutist’, or foolishly conflates freedom of speech with platform access.

Good on them for finally cracking down on bad ideas.


This is good. I am, though, concerned about the negative externalities that could arise. Will this lead to new and stranger commercial activities?


If you want a negative example, check out Voat. It started as a "less moderated/censored Reddit" and quickly filled with actual Nazis and other vitriol. A business cannot survive on "we accept hate speech", in the same way 4chan has never really been a business proposition; nobody wants to look like they are supporting hate speech, so advertisers won't go near it with a ten-foot pole.

IMO, this is a good thing, and implies there is at least some decency in the world.


On the other hand, a website doesn't need to be a good business proposition to be a good platform. In fact some might say being a good business proposition is outright opposed to the values needed in a good platform.

(In case it's unclear, I don't think Voat or the chans are good platforms either.)


So true, I can see that point, thanks.


"The trouble with fighting for human freedom is that one spends most of one’s time defending scoundrels. For it is against scoundrels that oppressive laws are first aimed, and oppression must be stopped at the beginning if it is to be stopped at all."


With respect, putting a bad idea in quotation marks doesn't make it a good idea. Mencken was extremely intelligent, but applied over-broadly, the concept embodied in this quote implies we should defend even murderers against the oppression of banning murdering.

I much prefer "Yes, I'd give the Devil benefit of law, for my own safety's sake."


I wish larger tech companies would follow Jack Dorsey's path and have more open communication about their policies. His debate with Tim Pool on the Joe Rogan podcast[0] especially provided insight into why they're making these type of decisions.

[0] https://www.youtube.com/watch?v=DZCBRHOg3PQ


Facebook is a private company. They are free to purge whatever they want off the platform for whatever they determine is in their best interest. And that is how it should be.

White nationalists are not a protected group, therefore it is not discrimination. If FB wanted to purge all pro Republican or all Democrat articles from the platform, that would also be in their right (though not best interest).


The direction that Facebook and Twitter are moving in is likely to provide a decent boost to smaller but federated social media like Mastodon, PixelFed, Pleroma, etc. The corporate-owned, "one-size-fits-all" media seem to come apart at the seams as they struggle to balance an impossible array of different concerns across countless communities and cultures.


Does this mean that they implicitly support any hate groups that they aren't specifically banning? Antifa, IRA, Isis and the like?


They'll have big problems in many Asian countries then, where patriotic and racist nationalist extremism is much more widespread. Think of India, China, Japan, or Korea. Hindu nationalism is not much different from "white" nationalism and racism; often it's even worse. Will they block 50% of their user base there? Where do they draw the line?


I just don't get why people can't be adults and just decide what conversations they do or do not want to participate in. Instead of the Masters of the Universe building tools to more easily manage and filter desired conversations, they enforce the "views of the anointed" at every turn.


I don't think anyone with leftist or conservative views (at least how we define them today) would have it in them to murder 50 innocents in a mosque. I don't see the slippery slope applying here either - the views held by white supremacists haven't exactly evolved over the years.


I'm still such a fence sitter on this. I am fairly close to a free speech absolutist, but I agree with others that FB is a private platform and isn't obligated to provide bandwidth to anyone.

Another concern of mine though is that just banning this stuff atrophies our ability to effectively debate it.


White racists used free-speech-limiting laws against Martin Luther King, Jr.

You start limiting free speech, that's going to cut both ways, eventually.

I've had comments on Facebook get auto-censored simply by discussing racism while trying to combat it.

Talk about shooting yourself in the foot!


I don't believe Facebook. This is just a publicity move.

If they really wanted to combat the hatred they cause, then they should change their algorithm and stop putting stories on top that cause the most emotional reactions (and clicks in consequence).


If the views can't be expressed, then they can't be refuted either. Banning this sort of content from Facebook will only foster a sense of persecution in white nationalists, which will likely strengthen their movement overall, while isolating them further from contrary opinions.

John Stuart Mill's "On Liberty" is essential reading for those who wish to silence speech they find offensive, especially corporations which may or may not have the right to regulate content on their all-but-universal, essential-to-modern-life platforms.

In the meantime we should all ask ourselves how we can move forward to a world where no single Facebook-like entity has so much say over the ability of Americans to freely assemble, speak, and publish in the modern world.


I don't buy the "essential to modern life" part. I've quit FB cold turkey, because I concluded that it was making my life worse. And I seem so far to be happier for it, and to have lost very little.


Getting caught up in the freedom of speech question is folly. In context it reads as a not-so subtle dismissal of racism. I’m not even questioning the right to free speech. But when the first response to platforms removing white nationalist content is about “just wanting to have a conversation,” people of colour tune out. Freedom of speech is a valid concern, but I think in this instance it belongs further back in the sprint queue.

In regard to comments concerning why it only addresses white-nationalism, understand that these statements are made in the context of North America’s on-going flirtations with white nationalism going back 300 years. It’s not an easy pill to swallow and speaks little about individuals regardless of colour.


Why just white? I've read that Asian nationalism (incl. in the US) happens to be nasty too. E.g., an Asian-American girl dated a white boy and had to face lots of ridiculous hate speech on Facebook.


The bigger question is, will they also ban content that would be construed as hate speech if you simply replaced the words “white people” with “[!white] people”?


This is not the bigger question.


Sure it is, because it sounds like Facebook isn’t committed to getting rid of all hate speech, just the kind that isn’t socially acceptable. Same with Twitter.


Facebook's rules have always been a bit bizarre here.

https://www.propublica.org/article/facebook-hate-speech-cens...

> One document trains content reviewers on how to apply the company’s global hate speech algorithm. The slide identifies three groups: female drivers, black children and white men. It asks: Which group is protected from hate speech? The correct answer: white men.


Facebook certainly did not invent the idea of protected classes, USGOVT and tons of news/social companies all use the exact same notion when determining what constitutes illegal discrimination and disallowed hate speech.


I really don't think this is the same interpretation of "protected classes" as the US government uses:

> White men are considered a group because both traits are protected, while female drivers and black children, like radicalized Muslims, are subsets, because one of their characteristics is not protected.

I'm fairly certain a state that tried to ban female drivers or (openly via legislation; this next one definitely happens in practice) punish black children disproportionately would find themselves in hot water with the Feds.


Came here to say this. You can't have it both ways.


I think we're about to watch us have it both ways.


Does it mean that other nationalist content is allowed?


I'd hope they would also ban antifa/black bloc content too. Those assholes have been trashing the city where I live every weekend for months.


What about other nationalist or bigoted content? Like the new Black Panther Party (Black Nationalists) or Louis Farrakhan’s comments about Jews?


Good, they try to hide behind the 1st amendment but there's only one endgame that they're after, and we know how that turned out.


Censoring them confirms their narrative and only makes young white men who've been exposed to their rhetoric pay closer attention.


I support free speech (in the truest sense).

If Facebook wasn't a platform so many use, then I might support their censorship as creating a space for a unique kind of dialogue, just like Hacker News has certain rules that help shape this unique community.

But since they're as big as they are, and since they're not the only ones censoring this kind of content, they're not just creating a unique space; they're making it extremely difficult, if not impossible in some cases, for this perspective to be voiced at all.

And that is damaging to free speech.


Not sure how I feel about this, but I do think it would be interesting to see a white supremacist challenge this in court.


Does anyone know what this means in practice?


I despise these weird nationalists, but this is the final straw; I've deleted FB over this. :)


As a person of color, I would be very grateful if Facebook would also ban material offensive towards whites, as well as hate literature towards whites -- and in the long run this should include translating non-English messages/materials to weed such things out.


They could just use their algorithm to make unwanted content insignificant. This ban will not quench the problem with nationalists; it will just drive them to other platforms, where they will become more radicalized because they only hear their own voices.


Your assumption being that Facebook presently serves as an effective platform for deradicalizing extremists.

Do you have evidence backing that up? Because from what I've seen[1], Facebook has been serving the opposite purpose: showing people things that reinforce their beliefs and trigger outrage reactions for clicks.

1. https://www.nytimes.com/2018/10/15/technology/myanmar-facebo...


Zero credit for deciding white hate is bad 15 years after your company's founding.


Better late than never.

A lot of commenters seem to be giving them negative credit for this outcome, and a number of people who claim to want racist content gone are also giving them negative or zero credit. As someone that wants racist content gone, this seems like a perverse state of affairs. No wonder they allowed it for so long given this.


Then facebook should name and fire every single person who fought this long to keep racism prosperous on facebook.

facebook is not a monolith, it is a collection of people making evil decisions every single day. When it takes them 15 years to overturn just one of their evil decisions, there still should be some penalty for it.


I was mainly talking about outsiders who Facebook presumably tries to please, like commenters in this thread. It could be that any number of Facebook employees personally wanted to get rid of racist content, but Facebook thought that would be an extremely unpopular decision. Assuming the sentiment across this entire HN thread isn't unique to HN, I can see why they might have thought this would be an unpopular decision and maybe a bit of why they took so long to make it. I'm very glad they decided to do it.


The major problem I see here is: why make a specific policy for a specific group of radicals? This problem could be tackled with the generic rules already in place, such as "it is forbidden on Facebook to promote violence".

There are many violent chauvinisms in the world that are not white, and they should be treated equally.


But it's okay to be a South African black nationalist. It's OK to pose with decapitated heads if you are of Arab descent, but it's not okay to be white, apparently.


That's exactly the point. In Africa, India, South-East Asia, and other places there is a lot of violent racism and nationalism going on.


So, who gets to decide what a "white nationalist" is? These guys?

https://freebeacon.com/issues/soros-bankrolls-unverified-hat...


> We didn’t originally apply the same rationale to [...] because we were thinking about [...] things like American pride and Basque separatism [...]

What do they mean by "American pride"? Googling it brings up a lot of marketing; I don't know how to filter through all of that to get at the idea.


Search "patriotism".


So it is not a meme, like a song or some group of active people from the past or anything like that?


Correct, they were referring to a very general concept as far as I am aware.


Good. Facebook is not a free speech platform, it's a medium like the old media. The more mature it gets, the more it will look like TV. They have no reason to pretend to appeal to the geeks or their besties in the valley; they're big enough on their own. Good. On to other ventures.


Some of the people you won't be hearing from on FB are here: https://en.wikipedia.org/wiki/List_of_X_nationalist_organiza... where X = white.


I reduced my chance of seeing toxic content by deleting Facebook.


"Facebook to pretend they're banning white nationalist content by adding a couple of keywords to a blacklist."

They will put zero real effort into this because they put zero effort into everything.
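To put "zero real effort" in perspective, here's a hypothetical sketch of what a bare keyword blacklist amounts to (the phrases and function are made up for illustration, not Facebook's actual filter), and why it's trivially evaded:

```python
# Hypothetical sketch, NOT Facebook's actual system: a naive keyword
# blacklist takes minutes to write and one typo to defeat.
BLACKLIST = {"white nationalism", "white separatism"}

def is_banned(post: str) -> bool:
    """Flag a post if any blacklisted phrase appears verbatim."""
    text = post.lower()
    return any(term in text for term in BLACKLIST)

print(is_banned("I support white nationalism"))  # True
print(is_banned("I support wh1te nationalism"))  # False: one swapped character defeats it
```

Which is why keyword lists alone only ever catch the laziest posters.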


What if they end up banning dissent material that merely looks like white nationalist material because of popular mischaracterizing narratives by fear-mongering clickbait newsfeeds?


Did anybody expect freedom of speech on Facebook?


About damn time.


How does the ban work? The user is banned? The content is shadowbanned? What exactly are they doing? I can't figure it out from the message.


If they ban it like they claim to ban ISIS content or respect our privacy (Zuck's "privacy first" vision), nobody needs to worry.


Interesting 180-degree turn for them. Will they also stop banning people for calling out Nazis?


I don't have a strong opinion on this particular issue, but I don't think your comment gets to the core of it.

If Facebook decided to ban all Chinese, Islamic, or African-American content, then the conversation would be completely different. People wouldn't say, "so be it -- go set up your own servers".

Obviously they have to make a value judgement, and they have done so here and said as much (which I appreciate). In other words, they made a considered and conscious decision that's beyond my expertise. But it's still a value judgement about particular content. The decision really can't be made on an abstract "free speech" basis.


This subthread started as a reply to https://news.ycombinator.com/item?id=19503270.


> If Facebook decided to ban all Chinese, Islamic, or African-American content, then the conversation would be completely different.

Also, if they decided to ban all White content, it would be different.

White and White nationalist aren't the same thing; please stop insulting White people who aren't racists by implicitly equating the two.


I don't think he was doing that. He was trying to say that racial discrimination would not be an acceptable reason, I believe. I didn't take it the way you are suggesting at least.


But they're not banning Zionist (Jewish nationalist) content, only White natl content. So they're engaging in illegal discrimination due to disparate impact.


> But they're not banning Zionist (Jewish nationalist) content

True.

> So they're engaging in illegal discrimination due to disparate impact.

Disparate impact discrimination is illegal under Title VII (employment) but not Title II (public accommodation), so even assuming FB is covered by Title II (which is dubious), and even assuming this would be found to be disparate impact discrimination (which is difficult to assess because the test for that is narrowly tailored to Title VII context), it would not be illegal on that basis.


Yes, I should have said the "analogous" Chinese, Islamic, or African-American content, e.g. like the sibling suggests with Zionist content.

The thread this spawned reminds me of why I don't usually chime in on these issues. But what many replies are missing is that Facebook says they already ban "white supremacist" content under existing rules. They don't need a special rule for that. Just like they don't need a special rule for terrorist content that's violent.

But they ARE adding a special rule for "white nationalism", for the reasons given in the post.

I'm not saying they are wrong. I'm just saying that you can't really use the argument "they can go set up their own servers" to justify it, as the grandparent did.

I suspect that there's simply a LOT of white nationalist content, and it has a practical damaging effect, but nationalist content of other kinds isn't that widespread or damaging, and they want to allow it for free speech reasons (and probably because they aren't qualified to make fine judgements about various nationalist movements). That seems reasonable.

Anyway, Facebook gives the "right" reason for the ban (a value judgement), while the parent comment was pushing a reason that's beside the point, IMO. It felt like a "backward rationalization". But yeah, it's more of a nitpick, so I regret spawning this thread, which became very political :-/

-----

Even so, let me throw something else onto the pile. A few days ago I watched this documentary on Netflix:

Struggle: The Life and Lost Art of Szukalski

https://www.netflix.com/title/80109551

It's too complex to summarize, but it's about a very talented sculptor who was also a Polish nationalist (for a significant period). It came out in December and was produced by Leo DiCaprio, because his father was a friend of the artist in LA.

I googled for reviews afterward, and this was one of the best summaries that I found:

https://www.counter-currents.com/2019/01/a-patriot-without-a...

Only after reading the whole article did I realize that it's from a white nationalist site (if you read "about this site", it's very clear).

The long and short of it is that it's a subtle issue, but most people seem to view it in black and white. If Facebook banned this content, it would be a mistake IMO (although I don't use facebook). But their decision might be the right thing overall. They have a hard job, and people like the parent were trying to apply some strict principle to it, when in reality there's no such thing.


>If Facebook decided to ban all Chinese, Islamic, or African-American content, then the conversation would be completely different. People wouldn't say, "so be it -- go set up your own servers".

This is because we have laws about discrimination based on race. White supremacy is not a race. You're free to discriminate against someone due to the views they hold.


I think you misunderstood the intent of an argument that was not very well made. I think his point is that if the group banned coincided more with the majority values the discussion would be different. For example, if Facebook banned feminist content or YouTube banned climate change supporters. Neither of these is a race and yet I think the discussion would, indeed, be very different.


This is a false equivalency though.

Like yes I'd be much more annoyed if you cut off my foot than if you served a vegetarian dinner.

That's about as apt a comparison as I can make between "white supremacy" and "science".

Lest you think I'm being one sided, I absolutely support the ability of Facebook and YouTube to ban pornographic content, while I think the groups that do provide pornography are morally fine, something I don't think is true for white supremacists.

So unless you're making an absolutist free speech argument, please elaborate on why you think feminism and white supremacy deserve to be treated the same way. I can name a few reasons they shouldn't be: Haji Daoud Nabi and Clementa C. Pinckney to start.


I guess I am making what you call an "absolutist" free speech argument. I think controlling the dissemination of ideas is bad no matter how much you disagree with them and no matter how much merit those ideas have. I would rather white supremacists were out in the open, debating their point of view with the rest of us, than hiding in private forums and the dark web, feeling persecuted and stewing in their own little circle jerks.

And the same goes for any idea, good or bad, scientifically proven or not: flat earth, white supremacy, climate change pro and contra, feminism, men's rights, and everything else. I think censorship is a tool of very limited value; there are a few rare cases where it serves the public good and a lot of cases where it is misused, and it doesn't matter whether the intent of the misuse is good or bad.


But you are not allowed to suppress the speech of people based on the views they hold. The first amendment does indeed protect even racists.

Or at least corporations and the government are not allowed to.


Just the government. A corporation is not at all beholden to the first amendment.


Not speaking legally, but speaking ethically, it is of course both possible and good to suppress the speech of e.g. white nationalists.


So Chinese people who are proud of their heritage and promote it can be discriminated against, because being proud of your race is just a view one can hold?


There's a difference between celebration of heritage and white supremacy. Please don't conflate celebration of Oktoberfest with calling for an ethnostate. They're different goals.


Good point, there most certainly is. And Facebook recognizes that:

> Going forward, while people will still be able to demonstrate pride in their ethnic heritage, we will not tolerate praise or support for white nationalism and separatism.

However, even now Facebook is OK with white people advocating for separation. They claim there is a connection between white separatism and white supremacy that made them ban it, while not banning other groups like the Basques from wanting independence.

>We didn’t originally apply the same rationale to expressions of white nationalism and separatism because we were thinking about broader concepts of nationalism and separatism — things like American pride and Basque separatism, which are an important part of people’s identity.

>But over the past three months our conversations with members of civil society and academics who are experts in race relations around the world have confirmed that white nationalism and separatism cannot be meaningfully separated from white supremacy and organized hate groups.


> However, even now Facebook is OK with white people advocating for separation. They claim there is a connection between white separatism and white supremacy that made them ban it, while not banning other groups like the Basques from wanting independence.

Basque is a political state in Spain defined by geographical boundaries, not an ethnic identity. Basques can be white, black, or brown.

Facebook is okay with voters wanting self-governance. This is not even remotely the same thing as white nationalism. If you can't see that, you're clearly a troll and there's no point in further discussing this with you.


Basque is not an ethnic identity? Wikipedia would seem to contradict that:

https://en.wikipedia.org/wiki/Basques

https://en.wikipedia.org/wiki/Basque_nationalism


Ez, ez gara ("no, we are not"). Maybe 100 years ago, but Basque nationalism shifted away from the allegedly far-right, traditionalist, and outdated politics (Sabino Arana) to a cultural, civic, values-related nationalism about 70 years ago. Call us chauvinistic, but not racist, because even Black people speak better Euskara here than most non-Basque Spaniards.

And, OFC, they are considered as Basque as anyone else.


I'm referring to the modern Basque independence movement, not the historical Basque ethnicity or ethno-nationalization movement of the early 1900s that has been subsumed by the geopolitical independence campaign of the 21st century.


Not even in the 21st century. The late 1920s saw socialist shifts in what could be defined as Basque. Arana's mindset went downhill fast as the Romanticist theories were replaced with shared cultural traits to define a country.

Customs, traditions, and the language had far more weight than how you looked. Especially here in Iberia, which has had religious conversions in every direction for several millennia.

Maybe Americans are used to race = ethnicity = culture. Here it doesn't work like that; it's the surrounding culture that defines you in the end.

If your parents came from Africa you are African-descended, but if you were born and have lived in the Basque Country, you are Basque and not African, period.


You may want to do more research into the whole Basque thing. It definitely is a culture with a language and identity and whatnot. And I'm pretty sure most Basques wouldn't accept non-white person as truly Basque in a cultural sense.


Do you believe there is a difference between being proud and supporting white supremacy?


Sure, witness the many "Latino Pride" or "Black Pride" types of groups or festivals. Most of these don't promote a notion of supremacy. I'm not sure why anyone is really proud of something they had no choice in, but that's probably a separate issue.


Do some of the people supporting white supremacy know the difference?


They do.

Here is Richard Spencer mocking white nationalism:

http://doesnoonehavevision.blogspot.com/2018/10/wanna-be-sit...


C'mon, that's an absurd misreading of those comments.

> He says he rejects nationalism, independence, and sovereignty for Other Races. He thinks the Aryan Race, being a conquering and wandering global people, must destroy and dominate all the world.

"I'm not a white nationalist because I want the entire world to be white-only" is a nonsense quibble on the terminology.


[flagged]


White nationalists are a subset of white supremacists who believe not only that whites are better than non-whites, but that this means they should kick the non-whites out (or exterminate them, in many cases).

https://www.merriam-webster.com/news-trend-watch/nationalist...

> Conflating the two is a racist attack on white people.

I'm fairly sure I just sprained my eye muscles.


[flagged]


China's a pretty shitty example to pick to make your point.

https://en.wikipedia.org/wiki/Xinjiang_re-education_camps

Their actions against their ethnic minority groups are bad for the same sorts of reasons white nationalism is bad.


Have you ever asked a black person if being proud of being black was black supremacy?


> You're free to discriminate against someone due to the views they hold.

I'm pretty sure discriminating against people for their views on religion is illegal. Ditto for their political views.

It's the definition of bigotry.


Discrimination is only illegal in certain contexts, mostly related to employment and essential services, as well as when interacting with the government (see First Amendment). For example, if I allow Girl Scouts to sell cookies on my business premises, it doesn't mean that I am also obligated to allow Jehovah's Witnesses to set up a stand as well.


Political leaning is not a protected class.


It is in at least seven states including California.


They ban a subset of Islamic content at least: the terrorist propaganda that leads to terrorist acts. If you assume that white nationalist content leads to terrorist acts too, then it only makes sense to ban it as well.


If we're limiting our discussion to US law and mores, then at least in the context of banning African-American content, there's specific federal law they'd be tripping over.

That same law has not been (and probably should not be) extended to protect white supremacy or white separatism, which are political ideologies.


I think the obvious analogy would be banning things like the Black Panthers. Should Facebook be allowed to do that?


The obvious analogy would be banning jihadi content, which it does.


White nationalism doesn't advocate killing all non-white nationalists.

Bad take.


Any white nationalism that advocates "a nation of their own" implicitly advocates such killing, because we're out of land.

... unless that philosophy explicitly states "... and we'll build that nation on the shores of Antarctica," there's a genocide implied.


Nearly every group advocating white nationalism that I've seen, including both the lesser-known ones and people who get mainstream attention like Richard Spencer, talk about it in terms of what could be built up in a situation where existing governments break down. American advocates tend to see the country as on the brink of a civil war or some other destabilizing event, possibly due to environmental pressures or foreign attacks on infrastructure, and most famously the mythical "racial holy war" breaking out between ethnic groups. They see the movement toward either left or right identitarian politics as a sign that people are already realigning based on ethnic and cultural identities.

By far the most common existing model I've heard reference for a theoretical white ethno-state is Israel. Non-white people would not be banned from participating in it, but it would exist with a mandate to serve the interests of a particular culture and people. In the same way that Jewish people can apply for automatic citizenship in Israel, people with European heritage would be given incentives to migrate and participate in the state. Of course, Israel did not come into being without war, and most white nationalists that I've seen anticipate future wars, but nearly all of them see an ethno-state as a response to war and destabilization rather than a cause of it. Most also advocate a peaceful migration of populations rather than forced or violent ones.

The most common historical model for white nationalism is the United States itself. The country was not founded to serve the needs of every race and people equally, but the particular interests of white British colonists. Egalitarian movements participated in the creation, and the abolition movement pre-dates the founding of the country, but the overall effect of the specific policies and laws enacted prioritized the interests of white people, and policies implemented into the 20th century and even some today reflect this clearly. If we deny that countries like the USA were originally patriarchal white ethno-states that eventually allowed equal participation for women and minorities after centuries of advocacy, we lose touch with history. Most people still recognize this, but it's often ignored, most often by people with center-right politics who would prefer that history more closely resembled the present. But those old traditions still have a strong pull on people, and the only way to argue against them is to admit that, for all of human history, most people have preferred to live around people who look like them and share a common culture. Moving toward a different type of social organization is extremely difficult and can't be done without discussing openly why people have tended to prefer it. Look at centuries of failed attempts at communes to see how difficult it is to get people to act against their natural inclinations.

There are plenty of people with violent tendencies that would welcome ethnic wars. This is not restricted to white nationalists, but includes all sorts of people, and their motivations often intersect with other movements. The New Zealand shooter, for example, considered himself an "extreme Green" and killed people both because he interpreted them as a foreign invasion and because he wanted to oppose migrations that would lead to rises in population. This in itself does not mean environmentalism or advocating population declines is a dangerous movement. The particular aspects of his ideology advocating violence are dangerous, which is why there are rules specifically addressing calls to violence.

This ban, which allows expressions of racial pride but bans advocacy for a nation-state that prioritizes the needs of an ethnicity, and only if that ethnicity has light skin, raises some interesting questions. If the model for white nationalism is Israel, a nation that serves as the homeland for a particular, majority white people, does support for Israel qualify as white nationalism in this context? I'm sure Facebook won't interpret it that way, but there are strong anti-Israel movements that consider support for Israel over Palestine to be an expression of racism. How we handle these kinds of things moving forward is not at all clear.


If white nationalists want support, they're going to have to find an example that doesn't have either (a) the fundamental human rights atrocity that is the Palestinian enclaves dogging its "success" story (I'd call Israel indicative of the problems with forming a new independent ethnocentric nation in this modern world) or (b) a nation that---for all its implementation flaws---still has "all men are created equal" baked into its philosophical DNA and "creation myth," if you will.


Just to be clear, I'm not asking anyone to support white nationalists. I think it's important to address problems with ideas like this directly and not make them into something they're not. If opponents of white nationalism always characterize them as advocating violent ethnic cleansing but they don't present themselves that way, some curious people will look them up, and all the arguments they heard against them will seem like ignorance. I believe that just helps their case. Arguments are most effective when they're fair and honest. People are attracted to forbidden knowledge and love to feel like they know secrets, and there you have the attraction of things like the "intellectual dark web."


> The New Zealand shooter, for example, considered himself an "extreme Green"

No, he was taking money from Far Right organisations in Europe.


That doesn't mean he didn't consider himself an extreme Green, though. Where you accept funding from and what your political beliefs are can be rather different...



No, he did not take money from anyone.


> because we're out of land.

This argument is profoundly weird. There are more people living on Manhattan than in Montana.


Correct, but the world is divided into nations already that use up 100% of the claimable useful territory. I can't imagine a non-violent process by which the United States carves up a chunk of Montana and goes "here you are separatists; have fun."

That'd be inconvenient for the Montanans that own that land.

... and the US has some historical precedent regarding how it responds to separatists in general that isn't very promising. ;)


They could just buy land from a country and it wouldn't require what you claim.


Who's selling?



The islands might be worth it.


>implicitly

Jihadis don't do it implicitly; they do it explicitly.


The fact that you would find an equivalent with the Black Panthers is appalling. I would posit that if White Supremacy didn't exist, the Black Panthers likely wouldn't either. The reverse was not true for many decades.


I don't think it's a particularly unfair comparison. Hate groups are still hate groups, regardless of "who started it."

The SPLC considers some black nationalist groups (including offshoots/successors of the Black Panthers, which is relevant because there is no actual modern day "Black Panther Party") to be hate groups all the same.


> I think the obvious analogy would be banning things like the Black Panthers. Should Facebook be allowed to do that?

Yes, Facebook should be allowed to ban people posting from before Facebook existed.

OTOH, that's unlikely to be practical issue of any concern.


If they were regularly being implicated in terrorist attacks, yes.


That would likely depend heavily on whether the BP is calling for black nation separatism.


And if they'd racked up over a hundred killings in a series of terrorist attacks in just three years.


They should be able to ban whatever they want, as long as it isn't a protected class. So unless you can prove it was done with racial animus, sure.


> then at least in the context of banning African-American content, there's specific federal law they'd be tripping over

which specific federal law would that be? I can't think of any law protecting content about specific groups


Depending on how you turn your head and squint, Title II of the Civil Rights Act (assuming we can interpret Facebook as a public accommodation engaging in interstate commerce).

... though it's unclear whether it's more of a private club.


Title II's definition of public accommodation is limited to "any inn, hotel, motel, or other establishment which provides lodging to transient guests"


The ADA expands it substantially.

https://www.law.cornell.edu/uscode/text/42/12181


The ADA definition of a public accommodation only applies in the application of the ADA, not the Civil Rights Act.


I was looking at (3) "...or other place of exhibition or entertainment." But still: lots of head-turning and squinting, so may not apply.


It seems disingenuous for you to compare all Chinese, Islamic, and African American content to very specific bigotry spewed by white nationalists.


It’s interesting that you seem to equate African-American content with White Nationalist content.

Facebook isn’t banning content from white people, it’s banning content that is racist by design.


They said they already ban "white supremacy" because it falls under their existing guidelines. That's what I take to be the "racist" content.

But now they are banning "white nationalism and separatism" in addition. I guess I don't know enough about white nationalism to say if it's racist or not. Like I said, it's a value judgement (and an issue of categorization) that's beyond my expertise.

I don't think separatism is by definition racist. There was the separatist movement in Spain a year ago. The movement is obviously based on race or ethnic background, but it's not obviously racist. There were probably some racists in the movement, but the entire movement may not be racist.

I think they probably made the right decision -- it looks like they considered it carefully. But my point was that the grandparent's framing of the post as a free speech issue is misleading. Facebook ITSELF framed it as a value judgement.


White nationalists and separatists are racists because they want an ethnostate. I don't think it should be difficult to tell why these ideas are racist, and I hope you would make the same judgement.

Also, the separatist movement in Spain was not racist because any number of races can live in the region that called for separation. An ethnostate is racist.


One thing I genuinely don't understand about all this is how Israel fits into it. From wikipedia:

The primary principles of Israeli citizenship is jus sanguinis (citizenship by descent) for Jews and jus soli (citizenship by place of birth) for others.[2]

Citizenship by descent? Lots of Jewish people are white, or at least all the Jews I've ever met looked just like me. Israel is without a doubt intended for ethnically Jewish people, and is thus an 'ethno-state' by any reasonable definition.

So does that make the Israeli people collectively racist? They elected the government that wrote those citizenship laws after all. Is it a white nationalist movement?

Or if someone's going to argue that Jews aren't white, OK, why isn't it a racist non-white nationalist movement then? If advocating for a country to be reserved for specific kinds of people is no longer allowed on Facebook, what happens to Israeli election-related material?

Sure, advocate against white nationalism, they sure have a powerful platform to do it. Banning all discussion of it seems like it puts Facebook in a position of terrible hypocrisy to me.


So your thesis is that Israel as a country is officially a white-nationalist state -- (perhaps the most successful one?) and that Facebook would need to prohibit all political material from Israel because it would be inherently white-nationalist?


To be clear, I wouldn't support that at all. I have no beef with Israel or Jews. What I'm trying to explore is where the logic in this thread takes us.

If I'm fine with or at least neutral towards Israel, and I am, and if Facebook shares this view, and it clearly does, and if Israel is an "ethnostate", which it is, then by simmannian's definition, I and Facebook would have to be OK with racist states. That sounds totally wrong but I can only see three possible resolutions:

1. Being an ethnostate isn't racist or problematic. But in that case, being white nationalist would also not be racist or problematic, which means Facebook's policy is in contradiction.

2. Advocating for an ethnostate is wrong if it's "white", but Israel isn't a problem because Jews aren't "white". Maybe there's some convoluted path by which this is true, but I use a simple definition of whiteness - do they look like me? They do, so, this one can't work either.

3. It's only problematic if westerners do it, for no defined reason.

Of those I can only see (3) being a plausible explanation for Facebook's behaviour here. Which unfortunately seems to back up the feeling some of these people have that white people are being 'persecuted' for views that are considered fine when other races have them.

This whole thing rapidly seems to hit unresolvable contradictions. It seems like Facebook should just stay out of it as a result: any path forward puts them in a position of hypocrisy, which I find to be one of the worst sins.

Note that none of the above means I'm myself white nationalist or approve of it. Just that I think it's really important that moral positions are coherent, internally consistent and people who claim to follow them follow them even when inconvenient. That's more important than almost anything else, in my worldview.


Facebook already actively deletes ISIS and Al Qaeda-related propaganda: https://venturebeat.com/2018/11/08/facebook-claims-it-delete...


*it's still a value judgement about particular content. The decision really can't be made on an abstract "free speech" basis.*

In other words, they abrogated the Free Speech rights of others by imposing their own values.

Free Speech also includes the right to hear. If there are those who would want to hear/read the speech which is being de-platformed, then by definition, Free Speech has been interrupted.


Legally, of course, FB is completely right. Morally - try to replace "white nationalism" with, say, "communism" (one can argue communists killed a lot of people, so why not?), "Christianity", "Islam", "Black separatism", "Wiccans"... I could continue forever, but I hope you get the point. If you're consistently fine with any $GROUP being put there, no matter how much you love it or are disgusted by it, then your position is consistent and you are one of the roughly 0.001% of people who hold a consistent free speech position. For the rest, the answer would be "but this is completely different - $INGROUP can not be banned, they are nice people that are sometimes misunderstood, but $OUTGROUP are definitely all violent bigots that should be banned everywhere for everybody's safety".

Of course, FB does not owe anybody a consistent free speech position, nor support for free speech at all, nor an inclusive platform. They could convert their site to a fan club of Jar Jar Binks tomorrow and make every user pledge loyalty to Supreme Lord Jar Jar or take a hike. Completely within their rights. Presenting themselves as an open platform while executing clear political censorship is less morally clear (though of course still legally completely OK). It doesn't matter that political censorship is now directed at a group that is the outgroup for most people here - it always starts that way. It never ends there.


Morally, FB is completely right.

They only become wrong by running down your slippery slope. They're not banning Christian speech, or Republican speech, or whatever.

The existence of a slippery slope does not demonstrate that everything that could reach the slippery slope is de facto immoral.


[flagged]


> receiving attaboys and cover from the Oval Office

I don't recall of any incident where the President congratulated a white nationalist terrorist or said anything that could even be conceived as such, do you have a source for this?


[flagged]


Context matters. Was he saying that referring to the "terrorist vs innocent people" or "protesters vs counter-protesters" in context of the Charlottesville protest where the terrorist incident occurred? It's a gaffe for sure but I don't think the intention was to congratulate a terrorist.


[flagged]


Look at the original quote you're citing [0]:

>"I’ve condemned neo-Nazis. I’ve condemned many different groups. But not all of those people were neo-Nazis, believe me," Trump said. "You also had some very fine people on both sides,"

I can't even read this as him saying there are fine white nationalists, let alone him saying that white nationalist terrorists are fine.

[0]: https://www.theatlantic.com/politics/archive/2017/08/trump-d...


You can't say you condemn neo-Nazis in the same breath that you praise a white supremacist rally for having good people. A white supremacist rally that resulted in people being killed and seriously injured. A rally where the people yelled 'the Jews will not replace us', waved Nazi flags and more.

Who were the 'very fine people' on the side of the neo-Nazi rally?


The most charitable interpretation is that Trump thought it was originally a rally in defense of the Robert E. Lee statue and was hijacked for neo-Nazi purposes.

I guess you could make the argument that not wanting a statue of Robert E Lee removed inherently makes you a neo-Nazi, but I don't think that necessarily follows.


Fake news much? This quote related to people defending and opposing confederate monuments, not white nationalists. If you read even one more line of the quote, it becomes immediately clear:

“You had many people in that group other than neo-Nazis and white nationalists,” Trump said. “The press has treated them absolutely unfairly.”

“You also had some very fine people on both sides,” he said.

Surely you can see, that "neo-Nazis and white nationalists" is one category, but "other than" is another, and "also very fine people" does not relate to white nationalists - it specifically excludes white nationalists, mentioning it as distinct group. That is immediately clear from reading just two lines of context.


[flagged]


I read the actual speech:

https://www.vox.com/2017/8/15/16154028/trump-press-conferenc...

It says, in black and white: "You had people and i'm not talking about the neo-Nazis and the white nationalists. They should be condemned totally. You had many people in that group other than neo-Nazis and white nationalists.". I conclude that if he says "I am not talking about neo-Nazis and white nationalists" that means he is not talking about neo-Nazis and white nationalists. You call it "tortured, senseless interpretation". I sincerely hope I never, ever will be in a position where you would review code that I need to get into production. Because I can only imagine what you'd see in a moderately complex code if you can't understand a simple phrase like "I'm not talking about X".


I can't imagine fainter praise, frankly.


> I can’t imagine fainter praise.

Thanks for correctly identifying “good people” as praise (for a vehicular murderer plus a guy firing a pistol into a crowd).


Full quote:

> "you had some very bad people in that group. But you also had people that were very fine people on both sides. You had people in that group – excuse me, excuse me. I saw the same pictures as you did. You had people in that group that were there to protest the taking down, of to them, a very, very important statue...and I'm not talking about the neo-Nazis and the white nationalists, because they should be condemned totally – but you had many people in that group other than neo-Nazis and white nationalists, okay?"

It's clear as day that he condemned the neo-Nazis.


> we are in no way faced with a growing trend of violent leftists

Ever heard of Antifa? BANM? Or maybe the Red Brigades? Red Army Faction? Sendero Luminoso? BLA? Japanese Red Army? Liberation Tigers of Tamil Eelam? There have been dozens, probably hundreds, of violent leftist groups. I don't know if they shot up mosques, specifically, but why is this specific mode of terror your sole concern? They certainly shot, blew up, hanged, beheaded, poisoned and murdered many people by other means. And that's just the terrorists; wait until they get the real power of state coercion... then the count goes to millions.

You mean there's no active leftist militant group in your town right now? Good for you. You may want to familiarize yourself with recent history and wider world though, to get a bit more complete picture than the current news cycle.

Oh and btw, the guy that shot up the mosque was very concerned about the environmentalism and thought China is a model country. Should we start deplatforming environmentalists and those who like China?


"But over the past three months our conversations with members of civil society and academics who are experts in race relations around the world"

Sorry, I do not agree with this sort of shit. I left Facebook as a platform long ago. I do not want some PhD in Gender Studies and Race Relations to dictate what I can read and what I cannot.


The US President has identified himself as a "nationalist"[1], and appears to be racially white. So how would this policy apply to Trump supporters and their speech?

This is a general problem with trying to ban speech. It's often very hard to figure out what someone intends to mean versus how others interpret it versus what it "really means". Especially when symbols, slogans, etc., come into play, not to mention different audiences. That ensures that this policy will be applied inconsistently and have unintended consequences, and create a flurry of argument over what was banned, by whom, and why.

It's just a mess and does not help the discourse.

[1] https://www.usatoday.com/story/news/politics/2018/10/24/trum...


> The US President has identified himself as a "nationalist"[1], and appears to be racially white

A white nationalist is not the same as a nationalist who is white; a white nationalist is one who adheres to the ideology of white nationalism.


I understand that, but I was trying to point out the difficulty in drawing that distinction when it comes to applying a censorship policy.


How about blue, red, green, black, yellow or purple nationalist content?


I think this is bad news for Facebook, the tech industry, and America as a whole. This kind of activity is going to prompt an enormous backlash from the right. And the right-wingers aren't going to make fine distinctions between the good tech companies and the bad ones.

Censorship may be a necessary evil, but if so, it must be done by duly elected or appointed government officials. FB and other tech platforms should build APIs that allow government officials to make the relevant decisions about when a piece of content is unacceptable. That is a principled and nonpartisan strategy that will allow the tech companies to focus on what they're good at - technology innovation - and avoid what they're bad at - politics and public relations.
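To make the proposal above concrete, here is a purely illustrative sketch of what such a review API might look like. None of these class or method names exist in any real Facebook API; the point is only the division of labor: the platform flags content, but an appointed official, not the company, records the binding decision.

```python
# Hypothetical sketch of a "government review" moderation queue.
# All names here are made up for illustration.
from dataclasses import dataclass
from typing import Optional


@dataclass
class TakedownRequest:
    content_id: str
    reason: str                     # why the platform flagged it
    decision: Optional[str] = None  # set only by the official reviewer


class OfficialReviewQueue:
    """Platform enqueues flags; an appointed official rules on each one."""

    def __init__(self):
        self._queue = []

    def flag(self, content_id: str, reason: str) -> TakedownRequest:
        # The platform may flag, but never decides.
        req = TakedownRequest(content_id, reason)
        self._queue.append(req)
        return req

    def rule(self, req: TakedownRequest, decision: str) -> None:
        # Only the reviewing official calls this.
        assert decision in ("remove", "keep")
        req.decision = decision


queue = OfficialReviewQueue()
req = queue.flag("post-123", "suspected prohibited content")
queue.rule(req, "keep")
print(req.decision)  # -> keep
```

Whether handing this `rule()` call to government officials is desirable is exactly the political question the rest of the thread debates; the sketch only shows that the technical separation is trivial to build.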


Far right: private businesses such as bakeries should be allowed to refuse service to whoever they want!

Also the far right: why is Facebook censoring my ability to spread hate speech!

I'm oversimplifying it, but the hypocrisy and lack of self-awareness are astounding.


This is a strawman. Who actually says both these things?


Seems no more hypocritical and self-awareness-lacking than arguing that Facebook has the right to turn away white nationalists, but that a bakery must serve all the cakes.

Either way, it's conceptual gerrymandering to carefully carve out a wiggly boundary that -- strictly by coincidence! -- protects their own side and restrains the other side.


You're equating someone asking for a cake for their same-sex marriage to hate speech? Are you kidding?


No, I'm equating one business refusing a customer to another business refusing a customer. That is how analogies work.


Cool, morality determined by ad revenue opportunities.

What could go wrong?

Clearly nobody is going to shed any tears over Nazis... but is it wrong to support Palestinian claims to Jerusalem? Where's the line?

I don't like this move since I feel it will just push people into places where we can't determine real identification, should that be needed after a crime.

I don't like this move because I don't, fundamentally, think we should all have to agree with Zuck to have a voice online.

I don't like this move because it's reactionary, and there's no consistent philosophy behind why these groups would be banned but not others.

Just feels skeezy to censor people, no matter how skeezy those people are.


> Last fall, we started using [machine learning and artificial intelligence] to extend our efforts to a range of hate groups globally, including white supremacists

So... is this going to be a train wreck like YouTube's content ID where certain keywords will trigger the robots to censor my post with no human to appeal to?

"Sorry, you said: '80 years ago today Hitler committed suicide and axis power was extinguished.' This post contains keywords associated with dangerous groups and individuals and has been removed."
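The failure mode being worried about here is easy to demonstrate. Below is a toy sketch (not Facebook's actual system, whose classifiers are far more sophisticated) of a naive keyword filter: any post that merely mentions a flagged term gets removed, with no sense of context, which is exactly how the historical statement above would get caught.

```python
# Toy keyword-based moderator; the term list is illustrative only.
FLAGGED_TERMS = {"hitler", "white nationalism"}


def naive_moderate(post: str) -> str:
    """Return 'removed' if any flagged term appears, else 'allowed'."""
    text = post.lower()
    if any(term in text for term in FLAGGED_TERMS):
        return "removed"
    return "allowed"


# The history post trips the filter: a false positive.
history_post = ("80 years ago today Hitler committed suicide "
                "and axis power was extinguished.")
print(naive_moderate(history_post))          # -> removed
print(naive_moderate("Nice weather today"))  # -> allowed
```

Real systems use context-aware models rather than raw keyword matching, but the comment's concern stands: without a human appeal path, whatever false positives the model does produce become unreviewable removals.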


Gross. About time to replace facebook.


I would love for all people around the world to become level-headed and rational enough so we can have complete and total free speech. Personally, I enjoy 4chan and don't experience any negative emotions conversing with people who openly advocate genocide that would target me and my family.

However, people en masse are different, and tech platforms have to acknowledge that. So, I believe it is a good thing to ban and police hate speech, at least when private companies do it (and not government).


Without delving too deeply, because I don't want to engender any negative emotions: do you take the people who openly advocate genocide against you seriously?

If you do and it doesn't cause you to experience any negative emotions, I am surprised.

If you don't, I am also surprised. In light of what has happened in Pittsburgh and Christchurch.


My mother and grandparents have had to make a run for a bomb shelter a few times this week. So, yeah, I don't think that people who have "from the river to the sea" in their bio are joking.


.


[For context, the post above, before all of its content was edited out, called for “enforcing” CDA Section 230 against tech firms by making them liable as publishers for user content]

You aren't calling for 230 to be enforced, you are calling for it to be abolished.

Which is already happening one step at a time (SESTA/FOSTA), each of which radically accelerates self-censorship. Actually, I suspect a lot of the self-censorship that happens that isn't connected to legal changes but is connected to popular mood is to avoid political pressure for new limits on the 230 safe harbor which would force even more drastic self censorship.


Might sound good in theory, but in effect it means that some low ranking Facebook employee will have to decide whether Jordan Peterson is allowed to publish on Facebook or whether he is to be categorized as a white supremacist, a decision the outcome of which will vary according to ideological disposition, the ability to read and comprehend texts, and the power to navigate the realm of ambiguity and conflicting interpretations.


GOOD.

Now do anti-vaxxers and flat earthers.


I would never become sympathetic to the ideas of white nationalism if it wasnt for the exposure to them on mainstream media platforms. This makes me mad as hell. How am I supposed to call this a democracy? Might as well live in communist china.


Thought experiment for ya:

> Facebook to ban homosexual hookup content

How about that, eh? Funny how a simple change of political platform will U-turn your support of so-called 'free speech'!


It is a fairly long way from 'free speech' to 'free speech that actively encourages discrimination, harm of others, or infringing on others' rights'.

As a private platform, Facebook is free to curb the topic you have suggested, in the same way you are free to avoid said private platform.


Presumably people who support this policy wouldn't consider it a bolstering of free speech, whether it's justified or not.




Good to see the Valley is starting to get off its "we are just a platform, all information should be free and neutral no matter how bad, so we're not responsible" extremism of the past few years and understand that some forms of speech are harmful and should never be encouraged or accelerated on any platform or technology, ever.

To the people that say "Where does it end?" – that's pretty easy. Let's start with curbing speech that encourages harming others.

They've been dragged kicking and screaming into this realization but we'll take it.


>Let's start with curbing speech that encourages harming others.

Based on numerous past incidents of corporate behavior (not specifically from Facebook) I do doubt a fair and unbiased application of such a standard. Politically we cannot even agree on what constitutes harm or even murder (abortion being an example of such disagreement). How then is it possible to fairly ban speech that encourages harm when we don't agree on what constitutes harm?


A platform can define what harm is. We can see what that definition is, and we can choose to use a particular platform or not.


While I agree with you, Tim Pool (as well as many others) made a good point recently about the fact that when platforms are as powerful as being able to influence elections, maybe allowing them to define the structure within which we communicate might not be a good idea.

I am a free market guy, so for me these companies ought to have the right to determine the parameters within which their users can operate on their service. However, with things such as EUCD and GDPR rules, introducing competitors to the established networks becomes harder and harder for each day, which only benefits the existing platforms. So when the platforms are biased and competition is hindered by red-tape and regulations, we cannot expect alternatives to popup that can compete with said entities.

All of this leads me to believe that things will only get worse as I don´t see a way out of this clusterfuck we´re in.

https://www.youtube.com/watch?v=DZCBRHOg3PQ

Edit: why the downvotes? :S


Unfortunately, most of the time we aren't privy to their definitions. If they publish them publicly, to a literal extent, then they MIGHT actually have to follow them! No company wants to have to follow its own rules; it's a Schrodinger situation.


Not to mention that as we've seen with Twitter the rules don't really matter. The name of the game is selective enforcement and zero accountability because you can always blame the algo/users for not reporting enough.


Isn't selective enforcement kind of inevitable? There is no algorithm that can follow code of conduct 'rules' perfectly. You are always going to be able to find instances where the rules aren't applied perfectly.


*If they publish them publicly to a literal extent then they MIGHT actually have to follow them!*

Conversely, to the extent that they are public and transparent (eg I think twitchy is pretty specific about feedback for content removal) people will game that information and either refine their content to fit within the letter of the ToS while spreading it as widely as possible, or seek to litigate the decisions depending on what's least costly.


Well, read the top comment in this discussion: They're not saying "if FB wants to kick arbitrary people including me I'm fine with that", they use "bigots" and "anti-vaxxers" as examples. Another important element is a constant refrain of "but platforms with no restrictions end up as a cesspool".

> We can see what definition is

"seeing the definition" just means seeing some flowery language which may or may not be the full truth when it comes to individual cases. If people don't pay attention to the fact that they don't actually know, and don't seem to care, who exactly is booted off these platforms and why, exactly and on a case by case basis, and just take the word of platforms like FB or Twitter for it, they're in for a rude awakening when they end up on the wrong end of it themselves.

The amazing achievement of not being a racist or an anti-vaxxer doesn't make up for the failure to clearly speak out against the hypocrisy of a company that on the one hand fights democracies that want to protect citizens against tracking [0], to mention just one color in the spectrum of shit they're up to, while on the other hand bragging about how they're not evil. Facebook can do what they want, and people can use their "services" all they want, and roll their eyes about "outrage over FB" all they want, but then they also need to own it.

[0] https://news.ycombinator.com/item?id=19499598


>To the people that say "Where does it end?" – that's pretty easy. Let's start with curbing speech that encourages harming others.

But that is not where they're starting though, is it? They're going much further:

>Today we’re announcing a ban on praise, support and representation of white nationalism and separatism on Facebook and Instagram

Praise, support and representation not of speech encouraging harm, but the ideology itself, reprehensible and ass-backwards as it may be.

This sentence indicates they're picking and choosing which ideologies to censor.

This is wrong (opinion), antithetical to American notions of freedom of speech (opinion), but Facebook has a complete and unalienable right to do this on their platform (fact). Despite that right, things should nevertheless be accurately labeled. This is censorship of political thought, not censorship of "hate speech" as most reasonable people define it.


It's really hard to prevent "we should do what it takes to maintain the dominance and survival of the white race" and "we should do what it takes to establish a white ethnostate" from being received as a call to violence.

If we're being intellectually honest, we know that they know full well that they're not getting a white ethnostate from peaceful participation in the political process.


Firstly, if someone thinks they can create a white ethnostate without violence, that's their political opinion. You may think that's naive and stupid, or even that the person holding it is being duplicitous, but that view in itself is not hate speech.

Secondly, as I mentioned elsewhere, such rationalization can be used to ban a whole lot more speech than just "white ethnostate". Why is Marxism-Leninism still on Facebook? For that matter, since we're not limiting it to overt calls for violence, the mental gymnastics can be effortlessly applied to any calls to overthrow any government, when it's obvious that this cannot be done peacefully.


> if someone thinks they can create a white ethnostate without violence, that's their political opinion. ... that view in itself is not hate speech.

This is certainly an opinion as well, but my view is that the concept of a white ethnostate alone is hateful. Whether or not one can or can't be created without violence is immaterial.

Should people be able to talk about a hypothetical white ethnostate in academic, research, or news-reporting capacities? Sure. But simply promoting a white ethnostate as something that is wanted is, IMO, hateful.


I don't recall any recent news where marxists have shot up schools or churches, or bombed anyone.

If it were 1920 I'm sure the Marxists would get banned for stirring up terrorism.


You probably meant 1970s. Either way, that's a curious metric to decide what to ban. Why must we wait for tragedy to strike in order to act? And furthermore, why must this tragedy be recent, and how recent?

I say we decide first what kind of speech to ban (clear incitement to violence, for example, or something broader), and then proactively ban it to save lives.


There was a ton of marxism/communism-related violence in the US in the '20s, mostly centered around labor strikes. In general, that period saw much more violence and instability than the 70s. There was a literal battle in West Virginia between striking miners and the government:

https://en.wikipedia.org/wiki/Battle_of_Blair_Mountain

> Why must we wait for tragedy to strike in order to act?

Actions of the government (or other actors WRT policy changes) are given legitimacy by serving as a response to perceived existing problems. In essence, if there is no tragedy, there is no legitimacy to the action. Without legitimacy, the policy change carries less weight and is not effective. This is a basic reality of policy implementation.

The actions of banning white nationalist speech on tech platforms are given legitimacy by the recent NZ shooting, where (1) the shooter made numerous callouts to the white nationalist subcultures that these platforms foster (subscribe to pewdiepie, etc) (2) The content the shooter generated and streamed spread widely across nearly all distribution channels (FB, youtube, reddit, etc) despite censorship efforts, (3) We know that wide distribution of the extremist type of content that this guy distributed (whether its white nationalist, ISIS, or anything) in his actions inspires more people to extremism.


The militant "left" was a real problem in the '70s and had been so since early in that century. Death toll: 10s of millions

Militant "right" is always a problem. Death toll: millions. Mostly "others" (Jews, Eastern Europeans, disabled people)

Militant "Muslims" are a problem in large parts of the world and have been so for centuries. Death toll: not sure. Ironically mostly Muslims lately.

And of course: militant "Christians" were a problem in certain centuries. Death toll: hundreds of thousands. Ironically mostly Christians?

I write "left" and "right" here because I find it unfair to blame conservatives and ordinary socialists for something that is more about being totalitarian than about more or less tax or ownership.

Same goes for religions to a certain degree (although there's no doubt that for example old Norse (viking) religion was very focused on war compared to Christianity).


Marxists have done enough harm throughout history to justify a ban. Stalin, Mao, Che Guevara, Fidel Castro, and the Khmer Rouge are just a few examples off the top of my head. Just because they didn't do anything violent recently shouldn't mean they are allowed to spread their ideology.


Not sure we need to go full House Un-American Activities Committee for a threat that largely subsided before the time of internet platforms. Your last sentence is phrased as some sort of absolute truth, but when you say "they are allowed", I can't help but ask: "by whom"?


>I wonder when you say: "they are allowed", I can't help but ask: "by whom"?

The exact same people "not allowing" white nationalist content, of course.


If you want to make this argument maybe pick someone besides literal nazis to make it with.

Last I checked we kinda had a bit of a scuffle from 1940-1945 that stated, "This ideology is not okay and responding to it with violence is acceptable."

They're getting booted off the platform most people use to talk to grandma, not getting literally booted in the face. Unlike what they advocate for.


> literal nazis

Are "literal" nazis only who this new rule will be applied to? The accepted definitions of both "literal" and "nazi" seem to have drifted significantly in the last few years.

> not getting literally booted in the face. Unlike what they advocate for.

Funny, this type of story about people advocating for literal violence against non-whites is extremely common on the internet, I've heard it recounted I don't know how many thousands of times in the last few years, yet I've literally(!) never once encountered an actual instance of it anywhere I browse. Ironically, I have encountered hundreds of examples of violence being advocated against white people, although in the vast majority of cases the proponents themselves were white.

Could you possibly provide any examples of such statements on relatively mainstream sites, and also note whether you encountered it organically or found it via a google search?


> Are "literal" nazis only who this new rule will be applied to?

White nationalists are literal Nazis, so... yes?

> Could you possibly provide any examples of such statements on relatively mainstream sites, and also note whether you encountered it organically or found it via a google search?

Were you asleep for Charlottesville or all the coverage of it? White nationalists literally marched with torches chanting "blood and soil" and "we will not be replaced." What do you think those messages mean to non-whites? What do you think they're advocating for when they say those things?


> White nationalists are literal Nazis, so... yes?

a) No they aren't.

b) Even if they were, you have no way of knowing how this rule will be applied.

> Were you asleep for Charlottesville or all the coverage of it?

Would you consider the Charlottesville incident "mainstream", or common enough (and manifest in some form on Facebook) that Facebook must police user-posted content on its site?

Try to not treat this like an internet argument that must be won, and try not to exaggerate the frequency of rare (as far as I know, I am seeking a reasonable, fact-based argument in order to change my mind) incidents like Charlottesville, or conflate the specifics of what happened there with unsavory words that don't literally (actually) meet the claim of advocating violence.

Make no mistake, I'm definitely playing Devil's advocate here, but if the facts are on your side this (making the case that literally advocating literal violence) should not be overly problematic.

Or, if you would like to lower your claim to something like "frequent and possibly increasing dog whistling" (without arguing about the technical boundaries of that) on Facebook, that seems like a fairly reasonable claim.


> a) No they aren't.

Indeed they are! Or at least there's no difference meaningful enough in their political ideologies to draw a distinction. Obviously I'm not saying they're a member of a political group originating in 1930s Germany, but when someone calls someone else a "Nazi" colloquially, that's not what they mean either.

> b) Even if they were, you have no way of knowing how this rule will be applied.

So we shouldn't have rules because they can be misapplied?

> Would you consider the Charlottesville incident "mainstream", or common enough (and manifest in some form on Facebook) that Facebook must police user-posted content on its site?

Yes? Or what of Christchurch? Or the Quebec City mosque shooting? Or the Pittsburgh synagogue shooting? These incidents (and the literal dozens more like them) received tons of mainstream news coverage and the perpetrators 100% subscribed to white nationalist views, exactly the kind which Facebook seeks to deplatform. They believe that by removing these people from their site they might reduce incidents of these shootings. Do you think they're wrong?

What amount of violence needs to happen connected to one ideology for Facebook to ban said ideology? Facebook has banned child pornography, doxxing, and pro-ISIS content from their platform too. How much of that has to be posted before a ban is justified?

You can not like that Facebook can make decisions like this, but it is entirely their right to choose what to platform. If you don't like it, you can make your own platform. That it doesn't have the audience Facebook does is your problem, not Facebook's.


> Indeed they are! Or at least there's no difference meaningful enough in their political ideologies to draw a distinction.

Is that so? The Nazis were a large, organized group with a distinct leader, a fairly specific shared ideology, and track record of action - you would have us believe the same is true (no meaningful difference) of modern day white nationalists? And that there's evidence of this?

> but when someone calls someone else a "Nazi" colloquially, that's not what they mean either

col·lo·qui·al·ly (adverb): in the language of ordinary or familiar conversation; informally.

This is what I mean about the common misuse of the words "literal", "Nazi", "alt-right", and in my mind this excuse is similar in quality to the "I'm not really racist, I have black friends" argument. Have you ever stopped to consider if this type of "dog whistling" is contributing to the problem in any way? Do you care?

> So we shouldn't have rules because they can be misapplied?

Your words, not mine. The question was "Are "literal" nazis only who this new rule will be applied to?".

Have you no concern for possible second order effects of poorly thought out and biased censorship? And when I say censorship, I'm not referring to the First Amendment.

> Yes? Or what of Christchurch? Or the Quebec City mosque shooting? Or the Pittsburgh synagogue shooting?

For every single "white nationalist" incident you cite, I can cite several similar racially/religiously motivated crimes committed by non-white people. If the incidents are the problem, why is the enforcement (and news coverage) racially biased?

> These incidents (and the literal dozens more like them) received tons of mainstream news coverage

The coverage was mainstream, but is this behavior or these beliefs mainstream/common? How sure are you that these incidents are caused by incitement on social media? Have you ever considered that both the incidents themselves as well as the increase in "dog-whistling" propaganda on social media (I've yet to see examples of advocating violence) might both be effects of something else? Do you care?

> and the perpetrators 100% subscribed to white nationalist views, exactly the kind which Facebook seeks to deplatform.

Is that so, or might that only be what you presumed or were told? I haven't read the whole manifesto, but this fellow seemed to have much more complex and nuanced beliefs than those held by "100%" of "white nationalists".

I assume this is a "white nationalist" propaganda site of some sort, but to me the actual words of what this madman wrote are what actually matters, not who publishes them:

https://www.thenewamerican.com/world-news/europe/item/31759-...

Is what's written there mainstream white nationalist views? Personally, I have absolutely no idea, so I'm willing to consider any evidence you have supporting that idea.

> They believe that by removing these people from their site they might reduce incidents of these shootings. Do you think they're wrong?

They might be right, they might be wrong, unlike you I make no claim to know. What if they are wrong, and censorship aggravates these already mentally unbalanced people even more, driving them to organize (perhaps in the physical world) underground where it's difficult to see what they're up to or thinking? Is this worth considering when setting policy?

> What amount of violence needs to happen connected to one ideology for Facebook to ban said ideology?

Then we should be banning a lot more speech.

> You can not like that Facebook can make decisions like this, but it is entirely their right to choose what to platform.

I believe free speech is crucially important, and any curtailment should be carefully and logically considered, including a healthy discussion before any decisions are made. In this case, I can envision a plausible scenario where it makes the problem even worse.

> If you don't like it, you can make your own platform.

Again, there is a distinct difference between the First Amendment and the general principle of free speech. That so few people consider this important seems risky to me. I suspect this type of attitude is in no small part what got Donald Trump elected.

> That it doesn't have the audience Facebook does is your problem, not Facebook's.

And if social engineers like you are miscalculating, if you underestimate the complexity of sociology and human psychology, it might be all of our problem.

For the sake of improving the quality of discourse, can you sense any legitimate concerns in what I'm saying, or do I seem to you like yet another ignorant racist, little different than those who might frequent Charlottesville rallies?


> common misuse of the words "literal", "Nazi", "alt-right"

It's not misuse if it's common usage, it's simply usage you disagree with. Obviously when people say "Nazi" they don't mean "members of a 1930s political party," they're making a statement about something else. If you find the usage of these words confusing that's on you, not the words or the people using them.

> Have you no concern for possible second order effects of poorly thought out and biased censorship?

Not particularly: if those things happen, let's protest those instead of wringing our hands and worrying about the poor slippery slope.

> And when I say censorship, I'm not referring to the First Amendment.

Deplatforming is not censorship. You are not guaranteed an audience for your speech. White nationalists can still say whatever they want, and they can even say it legally. But they can't say it on Facebook, and it's not Facebook's responsibility to let them any more than it is to let pro-ISIS people post their content.

> If the incidents are the problem, why is the enforcement (and news coverage) racially biased?

This is irrelevant to the discussion at best and disingenuous at worst. Facebook can ban multiple kinds of content, including pro-ISIS content (and it does). White nationalism being banned is what we're discussing here.

> How sure are you that these incidents are caused by incitement on social media?

I believe that social media is helping incite these incidents, so I do legitimately believe the amount of people participating in hate crimes will be reduced as a result of this. Do you really think that people aren't radicalized by communities on the Internet? Communities that might exist on Facebook?

Of course, I also think it doesn't matter if that's true or not. No discourse of value is being lost by banning white nationalism, and Facebook already bans a plethora of other (totally legal) content because they don't think it's appropriate on their platform. If you want to read white nationalist speech, it still exists. This merely means you can't read it on Facebook anymore. How this is a tragedy I simply cannot understand.

> Is that so, or might that only be what you presumed or were told?

Yes, it's so. Read the manifesto.

> Is this worth considering when setting policy?

No. People can (and will) do anything in response to anything; maybe white nationalist groups will make Zuck their #1 enemy over this. You can't control the actions of other people, especially crazy people, so catering your policies to them seems absurd.

> For the sake of improving the quality of discourse, can you sense any legitimate concerns in what I'm saying, or do I seem to you like yet another ignorant racist, little different than those who might frequent Charlottesville rallies?

I don't really see many legitimate concerns arising from banning white nationalism from Facebook. As I've said multiple times, Facebook already bans other kinds of speech they disagree with. I get defending free speech is a thing, but Facebook is not the government and they can ban whatever speech they like. They don't control the entire Internet or even a majority of it, and they can't prevent you from hosting your content somewhere else, so... I just don't see what's being lost here, except white nationalist content on Facebook.

And even if they do go ahead and decide to ban other content, I mean, A) let's cross that bridge if we come to it and B) who cares? It's just Facebook, they don't control the entire Internet. Make another site if you want to discuss what they deplatform.


This line of conversation is my point. You're debating someone who is acting in bad faith. You've already lost the thread because he's just going to come back with more circular bullshit.


Could you please explain how I'm acting in bad faith? Perhaps cite a few key exchanges to demonstrate.

> he's just going to come back with more circular bullshit.

Asking people to justify their claims, and pointing out falsehoods or logical errors in their responses, is "circular bullshit" now?

I clearly stated I am playing Devil's advocate.

I concede points where they are valid.

I make no (other than conversational) claims without backing them up, how many claims has OP made without providing evidence for?

The discussion is curtailment of free speech (the general principle, not the First Amendment - try to get someone to even admit there's a difference without changing the subject) - I am arguing that we better have very good reasons for it, supported by evidence. Fashionable popular opinion is not evidence.


> It's not misuse if it's common usage, it's simply usage you disagree with.

Your or my personal agreement on what the definitions should be is not the problem. The meaning has changed for only some users of the words, resulting in widespread misunderstanding, and, I would argue, in further (and completely unnecessary) political polarization. Words not having shared meanings seems sub-optimal to me.

> Obviously when people say "Nazi" they don't mean "members of a 1930s political party," they're making a statement about something else

But no one knows what statement they're making because we no longer have a shared dictionary, so people resort to their imaginations.

> If you find the usage of these words confusing that's on you, not the words or the people using them.

This suggests the problem is my lack of ability to understand, when the problem is that there is no longer a common definition. There are certainly multiple popular definitions, depending on which political ideology you subscribe to.

> instead of wringing our hands and worrying about the poor slippery slope.

It's not me who's "wringing my hands" - I'm not promoting or supporting any policy change, you are. I'm simply asking questions.

> Deplatforming is not censorship.

https://en.wikipedia.org/wiki/Censorship

"Censorship is the suppression of speech, public communication, or other information, on the basis that such material is considered objectionable, harmful, sensitive, or "inconvenient". Censorship can be conducted by a government, private institutions, and corporations."

Well look at that, once again our definitions don't match. Care to share the URL for the authoritative source you're using?

> You are not guaranteed an audience for your speech.

As I said: there is a distinct difference between the First Amendment and the general principle of free speech. You only have a "guarantee" to free speech in the limited context of the first amendment.

> This is irrelevant to the discussion at best and disingenuous at worst. Facebook can ban multiple kinds of content, including pro-ISIS content (and it does). White nationalism being banned is what we're discussing here.

I thought the principle behind your argument was the harm - maybe the principle is race after all?

> I believe that social media is helping incite these incidents, so I do legitimately believe the amount of people participating in hate crimes will be reduced as a result of this.

A perfectly reasonable theory, what evidence of this do you have? Are you well read on the topic, or is this just a casual opinion that sounds about right?

> Do you really think that people aren't radicalized by communities on the Internet? Communities that might exist on Facebook?

I certainly do, but I'm concerned about everything that could lead to radicalization. History is full of examples demonstrating the strange and complex behavior of individuals and civilizations. Are we over-simplifying this situation? Should we freely discuss such things with open minds?

> No discourse of value is being lost by banning white nationalism

Completely agree, but is that all there is to it? Can you think of any possible undesirable reactions (regardless of the soundness of the logic underlying the motivation) to this type of policy?

> If you want to read white nationalist speech, it still exists. This merely means you can't read it on Facebook anymore. How this is a tragedy I simply cannot understand.

The tragedy is if this further inflames emotions, "righteously" or not, and someone shoots up another church. That's what I'm worried about.

> Yes, it's so. Read the manifesto.

The link I included sure didn't make this fellow sound like just another stereotypical redneck racist as you claim, at least my understanding of that stereotype. But never mind, let's once again not bother ourselves with messy details when vague stereotypes will suffice.

>>> They believe that by removing these people from their site they might reduce incidents of these shootings. Do you think they're wrong?

>> They might be right, they might be wrong, unlike you I make no claim to know. What if they are wrong, and censorship aggravates these already mentally unbalanced people even more, driving them to organize (perhaps in the physical world) underground where it's difficult to see what they're up to or thinking? Is this worth considering when setting policy?

> No.

Well then, this well answers my "Do you care?" questions as well.

> People can (and will) do anything in response to anything; maybe white nationalist groups will make Zuck their #1 enemy over this. You can't control the actions of other people, especially crazy people, so catering your policies to them seems absurd.

If everything is random, why even bother with policies like this, or any at all?

> I don't really see many legitimate concerns arising from banning white nationalism from Facebook.

I obviously disagree, but if you believe the reaction of extremists is not worth considering, at least you're being logically consistent.

> I just don't see what's being lost here, except white nationalist content on Facebook.

If any future negative reactions (in part due to this type of policy) of extremists are excluded from consideration, you are completely correct.

> who cares?

People attending a future church service might, if there is any validity to my belief that people aren't as simplistic as you think they are. But we'll never know for sure, because like you the media seems to prefer to not bother with messy details of why people commit atrocities when a simple stereotype works (and certainly sells more papers as an added bonus).

> Make another site if you want to discuss what they deplatform.

A perfectly fitting conclusion.


> This suggest the problem is my lack of ability to understand, when the problem is there is no longer a common definition.

The only misunderstanding seems to be yours. We can clearly identify white nationalists as actual Nazis, no redefinition required.

> Censorship can be conducted by a government, private institutions, and corporations.

No one is saying that corporations can't censor things. I am saying that this is not censorship. Facebook deplatforming you is not the same as censoring you.

> maybe the principle is race after all?

The only thing we're discussing here is the deplatforming of white nationalist content from Facebook. Bringing other races or religions and their extremism in is totally tangential at best, and actively disingenuous at worst.

> Are you well read on the topic, or is this just a casual opinion that sounds about right?

There are many examples in the news recently. Here's one I read just the other day: https://www.washingtonpost.com/politics/2019/03/22/trumps-rh... White nationalist speech creates hate crimes; the correlation appears clear to me. Of course, it also seems logical to me without any sources, so sources are just a nice bonus.

> Can you think of any possible undesirable reactions (regardless of the soundness of the logic underlying the motivation) to this type of policy?

Essentially your argument seems to boil down to: we shouldn't create good policies because crazy extremists might have bad reactions to them. But crazy extremists have bad reactions to everything. We should be trying to deplatform and deconvert extremists instead of catering to their sensitive tastes. If that offends them and causes them to lash out, that is unfortunate, but they'll do that anyway. At least if they do it to this there might be less of them in the future.

> If everything is random, why even bother with policies like this, or any at all?

While you can't control how people react to what you do, you can do what you believe is right and hope it has a good outcome in the future. Facebook apparently agrees with me.


> The only misunderstanding seems to be yours. We can clearly identify white nationalists as actual Nazis, no redefinition required.

a) With no standard definition of the phrases?

b) Some people may be able to, but can Facebook accurately and fairly (despite no standard accepted meaning of many terms) police speech (remember, we're not dealing with people wearing white hats at a rally, we're dealing with speech, which is subtle), at scale? Sure, everyone can agree at the extremes, but when it's close to the middle, it's complicated. It's like "I know pornography when I see it."

> No one is saying that corporations can't censor things. I am saying that this is not censorship. Facebook deplatforming you is not the same as censoring you.

https://newsroom.fb.com/news/2019/03/standing-against-hate/

"Today we’re announcing a ban on praise, support and representation of white nationalism and white separatism on Facebook and Instagram, which we’ll start enforcing next week."

I'm not saying they don't have a right to do this, it is their platform after all. I'm not even saying that it is necessarily or certainly a bad idea. You and I disagreeing on this is fine and healthy. But how can you interpret "you can't say <x>" as not censorship? I asked you earlier for the definition of the word you're using, and you didn't reply. I ask again: please tell me the definition you're using for "censorship", with a link to the source.

> The only thing we're discussing here is the deplatforming of white nationalist content from Facebook. Bringing other races or religions and their extremism in is totally tangential at best, and actively disingenuous at worst.

No, that's the only thing the article is discussing. The HN discussion, and our thread in particular, are discussing broader principles of free speech and fairness, possible downsides of these types of decisions, etc.

>>> I believe that social media is helping incite these incidents, so I do legitimately believe the amount of people participating in hate crimes will be reduced as a result of this.

>> A perfectly reasonable theory, what evidence of this do you have? Are you well read on the topic, or is this just a casual opinion that sounds about right?

> There are many examples in the news recently. Here's one I read just the other day: https://www.washingtonpost.com/politics/2019/03/22/trumps-rh.... White nationalist speech creates hate crimes; the correlation appears clear to me.

"To test this, we aggregated hate-crime incident data and Trump rally data (a different variable than our topic of conversation, but again, no need to bother ourselves with precision or details) to the county level and then used statistical tools to estimate a rally’s impact. We included controls for factors such as the county’s crime rates, its number of active hate groups, its minority populations, its percentage with college educations, its location in the country and the month when the rallies occurred. We found that counties that had hosted a 2016 Trump campaign rally saw a 226 percent increase in reported hate crimes over comparable counties that did not host such a rally. Of course, our analysis cannot be certain it was Trump’s campaign rally rhetoric that caused people to commit more hate crimes in the host county."

You can find a correlation in data for anything you want to support, see: http://www.tylervigen.com/spurious-correlations

Now, stating that doesn't prove your claim is wrong, I'm just pointing out the one piece of evidence you finally provided is little more than an op-ed piece. We should be collecting more and better data on these things if they're important, so we can set evidence-based policy.

Erring on the side of caution (as Facebook is doing) is fine, but there's no need to tell lies in the process as far as I can see.

> Of course, it also seems logical to me without any sources, so sources are just a nice bonus.

This sentence explains this conversation, as well as the general state of political conversation in 2019: facts and evidence are considered completely optional.

> Essentially your argument seems to boil down to: we shouldn't create good policies because crazy extremists might have bad reactions to them.

No, that is one small point of my overall concerns (my "argument", if any, is that you won't provide any evidence for your claims, or assert that none is necessary when curtailing general free speech), and it's an important point. People are becoming incredibly politically polarized to the degree that it is causing strange behavior. Some people lose the ability to engage in logic & evidence based conversations on particular topics, others shoot up churches. Shit is pretty seriously fucked up and doesn't seem to be getting better. Being cautious and thoughtful about non-obvious risks seems like a good idea to me, not something to avoid.


> Praise, support and representation not of speech encouraging harm, but the ideology itself

The ideology itself is an encouragement to harm.


If you broaden your definition of "encouragement to harm" as far as you have, what other ideologies fall under the exact same umbrella? The Marxist-Leninist "revolutionary vanguard" idea certainly seems to fit the bill perfectly. Let's ban that too!

Again, Facebook absolutely has the right to censor political views using whatever arbitrary rationale they want. I just object to dressing it up with PR euphemisms such as "Standing Against Hate". They're not doing it to stand up against hate (Facebook is a cesspool of very thinly veiled hate against immigrants, for example). They're doing it for political expediency and PR.


> They're not doing it to stand up against hate [...]. They're doing it for political expediency and PR.

I mean, sure. But I'm so used to FB doing the wrong thing, that them doing the right thing for self-serving reasons seems like a welcome change of pace. Maybe someday they'll even do the right things for the right reasons?

...yeah, probably not. But still.


> The Marxist-Leninist "revolutionary vanguard" idea certainly seems to fit the bill perfectly. Let's ban that too!

Wait, are you thinking there are masses of people who are okay with Facebook deplatforming white nationalists that are going to be upset if they did the same to Leninist vanguardism?


Totally with you on the accurate labeling: this isn't censorship.

This is deplatforming, which is entirely different.

> This sentence indicates they're picking and choosing which ideologies to censor. [...] This is wrong [...] This is censorship of political thought, not censorship of "hate speech"

Are you saying that hate speech comes out of a vacuum?

These two concepts do overlap.


> Are you saying that hate speech comes out of a vacuum? These two concepts do overlap.

In general, that logic calls for deplatforming every topic that people get outraged over.


People get outraged over tax policy discussions, and no one is talking about banning those.


This despite a long history of violence related to tax policy: https://en.wikipedia.org/wiki/List_of_historical_acts_of_tax...


>deplatforming

In this context, that's simply a neologism for private-sector censorship. Again I want to reiterate that I don't think the private sector should be prohibited from censorship. But using euphemisms like "deplatforming" distracts us from the fact that it is private-sector censorship, and from drawing our conclusions about these platforms, how widely representative they are, and how safe our own speech is on those platforms from future "deplatforming". Thankfully I'm not anti-immigrant, pro-theocracy, anti-abortion, communist, don't hold views like 'homosexuality and transsexuality are not genetic', etc. But if I were, I'd be worried.


Private sector censorship is not a concept. These terms are mutually exclusive by definition.


Please don't dress up your very much arguable opinions and arbitrary definitions of common language as some sort of fact. Your efforts to single-handedly redefine what censorship means are probably better spent making major changes to the wiki article on censorship[1], since it mentions explicitly governments and private organizations may engage in censorship in the first two paragraphs. Or perhaps writing letters to the EFF[2] demanding they take down their articles on private censorship.

[1] https://en.wikipedia.org/wiki/Censorship

[2] https://www.eff.org/deeplinks/2018/01/private-censorship-not...


This has nothing to do with free speech. Free speech has to do with governments curbing your speech, not private companies.


No, free speech has to do with being able to speak freely. If you are prevented from doing that by private means, it's in no way better than being prevented by the government, and for exactly the same reasons. If free speech has any value and isn't just an empty slogan, then it should have value in the face of privatized censorship too. If your freedom has value, would you feel better in a private prison than in a government one? Probably not. Why should it be different with speech?

Imagine your ISP blocked every message of yours that disagreed with their political stance. Would it console you that they're doing it as a private company, or would you demand regulators come and stop it?


A better way to consider this is to ask: what about the platform's freedom of speech? Simply because you have a thought and post it on the Internet doesn't mean that someone is required to retransmit that thought for you. If they decline to do so, they're not censoring you, they simply don't find your content worth platforming. You are not guaranteed an audience for your ideas and they are not compelled to provide you one.

> Imagine your ISP blocked your every message that disagrees with their political stance.

This is substantially different from what is being talked about here: censorship versus deplatforming. No one is removing white nationalists from the Internet, Facebook is simply saying they won't retransmit their ideas. You can still find them if you desire and they still aren't illegal to express.


> This is substantially different from what is being talked about here: censorship versus deplatforming

What the parent post was talking about is "free speech". The claim was that free speech is only about the absence of governmental coercion. This is precisely wrong - absence of governmental coercion is one of the necessary conditions for free speech to exist, but it does not equal it and does not ensure it.

> No one is removing white nationalists from the Internet,

We have already seen hosting companies remove people from the internet for other reasons. Same with banks and credit card companies cutting off certain companies from banking services, etc. I do not see how white nationalists would be an exception. I do not see how the ultimate goal is not exactly this - removing white nationalists from the Internet. Practical implementation of this goal could be hard, but getting within 99% of it - banning them from the space accessible to the 99% of people without technical knowledge of darknets and such - is not.

If it is functionally impossible to speak in a way that anybody can hear you, is it still free speech? Is "go build your own Internet" really free speech?

I've recently read about the infamous "Green Book" - a guide for non-white travelers in the US which specified which hotels, restaurants, gas stations, stores, etc. would serve them. There weren't many, and even where there was no governmental coercion (there were "sundown towns," but even if one avoided those), travel turned into a real hassle. Was that situation normal? I do not think so. I feel deeply disgusted by it.

You can create very bad situations even without governmental involvement. I feel that is what Facebook, Twitter and others are creating right now - even though for completely different reasons than those that gave birth to the Green Book. You could say the people being affected by it now deserve it, as they'd probably impose the same on their opponents if they could - and you'd probably be right that they would. But shouldn't we try to do better? And history teaches us these things start with the outgroups but never end there. Already, disagreeing with any of the positions of transgender activists is verboten. How long before criticizing the Green New Deal, or Keynesian economic theory, or the mayor of the town whose policies Facebook supports, or any other subject the benevolent overlords declare sacred becomes verboten too?


> This is precisely wrong - absence of governmental coercion is one of the necessary conditions for free speech to exist, but it does not equal it and does not ensure it.

No, this is entirely sufficient for free speech.

> If it is functionally impossible to speak in a way that anybody can hear you, is it still a free speech?

Your definition is absurd; essentially you are demanding that "freedom of speech" include not only that you can legally say whatever you want, but that you also are guaranteed a place to say it and an audience who must listen to you. At some point, the freedom of the venue to choose what goes on there, and your audience members to select what they hear, must matter.

> I feel deeply disgusted by it.

You and I both! This is why we have laws ensuring equal access to private services -- however, those laws are based only on characteristics we have determined shouldn't prevent your access (race, sexuality, religion). Having bad ideas does not mean you're part of a protected group.

> But shouldn't we try to do better? And history teaches us these things start with the outgroups but never end there.

This is a slippery slope argument. We already abridge the speech of people who are maliciously trying to create harm (falsely yelling "fire" in a crowded theater is a classic example). Yet despite this we still have substantial freedom of speech today. The fact that we don't allow people to incite panic seems to coexist quite happily with our current rights and privileges; claiming that somehow preventing white nationalists from speaking on Facebook, or credit card companies refusing to take their money, will somehow lead to the death of free speech is just fallacious.

Frankly, we have, socially and legally, always placed limits on speech. The fact that it hasn't been done perfectly doesn't mean we just throw all restrictions out; it means we should always try to weigh the good of free speech against the ills that speech can cause.


> but that you also are guaranteed a place to say it and an audience who must listen to you.

I never demanded that. There's a difference between "audience must listen to you" and "audience would want to listen to me but unable to because I'd have to build a new Internet to do it".

> only on characteristics we have determined shouldn't prevent your access (race, sexuality, religion)

So we agree that denying people access to private services can be wrong even when it's not done by government coercion? But only if it's because of racism? If it's because of other forms of prejudice not on the list, or because of their non-membership in the ruling party, that would be OK?

> This is a slippery slope argument.

It is. We know for a fact that the slope is slippery. We witnessed the descent many times. We know how it works. We are observing the narrowing of the bounds of what is acceptable on major social networks right now. Should we refuse to believe the experience and our own eyes because "it's a slippery slope argument"? Yes, it is - because the damn slope is very slippery!

> (falsely yelling "fire" in a crowded theater is a classic example)

Very bad example. That decision by Oliver Wendell Holmes was overturned decades ago. But if you knew what the actual case was about, you'd be even more wary of using that quote. The case sought to ban American Socialists from distributing pamphlets opposing the draft. If there's any case for free speech to be protected, political speech opposing a government forcefully sending people to die overseas is that case. So maybe when you complain about slippery slope arguments, you don't realize how far the slope has actually gone. Do you actually want to be jailed for criticizing the government? If you do, keep using that example. https://www.theatlantic.com/national/archive/2012/11/its-tim...


> "audience would want to listen to me but unable to because I'd have to build a new Internet to do it".

We are talking about Facebook here, right? Do they have the power to remove your content from the Internet entirely?

> If it's because of other forms of prejudice not listed in the list, or because of their non-membership in the ruling party, that would be ok?

It depends! It is entirely legal to discriminate against someone because they're stupid or ugly. Is that okay? Obviously we have to rule on a case-by-case basis.

> We know for a fact that the slope is slippery. We witnessed the descent many times.

Facebook has banned content it doesn't like since the inception of the platform. You have no more of a right to post there than have your articles be published by a newspaper. They aren't a government and can't police your speech, but they can (and should) decide what they post.

> Very bad example.

Except it's still actually illegal? The rest of your argument simply doesn't apply, since Facebook is not the government.


> We are talking about Facebook here, right?

No, Facebook is just one facet of it. We've seen deplatforming from every major social network, bar none, from Paypal, from Patreon, from banks, from credit card companies, from hosting providers, DNS providers, CDN providers... Virtually every piece of internet infrastructure is subject to deplatforming. And it's not just if you do something - if you provide services to people that do something controversial, you're out too.

> It depends!

You write "it depends", I read "if the mob likes you, you're ok. If they don't - well, you're screwed". Maybe it's a comfortable existence for somebody who never had a controversial thought, but it's certainly far from something I'd call "free".

> Obviously we have to rule on a case-by-case basis.

Who is "we," and why do you have to rule? I don't particularly like being ruled by "we" and judged on its whims. OK, frankly, I can live without Facebook - I left Twitter long ago and could leave Facebook behind too. But it wouldn't stop there, would it? Intolerance to the existence of contrary opinions never just stops because those opinions are now located under a different URL. If it did, having them on Facebook wouldn't be a big deal in the first place - just don't subscribe to people you don't like, ban them if they come to comment, and you'd never read their content. But that's not the point - as long as the opinion exists anywhere, it must be suppressed. The only real end state is elimination of the offending opinion; anything less is progress, but ultimately unsatisfactory.

You may ask, "OK, what's bad about complete suppression of white nationalists?" Well, nothing in this mechanism has "white nationalists" anywhere in it - it would work against you or me just as well as against anything else. Lack of freedom can be very comfortable when the rulers do the same things you wanted done anyway. It very soon stops being comfortable when they don't. Are you sure you'd never say anything anybody could find a problem with? Maybe decades later?

> Facebook has banned content it doesn't like since the inception of the platform.

And the list of banned things grows constantly. Yes, I know. That's exactly what worries me. Not because I like white nationalists - quite the contrary - but because one day whoever decides what to ban may dislike one of my thoughts.

> Except it's still actually illegal?

What are you basing that on? Certainly not on the decision by Oliver Wendell Holmes, since he never decided that (it was a rhetorical flourish for him) - so on what exactly? http://civil-liberties.yoexpert.com/civil-liberties-general/...


> Virtually every piece of internet infrastructure is subject to deplatforming.

This is essentially the marketplace of ideas working as intended -- if your idea sucks you get booted out of venue after venue. Maybe the problem isn't the venues having standards, maybe it's your idea being bad?

> If they don't - well, you're screwed

How is not being able to tell people your bad ideas "being screwed?" You aren't being jailed, your property isn't being seized. People are telling you they don't want to hear what you have to say, but you can continue saying it to anyone who bothers to listen to you. No one is obligated to hear or agree with anything you want to say.

> Who is "we," and why do you have to rule?

Platform owners specifically, citizens in aggregate. We judge the ideas we want to hear and tell people we don't want to hear that we don't want to hear them. This isn't exactly some radical new concept either.

> Well, nothing in this mechanism has "white nationalists" anywhere, it would work against you or me as well as against anything else.

Except it literally has "white nationalists" in the rule itself. How about we ban them? And then rule, on a case-by-case basis, on whether other things should be banned? Like pro-ISIS speech? Or pornography, which Facebook also bans?

The idea that nothing at all can be banned, because Facebook might ban the wrong things, is absurd. If your ideas are bad and no one wants to listen to you, the problem is your ideas, not that Facebook can ban things. No one should be compelled to hear bad ideas. And before you ask, yes, Facebook can decide what ideas it considers are bad on behalf of its users. If you don't like that, don't use Facebook -- problem solved.

> What are you basing that on?

https://en.wikipedia.org/wiki/Brandenburg_v._Ohio

"The Court held that government cannot punish inflammatory speech unless that speech is "directed to inciting or producing imminent lawless action and is likely to incite or produce such action.""

Of course, this is still applicable to the government, not private corporations, again. Private corporations can deplatform whatever they like for any legal reason.


"Imminent lawless action" is not the same as "shouting some words". People shouted "fire" in theaters hundreds of times - e.g. during plays - and nothing happened. Only when it causes - or is imminent to cause - the actual action, can it be punished. Just as saying "shoot him" is not restricted speech - unless you are saying it to an accomplice who then murders a person. There needs to be an independent lawless act, and speech needs to be used as a prerequisite or a tool for it.

> Private corporations can deplatform whatever they like for any legal reason.

Legally, they can. Morally, they shouldn't.


No, free speech is free speech. The fact that, legally, free speech is handled differently if you're the US government vs. a private entity is immaterial to the definition.

But also recognize that (unregulated) private entities have a right to decide what speech they do and don't want to be a platform for. That's the big distinction here.


> Facebook has a complete and unalienable right to do this on their platform (fact)

If you remove "unalienable", then yes, that's correct. Future legislation and regulation could definitely change that.


That's fair.


> To the people that say "Where does it end?" – that's pretty easy. Let's start with curbing speech that encourages harming others.

"Speech that harms people" and "speech that hurts people's feelings" are so closely related that people sometimes (intentionally or unintentionally) mix them up.


Are you saying that white supremacists only hurt people's feelings and do not have real and tangible outcomes? Look at Christchurch, or the innumerable attacks/murders/gig economy fascism in the US perpetrated by the far right.


>Are you saying that white supremacists only hurt people's feelings and do not have real and tangible outcomes?

Are you going to extend that same logic to devout religious followers that want a government aligned with their religion and think non-followers should be forcefully re-educated or killed?


That's a fallacious argument. You could make the same argument about pro-Islamist sentiment (which is all over FB if you know where to look).


It's not fallacious: pro-ISIS stuff is already banned on Facebook so it's literally exactly the same standard.


pro-Islamist != pro-ISIS. It is completely fallacious.


You're right the two aren't equal, that's why pro-ISIS content is banned and pro-Islam content isn't! In fact, you could generalize this to a rule: extremist philosophies advocating violence are banned on Facebook. Islam isn't an extremist philosophy advocating violence as practiced by most of its adherents. But the parts of it that are (ISIS for example) are banned. Similar to white nationalism. Where exactly is the fallacy?


What makes the political aspects of Islam any less extremist than White Separatism? It sounds to me like Facebook has brought down the banhammer on the non-violent part of that as well.


There is no non-violent component of white nationalism. It advocates the extermination of non-white people. If you think otherwise you are either a white nationalist or seriously misinformed.


The problem is your definition of "white nationalism". The article explains the difference between violent white supremacy and white nationalism:

> Our policies have long prohibited hateful treatment of people based on characteristics such as race, ethnicity or religion — and that has always included white supremacy. We didn’t originally apply the same rationale to expressions of white nationalism and separatism because we were thinking about broader concepts of nationalism and separatism — things like American pride and Basque separatism, which are an important part of people’s identity.


Literally the next sentence you neglected to quote says this:

> But over the past three months our conversations with members of civil society and academics who are experts in race relations around the world have confirmed that white nationalism and separatism cannot be meaningfully separated from white supremacy and organized hate groups.

There is no meaningful difference between white nationalism, white separatism, white supremacy, and hate groups. They are the same thing. So... I mean... did you read the article?


> So... I mean... did you read the article?

I did. The snide comment is unnecessary.

That's the question I'm asking: if we accept that these beliefs are on a spectrum and that white separatism (or more generally pro-white politics) slide inevitably into violent white nationalism, who's to say the same doesn't apply to orthodox Islam? Specifically the above poster referenced Christchurch as an example outcome of the former, I argue you can't do so without considering the many examples of the latter.


White nationalism, white separatism, white supremacy, and hate groups are not a spectrum; they are the same thing. There's no difference.

There is a difference between orthodox Islam and extremist violence: just because some violent extremists are Muslim doesn't mean all orthodox Muslims are violent extremists.


You believe it's literally impossible for someone to advocate for separatism, pan-Europeanism, or protection of white culture without being a violent hate group?


Yes indeed! Those views inherently advocate for the genocide of non-white people (and even some people who are white but are "degenerate," like gay people, trans people, Jews, or whatever ethnicity they believe isn't sufficiently "white"). I do believe that some people don't understand what they've gotten themselves in to, but all white nationalism is essentially advocating for the broad extermination of many people.


NO, that's ABSOLUTELY not the case.

If you are looking for examples, search HN for comments that begin with "I hesitate to say this, but" or "I know this sounds XXX".


> "Speech that harms people" and "speech that hurts people's feelings" are so closely related that people sometimes (intentionally or unintentionally) mix them up.

Maybe make this argument when we aren't talking about literal Nazis. Advocating harm is their entire schtick.


> Let's start with curbing speech that encourages harming others.

You've already disproven your "that's pretty easy" note. Who defines "harm"? The goal posts seem to move damn near weekly on this. That's the problem that can't be solved as easily as you purport.


[flagged]


Nazis would say "no."


Yes, but what do you think of them?


You can't show a woman's nipple on Facebook, but you could show white nationalist propaganda. They never genuinely cared about being a platform to begin with.


I miss old Tumblr. It was the last true bastion of a lot of different ideas. Now it's just like everything else: trying hard to be advertiser friendly.


> "Where does it end?"

It ends, as it always does (and must) with massive hypocrisy on the part of the arbitrary censor, since "harm" is so poorly defined that it ends up being replaced with "whatever seems fashionable today".


> It ends, as it always does (and must) with massive hypocrisy on the part of the arbitrary censor, since "harm" is so poorly defined that it ends up being replaced with "whatever seems fashionable today".

You are defending literal nazis. Their idea of speech is Christchurch. Deplatforming that is absolutely the correct move.


> You are defending literal nazis. Their idea of speech is Christchurch.

This whole conversation centers on the relationship between speech and action. Equating the two will confuse the discussion.


What does that even mean? Like last I checked a whole big chunk of the world may have gone to war for a few years over the subject of, "Nazis are bad."

Kicking them off Facebook isn't anywhere close to that large of a response.


The virality of violence is a bad aspect of network effects. I think we agree online calls for actual violence should not be promoted.

What I have a difficulty understanding is how and who will define "violence". Inciting direct violence is one thing, but people like expanding definitions. Is expressing support for one state which wants to overthrow another state "violence"? Is supporting China (in some aspect) but which none the less persecutes some people, "violence"?

Is buying almonds violence because it deprives some people in some areas of California of municipal water?

What about suppressed minority groups in Sudan who oppose a regime, or likewise South Sudan? Are they proposing and promoting violence?

An uprising in Teheran, is that violence? Suppress it? Are they pro regime, are they anti-regime? Is it fluid, is it at times both? What do you do?

This gets very dicey very quickly.


There's emerging evidence that emotional contagion via social media is a real phenomenon, and indeed much commercial exploitation of social media arguably depends on that. That has many positive aspects, such as meeting my appetite for heartwarming videos of cute animals, but it can also be leveraged for less wholesome purposes.

Definitions are absolutely fluid, not least because the science of measuring and mapping such contagion in virtual spaces is currently in a fairly rudimentary state. We're building our technological infrastructure out much faster than we are developing an understanding of how it impacts our future.


I would say that if the mainstream narrative is that your ideology shows up in a bunch of mass shootings then you get pushback. We thought it made sense to take down ISIS content, now it makes sense to take down white supremacy content.

Presumably this wouldn't be happening if not for the dozen or so mass shootings with oddly similar perpetrators in the past 5 years.


There has been more left wing political violence in the US recently, the media simply disproportionately reports on right wing violence.

Why isn’t Facebook banning leftist propaganda when groups like Antifa regularly use it as justification for political violence?


That's definitely one interpretation, I left that as an option when I said "mainstream narrative". Facebook wouldn't ban leftist groups unless the mainstream narrative is that antifa shoots up rural churches.


> Let's start with curbing speech that encourages harming others.

Raising taxes would certainly harm me. In fact, in a much more direct way than the impotent babble of a bunch of idiots obsessed with the color of their epidermis. So I guess we should start with banning any speech advocating raising taxes?


> Raising taxes would certainly harm me. In fact, in a much more direct way than the impotent babble of a bunch of idiots obsessed with the color of their epidermis. So I guess we should start with banning any speech advocating raising taxes?

So impotent that they never walked into a synagogue or mosque and shot up a bunch of people. Oh wait...


A lot more Muslims kill each other (sectarian violence). Should Facebook ban expressions of Muslim superiority or Shia/Sunni superiority? Forget about protecting Westerners. Why not do it just to protect other Muslims? Yet I don't see that happening any time soon. Why the double standard?


Facebook already does ban a lot of extremist content, including pro-ISIS content. This protects both Westerners and Muslims.

Leaving that aside, banning white nationalism does not prevent Facebook from banning other content it considers extremist in the future, regardless of what you see happening any time soon.


Whataboutism is unbecoming and facebook is taking actions against ISIS and related accounts.

Weak meme, 2/10.


The GP's example of raising taxes might be surprisingly hard to refute.

For example, if the U.S. government's tax revenues were much lower in the 1960's, would they still have invaded Vietnam? Perhaps not entering Vietnam would have saved far more lives than are lost to domestic mass-shootings.

My main point is simply that causal relationships between policies and (various definitions of) harm can be highly speculative and messy.


Okay, but "can be speculative and messy" doesn't suddenly mean we can't make any judgments at all. Here the relationship seems quite clear. Why can we not act here, just because a theoretical example exists where it might be more complicated?


Let's make it easier and less messy. Suppose there's a policy that you know for certain would harm me and certain other people, but would not produce any other effects. Say there's a law that says I must pay a special tax that will be distributed among the ice cream makers of the city. Ice cream is no Vietnam war, right? Nothing that awful. And if I don't like ice cream, such a law is obviously not good for me. These aren't entirely fictional examples - we have hundreds, if not thousands, of redistributive laws on the books. They hurt some people and arguably help others. Some people claim these are positive-sum laws; some think they're zero-sum or negative-sum. But whatever they are, somebody is getting hurt - and not just by reading some unpleasant speech.

Should people be banned from advocating for or against such policies on Facebook? I mean, if one advocates for, it would hurt one set of people. If one advocated against, it would hurt another set of people. If you say we must raise taxes, you will be repugnant to Libertarians. If you say we must lower taxes, you will be repugnant to welfare state proponents. Should we just limit ourselves to posting cat pictures on Facebook? Nobody would disagree with cats, right? Right?


There's nothing 'speculative and messy' about the harm Nazis did and do.

We went to war over it.


So we should ban all people who have the same thoughts as somebody shooting people? OK, we've just banned: Muslims, Christians, Hindus, Jews, Buddhists, Atheists, Communists, Anti-Communists, Socialists, Libertarians, Maoists, Trotskyists, Marxists, Environmentalists, Bernie bros, Republicans, Democrats, Independents... Pretty much everybody. Except for you of course.


If their members regularly go out and start shooting people in synagogues and mosques which their pathetic online 8chan friends cheer them on? Eeeeyup, boot their asses right out.


> If their members regularly go out and start shooting people in synagogues

But they do. Not all of them in synagogues and mosques of course; some of them do it in stadiums, night clubs, buses, trains, buildings, and some use explosives, poisons, cars, etc. But adherents of all the ideologies above have committed acts of murder. So, by your logic, every adherent of such an ideology should be banned. Or just those that visit 8chan? It's hard to tell these days.


Not as long as you are represented. You do vote, right?


I realize I kicked up a hornet's nest by accidentally omitting one word: "physical" – I meant to say let's start by stifling speech that directly calls for _physical_ harm.

The next would be maybe an argument of banning speech that _indirectly_ calls for physical harm, since that leaves room for interpretation. Calling for a white ethno-state indirectly calls for physical harm to all non-whites. Why? Because how the hell else are you going to move us out, other than by force? Are you planning to pay each of us $2 million to leave the country? Obviously not.

The third would be all other forms of hate speech (which arguably would cause even more debate).


I'll skip over your answering "where does it end" with where we start, but more importantly:

This is a game, because it initially masquerades as using the definition of "harm" any reasonable person would use, while simultaneously working the other end to redefine and vastly expand the concept of "harm".

We're now supposed to recognize unpersoning, microaggressing, belittling, twitter-mentioning as harm, and it's naivete bordering on manipulation to try to make the case that our understanding of harm will not continue to expand.


> To the people that say "Where does it end?" – that's pretty easy. Let's start with curbing speech that encourages harming others.

This may come as a surprise to you, but this response will not be comforting to those with genuine concerns about civil liberties. Avoiding the question of where to draw the final line with the question of where to draw the first one just implies that you intend to draw more lines in the future.


> Avoiding the question of where to draw the final line with the question of where to draw the first one just implies that you intend to draw more lines in the future.

No, it just implies that we don't need to have the perfect final answer now. Maybe we'll draw more lines, maybe we'll find out we don't need to, maybe we'll decide that this line was drawn too far and pull it back. We don't need to pretend that we have all the answers yet—we don't even need to pretend we have all the questions yet.


You are very wise and thoroughly correct!

Perhaps people might consider offering such elegant points as yours, instead of making the claim that the current line is correct and that's all that matters.


I would argue that the people who are concerned about "what future lines will be drawn" when the first line being drawn is banning white supremacists are the ones who should be making more elegant points.


This isn't the first line though–Facebook already bans things like ISIS propaganda. This is in a similar bucket.


White nationalism isn't even remotely like ISIS propaganda. I am no fan of that ideology for sure but it's actually about something very different than most people think and most people haven't actually spent the time understanding their argument before they form an opinion about it.

Edit: I find it pretty crazy that this can't even be discussed without people accusing me of all sorts of things.


These recent white supremacist terrorists like the Christchurch attacker all have the same white nationalist memes and talking points on their social media. The Christchurch terrorist said in his own manifesto the white nationalist memes played a part in his radicalization. The white supremacist groups in their guides admit that their ironic racist memes are propaganda to get people in their funnel and progressively radicalize them.

I'm glad that after these white supremacist terror attacks and seeing how these attackers all share the same things on social media, more people are waking up and realizing these memes are like ISIS propaganda.


> it's actually about something very different than most people think and most people haven't actually spent the time understanding their argument before they form an opinion about it.

That was facebook's policy as of yesterday.

They changed it for good reason, and it was not a small reason to overcome that much inertia.


>White nationalism isn't even remotely like ISIS propaganda.

I don't think so. One wants a caliphate, one wants an ethnostate. The end result is the same for anyone who isn't part of the extremist in-group.


One dehumanizes people based on religion and the other dehumanizes people based on skin color. They are both cars, one is blue and the other is yellow.


Yeah it is. I'd prefer not to post example content here on HN, but like most political movements ISIS/jihadis and white supremacists have both a public and private position, and the public one may often appear innocuous or reasonable at first appearance, although there are differences in recruiting tactics, eg https://cchs.gwu.edu/sites/g/files/zaxdzs2371/f/downloads/Na...


Stochastic terrorism? It's still a terrorism pipeline.

It's all well and good for you to say "white nationalism is very different from what most people think," but that doesn't stop the OP from being right.

Nationalism of all kinds is rooted in insecurity, and white nationalism in particular is propagated in colonial societies to encourage the alienation of brown colonial workers via, among other acts, stochastic terrorism.

It's a lot like ISIS propaganda.


> it's actually about something very different than most people think

Can you explain what you mean by this?


Probably something like:

"We don't encourage people to commit terrorist attacks, we only encourage them to do what it takes to ensure the survival and dominance of the white race."


[flagged]


> it doesn't in it's essence include any form of violent action at least according to ex their US leader

Who would you consider that to be, please?

> ISIS ideology is about killing the infidels and actually doing it

I really hope you're not going to offer a 'no true Scotsman' argument for why the people who do commit murders or other terrorism in support of WN ideology should be considered as separate from the people espousing said ideology even though quite a lot of them advocate exactly such action.

'Survival of the white race' suggests there is some looming danger of extinction, a position which seems to have little foundation in fact and mostly depends on the trope that non-white people are rushing to outbreed everyone else - ignoring abundant data showing that high birth rates are correlated with poor mortality outcomes and extreme poverty, and move in inverse proportion to economic security.

I might add that obfuscation and misdirection are key components of far-right recruitment strategies. Refer, for example, to the style guide from the editor of the Daily Stormer, which emphasizes the rhetorical use of humor, irony, and ambiguity to obfuscate the primary agenda while still achieving emotional activation: https://www.huffingtonpost.com/entry/daily-stormer-nazi-styl...


If the theorists and proponents have such a difficult time separating an ideology from attempts at violent implementation of a program based on that ideology, then perhaps the violence is in fact part of the ideology after all?

If it quacks like a violent duck ...


I'm still having a hard time understanding the difference. From Wikipedia, as you suggested:

> White nationalists say they seek to ensure the survival of the white race, and the cultures of historically white states. They hold that white people should maintain their majority in majority-white countries, maintain their political and economic dominance, and that their cultures should be foremost. Many white nationalists believe that miscegenation, multiculturalism, immigration of nonwhites and low birth rates among whites are threatening the white race, and some believe these things are being promoted as part of an attempted genocide.

And the same paragraph with a few substitutions:

> [Sunni militants] say they seek to ensure the survival of the [caliphate], and the cultures of historically [Islamic] states. They hold that [Islamic] people should maintain their majority in majority-[Islamic] countries, maintain their political and economic dominance, and that their cultures should be foremost. Many [Islamic] nationalists believe that miscegenation, multiculturalism, [and] immigration of [non-muslims] are threatening [Islam], and some believe these things are being promoted as part of an attempted genocide.

ISIS may be comparatively more violent than white nationalists[1] but as far as ideology goes I struggle to see "two very different things" as you've suggested.

[1] At this moment in history at least - the Nazi party certainly fits the Wikipedia definition of "white nationalist".


Compare them to antifa. Should FB also ban anti-capitalist content because some of us think it's outright antihumanist, just as much as white supremacy is?

This is the slippery slope you are walking down here.


I’m not walking down any slope as I haven’t said anyone should or shouldn’t be allowed to do anything. You said white supremacy “isn’t even remotely like ISIS” and that people only think that because they “haven’t spent the time understanding” so here I am trying to understand.

So far I’m pretty unconvinced.


I don't think anything will convince you if you don't see the comparison to antifa as being more apt than ISIS.


How so?


[flagged]


You are damn right I would ask people to research what things actually mean before they go claiming other people spread fascist propaganda.

I am perfectly capable of both trying to understand the world I live in and why some people think what they think, and at the same time disagreeing with them.

If you think you are helping fight their ideology by banning it you are going to be sorely disappointed. That will not end well.


I have genuine concerns about civil liberties, but as others noted, there's a difference between using the power of government to prevent speech and a company refusing to provide a megaphone.


Well, we already have a point of comparison in how platforms handle content from groups like ISIS. The thing about groups who adhere to eliminationist positions is that they so obviously don't care about anyone else's civil liberties, so one could argue that respect for the rights of others is the ante required to sit at the social table.


We always draw a line somewhere when we write a law. Saying "I don't want to draw a line at all" is an attempt to abdicate our responsibility to protect weaker members of society under the guise of some higher ideal that is not threatened by good laws.


You're absolutely right! We unquestionably have a categorical imperative to protect our vulnerable friends, neighbors, and community members.

To those ends, I don't want to not draw lines. I want some kind of clear criteria and process around the drawing of lines. Perhaps we could consider a system whereby we can make sure that the power to draw lines is not used against weaker members of society under the guise of some higher ideal.


I wouldn't doubt that there are genuine concerns, but what Facebook is doing has nothing to do with actual civil liberties, which isn't something that FB grants or could even take away.


It's a surprisingly comforting response. This ends with decentralization. Let's force all platforms to ban anything even mildly offensive.


Tell us more about your vision for decentralization? Is it anything like twitter.com/BuyThisPlatform ?


I think email works really well as a federated solution. Mastodon is trying to make a Twitter clone that works more like email. That's the logical evolution here IMO. Eventually, we probably arrive at something truly decentralized on a blockchain governed by smart contracts, but that's not at all needed to replace the major platforms. A federated answer similar to email would be good enough. Then each server in the federation can choose what content to replicate or ban.


FB isn't the government though.


The point being missed is that this is action on things Facebook CAN understand.

How the heck are they going to deal with pro-separatist or Uzbek socio-political issues?

What FB has realized and is realizing is how to play the PR game, because they have to.

----

And if that means getting rid of Nazis, more power to them.


> Let's start with curbing speech that encourages harming others.

The day facebook bans advertising about abortion will be the day I believe that line.


> that's pretty easy. Let's start with curbing speech that encourages harming others.

I think taxes should be higher.

Whoops, did I just encourage harming others?


> speech that encourages harming others

How can it be consistent when some of the very same people that consider white nationalist speech as harmful encourage the spread of antifa's ideals?

I'd rather everyone feel free to share their ideals so I know who to avoid. It's going to happen anyway, and I'd rather it not be restricted to behind-closed-doors, so to speak.


If this sort of thing leads to Section 230 repeal though, are we really any better off?


I don't see why this would lead to an S230 repeal if banning nudity doesn't.


> Let's start with curbing speech that encourages harming others.

No doubt, explicit threats or advocacy/conspiracy to violence exist, and should not be tolerated.

However, there are countless complicated grey areas subject to interpretation and potential bias. Was Kathy Griffin advocating violence by holding up a mock Trump severed head? Is “Abortion stops a beating heart” a coded message to inspire violence against abortion doctors? What about “Eat the rich” or “The only good billionaire is a dead billionaire”?

My favorite edge case is this WKYK sketch, which is clearly satire, yet trivial to indict as an explicit call to unlawful violence: https://youtu.be/eg3_kUaYFJA


If your argument for restricting free expression can just as easily be used to justify, say, Hollywood studios blacklisting communist screenwriters, that's another thing to consider.


To the people that say "Where does it end?" – that's pretty easy. Let's start with curbing speech that encourages harming others.

Welp, there goes the Declaration of Independence.


...because nazis can't use Facebook to plan the next Christchurch?

I feel like maaaaaybe there's a few steps in between the two.


Let's ask King George what he thinks about that.

Awfully easy to answer questions like this by going straight to "Nazis" or "terrorists" or "child molesters," isn't it? When a complex social problem seems that obvious and straightforward to resolve, that's when you really need to challenge your own thought processes.


When the problem is with actual Nazis? Yep, real easy.


This form of hateful speech will not stop. The only immediate consequence is that it will go underground. The bigger consequence is that it will be much harder for authorities to monitor. Under the previous status quo, it was an intelligence treasure trove. Suspected future terrorists would openly tell everyone their thoughts, their location, and more importantly their friends and associates. All of that will be gone now.


If they go underground it makes it a lot harder for them to radicalize idiots on facebook. Sounds like a win to me.

The average person isn't exactly curious about different ideologies, but when Facebook's and YouTube's algorithms throw it in their face constantly? Probably something we want to fix.


I don't believe that the average person is interested in racist ideology. The people it tends to attract are disaffected white males. They tend to have poor education and poor job prospects i.e. no hope for the future. My guess is that they were already radicalized before being heavily exposed to racist ideology online. They will still be able to cluster online. It just won't be under our watchful eyes anymore. It'll be similar to how pedophiles cluster online.

I hope that you're right and that I'm completely wrong about this.


The average person (which includes these disaffected white males) is an idiot. They'll pay attention to whatever Facebook's and YouTube's algorithms put in front of them without question.

There are always disaffected people, but throwing this crap at them day in and day out for ad dollars is just throwing gasoline on a fire for a percentage of the insurance money.


Do we really know who is right here? Do we have data on how much social media helps extremist groups grow vs. how much social media surveillance helps authorities catch members of these groups and foil their plots?


Their alternative destinations are pretty predictable, and their social networks are already closely monitored and infiltrated. The aim is to keep the cost of re-establishing their networks above that of tracking them.


> Their alternative destinations are pretty predictable, and their social networks are already closely monitored and infiltrated.

How do you know? or are you confusing "alternative destinations" for facebook and twitter?


Good question.


The only reason the valley has trumpeted the "we're just a platform" is that it was better for the bottom line and shareholders. Now it no longer is.


You make this just about greed when it's also about toil. Nobody likes dealing with the filth that some people share. The armies of content moderators that have to deal with it don't have happy jobs.

This is a category of labor that formerly wasn't necessary, and yet somehow now it is. Why is that? How can we make it go away?

Perhaps the monetary incentives will provide motivation to solve it.


Reddit went through this too.


Reddit still allows white supremacists and other assorted awful things to go through their platform.


For someone who cares more about freedom of speech and the free sharing of ideas, than what political stance people have it's a sad day.

However, at least Facebook now admits their political bias and people can make informed choices.

And no I do not support white nationalism.


> And no I do not support white nationalism.

Saying that white nationalists, despite their inherently violent nature, deserve a platform is itself a form of support.

You're not acting as a neutral arbiter. Instead, you're saying that this group who spouts violent rhetoric, despite the fact that we as a society have said hate speech should not be tolerated, ought to be able to spread their message.


>Saying that white nationalists, despite their inherently violent nature, deserve a platform is categorically support.

What? There's no room to support the right of certain people to speak without "supporting" them? "Deserving a platform" is a total red herring here. There's room to worry about the largest platform in the world suppressing speech without claiming that those it suppresses "deserved" that platform.

>despite the fact that we as a society have said hate speech should not be tolerated

I can't believe you are serious at this point, because it is just so obviously untrue. Have you read what people have had to say in this thread at all? Are you aware that the position of the ACLU is that not only is "hate speech" to be tolerated, it is protected by the First Amendment in the US? https://www.aclu.org/blog/free-speech/free-speech-can-be-mes...

I say this as someone who's pretty okay with Facebook taking down white nationalist stuff. As big as it is, Facebook simply isn't big enough to create major worries about controlling speech. My problem is that you're not making an argument here at all; you haven't really come to terms with the fact that quite a few reasonable people disagree with you.


> What? There's no room to support the right of certain people to speak without "supporting" them?

When their speech is to advocate violence against every other group that isn't like them? No there isn't.


I don't know about you but I am perfectly capable of holding two parallel opinions at once.

I can both think their ideas are absurd and at the same time support their right to have those ideas.

That's the very definition of free speech in the US. There are many ideas and messages I don't think should be spread; that doesn't mean I am against people being allowed to spread them.

You don't know what kind of fire you are playing with here if you think your position is the rational one and will have a good outcome.


> I can both think their ideas are absurd and at the same time support their right to have those ideas.

We're not talking about having an ability to have an idea, we're talking about responsibly deplatforming hate speech.

> You don't know what kind of fire you are playing with here if your think your position is the rational one and have a good outcome.

I can say the same of you and your naive belief that the fascists will be beaten by triumph, will, and logic.


Antifa commit hate speech per your definition and conduct violence; should we also ban anti-capitalist content? Where does it end?

No, you can't say the same about my position. Why do you think we have the 1st Amendment? Do you understand what led to it?


> despite the fact that we as a society have said hate speech should not be tolerated

Here's your problem. Free speech isn't about "we as a society".

"We as a society" had decided that the earth was the center of the universe before Galileo decided to defy "we as a society" to state something unpopular.

I'm not suggesting that there is any value in white supremacism like there was in Galileo's speaking of heliocentricism, but his lack of access to free speech protection meant he was persecuted for speaking the unpopular - the idea that you can ban "certain kinds of speech" is going to do precisely the same.

Where exactly is the line drawn at what is considered "white supremacism?" What if somebody says something which isn't a white supremacist viewpoint, but some loon misrepresents the view as if it were? You don't seriously think it will end at white supremacism do you? This is just the first step towards incrementally dismantling your right to free speech.

The solution to "bad speech" is "good speech", not attempting to issue bans. The Streisand effect is in full force, and each time you try to ban it, you accidentally give them more leverage. The solution is to ignore. Stop advertising for them! That's all you're doing.

That doesn't mean that we should ignore actual hate crimes - but can we get back to a sane world where people are innocent until proven guilty, and where we have a legal system where judge and jury get to decide the punishment for crimes - rather than bored, virtue signalling keyboard warriors.


I agree with your argument and I highly recommend Brendan O'Neill's argument on offensiveness:

https://www.youtube.com/watch?v=BtWrljX9HRA

At one time, advocating for gay rights or saying god doesn't exist was considered offensive. Only two decades ago, major television networks wouldn't allow jokes critical of Christianity to remain advertiser friendly. A recent episode of The Simpsons had Jesus beating Homer with a sign that said "Love" (standards and practices from 20 years ago wouldn't allow 'Jesus' to even be said in a comedy, which is why "Great Zombie Jesus" was cut out in later edits of the Futurama episode Atlanta).

It is offensive speech that has led to civil rights, gay rights and independence from autocracy. What is and isn't offensive tends to change over time.


When a group's ideas all hinge on abridging the freedoms of others appeals to universality are hard to take seriously.


This isn't about sharing legitimate political views at this point, it's about helping prevent radicalization.

White nationalist content should be viewed no differently than content by the likes of isis. I think it's completely reasonable and responsible to ban it from the platform.


>However, at least Facebook now admits their political bias

White nationalism is a... Politics?


Why are white people specifically targeted? As someone of an Indian background, I've seen plenty of Hindu nationalist content on Facebook, and -- despite it actually affecting my Christian family who are still in India -- there is no ban on that content. Such nationalism has led to more violence and death than any of the white nationalism stories.


Cynical Answer:

It's affecting FB's bottom line.

White nationalism is in the news and on the rise in the US/Europe and therefore spooking advertisers who only really care what the pundits in US/Europe are talking about.

This soothes them over = money for FB.

My hope is that the algorithms and models developed for slowing the spread of White Nationalism will also work (down the road) for things like Hindu Nationalism, Chinese, etc.


Is that really cynical? The presence of hate groups on your platform being bad for your bottom line is a good thing, right?


> The presence of hate groups on your platform being bad for your bottom line is a good thing, right?

Yes it undoubtedly is, but they only seem to care about it now.

I would love to see a more proactive approach, but unfortunately proactive approaches don't often generate profit.


Sure, doing it earlier would be good, but at any point in time the only choices are to do something now or do it further in the future. So I'm glad they chose now.


> Why are white people specifically targeted

They're not. Why is white nationalism specifically targeted? Because there's a lot of it in the US and the anglosphere in general which is Facebook's home and primary market.

> I've seen plenty of Hindu nationalist content on Facebook, and [...]

Well, that's something they should be addressing too and I agree it has significant negative effects. At least now you know who to address at Facebook about it. I know other people have been making that exact complaint to Facebook, such as members of the Dalit caste who have been on the receiving end of violence from Hindu extremists with very reactionary views on caste.


> Why is white nationalism specifically targeted? Because there's a lot of it in the US and the anglosphere in general which is Facebook's home and primary market.

Facebook has 1 billion + users, which means the majority are necessarily not American. Now I don't know where those other users are from, but white nationalism is not the largest source of terrorism related killings worldwide, even if it is a problem in america. The fact is that white nationalism is -- for whatever reason -- seen as something worse than all the other forms of racism, and that perception is itself racist. There are two forms of such racism, either straight up dislike of white people or an infantilization of the racism of other cultures that absolves them of their own racism (well they didn't know better...)


You're consuming American news media about an American company with PR problem in America concerning white nationalist extremist content hitting American eyes.

Hope this helps.


Sure, but, if that were the case, I would expect to be able to search for and find articles about similar handling of racism in other countries in media outlets from those countries. I have not, so I cannot attribute the single mindedness of facebook's decision to the news media I consume.


You could look into past controversies about people abusing Facebook for genocidal purposes, such as this one: https://news.ycombinator.com/item?id=18225364

You aren't likely to find much media coverage of this question from sources within Myanmar for reasons that I hope will be obvious.


In fairness, they've been censoring middle-eastern terrorist content (Al Qaeda, ISIS, etc.) for quite a while now.


According to their statement, they have long banned for "hateful treatment of people based on characteristics such as race, ethnicity or religion", and white supremacy has long fallen under that. They also say that nationalism does not usually fall under this, with things like "American pride" or "Basque separatism" being an important part of people's identities.

What they say is changing is that in the specific case of white nationalism and separatism the overlap with white supremacy is nearly total to the point that they cannot be meaningfully separated.


This comment confuses me unless the claim is that Hinduism forms a nation within India, and the existence of such a nation justifies the violence committed in its name.


Because it's in the news since the New Zealand mosque shooting.

Maybe Facebook should ban Hindutva organizing on their platform. That seems a perfectly reasonable position.


My point in posting this was to point out that Facebook is not a benevolent organization policing speech that disagrees with its internal morals.

Rather, it is a business that bans speech based on whether or not such speech hurts its profit margins. Those advocates of free speech should realize that Facebook is not currently a platform for speech, but rather a mass publishing company whose sole purpose is to sell ad space to marketers.

We should not expect a private company to serve the public interest out of benevolence. Moreover, it is worrisome that Facebook feels okay to ban speech -- again not based on some internal moral compass -- but rather on the whims of advertisers. It's fine to cheer when those advertisers agree with you and speech you dislike is banned, but the market is fickle, so all the people cheering this action shouldn't be shocked when Facebook bans speech you think should be there because some business doesn't like it.


Shouldn’t we be glad when incentives align like this? It seems bizarre to disapprove of a company doing something bad because of profit motive and disapprove of a company doing something good because of profit motive. If you think profit motive is inherently bad regardless of any outcome, then fine, but that should be the focus of the argument.


The alignment of incentives is as fickle as the alignment of the stars. There's no telling when the incentives will line against you; that is why it's worrisome in the general case.

> If you think profit motive is inherently bad regardless of any outcome, then fine, but that should be the focus of the argument.

That is not at all what I claimed. All I suggested was that we shouldn't be cheering the banning of speech based off the profit incentive since there is no clear alignment in the general case between profit and ethics.


> The alignment of incentives is as fickle as the alignment of the stars. There's no telling when the incentives will line against you; that is why it's worrisome in the general case.

That can be true. But any other concept that influences decision-making can also be fickle, e.g. "what the company thinks is moral." In fact, I would be more concerned if I saw a company doing something that seemed bad for its bottom line but was motivated by the company's supposed moral values, because I know that the bottom line is always by definition an existential concern of the company, whereas its supposed moral values are not.


Banning Hindu Nationalists from its platform is a pretty reasonable position that FB should take.


Non-Hindus do not have sufficient political or economic power in India and/or in Facebook's multinational context.


Probably because Facebook is an American company.

It sounds like you have legitimate concerns about further content that should be banned and I fully support this and further efforts to draw Facebook's attention to the issue.


Facebook only moderates posts in English.

There were plenty of videos of the New Zealand shooter still up on Brazilian Facebook.


I imagine they'll get around to it eventually.


Your assumption is that this targets white people generally. White supremacist content is spread by a tiny minority of violent extremists in hopes of recruiting others to their cause.


How is that an assumption? The post is very clear that this is targeting white nationalism/separatism/supremacist. Black nationalism, Jewish nationalism (ie, Zionism), etc, are all perfectly okay.


In light of the evidence, it's a measure to target the groups that are disproportionately responsible for the majority of violent ideology-driven attacks in this country.

https://www.splcenter.org/hatewatch/2018/09/12/study-shows-t...


I think the benchmark here is the use of Facebook to help spread genocide in Myanmar. I'm hoping the company learned something there about how dangerous hate ideology is on their platform, including white nationalism. https://www.nytimes.com/2018/10/15/technology/myanmar-facebo...


You can't be non-white and nationalist.

(obvious sarcasm, apparently not)


So now that Facebook is explicitly editorializing content, are they still exempt from safe harbor provisions, or are they legally responsible for illegal content published on the platform?


Separatism too? As a Spaniard I couldn't be happier about that.


As a Canadian, I would be horrified if they banned all speech related to separatism. Quebec should absolutely be entitled to rant about wanting to separate itself from the rest of Canada. That's not extremist speech; there is no realistic risk of violence from it. It's just a widely held (slightly less so recently) political opinion about how we should organize ourselves into political bodies.

I don't know what issues Spain has with separatism, maybe there are some legitimate ones, but not all separatist groups are inherently bad.


You'll be disappointed here. They are specifically targeting white nationalism/separatism. They specifically exclude other types in the article.

> We didn’t originally apply the same rationale to expressions of white nationalism and separatism because we were thinking about broader concepts of nationalism and separatism – things like American pride and Basque separatism, which are an important part of people’s identity.


So separatism is only bad if it targets non-whites?

So when Basques and Catalans say they want to leave because the rest of the country, especially the south, is full of half-moorish lazy fucks, that is cool.

Guess the same about the two halves of Italy.

I don't care as I don't use Facebook, but it's a double standard.


> So separatism is only bad if it targets non-whites?

I can't speak for Facebook, and I'm not super with-it on the conflicts in Spain, but it seems like they're making the distinction between "we want our own country" and "we want our own country and we want to violently purge everyone else from it".


But they don't say that, and if they did, you certainly wouldn't need any new policies to ban them. There are the 8chan lunatics, but outside that there are a wide range of views on what the white nationalist end-goal is and how to achieve it.

I'm reminded of the Sargon v. Richard Spencer debate here. Sargon's line of attack was essentially this: you advocate for an ethnostate; people will not react well to the non-violent policies you publicly advocate to bring this about; thus there will be violence; therefore your rhetoric is violent. Even if you accept this line of reasoning, you have to admit that it can be applied to any separatist movement anywhere in the world.


> But they don't say that

Nor should they. I helped run a large, active web forum for years. We found general rules and moderator discretion were far more valuable than a list of 800 different "you can't do foo" specific rules, because the bad actors tie up all your time lawyering about the specifics.


As long as you're admitting that there's no general principle being evenly applied here, and it is entirely discretionary on Facebook's part, we have an understanding.

e: But it's not an arbitrary exercise of power when I do it!


One can have a general set of principles behind moderation decisions without making them available to the public.


> American pride and Basque separatism, which are an important part of people’s identity.

I'm not saying it should be banned, but "American pride" being an important part of people's identity is really part of the problem.


You can be separatist if it's not by race.

And in your rush to snark you skipped the part that calls out "American pride and Basque separatism" as important things to leave alone.


"Race" is a de-facto cultural category, at least in the U.S. It's not merely a description of someone's outward appearance; the reason people even care about the construct in the first place is its social aspects, which happen to be rather pervasive. It's not obviously wrong to advocate for separatism on the basis of such things, nor is separatism per se hateful or violent.


>You can be separatist if it's not by race.

Except you can't, which is why for example "Islamophobia" is a thing. Things that should be ideologically separate are inevitably tied back to race/ethnicity, because nobody seems to know how to separate race/ethnicity from ideology in debate. Just listen to how much flak Sam Harris gets on the subject.


Let me guess, they'll do it with a total lack of transparency and accountability, e.g. https://www.youtube.com/watch?v=QzlPhxf4Rd0 ?


But what exactly is "white nationalist content"? If I say something positive about Trump, will it already get me banned? What if I disagree with something a PoC says?

Of course FB as a private company is entitled to censor whatever they want.

In a general sense, I don't really understand the need, as usually in social networks like Facebook people only see the stuff they want to see. There are lots of tools to facilitate your own filter bubble, and people use them. So the worry that "ideology x is spreading via Facebook" is somewhat unfounded in my opinion.


I can't believe people are applauding this.

The whole "you can go make your own platform, facebook/google/twitter/youtube doesn't owe you anything" excuse is bullshit. Look at what happened when people DID try to make their own platform with Gab. They were DDOSed and people uploaded child porn and then reported it to try to get the whole site shut down to prevent people who were silenced elsewhere from talking there too. They're behaving more like the nazis than the white nationalists!

Cheering on authoritarian censorship of things you don't want to hear doesn't make you virtuous it makes you a coward.

The solution to bad speech isn't censorship, it's better speech:

First they came for the socialists, and I did not speak out— Because I was not a socialist.

Then they came for the trade unionists, and I did not speak out— Because I was not a trade unionist.

Then they came for the Jews, and I did not speak out— Because I was not a Jew.

Then they came for me—and there was no one left to speak for me.

-Martin Niemöller


will they ban Black Nationalism as well?


[flagged]


Speech that is about "exterminating nonwhites" is already illegal speech. It is incitement to commit crime. There does not need to be new rules or laws to prevent this speech - there just needs to be enforcement of the existing rules, which is fine, and I think everyone, including most right-wingers will agree that such laws should be enforced, and it is correct that media platforms block posts as soon as they are alerted to them.

The issue isn't about banning speech which is about "exterminating nonwhites", but about banning speech which discusses something else, and which takes a gigantic stretch of imagination to be considered about "exterminating nonwhites". What you want to ban isn't speech which is illegal (incitement to commit crime), but in fact anything you imagine can be misrepresented as equivalent. The only limit to your restrictions on speech is your imagination!


> Speech that is about "exterminating nonwhites" is already illegal speech. It is incitement to commit crime.

It's not nearly that simple. That speech is only illegal if it is thought to cause "imminent lawless action". IANAL, but just advocating for "exterminating nonwhites" at some future time seems legal.

https://en.wikipedia.org/wiki/Brandenburg_v._Ohio


> Speech that is about "exterminating nonwhites" is already illegal speech

No, it's mostly not in the US.


Please. You are pretending White Supremacy is this totally benign thing because you are ignoring reality in pursuit of egalitarian ideology. You cannot simply ignore context for the sake of argument.


You can take anything out of context and present it as something else to make an argument, and it goes both ways. People can take something totally benign and present it as if it were a white supremacist argument.

It's not hard to imagine this being stretched to limit my speech. I'm white, and I'm also a nationalist - one who believes that nation states are the best means of managing humans who have a shared culture and traditions. How much of a stretch would it be to call me a "white nationalist?" I mean, lexicographically, it is an accurate use of two adjectives which describe me. However, I'm certainly not a "white nationalist" in the sense that might suggest I am in any way a white supremacist.

If white nationalist content is banned, does this mean that I cannot discuss nationalism because I am white - and therefore, a loon can trivially make the link that if I am white, and nationalist, then I must be a white nationalist? Doh!

What about white people who wish to celebrate their cultural heritage, peacefully, by suggesting that they should not have to give up or sacrifice their culture or traditions for fear that they might offend someone? Of course, it doesn't take imagination to think that a loon may consider this to be "white supremacism", because it is already happening!

The idea that there exists some large "exterminate nonwhites" network is a cartoon version of reality. There exist a small number of deranged humans who think that their skin color makes them superior, and an even smaller fraction of them who think that therefore, they must kill people who do not share that skin color. These are not at all representative of whites, nationalists, conservatives or free speech advocates, who are the real targets of recent censorship.

I mean, Alex Jones might be a nutcase, but I've not heard any evidence suggesting he has been calling for exterminating people. He has publicly apologized countless times for his handling of the Sandy Hook case which is the alleged reason he is banned. And Sargon of Akkad is an outspoken critic of white nationalists, but if we take this one word he said out of context then we've got him cornered!

You are the one pretending, pal. You are pretending slippery slopes do not exist and that rules and laws only apply to the thing they were originally written to apply to. In what utopia has this ever happened? The only rules which can't be abused are the rules which don't exist in the first place. A rule which is in place is a rule which will be abused. Selective enforcement of rules also exists, and is commonly used to politically censor one side while openly allowing the opposing viewpoint.

Now if there are real cases of incitement to commit any crime against people because of their skin color, or anything else even, then I'm all for enforcement of rules and laws as they should be. This isn't what the censorship and assaults on free speech are about at all.


[flagged]


Please stop with the ideological riler-uppers. They don't actually inform us of anything and instead lead us down a generic, predictable, reflexive road. We don't want that! We want specific, informative, and thoughtful. You can be outraged and still contribute something more substantive than rage.


It's actually not on the rise at all, in fact it is down; just like crime across the board is down. We live in arguably the most peaceful and tolerant time in human history. The number of "hate crimes" in FBI stats is up, but those numbers are often misleading. The only place I actually see a slight resurgence is actually in Europe, and it's largely a backlash against the Syrian immigration crisis...which will generally resolve itself as people move back now that the war is over.

China is...a different story, largely because Chinese culture treats rights in a much different light than the West. That being said, if you compare modern China to China of the 70s or 80s, it is vastly better.


This gets off-topic, but the immigration crisis in Europe is not really about Syria, and it's not over either, despite changes in the situation in Syria.

(Otherwise I agree that almost everywhere, people are living in the best time humanity has had.)


I think fascism is resurgent (minimally) because democracy is failing some people. Not because China is doing anything.


> balkanize traditional institutions of democracy

Trump was elected by a free, fair, democratically held vote of the people. You may cry about the outdated electoral system, but well, the rules are the rules. He won fair and square.

> and cooperation (USA/NATO/UN).

Switzerland isn't a member of NATO and only joined the UN in the early 2000s. You don't need to be a member of every overblown bureaucrat club on the planet to be friends with other nations and trade with other countries.


> Trump was elected by a free, fair, democratically held vote of the people.

We don't know that, and will never know that. This is a helpful resource for understanding why:

https://en.wikipedia.org/wiki/Russian_interference_in_the_20...


If you believe that Russian Twitter bots and a $100k Facebook ad campaign can swing a US election with a total budget of $4 billion, you are delusional.


That's an absurd summary of the degree to which we know Russia interfered in the election. I recommend you read the entire Wikipedia article rather than the section I cited.


Please don't post conspiracy theories.


The link I posted is a balanced summary of the situation we find ourselves in. Please read links more thoughtfully before crying "conspiracy".


I was going to comment on here that the reason people calling for a ban on "white nationalism" concerns me is because every time I turn on the news somebody calls Donald Trump a white nationalist without any evidence. Phrases like "Make America Great" and "Learn to code" have been denounced as white nationalism. And here you are, a few pages in, saying exactly that...


[flagged]


Please don't respond to flamebait with flamewar. I know all the defaults point that way, but this is a site where we're trying to do it differently.

https://news.ycombinator.com/newsguidelines.html


[flagged]


You understand that Christian Indians have been killed for eating beef and pork, right? And the current party in power is the BJP, a party that has allied itself with people like Bal Thackeray, a politician who suggested suicide bombings as a viable means of protecting Hindu interests, and who had links to actual murders of politicians he disagreed with [1] as well as to instigating the Mumbai riots [2].

And that's just in the one state I'm familiar with. The party itself has many links to actual Hindu militias. Pull your head out of the ground. The United States' problems (if there really even are any) are nothing compared to those of certain other countries.

[1] https://en.wikipedia.org/wiki/Krishna_Desai [2] https://en.wikipedia.org/wiki/Bombay_riots


I am not from India, but you do understand that the party that has majority control over India, the BJP, is a blatantly Hindu Nationalist party? I would argue this makes them even more entrenched in nationalist politics than here in the US. Look at the hundreds of killings and mutilations over cows that the government of India seriously turns a blind eye to.


> White nationalists and their ideology have penetrated the Senate, the White House

Wait, this means Trump is now banned on Facebook? Or who in the White House is the white nationalist that will be banned on Facebook starting next week?


You can claim that white nationalism has penetrated the White House all you want, but there is no evidence of this being the case.


White Nationalism has permeated the entire GOP since Nixon’s Southern Strategy, and there is evidence of that everywhere, starting with the Southern Strategy itself as exhibit 1.

That it has been more pronounced in the Trump Administration than the post-Southern Strategy GOP baseline is also quite evident to both opponents and overt supporters of white nationalism.


[flagged]


Who specifically are you talking about?

And no, my standard of evidence is not hearing someone say the words; it is considering their actions and seeing if they align with a particular agenda. I've seen the accusations aplenty, but I just don't see any actual evidence supporting them. And for the record, I despise any kind of group-based ideology, or anything that treats people as other than individuals. I denounce white nationalists or any kind of racial or ethnic superiority group when I see them; I just don't see that here.


And no, my standard of evidence is not hearing someone say the words - it is considering their actions and seeing if they align with a particular agenda.

That leaves the door open for a lot of hateful rhetoric excused on the grounds of 'ignore the speeches, look at the actions.' But in any case you could start by looking into the development of the 'Muslim ban', anti-transgender policy in the armed forces, disregard for protected classes under Title IX, and the so-called 'emergency' on the southern border.

Given the broad scope of your comment I'm not going to get into litigating those issues individually here, while understanding that you might not consider them as significant as other people do.


[flagged]


[flagged]


> I do not, honestly, think you are arguing in good faith.

The majority of Muslim countries are not in those areas. There are Muslim countries in Southeast Asia, West Africa, and Central Asia. You're arguing from a place of ignorance.

The top Muslim countries are Indonesia (12% of Muslims), India and Pakistan (11% each), and Bangladesh (9%). Those four countries account for 53% of the Muslim population. The Arab countries (not all of which were banned) account for only 20%. Thus, the vast majority of Muslims are still able to enter the United States. Calling this policy a Muslim ban is like calling the EU policy of requiring visas for most countries a non-white screening program.

Moreover, both Venezuela and North Korea are banned, and both have few Muslims. Is this also a Christian/Buddhist/atheist ban now?

The five Muslim-majority countries that were banned (Syria, Iran, Yemen, Somalia, and Libya) account for 7.5% of the Muslim population. Syria at least also has a sizeable non-Muslim population.

> It's telling, I think, that you made this about the current President and his predecessor, and that your argument that the Muslim ban wasn't really white nationalist is because the Muslim ban was really a... ah yes:

And what exactly is it telling of? An avoidance of hyperbole when it comes to politics?


The reason the US banned them from traveling was that it had been bombing their countries for years. Nothing to do with their faith.


Everything you said is on point. You won't find many upvotes in this echo chamber. The majority of HN viewers are on the west coast of the US.


About 10% of HN readers are in Silicon Valley. I don't know the numbers for the west coast but it must be less than 15%.


[flagged]


1) The early business suit was against the owner and alleged originator of the policies, Fred. There’s evidence of paternal racism, but no coherent “white nationalism”, especially since Jewish tenants were favored over whites.

Donald in his own ventures was the first in the county to open up his country clubs to Blacks and Jews, and was repeatedly celebrated by African American leaders for his other community efforts.

2) Stephen Miller is Jewish. How is he a white nationalist?

3) Central Park 5 was a rush to judgement when a woman claimed she was raped near the ice rink and part of the park that Trump had just recently invested in renovating. Not proof of racism, or white nationalism.

4) "Good people on both sides", when you look at the full paragraphs of the speech (which details who exactly he is describing), is in reference to the "heritage not hate" locals who attended the event simply thinking it was about saving the statue. They were not sensational, and left when the neo-nazis showed up later in vans, so they were not covered in national news. I do not blame you for not noticing them.

Many good people attended the Women’s March who were later horrified to find out that organizers were anti-Semites, and that they had a convicted homophobic murderer (Donna Hylton) as a speaker. I can empathize with people caught in that kind of situation.


Fred... Trump, Donald's father. Not some rogue, unmonitored employee.

Stephen Miller is Jewish, your point?

Trump stood by his comments about the Central Park 5 as recently as 2016, even though they were proven innocent using DNA evidence.

Trump never once mentioned "heritage not hate" - I've read the transcripts, many times.


Uh, Trump invited Steve Bannon into his cabinet. They did not part ways because of Bannon's extreme white nationalist views...


Steve Bannon was a close friend of Andrew Breitbart (who was Jewish and famously anti-racist), and was hired by Breitbart to be editor. Bannon greatly expanded the Breitbart Israel office and managed a number of non-whites such as Raheem Kassam, Ben Shapiro, and others, placing them in leading positions.

While I do think Andrew wouldn't approve of what Bannon has done recently (and many of his old colleagues do not), there is no evidence his views have ever gone into "extreme white nationalist" territory.


Steve Bannon is best described as a civic nationalist. Although, I think he has questionable choice in allies.


You would need to show that Trump invited Bannon into the White House because of his white nationalist views, and not because of his abilities in other areas. Which I don't think you can do.

Also, if Steve Bannon is a white nationalist, it seems weird that Trump would support him, considering that presumably makes him want to get rid of parts of Trump's own family (Ivanka/Jared and their kids).


White nationalists and other racists making exceptions for select useful (to their cause) members of targeted groups when in power is nothing new.


Ok, what about black nationalist content? Asian nationalists, Jewish nationalists?


I just realized that Facebook has deleted most Generation Identity pages. I'm not affiliated with GI, but AFAIK they have ONLY done peaceful protests and they aren't even white nationalists... What the hell, this is beyond dystopian.


Will Facebook ban Chinese nationalist, Black/Mexican/etc. nationalist accounts also? Or maybe Jewish nationalist content?

If not, then why not?


Very glad to see everyone here is defending white nationalists. Exactly the kind of content I expected to see on this website

First they came for the Nazis. And I did not speak up because I was not a Nazi.

Then they came for the pedophiles. And I did not speak up because I was not a pedophile.

Then they didn't come for me, because I'm not morally reprehensible, and life was good.


So is the Scottish independence movement now forbidden on Facebook?

https://en.wikipedia.org/wiki/Separatism_in_the_United_Kingd...


Nice move

I was annoyed that Instagram has now become just reshares of Twitter and Facebook posts that spread the same message without being indexed as easily as the text versions are.

Hope they don't go after things I like and target my accounts next! I know the limitations of their process and the lack of appeals or justification.

Also easy enough to find my friends on group chats elsewhere

I would have very different views if these were state run systems, but fortunately that's not relevant here.


When you ban groups with relatively harmless names such as "It's OK to be white", I think you just end up giving people an eye-opener... It's NOT OK to be white? I guess that makes it easier for Caucasians to decide their political affiliation... I mean, just search for terms like "kill white people", "kill whitey", etc. There are plenty of people glorifying white South Africans getting slaughtered in their bedrooms on Facebook. But I guess that's OK, because fuck dey white man, am i rite?


I'm going to caution against people using the phrase "free speech" as a thought-terminating cliché when discussing this.

Banning speech can be, and typically has been, harmful, especially when governments get involved in doing it. No one is ignorant of this fact. But we're collectively discovering that doing nothing about what can be published and said is just about as harmful, if not more so in concentrated circumstances. There will be irrational people who act irrationally on fundamentally erroneous information. It's a practical matter, not an ideological one. White supremacist ideas aren't up for debate. There isn't a point in debating them. And the people adhering to these ideals aren't looking for a debate and most definitely won't consider other ideas. You can't debate rationally with someone who hasn't arrived at their position in a rational manner.

There's also something else at play here. We're seeing new systems that replaced old ones re-implement "lost" functionality they thought wasn't needed. Turns out it was, in fact, needed. The old media empire of yesteryear was always subject to regulations around the content they spread.

If bitcoin is an exercise in why we need financial regulations, facebook and other social media networks are an exercise in why we need content and publishing regulations.


Are black nationalist ideas up for debate? Who gave you the power to decide what is up for debate?


Removing yourself from any kind of rational framework removes the characteristic required for an idea or viewpoint to merit consideration. Especially when those ideas and viewpoints have been tried, on top of just generally being a Bad Idea (TM).

So I haven't done anything. There's no need to start an epistemologically bankrupt conversation here, or to wax philosophical about intellectual curiosity surrounding viewpoints that serve no purpose.


[flagged]


This type of hyperbolic comment doesn't belong on HN.


I know you would say that it doesn't belong here. Especially when it hurts the bottom line of your favorite Bay Area brag. ;-)


Brexit is good for Britain, but bad for the EU. The only chaos is in parliament which is betraying the British electorate by going against their vote to leave and trying to remain at all cost. They will meet their maker at the next general election and any decisions they're making now will be overturned anyway.

The world actually enjoys watching Donald Trump because they see the USA as a world leader, and look up to it as an example. If the USA is doing great, then other countries can do great too! Making America Great Again is making the world great.

On the other hand, you have "progressive" dominated cities like San Francisco which have become cesspits, and the world is definitely not thinking "I wish we were more like that!" Quite the opposite - the real jokers are the democrats fighting for the title of the most degenerate.


This is clear discrimination. As if people with different shades of skin being nationalists are nicer than white nationalists. Is it 1984 already or we still have some time?


> This is clear discrimination

Every decision is discrimination.

> As if people with different shades of skin being nationalists are nicer than white nationalists.

A “white nationalist” isn't “a nationalist with white skin” but “an adherent to the ideology of white nationalism”.

You can be a white-skinned nationalist and not be a white nationalist.

> Is it 1984 already or we still have some time?

1984 didn't feature discrimination against either white nationalists or white-skinned nationalists, so it seems unrelated to your complaint.


White nationalists = nationalists who think that white people are better than others. But we have guys from the ICGJC and other afrosupremacist groups, etc. Why not ban them all?

1984 is mentioned as a reference to the dystopian world of propaganda; Facebook and Google are quickly turning into a corporate Ministry of Truth.


> White nationalists = nationalists that think that white people are better than other

No, white nationalism is not “nationalism” + “white preference”. It's closer to nationalism around a concept of a white nation.

> but we have guys from ICGJC and other afrosupremacist groups, etc. Why not ban them all?

Because we don't see scores of bodies piled up motivated by the message of those groups.


And the slippery slope begins. I am not a white nationalist (I’m not even white) but I think that recent immigration trends in Europe are illustrative and that we should avoid a similar situation occurring in the United States. Am I now a Nazi?


My router bans Facebook content.


Nothing wrong with deplatforming.


As a brown person this pisses me off: liberals keep pushing for these social justice rules on the platforms to silence content they find offensive.

If your argument is that "they don't have a right to access the platform", you are acting ignorant when you know there are only a handful of platforms on the internet.

The next step would be blocking DNS registrations of websites whose content you don't agree with. Oh wait, there is an entire country which does this, and you disagree with that country's rules too.


> you are acting ignorant when you know there are only a handful of platforms on the internet.

Your continued participation in said platforms ensures that they will remain the "only" platforms on the internet.

There is absolutely nothing guaranteeing you a right to have a private entity host your information, especially when you're paying absolutely nothing for them to do so.

> The next step would be blocking DNS registrations of websites whose content you don't agree with

Slippery slope fallacy incoming!


You do realize that calling the slippery slope a fallacy does not mean it's something you should ignore... right? Incrementalism is by far the most common technique used to implement totalitarianism.


Blocking access to information via DNS blocks is an entirely different concept than private entities not wishing to host extremist content on their private platforms.


Sounds like a slippery slope.


It really isn't.


It wasn't banned already? What the hell?


From TFA:

> Our policies have long prohibited hateful treatment of people based on characteristics such as race, ethnicity or religion – and that has always included white supremacy. We didn’t originally apply the same rationale to expressions of white nationalism and separatism because we were thinking about broader concepts of nationalism and separatism – things like American pride and Basque separatism, which are an important part of people’s identity.


Censorship is evil, Facebook sucks, what's new?


What exactly is "white nationalism"?


No! That is a terrible thing.


Does this include policing the debate about EU expansion to Turkey or Morocco?


"Today we’re announcing a ban on praise, support and representation of white nationalism and separatism on Facebook and Instagram"

First of all, why only white nationalism? How about black or yellow nationalism, are they OK?

Second, they will ban "separatism". Really? Are you FB guys really going to ban Kurds who want their own land? Are you going to ban Palestinians who want their own land? If FB had existed 30 years ago, would it have banned the anti-communist opposition in central and eastern Europe that was fighting to separate from the Soviet Union? What about the Scots, what if they want to separate from the UK? Will they be banned? Same with the Catalans?


I am surprised their PR team didn't advise them to use the neutral term "ethno-nationalist."


Why not ban any race-based nationalism? It seems weird that Facebook would ban white nationalism while black nationalism can carry on.


Why not ban all racism, all hate, everything? Doing just one is PR only.


I imagine that's because it's sometimes subjective and difficult to define. We tend to focus on the obvious cases, but not all of them are. What about the discussion of, e.g., race and its correlation to IQ? How about discussion of trans issues and viewpoints which contradict what trans activists want to hear? Are we allowed to talk about biological differences between men and women which some find offensive?

They are all valid topics to discuss civilly, but some believe that even having a discussion is racist/homophobic/sexist. My fear is that it will move beyond what we've legally defined as hate speech and into "don't hurt people's feelings" territory.


I agree, it's near impossible to be a global communication platform and enforce 'goodness' and kindness for your users/audience.


They did. They just said that before this, they considered white nationalism and white supremacy to be different things. Now they don't.


Read the article.


I did, and this is so selectively enforced it's comical. That's my point.


I guess you must have brushed over the part where it says:

> Our policies have long prohibited hateful treatment of people based on characteristics such as race, ethnicity or religion – and that has always included white supremacy. We didn’t originally apply the same rationale to expressions of white nationalism and separatism because we were thinking about broader concepts of nationalism and separatism – things like American pride and Basque separatism, which are an important part of people’s identity.

Very specifically, it points out that they are not just "doing one", they are bringing one in line with a broader policy that is applied to many different types of hate. So I'm really not sure what point you're trying to make.


Why not just ban all nationalism or nationalistic posts? Depending on one's relative position, it's hateful towards others. But regardless:

> Our policies have long prohibited hateful treatment of people based on characteristics such as race, ethnicity or religion

I'm just saying this has always been selectively enforced at FB; the reason they are speaking to white nationalism now is because of recent events and because they are in the spotlight. I feel this is purely a convenient PR campaign.

Not that it's an easy topic to control or address; all the platforms have issues with this, but FB is being opportunistic for press.


Most "nationalism" is based on physical location and community and you're not going to get a lot of support for banning that.


>all hate

This is the problem with this type of censorship: who gets to decide what is defined as "hate" speech? Facebook, which is run by its own set of individuals with their own set of beliefs. It is their platform and their servers, and they can do what they want, but Facebook as a company should not be defining what constitutes "hate", because we obviously have a stratification of morals in this country, and what could be considered hate by one person could be considered perfectly charitable by another.


Quite clearly this is just a move to gain back support.

But this has nothing to do with free speech. If I tell someone something, they have no obligation to broadcast what I said. Why should Facebook act differently?

I don't like that they're targeting such a specific group. I think we should be on the lookout for real racism, and this is certainly an example of that.


The purpose of free speech is that you can influence the reality you live in; without that it's useless, and we might as well not have it. As such, free speech has to be allowed on the most popular social platforms, just as free speech had to be allowed in the town square.


Facebook has no obligation to share what you share. No one has that obligation. They may be a "platform" but that doesn't mean it's a free playing field.


Do you also support the right to discriminate based on race?


> Our efforts to combat hate don’t stop here. As part of today’s announcement, we’ll also start connecting people who search for terms associated with white supremacy to resources focused on helping people leave behind hate groups. People searching for these terms will be directed to Life After Hate, an organization founded by former violent extremists that provides crisis intervention, education, support groups and outreach.

I wonder if they can re-purpose this technology to re-direct people who post content critical of Facebook and social media to pro-Facebook propaganda.


I am very much pro free speech, in the sense that nothing should be disallowed. I think platforms should be built in a way that an ultra-(right/left/X) bubble really cannot arise. Facebook should build mechanisms like the ones found here on HN: if I wrote something really offensive, I'd get downvoted and eventually my post would vanish. That feedback from the community is extremely important, because I get to learn what flies and what doesn't. Just deleting unwanted opinions makes things worse: people build up aggression against "the system" and feel censored by the wrong mechanisms. If, however, I see that the majority dislikes my opinions, I know where the problem is.
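A crude sketch of the community-feedback mechanism I mean, where downvoted posts fade and eventually disappear instead of being deleted by staff (the thresholds and class names here are made up for illustration, not how HN or Facebook actually do it):

```python
def visibility(score: int) -> str:
    """Map a comment's net vote score to a display state (thresholds invented)."""
    if score <= -4:
        return "hidden"   # effectively vanished, but never deleted by an admin
    if score < 0:
        return "faded"    # greyed out, still readable
    return "normal"

class Comment:
    """Minimal stand-in for a forum comment with a running vote score."""
    def __init__(self, text: str):
        self.text = text
        self.score = 0

    def vote(self, delta: int) -> None:
        self.score += delta

c = Comment("an unpopular opinion")
for _ in range(5):
    c.vote(-1)              # the community pushes back, no moderator involved
print(visibility(c.score))  # -> hidden
```

The point of the design is that the author still sees the feedback trail (score dropping, post fading) rather than a silent removal, so they learn how the community received the opinion.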


What is it about nazis and white supremacists that causes everyone to boot them from their platforms? From Cloudflare to Microsoft to Twitter and now Facebook, why is everyone so hostile to them? They just can't catch a break; if this keeps up, they'll only be able to host content on platforms that are supportive of the message, or they'll be forced to build their own platforms, and since so many people are already hostile to them, nobody will use those platforms except people with similar beliefs (e.g. Voat). Should these companies really be allowed to ban them simply because they regard their ideologies as repugnant?

edit: I'll note here that the above should be read as tongue-in-cheek sarcasm, but I do think the question is worthwhile when considered in earnest by those who support nazi speech on private platforms.


Yes.



