Honest, but leading, question in return: Offensive to whom? 90% of the population? Wouldn't that block out the Church of Satan's distributed materials (that is, coloring books)?
To 80% of the population? Wouldn't that be the Book of Mormon?
To 60%? The Quran?
I think this part of their answer is a great response: "Of course, the simplest reason is that it's not up to us to decide what the rest of the world should or shouldn't see. Bad news, it's not up to you either. Worse news, it's still true even when we agree. Which is probably most of the time."
This is exactly the comment I clicked through to add. Without someone to be offended, nothing is offensive. When a service provider crosses the line and polices a piece of media, they are now responsible for defining that line moving forward. That's an insanely hard task.
Most people who want "offensive" content taken down do not understand the complexity of what they are asking for.
People don't have privilege. That word gets tossed around a lot and is used incorrectly, as it is here.
Say you walk into a store and are followed or stopped quite often for a receipt check. If you're black in certain parts of the world, this is quite common, but much more rare if you're not black.
The privilege is a result of a system. You cannot say x people have privilege, and likewise people get offended when yelled at for having privilege, saying, "It's not something I chose." Is one person from one ethnic group responsible for all the ills cast upon others? Are we responsible for the debts of our fathers?
Privilege is a result of a system, and it is a system that grants privilege at a societal (and often subconscious) level, and that system of beliefs is what needs to be changed. So fighting against privilege involves changing the narrative of the system we live in. It takes time, diligence, and careful, critical thought. It's not as easy as throwing a word or blame around.
I don't know. If you have the power to shut down a whole product line of a trans-national corporation with a single complaint, isn't that a kind of privilege? I certainly can't do it, and you probably can't either. But some people can. If you have the power to decide what is allowed to be spoken and what gets you removed from the platform, who is allowed to speak and who should be met with violence if they so much as dare to show up - isn't that a kind of privilege? If you can riot, destroy property, assault people and send them to the hospital - with full impunity and the vocal support of major parts of the press and community that would loudly denounce this conduct in anybody else - isn't that a kind of privilege? Some people seem to enjoy it.
That sounds like silly word play. It'd be like if I said "Tom has power." To which I guess you might reply, "People don't have power. Societies have power, structures have power, and positions have power... but people don't have power." Okay, sure. But Tom still has power.
We are all responsible for the debts of our fathers, except the bill comes in the form of the physical world we inhabit. Who thinks they exist away from the past, except as rhetorical play about conquering old challenges?
> The privilege is a result of a system. You cannot say x people have privilege
This is a distinction without a difference. The Principle of Charity applies and we can easily understand that "You have privilege X" is being used to mean "You are accorded privilege X by the dominant social system in which we both exist".
"I’ve postulated before that “privilege” is a classic motte-and-bailey term. The motte, the uncontroversial and attractive definition, is “some people have built-in advantages over other people, and it might be hard for them to realize these advantages even exist”. Under this definition, it’s easy to agree that, let’s say, Aaronson has the privilege of not having to deal with slut-shaming, and Penny has the privilege of not having to deal with the kind of creep-shaming that focuses on male nerds.
The bailey, the sneaky definition used to push a political point once people have agreed to the motte, is that privilege is a one-dimensional axis such that for any two people, one has privilege over the other, and that first person has it better in every single way, and that second person has it worse in every single way.
This is of course the thing everyone swears they don’t mean when they use the word privilege, which is of course how the motte-and-bailey fallacy works. But as soon as they are not being explicitly challenged about the definition, this is the way they revert back to using the word."
Slightly more pragmatic reframing of the question:
> "Why do you host harmful content?"
To which your returned question would be similarly open-ended: Harmful to whom? Define harmful content. Etc.
... but probably a little more "down to earth" in terms of discussing the bounds of such definitions.
The root of being "offended" may often be grounded in an agenda to silence and control, but it's typically defended by referencing potential harm. So that's usually a good place to start overturning such arguments.
I just think debating the former is meaningless (even if two parties agreed, their shared conclusion wouldn't be useful, as offending someone isn't inherently harmful).
Whereas debating what's harmful, while similarly arbitrary and subjective, at least has the implied goal of actual harm reduction.
You don't need a moral code. The pragmatic argument is that throughout history and across cultures, the rise of large civilizations was enabled by basic rule of law. Protecting people against violence brings stability and maximizes productivity.
The policy is a bit of a misnomer though - as I understand, that's them funding the fight - by donating the money that would otherwise be their profit - not the "morons". But I think that's a very acceptable way to handle the problem.
It's a tough question, isn't it? One we've tried to answer a good many times as a species. The last sincere attempt may be considered to be the Universal Declaration of Human Rights: that an individual may live freely regardless of race, color, sex, religion, political opinion, etc. That seemed contradictory at the time, but it was always predicated on the idea that no one could ever affect the ability of another to live their life in a free way.
It's just funny to see the same old argument brought up the thousandth time with maybe a slight context change to the last.
If it is offensive because it stems from "bad ideas" then it is imperative that it is posted in the "public market of ideas" so it can be exposed and refuted.
If it is not posted then folks will not know how to refute it when it crops up again -- and it will.
Free Speech for bad ideas is as important as free speech for good ideas.
Don't be offended. That's your choice.
P.S. This doesn't mean that everything offensive must be posted -- there is stuff that should be illegal to post because of the harm it can cause.
Edit: downvote within 15 seconds of posting ... you are speedy in your thoughtlessness.
> P.S. This doesn't mean that everything offensive must be posted -- there is stuff that should be illegal to post because of the harm it can cause.
What stuff? Using this logic, why should anything be off-limits to the "public market of ideas"? e.g. libel should be perfectly acceptable speech — that way it can be exposed and refuted!
This is a particularly insidious point of view because it frames its own line between acceptable and unacceptable speech as some sort of natural law, when really it's just as arbitrary as any other line.
I don't know how to codify this, but there does seem to be some things that are just simply harmful with no benefit. For example, libel against a private individual.
Sure, but recognize that the line is entirely arbitrary. The only difference between child pornography, libel, white supremacist content and being mean is that people got together and said "you know what, this seems like just a bit too far".
This doesn't mean that we shouldn't draw lines between speech we're okay with and speech we're not. Literally every human being on the planet does it! This is why I don't like "marketplace of ideas" arguments: the person making them always always always still thinks some speech should be somehow punished in the market.
Libel has a pretty narrow definition and clearly leaves the realm of opinions and ideas to a different realm of willfully lying about someone with the intent to harm them personally.
The fact that we remove libel protections from public figures including politicians already makes this an extremely narrow subset of speech.
Arguing that this is just another example of an arbitrary line within a grey area seems like a rhetorical device to overemphasize the subjectivity of speech protections in pursuit of making them no longer able to be consistently protected.
But the point is, why is libel illegal at all? I can mislead someone into buying something from me, but if I lie about a person it's not okay? Why is reputational damage worse than monetary damage?
The fact that it's okay to libel a public figure (a fact I didn't know until now!) further underscores how arbitrary it is. At what level of renown are you a "public figure"? How is it measured?
We should be as stringent as possible in legal exceptions to freedom of speech. But there's no moral obligation to protect any given speech from social consequences (such as getting kicked from your web host). No one is actually a free speech absolutist.
Lying to someone to get them to buy something is in many if not most cases called fraud, and is illegal on those grounds.
This is where speech becomes a kind of action: misleading people in business, directing people to do harmful or illegal things. It’s actually pretty interesting how consistent the common thread is ... incitement to illegal acts or misrepresentation which results in tangible harm.
To my knowledge few if any jurisdictions have as expansive a concept of free speech as the United States. From the standpoint of US law, other legal systems could be said to not have freedom of speech.
Therefore the United States libel definition is most directly relevant to the overall point here.
Yes, and lots of us believe that certain ideas are simply harmful with no benefit. We don't disagree on the principles. We disagree on where the lines are drawn.
But again, there's nothing that makes this line objectively better than any other. It makes sense to me, even though it's kind of vague — but the point is that it's still subjective.
This isn't libel, but Robin Hanson recently put together some interesting arguments on why blackmail should be legal. I don't agree with him, but it points to the fact that there is not universal agreement on even these widely-accepted notions of what is "too harmful" to allow.
Yes, even libel should be legal. If something damaging is a lie then it should be subject to civil penalties--but the government in no way should have the power to silence ideas by declaring them lies.
It wouldn't be, but that's okay. The intent is for that disagreement to be adjudicated on a case-by-case basis in a court of law instead of legislated, because one size doesn't fit all here.
Free Speech for bad ideas is as important as free speech for good ideas.
It's a good idea to be wary of the sneaking-in of bad ideas by packaging them with good ideas. In 2019, this applies to even some of the most reprehensible ones. (For example: Politics through intimidation and violent assault.) Ideologies and movements aren't monoliths. The bad parts of them can be teased apart and criticized, without making you an opponent of all of the overarching ideas and principles. Anyone who tries to sell you on the "all or nothing" idea is (perhaps unwittingly) engaging in this packaging of bad ideas.
downvote within 15 seconds of posting ... you are speedy in your thoughtlessness.
Here is the pattern that I often see around that: 2 or 3 rapid, thoughtless downvotes, followed by more upvotes. Be courageous, and take the long view. (Also, we're not supposed to talk about this meta stuff around here, according to the guidelines, but I'm giving you a heads up.)
Of course no one mentioned perhaps the most important reason for non censorship of ideas.
The censors and public at large can be wrong. Imagine living in a society where racism was the norm and the censors did not allow speech that argued for the equality of races. Instead of "hate speech" they called it "unnatural speech" or some such twisted label.
In fact, our society could be very wrong about some popular ideas. I don't think we are new or different from the many societies that preceded us -- we have been here before. We can't be so myopic as to think we are the generation that has discovered all the ideal points of view. Let views stand up to the test.
Also censorship is fear based. Buck up. Take on the challenge. Make sure the positive voices are heard. Don't protect adults with "safe spaces" where they will never be challenged. You are just setting them up for a fall later.
The public market of ideas means that the idea of a public market of ideas can be downvoted. Don't preach the public market of ideas and then, in an edit, complain about someone not liking your idea.
We've known how to refute genocidal racism for ages. The ideas aren't new and the reasons why they are garbage aren't new. Yet these ideas still exist and still spread. Why has the market of ideas not destroyed these ideas? If the market cannot destroy even the most terrible ideas that humans have ever devised, what does that really say about the market?
There are 8 billion people in the world. I'm not surprised some of them hold terrible ideas. The idea that because some people hold view X, we now need to give the state power Y is insane at our population level, because it can justify anything.
I am not "advocating for censorship", I am "advocating" for the freedom to determine what content is hosted on a computer you own.
A computer owned by the government is a different story since government property is paid for by the tax-payer, so it should not be able to act unilaterally in matters of removing content.
And, out of curiosity, what would you say if someone made an argument that company X, while not owned by the government, is effectively a monopoly due in significant part to government intervention? Mechanisms for this might be: patent grants, regulations that create significant barriers to entry for new competitors, lucrative contracts with a federal department, inconsistent enforcement of existing laws while the incumbent has protective connections with the enforcers, tariffs...
Your hypothetical is too vague for me to answer. If you have a specific example I'd be happy to elucidate further.
I'll add that I don't regard patents, regulation, or "inconsistent enforcement of existing laws" as a reason why a company or individual should forfeit the freedom to determine what they host on a computer they own.
You have to keep refuting it for each new generation just like you have to teach each new generation the law of conservation of mass despite it having been proven over and over for centuries.
Then it is clear that the idea that the marketplace of ideas will simply defeat evil is a fraud. We will have to spend literally forever publicly arguing that it is a bad idea to murder everybody belonging to a given race.
We don't have a "marketplace of ideas" for alternatives for the conservation of mass. We teach kids one thing in school and move on. Why do we behave differently for other topics?
It is especially important for the more abhorrent ideas like "genocidal racism" to be exposed. Silencing it and driving it underground will have the opposite effect of what we want.
The reasons racism is wrong need to be constantly reposted -- we forget and repeat our mistakes too often. That is why we have Holocaust museums. In fact, the dangers of racism become more obvious when you let the racists speak! Let them. Teach your children from them. Show them the effect.
Precisely how is an underground movement that cannot spread their ideas in public going to repeat the holocaust? How is forcing people to stay silent about their nightmarish beliefs going to possibly be worse than what we've already witnessed?
People just take it as an axiom that forcing ideas underground makes things worse but that seems like an idea that requires some support.
Racism is a security vulnerability in our brains. These ideas aren't propagated through reason. They are propagated by exploiting cognitive loopholes. Sitting down with your kid and showing them a whole bunch of fascist propaganda and then having a rational discussion with them isn't going to have the effect you want.
"Sunlight is the best disinfectant," according to Justice Louis Brandeis's famous case for freedom of thought and expression. If an idea really is false, only by examining it openly can we determine that it is false. At that point we will be in a better position to convince others that it is false than if we had let it fester in private, since our very avoidance of the issue serves as a tacit acknowledgment that it may be true.
https://www.edge.org/3rd_culture/dangerous07/dangerous07_index.html
This is not evidence; it is merely asserting the claim as truth.
What of ideas that we have already determined to be false? Genocidal racism is garbage. It has been known to be garbage for a long time. Yet it persists. Must we continue having public discussions about the merits of genocidal racism? Can we stop in 10 years? 100 years? 1000 years?
This requires nuance and discussion, but things like the following are generally not productive and not protected by free speech.
Defamation against private individual (including libel and slander)
Child pornography
Blackmail
Incitement to imminent violence
Solicitations to commit crimes
I think any discussion concerning "ideas" is fair game. It gets murky when dealing with individuals. None of the things on the list above are an "idea."
I admit I was playing devil's advocate, but I appreciate the good faith response.
I genuinely do have trouble unifying my instinct that nazi propaganda, for example, should be censored, against my understanding that what constitutes "solicitation to commit crimes" is relative to what a society considers "good" or "bad," which is (somewhat) arbitrary.
Take your list as an example. Any action that is damaging to the State could be considered criminal - perhaps something as simple as developing good cryptography. Some things are crimes that shouldn't be - smoking marijuana, for example. Simply saying "solicitations to commit crimes should be censored" thus doesn't jibe with me.
Yes I think it is impossible to completely codify a list. I can always think of exceptions, grey areas, ways to exploit individuals, and ways individuals can create "loop holes."
I think the main problem is the human heart ... but that is a different discussion.
For example: claiming gay people are pedophiles, or that trans women are men in dresses trying to sneak into bathrooms to assault people.
Claims like this--and these are the tip of the iceberg--target groups of people and affect every individual within. It's like passive libel/slander that every member of the group has to contend with.
It can even affect people outside the group. I don't think I know of a straight man under 30 who isn't afraid someone will think they're gay if they do the wrong thing. And more than a few cisgender women get run out of bathrooms because they present in a more masculine way.
Publicly denouncing a group of specific folks as communists, when they were in fact not communists, was, I believe, illegal even in the McCarthy era (that's libel if posted and slander if spoken publicly).
How old are you? I'm curious because I thought along those lines when I was a younger man. You are right that we have almost no control over our immediate emotional reactions to something we have just observed. You do control your train of thought about that observation going forward.
I'm 29. Yes, I agree that you can consciously control what you do long-term in response to something, but you can't just brush things off entirely.
If someone calls me a "fag", it will bother me whether I pretend it does or not. I'll be offended and it will probably play back in my head for a bit. I can try to let it not bother me, but it probably will at least a little.
"Words can't hurt me" is a fine personal mantra/goal/ideal but you can't apply it to everybody in every case.
The difference between something like NearlyFreeSpeech and something like YouTube, which is often lost in the noise, is that NearlyFreeSpeech is a hosting provider, while YouTube also has an active editorial system and recommendation engine that promotes "offensive" content. Just as NFS is justified in hosting content and requiring uploaders to back it up with their real identity, so that those uploaders can be judged, people are justified in judging YouTube management for the content that YouTube management uploads (the Watch Next sidebar, and the comment streams it attaches to videos).
people are justified in judging YouTube management for the content that YouTube management uploads
You mean "promotes." People are free to judge. However, by granting discovery/virality selectively, rather than going by pure interest and numbers, YouTube is exercising editorial judgement. This makes them into a publisher, not a platform. People are free to urge YouTube to become a publisher. YouTube is free to follow suit or not and take the rewards and consequences of their actions.
Nearlyfreespeech.net is a great DNS and hosting solution for simple sites and applications. They got a nice bump when GoDaddy got mixed up in SOPA back in 2012.
I've used them before and they offer a great service for a good price and I support their general philosophy in regards to privacy and free speech.
I went to bookmark this site as a hosting company supporting a ton of languages, only to see that I’d already bookmarked it back in 2006. So I guess they’ve been around for a while :)
If I post your naked photos online and they are censored, that isn't bad.
If I post your address online next to a photo of your house and it is censored, that isn't bad.
If I post the source code of your personal project online and it is censored, that isn't bad.
If I post the contents of your diary online and it is censored, that isn't bad.
If I post the contents of a heated argument between you and your spouse online and it is censored, that isn't bad.
If I post a photoshopped picture of your kid online and it is censored, that isn't bad.
Not everything deserves to see the light of day and actually, we do get to make that decision. This idea that "free speech" means everyone has to agree to let everything appear on the internet is false. "Free speech" also means "I have the freedom not to support someone else's speech".
If you have to resort to extreme examples, then it shows the weakness of your position. Those examples are mostly illegal. It's not "censorship" in the context of Free Speech if one is counteracting illegal activity. The serious societal problems come in when there is censorship on an ideological basis.
This idea that "free speech" means everyone has to agree to let everything appear on the internet is false.
If Free Speech applies to the Internet, then it means precisely that everyone has to agree to let everything appear on the Internet. In 2019, saying that people can have Free Speech, just not on the Internet, is like saying people can have Free Speech, just not with mechanized printing. In 2019, publishing has to include the Internet, and suppressing publishing on an ideological basis is suppressing the principle of Free Speech.
> If you have to resort to extreme examples, then it shows the weakness of your position
What is extreme about them? An example of an extreme would be "child porn" or "your credit card number" or "the password to your email". The examples I listed are pretty mundane and actually quite common. Either way, you haven't actually explained why any of the examples I listed would be "bad".
> Those examples are mostly illegal
None of those examples are illegal except the naked photos, and not even that in all states (though most; and not necessarily "naked photos" in and of themselves, but naked photos in the context of "revenge porn"). But even if they were, so what? If it's illegal, does that mean it's not censorship to remove it?
> If Free Speech applies to the Internet, then it means precisely that everyone has to agree to let everything appear on the Internet
This is obviously wrong. Based on that logic you should never be able to delete a comment from your personal blog because you're censoring the critics.
An example of an extreme would be "child porn" or "your credit card number" or "the password to your email".
Also illegal.
None of those examples are illegal
Most of those examples would constitute evidence of illegal activity.
If it's illegal does that mean it's not censorship to remove it?
If it's illegal, then it's no longer protected by the principle of Free Speech. The issue isn't censorship. It's the principle of Free Speech.
Based on that logic you should never be able to delete a comment from your personal blog because you're censoring the critics.
No. Ethically, that would be wrong. So, too, would not taking down a dox or an illegally obtained naked photo. Neither taking down a dox nor taking down an illegally obtained naked photo would be an example of the suppression of Free Speech.
Basically, you're engaging in the dishonest conceit of equating Free Speech and censorship. They are not the same thing. At issue is the subset of censorship which abrogates Free Speech.
No they aren't. Point me to a law stating that any of those are illegal besides the "revenge porn" example that I outlined.
> If it's illegal, then it's no longer protected by the principle of Free Speech
So are you saying that anything the government deems illegal doesn't fall under the "principle of Free Speech"?
> No. Ethically, that would be wrong
Sorry, I'm not understanding what you mean here. Are you saying it would be ethically wrong to delete comments from your personal blog? I'm not being snarky, just not sure if you're referring to something else when you say "ethically wrong".
No they aren't. Point me to a law stating that any of those are illegal besides the "revenge porn" example that I outlined.
That's dishonest quoting. The last quote should include my assertion that many of those would be evidence of illegal activity.
If I post the source code of your personal project online
If that was done without permission, if that information was obtained illegally, then that would be illegal. This would indeed be the case with my personal project under its current copyright, licensing terms, and repository disposition. That such a circumstance could be illegal is part of the intended shock value of your extreme example. If you didn't intend such an illegal scenario, it's still reprehensible and extreme. (see below)
If I post the contents of your diary online
Again, if that was done without permission, if that information was obtained illegally, then that would be illegal. Again, if you didn't intend such an illegal scenario, it's still reprehensible and extreme. (see below)
If I post the contents of a heated argument between you and your spouse online
If the contents were recorded from a private conversation in a home here in California, then that information was obtained illegally, and posting it would be illegal. Again, if you didn't intend such an illegal scenario, it's still reprehensible and extreme. (see below)
The following two aren't illegal, but they're just morally reprehensible in a way which even transcends ideology. As such, the following examples are certainly "extreme."
Doxxing: If I post your address online next to a photo of your house
If I post a photoshopped picture of your kid online
But in any case, your position isn't defensible, because you're dishonestly or mistakenly conflating censorship and the principle of Free Speech. They are not equivalent. Again, the issue is the subset of censorship which interferes with Free Speech.
To be fair, there could be scenarios crafted for all of your examples of concerning activity which would make them Free Speech. If the information had a purpose to further the transparency of organizations, public figures, or shed light on legal matters, those would be covered by Free Speech. On the other hand, if the purpose is purely to hurt or humiliate someone for views, then this is reprehensible, and it doesn't fit the purpose of Free Speech.
> That's dishonest quoting. The last quote should include my assertion that many of those would be evidence of illegal activity.
It's not dishonest, it's literally your exact quote, copied and pasted in whole, verbatim. Your "evidence of illegal activity" comment is written later in the post, and I didn't respond to it because it's completely irrelevant. You stated those things are illegal. I explained that they're not and asked you to point to some evidence that they are. The fact that you are now saying they're "evidence of illegal activity" is just you moving the goalposts.
Anything can be considered "evidence of illegal activity" IF it implicates one in a crime. Using your reasoning, a bloody knife is illegal because it's "evidence of illegal activity". Well... no, it could just as easily be evidence that I cut myself making dinner, so there is no reason for you to bring that up except to shoehorn my examples into the category of illegal activity even though they aren't actually illegal.
> if that was done without permission... if that information was obtained illegally... If the contents were recorded from a private conversation in a home here in California...
So only IF you qualify everything I wrote with examples I didn't use which are actually illegal. You've got some balls to lecture others on dishonest argumentation when you can't even honestly tackle the argument as I wrote it. Unless you decide to post some links to laws showing that those examples are illegal, I am just going to move on from this part of the discussion because you're objectively wrong here. They're NOT illegal.
> The following two aren't illegal, but they're just morally reprehensible in a way which even transcends ideology
So are you suggesting that things you deem as morally reprehensible should be exempt from your "principle of Free Speech?"
You also did not answer my question about whether or not it is "ethically wrong" to delete comments from your personal blog.
Using your reasoning, a bloody knife is illegal because it's "evidence of illegal activity".
...You've got some balls to lecture others on dishonest argumentation when you can't even honestly tackle the argument as I wrote it.
One can cite or represent a bloody knife for shock value. In such cases, the implication is often clear. I suspect you're misleading with the shocking implication, thereby having it both ways.
Perhaps I misread your intention. However, your scenarios don't really make sense if one applies the "not a big deal" interpretations. Sure, there are situations where it's not a big deal to post a diary entry. In that case, why would anyone care and why would there be any censorship which would be considered "good?" It doesn't make sense.
So are you suggesting that things you deem as morally reprehensible should be exempt from your "principle of Free Speech?"
There is indeed a problem with speech devoid of principle, meant only to hurt someone. This is understood by the law. The purpose of Free Speech is to let people express grievances or objections with regard to principles. Morally reprehensible speech should be allowed, but it's not the purpose of Free Speech. Some subset of morally reprehensible speech would even be illegal, and therefore it wouldn't be protected as Free Speech.
Again, it's you who brought up the nebulous extreme examples in the first place, with the purpose of having it both ways to justify censorship.
> One can cite or represent a bloody knife for shock value. In such cases, the implication is often clear. I suspect you're either misleading with the shocking implication, thereby having it both ways.
I am not making any implications. My point here is very clear: your statement that the scenarios I described are illegal is false. It's as simple as that. Either way, it doesn't matter to the point I'm making.
> your scenarios don't really make sense if one applies the "not a big deal" interpretations
Whether you regard the scenarios as "extreme" is a subjective characterization that has no impact on the argument. I am arguing that censorship isn't inherently bad or wrong. I don't agree that the examples are extreme or illegal, but even if they are, they are still examples of content that could inadvertently end up on the internet; the point is the same no matter how it got there.
Even if someone broke into the house of a politician, assaulted them at gunpoint, and then stole sensitive political documents before posting them on the internet, the argument is the same. The voluntary censorship of those materials by platform owners would not necessarily be bad or wrong. In that extreme and illegal example, platforms might have a legal obligation to censor the materials as well, but that doesn't change whether or not doing so is bad or wrong.
> There is indeed a problem with speech devoid of principle, meant only to hurt someone. This is understood by the law. The purpose of Free Speech is to let people express grievances or objects with regards to principles. Morally reprehensible speech should be allowed, but it's not the purpose of Free Speech. Some subset of morally reprehensible speech would even be illegal, and therefore it wouldn't be protected as Free Speech.
Descriptions like "devoid of principle and meant only to hurt someone" are subjective determinations, the likes of which sit at the heart of every free speech debate. As you already stated, in the context of "what is understood by the law" the point is moot, since "illegal speech" is not "free speech" by definition unless you're arguing that the law is wrong or misapplied. Either way, speech that is "devoid of principle and meant only to hurt someone" is also legally protected speech in most cases.
> Again, it's you who brought up the nebulous extreme examples in the first place, with the purpose of having it both ways to justify censorship.
I did not "justify censorship", I refuted the statement "censorship is always bad". To characterize that as justifying censorship is dishonest.
> Your "evidence of illegal activity comment" is written later in the post, and I didn't respond to it because it's completely irrelevant.
You forgot about the guidelines: Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize.
Do you have anything to add to the discussion regarding the topic at hand? I explained the problem with their reasoning in the post you're replying to.
> Anything can be considered "evidence of illegal activity" IF it implicates one in a crime. Using your reasoning, a bloody knife is illegal because it's "evidence of illegal activity". Well... no, it could just as easily be evidence that I cut myself making dinner, so there is no reason for you to bring that up except to shoehorn my examples into the category of illegal activity even though they aren't actually illegal.
The problem with your explanation is that it reverses the probabilities when no context is provided.
More people are accidentally injured with knives than are assaulted with them, but in your examples the probabilities are flipped.
You could say that, under that reasoning, running over an old nun in the street is illegal because it's "evidence of illegal activity". Well... no, it could just as easily be evidence that you were stopping a mass shooting by a psychopathic old lady.
I tend to look at it from the perspective of the listener. Do I, as an individual adult, have the freedom and responsibility to make up my own mind about other people's ideas or not?
I believe I do, and so I think there should be a platform for free speech. I believe this even though I largely do not wish to consume much of what people might term offensive.
I appreciate platforms which curate and moderate content as a form of customer service. What I don't appreciate is entities (governments or corporations) taking a moralistic stance as if it is their duty to stamp out bad ideas from existence.
The SPLC has done good work in the past, but they've published outright falsehoods recently. They've even had to retract, apologize, and pay a legal settlement. Much of their activity of late seems to be about political expedience and tribe, less about principle.
At least in the United States, threats of violence are also protected speech. "Fighting words" are not, but that is generally taken to mean words intending to cause immediate direct and specific harm to a specific person. A generic sort of "I wish $person was dead" is protected.
I think I would disagree with your definition of censorship. If almost all entities in a given sphere, be it web hosting or media, decided they would absolutely not publish or host anything related to a particular topic or form of content, it seems accurate to term that censorship. The fact that it isn't done by the government doesn't mean that particular content isn't functionally suppressed.
Over the last few years I've been questioning what it means to have freedom of speech. This quote in particular strikes me:
> Finally, censorship is always bad, for a variety of well understood reasons that we don't need to repeat here. But in the case of some types of content, it has special dangers. When you censor a web site based on the extreme or dangerous views of its creator(s), you haven't stopped those people from thinking that way.
Why? The problem is that I've seen someone who was very close to me repost propaganda on Facebook that follows the Nazi propaganda playbook. Stuff against immigrants, against religious minorities, etc. Just take some classic Nazi propaganda, swap out "Jew", and that's what this person reposts.
(Or used to, as this person recently complained that Facebook is blocking their posts.)
Anyway, I don't think that this person really thinks this way; instead I think this person's thinking is manipulated to push a political agenda.
I see this argument so, so frequently -- that repugnant views need to be given a platform, that heinous and disgusting content must be allowed a space, so that everyone can see it and fight against it!
Since when the fuck is that how the internet works?
If Stormfront hosts a site on NFS.net, who do you think visits that site? Bright young progressives valiantly carrying a banner of social justice?
No. Fucking neo-Nazis visit the Stormfront website, because, and this is important, _it's a platform for fucking neo-Nazis_.
Christchurch. Charlottesville. Numerous terrorists have indicated very clearly that they were radicalized online. Why the fuck is it somehow your responsibility to provide these people a platform to spread their poison?
To end my rant, here's that ridiculous quote that always gets tossed around in these discussions:
> "Sunlight is said to be the best of disinfectants; electric light the most efficient policeman."
Actually, as it happens, the best disinfectant is a harsh chemical, and the most efficient policeman is a fucking policeman. The internet is not a place of light and exposure. It is a place where disgusting ideologies can hide, quietly attract followers, and conspire to murder people.
Consider how powerful that sunlight was the next time a right-wing terrorist screams about blood and soil while brandishing an AR.
I'm sorry the tone of this is so angry. NFS is a great service -- I just can't stand this pitiful justification for aiding radicalization and eventual violence. There is a line you can draw. It is up to you to draw it.
> Christchurch. Charlottesville. Numerous terrorists have indicated very clearly that they were radicalized online. Why the fuck is it somehow your responsibility to provide these people a platform to spread their poison?
People can also be radicalized in person. Rallies, meetings, one-on-one conversations.
Does that mean we have to police everyone's personal lives and invade their privacy?
If "progressives" aren't visiting the website or doing anything to speak out against the website and its content, it must not be important enough for them to warrant doing so.
Also, it's not a pitiful justification. It's a really good business move on their part. I assume NFS has seen in the past that larger webhosts don't/won't allow this type of content on their platform and are removing it because it's easier for their legal team/marketing/PR/etc.
NFS has filled the hole saying "hey, we'll support your right to free speech, even if you're offensive as all hell. We're just letting everyone know (including you) that we're donating all of the profits plus some to charities that actively fight against this type of culture since we don't agree with it."
They get good PR, they get the "woe is me, we're beaten and downtrodden from censorship" customers that keep getting removed from other platforms, and they can still be on their moral/ethical high horse by donating the proceeds to charity. NFS shouldn't be responsible for who goes to their website or the content on it. NFS's job is to host websites, not police them.
Where do you, personally, define the line though? And how do you ensure that line won't be changed to fight against you once your enemies are in power?
I could give you my answer, but it doesn't really matter (and anyway, I think it's kinda clear from my earlier comment) because I don't run a platform that could host any offensive content. NFS.net does -- and so I believe it's their responsibility to draw a line. It's a very, very hard problem, and there simply isn't one right answer, but that doesn't mean we shouldn't draw one at all.
You can't ensure that! At all! Which sucks! But that doesn't mean we should wring our hands and worry about "what if?" -- in fact, it only makes it MORE important that we resoundingly reject ideologies that would seek to abuse a line for nefarious, censorship-y purposes.
> "Finally, censorship is always bad, for a variety of well understood reasons that we don't need to repeat here. But in the case of some types of content, it has special dangers. When you censor a web site based on the extreme or dangerous views of its creator(s), you haven't stopped those people from thinking that way. You haven't made them go away. You certainly haven't stopped the people who hold those views from doing whatever else they do when they're not posting on the Internet. What you've actually done is given yourself a false sense of accomplishment by closing your eyes, clapping your hands over your ears, and yelling "Lalala! I can't hear you!" at the top of your voice. Pretending a problem doesn't exist is not only not a solution, it makes real solutions harder to reach."
I no longer believe this, when cesspits of alt-right, racist assholes use such grandiose ideals to spread their hatred, which then bubbles out into the real world.
The idea that good ideas will win, and that common sense and rationality will take the day, are not really supported by what we see around the net. Instead the greater internet fuckwad theorem holds more true, and the spread of vile, violent ideologies is enabled.
Freedom of speech is a protection from government, but I think those providing speech platforms, such as hosting companies, should probably take more responsibility for what they propagate.
There was a time when the cesspits were the people promoting the debauchery of homosexuality, or the idea that blacks were somehow equal to whites in intelligence and genetic disposition. These were greatly offensive statements at one time. People were ridiculed and deplatformed for holding them.
Translate the Bible into a barbaric language like German or English? That caused great offense. People were excommunicated and even killed for that (Wycliffe was condemned posthumously; Tyndale was executed).
The book "The Coddling of the American Mind" goes into this concept that ideas and speech are not violent. We do a huge disservice to young people today by teaching them to fear ideas and to block speakers they don't agree with at universities. Listening to other viewpoints and challenging them makes us better thinkers. To ban speech is to say, "I believe people are too stupid to make their own decisions. Let's make the world 'safe' for them and ban ideas I don't agree with."
I highly recommend Brendan O'Neill's video on offensiveness:
> To ban speech is to say, "I believe people are too stupid to make their own decisions. Let's make the world 'safe' for them and ban ideas I don't agree with."
1. People are stupid. Really, really stupid. Especially when given the means to surround and reinforce themselves with other idiots. See, for example, antivax and alt-med in general, chemtrails, and any number of ridiculous conspiracy theories that propagate through the web.
2. These people are not thinkers, they are not open to having their viewpoints challenged and reason will not move them from their course.
I agree, this whole area is massively subjective, I'm not saying I have a solution. But this black and white idea that censorship is always bad, and the notion that we have a functional marketplace of ideas which people re-evaluate based on reason is ... well it's a fantasy.
But thank god you and I are smart - really, really smart. So smart, in fact, that we should take it upon ourselves to police the thoughts and words of the 'lessers'.
Not what I claimed. I'm not saying I have a solution here, or that anyone should be empowered to arbitrate; merely that the idealist position is also pretty wrongheaded.
> Hmm... It seems like that's exactly what you said earlier. Walking it back now?
Nope.
The first one is in reference to government. I'm not suggesting that the government should have the power to shut people down; as I've said in multiple places, I'm not sure what a good solution looks like.
The second one is about platform owners, who are not governments, but should take some more responsibility for spreading (for instance) hate by providing their platform to people.
When I say I'm not sure anyone should be empowered to arbitrate, I mean exactly that, and it's a very different thing from people having control over what they broadcast on their own platforms.
> I'm not sure what group of people you just Other'd,
Then try re-reading my comment.
> a) whether you still hold the view you espoused 13 minutes ago that some group of humans are non-thinking.
I said they are not thinkers, and they aren't; they aren't interested in competing ideas or in what's factual, and as such, high ideals like the marketplace of ideas winning out in the end are at best naive.
> b) whether the message you're trying to make could be more clearly articulated.
I was pretty clear: a lot of people are very stupid. Look at how they hurt themselves and each other, look at reality TV, look at... hell, you don't have to look very far.
Again, I'm not claiming to be the smartest guy in the room, neither am I claiming to have a solution. I'm just trying to point out that these abstract ideals about ideas being allowed to propagate and compete, about allowing and supporting the dissemination of hatred from your platform in order to better fight it in the open, that when you do fight them with better arguments, people will listen... these ideas are pretty demonstrably flawed.
I have come to the point of view that what radical free speech people deride as "censorship" is often better described as "an immune system".
When you catch a cold, it is because viruses have used a bunch of subtle hacks to convince some of your cells to stop being part of you and start making more viruses instead. Your immune system comes in and stops them from doing this. Sometimes it makes mistakes, sometimes it can be co-opted and used as a viral host itself (see HIV for instance), but on the whole it keeps your cells busy being a part of you rather than striking out on their own agenda, whether it's one they arrived at by random mutations like most cancers, or one that crawled in under their normal defenses like a virus.
Running a platform that gives a voice to anyone and everyone, up to the limits of "whatever gets the platform-runner hauled into court and fined for more money than they make off of spreading the view that got them in trouble", is like actively providing places for diseases to grow.
To keep the analogy: if you live in a quarantined clean room to protect your immune system from the bad germs, viruses etc, you're going to end up with a very weak immune system. If you expose your immune system to the world, it's going to learn how to fight specific threats it'll encounter.
And yet for the really virulent stuff -- the polio, the smallpox, the measles -- we don't rely on random exposure; we inoculate with dead and fragmented pieces of the virus while working to exterminate it in the wild.
That would be the illegal stuff, I think. We're dealing with that on a societal basis. I suppose the extent depends on the trust we place into the ability of citizens to not collectively fall for something just by being exposed. If we all had very frail immune systems, doing our very best to extinguish the common cold would make sense. Since we don't, let's keep those reasonably safe that are in danger and let the rest deal with it.
For anything else, it may quickly become problematic if those that get scared or upset the most over news stories are in charge of declaring when to quarantine people, neighborhoods or states.
Just because someone thinks the world is flat, doesn't mean I have to disagree with them on most issues, except MAYBE on space exploration, and certain academic fields connected to it. Believing the earth was flat takes nothing at all away from their ability to judge humans, celebrate traditions and reciprocate favors. I do not buy into "getting spooked" by ideas I don't identify with, because if I draw a line in the sand that keeps moving towards me, maybe it indeed moves because of my own bad reasoning.
But who decides and points out what should be removed? What is "far right" from the point of view of someone who is "far left"?
It began with Alex Jones and that kinda made sense but now the same people and institutions are asking for the head of individuals like Phillip DeFranco.
The thing about censorship is that it never stops where you think it should.
> Friendly reminder that "slippery slope" is a fallacy.
It's a fallacy when there isn't a plausible mechanism that allows each step along a path to make future steps easier. Lots of things actually are slippery slopes. Censorship is almost certainly one of those things. From a comment I made last week regarding YouTube's "let's ban hacking videos" decision:
> ... "slippery slope" holds when each change makes it easier to enact further change in the same direction, and that seems to be the case here. "censor CP" + "censor porn" is an easier sell than the original "censor CP" step was, thanks to infrastructure already being in place. Adding copyright on top of that was easier still. And then violent content, and then aid to terrorism, and then politics we don't like, and gun repair videos, and ammo reloading, and...
It is a fallacy to say that the slippery slope will always occur. However, that does not mean that it is invalid to argue that in a given situation a slippery slope is likely to occur. In cases such as this I think we can reasonably assume that the "never" in "it never stops where you think it should" is not meant to be taken literally but is instead being used as a figure of speech for "very unlikely".
A logical fallacy, sure, but in terms of political ideologues "a little at a time" is a methodology for making massive changes. See "Rules for Radicals" and the Hegelian Dialectic.
Waving your hand and declaring "Fallacy!" isn't really a refutation. It's a cheap way to avoid actually addressing the argument.
Fallacies are sometimes correct, e.g. “with this therefore because of this” isn’t a bad starting point when triaging. Relying upon a fallacy as your sole argument is a problem.
> ...now the same people and institutions are asking for the head of individuals like Phillip DeFranco.
I'm fairly lefty, and haven't heard of this Phillip DeFranco before. Surely if I consult my biased google bubble, it will show me the dirt on him? Well, not really:
> “Hey, writer here,” Roose responded. “This collage is just a sample from his viewing history. Some far-right, some not.” [1]
And reading the original NYT article... they aren't asking for anybody's head. And this is where it gets really interesting from a free speech perspective. Phillip got offended because he was in a collage of somebody's viewing history. He might have been a step along somebody's slippery slope, whose politics are/were too extreme for his comfort -- or it might just be correlation without causation. The article didn't dig into Phillip and denounce him.
In fact, the thrust of the NYT article is that YouTube's recommendations take folks from moderate content, and send them on a spiral to more extreme content.
You're free to describe all of that as 'asking for heads,' but that really doesn't seem to be the case here.
> You're free to describe all of that as 'asking for heads,' but that really doesn't seem to be the case here.
It's a staple of the Far Left in 2019 to vilify purely through association. For example, every time I've asked for proof that Tim Pool is "Alt-Right," I've only ever received vilification through association as "proof." The same pseudo-logic is used to claim that Ben Shapiro (a devout, yarmulke-wearing adherent of Judaism and arguably a top target of the Alt-Right) is "Alt-Right."
> In fact, the thrust of the NYT article is that YouTube's recommendations take folks from moderate content, and send them on a spiral to more extreme content.
As far as I can tell, this is just more of the tactic of moving the Overton Window left, by re-labeling the center as "Far Right." It's dishonest and manipulative. I say this as a lifelong Democratic voter and someone who tests center-left on the Political Compass test.
> Finally, censorship is always bad, for a variety of well understood reasons that we don't need to repeat here.
I agree that uncensored speech is good, but when I read this I knew it would come up as problematic. I think we've all seen a lot of bad arguments sneak their axioms into the conversation with a line like this (but again, I agree with what's written here). Does uncensored speech fall into the category of "it goes without saying"? Perhaps I should learn more about censorship so I can effectively advocate against it, just like the article says.
And maybe they aren't. It's going to be hard to look around the net and draw a conclusion either way that everyone -- or possibly even most people -- will accept as definitive. There's an increasing number of studies that show exposure to fringe and extreme views consistently is more likely to foster extremism rather than inoculate against it, and as much as I wish that weren't so, I reluctantly find them rather convincing.
I find NearlyFreeSpeech.Net's position on hosting "really offensive content" to be well-reasoned and thoughtful. I'm just not convinced it's ultimately true. The web has brought unparalleled good to the world in the mere quarter century it's been with us. It's also arguably been the prime mover in bringing back flat earthers, Nazis, and measles. When you ask "don't the anti-vaxxers, climate change denialists, and white supremacists deserve great web hosting, too?", maybe the answer is "no, not really."
Tell that to the people gunned down in NZ with a rifle covered in 'chan slogans ... ?
I'm not saying censorship is good, let's go full-on NewSpeak here, or even that I have any sort of solution to propose.
I just don't think this ideal, that censorship is necessarily always bad and that the best, only way to fight evil ideas is with discussion and better ideas ... I don't think holds up to scrutiny in the face of reality.
Do you blame 4chan? Really? An emotionally disturbed person may have more input today in the Internet age, but the majority of 4chan users don't go out shooting mosques.
This was one fucked-up Australian, from a country of some 25 million, and the first Aussie to commit a mass shooting since 1996.
I do not think it's right to place the blame on a bunch of random people saying things. They didn't murder all those people. Speech is not violence. Violence is violence, and that one guy decided to take it off the computer, hop across the pond, somehow get a ton of firearms in a country where they are heavily restricted, and then shoot up a bunch of innocent civilians.
There are millions of other Aussies and Kiwis who probably chat on the same networks he was on, and never take up violence. They may speak up against things they think are wrong or that they don't like, either online or in public spaces, but there is a big difference between speech/saying things and murder.
The atmosphere of open denigration of others certainly seems to be contributing to the rash of violent alt-right and white-supremacist incidents we're seeing, yes.
Do I 'blame' 4chan? Not in any absolute sense, especially as he did his live broadcast to 8chan. But I'm willing to say that I think participation in these online communities contributed, yes.
Speech leads to violence, and even a somewhat limited reading of the history of genocides in the 20th century ought to make it difficult to dispute that.
If the Hutus hadn't been broadcasting anti-Tutsi sentiment for a few hundred days before the violence, it's hard to imagine they would've suddenly risen up and killed about 70% of the Tutsi population.
Speech also leads to civil rights: Gandhi's campaign to free India from the Raj... and sure, you can say, "Well, that speech didn't advocate violence."
What about the American Revolution? Or unions and strikes? There are times in history when speech led people to push through those tipping points; sometimes they resulted in violent revolution and sometimes in non-violent revolution. (And we're generally okay with the violent revolutions, so long as the 'right' side wins.)
Even in your Hutus/Tutsi example, you're suggesting the Hutus used speech to persuade their people to commit violence? It's still the choice of the individuals, and eventually the group, to act violently.
Unless you're saying that with enough advertising, you remove peoples' agency. That in the face of constant advertising, individuals have less of a choice and subscribe more to group think.
Maybe free will is an illusion and you can get people to do whatever you want with enough speech, propaganda, and adverts. But that's a much bigger issue of human will than of speech.
This might be true. Your logic applies to all such cases, though. The right to vote suffers from exactly the same drawbacks as the right to speech. And authoritarianism has the same tempting benefits as censorship -- you can stop people with bad ideas from messing things up. And, just like with censorship, one day you might end up on the wrong end of the system, and not necessarily because your ideas are bad.
>The idea that good ideas will win, and that common sense and rationality will take the day, are not really supported by what we see around the net. Instead the greater internet fuckwad theorem holds more true, and the spread of vile, violent ideologies is enabled.
The anti-vax movement and climate change deniers are better examples of this than Nazis and similar shitheads, IMO. There's easily accessible, scientifically sound evidence that both these groups are completely wrong. It's not at all open to debate. If the "free marketplace of ideas" worked then these groups would have disappeared long ago, yet the former group has led to almost ten thousand preventable deaths and the latter may result in far more. Maybe the truth will eventually reign supreme and both groups will become a footnote in history, but there are real consequences in letting groups like these spread their views online.
Nazis and similar shitheads are in the same boat. I would agree that Climate Change Deniers are an example of the Free Marketplace of Ideas not having worked yet. However, suppressing them just gives them ammunition. Best to just keep debunking: https://www.youtube.com/watch?v=ugwqXKHLrGk
(Re: The potholer54 video: If you dislike Steven Crowder, you should be glad he's a Climate Change Denier. That's just about the biggest hole in his hull!)
I used to be much more of a free-speech absolutist. It wasn't until I started reading about how genocides start that my view began to change.
There is a pretty direct line drawn between speech and violence against out-groups; anyone who says otherwise would do well to read about the Rwandan genocide and their 200 days of public radio broadcasting demonizing the Tutsis prior to the genocide itself.
After all, if there wasn't power in speech, none of this would matter; there would be no restrictions on speech anywhere if it didn't threaten someone.
Even in free-speech absolutists, there's often agreement that direct incitements to violence should be off-limits, and why? Because speech moves people to act.
I haven't often seen the position that incitements to violence should also be protected speech, though I'm sure those people are there -- the question to them for me would be, what are you trying to advance or protect against with that position?
The question I'd also ask is: if you want to say that speech such as calls to genocide should also be protected, how is that advancing society, especially for the targets of that? The marketplace of ideas doesn't seem to do a good job protecting them, so...what's the solution there?
Controlling speech in order to prevent violence does not work when the people carrying out the violence control the institutions. In your Rwandan example, the Rwandan government could not be trusted to censor speech serving its own goals. The situations where this works are much rarer and smaller, because it is about preventing violence that doesn't have major backing.
Violence is one solution to a given problem. Sometimes problems are worse than the violence needed to fix them. Ideally a non-violent solution is found instead (changing the system from within, peaceful activism, etc.). But non-violent solutions are only viable if speech is unrestricted. You have to convince enough people that things need to change for society to flip on the issue and eventually fix it. By keeping speech free you are preserving that option. Otherwise you are left with two choices: violence or the status quo.
Free speech is a tool for reducing need for violence. That's why direct incitement to violence is usually not protected -- it goes against the whole point.
If you want a tech solution to misleading speech and outright lies though, then mandate all publications of debunked speech to publish links to rebuttals, without having to remove original content.
"Those Three Shocking Ways Vaccines Cause Mice Tails To Fall Off Will Shock You!!! [Five Factual Claims This Article Gets Wrong <link>]"
> There is a pretty direct line drawn between speech and violence against out-groups; anyone who says otherwise would do well to read about the Rwandan genocide and their 200 days of public radio broadcasting demonizing the Tutsis prior to the genocide itself.
If you're concerned about the "pretty direct line drawn between speech and violence against out-groups" then you should be paying keen attention to the normalization of political intimidation and violence in the past several years.
The fact that the media have given these groups tacit support, covering their assaults and vandalism minimally or not at all while even giving them positive spin, should raise some concern.
> I haven't often seen the position that incitements to violence should also be protected speech
They should not be. "Punch a Nazi" -- despite the vileness of the purported targets -- shouldn't be allowed. "Milkshaking" is incitement to assault. The fact that Twitter allows those to continue shows a groupthink bias at operation there.