If these guidelines are implemented, I predict "Town Square" apps like Twitter will decline in favor of sub-networks like Discord or Substack.
> We will protect any speech that is protected under the First Amendment.
This will likely include speech that people and advertisers find objectionable, and will consequently want to disassociate from. There is a lot of First Amendment speech that people don't want to have shouted at them when they're just trying to go about their lives or keep up with the media. Think about what HN would be like if the mods allowed all First Amendment speech in the comments.
> We will double down on our authenticity rules and procedures.
It's not clear if this requires people to use their real names + verify their identities, but if so it's dangerous for free speech. Pseudonyms can be a valuable way of enabling people to speak freely. Forcing people to use their own names also increases the risk of their being targeted by harassing speech (which defenders will claim is protected by the 1st Amendment). Again, people will likely leave for more curated communities or anonymous communities rather than endure it.
> We will provide users with the tools they need to curate content for themselves. For instance, we will give users the ability to hide or delete offensive comments to their posts.
I have these tools available for SMS and email, but they're exhausting to manage, and they've been rendered useless by the sheer volume of spam. For the average user, this forces them to take on a lot of work to protect themselves from spam and harassment. Many will just quit rather than add another channel they have to filter and manage.
If a "public square" (and I'm not convinced Twitter is a public square in the sense people say) becomes unruly and fails to provide a well regulated commons, people will depart for communities that regulate the way they want (like Hacker News or amenable Discord Servers)
I don't understand why some people are so hellbent on eliminating Section 230 of the Communications Decency Act. Eliminating it would mean the only reasonable way to have UGC is to have none at all, because anything else leaves a non-zero probability of the UGC website operator (now considered a publisher) being sued.
This is the only good take on Section 230 of the Communications Decency Act, and the one I agree with most: it would set the First Amendment as the bar for speech on social media, bringing parity with the protections we have elsewhere.
> Fighting Words. In the 1942 case Chaplinsky v. New Hampshire, the Supreme Court held that speech is unprotected if it constitutes “fighting words,” which are defined as speech that “tends to incite an immediate breach of the peace,” through the use of “personally abusive” language that “when addressed to the ordinary citizen, is, as a matter of common knowledge, inherently likely to provoke a violent reaction.” Certainly, all racist, misogynist, homophobic, and other slurs could be folded into this category and prohibited by social media sites.
This entire section is just wrong. Fighting words are extremely narrowly defined, to the point where if you don't get punched in the face within a few minutes of saying something, it is not fighting words under the law. The original definition of fighting words was laid out at a time when honor duels were still common, and that is the environment under which they were defined. Since then, every higher court decision referring to them has narrowed, not expanded, their definition. There is no way a slur on the internet can be construed as fighting words as the law currently exists.
> The original definition of fighting words was laid out at a time when honor duels were still common, and that is the environment under which they were defined.
You're right that it's a very narrowly defined carve-out that has since been narrowed further, but it definitely dates from the 1940s, from the cited Chaplinsky v. New Hampshire.
He's got "incitement" wrong too; he's concluded that incitement is about "imminence", which is a factor, but the larger factor is intent. If Twitter was held to the 1A standard on incitement, it would be unable to block a great deal of content that was likely to cause imminent lawless action.
"Advocacy of" and "intent" aren't the same standard either!
Again: if Twitter was held to the 1A standard, there is a great deal of content likely to cause imminent lawless action that they would be unable to block.
The expectation that your own words are going to cause the lawless action, rather than supporting an action that was going to happen with or without them.
The entire Brandenburg case is about the distinction, on this particular word.
I don't think anyone can know what's going to happen with or without their participation, but granting that for the sake of argument, he also quoted "directed to inciting or producing imminent lawless action". (Edit: also "organize actions that spill over into the physical world")
The article, IMO, made it clear enough that information that causes violence but wasn't intended to cause violence is protected by the 1st amendment.
The bullet's heading is "incitement", the purpose of the bullet is to explain the clear "incitement" standard that Twitter could rely on to block content likely to cause harm, and we've reached a point where we're using the word "inciting" to define it. Like I said, this is a hash of an argument.
There is a clear legal standard for incitement. Sacks hasn't articulated it. Either he doesn't know what it is, or he does know, but smartly recognizes that it cuts directly against his argument. The point he wants to make is that First Amendment jurisprudence already provides a basis for service providers to eliminate the most objectionable content. It does not.
Particularly in the case of incitement: if Twitter was held to the 1A standard, it would be unable to block a great deal of content likely to cause imminent lawless action.
That's a fine objection to his proposal (though perhaps better with examples of information protected by the 1st that you want Twitter to ban).
But I don't think it's a fair criticism of his article. I clearly understood from his writing that the "Incitement" exception only covers information intended to cause violence in the near future.
I don't know if he's familiar with the precise legal language, but even if he is, it wouldn't be appropriate to use it in this article for laypeople. He's using common English.
No, he's not. He's obviously not. He's listing the specific notable exceptions to the First Amendment's ban on prior restraint. He goes out of his way to attempt to depict the specific legal language.
It's odd that the 'General Partner and Co-Founder of Craft Ventures. Previously: Founder/CEO of Yammer. Original COO of PayPal.' came to such a mistaken belief regarding fighting words in modern legal practice.
Carelessness in fact-checking seems to be becoming more common, even among otherwise competent people.
Repeating something I said downthread: being a General Counsel for a VC firm or a company has very little to do with Constitutional Law or First Amendment law or, really, even defamation (which is something that does come up in companies). If the GC of a typical company has to deal with a defamation suit, they retain outside counsel to do it, because the law is hyperspecialized.
First Amendment Twitter runs a cottage industry of dunks on well-regarded lawyers saying stupid things about 1A jurisprudence. I'm not a lawyer, but I follow 1A Twitter, and I think this would qualify; for instance: the "incitement" section refers to "clear and present danger", which is the Schenck standard, which was famously overturned by Brandenburg. My understanding is that this, to 1A law, is about as fundamental as knowing the difference between a hash table and a tree is to a software developer.
His 2nd point is that one should essentially end anonymity online.
This seems like a step backwards to me. I remember back in the 90's when we reminded everyone not to use their real name online, not to tell people your address, and definitely not to tell them your age or gender. God forbid you tell random people you're <18/F/Nearby.
So just for safety reasons, anonymity has long been seen as important online.
Besides that, anonymity and pseudonymity seem to me to be pillars of free speech and open debate in society. You can safely test the waters and say things that you fear might be unpopular. See, e.g., the Federalist Papers, which were published pseudonymously.
Anonymity and closely-related pseudonymity have been important to western civilisation for millennia. It might not be a good idea to get rid of them.
IANAL, but I'm not sure how the proposed change would have any effect. If the whole idea is to treat digital content platforms the same as distributors, distributors can stop carrying publishers and specific works on their own whims, regardless of whether those works are protected speech, subject to the contracts they have in place with those whose works they are distributing. The distributors don't have a government-backed monopoly that might come with requirements to adhere to laws preventing the government from limiting speech. You only have recourse if the contract between you and the platform stipulates the terms of distributing your content. It is my understanding that this law and the proposed change don't specify that the government can compel you to produce specific speech; they specify that the government can't punish you for the speech of others that you distribute.
Am I missing important context? If someone can point to resources that would better explain this to me, I'd appreciate it.
If I understand correctly, he's proposing changing 230 from
permitting service providers to moderate user content using their own judgment and values, to moderating content using the First Amendment as the baseline. So, removing a post with a controversial opinion is fine under the current law, but not covered by the proposal. The whole "publisher versus common carrier" argument is sidestepped.
My question is, what'll happen if somebody moderates beyond the bounds of the First Amendment? If Twitter deletes an unpopular opinion that's legal to publicly express, would the poster be able to sue for unlawful removal?
If platforms are held to 1A standards, then they can't sell advertising. Online advertisers have already made it crystal-clear that they will not tolerate their brands on a free-for-all platform. Ergo, a requirement for platforms to keep constitutionally-protected speech up is effectively starving them to death. And if you're thinking of making it illegal to withhold ad revenue to a platform over their lack of moderation, then you're driving a stake through the heart of freedom of association.
A better idea would be to say, "ok, you can have rules, but you have to apply them evenly". A lot of platforms will bend the rules for popular users and that absolutely is a problem.
If all advertising media/mediums are free-for-all, then what, they just quit ads and marketing? If that's the case, I am all for that. I am all for disincentivizing hyper-marketing and advertising that happens today.
No, because even in this scenario there are large media outfits that publish their own stories and would still be able to sell advertising to brand-conscious advertisers. Ads don't go away, they just retreat to newspapers and large blogs while everyone else loses out on a means to fund their work.
Those are all individual relationships that creators have with advertisers and likely would not go away no matter how much of a cesspool the social media platform is. You see, most platforms don't actually intermediate these kinds of advertising deals and thus don't see a cent from them.
Some platforms - notably YouTube - have a platform-run ad exchange that creators can participate in and make money from. This is critical for people getting into the online video business. Like, to the point where people are expecting creators to jump ship from TikTok to YouTube the moment that Google figures out how to sell and attribute ads on Shorts.
In a world where platforms are legally barred from providing advertisers with brand-safe placements, advertisers will just jump ship from platforms and start working with individual creators directly. Which means that the platforms are now just providing free hosting they can't pay for, and smaller creators aren't able to use ad networks to get paid for their work.
> the entire internet and Dorsey and Zuckerberg should meet it by offering the terms of a peace treaty in which they pledge.... But we realize that as the de facto public square, we are better off adopting the First Amendment as our standard than trying to improvise our own, and don’t want to arrogantly substitute our judgment for that of the Court, who has a more than two-century head start on us in grappling with difficult speech issues.
How can multiple platforms (FB, Twitter) be the public square?
The appeal to authority (Court with 200 years head start has better judgment) is also out of touch.
Maybe it should distinguish between forums, where everyone sees the same content, and social networks, where people choose who to follow and who to block. Maybe social networks should be regulated as common carriers[1].
But simply removing the second paragraph of Section 230, making moderated forums liable for every post, would make them legally unviable.
Forums have too small an audience to even register with most people, including the political leaders considering such regulation, who treat Facebook, Twitter, and Reddit as the only websites with UGC; in my personal experience, it's difficult to explain that distinction to folks.
As mentioned, "forum" here doesn't mean 'software website running on vbulletin' as we normally refer to a web forum, it's more like the "open forum of ideas" concept.
dang can ban me, remove this comment, hide it, etc., but in no way will he or Hacker News be legally liable for it; I am.
If 230 were not there, there could be legal liability, and the protection against it would be either positive moderation or perhaps none at all. Positive moderation meaning dang would have to read and approve each post, because HN would be taking on the liability for said post. Simply not scalable.
Well, it's not just HN and a bunch of vbulletin boards. Every subreddit, every facebook group, every discord server is the kind of forum that section 230 protects. In fact, even if you block people on twitter you are arguably turning the replies to your tweets into a moderated forum. Should that make you liable for them?
This article from 2020 is making the rounds because David Sacks is in Elon Musk's inner circle and now works at Twitter.
It's an absolute hash of an argument that would mire every American service provider in perpetual litigation. Twitter would be better off with no Section 230 than with one that requires them to prove "false statements of fact" (even the word "fact" in that phrase is a subject of white-hot intense litigation) or even "incitement". It's unlikely that Sacks stands by this analysis today, and more likely that Sacks wrote it believing that he'd never be personally responsible for implementing it.
Sacks is a very intelligent guy with good insight on tech markets, but he's also deeply partisan and his motivation for the reasoning in this article can't be disentangled from his political agenda.
Though, what's particularly interesting about this article is that Sacks is a friend of Elon's and as a result has now become directly involved with the reshaping of Twitter. I am certain Sacks no longer emphatically supports the idea that "these tech behemoths are too large and powerful, pose a threat to democracy and free speech, and need to be reined in for the good of America" now that it applies to his friends.
This is the most well-thought-out take on Section 230 I've read yet. David Sacks is publicly conservative and I disagree with him on a lot. But he's also a lawyer and very, very smart.
Worth a read, no matter where you stand on the issue.
Lots of people are lawyers. Most of them, including some of the most extraordinarily competent, are not First Amendment specialists. First Amendment Law Twitter makes sort of a cottage industry of dunking on lawyers, including accomplished litigators, for saying dumb First Amendment things. Sacks, so far as I can tell, has never professionally practiced law: he got his JD in '98, went to McKinsey, then joined PayPal and spent the rest of his career in startups. There is no particular reason to believe he knows what he's talking about here simply by dint of having a law degree.
I disagree, for reasons I've given all across the thread. His "incitement" stuff is fatally broken. His "fighting words" stuff is fatally broken. His "defamation" stuff presumes prior restraint for defamation. It's a mess.
A legal challenge to Section 230 immunity, Gonzalez v. Google, involves the promotion of user content.
This analysis from 2 years ago doesn't reflect "promotion of user content" as a distinct, and potentially unprotected, activity. Arguably, recommendation algorithms are neither a simple act of hosting user content (the first part of Section 230) nor good-faith moderation to remove problematic content (the second part of Section 230).
Both Twitter and Facebook have already been declared and treated as "public forums", not only in US legal cases but also in other western ones. The only issue with S230 is that companies abused (and some still abuse) the platform/publisher status without transparency. Realistically, full transparency will probably never be achieved (it is, or should be, a matter of transparency towards the end user of the forum, not towards the government), so the only sensible thing to do is take away a platform's ability to modify or erase end users' content in any way, IF the platform is legally considered a "public forum". Who makes that determination, and issues like whether any newcomer is entitled to such status, is the next headache in this whole ordeal, because it will probably create new bureaucracy. The fine line will always be between companies and the government, since both are different entities. However, IMO, 1A should take priority over both; otherwise it's the start of blurring those lines and the decline of American society. (See China: de facto a fascist regime.)
What I find funny about this "repeal Section 230" political movement is that those most (seemingly) in support of it benefit the most from it. Some thoughts:
1. "Free speech" is an ambiguous term. If it refers to First Amendment protections then it immediately doesn't apply to Twitter, FB, etc. The first five words are quite literally "Congress shall pass no law". Later Supreme Court rulings extended this to state and local governments. So FB, Twitter, etc can't "Censor" or violate the First Amendment of anyone, by definition.
2. So if it's not a legal definition, it's a principle. Literally nobody is a free speech absolutist. Even 4chan has Terms of Service.
3. Section 230 was originally created to give a "safe harbor" to ISPs so they wouldn't be held liable or responsible for content they transmitted. This was and is very similar to telcos not being responsible for illegal activity occurring on phone calls.
4. The point of a safe harbor is that the provider becomes essentially neutral to the content. But this, like anything, has limits. Telcos cut off or block people for spamming, for example.
5. The voices calling to repeal Section 230, as in this article, are upset about isolated cases of, say, Twitter "censoring" the Hunter Biden laptop story. But if Twitter no longer has that safe harbor, there are only really two alternatives: more moderation or no platform at all.
Repealing Section 230 won't be friendly to the likes of Alex Jones, Tucker Carlson, Ben Shapiro or Kanye West.
The First Amendment applies only to government, but the idea of free speech applies to everyone. It has also come out recently that social media platforms are receiving directions from the US government about what content to take down, making them state actors.
This is an article about the implications of the First Amendment and their applicability to private entities, not about the amorphous concept of "free speech".
Yet for practical purposes, the US government has been interfering and dictating which speech can be posted on social media. Both Twitter and Facebook have takedown portals for government officials; I suspect others do as well. Back-door channels between government agencies and companies exist and influence what gets exposure. Such channels even led to a major news organization being censored and temp-banned, despite having a legitimate story of national interest. This is a major issue, and I'll tell you why, even though I believe I don't have to:
In the past, journalists were the check against government over-reach, corruption, and fraud. That has been thrown out the window, as it is clear to the objective observer that the majority of media and government work very closely together to push a coordinated narrative, and have been for some time.
When citizen journalists emerged, whistleblowers and independent bloggers began publishing stories (some of them trash, others being very good at what they do) - yet they rely on social media for reach. If their stories oppose the narrative of both mainstream media and government, they are labeled (without any real evidence) as misinformation or "russian" propaganda. This in itself is propaganda.
Frankly, I'm tired of the nanny state. Neither I nor any other functional adult needs media, government, or party to tell us what to think, do, or say. The whole mess is antithetical to the core principles of the United States.
The First Amendment analysis here is... bad, very bad. It reads as someone who remembers some phrases from AP US Government and that's about it. Pretty much every section on what's constitutionally protected is wrong.
Fighting Words was handled by somebody else.
> Incitement. The Supreme Court has long held that advocating the use of force is unprotected speech under the First Amendment when it is “directed to inciting or producing imminent lawless action.” The word “imminent” has been the subject of further litigation, generally requiring a “clear and present danger.”
"clear and present danger" was the standard before it was overturned by the incitement to "imminent lawless action"--the way it was presented here is completely backwards. Recall that Brandenburg v Ohio upheld that advocacy of violent overthrow of government was constitutionally protected. Incitement here has been narrowed to the point that you should think of the bar as around the level of "you are at the head of the mob and pointing out the next person to be lynched"--anything less, and there's a decent chance your speech won't be considered to be incitement.
> False Statements of Fact. The Court explicitly held in 1974 that “there is no constitutional value in false statements of fact.”
Yeah, but in 2012, US v Alvarez rather explicitly held that the government can't justify banning false statements just for being false.
Defamation was handled by somebody else.
> Fraud. Another kind of false statement is fraud. There is no right under the First Amendment to impersonate someone else or deceptively amplify one’s views through fake accounts.
That is not what fraud is. Fraud statutes require material gain, and impersonation isn't a kind of fraud. And again, impersonation without seeking that kind of gain is constitutionally-protected--that's basically what US v Alvarez is about (falsely claiming you have a military medal).
> Obscenity. Obscenity has a famously shifty and subjective definition, summarized by Justice Potter Stewart as, “I know it when I see it” in the 1973 case that established “prevailing community standards” as the basis for determining when speech is obscene and therefore not entitled to First Amendment protection.
What the fucking hell are you on about? "I know it when I see it" comes from the 1964 case. The 1973 case established the Miller test, which defines obscenity via a three-prong test: (a) whether "the average person, applying contemporary community standards", would find that the work, taken as a whole, appeals to the prurient interest; (b) whether the work depicts or describes, in a patently offensive way, sexual conduct or excretory functions specifically defined by applicable state law; and (c) whether the work, taken as a whole, lacks serious literary, artistic, political, or scientific value. [Quoting Wikipedia here.]
The basic tenor of First Amendment cases is that the government cannot act as a moderator of speech. In virtually every single case, where the court is asked to rule on something that's around the bar of what is and isn't permissible, the court rules to push the bar yet further outwards. Were this analysis a paper in a high school civics class, I'd be hard-pressed to grade it above a C; as a suggestion for political analysis, it fails entirely.
Who actually wants to end Section 230? Most of the complaints I've heard about it have just been that it should be changed to only apply to platforms (which don't censor), and not to publishers (which do).
I don't see how anything will happen on 230. Conservatives want to change it so platforms must allow any kind of speech, including racism and fake news, without being able to censor it; liberals want to ban anything they consider hate speech or "not woke" speech.
I've seen "hate speech" be defined as anything "I" (the reader) don't like and is offensive to "me" (the same reader). So shrug I just don't see why it's not up to the platform (as it is currently) about what they want to host, aside from illegal stuff like imminent physical threats, terrorism, national security, and similar.
Most of the content featured on social media is hate speech. Hate drives engagement so it's amplified.
And most of that hate is still considered acceptable even under the most restrictive policies.
There are forms of hate speech, like ageism, misandry, and racism against white people, which are technically covered by the policies but generally overlooked.
Then there's the shifting tide of hatred against people for a variety of other reasons. Like Elon Musk at the moment. Sometimes this even becomes harassment against private individuals (as in, not public figures) like when reddit falsely accused the wrong person of being the Boston Bomber, or the false allegations against the kids from Covington Catholic High School.
And of course the perennial political battle, which has somehow become even more partisan and hateful thanks to social media.
So I think that claims that changing Section 230 will unleash a torrent of hatred on social media and make it unusable are rather ill informed. Social media is already like that.
What it might do is give more people an opportunity to interact with people they hate and perhaps learn that they were wrong.
Yes, it's a difficult problem, isn't it? On the one hand there are people behaving poorly towards other people, though which people they are and how poorly they are behaving depends on the opinions of each individual. On the other hand, the insidious arm of the government reaching into the affairs of a free people and controlling thought by controlling media is not quite a comforting thought either.
As usual, if people would stop being so shitty to each other and greedy for themselves, none of this would even be necessary.
The Good:

The article correctly quotes and understands sec 230, and doesn't misrepresent it as either a license for censorship or a ban on moderation.
The article defines clearly what the author would like changed.
The Bad:
The article is very confused about what is or is not protected by the First Amendment. For instance, it claims (incorrectly) that hacked information is not protected. It is protected: if a reporter gets hacked information, but does no hacking himself, and the information is newsworthy, he's free to publish. The same applies to defamation and falsehood: I might be liable in civil court if I publish such things, but no one in the government has a right to take a red pen to what I want to publish pre-print because they decide it is false or defamatory. Also, be careful banning defamation separately from falsehood: that would mean I cannot say true-but-damaging things about people. Is that what the author or the general public want? I can't point out that someone is a thief even if they are and I can prove it?
The article's example is bad, and I wonder if the author will actually get what he wants. It complains about the censorship of the NYPost article about Hunter Biden's laptop. But then it goes on to claim that platforms should be able to censor hacked material (which that story was based on, and which was the original reason it was censored on Twitter). And that they should be able to censor false information (again, most of that story was incorrect, or at best remains unsubstantiated years later) and defamation. So the sort of story he wants to be protected would fall under at least 3 of the categories he wants to be unprotected?!
The article misses the great unwritten advantage of s230: s230 makes it clear who decides what to moderate (the platform) and uses a simple measure (basically, whatever they want). That gives platforms a lot of power, but it also does away with a huge issue. If you give every Twitter user recourse to the courts AND you have complex rules over what is allowed (who defines falsehood? can you PROVE the moon landings occurred? is calling someone a bastard fighting words or defamation? what if it's true? is a penis "obscene"? what about Michelangelo's David?), then at best Twitter has ungodly legal fees and every court in the land has to rule on all this nonsense. And that's without any judge making a mistake or (god forbid) falling to political partisanship. I don't envy the judge ruling on whether Trump's tweets on Jan 6th were incitement to violence. Courts would, of course, have to give immediate and binding rulings on such cases in real time. S230 made all that disappear; only a tiny proportion of cases had any legal standing.
The truth is, and I suspect the author knows this but just cannot quite accept it, s230 isn't perfect. But it's the best we can do without "the cure being worse than the disease"...
Oh, I missed the defamation thing, but you're absolutely right: Sacks has gotten a legal point wrong that even Walter Sobchak can accurately explain. What a mess.