
You'd be surprised at what is "not illegal". Every community moderates.

You can make an argument that some sites have moderated too much, or been too overtly political.

But every community moderates, and has to. This can be demonstrated with a simple example. Guess what isn't illegal? Spam!

It should be obvious that FB, Twitter, etc. have to take spam down, though.

Then there's abusive behaviour. Stalking, harassment, etc. Often not illegal, but hurts a platform. Better take that down.

What about propaganda? They should let it all go? That sure sounds like a... phone network. Actually wait, it sounds more like the news media, which has been regulated for decades. A zero moderation policy would likely have led to demands for regulation, too.

And then there's the massive category of topics which are not illegal, but horrifying. Have zero moderation, and you end up as 4chan or worse.

--------

I'm generally in favour of keeping politics out of things, but your "we take stuff down only for a court order" is a naive view. Literally every forum moderates, they have no choice.




good point, but from a tactical perspective, not moderating might have made sense, to force lawmakers' hands in passing stronger laws about spam, harassment, propaganda, etc.

they'd have a regulatory shield and probably retain carrier status, rather than sticking their necks out by selectively moderating content.

a moderate amount of moderation is my preference, but i see the value in freer speech zones on the internet (that i can mostly ignore).


They do have a regulatory shield though. Section 230 allows platforms to moderate their sites, while leaving them not liable for any content they do leave up. It's considered speech of the user.

This has come to a head as platforms have grown larger. Note that even if some platforms were strictly neutral, they'd still be affected by the changes the government is proposing. It would have been very unlikely that none of the major platforms would have moderated beyond legal minimums.

https://en.m.wikipedia.org/wiki/Section_230_of_the_Communica...


i realize it's hard to draw a bright line on acceptable moderation, but then, is it fruitless to strengthen such laws?

seems like a lot of lawsuits and not a lot of legislating, from the wiki.


>good point, but from a tactical perspective, not moderating might have made sense, to force lawmakers' hands in passing stronger laws about spam, harassment, propaganda, etc.

Why would companies find that preferable to moderating themselves? If the government makes laws prohibiting that stuff, then that means the companies could get in trouble if they let any of that through. If companies self-moderate, then there will be less pressure for laws like that to exist, and the companies won't get hit with penalties for missing a few things here and there.


The US government is practically incapable of taking on that role because that actually would make it a First Amendment issue.

But, as graeme has pointed out, this idea of any moderation increasing a platform's liability is as wrong as it is widespread.


> Spam

Spam, pornography, and profanity are not viewpoints. The problem with social media is not censorship of unwanted content that an ordinary person can identify without reference to viewpoint. What bothers civil libertarians, myself included, is censorship of specific points of view, even when these points of view are expressed in a calm and civil manner.

> Horrifying

Who gets to decide what's horrifying? You? Why?


> What bothers civil libertarians, myself included, is censorship of specific points of view, even when these points of view are expressed in a calm and civil manner.

Is this actually occurring? I haven't seen anyone get deplatformed because the company disagrees with the message, but rather because they violated ToS with regard to how they were communicating their message.



