
Automating your moderation policies is a cop-out. It isolates you even further from the peons whose lives you're controlling.

Europeans have the right to have any automated decision affecting them reviewed by a human. We do not want a world where a tiny handful of people, getting fewer by the day in Twitter's case, have untrammeled control over what billions of people can say, see or do. We can't let businesses do whatever they like, change their policies on a whim and have the change instantaneously applied by automatons, when that brings misery to millions of people.

Remember those old-fashioned things called laws? Courts? Judges? Legislators? The things we used to use to decide what people could and couldn't do, before tiny companies with billions of users applied their own de facto laws with no oversight, governance or accountability. Move fast and break things!



1) You can have automated moderation and still offer a human-reviewed appeal.

2) A provider of a service is not a provider of a public commons, nor does confining the service to a specific type of discourse prevent anyone from doing anything in their life. You don't have the right to go to a movie theatre and use it to practice your singing recital during the movie - the theatre owner can and will kick you out. This doesn't trample on your rights - you can still practice singing, just not there. Likewise you can still spew racism and talk about child sexual abuse in your home all you want. But no one is obliged to provide you a platform for that.

3) I see nothing superior about a capricious and biased human judging my content versus a capricious and biased model judging it. The human, however, will be traumatized by the "millions of people" made miserable by their inability to spew hate and share child porn on Facebook.

4) We still do depend on judges and laws. Today the judges and laws say you not only have the right but the obligation to create a relatively safe space. In your European utopia, speech and thought are even more limited by law than in the US, but in both locales forum providers are held to a standard. Right now in the US there's a major case in front of the Supreme Court which will determine whether these companies can be held directly liable for failing to sufficiently moderate - if it goes that way, they'll be even more obliged to "purify" their platforms of anything that anyone might feel compelled to sue them over. But even now all providers have to conform to the laws and judgments of the lands they operate in, and those lands say moderation is not only OK, it's mandatory. So your final point really, truly, makes no sense.



