Okay, the speech issue aside (it's frankly been debated to death), what I find really crazy about this situation is how much Facebook is treating it as a 'mechanical turk'-like task.
The author describes having only a few seconds for a fairly complex task while being exposed to disturbing material for essentially their entire workday, without any adequate psychological training at all.
This is not how content moderation should be done at all, and I'm not even sure that this should be treated like a routine job.
At Facebook's scale, there is not a good way to do it. Even if they were able to scale it, it would not be good enough anyway.
You can automate it, but clearly that doesn't catch everything and automated systems can result in just as much bias as humans doing the job.
You can hire humans to do it and make them work fast, because the number of posts per day is extremely high. You can use some automation to help them, but that will combine the problems of human moderation and automated moderation.
You can hire lots of humans to work at a normal pace and give them psychological counseling, and spend so much on their salaries that you'll go out of business, and even then they won't catch everything and will be frequently accused of bias.
It's a no-win situation for Facebook because even if they were able to objectively and fairly moderate every single post, they would be accused of bias by politically motivated people anyway.
This is not a technology problem, it's a social and political problem and Facebook will never be able to please everyone. Twitter is in the same boat.
> You can hire lots of humans to work at a normal pace and give them psychological counseling, and spend so much on their salaries that you'll go out of business, and even then they won't catch everything and will be frequently accused of bias.
Then you should not fucking be in the business at all! What FB does is externalizing the cost onto societies and skimming the profit!
Cost here being two things: first the cost of mentally fucking up the moderators (so an extensive amount of psychological healthcare has to be shouldered by society) and second the cost to society that arises from letting Nazis and jihadists spread their propaganda (including the cost from real-world violence enabled by said propaganda).
Any online open forum would have the same problems no matter how hard they tried to prevent it. We can either accept that some level of hate speech is inevitably going to slip past the moderators, or we can give up on having open forums altogether. There is no perfect world with perfect moderation of online content.
I wish Facebook had the courage to say this themselves.
> Any online open forum would have the same problems no matter how hard they tried to prevent it.
Facebook tries to get by with as few moderators as they can, paying them as little as possible and generally trying to teflon away anything negative from their service (i.e. they deny being responsible for the fake news/hate speech on their platform).
I believe that there would certainly be a way to provide fair moderation - for example a ratio of 1 full time CSR (customer services representative) per 1000 users, not the 1 per 20k or worse they are currently trying to do.
>for example a ratio of 1 full time CSR (customer services representative) per 1000 users, not the 1 per 20k or worse they are currently trying to do
Hey, that's the Mastodon model! Relatively small independent servers with their own admin/moderator teams responsible only for their own servers. That's scalable moderation.
Aren't those more like a maintenance team? While they are moderators, keep the server running, and use it themselves, they aren't specifically responsible for the posts on their server.
Mastodon has moderation tools[1] that allow administrators of a server to restrict not just users on their own server, but also how other users in the Fediverse can interact with their users. Users can also report content.
If that is the price to avoid externalizing the cost of a lack of moderation to society, then it is 2 million CSRs.
I'm sick and tired of Silicon Valley companies thinking they can save money on customer support and have societies pick up the tab.
And it's not like Facebook (or Twitter) couldn't get the money: 3€ per user and month should be enough to cover the cost of a properly salaried CSR - and at the same time create a barrier for bots and Nazi trolls.
> And it's not like Facebook (or Twitter) couldn't get the money
What makes you say that? Facebook barely makes $5 per user per quarter (3 months) in revenue. They couldn't even afford $1/month/user. Meanwhile, Twitter has never turned a profit, ever.
If you can't make more than $5 per quarter per user, maybe you shouldn't be in business. If you have never, ever turned a profit, then what are you, a charity?
That would make them the world's second largest private employer after Walmart. It's an absurd number of people to hire for content moderation. And it likely wouldn't solve any of the world's problems.
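For what it's worth, here is the back-of-envelope arithmetic behind those numbers as a small Python sketch. The user count, CSR ratio, and per-user revenue are the figures quoted in this thread; the per-CSR cost is my own placeholder assumption.

```python
# Back-of-envelope for the 1-CSR-per-1,000-users proposal above.
# Figures are the ones quoted in this thread except the salary,
# which is a placeholder assumption.

users = 2_000_000_000          # ~2 billion monthly active users
users_per_csr = 1_000          # proposed moderation ratio
revenue_per_user_year = 5 * 4  # ~$5 per user per quarter -> ~$20/year
salary_per_csr = 40_000        # assumed fully loaded annual cost per CSR

csrs_needed = users // users_per_csr
moderation_cost = csrs_needed * salary_per_csr
revenue = users * revenue_per_user_year

print(f"CSRs needed:      {csrs_needed:,}")        # 2,000,000
print(f"Moderation cost:  ${moderation_cost:,}")   # $80,000,000,000 per year
print(f"Revenue estimate: ${revenue:,}")           # $40,000,000,000 per year
```

Under these assumptions the moderation bill alone roughly doubles the revenue estimate, which is the tension the rest of this subthread is arguing about.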
> I'm sick and tired of Silicon Valley companies thinking they can save money on customer support and have societies pick up the tab.
Unless they're obliged to by law, they never will. These companies minimize customer support in order to improve the bottom line, and will keep it at a minimal level - just barely enough to get away with it. It's one of the elements of the so-called "disruption".
If you're tired of "picking up the tab" you should leave Facebook and encourage others to do the same. Whatever happened to the idea that people who say stupid things expose themselves... it's kind of a favor.
If Facebook can't moderate properly because it is too expensive, then Facebook can't exist. We can go back to a more plural world of many specialist forums with community moderation. That way the content properly reflects the society that generates it.
Forcing my views? What if they were spreading porn to children - should society force its views on them then? What if they were subverting elections? 'Forcing your views' is kind of how society works. Facebook is not a harmless forum where friends can engage in banter at the level they feel comfortable with. Facebook puts things in your timeline that are unsolicited because it is a publisher, and it should follow the same laws as other publishers.
How is it different from companies building products that pollute the earth and not cleaning it up? Food companies not worried about the impact of huge livestock operations, or burning down forests for agriculture, or car companies not worried about the air pollution created by their cars?
If Facebook can't work economically in a way that's also somewhat responsible socially, maybe it shouldn't have 2 billion active users per month. Maybe it shouldn't survive. Maybe I should finally quit it.
Well then I guess they bit off more than they can chew, right?
It's pretty understandable they're not able to moderate over 2 billion active users per month.
So that leaves two options: either not have 2 billion active users, or not have any moderation. The latter is plainly unacceptable because, as others have pointed out, it simply externalises the cost onto society. The first option - well, I'm sure people will find other ways to communicate and plan events; it's not like their service is so unique, just the network effect and the size (which is way too big anyway). We were long overdue for something better to come along anyway.
There's no fundamental "right" Facebook has to do this, to keep existing and growing the way they do, if all they want to do is grow but not take responsibility for the consequences of their size. Edit: someone else said it better: no one owes you a business model. If a crucial bit appears impossible to "scale", then tough luck, I guess.
I'm absolutely gobsmacked that people are starting from the position that hate speech should be censored at all. The determination of what constitutes 'hate speech' is made by whoever happens to be in power at the time. Winston Churchill would have given a much different answer than Adolf Hitler. From my point of view, it's chilling that the mechanism to do this has been built in the first place.
This is a common approach by groups who perpetuate hate speech - to exclaim with faux outrage that any limits on hate speech are tantamount to the dissolution of free speech at the behest of ostensibly powerful minority groups. I won't go so far as to accuse you of being a member of hate groups, but your response certainly echoes the propaganda that they spread, especially in forums like HN.
The fact is that society has always drawn lines about what speech and behavior is and isn't acceptable in different venues and circumstances. Facebook's choice to censor hate speech is no different from a bar, restaurant, or department store asking someone to leave for shouting the N-word at fellow shoppers. It's a private non-governmental entity making a choice about how they want their users to act on their platform.
Furthermore, free speech claims in favor of hate speech ignore the real material costs in human lives that facilitating hate speech incurs. While I greatly value free speech personally, I also greatly value human lives and the ability of all people to meaningfully participate in society without facing systemic oppression, violence, and hatred.
Tolerance must inevitably come face-to-face with intolerance, and if we don't act to stop the worst kinds of hatred and intolerance they fester and decay the ideals that allow tolerance to flourish at all.
Suggesting that free speech advocacy, which has a very long tradition that predates contemporary social divisions, "echoes" the "propaganda" of "hate groups" is simultaneously meaningless and pernicious. You're drawing a line between free speech and "hate groups" while in the same breath denying that you're even doing so. In the process, you're not only inventing a nefarious association where none exists --- because causality flows forward through time, not backward --- but also completely ignoring danjayh's central point, which is that the definition of "hate" depends on the objectives of those with the power to define the word, and that machinery to suppress "hate speech" becomes, ultimately, a vehicle for reinforcing existing power structures and delaying needed change.
> Furthermore, free speech claims in favor of hate speech ignore the real material costs in human lives that facilitating hate speech incurs.
What material cost? Censorship advocates continually assume that hate speech must have large and personal costs, but in my experience, present no evidence. The benefits of censorship are not apparent. The costs, however, are clear in the historical record: slowed scientific progress, emotional distress, and atrocities that might have been avoided through vigorous public discussion. And it's always been in the name of the public good, or saving souls, or protecting the innocent, or some other unassailable and noble good that people with power have forced others not to say certain words. The idea that no, this time, it's different suggests a certain historical hubris.
I'd wager that throughout history, there's never been a society that's made its people happier or better-off through censorship. I'd love to see a counterexample if you can find one.
You have a number of points I agree with, but I find other points troublesome.
For instance, the costs of censorship are, interestingly enough, also the costs of not censoring free speech. Slowed scientific progress (due to the rise of anti-intellectualism), emotional distress (it's pretty obvious how some types of free speech cause this), and atrocities (e.g. Charlottesville) could have been avoided through censorship. Those are the material costs of allowing unbridled, "hate group" free speech.
On a side note, I am very personally conflicted on this topic. It was strange to read your comment and strongly agree with some statements and then very strongly disagree with other statements.
I'm not even sure there's a real objective definition of "hate speech" - there's certainly speech you hate. And speech I hate. But it's all rather subjective. The best we can say is that some speech grossly offends a shared, widespread moral consensus, and so seems hateful from that perspective. But... lots of genuinely progressive speech is also this way.
Free speech policies that would effectively harbor hate speech - also protect your speech from being labelled as such, and then censored, or worse. There are a disturbing number of places still left in this world, where controversial speech can get you jailed, or killed - with the blessing of, or by a government.
Humans can't be trusted, by and large, to be benevolent censors.
The argument is that since the Nazi will inevitably punch you, preemptively punching the Nazi is inherently defensive, and thus urging people to punch Nazis is not hate speech.
Of course, you may very well ask, isn't it trivial to construct an almost identical argument to justify violence against almost any particular category of people you'd care to name? Hopefully your interlocutor won't respond by punching you in the face.
You’re drawing a false equivalency. Social media platforms like Facebook and Twitter are more like public squares than restaurants or stores. It’s a space where free speech should be defended, because free speech protects our society from authoritarianism and violence. It’s authoritarians who want to limit free speech, because their ideas have no merit and can only be enforced by violence.
> Social media platforms like Facebook and Twitter are more like public squares than restaurants or stores.
Are they? Let's see the most important difference:
Is a public square privately owned? No. It is owned by the community, usually the council.
Is a restaurant privately owned? Yes. The proprietor has the option to evict people at will, as long as they do not discriminate against minorities in the process.
Do hate speech laws apply in a public place? Yes. If someone states racist things in public, they may be put in prison for hate speech.
Do hate speech laws apply in a private place? It depends on whether the space is open to the public, and on the event being held. Generally, yes.
Is the owner allowed to evict any member at will from a public square? It depends. If the person is being harmful towards others, they may be removed from the square and detained, and sometimes even charged with a crime.
Is the owner allowed to evict any member at will from a private square? Yes, as long as they do not discriminate against minorities when doing so.
As you can see, the attribute you have brought up does not matter. Facebook has the legal right to remove people for speech, as long as it does not discriminate against minorities in the process.
> Is a public square privately owned? No. It is owned by the community, usually the council.
Many public squares are privately owned these days.
> Facebook has the legal right to remove people for speech
No-one is disputing that. Grandparent wrote "It’s a space where free speech should be defended," not "It's a space where free speech is protected by the law." The lack of legal protection for speech in this kind of space makes it all the more important that individuals stand up for it.
> It’s authoritarians who want to limit free speech, because their ideas have no merit and can only be enforced by violence.
Let's not be disingenuous: democracies also want to limit the speech of authoritarians. And they probably should, because every idea that's broadcast will garner some following. It's human nature. Your knife doesn't only cut one way.
Good people want to limit the speech of bad people in principle, sure, just as good people would endorse violence if there were a way to ensure it were directed only against bad people. But no-one can fairly judge that, so limiting speech is a bad idea in the same way that permitting violence is a bad idea. The marketplace of ideas works slowly, but it does work: it does, ultimately, find the truth; good ideas succeed while bad ideas die out. Whereas the "marketplace" of violence can't tell whether an idea is good or bad, and while the violent people might be on the side of good for now, there's no reliable way to keep it that way.
There's not a lot of value in talking about free speech on HN, because HN itself is subject to even stricter censorship of unpopular ideas. It's really a bubble of agreement and almost-agreement.
I don't get the argument that Facebook (previously a way to stay in touch with friends) should have to broadcast every crank who wants to talk. If you want to be in a newspaper or on the radio you have to hit a certain quality level and be the right kind of content. This editorship is not a restriction of speech; I can start my own website, print my own newsletter, etc.
This is probably too reductive, but ultimately it's profitable to do so. Facebook is not in the news business, they're in the data/advertising business.
Whether hate speech should be censored is also decided by whoever is in power at the time. The Adolf Hitler regime isn't going to be prevented from rounding up dissidents just because Weimarbook decided to be supportive of nazis.
Yes. People seem to be fixated on the perceived evils of the day and forget about how all of history is full of competing political groups and ideas, most of them promoting or using violence to gain power. If it was obvious how to pick goodies and baddies, hardly anyone would join the baddies.
> Any online open forum would have the same problems no matter how hard they tried to prevent it.
Then don't make it open. Make it auditable and tied to your public identity, not to an email address you can invent at any time. The OP is right, FB is making huge profits by punting the costs onto society.
Is that provably inevitable for all online forums, or is it “nobody knows how to yet”?
Governments act like all tech problems are the latter, but I would like to know if this is like encryption that only friends can break (never), or like self driving cars (most people call it impossible until five years after it’s been demonstrated).
For me, moderating what amounts to an open communication platform falls into a third category: it is impossible and you should not try to do that.
Especially in FB's case where it is quite obvious that various opposing "hate groups" have learned how to game the system, and also how to present results of that such that it supports their rhetoric.
> it is impossible and you should not try to do that.
No, it is merely extremely expensive. But that's something that you need to tackle head on when your business model is to provide a service for free to a large chunk of the planet's population. The 'free' bit doesn't absolve you from your responsibility as an operator.
Seems like their recent move to get news off the feeds and focus on family and friends should solve the content issue. If someone is posting or liking hate speech or offensive content, I can just block them. I wouldn't add someone as a Facebook friend that would do that kind of thing anyway.
The former, because words are subjective; there is no way to catch everything anyone anywhere would ever consider hate speech. It's just an impossible scenario.
There's only one way I've seen in which one can reliably make and maintain a polite community: build it around a niche topic and moderate away things that stray too far away from it. The moment you let your community be about anything and everything is the moment the "unwashed masses" come and the whole thing starts catering to the lowest common denominator, which is a pretty shitty level.
It's not a complete recipe, but I feel it's a necessary condition. This is a big part of why HN is a polite place, or why most niche subreddits are polite places.
So what? Nobody builds social networks? Great idea, let's just all agree not to build social networks or make companies out of them. Because as we all know humanity is great at not doing things we have every capacity as well as a handful of reasons to do.
Try thinking about a real solution to the problem next time that isn't "let's not do this".
Really! It's like someone founded a nuclear power startup and people in this thread are basically excusing them from properly dealing with the nuclear waste because, you know, it's really hard.
I'd flip this: if a society is so weak that it's damaged by "letting Nazis and jihadists spread their propaganda" (i.e. by letting people speak freely) then that society should fail.
(I do think that if Facebook is employing people to do something that predictably harms those employees, then it should bear the cost of protecting them from those harms, and of treating and compensating them when it fails to protect them.)
Nazis and jihadists aren't that bad. They're just people promoting their political ideology. That's everywhere. Look at gangsterism - it's a major cause of murders in the US but it's glorified and promoted by rap music. Communism is arguably worse than Naziism but it's not treated with the same horror.
Seems perfectly coherent to me. He's saying that if your business can't afford to solve the problems it creates (and instead pushes the problems onto society at large and/or disposable workers), then he thinks that business should go under.
Facebook is a massive enabler for the spread of fake news/hate speech. I do not doubt that there are problems in society, but Facebook needs to be held accountable for its (in)action.
Should paper manufacturers be responsible for the content that’s printed? Should ink manufacturers be held liable for what people do with it? Should the post office be held liable for messages being mailed? Remember — Facebook is voluntary. Nobody is forced to be there and if there are uncomfortable messages to which you are being exposed, turn it off. Sign out. Quit using it.
You seem to have an almost hysterical reaction to this story. Why are you so sensitive? Did Nazis and violence and Red Brigade terrorists and Communists and racists and Islamic extremists not spread their messages effectively before Facebook? Has the rate of violence from these groups increased since Facebook?
I don’t have stats in front of me, but I would argue that violence, terrorism and hate has decreased since Facebook. Sure we hear more about it now, but that’s a result of media penetration and not an actual increase in incidents. I would invite data that suggests otherwise.
One might argue that awareness of hate groups has increased because of Facebook thus leading to increased marginalization of their ideas.
Blaming the medium for the message is fascinating. The disease of hate and totalitarianism spread quite effectively before Facebook. We might argue that Facebook has actually reduced such things because it provides a marketplace where ideas can be evaluated and either accepted or shunned much more efficiently.
Banning the communication of ideas doesn’t stop the communication of ideas. We learned that from Solzhenitsyn. We should encourage more openness, less censorship. Sunlight disinfects.
You make some good points. It's curious to see people attacking Facebook so vehemently, as if using it were somehow obligatory. I don't see much appeal in Facebook myself, but I don't feel the need to force my views on others. Is the Overton window shifting toward outright censorship? It's an interesting question.
Your comment was downvoted a lot though. Must've hit a nerve :-)
> Should paper manufacturers be responsible for the content that’s printed
No, but publishers should be, and they are. FB is a publisher, a distributor of content. They are not the creator of electrons or LED displays via which your content is distributed, which would be the closest analogy to your paper and ink manufacturer.
No, the real publisher is that dumbass Facebook friend of mine, who shared a "fake news" story. And that dumbass Facebook friend of yours who saw the post of my dumbass friend and reshared it further.
I don't see how they can do that. There is little qualitative difference between what you probably mean by "fake news" and the stuff that mainstream news outlet put out. To stay safe, Facebook would either have to refuse allowing any news content on their site, or else turn into a world's leading intelligence agency / think tank to validate all of it.
The qualitative difference is that traditional news outlets are held accountable for their "fake news". The New York Times tries not to publish untrue claims, and if it does, it will retract them and issue a correction or apology. If it won't do that, there are libel laws and other legal remedies.
Fake news on social media is subject to none of these controls. Facebook is a multiplier of some of the worst tendencies of human beings, and it lacks the institutions that the rest of society has evolved over centuries to contain those tendencies.
I don't think it matters really. If Facebook's current practices cause, enable, or magnify harm to society, then there are levers that should be pulled to minimize that harm.
Facebook is, in part, a tool for distributing misleading information on a massive scale. We control tools that cause harm even if those tools are inert or even beneficial if they're not used by malicious people. Facebook has no institutional or economic reason to reduce the harm it causes, and it's the role of government to reduce harm when the market has failed to do so.
I think the commenter is making the point that Facebook has grown to a level at which it is unsustainable to deal with the issue of exposing its users to fake, hateful, or abusive content.
Facebook is making this easier because it makes keeping with touch and finding people easier. But Facebook is only an incremental step forward here; the objection you're raising should be aimed at the entire Internet. It's the Internet that lets crazies find each other and organize; before Facebook, it was other sites, other services.
I'd go one step further - that's just consequence of better communication. You can't have one without the other. If you make life easier for everyone, you're also making life easier for the crazies, and for the bad guys. Personally, I'd still say it's worth it.
Not really. I think this phenomenon is well documented at this point. How else would fake news spread so easily across social media? Everyone has a couple of crazies in their social circle, and they will broadcast false statements as facts. Due to a human cognitive quirk, we lend more credence to propositions we hear from multiple sources. The "crazies" that already believe in nonsense already do this on reflex. And so the crazy spreads, even to otherwise reasonable people.
People got sick before the great flu pandemic of 1918, therefore the flu was not responsible for the 1918 flu pandemic.
Do you see the problem with this argument? FB has rapidly accelerated the organization of these types of extreme positions. My original comment follows from my last comment. What do you think the crazies do when they've gotten all outraged at some false proposition or other?
Err, no. I never said crazies never found each other before; I said the pandemic of crazies finding each other and organizing now is the fault of FB (and other social media, of course).
That's not what the comment said. Here is the conversation:
Original comment: You can hire lots of humans to work at a normal pace and give them psychological counseling, and spend so much on their salaries that you'll go out of business, and even then they won't catch everything and will be frequently accused of bias.
Reply: Then you should not fucking be in the business at all! What FB does is externalizing the cost onto societies and skimming the profit!
I still fail to see how that is a coherent and relevant reply. It comes across as shouting and venting rather than substantive.
Are you a non-native English speaker? It is perhaps not a very elegant response, but it's perfectly coherent and relevant.
Consider the original comment to be the first clause in an 'if-then' sentence, i.e. "IF the cost of hiring lots of humans to work at a normal pace and giving them psychological counseling causes you to go out of business, THEN you should not fucking be in the business at all, BECAUSE otherwise you are externalizing the cost onto society and skimming the profit."
Basically, Facebook should be taxed for the externalities they create like fake news (just as manufacturing plants should be taxed for the externalities they create by polluting). They should also be required to provide a safe workplace.
So if it turns out that their moderation policy creates big problems for their employees and the world, they should be forced to shoulder a financial burden in the form of taxes/workers' comp.
Yes, these are problems you come across in a nanny state. Which is why in the US, most all speech is protected, even hate speech. And for the government to impose laws or rules to force a private company to regulate that speech runs counter to that. So I'm not really sure why this is a conversation in the first place.
I think parent's point is that "fake news" is a negative externality of the same kind as pollution.
Which it isn't. Pollution is a well-defined term; you can identify particular offending substances or disposal methods by their impact on humans and their environment. There are precise things that can be taxed to discourage their use. "Fake news", on the other hand, is a pretty uniform spectrum, from the most blatant falsehoods to the stuff CNN and NYT write. It's hard to even agree on where to draw the line (personally, I'd draw the line at "journalism should begin and end at stating verifiable facts + opinions on those facts clearly marked as opinion", but that's an extreme/minority position), and there's no way to start quantifying impact.
>At Facebook's scale, there is not a good way to do it. Even if they were able to scale it, it would not be good enough anyway
Facebook is picking the quota for decisions per hour or whatever, and choosing whatever help is available to reviewers. Certainly they have some culpability for the outcome.
> At Facebook's scale, there is not a good way to do it.
That seems quite unpersuasive, especially with no citations given.
Just off the top of my head-- email exists at roughly the same scale as Facebook. Yet those shady "fwd:fwd:fwd" stories do not propagate over email at anywhere near the velocity of the propaganda campaigns which saturate Facebook. And at the end of the day those "fwd:fwd:fwd" stories don't reach nearly the number of end users as the propaganda campaigns do because of the critical mass of email users unwilling to forward that crap on to their contacts.
> That seems quite unpersuasive, especially with no citations given.
Well, there is no good approach to censorship - it's censorship, after all; you will always piss people off.
People need to grow up: there is stuff out there that may discomfort your world view, but get used to it. Also, just because you are deleting comments on Facebook does not mean - at all - that you are tackling the cause; you just sweep the dust under the carpet.
> Yet those shady "fwd:fwd:fwd" stories do not propagate over email at anywhere near the velocity of the propaganda campaigns which saturate Facebook.
Because people no longer use e-mail for private communication. I remember the times of "fwd:fwd:fwd" stories - they were popular before IMs and social media.
And let's not forget the ultimate, age-old source of "fake news" - a real-life, face-to-face conversation with a friend or family member that starts with "did you hear that $<fake bullshit news of the day>".
"You can hire lots of humans to work at a normal pace and give them psychological counseling, and spend so much on their salaries that you'll go out of business, "
They could hire 10,000 people at 100k each. This would cost 1 billion per year, while they make 3 billion every quarter.
There are certain videos that contain extremely graphic violent content and could easily be automatically detected and blocked because they are reposts of the same video. That is a simple problem for which Facebook should be able to apply existing technology. There is a lot Facebook can do, but they are not, because they are lazy.
> apply existing technology. There is a lot Facebook can do, but they are not, because they are lazy.
Mmmh. You do realise that the state of the art technology that you refer to was developed by Facebook, right? I mean, there are no public communications around whether and which type of image recognition is used to detect harmful content at scale for obvious reasons but… take a wild guess.
Do you _really_ think engineers at Facebook who love to solve problems at scale would prefer to have an army of contractors handle something hurtful if there was a solution involving smart code and a battery of servers?
So, if FB has the state of the art and they are still employing armies of contractors then maybe they are just using said contractors to create their training data set.
The contractors’ feedback is certainly treated as a high-signal dataset. I am not directly familiar with that aspect of the company (I worked on things somewhat related, but not this), but I suspect there are several ways of training and assessing models that estimate whether certain content breaks certain rules (gore, direct threats, child endangerment, etc.). There could be challenges (legal, quality issues) in using automated decisions to block or not block different types of flagged content.
One thing that, as a data scientist, I noticed: none of those reports complains about seeing the same image repeatedly. I understand that posters are often tempted to spread new content, so I would expect there to be many repeats. It could be because the moderators see so many similar and shocking things that they don’t recognise the repeats. Or companies could use techniques to pool together content that is identical or similar enough.
I'm saying they should be able to automate flagging of the really bad stuff because it will likely be copies of the same video (by virtue of being so bad) ...
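The simplest version of that idea is matching uploads against fingerprints of already-removed videos. Below is a minimal sketch, assuming exact-duplicate matching only; production systems reportedly use perceptual hashes (e.g. PhotoDNA-style) so that re-encoded copies still match, whereas this toy version only catches byte-identical reposts.

```python
import hashlib

# Toy exact-duplicate detector: fingerprint each uploaded video and
# compare against fingerprints of videos moderators already removed.
blocked_hashes = set()

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def block(data: bytes) -> None:
    """Called when a moderator removes a video: remember its fingerprint."""
    blocked_hashes.add(fingerprint(data))

def should_auto_flag(data: bytes) -> bool:
    """Called on upload: auto-flag if the video is a known repost."""
    return fingerprint(data) in blocked_hashes
```

Anything that has been re-encoded, cropped, or watermarked needs a perceptual hash rather than a cryptographic one, which is where most of the real difficulty lies.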
So why not provide multiple Facebook views? That is, put everything in the same databases. Automate creation of flags for posts and accounts. Then let users set moderation level as they desire. Some would want everything. The rest would specify what they don't want to see. And they'd be warned about false positives. Some governments would limit options, I'm sure. But we already have that, so hey.
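A minimal sketch of what such a per-user moderation level could look like (all names are hypothetical; this illustrates the idea, not Facebook's actual implementation):

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    text: str
    flags: set = field(default_factory=set)        # e.g. {"graphic_violence"}

@dataclass
class UserSettings:
    hidden_categories: set = field(default_factory=set)

def visible(post: Post, settings: UserSettings) -> bool:
    # Show the post only if none of its automated flags are hidden by the user.
    return not (post.flags & settings.hidden_categories)
```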
That's exactly what they do...the algorithmic feed is basically what you described except instead of it being very explicit upfront, the signals are a combination of explicit and implicit and the resulting view is always evolving independently for each person.
I kinda get what you're saying but in practice zero filtering would be a horrible horrible user experience. The content velocity on Facebook is so very high that unless you filter the feed in some way - it becomes a different product entirely, or perhaps even unusable.
And in the case of "Let the user choose the filters however they want", the business portion of the company takes on a massive risk. Remember this isn't a charity. Facebook wants to be able to make a lot of money and exist independently as an organization. Furthermore, they want to be able to predictably grow the business.
The reason I push back against your suggestions is because I see them often - solutions that swing too far into the realm of idealism and entitlement. Facebook is a business, so you have to be able to provide solutions that recognize the importance of the business side of things.
OK, I get that. Unfiltered Facebook would be a very different thing. And maybe not viable as a business. Because public outrage about stuff that offends them. And because laws against hate speech, subversive agitation, and so on. It'd probably need to be a Tor onion service, to protect the provider. Such as Diaspora or whatever.
And yes, I am an idealist. As Crowley defined Thelema, "Do what thou wilt shall be the whole of the Law", and "Love is the law, love under will." The problem, however, is the haters. Burroughs joked about just killing them where they stand, and I suppose that's consistent with Thelema. But I can't see how it'd work in practice.
Because it's not about protecting people from being hurt by seeing what they don't want. It's about protecting people's minds from being exposed to ideas that Facebook or the government thinks they shouldn't think.
I'd say that it's both. Laws against hate speech are intended to protect people against seeing what might hurt them. But yes, many governments censor subversive speech, and force Facebook etc to follow their lead.
Why do they need to moderate every post? The problem posts will be coming from a minority of accounts. Ban them on the first offense, or maybe allow one warning, and people will figure out pretty quickly that Facebook is not the place to post that sort of thing.
Bans are easy to circumvent, and the worst trolls tend to be technically savvy enough to create new accounts. They'll do it forever. They'll hide behind VPNs and anonymizing proxies if that's what it takes.
Also, what do you do if the hate speech is posted or linked to by a notable public figure, such as the President of the United States or a member of Congress?
So? The system doesn't have to be perfect, it only needs to make trolling time consuming enough to reduce it to tolerable levels.
> hate speech is posted or linked to by a notable public figure
Here's how I would approach it. Accounts that are tied to real identities have a much higher threshold for bans -- public figure or not. Sans a credible threat of harm to specific individuals, they can largely say whatever they would like, hateful or not. When pseudonymous accounts get flagged, their choice is to attach a publicly visible identity to the account or be banned.
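Stated as a decision rule, the proposal looks roughly like this (purely illustrative; the thresholds and names are invented):

```python
# Illustrative sketch of the proposed policy; thresholds and names invented.

def moderate(verified_identity: bool, flagged: bool, credible_specific_threat: bool) -> str:
    if credible_specific_threat:
        return "ban"                       # applies to everyone, identity or not
    if verified_identity:
        return "keep"                      # much higher bar for real identities
    if flagged:
        return "verify_identity_or_ban"    # pseudonymous + flagged: attach identity or go
    return "keep"
```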
It has to be perfect. People will be enraged if they can find any example of it not working. The human brain only understands examples, not rates. Facebook is now an all-knowing god that cannot make mistakes or not know something.
Perhaps, but to the extent that it does, that's a combination of moderation and machine learning — exactly what the post several layers above was saying is insufficient.
Here is how I would do it: I would create an auto-moderation feature where users could report and classify inappropriate material. I would have a team of moderators judging some content in depth, and a system trying to match their judgment to a clique of auto-moderators that are not linked to each other.
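A minimal sketch of that matching idea, assuming the in-depth team labels a sample of reports and volunteer auto-moderators are then weighted by how well they agree with those labels (everything here is hypothetical):

```python
from collections import defaultdict

# Hypothetical sketch: weight each volunteer moderator by agreement with
# the in-depth expert reviews on the sampled posts, then use those weights
# when aggregating volunteer verdicts on content the experts never saw.

def agreement_weights(volunteer_votes, expert_labels):
    """volunteer_votes: {moderator: {post_id: 'remove' | 'keep'}}
       expert_labels:   {post_id: 'remove' | 'keep'}  (sampled posts only)"""
    weights = {}
    for mod, votes in volunteer_votes.items():
        overlap = [p for p in votes if p in expert_labels]
        if not overlap:
            weights[mod] = 0.5  # neutral prior when there is no overlap yet
        else:
            agree = sum(votes[p] == expert_labels[p] for p in overlap)
            weights[mod] = agree / len(overlap)
    return weights

def weighted_verdict(post_id, volunteer_votes, weights):
    score = defaultdict(float)
    for mod, votes in volunteer_votes.items():
        if post_id in votes:
            score[votes[post_id]] += weights.get(mod, 0.5)
    return max(score, key=score.get) if score else "needs_review"
```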
I do not understand the lack of research into auto-moderation systems. This has now become a crucial point of social media! The most advanced system I have seen is 20 years old (Slashdot's moderation system), but now we are back to much more primitive reddit-like systems. Why did people stop exploring these?
In a world where deep learning and blockchains are available through many easy-to-use packages I am surprised that there are not more experiments around that.
All of these things are doable without a blockchain if you have a third party you can trust not to cheat in the moderation or in the handling of micro-transactions, and not to be bribed by internal or foreign agents.
Look at the fire twitter/facebook are under: Dems accuse them of serving Russian interests, Reps of being pressured by "deep state".
A blockchain-based moderation system would be immune to this criticism, the data and algorithm being laid out publicly for all to see and check.
Well, by using blockchain, you make whatever you’re doing infinitely cooler, thus attracting higher quality talent who can then do a better job moderating content.
Alternatively, I bet we could find a way to use a middle out compression solution to make moderation problems smaller.
I understand this is a buzzword that is overhyped these days and that by putting it next to deep learning I probably triggered many buzzword saturation thresholds.
I usually point out that there is only one thing the blockchain brings: it removes the need to trust a third party. If you don't need that, then you don't need the blockchain. I would argue that political discussion in the current climate is one of the cases where the absence of a trusted third party would be an advantage.
If anything, it makes the job worse by an order of magnitude. Arvato Bertelsmann is famous for bad working conditions.
For example: circumventing minimum wage by (de facto) requiring unpaid overtime. They write into the contracts that you only need to work Mon-Fri, but in reality they ask you to show up for work on Sundays, at Christmas, or whenever they feel like it. If you don't comply, they show you the door. Illegal, yes, but on minimum wage you usually can't spare the time or money to sue them.
This sounds very much like the kind of exposure we had moderating ww.com/camarades.com and files.ww.com. I've had a lot of contact with authorities over the years about some of the more disturbing things we came across on the service. Some of those images will haunt me for life. I have nothing but good things to say about the people who do this for a living in an official capacity; that's got to be a very hard job to keep up for an extended period. I ended up shutting down files.ww.com because there was no way to run that part of the business profitably with the amount of oversight it required, and it seemed to attract the worst kind of content, but I'm frankly also not too unhappy to be out of the webcam community business.
Content moderation is a pretty depressing activity to engage in for any amount of time. My biggest stroke of luck was to find a couple of community members that really enjoyed doing this and that kept a pretty fair hand; otherwise, instead of a year ago, I would have shut the whole thing down a decade ago.
>> My biggest stroke of luck was to find a couple of community members that really enjoyed doing this ...
That must be a rare type of person. Without casting aspersions on those two individuals in particular, I wonder whether part of a solution might be to invite the posters of objectionable content to moderate others' content. Poacher turned gamekeeper.
It makes me so sad that we are employing people (for very little money) to harm themselves psychologically. This isn’t good for anyone, this is a case where machine learning is compulsory.
We as a society have decided to hide the terrible aspects of our culture instead of dealing with them. It’s why we have public and private schools, it’s why we give homeless people bus tickets out of our cities. We want to take the easy road and push our problems out of sight and out of mind.
I’m not saying I have an answer, but hiding from things that disturb us can’t be a long-term solution.
As someone who used to run a news company, I’m not sure watching all the beheadings and burning that ISIS posts is healthy or necessary, and I would advise that anyone I love to avoid watching them. I can handle a lot, but that stuff messes you up mentally, at least temporarily.
I think watching videos where normal people die in some sudden unexpected way has taught me how to be more perceptive of potential danger, even when a situation seems perfectly normal. I would say it serves a purpose, and if people must watch a gruesome video, I’d say those are the ones to watch.
Check out "Active Self Protection", youtube channel. His channel serves exactly to this purpose. I also felt the same way as you described after watching a few of these.
But I don't think there is any pragmatic reason at all to watch the torture videos that terrorists make. It is not a defensive confrontation; it is the aftermath. It serves no purpose, there is nothing to learn there.
I am probably among the tiny minority of users who strongly objects to any form of Content Moderation whatsoever.
Regulatory requirements notwithstanding, a policy that dictates what content gets filtered to users is analogous to a parent forbidding a child from watching an age restricted movie.
Although I could present a "this is a slippery slope" argument here, the more salient argument is that content moderation is essentially a form of social engineering. If you think I am exaggerating but have never seen video footage of what _real war_ does to real human beings, I would encourage you to do so; consider then whether you still experience the same apathy that you did whenever "Suicide bomber in <place_in_middle_east> kills x" appears in your feed.
IMHO, people should at least be presented with the option to see what is getting filtered rather than selectively suppressing objectionable material, lest society remain indifferent.
While it would be nice to have online discussion without moderation, in practice it simply doesn't work. There are just too many trolls out there who wreck it for everyone else. (I've moderated online chat for > 20 years, so I have some experience of trolling).
The other issue, which is perhaps more relevant here, is illegal content. Are you saying, for example, that it shouldn't be illegal for someone to put posters up on every street corner in their town saying that Mexicans are all child molesters and should be shot? Where should the line be drawn?
Actually, in the US, hate speech is not illegal and the hypothetical posters you mention would be protected speech. “Should be shot” is protected, “should be shot at 11am today at the soccer field on Hicks Road” would not be protected. There’s the concept of “true threat.”
There is a mistaken assumption that hate speech is illegal. In the US, it isn’t illegal. Speech that contains a specific threat of violence would be illegal — but such threats have to be specific.
There is a song by the band Type O Negative called “Kill all the white people” and the first line of the song is “kill all the white people and then we’ll be free.”
That song is hate speech and promotes violence against white people — but it isn’t illegal.
Even speech promoting the overthrow of the government is protected, as is speech calling for killing police or raping people.
Many people consider the spreading of overt lies and conspiracies about specific ethnic and religious groups to be hate speech. For example, Holocaust denial is considered hate speech in many places.
If you consider things like the "blood libel" hate speech (back when antisemitism was rampant in Europe, there were very popular myths spread by people that Jews would kill Christian children and consume their blood for esoteric rituals), then his overture is full of them. From "Muslims cheered in Jersey while watching the 9/11 attacks" to "You have to kill the families of terrorists" to “If you have people coming out of mosques with hatred and death in their eyes and on their minds, we’re going to have to do something.” (how do you see hatred and death in someone's eyes or minds? It's clearly promoting the hatred of Muslim people because they apparently leave mosques wanting to kill you) to (again about Muslims) “There's a sickness. They're sick people. There's a sickness going on. There's a group of people that is very sick.” to the story he told about killing Muslim insurgents with bullets dipped in pig's blood as an effective means of stopping terrorism to “They're going to have to turn in the people that are bombing the planes. And they know who the people are. And we're not going to find the people by just continuing to be so nice and so soft.”
I mean, is any one of those statements literally "I hate Muslims"? No.
But is spreading lies that Muslims leave mosques with "hate in their eyes and minds" and that Muslims cheered for 9/11 and that Muslims are sick and twisted people and that Muslims are willfully hiding terrorists and not respecting America, etc etc. and that all this (all these false things) need to be stopped no matter what the cost (kill the family members of suspects, dip bullets in pigs blood etc), is all that not promoting hatred? Is the effect of that speech not exactly the same as the effect of hate speech?
> Are you saying, for example, that it shouldn't be illegal for someone to put posters up on every street corner in their town saying that Mexicans are all child molesters and should be shot? Where should the line be drawn?
It would be inconsistent with the principle of free speech to bar certain things from being said in public. Even racist, offensive and blatantly stupid bullshit (like your example) should not be censored.
The reason for this is because, paradoxically, if we assume that people are capable of rational discussion and debate they will eventually see where their beliefs or statements were in error through their reasoning.
To borrow my previous analogy, _hate-speech_ laws are akin to telling a child to: "obey; because I am your father" as opposed to people self-correcting their ethics through open debate and questioning.
> if we assume that people are capable of rational discussion and debate they will eventually see where their beliefs or statements were in error through their reasoning
I think it's pretty obvious that many people aren't capable of that.
I have no problem with content moderation as long as it stays on a specific platform. I strongly object against any form of internet censoring (e.g. DNS manipulation), but deleting posts by the rules of a specific forum is okay for me.
The difference is that one thing forms a community by the rules they set for themselves and the other thing tries to restrict the communication between people.
Moderation doesn't create a civil online society any more than police create a civil society in real life. At best it reinforces the norms of the existing culture, at worst, it works against it to impose a cultural facade.
>content moderation is essentially a form of social engineering
I mean, do you think social engineering is bad? What do you think Public Service Announcements are for? Why do you think governments give tax credits to people who buy electric vehicles? Social engineering is a great tool when wielded responsibly with citizen oversight.
> a policy that dictates what content gets filtered to users is analogous to a parent forbidding a child from watching an age restricted movie.
Which most people would agree is perfectly reasonable parenting. If there's an argument in here, I suppose it's that a content platform is entering into a paternal relationship with its participants. But that really isn't what's happening: the relationship between a platform and its adult participants isn't a paternal one, it's more like that between a host and guests at a house. Anyone would be perfectly within their rights, and might well have reasonable justifications, in controlling what kind of media or other expressions are welcome in their home or other environments under their stewardship.
There are drawbacks to limiting expression. Your example of the visceral impact of watching violence piercing apathy is a reasonable point. But it's equally reasonable to suggest that the apathy has other sources (there are other causes of psychological distance), that there are diminishing returns to explicitly portraying violence (most people already believe that killing except in self-defense is wrong), and of course there appear to be real negative impacts as discussed in the article.
> IMHO, people should at least be presented with the option to see what is getting filtered rather than selectively suppressing objectionable material, lest society remain indifferent.
This is almost reasonable. The problem is that the material itself is only the primary concern. Secondary effects include encouraging other participants to doxx, engage in violent threats, etc. You're not just selecting material; you're creating a set of expectations for how civilized people behave.
Perhaps there should be unfiltered forums; I certainly wouldn't stop anyone from creating one. But general social media platforms should probably reflect some norms of civil societies their members are drawn from, as well as whatever additional values their owners may hold.
> The only power that the content moderator has, is to delete a post.
This surprises me, but it is consistent from Facebook's viewpoint where the user is the product. You can fire a customer, but you can't fire your product.
So maybe it is to be expected that Facebook implemented moderation as a QA process. It would probably be against their interests to implement policies that would police their community in a way that makes sense, with warnings, temporary and permanent bans etc.
> "You can fire a customer, but you can't fire your product."
On some level, money is morally neutral, but it's not like Facebook won't have enough "products" to sell if they dealt with those that violated their TOS more decisively.
That also contradicts an earlier statement in the article:
> The moderator has not only to decide whether reported posts should be removed or kept on the platform but navigated into a highly complex hierarchy of actions.
I had to quit, as I was particularly disturbed by what I saw as signs of professional deformation in me: a kind of hypervigilance (especially about the risks for my family). I was dreaming about the job, and my own perception of reality shifted in a most concerning way. The terrible Las Vegas shooting suddenly seemed entirely normal to me.
I know we like to believe otherwise, but an observation of human behavior en masse shows that we are preternaturally vicious. Extreme cruelty and violence is as much the default as our need for connection and love. Those of us who've had the opportunity to observe or take part in humanity in its most primal form, in the unsafe gaps between "civilization" and a well-functioning state, often come away with our reality shattered. I know mine was.
Lastly, the author shows signs of PTSD - granted, this is an armchair assessment.
"...At the end of the ramp-up process, a moderator should handle approximately 1300 reports every day which let him/her in average only a few seconds to reach a decision for each report..."
If you are doing something 1300 times a day you are effectively not doing it. I don't know what you call it, "Monkey in a can" comes to mind, but you are not adding any sort of human oversight or judgment aside from simple image recognition -- and probably not doing that good of a job at recognition at that.
I suppose these jobs exist so that the social media giants have somebody to point to (and fire?) when things go wrong. That's fine. Sounds necessary to me. But not at that scale. To attribute any sort of meaningful behavior to clicking images at that rate is a joke. Most normal people would be burnt out before they got to their first 100.
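For reference, the time budget implied by that quota, assuming an uninterrupted eight-hour shift (my assumption, not stated in the article):

```python
# Rough time budget per report, assuming an uninterrupted 8-hour shift.
reports_per_day = 1300
shift_seconds = 8 * 60 * 60                      # 28,800 seconds
print(round(shift_seconds / reports_per_day))    # ~22 seconds per report,
# before breaks, meetings, or any report that needs a second look
```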
It's almost as if a better solution would be to leave mass media to a few distributors per city and a few national ones.
You know, like newspapers.
What do we really gain by having direct and immediate access to the uncensored thoughts of casual acquaintances and relatives and all their casual acquaintances?
What do we really gain by blog posts? I personally have learned a lot about technology from peoples' blog posts, and Stack Overflow posts. Maybe not as much from Hacker News posts, but I've still learned some. And by reading reddit posts such as stuff in /r/askreddit I've learned a lot about people with backgrounds quite different than mine, and I believe the ability to better empathize with them.
All of your examples involve moderate to high self selection for thoughtfulness and have a culture of self-critique. Reddit has it historically and HN has it in daily practice.
However, Facebook is nearly a wasteland of relatives and acquaintances yanked into the network, sometimes against their better judgement. Once they're there and fluent in the technical aspects, they feel free to be themselves, not necessarily thoughtful members of the network.
No, of course not, that's an indefensible position.
Their benefits outweigh their detriments. I'm just not a fan of amateur journalists with no oversight, poor training, an ideological axe to grind, and possible pathological tendencies.
How do you filter all that out? Ahhh, you can't. At least not without a cultural homogeneity that leads to strong social support for education and care of the less well off. How do you get that? Time.
1. I got the impression that there may be no long term safety net for the enduring impact of this. (Like PTSD support for policemen.) That's not good enough.
2. Reading this piece I don't imagine the system can actually "work" in any reasonable way. This job should be done by the individual user. In FB you can "block" other people. I imagine that would sort out much of this issue.
3. If FB and others are forced to police and censor like this maybe they should simply take themselves out of business lest they become inclined to exterminate our species for the actions of what, I hope, are a few.
No one in this thread has yet discussed a design concept that would let the community at large help moderate: a dislike button.
Hacker News comments are a pleasure to read (A) because of the community, which (B) is curated by the ability to downvote horrible posts.
Facebook places value on these horrors, inadvertently I guess, but they are not empowering the users to shut down bad stuff. Imagine the possibilities.
Interestingly, the original German title is "Drei Monate Hölle" (three months of hell). I wonder why the English title focuses on the learning while the German one focuses on the suffering.
Note that it was translated from English to German and not the other way around despite being about someone who works for a German company that does the "content moderation".
I wish there were more transparency about the divisions of these networks that make these kinds of decisions. How do we contact a higher-level management person at Facebook to talk about something that was removed or deleted?
The person in this article made themselves seem very objective and desiring to be fair, considering many angles... however, I do not think someone can click to remove or not remove thousands of posts/accounts a day and stay objective with all of them.
Projects I worked on have had Facebook and Tumblr accounts pulled, and I have found no way to speak with someone at either company about the issue. Given how similar the content looked to some standard spam, I can see how a moderator would quickly click to remove it when they see hundreds of similar things a day that are indeed quickly made spam. However, these particular projects were not affiliate spam, they were original works, yet no amount of emails or @mentions on Twitter ever brought anyone from these companies to discuss the issue.
Had similar issues with google.
When it's just a fake account being used for trolling, the harm in removing an account is likely minimal. However, there is little chance these low-level mods know if the poster was a troll or a legitimate group trying to do right.
If presence on social networks affects search engine rankings, getting a low-level, overworked, overstressed person to delete your competition's accounts could easily reduce their sales by 80%.
As facebook seems to want to "be the internet" and monopolize what is okay to be discussed and seen, they certainly have poor customer service for righting mistakes that are made in removing things. This is also true of tumblr and others.
Wasn't there something on Hacker News a while back that pointed out that many of these mod jobs are pushed to countries like the Philippines as well?
With the bias that can be created by culture, and the amount of content that people are forced to see, there should be better options for contacting these companies and getting things re-instated.
In some cases deleting an account hardly affects anyone. In other cases deleting an account creates a snowball of removing speech that may be important to many, not just on facebook, but affecting the other portals as well.
This is fascinating, and disturbing. A few thoughts: UK police, when cataloguing child pornography for courts, were given regular breaks, counselling and, what stuck in my mind, watched BBC children's programming on TV at the same time, just to counter the overwhelmingness.
This overwhelming effect is human - we extrapolate from our experience what the world is "really" like - so a flood of beheading videos teaches you the world is fucked up. We as a society urgently need to get a grip on this - it's not just the extreme end, but how skewed away from BBC-normal is the viewing diet of the average YouTube kid?
I think there are solutions to the scale problem. The Facebook brute-force approach is interesting - but at 6.5M per week, unmanageable.
I think this needs to be dealt with the way social media deals with everything - farmed out to the community. What we have is a free-to-publish environment, but we as a society have really no insight into what is published - whereas previously there were few enough newspapers that one could read it all.
So why not have an approach like recaptcha: every day or so you get a suspicious post in your feed and are asked to comment. Feedback can be something like "95 of your friends think that post was offensive enough you should be jailed" - valuable human-level feedback that is not available in "likes". One can easily see it as a way to understand the different universes of people online - the fox-hunting lobby generally approved of your post showing a bloodying, but people who like superhero movies were revolted. I can see the Apple FaceID thing here being an interesting measure of revulsion...
Ultimately, social media has been a consequence-free zone for publishers (posters). By adding in feedback from friends and wider society we can get a much-needed insight into what is happening around us, and the feedback will often be a useful control rod for people posting - as it is online.
Long unfocused rant but interesting thoughts.
One final thing - Berlin has a large enough immigrant population that it can recruit 1000 multilingual, multicultural immigrants (plus churn - the author lasted 3 months) with enough tech savvy to be moderators. Just want to mention that to the 52% of the UK who think we can be a tech hub after telling all the immigrants to piss off.
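A toy sketch of the recaptcha-style sampling idea described a couple of paragraphs up (every name and number here is hypothetical):

```python
import random

# Hypothetical sketch: show each flagged post to a small random sample of
# users and return the aggregated verdict as feedback for the poster.

def sample_reviewers(users, k=100):
    return random.sample(users, min(k, len(users)))

def collect_feedback(post, reviewers, ask):
    """`ask(reviewer, post)` is supplied by the platform and returns
    'fine' or 'offensive'."""
    verdicts = [ask(reviewer, post) for reviewer in reviewers]
    offended = sum(v == "offensive" for v in verdicts)
    return f"{offended} of {len(verdicts)} reviewers found this post offensive"
```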
"I showed empathy only when I found something connecting me with the world of the living beings, these small details that I tried not to notice that would humanize the corpse and overcome my reflex of repulsion."
This part really stood out to me. It made me think about the difference in my own reaction between reading about some number of deaths and reading about how the victims ended up where they did.