Facebook's rules are necessary because of the nature of the beast: when a company is large enough that it has to begin hiring outsiders to moderate content, the "moderation culture" (for want of a better term) is lost. To communicate it, you need to codify it in a set of rules that lays out exactly what actions are taken, and when. The problem I see with this method is that you lose the ability to have a "why": why content is left in place despite not meeting the rules, or why content is removed despite meeting them.
Community insiders promoted to moderate have usually demonstrated that their opinions are in line with the culture that the moderators are attempting to promote, which means the sort of rules that lay out exactly what is acceptable versus what is unacceptable are not required.
There are definitely upsides to the moderator-community-gestalt method of determining what content is unacceptable, the biggest of which is that it denies trolls the ability to 'rules lawyer' their way into posting content which is against the spirit of the rules, but not against the letter.
Which system is more elegant? I don't think Facebook's is, necessarily, but even if it were I don't think it should (or needs to) apply to a community like HN.
"There are also sub-categories that enjoy extra protection...: Age – youth, senior citizens, teenagers"
and then, just a few paragraphs later:
"[C]ombining a protected category with an unprotected category results in an unprotected category. Irish teenagers are the example. While they are protected under the national origin category, the term teenager does not enjoy special protection"
Honestly surprised at how complex the rules are. There's a rationale, for example, for why "Irish women are dumb" is not OK, but "Irish teenagers are dumb" is OK.
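The category-combination logic described in the quoted slides can be sketched in a few lines. This is a hypothetical illustration, not Facebook's actual implementation; the set of protected categories follows the ones the slides mention, and the function name is my own:

```python
# Hypothetical sketch of the rule quoted above: a targeted group is
# protected only if EVERY attribute used to define it belongs to a
# protected category. Mixing in an unprotected qualifier (like age)
# removes protection from the whole group.

PROTECTED = {"race", "sex", "gender identity", "religious affiliation",
             "national origin", "ethnicity", "sexual orientation",
             "serious disability"}

def is_protected(attributes):
    """attributes: set of category names describing the targeted group."""
    return bool(attributes) and attributes <= PROTECTED

# "Irish women" = national origin + sex: both protected -> protected
print(is_protected({"national origin", "sex"}))   # True
# "Irish teenagers" = national origin + age: age unprotected -> unprotected
print(is_protected({"national origin", "age"}))   # False
```

This makes the oddity concrete: the rule is a strict intersection, so any single unprotected qualifier strips protection from the entire group.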
Hm, maybe someone mixed up their slide? I would imagine, with all the online bullying issues Facebook has, that protecting teens would be a high priority.
There's a whole different section of rules for bullying. For example, picture of someone urinating in their pants == OK, unless there is a caption calling it out in a derogatory fashion. Unless the person is a public figure (more rules to define this), then the derogatory caption == OK.
There are ways in which these policies are more coherent than the moderation rules we have here; in particular, the rationale for disallowing speech designed to target or intimidate minorities (for Facebook: because it's bad for business, by making people hesitant to share; here: because it destroys civility and the possibility of intellectual curiosity).
Every time I ask peers about what the best rule for HN would be with respect to racism, anti-Semitism, ageism, homophobia and gender discrimination, I'm met with the response "duh, just ban racism, anti-Semitism, ageism, homophobia and gender discrimination". HN wants something more elegant than that kind of ban, but the construction Facebook uses here seems plenty elegant.
But yes, you eventually get to the PC/NPC stuff, and "fucking Muslim" vs. "fucking migrant", and it's off the rails.
The good parts of this, though, are worth considering!
> "duh, just ban racism, anti-Semitism, ageism, homophobia and gender discrimination".
This sounds reasonable at face value, but not one of these things is well-defined.
Many people interpret uncomfortable but entirely reasonable discussion of sociology and demography (e.g. discussions of wage differences or crime distribution) as racist or sexist or ageist. Many people confuse anti-Zionism or anti-theism for anti-Semitism. IMO it's better to be very conservative with banning such things, because the false positive risk is very high, and there are a lot of important things to discuss around these topics.
Is there any reason you chose to compare these rules to HN's rules? HN's moderation is some of the best I've ever encountered; I'm not sure there's much need to improve it.
It is. But compared with FB, HN has two distinct advantages:
a) The scale is small enough that most content can be manually reviewed, or reviewed after it reaches any significant number of views.
b) People on HN expect a professional discussion forum and are largely OK with relatively active moderation. People posting on FB often assume that any moderation by Facebook the company must comport with freedom of the press or freedom of speech laws. This is of course legal nonsense, but it might not be an unreasonable expectation overall. In some ways, because of the scale of online platforms, the societal effects of FB/Twitter/YouTube content-policing norms are closer to, say, a code about what is allowed on-air on radio/TV in a country than to a particular newspaper's editorial policies.
HN's moderation works only because Scott and Dan patrol threads flagging and banning people who are (for instance) openly anti-Semitic on threads. It would be better to have simple, coherent rules against these things, rather than unstated rules that require enormous amounts of effort to enforce.
Again I want to be clear that most of these rules seem silly or even puerile. Just the beginning of it made sense.
Frankly I think it's starting to fray a little around the edges. Downvotes and flags are supposed to be used to suppress non-constructive comments and submissions but I see them being used more and more to express disagreement (or, to be more precise, I see more and more comments being downvoted and stories being flagged that seem to me to be constructive and respectful but which express unpopular points of view.)
For the downvoters of the parent, here is the citation:
PG: "I think it's ok to use the up and down arrows to express agreement. Obviously the uparrows aren't only for applauding politeness, so it seems reasonable that the downarrows aren't only for booing rudeness."
He wrote that back when downvoted comments didn't get greyed out the way they do today. Because of the grey-out, a downvote today is effectively a vote for, "No one should read this." IMHO, disagreement should not have that effect. There really ought to be a way to distinguish between "I disagree" and "This comment is not constructive and should be suppressed" -- and there is. If you disagree, you can post a comment to that effect. But whether PG intended it or not, a downvote today is suppression because of the way HN works.
Thanks for pointing out the context of when the pg comment was made. I wasn't aware of this, and the comment makes more sense to me now. It also reinforces—for me at least—that having separate "agree/disagree" and "constructive/unproductive" button pairs may be a good idea. Open questions:
- how they'd play into karma (probably just the constructive/unproductive points, maybe agree/disagree displayed as a pair but not contribute to karma)
- whether this would be too complex from a UX perspective
- while I think "disagree" and "unproductive" can be usefully distinguished, whether "agree" and "constructive" would be useful. That said, if we want people to comment for disagreement (substantively, I would hope), I wouldn't want to see or encourage a bunch of "+1" or "me, too", or "this" comments. I think users want to show support for good comments, and up-votes are currently the only way to do this if you have nothing else to add.
Actually, HN already has a separate "flag" link. The only real change needed would be to dim based on flags rather than downvotes. Then downvotes could go back to their original meaning (disagree).
I would abolish karma. It turns HN into a silly game.
I agree that dimming should be a function of constructiveness rather than agreement. If flag is to be used for that purpose, there's currently no way to express "I think this is a constructive comment", no way to balance "I think this is not a constructive comment". Having such a button is effectively the same as having "constructive/unproductive" buttons: the flag link would be subsumed.
As for karma, I think it serves two purposes: it acts as a gating function to ensure members have contributed usefully before allowing them to behave in a potentially unproductive manner (e.g., creating throwaway accounts to down-vote or flag). It also encourages continued constructive contributions. Yes, it's artificial, but that doesn't necessarily mean that it can't serve that purpose. I don't think karma is perfect, but I think it is, on balance, a constructive feature on HN.
If there are other methods of serving those two purposes more effectively and as simply as karma does (and I admit to ignorance here), I think they'd be worth exploring. I think there needs to be some pressure over the lifetime of an account to encourage constructive comments, not only on a per-comment basis.
"Agree" generally implies "constructive" so these are not orthogonal. You really need three options: agree, disagree but constructive, and non-constructive. Only the last one should count towards suppression.
As for karma, you make a good point. Have an upvote :-)
Back at ya! Very good point regarding {agree|disagree but constructive|non-constructive}. Which really just leads us back to up/down + flag, doesn't it? Then both up-voting and down-voting would count towards "un-dimming" while flagging would count towards dimming, I think. Thanks for working through this with me...er, bringing me around to your opinion :)
- It really should be a ternary selector, not up/down + flag. You should only be able to select one of the three.
- Having both up and down contribute to undimming would actively discourage using disagree as "punishment". It would very likely change how "flag" is used, though the proper use of "flag" is unclear to me right now, as from what I can tell its effect is not visible until the comment is dead. It would clarify the effect between "down" and "flag".
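A minimal sketch of the ternary selector discussed above. Everything here is an assumption for illustration: the class and method names, the rule that one user gets exactly one selection, and the dimming threshold are all hypothetical, not HN's actual mechanics:

```python
# Hypothetical model of a ternary vote: each user picks exactly one of
# agree / disagree / flag. Agree and disagree both count toward
# un-dimming (they signal engagement), and only flags push a comment
# toward suppression.

from collections import Counter

FLAG_DIM_THRESHOLD = 3  # assumed: net flags needed before a comment dims

class Comment:
    def __init__(self):
        self.votes = {}  # user_id -> "agree" | "disagree" | "flag"

    def vote(self, user_id, choice):
        if choice not in ("agree", "disagree", "flag"):
            raise ValueError(choice)
        # one selection per user; revoting replaces the earlier choice
        self.votes[user_id] = choice

    def is_dimmed(self):
        counts = Counter(self.votes.values())
        net_flags = counts["flag"] - counts["agree"] - counts["disagree"]
        return net_flags >= FLAG_DIM_THRESHOLD
```

The design choice worth noting: because each user holds a single slot in `votes`, a user cannot both flag and downvote, which is the "select only one of three" property argued for above.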
I can't think of any comments 'Dead' because of disagreement.
I've seen feedback from users that they feel that this is exactly why a comment was down-voted, whether it gets killed or not. I admit I don't have any dead comments at hand, and I don't think the search API returns dead comments. Granting the anecdatum, here's one that I came across:
I think users blame downvotes on disagreement, but in reality it's often badly written viewpoints lacking in value, or off-topic ones.
I think that's often (but not always) the case. That said, I don't know how to convince those who think otherwise. One person's weakly argued comment might be someone else's disagreement. Compare with the Political Detox Week experiment: there was quite a bit of expressed disagreement as to what "politics" means. Or what constitutes civility. Or what's off-topic. It's not only whether the down votes are due to disagreement; it's also whether the commenter perceives the down-votes as such (which is the case of the example I provided).
Edit to add: There will always be some percentage of members who won't agree, and they're more likely to be vocal. I'm not sure how to measure when the optimum is reached (most users feel the system is working as expected).
If anyone is still paying attention here, can you help conduct this experiment by down-voting aaron695's comment to see what happens? I can't do it myself because you're not allowed to down-vote responses to your own comments.
"Constructive" is interestingly subjective. The higher you raise the bar, the more likely it is to be controversial, because e.g. you require posters to share and build on the same assumed knowledge. You are welcome to challenge assumed knowledge from a position of expertise, but not from a position of ignorance.
I give "I disagree" downvotes on topics I am very familiar with when the poster is simply too inexperienced to be broadcasting their bogus opinions, however polite they are. I can imagine even more inexperienced posters would see that poster as constructive and helpful, but it's not the case.
Unfortunately, "more coherent" doesn't mean "better", it just means "more fleshed-out". The problem with formalizing rules of human behaviour is that the latter is generally too fuzzy, so you either end up with a very complex system of rules, or with a simple system that has tons of edge cases (like the one in the post).
I think you were attracted to the improvement of writing rules down, but that has a host of other problems, including the fact that you are then forced to enforce the rules, even where they don't make sense (e.g. you have to remove some comment because of the letter of the law, even though it wasn't actually inflammatory, it just ran into an edge case).
I moderate(d) a large reddit community (millions of subscribers?) and you pretty much can't win. If you don't have rules, you're "murky" and "wanton". If you have rules, you get "these rules suck" or "you're only enforcing the rules whenever you feel like it" when you try to go by the spirit of the law rather than the letter.
To illustrate your point, Facebook censored the Pulitzer Prize-winning photo taken June 8, 1972 with crying children, including 9-year-old Kim Phuc, after an aerial napalm attack on suspected Viet Cong hiding places.
That photo violates several of Facebook's "ban" rules but is an exception to them. After much outcry, Facebook reversed its rule-based ban.
Why? What makes you certain of that? I'm not suggesting that Dan and Scott stop moderating, just that it's possible to make a clear, coherent statement about bigotry in the moderation guidelines.
My experience with moderating large communities and, mainly, my inability to define bigotry to any satisfactory degree. I certainly urge you to give it a go, maybe you'll be able to solve the problem for all of us.
Hell, I can't even ballpark where your freedom to speak ends and my freedom to not feel bad starts.
It would be better to have simple, coherent rules against these things, rather than unstated rules that require enormous amounts of effort to enforce.
Would it?
It would certainly require less effort, but we know from other places that blanket rules are pretty easy to game (eg, see the games people play on Wikipedia with the 3RR rule).
Once people know them, the rules themselves become weaponized.
Edit: I partially misunderstood the parent comment. See below.
Your viewpoint is antithetical to open intellectual discourse. You specifically call for censorship on political grounds, not even because of something that could be construed as an attack. To quote:
> but their views only come across in the content they post here, not as a form of attacking other groups on the site or off the site
Fringe political ideas should not be suppressed, especially if they aren't even attacking anyone. HN's decent level of open discourse is not a "problem", it's one of the things that makes this site better than e.g. Reddit.
I never called for censoring groups, I was just outlining the problems faced here. If you read back up to the parent, what I'm trying to get at is that despite HN harboring these regressive actors, at no point are they mobilizing or openly attacking others, so Facebook-style content removal or moderation does not cross-apply here. I think we may be more on the same page than you realize.
Although their views may be seen as reprehensible from my perspective, I still see their value as contributors to this site, considering most topics they contribute to won't bring up these issues.
You think Reddit has a lack of "open discourse"? That's strange. While Reddit has problems with brigading and the like, so does HN. So if Reddit has a problem and HN does not, how do HN and Voat compare? Or HN and 4chan? Or HN and 8chan?
> There has not been a real problem of [...] hate speech on HN
What do you have showdead in your profile set to? If you have it set to "no" you're going to miss a lot of it. But even so, there's plenty of it on HN.
Algolia search doesn't return killed comments, but there's plenty of rabid anti-Semitism, racism, sexism, etc on HN.
> HN has a HUGE misogyny problem and to a lesser extent holds a startling quantity of alt-right/neo-nazi posters, but their views only come across in the content they post here, not as a form of attacking other groups on the site or off the site
I'm not sure this is compatible with the 1st sentence I quoted.
Some people's freedom of speech silences other people.
Thanks for joining the conversation. I wanted to steer my comment a little bit more toward the last point you made about how racism and misogyny, no matter how subtle, create a toxic culture which silences or intimidates others, but saying that REALLY riles up the "freeze peach" crowd here and I was concerned that making such a statement in an already upsetting comment would get me hit with the flaghammer. More to the topic at hand (and the parent comment), this is about the role of a site taking active and aggressive steps to remove content or clamp down on the posting of content that is focused on organizing and openly attacking outgroups, something that does not occur here on HN. You are going to get comments colored with implicit racism or gendered bias, but nobody is grabbing pitchforks to intentionally attack anyone here.
HN, despite carrying all of the more socially regressive actors of the meatspace tech community, is remarkably civil and kind.
> Some people's freedom of speech silences other people.
While this statement might be somewhat true, the idea that someone would be censored for having an opinion that others find offensive is extremely frightening. Obviously if someone attacks a group based on a stereotype, or advocates discriminatory or harmful behaviour towards a group, that's one thing -- but suggesting censoring people because they may hold views that others don't like isn't a road you want to go down (who makes that judgement?)
> Some people's freedom of speech silences other people.
No, this is completely and totally incorrect. What is correct is that some people silence themselves in reaction to others' free speech. The power is all in their hands: they have the freedom to speak, but they choose not to use it.
That applies to me as much as anyone else: on some controversial topics I choose to speak my mind, generally because I think that there's a chance I can persuade others to see things correctly; on other controversial topics I don't believe that there's any chance to persuade folks, so I silence myself. What's the point of controversy if no-one will be persuaded? It's just trolling.
Regarding hate speech, I have showdead turned on and I've not really noticed a problem with it. There are a very few loons, but they get downvoted to heck and that's that.
The phrase "migrant" refers to an action or behaviour taken by a person -- that can't really be considered a PC, because that would be going down a road nobody really wants to go down (there are all sorts of actions people take that others are critical of, and you'd have to write a phone book full of rules if you wanted to start moderating that).
However, a rule that says you can't make blanket statements about a PC is much easier to enforce.
> statements such as “migrants are dirty” are allowed, while “migrants are dirt” isn’t
So I'm not sure the rules are quite as simple as outlined. There are other oddities, like one area that says "the term teenager does not enjoy special protection", then another area where they specifically say it does get protection. Argh.
I understand why private companies will not publicly subscribe to any specific set of rules: they often need to, have to, or want to break those rules. They are a business, and I am not judging them here. But I believe such rules should be an open standard, for other companies, non-profits, or even the state sector to build on. Navigating these rules should even be taught in elementary school one day.