I honestly wonder sometimes how it’s physically possible to do what dang does; it seems like such an enormous and Sisyphean task. It’s sad to see Twitter replies complaining about the moderation, because without it the quality here would be much lower; the moderation is the whole reason those people are attracted to participating here in the first place. Respect and many thanks indeed.
As someone that moderated a … pithy … community site, I am all too familiar with that type of complaint.
I would regularly get called a “nazi,” because I would step into catfights, and break them up, or “nip them in the bud.” Once you’ve been doing it a while, you learn to smell trouble brewing, and it can escalate rapidly.
Many folks chafe at restrictions, and they may actually be correct. They, themselves, may be entirely trustworthy; but others may not.
Generally, we’re all for rules and structure, as long as they apply to others (especially those with whom we disagree), but they shouldn’t apply to us.
I participate in HN quite a bit. I basically avoid all other social media.
A big reason is the decorum here. If it dissolves, then I will no longer participate.
I’ve been “danged” a few times. In some cases, I felt that it was not what I considered “reasonable,” but I accept the slaps anyway.
When in an Italian city, wear a bedsheet, and all that…
> Many folks chafe at restrictions, and they may actually be correct.
We're at least a couple decades beyond the point where a reasonable person could possibly entertain such a thought. Lack of restrictions always turns communities into cesspools. They don't necessarily die, but the people that hang out there are the ones that enjoy spending time in cesspools, and that's usually different from the vision the creator of the site had in mind.
There is, however, a big difference in the legitimacy of restrictions. The reality is that there are many unreasonable people on the Internet. Most unreasonable people, however, think of themselves as reasonable, and think of other similarly unreasonable people as reasonable too. There are also some reasonable people on the Internet. So how do you know which one you are?
When we can't assume one or the other, we have to look at something else that might let us reach a conclusion. If someone says they are oppressed, we can look at what would hinder that oppression. Arguably this would be things like transparency, participation, and equality, since those things make oppression harder.
So if someone says they are being silenced, a reasonable person might look at whether there are multiple known moderators who can check each other, a moderation log showing how moderation is happening, or some voting system that reflects the opinions of the community.
In most cases where people say they are being silenced, these things don't exist. As such it would be unreasonable to dismiss their opinion outright, since there is at least an argument that the restrictions are not legitimate. Otherwise you just create a different cesspool of unreasonable people.
It would basically be impossible for a site like HN to silence anyone. There are many other highly-visible places to break news even if dang wants to cover something up. It's not remotely the same as if the government were to impose restrictions on speech.
I don't think it is impossible. If I did, I wouldn't have made the argument I did. I also didn't say anything about the government. The same situation applies to, for example, a company. A company that wants to find out how things are going is going to insist on having multiple interests represented in important meetings, documentation of decisions taken, and things like employee surveys.
Presumably you disagree, but then you disagree with the vast majority of well regarded implementations in the real world. I am about to buy an apartment and it is the same thing: you need a witness, a contract, and approval from the housing organization, because that is how you reasonably decide what happened.
And yes, people break the news all the time, for example on Twitter on this very post. That is the point: when they do, how do you know who is reasonable? Which is what I have argued.
I only recall getting danged once and it was spot-on. I wasn’t acting like a full on flaming ass, but there was at least a glowing ember there and he phrased his response in a way that made it easy to accept.
One of the moderation techniques I don’t really understand is the “We detached this subthread” move. I don’t know what it accomplishes, besides narrowly targeting a (sometimes highly-upvoted) subcomment to specifically bury it. It’s harsher than even flagging because it nukes the whole subthread full of replies, too. Often no explanation. Just “We detached this subthread from [url]” and that’s that. Seems like something you'd only want to do to the most heinous of posts.
I recall having a +80 upvote or so reply “detached” in this way, but can’t find the link.
In all 3 cases, the original parent was a massive subthread, probably at the top of the page. When child threads develop under top-heavy subthreads and cause discussion to drift in a generic direction, away from the original topic, that's a problem for thread quality. This is one of the most common times when we detach subthreads. All 3 examples look like classic cases to me.
Moderating top subthreads in this and other ways is probably the single biggest thing that moderators do here to promote thread quality. It makes a huge difference. We've learned a lot about this in the last few years.
By the way, none of this is necessarily a problem with the detached comment, nor does it imply anything bad about your (i.e. the child commenter's) intent. Often the child comment has been posted for exactly the right reason—thoughtful conversation. But if it takes a step away from the original topic in a generic direction, or a flameprone direction, upvotes and replies often start pouring in, skewing the thread as a whole in a way that makes it less interesting.
In other words, this is not just the fault of the child commenter (and maybe not at all). It's a co-creation along with all the other repliers and upvoters.
The more we learn, the clearer it gets that we need to moderate not specific posts, but subthreads. Subthreads are the unit of HN comment moderation. (Edit: doesn't that mean—someone will ask—that you could maliciously derail/destroy a subthread by adding bad comments to it? No, because we can detach those. That's case (2) in https://news.ycombinator.com/item?id=27132402.)
Wow, thanks for taking the time to make that massive reply. This kind of Inside Baseball is fascinating and really helps to take the mystery out of what goes on here. I'm surprised that this "tree balancing" really makes much of a difference.
EDIT: I wonder if trolls deliberately wait for and then target top main-threads, in order to maximize the blast radius of their trolling.
Lots of people do this without necessarily being trolls because they feel their comment would be lost, especially if some giant topcomment is dominating the discussion.
People also do the opposite - writing a toplevel comment to discuss (berate, more often than not) multiple other comments - 'I can't believe the nattering nabobs of negativism that have been commenting here, etc'.
In both cases, writing a good, non-repetitive, non-meta toplevel comment is probably better but it's more work and potentially unsatisfying - a lot of the time, a comment like that will in fact be lost in a big discussion.
As I've responded elsewhere in this subthread, a good top-level post can often succeed in gathering enough votes / mod attention to float up.
Probably not on the busiest posts (though those are something of a conversational lost cause already), but on most typical discussions without excessive comments, yes.
Searching for "by:dang <typical moderation phrase>" is a handy way to see what actions are taken, in what circumstances, and why.
I'll admit to tailing dang's comments periodically, or searching for specific terms when people challenge / inquire about my own takes on comments. (Most often in the case of dupe / non-dupe article submissions, occasionally others.)
Initial conditions, both over the lifetime of a post and of a reader's encounter with a post, have a huge impact on discussion direction. Much of HN's moderation seems to operate with this in mind.
Clickbait titles are de-baited. Indirect links are disintermediated. Divergent distracting comments are downweighted.
Much of my own interaction with the HN mod team (ask dang; I email fairly frequently, probably a few times a week) is with regard to link or title issues.
I'll also occasionally try to steer a discussion back on track with a top-level comment that I hope more directly addresses the post than many early takes do. This isn't always successful, but I've been surprised several times when my own late-in-the-discussion comment ends up highly placed in the thread. One key is to really fight back against expressing frustration (such comments are usually born from an excess of it), and just lay out a strong case for an alternative take on what seems significant.
I actually did express that frustration out loud in this thread ... and dang commented on the initial-conditions aspect:
On an older account I once had a factual comment detached while the other person's comment (which pretty much stated the polar opposite) stayed under the main post. Looking at which other comments stayed or got detached, it was quite easy to see an enormous bias. Pointing that out, all I got was a vague “I understand you are passionate”. I think that’s the only time I’ve ever seen him miss his stride, though.
I run an online community, with my own developed software. We use lots of different tools to prevent spam, harassment, etc. We use different APIs to check IP, ASN, and email for previous spam behaviour, use of disposable email domains, etc.
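For illustration only, here is a minimal sketch of one such signup-time check, assuming a locally maintained set of disposable email domains; the domain list, function name, and wiring are hypothetical and not a description of our actual stack:

  # Hypothetical sketch: flag signups whose email domain is on a
  # locally maintained disposable-domain list. Domains below are illustrative.
  DISPOSABLE_DOMAINS = {
      "mailinator.com",
      "10minutemail.com",
      "guerrillamail.com",
  }

  def is_disposable_email(address: str) -> bool:
      """Return True if the address uses a known disposable email domain."""
      domain = address.rsplit("@", 1)[-1].strip().lower()
      return domain in DISPOSABLE_DOMAINS

  # Usage: is_disposable_email("someone@mailinator.com") -> True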
We have a team of moderators and as mentioned before you can "smell" something brewing. The words, to whom they are directed, etc.
A couple of years ago we started using Perspective API [0], a Google service for "using machine learning to reduce toxicity online". We use the toxicity model, trained by Google and The NY Times. Basically we submit the text of posts and replies and receive a score. We do some account history calculations, and if the result is past a certain threshold then moderators receive a notification. We can't read all posts in our platform all the time, but this helps a lot.
The community itself helps too, and the reporting tool is used when things happen that we aren't aware of.
There are a lot of projects out there attempting to do this, because it's impractical for a human team to deal with the entire torrent of information.
> As part of a larger effort to combat disruptive behavior, Riot Games recently updated its Privacy Notice and Terms of Service to allow us to record and evaluate in-game voice communications when a report for that type of behavior is submitted—with the goal of kicking this off in VALORANT first.
I believe it would be quite possible. Text is a bit drier to work with, but it has the advantage of being all text.
I wonder if this is a sign that they need more moderators. Given how much moderation is done by the community, I'm surprised dang needs to work as hard as he seems to.
To my knowledge, there is a combination of specific views on the site that highlight important features of the site (an obvious example might be the list of flagged comments; a less-obvious example might be https://news.ycombinator.com/topcomments which shows the top-most comment of each front page item) as well as email alerts from a variety of sources, some of those being user-driven.
You can just click the 'threads' link at the top left and it'll show you your comments and nested replies.
I had no idea about this until someone mentioned it. Prior to that, I couldn't be bothered very often to check my comments for replies; it was too many steps and made me feel narcissistic for doing all that just to see.