@dang, I think we need an explanation for why Tank Man-related content on Hacker News has been disappearing all day. I usually trust HN to be a bastion of free speech, and if there isn't some kind of proportionate response here, I don't believe myself or many others here will be able to see it that way going forward.
EDIT: Thank you for your response, dang. Hacker News is a special place, which is why we have responded so strongly to today's events - I apologize if the tone above came off as less-than-civil. I (and it seems, many others) look forward to hearing more about the 'dupe' article others have linked to below. It was only upon seeing the article marked as a dupe after seeing the previous flagged out of existence that it began to feel like more than just a user-initiated action, so I am sure further information on the mod-initiated actions will put these fears to rest.
This post is on the front page right now (edit: and now also https://news.ycombinator.com/item?id=27396783) - that's the opposite of "disappearing". I'd have to see links to the other ones.
Here's one tip for you guys, from years-long, world-weary experience: if you're coming up with sensational explanations in breathless excitement, it's almost certainly untrue.
Edit: ok, here's what happened. Users flagged https://news.ycombinator.com/item?id=27394925. When you see [flagged] on a submission, you should assume users flagged it because with rare exceptions, that's always why.
A moderator saw that, but didn't look very closely and thought "yeah that's probably garden-variety controversy/drama" and left the flags on. No moderator saw any of the other posts until I woke up, turned on HN, and—surprise!—saw the latest $outrage.
In other words, nothing was co-ordinated and the dots weren't connected. This was just the usual stochastic churn that generates HN. Most days it generates the HN you're used to and some days (quite a few days actually) it generates the next outlier, but that's how stochastics work, yes? If you're a boat on a choppy sea, sometimes some waves slosh into the boat. If you're a wiggly graph, sometimes the graph goes above a line.
If I put myself in suspicious shoes, I can come up with objections to the above, but I can also answer them pretty simply: this entire thing was a combo of two data points, one borderline human error [1] and one software false positive. We don't know how to make software that doesn't do false positives and we don't know how to make humans that don't do errors. And we don't know how to make those things not happen at the same time sometimes. This is what imperfect systems do, so it's not clear to me what needs changing. If you think something needs changing, I'm happy to hear it, but please make it obvious how you're not asking for a perfect system, because I'm afraid that's not an option.
[1] I will stick up for my teammate and say that this point is arguable; I might well have made the same call and it's far from obvious that it was the wrong call at the time. But we don't need that for this particular answer, so I'll let that bit go.
I don't doubt posts are flagged by users as opposed to moderation.
But at the same time, it also seems like flagging can be too easily abused, and can lead to accusations of censorship and distrust. (Though I've certainly seen it work well in cases, especially for false/defamatory articles.)
But it really does seem like we're at the point where longstanding users need to also be able to vouch for flagged stories, or something like that. And even if that doesn't automatically restore the story, it could at least show a label like "pending moderator decision" or something.
At a time where trust in the media and authority is low... a little bit of greater transparency might go a long way. :)
> longstanding users need to also be able to vouch for flagged stories, or something like that.
This is the critical point. Today, users can "vouch" for [dead] stories, but can't vouch for [flagged] stories until they get flagged so much that they convert to [dead].
The other "Tank Man" story was flagged, but never quite dead, so users couldn't vouch for it; from users' perspective, it appeared to simply disappear.
Allowing users to vouch for the other story would have helped considerably.
Should we also be able to vouch for stories that haven't been marked flagged yet? I will search for HN's ranker formula, but right now this story shows in 8th place for me. Its upvote/time ratio would put it on top, so I am guessing either most of the upvotes are old and the ranker applies a decay, or it is also being flagged. It could also be an auto flame-war detector pulling it down; HN has one based on the upvote/comment ratio of a post, though I am not sure if there is one that considers voting within the comments themselves.
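For reference, the ranking formula from the old published Arc source is roughly points divided by a power of age. A minimal sketch (the 1.8 gravity exponent matches that published code; the live ranker adds unpublished penalties for flags, flamewars, etc., so take this as illustrative only):

```python
def rank_score(points: int, age_hours: float, gravity: float = 1.8) -> float:
    """Classic HN-style ranking: score decays polynomially with age.

    Based on the formula in the published Arc source; the live site
    applies additional, unpublished penalties on top of this.
    """
    return (points - 1) / (age_hours + 2) ** gravity

# A day-old story with many upvotes can rank below a fresh one:
old = rank_score(points=300, age_hours=24)  # ~0.85
new = rank_score(points=50, age_hours=2)    # ~4.04
```

This decay is why a heavily upvoted but older story can sit in 8th place even without any flags.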
I have a flag on my platform that flags users who reported things I agreed with the removal of, as "trusted" and their reports are given more weight in the algorithm next time. Sort of the inverse of what you propose where everyone is somewhat untrusted by default. (But not entirely untrusted.)
There is apparently an entire branch of research called Reviewer Reliability. Someone on HN pointed that out to me a couple of years back, when we were discussing the problem with fake/bought/manipulated/blackhat product reviews.
In a hilarious twist of fate, searching for that term brings up papers on either medical research or peer-review reliability problems in general[0]. You try to find data on a potentially abstract, complex societal issue, and come up with what can only be described as attention grabbing HN-catnip.
It's possible to email the mods vouching for stories (or comments). I do this fairly frequently. Not (yet) in this case, though there was a politically-tinged story earlier this week that I alerted mods on.
What's particularly insidious is that killed stories both don't show up in Algolia search results (this is somewhat understandable, but in the case of political flagging, problematic), and even where favourited (something I also do with some regularity), may not be visible to non-logged-in users and IIRC actually disappear from the index in time.
My proposed approach is a little more automated. The mods don't have to find and remove flagging rights individually, they just unflag a post and instantly all the users who had flagged it lose some credibility for future flags.
EDIT: The "flagging trustworthiness" could even help mods to find posts which might need to be unflagged quicker based on the average trustworthiness of the flags.
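A minimal sketch of that feedback loop (class name, weights, and threshold all invented for illustration): each flagger's trust score rises when mods uphold a flag and falls when mods reverse it, and future flags are weighted by trust:

```python
from collections import defaultdict

class FlagTrust:
    """Toy model of the proposed feedback loop: flaggers gain or lose
    credibility based on whether moderators uphold or reverse flags."""

    def __init__(self, threshold: float = 3.0):
        self.trust = defaultdict(lambda: 1.0)  # every user starts neutral
        self.threshold = threshold             # weighted flags needed to hide a post

    def flag_weight(self, post_flaggers):
        return sum(self.trust[u] for u in post_flaggers)

    def is_hidden(self, post_flaggers):
        return self.flag_weight(post_flaggers) >= self.threshold

    def moderate(self, post_flaggers, upheld: bool):
        # Upheld flags raise credibility; reversed flags lower it,
        # clamped so nobody becomes all-powerful or permanently muted.
        factor = 1.25 if upheld else 0.5
        for u in post_flaggers:
            self.trust[u] = max(0.1, min(5.0, self.trust[u] * factor))

ft = FlagTrust()
ft.moderate(["a", "b"], upheld=False)  # mods unflag: a and b lose credibility
print(ft.is_hidden(["a", "b", "c"]))   # 0.5 + 0.5 + 1.0 = 2.0 < 3.0 -> False
```

The same per-flag trust average could drive the "posts which might need to be unflagged" queue: posts whose flags come mostly from low-trust accounts get surfaced to mods first.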
But unfortunately that could lead to a chilling effect on flagging deserving content. I like having dang and other moderators involved as the final say to reverse a decision if it's necessary. I am pretty conservative with my own flagging and vouching (I've vouched precisely once and it was a comment I strongly disagreed with but was clearly made in good faith and added to the discussion) and I believe that other folks are as well. Since the system mostly works and since social media upvoting algorithms are so insanely complicated to balance well I think it's fair to avoid rocking the boat.
I would like an option in the settings to disable/enable my ability to flag content. On mobile, I have accidentally flagged things and recall it being challenging to hit the button to unflag them.
But flagging works most of the time. There are just too many submissions that result in utterly predictable and boring comments. I myself succumbed to the temptation many times, and left tons of comments when it came to, say, Trump's latest antics. Frankly HN will be (marginally) better if someone erased every comment of mine in any thread that mentions Trump.
By the time a story needs to be vouched for, it has lost the momentum that was previously getting it to the front page. So you need a way to prevent vouching from being necessary in the first place.
If a story transitions to a high flag/vouch level (IDK, >10 of each), it should have its clock reset to be considered a new submission, regardless of dupe status. Maybe label with [controversial] or [hot].
This problem happens again and again with hot topics, and at the moment the default behavior is to let them disappear, which is a bit lame.
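One possible shape for that rule (the threshold and one-shot reset are invented for illustration): a story that crosses both a flag and a vouch threshold gets its submission time reset once and a [controversial] label:

```python
from dataclasses import dataclass, field
import time

@dataclass
class Story:
    submitted_at: float
    flags: int = 0
    vouches: int = 0
    clock_reset: bool = False
    labels: list = field(default_factory=list)

def maybe_reset_clock(story: Story, threshold: int = 10) -> None:
    """If a story is both heavily flagged and heavily vouched, treat it
    as a fresh submission once and mark it [controversial]."""
    if (not story.clock_reset
            and story.flags > threshold
            and story.vouches > threshold):
        story.submitted_at = time.time()  # restart the ranking decay
        story.clock_reset = True          # only reset once per story
        story.labels.append("controversial")

s = Story(submitted_at=time.time() - 6 * 3600, flags=12, vouches=11)
maybe_reset_clock(s)
print(s.labels)  # ['controversial']
```

The one-shot flag matters: without it, a contested story would keep resetting its clock and camp on the front page indefinitely.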
On the 'vouch' option, this seems helpful. However, at some point one just accidentally creates a tier of "Super Upvotes" and "Super Downvotes". Trying to establish a reasonable heuristic/estimate/algorithm/whatever to identify who is or not a 'trustworthy user' worthy of the "vouch" permission might produce an interesting paper or blog post, but I doubt it'd help to increase transparency in real life, especially if there is a need for judgement-calls.
I actually tend to browse Hacker News through Feedly, and I am normally surprised at the number of links I wanted to come check out that are flagged or dead.
That said, I normally chalk it up to the site's topics and interests diverging a little from my own, which is perfectly fine, as I typically enjoy the moderated approach over the constant outrage and flame fests I see elsewhere.
So you have a long-term, otherwise great user who contributes positively but has a strong opinion on a political issue that you don't want all over your forum, for or against. People tend to flag it for you, but it's not controversial enough to fall in a hail of flags. Keeping it flagged fortunately takes relatively little effort, saving mod work.
You introduce vouch for flagged stories. Now your user vouches for absolutely EVERYTHING on his side of an issue and his opposite number vouches for everything on the other side.
Content that isn't low quality and resonates with a good number of people is likely to attract votes even if it's off topic and ultimately not desired on that forum, and direct, constant mod effort is now required to keep it off because super upvotes now counter super downvotes. Welcome to your new political forum.
The mods respond to emails (contact link in the footer) about observed issues, so that’s always an option if y’all think something needs a human revisit and there’s no site button for it.
Yes. From observation, there are a number of topics which inevitably draw the attention of a personal army that very much cares about that topic in a specific way, and to a much greater degree than a typical user.
Vouching works for [dead] comments and submissions, not those that are only [flagged] (but not [dead]). I believe this is what 'crazygringo is referring to.
Karma threshold isn’t the only criterion. I’m at ~47k karma and do not have access to vouch functionality. I might be in a sort of permanent semi penalty box (which I take no issue with, not my sandbox), so take this with a grain of salt.
I'd love to hear how you think you know that. Actually, better than that, please show us all some links to where this happened. If what you say is true, it should be easy to find examples.
As it is too late to edit my comment, I wanted to write this here, rather than sending a private email, dang.
I apologize if the questions from myself and other users on this site today have set you on edge - and I am sure that today, in public and in private, you have seen many ugly things that the majority of us do not, and you reasonably draw a trend-line.
I believe that you should extend the same charity of trend-spotting in the other direction.
We live in tumultuous times, and the speed at which the ratchet is moving seems to be ever-increasing. There are significant concerns, as I know you know, about censorship abroad, and also at home in various western countries. I believe the overwhelming outpouring you have seen today has been in response to one undeniable realization: that even a genuine accident on the part of some engineer somewhere could apply CCP (or any country's ruling party) censorship globally - a line in the sand that many did not realize had already been crossed.
Whether accidental or intentional, this is a watershed moment in the debate over censorship and freedom. It seems likely there are many more such errors in configuration actively deployed right now. That we have no way of knowing what, or how many such incidents there are is an existential threat to non-authoritarian systems of governance across the globe.
To see something that seemed unthinkable even a few months ago - that Tank Man could be censored in western countries on the anniversary in remembrance of the struggles he literally stood for - crossed a threshold for me in terms of what I believed could be possible more broadly. To see the extremely reasonable discussion around it disappear from Hacker News, and stay dead for hours (I note that the inappropriately-flagged article and the accidentally-marked-as-dupe article both still maintained those statuses at the time of this writing [EDIT: The flagged article's status was changed a few minutes after. Thank you, dang. Doing so does not mean you are re-writing history, and we appreciate it]) made it feel like it had encroached even closer to home than I had suspected.
It made it feel like perhaps I'd been even more naive than I had ever imagined. I'm sure you must feel the same way, after some of the more hateful things I'm sure you heard today.
All of this is to say that I treasure the community that you have played the single largest role in shaping, and your explanations have completely satisfied me.
I apologize for the way your day turned out, and any negative ways in which I have contributed towards that.
You explained to me (and HN) recently that posts which are critical of HN itself are moderated less, not more, than others.
The same standard should apply to posts in which a greatly disadvantaged group is standing up to a vastly superior power, all else being equal (credibility, tactics, nature of grievance, etc.).
As with content concerning the events of June 4, 1989, in Tiananmen Square, China.
Dupes should be merged. Valid freestanding posts should be unflagged.
Ok, I've removed it. Now people will accuse us of rewriting history, but that's ok - one eventually makes peace with the fact that there's no way to win.
dang says elsethread, paraphrased: 'we evaluated that flagged post and figured it was probably legitimately flagged, but after users contacted us^ to say it should be unflagged, we listened to them and unflagged it'.
Good intentions often lead to bad outcomes. The mods have the best possible (and still imperfect) judgement of all of us, and they misread the circumstances, same as those users who flagged it. So they'd have to ban themselves right along with everyone else, which would be the end of HN.
That the actions of other users ended up being reversed is no excuse to demand that they be banned, especially on topics where agreement will never be reached^^ by the community — such as Apple, China, or Firefox. Please be more cautious about demanding bans of those whose actions result in disagreeable outcomes, even if you doubt their intentions.
^ Using a post or comment is a terribly inefficient way to go about doing this, but it certainly does tend to produce maximum-drama results. I don't think anyone should be banned for taking that approach — maybe they didn't realize the Contact link exists or reaches the mods! — but I'm exhausted of otherwise good-standing and high-karma users using posts and comments to bring things to their attention. I'm also glad to see the thread and discussion here, which contradicts my own viewpoint. I have no easy resolution for that cognitive dissonance.
^^ Please don't call for me to be banned if you disagree with my reply. However, if you feel I've violated the guidelines in some way, I do of course encourage you to let the mods know using the footer Contact link.
EDIT: HN's contact email is working fine; the evidence presented was known to be invalid before it was posted and used in bad faith to bait the conversation.
It worked fine when I emailed them yesterday using a non-Gmail email client and server. You’re suggesting that all email for YCombinator is down (or, worse, some/all of Google Apps) so that seems worth proving out. I checked and there aren’t any posts today about Gmail or Gapps being down. Diagnostic superpowers, activate!
Are you getting a bounce/timeout from a real email client through a real SMTP server? I know some mail servers I used to operate would reject your telnet test as shown, so I’m not really surprised it failed at a Google SMTP endpoint.
Are you attempting to email them using Gmail, using a third-party commercial service, via a cloud/datacenter-hosted personal SMTP server, or a home internet connection SMTP server?
If the SMTP server is under your control, is TLS correctly configured on it to modern standards? Do your server’s A/PTR records match the hostname it’s configured to use?
> You're suggesting that all email for YCombinator is down
No, I'm suggesting that their mail server is not reachable over unencrypted SMTP from random probably-residential IP addresses. This is fairly typical IME, and is why
> Using a post or comment is a terribly inefficient way to go about doing this [...] I'm exhausted of otherwise good-standing and high-karma users using posts and comments to bring things to their attention.
> maybe they didn't realize the Contact link exists or reaches the mods
is a non sequitur. The point of comments as a reporting medium is that they can be made via the same channel that was used to read the (offending or inaccurately-noted-as-offending) posts in the first place.
I emailed them a few minutes ago and they replied to confirm that they’re able to receive email.
While I respect that you may be demanding a certain standard of their mail server’s telnet responses that Google is refusing to meet, I imagine either that you are able to send them email and simply do not wish to, or that you have consciously chosen not to be able to email anyone hosting email with Google in deference to your objections at how Google operates. In any case, whatever the reason, I wish you the best of luck with your choices.
I wish it was more than mild annoyance being expressed. There has been a huge uptick in /r/conspiracy-style posts and accusations over the last few months (none of them valid, except one, but that was purely by accident).
No, we'd only mark a story a [dupe] if it got significant attention in a different thread. I haven't heard back yet, but the most likely explanation is that a moderator (correctly) thought that one Tank Man story on the HN front page was enough.
Edit: oh - I think that one was actually marked a [dupe] by software. I'd need to double check this, but if so, it's because it interpreted the link to the other thread as a signal of dupiness.
Edit 2: yes, that's what happened. When a submission is heavily flagged and there is a single comment pointing to a different HN thread, the software interprets that as a strong signal of dupiness and puts dupe on the submission. It actually works super well most of the time. In this case it backfired because the comment was arguing the opposite.
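Based on that description, the heuristic is something like this sketch (function names and the flag threshold are invented; the real signal combination isn't public): heavy flagging plus a single comment linking to another HN thread gets the submission auto-marked [dupe]:

```python
import re

HN_ITEM_LINK = re.compile(r"news\.ycombinator\.com/item\?id=\d+")

def looks_like_dupe(flag_count: int, comments: list[str],
                    flag_threshold: int = 5) -> bool:
    """Sketch of the auto-dupe heuristic dang described: heavy flagging
    plus exactly one comment, and that comment links to another HN thread.

    The failure mode in this case: the lone comment was *disputing* dupe
    status, but the heuristic only sees the link, not the intent.
    """
    if flag_count < flag_threshold:
        return False
    return len(comments) == 1 and bool(HN_ITEM_LINK.search(comments[0]))

# Backfires when the single comment links to the other thread to argue
# against it being a dupe:
print(looks_like_dupe(
    8, ["Not a dupe of https://news.ycombinator.com/item?id=27394925"]))  # True
```

That intent-blindness is exactly why it "works super well most of the time" and still produced a false positive here.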
> That's not quite accurate, but even if it were, there are easy ways to improve that bit of software if people start abusing it.
How easy would it be for bad actors brigading with freshly created accounts, or not-so-freshly created accounts with a history of brigading, to abuse this feature to censor stories?
I think I already answered this but I'll put it a different way: (1) not as easy as it sounds; (2) it would be easy to tell that they were doing it; (3) if they do it, we'll improve the software so they can't; (4) if that doesn't work, it's a minor feature and we can easily drop it.
We were told the same thing about the lab leak theory from: Social media companies, Mainstream media, and the government.
The reason for distrust is valid. We live in an age of rapidly increasing censorship and the CCP's growing reach of control in American discourse. Skepticism is becoming the default for very real reasons.
I can tell you personally with high confidence that neither the Communist Party of China nor any other Communist Party has influence on how we operate Hacker News. I can't say anything about any other site or company or media or government, because I'm not involved with any of that. But unless the communists are zapping me with behavior-control rays or Angela Lansbury had me brainwashed decades ago, zero such influence is happening here.
You don't have to believe me, of course, but if you decide not to, consider these two simple observations.
First, lying would be stupid, because the good faith of the community is literally the only thing that makes this site valuable. So, sheer self-interest plus not-being-an-idiot should be enough to tip your priors. I may be an idiot about most things, but I hope I'm not incompetent at the most important part of my job. The value of a place like HN can easily disappear in one false step. Therefore the only policy which has ever made any sense is (1) tell the truth; (2) try never to do anything that isn't defensible to the community; and (3) acknowledge when we fuck up and fix it.
Second, if you're going to draw dramatic conclusions about sinister operations, it's good for mental health to have at least one really solid piece of information you can check them against. Otherwise you end up in the wilderness of mirrors. What you see on internet forums—or rather, what you think you see on internet forums, which then somehow becomes what you see because that's how the brain does it—is simply not solid information. Remember what von Neumann said about fitting an elephant? (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...) He asked for a mere five degrees of freedom. Nebulous internet spaces give you hundreds at least. That's way beyond enough to justify anything—even dipping in a ladle and getting one ladle's worth is enough to justify anything.
(Edit: people have been asking what Angela Lansbury has to do with this. If you don't mind spoilers, Angela will explain it for you here: https://www.youtube.com/watch?v=p3ZnaRMhD_A.)
I’ll start with: no I don’t think you or “HN” are in on some conspiracy.
My question is: does HN actively attempt to counteract government actors from influencing the site? I think it’s been proven that China among other countries employs folks to try to influence social media sites. Not necessarily by influencing staff, but by creating user accounts who do things like downvote unfavorable comments or flag stories they don’t like.
This seems like it would be a prime target for that behavior.
Counteracting abuse of this site is the #1 thing we do behind the scenes to try to prevent the value of HN from eroding. That's actually what I spent the first hour of my morning doing, before I realized that there was $BigDrama happening. (Thank you, bat-signaling emailers.) If you ever see me commenting on how "large HN threads are paged for performance reasons, so click the More link at the bottom, and we'll eventually remove these comments once we turn off pagination", well, the reason that's not done yet is because moderation takes 90% of my time, answering emails takes the other 90% of my time, and counteracting abuse takes the other 90% of my time.
The better HN gets, the more people want to suck its juices for their own purposes. Most haven't figured out that the above-board way to do that is simply to make interesting contributions, so they do other things, and there's probably a power law of how sinister those things are. The majority are relatively innocuous, but lame. (Think startups getting their friends to upvote their blog post, or posting booster comments in their thread.)
Users are good at spotting these innocuous/lame forms of abuse, but when it comes to $BigCo manipulation (or alleged manipulation), user perceptions get wildly inaccurate—far below 0.1%—and when it comes to $NationState manipulation (or alleged manipulation), user perceptions get so inaccurate that...trying to measure how inaccurate they are is not possible with classical physics. Almost everything that people think they're seeing about this is merely imagination and projection, determined by the strong feelings that dominate high politics.
How do I know that? Because when we dig into the data of the actual cases, what we find is that it's basically all garden-variety internet user behavior.
It's like this: imagine you were digging in your garden for underground surveillance devices. Why? Well, a lot of people are worried about them. So you dig and what do you find? Dirt, roots, and worms. The next time you dig, you find more dirt and more roots and more worms. And so for the next thousand places you dig. Now suppose someone comes along and insists that you dig in this-other-place-over-here because they've convinced themselves—I mean absolutely convinced themselves, to the point that they send distraught emails saying "my continued use of HN depends on how you answer this email"—that here is where the underground device surely must be. You've learned how important it is to be willing to dig; even just somebody-being-worried is a valid reason to dig. So you pick up your shovel and dig in that spot, and you find dirt, roots, and worms.
Still with me? Ok. Now: what are the odds that this thing that looks like a root or a worm is actually a surveillance device? Here my analogy breaks down a bit because we can't actually cut them open to see what's inside—we don't have that data. We do, however, have lots of history about what the "worms" have been doing over the years. And when you look at that, what do you find that they've been up to? They've been commenting about (say) the latest Julia release or parser combinators in Elixir, and they've been on HN for years and some old comment talks about, say, some diner in Wisconsin that used to make the best burgers. And in 2020 they maybe got mad on one side or the other of a flamewar about BLM. (Don't be mad that I'm using worms to represent HN users. It's just a silly analogy, and I like worms.)
Or, maybe the history shows that the person gets involved in arguments about China a lot. Aha! Now we have our Chinese spy! How much are they paying you? Is it still 50 cents? I guess the CCP says inflation doesn't exist in China—is that it, shill? If @dang doesn't ban you, that proves he's a CCP agent too!
But then you look and you see that they've been in other threads too, and a previous comment talks about being a grad student in ML, or about having married someone of Chinese background—obvious human stuff which fully explains why they're commenting the way they are and why they get triggered by what they get triggered by.
This kind of thing—dirt, roots, and worms—is what essentially all of the data reduces to. And here's the thing: you, or anyone, can check most of this yourself, simply by following the public history of the HN accounts you encounter in the threads. The people jumping to sinister conclusions and angrily accusing others don't tend to do that, because that state of mind doesn't want to look for countervailing information. But if you actually look, what you're going to find in most cases is enough countervailing information to make the accusations appear absurd...and then you'd feel pretty sheepish about making them.
I'm not saying the public record is the entire record; of course it isn't. We can look at voting histories, flagging histories, site access patterns, and plenty of other things that aren't public. What I'm saying is that, with rare exceptions [1], what we find after countless hours of extensive investigation of the private data is...dirt, roots, and worms. It looks exactly like the public data.
And here's the most important point: the accusations about spying, brigading, shilling, astroturfing, troll farms, and so on, are all exactly the same between the cases where the public data refutes them and the cases where the public data is inconclusive. I realize this is a subtle point, but if you stop and think about it, it's arguably the strongest evidence of all. It proves that whatever mechanism is generating these accusations doesn't vary with the actual data. Moreover, you don't need access to any private data to see this.
There are also trolls and single-purpose accounts that only comment in order to push some agenda. That's against the HN guidelines, of course, and such accounts are easy enough to ban. But even in such cases, it doesn't follow that the account is disingenuous, some sort of foreign agent, etc. It's far more likely that they're simply passionate on that topic. That's how people are.
[1] so rare that it's misleading to even mention them, and which also don't look anything like what people imagine
---
Still, power laws have long tails and one wonders what may lie at the far end, beyond our ability to detect it. What if despite all of the above, there is still sinister manipulation happening, only it's clever enough to leave no traces in the data that we know of? You can't prove that's not happening, right? And if anyone is doing that it would probably be state actors, right?
You might think there's nothing much to be said about such cases because what can you say about something you by definition don't know and can't observe? It seems to get epistemological pretty quickly. Actually, though, there's a lot we can say, because the premise in the question is so strong that it implies a lot. The premise is that there's a sort of Cartesian evil genius among us, sowing sinister seeds for evil ends. I call this the Sufficiently Smart Manipulator (SSM): https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so....
There are two interesting things about the SSM scenario. The first is that since, by definition, the SSM is immune to anti-abuse measures, you can't postulate any technical measures for dealing with it. It's beyond the end-of-the-road of technical cleverness.
The second interesting thing is that, if you go in for this way of thinking, then either there already exists an SSM or there eventually will be one. And there's not much difference between those two cases. Either way, we should be thinking about what to do.
What should we do in the presence of an SSM? I can think of two options: either (1) give up, roll over, and accept being manipulated; or (2) develop a robust culture of countering bad arguments with better ones and false claims with true information. Of those options, (2) is better.
If you have such a culture, then the SSM is mitigated because the immune system will dispose of the bad parts of what they're saying. If there are any true bits in what they're saying, well, we shouldn't be rejecting those, just because of who said them. We should be big enough to accommodate everything that's true, regardless of where it comes from—just as we should reject everything that's false, regardless of where it comes from. We might prefer to reject it a little more rudely if we knew that it was coming from an SSM, but that's not a must-have.
The nice thing is that such a culture is exactly what we want on HN anyway, whether an SSM exists or it doesn't. The way to deal with the SSM is to do exactly what we ought to be working at as a community already: rejecting what's false and discovering what's true. Anti-abuse measures won't work forever, but we don't need them to—we only need them to last long enough to develop the right habits as a community. If we can reach a sort of (dare I say it) herd immunity from the viruses of manipulation, we'll be fine. The answer to the Sufficiently Smart Manipulator is the Sufficiently Healthy Community. That's what the site guidelines and moderation here are trying to nurture.
Edit: I should add that I'm not 100% confident that this can work. But it's clear that it's the best we can do in that scenario, and the good part is that it's what we ought to be doing anyway.
Man, this comment gives me PTSD from the early reddit days. If you read nothing else in this comment: You're doing a great job solving a hard problem, keep it up!
> well, the reason that's not done yet is because moderation takes 90% of my time, answering emails takes the other 90% of my time, and counteracting abuse takes the other 90% of my time.
So much this. There just isn't enough time with a small staff.
> Most haven't figured out that the above-board way to do that is simply to make interesting contributions
> So much this too. This is what we always told people on reddit -- brands would ask us "how do I get more popular on reddit" and we'd tell them, "make interesting content".
> Almost everything that people think they're seeing about this is pure imagination and projection, entirely determined by the strong feelings that dominate high politics.
Same with all social media. People assume governments have heavy handed control of all content on social media, when in most cases the government couldn't care less. They focus on using propaganda to control individuals and then let those people make a mess of social media.
Your whole post resonates with my experience on the inside of moderating a big social media site and meeting with moderators of other big sites.
I'll be honest, at first I wasn't too keen on your moderation style, as I found it too heavy handed. But I take that back. HN doesn't cover everything I want to talk about (I go to reddit for the rest), but what it does cover, it covers better than reddit does.
So thank you, and I hope you get some more help with one of those 90% jobs!
> They focus on using propaganda to control individuals and then let those people make a mess of social media.
There was an interesting report on German TV, where they analyzed a paper looking for bot patterns on Twitter. That paper named some offending accounts, so what they did was PM one - and it turned out that it simply belonged to a pensioner with strong political opinions and a lot of free time. Interesting to look behind the cover sometimes (though I do think that TLAs realize this power and don't let that slide, to some extent at least).
> I'll be honest, at first I wasn't too keen on you moderation style, as I found it too heavy handed.
It's interesting how viewpoints diverge - for quite some time when I started reading, I actually did not realize that HN was moderated. If I may ask, where did you encounter so much heavy moderation?
> If I may ask, where did you encounter so much heavy moderation?
A couple places. The one that bothered me most was that titles would get changed without asking or notifying the poster. Sometimes they would get changed to something I didn't think made sense, and then I looked like I had done that, since there was no indication that it was changed. I guess I'm still not a huge fan when it happens to me, but I see why it happens.
I also didn't like having my comments detached or cooled. If you reply to a top level comment with a good comment that happens to generate a flame war under you, it will get detached from the top into its own thread, and that just felt weird because it made it look like I made a non sequitur top comment and also stifled discussion (which was the goal of course).
Also if you make a comment that gets a ton of votes but is perceived as off-topic, they will put a flag on your comment that makes it fall in the rankings. So based on the points and time it should be up at the top, but instead will be near the bottom, sometimes under the comments with negative scores.
Lastly, I have dead comments turned on, and I would see dead comments that I didn't think deserved to be dead. Eventually I got enough karma that I could vouch, which helped.
Those were my main moderation complaints. I still don't particularly like when it happens to me, but usually when I see it happen to other people I think, "yeah that makes sense".
> Also if you make a comment that gets a ton of votes but is perceived as off-topic, they will put a flag on your comment that makes it fall in the rankings. So based on the points and time it should be up at the top, but instead will be near the bottom, sometimes under the comments with negative scores.
This one is interesting to me, because I have emailed the moderators to do exactly this for highly upvoted comments I feel take the discussion into what I feel are the wrong places. I can understand that for a new commenter such tangents might be novel, but for someone who’s been around here for a while I am curious if you oppose such actions for the nth time that someone drags “here’s my article about new C++ feature” into “honestly C++ just keeps adding too many things, discuss”.
"What I'm saying is that, with rare exceptions [1], what we find after countless hours of extensive investigation of the private data is...dirt, roots, and worms. It looks exactly like the public data."
Ah, but this is just proof that the communist sleeper agents are entrenched even deeper among us than we expected!
Great response. At the same time, absence of evidence is not the same as evidence of absence. It seems improbable that nation states aren’t keenly interested in social media influence. It seems much more likely such efforts are undetectable.
Appreciate the thoughtful response, I see you’ve been spending a good portion of your day dealing with it. It aligns with what I expected but I think the explanation will be a good reference for the future when this inevitably comes up again.
Is there any valuable connection between the users that flagged the original post that might be interesting? Not looking for specifics, since I imagine that's secret, but wondering how much of it really was standard behavior versus something else.
The flagging history of all the users who flagged that post was very consistent. There was no connection to any specific topic (nor between the accounts, that I could see). Rather, they have previously flagged stories about things like cryptocurrency, ransomware, covid lockdowns, $BigCo flamewars, and lots and lots of scandals involving such subjects as Florida, Katie Hill, and the Chicago Police Department. Also, most if not all were avid HNers, people who comment and upvote and in a few cases email us a lot.
The pattern seems clear that these users are flagging the more sensational kinds of submissions that tend to lead to predictable discussions and flamewars. There's room for competing opinions about which of those are/aren't on-topic for HN, given the site guidelines; if you or anyone want to understand how the mods look at it, I recommend the explanations at the links below. But clearly the flagging behavior in this case was in good faith.
It's easy to predict that any submission related to China will turn into a shitshow, because you have the usual suspects like justicezyx resorting to whataboutism, crying racism, flagging, etc.
Want to censor a thread on HN? Flag it with a few different users, or turn the thread into a shitshow so that the "flamewar" tools will be triggered, or moderators will be forced push the thread off the frontpage.
Do you have any evidence that anyone is actually doing this? I'm not talking about threads that can fit that interpretation; you can fit any interpretation to most threads.
> [[A man is talking to a woman]] Man: Spammers are breaking traditional captchas with AI, so I've built a new system. It asks users to rate a slate of comments as "Constructive" or "Not constructive". [[Close up of man]] Man: Then it has them reply with comments of their own, which are later rated by other users. [[Woman standing next to man again]] Woman: But what will you do when spammers train their bots to make automated constructive and helpful comments? [[Close up of man again]] Man: Mission. Fucking. Accomplished. {{Title text: And what about all the people who won't be able to join the community because they're terrible at making helpful and constructive co-- ... oh.}}
Quote from Satya Nadella Q1 2019 Earnings Conference Call
"...In fact, this morning, I was reading a news article in Hacker News, which is a community where we have been working hard to make sure that Azure is growing in popularity and I was pleasantly surprised to see that we have made a lot of progress in some sense that at least basically said that we are neck to neck with Amazon when it comes to even lead developers as represented in that community..."
The charitable interpretation of this comment is that Microsoft uses Hacker News comments as a barometer for developer sentiment about Azure. It’s just Microsoft trying to do the “developers, developers, developers!” thing. They want to make Azure into the kind of thing that people on Hacker News would like. I think this is the most reasonable interpretation, because why on earth would Satya confess to astroturfing on an earnings call?
However, if any executive is getting graded against this metric, Goodhart’s law applies, and there’s a good chance astroturfing would happen. Satya probably wouldn’t know about it.
If a Hollywood CEO says that they are trying to raise the audience Cinemascore ratings of their movies, we’d interpret that to mean that they are trying to make audience-friendly movies, not that they are trying to astroturf Cinemascore. And similarly, if someone at the studio were astroturfing Cinemascore, the CEO wouldn’t talk about it on the earnings call.
I'm not sure about why anyone would confess, but I'm fairly certain MS used to pull this sort of stuff, and not in that sophisticated a fashion - back in the days of Windows Phone 7 and then 8, there were people all over slashdot talking about how amazing the platform was and how the developer experience was just the best... before developer builds were available.
Maybe I was misreading it, but to me at the time it seemed like a flood of unreasonably positive people gushing about something they couldn't really have had any experience with.
No, I only found out about it from the comment belter linked to. FWIW I think (god help us) fshbbdssbbgdd's explanation sounds plausible. I had a similar instinctive response but not as well thought through.
We have banned people in a few cases for serious $BigCo astroturfing but there's always a grey area in the Venn diagram around "PR operation" and "overzealous fan". You can't tell those apart without a smoking gun and those are hard to come by. Fortunately, from a moderation point of view it's a distinction without a difference because the effects on the site are the same.
Also FWIW, my sense (and we do have circumstantial evidence for this) is that even when these things are PR, they're somehow haywire (e.g. a contractor gone rogue), not official strategy, and if high-enough execs found out about it they'd probably shut it down. That's just speculation though; informed speculation, but not highly informed.
First of all, congrats on the good work you and your team do daily.
I do not want to single out a single company, but would like to use this particular example to ask you the following: please keep in mind the level of manpower and persistence some of these corporations can call upon for their strategic objectives.
> We have banned people in a few cases for serious $BigCo astroturfing but there's always a grey area in the Venn diagram around "PR operation" and "overzealous fan". You can't tell those apart without a smoking gun and those are hard to come by.
I can attest to this: at one of my old companies a post related to us ended up getting removed, just because so many of our engineers (entirely independently of the company) voted or commented on it. After that there was a very strict instruction from the company _not_ to engage with any posts about us...
Eventually we probably need to figure out how to make the content robust even under vote manipulation. We're still far from that but I think it's...at least not out of the question. We're still at the very early stage of learning what's possible through community and culture (online, I mean).
Social media manipulation is a tricky problem. I'm sure HN has plenty of active measures against various types of abuse and manipulation, and I'm sure they can't tell us about them, because the people doing that read HN too.
Plenty of orgs are surely trying to do that actively for all sorts of reasons. No idea how successful they are, probably tough to tell.
The spookiest thing of all is that most of the effect might be genuine grassroots action. Picture a Chinese nationalist poster here: a genuinely independent tech enthusiast who happens to know enough English to participate in an English forum. Perhaps they are genuinely annoyed by what they see as westerners meddling in their internal politics, of which there is a long history. Perhaps they flag what they see as clickbaity stories likely to lead to a bunch of China-bashing, out of genuine annoyance. They don't need to be paid or leaned on by the CCP at all; they just actually feel that way.
Dammit, now I sound too apologetic about it. Sigh...
From some old comments, I remember that there is a "voting ring" detector. (The details are obscure, because it's part of the secret sauce or something.)
I guess there is also a "flagging brigade" detector. [If not, I upgrade this comment to a feature request.]
Let's say you have 5,000 to 10,000 accounts that semi-regularly post as "normal looking" accounts with other activity: how many of those have to downvote/flag a post to knock it off the front page? Not many, I gather.
Considering the upvote counts that even hot rising posts on the front page have, I would assume the flag threshold to be quite small. I don't know if each flag counts the same or whether it depends on karma/account age, but judging from upvotes I would assume you need fewer than 50 flags to pull even a hot story. So paid influence ops should easily succeed on HN; whether they have even bothered with HN, given its small audience, I don't know.
In most cases it is the politics aspect or the unfair coverage aspect that leads users to flag a story, like say on lab leaks; but this story being flagged so easily was interesting. It is about a tech platform intentionally/mistakenly censoring things we will count as free speech.
I appreciate your many positive contributions to HN, I trust your good intentions, and I definitely don't think you were making the kind of argument that this sounds like. It does still sound like it, though, which is no doubt why you got strong responses below.
If you highlight the first use of the pronoun "we" in my comment, it should be clear that you're responding to a different argument than I was making. (superjan already made this point.)
Agreed, and I of course agree with the argument you actually were making, rather than my misreading of it. Still, it's profoundly disappointing to get responses including the kind of vicious attacks that you see in the thread.
By the way, when you're not in the middle of trying to keep HN users from coming to blows with one another, you might be interested in this comment of mine from yesterday, which gives some context for why I abandoned the site previously, a decision which I recall puzzled you at the time: https://news.ycombinator.com/item?id=27389993
I think you are misreading him. He says that there is no pressure or indirect influence from china on how HN is run. This is a popular internet forum, of course there’s people trying to push their narrative.
My comment is not racist, either overtly, expressly, subtly, or implicitly. Acknowledging diversity of opinion and the existence of factions does not imply discriminating against any of those factions.
You can find an extensive list of recent comments I've made that are complimentary to China and Chinese people in the last three months in https://news.ycombinator.com/item?id=27398213.
I said that many HN users are Chinese and Chinese-American [immigrants]. There is nothing racist about that. The Chinese Communist Party, which is the government of [Mainland] China, is very popular among people from China. There is nothing racist about either of these ideas, nor about drawing the conclusion that if you publicly attack the Chinese Communist Party, you are going to offend a lot of Chinese people, including many HN users.
It's not a logically entailed consequence of the premises—it is theoretically possible that only Chinese people who are opposed to CCP rule, such as many of the students who died in Tiananmen Square, many Taiwanese people, and Falun Dafa members, are HN users—but it is overwhelmingly likely that this is not the case.
Pointing this out is not gross. Falsely accusing people of racism is gross.
The whole sentiment here is gross, but it is particularly fucked up how you keep going out of your way to pull Chinese-American people into it. Why not call me a Papist while you're at it?
If you post a story on HN with a list of historical atrocities attributable to the Roman Catholic Church or to Spanish colonialism, you can definitely expect a significant fraction of Latin American and Spanish people to take exception to it, and I've seen that happen on HN several times in the past. Recognizing and understanding that there are hot-button issues for particular political, national, and ethnic groups is not racism; it's a fundamental part of understanding human diversity, which is necessary in order to achieve peaceful coexistence.
Moreover, it is not necessary to claim that every member of a particular group belongs to a popular faction within that group; it is sufficient to acknowledge a general tendency. For example, there are Latin Americans who are not Catholic, and there are Roman Catholics who deplore the Spanish Inquisition as fervently as any Anglican; nevertheless, if you go around denouncing the Spanish Inquisition as one of the worst things ever to have happened in history in front of a large number of Latin American people, a significant fraction of them are reliably going to object. On HN, they may flag your comment.
Me saying this is not the same as me calling you a "Papist".
What fraction of current Chinese-American immigrants grew up in China in families that were lifted out of poverty by Deng Xiaoping's economic policies? I'm guessing over 10%. How would you expect these people to react to demonization of Deng Xiaoping? Many of them will be offended, either because they regard Deng as a hero or because they see that demonization as being directly motivated by anti-Chinese racism, which in many cases it is: people do sometimes criticize the Chinese government because they hate Chinese people. In other cases, it's a more subtle form of racism, which doesn't directly consider Chinese people bad but considers their feelings unimportant.
And that racism is what I'm standing against, as consistently today as I have for years, as evidenced by my comment history linked above. I don't think a person either has to be racist or have to regard Deng or the CPC as above criticism in order to understand that many Chinese people will be offended by such criticism. Yes, including many Chinese-American people.
Withdraw your baseless attack and apologize.
Addendum: tptacek responded to this with a now-deleted comment saying something to the effect of "I apologize to any Chinese HN users who have to read comments like this."
For what it's worth Kragen, if you ever see this, I understand what you mean and I don't think that you're being racist. We can't even talk about the diversity of humans and what influences their intentions anymore without being called racist.
"we" doesn't include the users. The users don't operate hacker news. They have an effect on it but it's not top-down censorship which is what's being implied here.
I definitely do not support excluding or discriminating against Chinese users, a practice which is a big problem on HN. I'm just saying that, just as a significant subset of USA users are likely to flag posts that encourage people to burn American flags or claim that the USA is a "white-supremacist state", and a significant subset of Muslim users are likely to flag posts that criticize Muhammad, there's a significant subset of Chinese users who are likely to flag posts that criticize the Chinese Communist Party, even if it's implicit criticism by way of calling attention to particular historical events that its opponents commonly use as rallying points.
https://news.ycombinator.com/item?id=27161925 criticizing the US for "aggressively escalating the TSMC conflict with targeted attacks on the PRC's nuclear and supercomputing capabilities".
Mind you, I wasn't trying to accuse you of being racist. I purposefully phrased my comment the way I did not to be obtuse but to allow for the fact that coming across as racist may not have been your intention. I still think the way you phrased in the comment I replied to had the issue I mentioned.
> neither the Communist Party of China nor any other Communist Party has influence on how we operate Hacker News
> Second, if you're going to draw dramatic conclusions about sinister operations
This isn't about drawing dramatic conclusions. I have no delusion that Hacker News is colluding with the CCP. This is simply a question about a trend of disappearing posts.
My original statement about
> growing reach of control in American discourse
is purposefully broad, because the mechanisms of control are broad themselves. There are plenty of valid concerns around different types of cyber warfare and the growing self-censorship and desire among individuals to avoid challenging topics related to China. Hacker News is a collection of individuals and doesn't need to be part of a grand conspiracy to be susceptible to the pressures that have exerted control over other media organizations.
Explaining the process of hacker news moderation and how you mitigate real threats to free speech would be a better approach than claiming your critics are sensationalizing.
To be clear I fall on the side of HN generally handling things well, my post was squarely at your dismissive response to valid criticism.
As someone who had the pleasure of working with dang for several years, HN and its moderation team are some of the calmest and most considerate people I've ever met.
The CCP has zero say in how HN operates.
For the most part the users of HN are in control of what's displayed; it's probably one of the most censorship-free sites on the planet. You should see how often my karma fluctuates because I express an unpopular opinion. It doesn't bother me, because I know it's about people and not an algorithm.
Exactly. Discourse has become censored almost completely on a large number of non "fact checker approved" views over the last year. Reddit bans permanently over it, and now Twitter does too. Facebook gets a lot of the blame, but sadly it seems like the only place my conservative friends have a voice.
This trend of "stop Asian hate" is also not organic. It's designed to use the "you're racist" trump card to shut down any talk of the lab leak or China's response.
Anyone can do this stuff. Cut a good promo that feeds into the internalized mythology of the target audience and you can get them to believe in it without a hint of skepticism.
As far as organic or not, it doesn’t really matter. People need to have an immune system for nonsense, especially when it feels right. Most people can spot nonsense that goes against their own worldview. The trick is to be able to spot nonsense that is aligned with your worldview or that you could directly benefit from if true.
That's super nutty. How would the alleged uptick in racist violence against Asian-Americans provide cover for misdeeds of the Chinese Communist Party? Over the past couple weeks many mainstream news sources are reporting the possibility of the lab as a source. In fact the lack of transparency of Chinese authorities has been a pretty universal complaint (minus WHO dithering) since jump.
I do think it's entirely possible the Asian hate fears are the sort of alarmist panic that the American media loves to trade in. I'd like to see statistics regarding violent crime reported by Asian and Pacific Islanders, rather than mostly anecdotal reporting or the dozen or so high profile attacks. I don't see this sort of breathless but shallow reporting as a conspiracy but just run of the mill bandwagoning.
The reality is just the opposite. Even a cursory examination of the news will show a torrent of anti-China coverage (and attendant changes in public sentiment toward China) which just happens to coincide with a US campaign to "confront China" in "great-power conflict." The idea that China is controlling this discourse is risible.
A proposal for a vouching system, since others suggested the idea but not an implementation: Any user can "vouch" for an article. This gets stored in a database, but doesn't do much on its own. There's a secret karma cutoff, above which the vouch gets "counted", and if there are more vouches than flags, a moderator is alerted to give the article extra attention.
Any time vouching triggers extra attention, the decision is recorded in the database. If someone routinely vouches and gets overruled (i.e. vouching for bad content), then their vouching no longer counts in the future.
At some point after the system is introduced, start giving extra weight to people whose vouching decisions line up with moderators.
Worst case, this is just flagging stuff for extra moderation attention so there's not a lot to abuse. If it's requiring too much extra attention, adjust the required vouch:flagged ratio or raise the threshold needed to vouch "For Real".
(I'm not saying I see anything wrong with the current system - I tend to appreciate how well this particular Walled Garden is tended to. But the vouch idea seemed cool to me, and I felt like I could contribute a useful implementation)
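To pin down the proposal, here's a minimal sketch of the vouching logic described above. Everything in it is invented for illustration (the karma cutoff, the accuracy threshold, and all names are assumptions, not anything HN actually does):

```python
# Hypothetical sketch of the vouching proposal above. All names and
# thresholds are assumptions for illustration, not HN's real internals.

KARMA_CUTOFF = 500   # the "secret karma cutoff" (assumed value)
MIN_ACCURACY = 0.5   # vouchers overruled more often than not stop counting


class Voucher:
    def __init__(self, karma):
        self.karma = karma
        self.upheld = 0      # past vouches a moderator agreed with
        self.overruled = 0   # past vouches a moderator rejected

    def counts(self):
        # A vouch "counts" if the user is above the karma cutoff and
        # hasn't been routinely overruled by moderators in the past.
        total = self.upheld + self.overruled
        accuracy = self.upheld / total if total else 1.0
        return self.karma >= KARMA_CUTOFF and accuracy >= MIN_ACCURACY


def needs_mod_attention(vouchers, flag_count):
    # Alert a moderator only when counted vouches outnumber flags.
    counted = sum(1 for v in vouchers if v.counts())
    return counted > flag_count
```

Worst case this only queues items for human review, so the abuse surface stays small, matching the "not a lot to abuse" point above.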
Flagging amounts to anonymous moderation, which is why you see a lot of conspiratorial responses to it. Users have no way to know who is removing a post from acceptable discussion via flagging.
One possible way to address this is to make visible a list of users who flagged a post. The arguments against this are obvious. But without such information, in the end you have to accept one result of anonymous moderation is the generation of conspiracy theories.
I always really appreciate and learn from the measured perspectives you share here. It really is a bastion in a world that has recently been pretty crazy. Thanks a lot for your hard work, dang.
Edit: oof, that link does look awful doesn't it. Most "how to get on HN's front page" content is terrible, it doesn't work and induces people to post dross and pull tricks that just degrade the site. I've got a set of notes about how to write for HN that I want to publish someday. If anyone wants a copy they can email hn@ycombinator.com.
I can't comment on this particular slick content marketing course because apparently you have to buy it to find out what it says, but previous ones I've seen have been entirely unreliable, and the look and feel of the ad certainly seems antithetical to the spirit here.
No doubt that's true but if you're saying something specific about HN, I'd like to know what it is. Also, the word 'censorship' is too vague now. People just use it about whatever they disagree with. So if you have a specific question, it would also be good to use more precise language.
You guys need to realize that you have a trump card (can I say that? or too soon?) that users of other platforms don't have: direct access to the people running the platform, who are willing to answer any question about it.
Why not automatically punish users who abuse flagging to censor stories (e.g. users start at 100%; each flag that's removed decreases their flag weight by 50%, so 3 wrong flags means 100% -> 50% -> 25% -> 12.5%; similarly, correct flags increase it by 50%, and stories where mods haven't accepted/rejected the flag have no impact)? This way good actors who flag blog-spam and whatnot will not be impacted, whereas bad actors/nationalists will quickly lose their ability to abuse this moderation/censorship tool (which is clearly being abused to a degree where your manual intervention, if any, doesn't cut it).
Why even allow flagging to influence a submission’s ranking without mod intervention in the first place? Spammy links won’t reach the frontpage anyway (although they sometimes stay on Show HN for a while, so you could make special exceptions for Show/Ask submissions).
And how about this: if a user flags a post, you might consider making the post completely inaccessible to said user (and if they posted a comment in the submission, set their flag weight to 0 for that submission). After all, good actors will have no interest in engaging with the submissions they flagged, whereas bad actors will want to attack the submission from all angles (flag the submission, downvote comments, post their own comments).
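The flag-weight decay proposed above is simple enough to sketch. This is a toy model of the suggestion only; the halving/restoring factors come from the comment, and nothing here reflects how HN actually weights flags:

```python
# Toy model of the proposed flag-weight decay: each flag a moderator
# removes halves the user's weight; each upheld flag restores it by 50%,
# capped at 100%. Purely illustrative, not HN behavior.

def apply_mod_decision(weight, upheld):
    if upheld:
        return min(100.0, weight * 1.5)   # correct flag: +50%, capped
    return weight * 0.5                   # wrong flag: -50%


w = 100.0
for _ in range(3):                        # three overruled flags in a row
    w = apply_mod_decision(w, False)
# w is now 12.5, matching the 100% -> 50% -> 25% -> 12.5% example above
```

One design note: because recovery is multiplicative and capped, a bad actor needs several upheld flags to climb back from 12.5%, while a good actor who makes one mistake bounces back quickly.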
It's a good question, in that it brings out the fundamental issue with these discussions.
> Why not automatically punish users that abuse flagging to censor stories
The problem is with the words "abuse" and "censor". No one can agree on what they mean because it depends on what you think of the underlying story, and when it comes to divisive topics, people have strongly differing views on that.
When the topics aren't so divisive (e.g. Conway's Game of Life on the GPU, or something like that), this is not such a thorny problem. But those aren't the cases that we're talking about in this thread.
So: my last suggestion plus a perfect disposable anonymous pseudonym system.
Infinite troll accounts would ideally be defeated by all the moderation being exposed and distributed, and infinite pseudonym accounts would make toe-the-line efforts too expensive.
This wouldn’t have happened, accidentally or not, if it wasn’t for the continuing and constant bullying of the Chinese government, and the willingness of international actors to kowtow to it.
HN is not a bastion of free speech and it is not meant to be. Controversial topics are routinely flagged because they incite flame wars and downvoting because you disagree is considered acceptable.
However, in this particular case, I don't think it should be flagged unless the comment section becomes unmanageable. It may be a political topic but it is seen from a technical angle, and indeed, a lot of the comments are technical in nature: the effect of different options, different engines, alternative wording, etc...
What I think is interesting is how artificial the censorship looks. If I see no results for such a simple phrase, I know something fishy is going on and that would encourage me to carry on.
From someone with several years of moderate volume online community moderation, the issue with post-hoc censorship is radicalization.
If something disappears, no one has bad feelings about it.
If something two people are arguing over disappears, then two people carry simmering resentment about it (ironically, likely more than if their verbal spat had reached a cathartic conclusion), that eventually manifests in their next comments, and which ultimately leads to an erosion of common decency and civility.
It's a fine line, but it's definitely a line rather than right vs wrong.
Is that really an issue? If the user makes more uncivil comments then you flag them again until they get the picture. There are plenty of other places on the internet that seem to welcome and encourage flame wars -- they can go there to work out their resentment and then come back when they're ready.
FWIW, 'right to speech' != 'right to be heard' and 'freedom of speech' != 'freedom from consequences.'
Saying/posting something which quickly gets flagged into oblivion is freedom of speech working as intended. As are subsequent posts overcoming a flagging brigade...
But that's just the surface reason. The deeper reason is because of HN's design which seems to weight flags much more heavily than up votes. This means that a non-trivial minority can successfully drag a story off the homepage, even if a majority believes it is important and worthy of discussion, and even if the comments are actually constructive.
Unfortunately, we cannot see the flag counts, and I don't know the scoring algorithm well enough to say what portion of a minority is needed to make a story disappear, or whether flags may have a greater-than-linear effect compared to the linear support from upvotes.
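To make that question concrete, here is a purely hypothetical sketch. HN's actual scoring algorithm is not public, so the function, the exponent, and all the numbers below are invented for illustration only; the point is just to show how a superlinear flag penalty would let a small minority sink a well-upvoted story:

```python
# Purely illustrative: HN's real ranking formula is not public.
# This models the speculation above: can a small number of flags
# outweigh many upvotes if flags act superlinearly?

def rank_score(upvotes: int, flags: int, flag_exponent: float = 2.0) -> float:
    """Hypothetical score: linear credit for upvotes, flags raised to an exponent."""
    return upvotes - flags ** flag_exponent

# With a linear penalty (exponent 1.0), 10 flags offset only 10 upvotes.
linear = rank_score(100, 10, flag_exponent=1.0)       # 90.0
# With a quadratic penalty, the same 10 flags offset all 100 upvotes.
superlinear = rank_score(100, 10, flag_exponent=2.0)  # 0.0
```

Under the quadratic assumption, a dozen flaggers can zero out a story that a hundred users upvoted, which is exactly the "minority buries majority" dynamic described above; under the linear assumption, they barely dent it.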
I have heard the rationale of attempting to avoid flame wars and heated discussion of controversial topics, but I'm not sure it's the only way, or worth the price of making things important to most of the community disappear. So far, follow-up submissions still seem to be on the home page, but as I write this comment, https://news.ycombinator.com/item?id=27396783 has already dropped from the top position down to #6, and I wonder if it will disappear as well.
I hope that HN may reconsider what options are available to keep discussion constructive and useful while still allowing important topics to be seen and discussed without being buried by a minority.
HN is not a "bastion of free speech", one of its most distinguishing features is the extensive moderation implemented to suppress spam, trolls, and rancor. A platform like reddit or even Twitter is far more welcoming to diverse discussion than HN, but this is an intentional decision by the site operators to maintain a certain threshold of discussion quality.
> I usually trust HN to be a bastion of free speech
You must have a remarkably selective definition of free speech then. Stuff gets flagged all the time, and users are suppressed via opaque and obfuscated methods.
To be clear, I'm not saying that moderation is a bad thing. But increasingly I notice that people use the term "free speech" to simply mean "speech I agree with".
I believe the principle of charity is enshrined in the HN guidelines. There's no need to assume ill intent here, either on my part or on the part of the many others who believe in the bedrock, foundational principle of free speech. I don't see how outrage that China has managed to censor search results nearly globally, on one of the top three search engines, on the anniversary of the very event being censored, could be interpreted as arguing for only 'speech I agree with'.
"Political" or "controversial" content is always^H^H^H^H sometimes punished by HN (sorry, dang). While I think this is a highly questionable policy which is itself political in nature (reinforcing as it does the status quo), it need not be specific to China.
If you (or anyone) takes a look at that material and still has a question that hasn't been answered there, I'd like to know what it is. Please just make sure that you've actually familiarized yourself with the past explanations, though, because the odds are good that they do answer the question.
Not American here. My response is not directly to this subthread, but to likening this subject to politics.
I think there is a massive difference between political discourse (which is mostly about opinions) and fact-vs-lies discourse, by which I mean discourse where something definitely provable (or, in this case, something that definitely happened) is denied as untruth by one side, with that side pushing for increased conflict in the discussion so as to get the entire thing shut down.
I understand the wish, and sometimes the need, to push the first away, but the second is entirely different, and when you agree to push it away you are by default siding with the lies faction, even if you have a very good, valid, and pure reason for it. The remaining question is: what obligation does a platform that's massively used for discourse have to stay impartial on those things? Legally, none, at least in the US. Morally, to each their own.
I understand the issue is way more complex than that, and that there is a third type of discourse which uses the same rules to push something false (e.g. "Bill Gates's vaccine nanobots get activated by 5G!!!"), and I have no idea what the correct solution is or isn't. I just wanted to clarify why "we're not taking sides" is taken by some as taking a side.
I appreciate your comment, but I think you make a mistake in treating fact-discourse as somehow different from political discourse. I don't mean 'there are no facts' or 'truth is what you make of it' or anything like that. Rather, the problem is one of selection. There are infinitely many facts. Selecting the facts that serve your purpose and omitting the ones that don't is not only a political choice, it's the quintessence of political discourse.
While I certainly agree (if I hear you correctly) that truth matters and there is such a thing as loving the truth for its own sake, I don't agree that public discourse can be divided in the way you posit. Quite the opposite.
This explains the flagging. It does not explain the 'duplicate content' article from CNN on the origins of the Tank Man photo, which I believe requires HN moderator activity.
I did notice in the past that HN articles critical of China behaved in a strange way: all the comments in the comment section were downvoted, sometimes dozens of comments greyed out, every single one of them (probably because it's not possible to downvote an article). Now the articles are flagged; presumably whoever was organizing this realized that flagging an article is effectively downvoting it.
No one is organizing anything. It is all just normal community + software + moderator interaction. People read sinister "organization" into it purely because of their pre-existing conceptions. It's a big Rorschach test and nothing more.
I say that based on looking into literally thousands of such cases, and spending god knows how many hours poring over data. My comments are based exclusively on what I know about HN—I know nothing at all about other sites because I don't have their data. But I know a lot about HN, and I can tell you that the users making breathless insinuations about this stuff have literally no idea what they're talking about. The truth is just painfully boring. (Edit: here's a detailed explanation - https://news.ycombinator.com/item?id=27397230.)
As for why the comments in such threads get downvoted, that is easily explained by the fact that flamewar topics attract shitty comments, and shitty comments get shittier as people get politically and nationalistically riled.
I've downvoted everyone in an entire thread before. Not because I'm a member of a shadowy organization (otherwise I'd be blowing my cover posting this) but because all of the comments were really bad. HN being HN, usually someone will take the time to write a good comment, but especially on emotionally charged topics, the earliest comments tend to be both short and just a reflection of the poster's views before they read the article (if they read it at all), which makes it likely I'll consider them downvote-worthy.
I think there are probably quite a few users who behave similarly. But there's no need for us to form an organization, because each of us can just use the user-moderation tools as intended, without having to coordinate our actions with each other.
Yeah, and it can take a surprisingly small number of flags to take down a post. So it could literally just be a dozen people flagging them, not even in a coordinated manner.
Something similar happened on Jan 6 this year, with a lot of individual users saying "No, this isn't related to tech, so we'll flag it," until a couple of posts got enough traction to actually stay at the top.
BTW, you can 'anti-flag' by vouching a submission. I've done this a few times when HN insta-flags posts about inclusion in tech.