I don't doubt that posts are flagged by users rather than by moderators.
But at the same time, it also seems like flagging can be too easily abused, and can lead to accusations of censorship and distrust. (Though I've certainly seen it work well in cases, especially for false/defamatory articles.)
But it really does seem like we're at the point where longstanding users need to also be able to vouch for flagged stories, or something like that. And even if that doesn't automatically restore the story, it could at least show a label like "pending moderator decision" or something.
At a time when trust in the media and authority is low... a little more transparency might go a long way. :)
> longstanding users need to also be able to vouch for flagged stories, or something like that.
This is the critical point. Today, users can "vouch" for [dead] stories, but can't vouch for [flagged] stories until they get flagged so much that they convert to [dead].
The other "Tank Man" story was flagged, but never quite dead, so users couldn't vouch for it; from users' perspective, it appeared to simply disappear.
Allowing users to vouch for the other story would have helped considerably.
Should we also be able to vouch for stories that haven't been flagged yet? I'll search for HN's ranking formula, but right now this story shows up in 8th place for me. Upvotes divided by time would put it on top, so I'm guessing either most of the upvotes are old and the ranker applies a decay, or it is also being flagged. It could also be an automatic flame-war detector pulling it down; HN has one based on a post's upvote/comment ratio, though I'm not sure whether there's one that considers voting within the comments themselves.
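For reference, the decay being guessed at above matches the widely cited approximation of HN's ranking score. This is a sketch of that public approximation, not the actual production formula (real penalties and thresholds are private); `gravity` and `penalty` are illustrative defaults.

```python
# Widely cited approximation of the HN ranking score (not the real
# production code). points: upvotes; age_hours: hours since submission.
def rank_score(points: int, age_hours: float,
               gravity: float = 1.8, penalty: float = 1.0) -> float:
    # Age in the denominator is raised to a power, so scores decay
    # polynomially: a story with many old upvotes can rank below a
    # newer story with fewer points. Flags would show up as a
    # multiplicative penalty < 1.0.
    return penalty * (points - 1) / (age_hours + 2) ** gravity
```

Under this approximation, a flagged-but-not-dead story just gets a smaller `penalty`, which is why it quietly slides down the front page rather than visibly dying.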
I have a feature on my platform that marks users as "trusted" when they reported things I agreed should be removed, and their reports are given more weight in the algorithm next time. It's sort of the inverse of what you propose, where everyone is somewhat (but not entirely) untrusted by default.
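A minimal sketch of that weighting scheme, assuming multiplicative updates and a semi-trusted starting weight; all names, constants, and bounds here are illustrative, not a real API.

```python
from collections import defaultdict

# Everyone starts somewhat (but not entirely) trusted.
report_weight = defaultdict(lambda: 1.0)

def resolve_report(user: str, mod_agreed: bool) -> None:
    # Agreement with the mod's decision nudges the user's future
    # reports up; disagreement nudges them down, within bounds.
    if mod_agreed:
        report_weight[user] = min(report_weight[user] * 1.2, 5.0)
    else:
        report_weight[user] = max(report_weight[user] * 0.8, 0.2)

def flag_pressure(reporters: list[str]) -> float:
    # Total weighted pressure a post has accumulated from reports.
    return sum(report_weight[u] for u in reporters)
```

The bounds keep any single user from becoming either all-powerful or permanently ignored, which is one way to dampen the brigading problem discussed elsewhere in this thread.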
There is apparently an entire branch of research called Reviewer Reliability. Someone on HN pointed that out to me a couple of years back, when we were discussing the problem with fake/bought/manipulated/blackhat product reviews.
In a hilarious twist of fate, searching for that term brings up papers on either medical research or peer-review reliability problems in general[0]. You try to find data on a potentially abstract, complex societal issue, and come up with what can only be described as attention grabbing HN-catnip.
It's possible to email the mods vouching for stories (or comments). I do this fairly frequently. Not (yet) in this case, though there was a politically-tinged story earlier this week that I alerted mods on.
What's particularly insidious is that killed stories both don't show up in Algolia search results (this is somewhat understandable, but in the case of political flagging, problematic), and even where favourited (something I also do with some regularity), may not be visible to non-logged-in users and IIRC actually disappear from the index in time.
My proposed approach is a little more automated. The mods don't have to find and remove flagging rights individually; they just unflag a post, and instantly all the users who had flagged it lose some credibility for future flags.
EDIT: The "flagging trustworthiness" could even help mods more quickly find posts that might need to be unflagged, based on the average trustworthiness of the flags.
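This proposal can be sketched in a few lines, assuming a multiplicative credibility penalty and a floor so nobody is silenced entirely; the function names and constants are made up for illustration.

```python
credibility: dict[str, float] = {}

def get_cred(user: str) -> float:
    # Everyone starts with full flagging credibility.
    return credibility.setdefault(user, 1.0)

def mod_unflag(flaggers: list[str]) -> None:
    # A mod reversing a flag costs every flagger some credibility.
    for u in flaggers:
        credibility[u] = max(get_cred(u) * 0.9, 0.1)

def avg_flag_trust(flaggers: list[str]) -> float:
    # Low values suggest a post was flagged mostly by users whose
    # flags mods have reversed before -- a candidate for review.
    return sum(get_cred(u) for u in flaggers) / len(flaggers)
```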
But unfortunately that could lead to a chilling effect on flagging deserving content. I like having dang and the other moderators involved as the final say, to reverse a decision if necessary. I am pretty conservative with my own flagging and vouching (I've vouched precisely once, for a comment I strongly disagreed with but which was clearly made in good faith and added to the discussion), and I believe that other folks are as well. Since the system mostly works, and since social media upvoting algorithms are so insanely complicated to balance well, I think it's fair to avoid rocking the boat.
I would like an option in the settings to disable/enable my ability to flag content. On mobile, I have accidentally flagged things, and I recall it being challenging to hit the button to unflag.
But flagging works most of the time. There are just too many submissions that result in utterly predictable and boring comments. I myself succumbed to the temptation many times, and left tons of comments when it came to, say, Trump's latest antics. Frankly HN will be (marginally) better if someone erased every comment of mine in any thread that mentions Trump.
By the time a story needs to be vouched for, it has lost the momentum that previously got it to the front page. So you need a way to prevent vouching from being necessary in the first place.
If a story transitions to a high flag/vouch level (IDK, >10 of each), it should have its clock reset to be considered a new submission, regardless of dupe status. Maybe label with [controversial] or [hot].
This problem happens again and again with hot topics, and at the moment the default behavior is to let them disappear, which is a bit lame.
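The clock-reset idea above can be sketched as follows; the thresholds, field names, and the one-reset-only rule are assumptions for illustration, not anything HN actually does.

```python
import time

# Once a story crosses both a flag and a vouch threshold, treat it as
# newly submitted and label it, instead of letting it disappear.
CONTROVERSIAL_MIN = 10  # ">10 of each" from the proposal, roughly

def maybe_reset(story: dict) -> dict:
    if (story["flags"] >= CONTROVERSIAL_MIN
            and story["vouches"] >= CONTROVERSIAL_MIN
            and not story.get("reset_done")):
        story["submitted_at"] = time.time()  # ranking clock restarts
        story["label"] = "[controversial]"
        story["reset_done"] = True  # guard against repeated resets
    return story
```

Resetting `submitted_at` restores the story's ranking momentum, since age-based decay starts over from the moment the flag/vouch fight proved there was real interest.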
On the 'vouch' option, this seems helpful. However, at some point one just accidentally creates a tier of "Super Upvotes" and "Super Downvotes". Trying to establish a reasonable heuristic/estimate/algorithm/whatever to identify who is or not a 'trustworthy user' worthy of the "vouch" permission might produce an interesting paper or blog post, but I doubt it'd help to increase transparency in real life, especially if there is a need for judgement-calls.
I actually tend to browse Hacker News through Feedly, and I'm often surprised at the number of links I wanted to check out that turn out to be flagged or dead.
That said, I normally chalk it up to the site's topics and interests diverging a little from my own, which is perfectly fine, as I typically enjoy the moderated approach over the constant outrage and flame fests I see elsewhere.
So you have a long-term, otherwise great user who contributes positively but has a strong opinion on a political issue you don't want all over your forum, for or against. People tend to flag it for you, but it's not controversial enough to fall in a hail of flags. Fortunately, keeping it flagged takes relatively little effort, saving mod time.
You introduce vouching for flagged stories. Now your user vouches for absolutely EVERYTHING on his side of the issue, and his opposite number vouches for everything on the other side.
Content that isn't low quality and resonates with a good number of people is likely to attract votes even if it's off topic and ultimately not desired on that forum, and direct, constant mod effort is now required to keep it off, because super-upvotes now counter super-downvotes. Welcome to your new political forum.
The mods respond to emails (contact link in the footer) about observed issues, so that’s always an option if y’all think something needs a human revisit and there’s no site button for it.
Yes. From observation, there are a number of topics which inevitably draw the attention of a personal army that very much cares about that topic in a specific way, and to a much greater degree than a typical user.
Vouching works for [dead] comments and submissions, not those that are only [flagged] (but not [dead]). I believe this is what 'crazygringo is referring to.
Karma threshold isn’t the only criterion. I’m at ~47k karma and do not have access to vouch functionality. I might be in a sort of permanent semi-penalty box (which I take no issue with, not my sandbox), so take this with a grain of salt.
I'd love to hear how you think you know that. Actually, better than that, please show us all some links to where this happened. If what you say is true, it should be easy to find examples.