
It's one thing to censor, but it's another to intentionally deceive your users into believing that they sent something.



When sending torrent links (even Ubuntu / Arch ISOs), FB blocks them. Usually it shows a red exclamation point on the message for me, so this behavior is quite weird. Of course, this is one of the reasons I don't use FB anymore.


This one also shows a red exclamation point, but because of a bug the exclamation point goes away after you send more messages. You can see it in the Twitter screenshots, or replicate it yourself by sending any gnews link on Messenger.


> It's one thing to censor

!!!

Are we nonchalantly accepting it's OK to censor and moving on to the argument about deceiving users?

Edit: I should add more context to this. I don't think we should be casually OK with censorship. Next thing you know, FB will silently (or not) start censoring political opponents, activists they don't like, etc. I am quite unsettled by how the West is doing the same as China. Censorship is just casually taken for granted now. I don't think the counterarguments hold for "private platforms". At some point, a platform becomes the public square when a billion-plus people use it.


>Are we nonchalantly accepting it's OK to censor and moving on to the argument about deceiving users?

Personally I'm disturbed at the nonchalant attitude taken towards shadow moderation in this thread. Shadow banning and shadow moderation may make one's job easier, but it has a disastrous effect on the forum or community by injecting a definite level of deceit into the moderator/poster relationship.

On any forum where shadow bans or shadow moderation are practiced, the moderators by definition cannot be trusted.


> Shadow banning and shadow moderation may make one's job easier but it has a disastrous effect on the forum or community by injecting a definite level of deceit into the moderator/poster relationship.

You say that as if it's fact, but I'm not sure it is. HN has a method of shadow banning that is reversible if you are noticed posting things that contribute and follow the guidelines, and even when you are shadow banned, people can opt into seeing your contributions (I've had showdead on from day one, I think).

There are places on the internet you can go to make sure you are heard and you can say whatever you want. If you prefer to post here instead of there, then some examination of why that is might be warranted, and specifically whether the thing you are complaining about helps or hinders making this place somewhere you feel worth spending time.


>You say that as if it's fact, but I'm not sure it is.

How is it not? Shadowbans by definition are deceptive; their whole intent is to deceive the poster into thinking that their message was sent and is readable.

If you post and your message appears to show up in the thread but only you can see it, then... you are being deceived. The message board software is lying to you.

Since it is the moderators who decide that, it's the relationship between you (the poster) and the moderators that determines whether your post actually goes through or you're being lied to.

That is what I mean when I say that shadowbans inject a factor of deceit into the poster/moderator relationship.

It's possible to argue that it doesn't, but it's also possible to argue that the moon is made of green cheese as well. Neither argument is the truth.

>There are places on the internet you can go to make sure you are heard and you can say whatever you want.

And there are places where if you are banned, you know it; if your comment is removed, you know it. My qualm in this thread is about the deceptive practice of shadowbanning.

> If you prefer to post here instead of there, then some examination of why that is might be warranted, and specifically whether the thing you are complaining about helps or hinders making this place somewhere you feel worth spending time.

May I ask -was it your intent to come off as being condescending and borderline insulting? If not, you ought to be aware that is how that sentence came off. And I'll tell you for nothing that being talked down to after spending 25 years on the internet does NOT contribute positively towards HN being a place worth spending time in.

It's also a hand-wave -meaning a distraction from my primary point which is about shadowbanning and the fact it injects a note of deceit into the moderator/user relationship.

There are pluses and minuses to everything; good points and bad points. Obviously, since I will continue posting on HN, I see more positive points than negative ones, but that does not change the fact ...and it is a fact... that I believe (and have no reason not to believe) that shadowbans have a negative effect on the larger community that they're practiced on. They make HN a place less worth coming to, and the fact that there are positive reasons to come to HN does not change that.

So, too-long-didn't-read summary: shadowbans undermine posters' faith in moderators and serve to undermine the credibility of moderation as a whole.


I think more precisely, the nature of the deceit is this: Shadowbans lie to a specific individual about their participation in the community.

This is because the user is not considered part of the community. A shadowban is something you deploy against a user who is presumed to be a malicious actor in the system. It's deceit, but in a similar category of deceit to telling a user the resource is 404 not found instead of 403 or 401 because you don't even trust them enough to let them know that they found a resource that exists. It's deceit with the purpose of stymieing further malicious action.


> It's deceit, but in a similar category of deceit to telling a user the resource is 404 not found instead of 403 or 401 because you don't even trust them enough

Is it the same? Showing a 404 is a security measure that applies equally to anyone lacking sufficient privileges. Ideally an app should prevent you from getting these, or show a message like "Not found, or no access".
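The 404-instead-of-403 pattern being discussed can be sketched in a few lines. This is a purely hypothetical handler invented for illustration (the resource table, paths, and usernames are all made up), not any particular site's code:

```python
# Hypothetical sketch of the "404 instead of 403" pattern: an unauthorized
# request gets the same status as a missing resource, so a probing client
# cannot learn whether the resource exists at all.

RESOURCES = {"/reports/q3": {"owner": "alice"}}  # invented example data

def status_for(path: str, user: str) -> int:
    resource = RESOURCES.get(path)
    if resource is None:
        return 404  # genuinely not found
    if resource["owner"] != user:
        return 404  # exists, but hidden: a plain 403 would leak its existence
    return 200

print(status_for("/reports/q3", "alice"))    # 200
print(status_for("/reports/q3", "mallory"))  # 404, indistinguishable from:
print(status_for("/reports/q4", "mallory"))  # 404
```

Note the trade-off raised above: a legitimate but unauthorized user cannot distinguish "no access" from "doesn't exist", which is why some apps instead show "Not found, or no access".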

> It's deceit with the purpose of stymieing further malicious action.

Shadowbans are powerful moderation weapons. Too powerful, I'd say. The moderator is prosecutor, judge, and executioner all at once. Who is the lawyer? Who reviews, after a time, whether the shadowbanned person has redeemed themselves? Maybe their bad behavior was just a temporary fit, or even a mental breakdown they have since recovered from. Or they just became wiser grown-ups over time. All very human things that happen, but now they have a harsh sentence applied to them, and they may not even know about it.

Especially at scale these things should be handled properly. And importantly there should be transparency of how the procedures work, what the status is, and what one can do about it.

I also wonder to what degree even more subtle measures than all-out shadowbans exist on the big social platforms. Like setting metrics on a sliding scale to define the extent to which the AI should limit a person's influence on the network.

Maybe this already has a name, but let's call it "shadow suppression". With such measures in place, a person's voice can be diminished for a lifetime without them ever knowing. They get a Like or a Comment here and there, but they will never go viral or even reach the audience they think they address, no matter how good their content and behavior become.
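To make the idea concrete, here is a purely speculative sketch of such a sliding-scale mechanism. Nothing here describes any real platform's ranking system; the function, names, and numbers are all invented:

```python
# Speculative sketch of "shadow suppression": instead of a binary shadowban,
# each account carries a hidden suppression factor in [0, 1] that silently
# scales how widely its posts are distributed.

def effective_reach(organic_score: float, suppression: float) -> float:
    """Scale a post's ranking score by the author's hidden suppression factor."""
    return organic_score * (1.0 - suppression)

# A suppressed user still gets a trickle of engagement, so the throttling
# stays below the threshold where they would notice anything is wrong.
print(effective_reach(1000.0, 0.0))   # normal account: full score
print(effective_reach(1000.0, 0.95))  # same content, roughly 5% of the reach
```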


Normally you don't get any of that process because you don't have any right to use someone else's service. They can ban you for having red hair or blue eyes if they please and they would be foolish but not in the wrong.


"They can ban you for having red hair or blue eyes if they please and they would be foolish but not in the wrong."

I am not sure if that is true anymore, with the implementation of anti-discrimination laws.

I mean practically it is still the internet, but if your service is public and beyond a certain size, someone might indeed sue you for doing that.


Sure, but scale that lack of rights to platforms with millions or billions of users, and you get an effective dictatorship: platforms wielding too much power that extends well beyond their walled garden, impacting society as a whole. The question is often raised whether e.g. FB should be considered a utility service and have its policies tightly regulated.

Edit: I submitted a separate Ask HN on "shadow suppression": https://news.ycombinator.com/item?id=28344492


It's an odd dictatorship that someone voluntarily submits to.

Opting out of Facebook's decisions about what you may say is as simple as closing your account.


People’s desire to fit in will make them give up all sorts of rights. Pay attention to history and this is pretty apparent.


It's a bit of a catch-22 though, isn't it?

If people can make sound independent judgment, we don't need a law constraining what private companies must allow on their services because private users will find the censorship reprehensible and will voluntarily leave. We need the First Amendment to constrain what government can do because people cannot voluntarily leave a government; the same is not true for their Facebook account.

And if people cannot make sound independent judgment, and their desire to fit in will make them give up all sorts of rights... How can we trust them with the freedom to say and hear unfiltered information? The first demagogue who comes along and convinces them they must do what he says or they don't fit in will have the whole user base in their power, right? In such a context, it is perhaps necessary to allow the media its freedom of the press to transmit what is true and refrain from transmitting what is false.


> How can we trust them with the freedom to say and hear unfiltered information?

Why does this always come up? Like this exact example. Were there no others? Do you know how I read this? “I think I’m smarter than everyone, and everyone should look to me for the truth”.

> In such a context, it is perhaps necessary to allow the media its freedom of the press to transmit what is true and refrain from transmitting what is false.

This is mostly what we have now. Go compare the divide between the parties to the Grand Canyon. This is the result. Why? Because of opinions. And what does the left do when they can’t disprove an opinion? Cancel/censor.

We also need the first amendment to apply to corporations. The left found the loophole and is giddily exercising it.


I'm not sure what you mean by "the left found the loophole," can you clarify?


The gov cannot directly limit speech except in certain, very limited, cases. Instead they get corporations to do it for them. Which is currently very legal.


Well said.


Perhaps it’s time we change these laws, and anybody creating a place to talk with others can’t have any rules banning anybody. Perhaps it’s time we pass laws that force companies to abide by the first amendment.


You foster and shape a community by first telling people what you will accept and then by banning people whose behavior doesn't comport with those standards. Here we sit in a functional community with such standards, discussing this matter politely with one another, while your argument would remove the very standards that make this community worth chatting on.

Just because you and I are reasonable doesn't mean the overarching internet is. It's manifestly not. It's extremely likely that a large part of the signal that makes this community valuable would depart if it was entirely filled with 10x as much noise.

Let people make their own communities with different standards if they don't like this one.


> It's extremely likely that a large part of the signal that makes this community valuable would depart if it was entirely filled with 10x as much noise.

What does this say about those participants who do leave? The entire point of different opinions is different points of view on the same subject. Those leaving simply because they don’t like the general narrative do not abide by this principle and probably shouldn’t have been in the community in the first place. Those leaving because they experienced daily abuse: that’s reasonable. I still don’t think they should leave, but instead grow thicker skin.


I don't want to read through 10x as much stupid, offensive, or otherwise bad content in order to read the same amount of insightful content. Life is finite. Overwhelmingly the sort to get themselves banned are sharing crazy. I don't think I'm missing much or indeed anything. Why shouldn't I embrace less crazy in my life?


> Overwhelmingly the sort to get themselves banned are sharing crazy. I don't think I'm missing much or indeed anything. Why shouldn't I embrace less crazy in my life?

Because when someone is censoring, you can't know whether it's crazy or not. You just assume it is, like you are now.


Nobody's obligated to grow thicker skin to use Facebook. In fact, Facebook would prefer to not put that obligation on their users; it decreases adoption and retention numbers.


And it decreases people's ability to hear harsh communications. You want to take over the US? Just stand right in the middle and start saying mean things.


There are very few online fora I am aware of that have any kind of arbitration or check on the exercise of administrator or moderator authority.

Facebook is one (which is why they will be reinstating Trump's account in the future: https://about.fb.com/news/2021/06/facebook-response-to-overs...).


Yes, but that example refers to a very high profile account. I wonder if the average person will get the same amount of attention (or at least a reasonable amount) in similar cases. Also, this refers to a very noticeable account suspension, not a shadowban that may stay under the radar.


Some users are simply not valuable participants. The kind that get shadowbanned can include those spamming Viagra, scams, and other nonsense.

I think that not getting rid of that content/malicious participants (not community members) undermines faith in moderation.


Other nonsense? As in viewpoints that are disagreeable to the moderators in charge?

There’s a clear difference between male enhancement spam and a guy with a poorly thought out comment that most people find disagreeable. A middle schooler can spot the difference.

Back in my day we welcomed those poorly reasoned or flat out wrong comments: you would attack them with logic and use them as opportunities to find the truth of a matter. And the community reading it would revise their priors and the needle would move closer to the truth.

Today those comments get censored, leading to much of the awful discourse you see on FB and Reddit these days. You can’t discuss sacred cows without getting banned from a community or shadowbanned altogether.

The ridiculous rhetoric on the left these days is a direct result of this phenomenon.


No, just the kind of stuff that our customers didn't want, or that were illegal.

Pirated movies, scams that purported to offer pirated movies, people trying to steal passwords, people trying to hack our users.

I think your accusations are aimed at the wrong person.


Read the parents; this isn’t about your specific situation. You defended censoring, saying sometimes it’s useful to essentially protect others from what you view as nonsense. This is the problem. What is “other nonsense”, and how much of the moderator’s bias is in it? If someone is posting information, leave it alone. If it’s wrong, it’ll be dealt with in short order by someone refuting it with more information. If it’s a spam account just posting marketing links, then ban it. These users were not posting marketing links in their private messages.


> How is it not? Shadowbans by definition are deceptive

The question is not whether it's deceptive, but whether "it has a disastrous effect on the forum or community". It is deceptive. Does it have a disastrous effect on the community? It doesn't appear so to me, so I think that needs supporting evidence.

> May I ask -was it your intent to come off as being condescending and borderline insulting? If not, you ought to be aware that is how that sentence came off.

How a sentence is interpreted has a lot to do with the context the person reading it is in. I didn't intend to be dismissive, but I think the point still stands, because I was actually trying to make a point. Why not just use one of the sites that doesn't do shadowbanning or censorship instead? This isn't some sly insinuation you should leave. It's an actual question about what is different here from there in the rules and moderation, and how and why that might contribute to how the community acts at each place respectively.

> It's also a hand-wave -meaning a distraction from my primary point which is about shadowbanning and the fact it injects a note of deceit into the moderator/user relationship.

No, it's a real question that you decided wasn't relevant so then went out of your way to avoid.

> but that does not change the fact ...and it is a fact... that I believe (and have no reason not to believe) that shadowbans have a negative effect on the larger community that they're practiced on.

It hasn't been shown to be a fact. You haven't even really shown how it could be negative to the community, much less provided evidence. "It erodes faith" is unsubstantiated, and even if it were true, it hasn't been shown how that actually results in a problem (I'm not sure there don't exist forums with hated moderators that still function fairly well).

Shadowbanning is censorship and it is deceitful, but it's also aimed squarely at those the moderators think are not part of the community, or at least not productive parts. The whole point of it is to keep those people from realizing their account is banned right away and starting a new account to continue the behavior.

I would argue that for the most part, this has a positive effect. There are plenty of dead posts, which you can see too if you want, that provide little or no useful contribution to the discussion or the community.

> shadowbans undermine posters' faith in moderators and serve to undermine the credibility of moderation as a whole.

I haven't observed that. I think most people here trust the moderators to use that sparingly, and there's also a system which appears to let the community see and reverse shadowbans in the case it's incorrectly applied or the banned person in question provides more meaningful content later.


Most people aren’t aware that shadowbanning occurs, how it is used, or to what extent. By definition, they don’t see shadowbanned comments or content.

I’m not sure how you can think people approve of something they aren’t even aware is happening.

It’s not just deceptive to the shadowbanned user, it’s deceptive to the whole community, too. How can an average user trust that the content they’re viewing isn’t entirely astroturfed? Or alternatively, 100% organic?


> I’m not sure how you can think people approve of something they aren’t even aware is happening.

Because people often trust people based on reputation, results, or other proxy measures rather than reviewing every action they take.

> It’s not just deceptive to the shadowbanned user, it’s deceptive to the whole community, too.

It is. At the same time I think most people don't care, and are fine not having every moderation action announced to them.

Additionally, as I've stated multiple times here, and so have others, HN allows anyone to opt into seeing these comments from shadowbanned users, so HN doesn't even follow the same ideal of the problem case you and others are putting forth. You'll get a lot farther pushing this idea of shadowbanning being bad and a problem if you actually respond to the reality of the situation you're arguing about here, and the counter-arguments being put forth, rather than just blindly repeating the same thing.

> How can an average user trust that the content they’re viewing isn’t entirely astroturfed? Or alternatively, 100% organic?

How is that anything to do with shadowbanning or moderation? That's a problem entirely separate and that shadowbanning and moderation of the type we're discussing has nothing to do with.

But, if you really want an answer to this ridiculous question: you can go into your profile here, find the "showdead" option, turn it on, and then you'll see all the comments you're complaining are deceptively hidden from the users. And if you don't trust that these are all the dead comments, you can't really trust the moderators at all (and there's no more or less trust here than on any other site which says they do or don't do something).


For the record, I’m not talking about HN, I’m referring to Facebook and Reddit particularly. HN handles this issue better than many platforms. Apologies if that wasn’t clear.

So users are okay with being deceived and ultimately don’t care? Let me know if I’ve misunderstood your argument.

That may be true. And knowing human nature you may be right. I disagree that it’s a morally acceptable way to moderate a community or forum, though.

People lose trust when they find they’ve been shadowbanned, or discover that the practice is used. They rightfully understand that the discourse is being manipulated in an underhanded, opaque way.


> So users are okay with being deceived and ultimately don’t care? Let me know if I’ve misunderstood your argument.

Users that expect moderation expect a certain level of housekeeping and don't care if they are exposed to it all, and I would guess a great many don't want to be bothered with it.

The important distinction here is that being shadowbanned is being excluded from the community, and it is done to keep these people away from and out of the community even more effectively than regular banning would. The community expects to see content from the rest of the community; these people aren't part of that group, regardless of whether they still have an account and can log in, so the amount of deception to the community is little to none.

The alternative to shadowbanning is not letting these people post; it's banning. The desired outcome in both cases is that this person is no longer allowed to be part of the community. The actual outcome of regular banning is often just that people make a new account and continue the same behavior. The only deception of shadowbanning toward the community is that you're not explicitly saying "hey, these posts we tried to stop from being made? We're letting them be made but just hiding them from view, since we don't want them here in the first place."

If I'm going to get mad about that, I might as well get mad at the police or a business owner for taking down some banner that was hung illegally across other businesses, which I would have seen on my commute if they hadn't acted fast enough. Would I feel deceived that some random, unwanted message someone decided to put up was removed without me being able to see it? No.

> I disagree that it’s a morally acceptable way to moderate a community or forum, though.

If every forum was like that I might agree. But groups of people have the right to choose how to police themselves and there are plenty of other communities out there that do it differently.

> People lose trust when they find they’ve been shadowbanned

Those people are being kicked out. The community, or the mods, want them to lose trust and leave in most cases. I don't worry that someone I think deserves to be in jail is disillusioned because they don't think they deserve to be in jail. Or, if I worry about it, it's not in a way that makes me think they don't deserve the punishment.

> or discover that the practice is used.

I think that's very few people.

> They rightfully understand that the discourse is being manipulated in an underhanded, opaque way.

All moderation is this, whether done by the few with extra powers or outsourced to the crowd. Moderation is censorship. Censorship isn't always bad (like most things in life, it's a matter of extremes being the problem: too little or too much both have issues, so we generally fluctuate as a society around a sort of middle area).

You can call it underhanded and opaque all you want, but it's only that to those that have already been excised from the community. To everyone else, it's really no different than if that person was banned, except they're less likely to come back for a while.


You notice how all of your examples are about HN and not FB? Where’s the option on FB to turn on banned posts? Where’s the default option turned on on HN?


That would be because A) this thread diverged from that when the assertion that all shadowbanning, in any forum, is harmful to the community was made, and B) as noted elsewhere in this discussion, FB isn't shadowbanning; they're denying sends, and a UI bug is later removing the information that the send failed.

> Where’s the default option turned on on HN?

I'm not sure what you're asking. If you're asking if showdead is on by default on HN, no, it's not, and I don't think it should be.

The whole point is to make it so people that are not contributing usefully to, or worse are actively harmful to, the discussion and community are less likely to cause a disturbance. Any one of those people can easily tell if they are shadowbanned if they decide to check just by opening up an anonymous browsing instance and looking at the same discussion. They then have the option of either trying to contact the moderators to make a case, continuing on and trying to align better with the community in future comments and hope someone vouches for them (if they're aware of that), or making a new account and starting over.
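The self-check described above, comparing the thread as an anonymous visitor sees it against what you see logged in, can be scripted. This is a simplified sketch under stated assumptions: the URL is hypothetical, and real pages may paginate or lazily load comments, so a naive substring match is only a rough test:

```python
# Sketch of the anonymous-browser shadowban check: fetch the thread without
# any session cookie and see whether your comment is visible to outsiders.
import urllib.request

def comment_visible(thread_html: str, comment_snippet: str) -> bool:
    """True if the snippet appears in the page as an anonymous reader sees it."""
    return comment_snippet in thread_html

def looks_shadowbanned(thread_url: str, comment_snippet: str) -> bool:
    # urllib sends no cookies by default, so this is the logged-out view.
    with urllib.request.urlopen(thread_url) as resp:
        page = resp.read().decode("utf-8", errors="replace")
    return not comment_visible(page, comment_snippet)
```

Usage would be something like `looks_shadowbanned("https://example.com/thread/123", "an exact phrase from my comment")`, with a hypothetical URL standing in for the real thread.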

But, ultimately, even if they don't know they're shadowbanned, as long as they try to align their behavior with the community, I think there's a good chance they'll be vouched for enough to not be shadowbanned after a while. When that doesn't happen, from what I've seen it's because they haven't really made an effort to change. For example, a dead user, fit2rule, posted yesterday about how they've been dead for a few months. I looked at their comment history, and the vast majority of their comments are single-line responses. Those aren't necessarily bad, but they probably don't cross the threshold that makes people decide they contribute enough to vouch for them. At the same time, the occasional comment of theirs with more than a sentence that seems to actually engage often doesn't show as dead, as someone must have vouched for it because they thought it was worth being expressed. If they had kept to that type of engagement, I think they would not be dead at this time.


> the assertion that all shadowbanning

How can you know this? The community is what it is as a result of the discourse that happens here. If you remove one person, that discourse, and therefore the community, has changed. So shadowbanning being harmful is more of an opinion of whether all voices should be heard or not.

> I'm not sure what you're asking. If you're asking if showdead is on by default on HN, no, it's not, and I don't think it should be.

I know it’s not, and I think it should be. It’s quite easy these days to have a pop-up say “want to see all of the conversation? turn off this setting” and guide the user through turning it off, with an easy way to turn it back on. In fact, ideally, this censoring would happen on the client, configured by rules created by the owner.

> as long as they try to align their behavior with the community

You do realize that everybody aligning means everybody is the same right? What happened to diversity? (cue “some opinions don’t matter” statement)


> How can you know this?

What? You asked me why my comments are about HN and not FB, and I said it's because up-thread a wider assertion was made about all shadowbanning, which is what we're talking about at this point.

> If you remove one person, that discourse, and therefore the community, has changed. So shadowbanning being harmful is more of an opinion of whether all voices should be heard or not.

The people who own and/or run the forum ultimately control the community. They may let the community exert pressure on them, but bottom line, they control it, and anyone under some other impression needs to come to terms with reality. Those people want different types of communities and will take different steps to ensure they get what they want. People that do not like those steps, or the communities that develop, have the choice of exerting pressure on the people that run it or on other community members, or they can choose to go elsewhere.

HN is not a public resource. It's a private resource that allows the general public. If you want to make a case that FB is special because it's so big and everyone is on it, that's one thing. Clearly define why and what the criteria are, and we can discuss whether that makes sense, breaks down in practice, or could be gamed. But if I stand up a little forum with dozens of users to discuss the specifics of care and restoration of the Datsun 240Z, and I'm contending with a persistent forum spammer putting up gratuitous messages about penis enlargement, and I find that banning works for minutes while shadowbanning actually solves the problem in most cases, then I think it's ridiculous to expect that most of the small community would be up in arms over how I was being deceitful to them by not showing them this spam.

I get that you might think FB and HN and Reddit are big public resources and might have to obey a different standard. If you do, please outline what makes them different and how we determine that. If you don't, and think everyone should obey this standard, please explain why my theoretical little car forum, entirely maintained and moderated by me, should even care what the forum members think, if I'm happy with the community as-is without any of the people that might have left because they don't like how I run it.

> You do realize that everybody aligning means everybody is the same right?

Why'd you even bother to post this? A half moment's thought would make it obvious that I'm talking about aligning a very specific subset of items, namely behavior, as I specifically stated.

Communication is only possible with a shared understanding of some base. At the lowest level, that's generally language, but it can be extended to other norms to make communication easier and less error prone.

> (cue “some opinions don’t matter” statement)

It's not that they don't matter, it's that they don't matter in some contexts to some people, and are thus inappropriate. Does Joe Bob's weekly rant about whichever politician currently has his ire matter? Possibly. Does it matter, and is it appropriate, on a small car enthusiast forum? Nope, and he has no right to have it shown there, nor any expectation that if he posts it there, others should see it, whether he thinks they do or not.


> What? You asked me why my comments are about HN and not FB, and I said it's because up-thread a wider assertion was made about all shadowbanning, which is what we're talking about at this point.

You completely ignored the question. How can you know that shadowbanning did not negatively affect this community? You cannot know that. Where does the HN exist that never shadowbanned, for you to draw your comparison from? This is opinion, not fact. But I'll let you move the goalposts. HN is nowhere near the size of FB, however.

> People that do not like those steps, or the communities that develop, have the choice of exerting pressure on the people that run it or on other community members, or they can choose to go elsewhere.

Pressure was exerted, censorship continues. What recourse does one have for the largest social media company in the world? The left is outright giddy at this thought.

> HN is not a public resource.

Given that no login is required one could argue it is public, given that recent scraping judgement.

> It's a private resource that allows the general public. If you want to make a case that FB is special because it's so big and everyone is on it, that's one thing.

I do want to make that case, but not related to size. Any, ANY, forum or anywhere where discourse happens cannot censor anything but spam. Repeated messages, messages that are clearly marketing, etc. Instead you have individuals who are not qualified playing judge, jury and executioner. Judges recuse themselves when they can't be neutral. How many forum moderators do the same?

> But to expect that if I stand up a little forum with dozens of users to discuss specifics of care and restoration of the Datsun 240z, and I'm contending with a persistent forum spammer that's putting up gratuitous messages about penis enlargement, and I find that banning works for minutes but shadow banning seems to actually solve the problem in most cases, I think it's ridiculous to think that most of the small community would be up in arms over how I was being deceitful to them about not showing them this spam.

Absolutely nobody but the scam artists will care about this type of censorship. But note that you're using marketing spam to justify censorship of others' opinions.

> Why'd you even bother to post this? A half moment's thought would make it obvious that I'm talking about aligning a very specific subset of items, namely behavior, as I specifically stated.

You didn't change the outcome, we're all still the same.

> Communication is only possible with a shared understanding of some base. At the lowest level, that's generally language, but it can be extended to other norms to make communication easier and less error prone.

Like censorship? Yea that would certainly streamline things.

> It's not that they don't matter, it's that they don't matter in some contexts to some people, and are thus inappropriate.

And therefore we should censor them because they absolutely cannot be useful for anybody else? How about this one:

Someone's BS filter is tuned mostly OK, but it still needs some work. You censor a false post and they miss an opportunity to do just that. What you're going to end up with is a bunch of people who can't think for themselves, who require someone telling them exactly what to do and think. It's been known for a while that the left wants the population following exactly everything they say. And given that all things are cyclical, we can easily see that it's a reversion to the style of government the US came from. Whether you are knowingly complicit or not, I'm not sure, but you're fighting to bring that style of government back.

> Does Joe Bob's weekly rant about whoever the current politician that has his ire is matter? Possibly. Does it matter and is it appropriate on a small car enthusiast forum? Nope, and he has no right to have it shown there, nor expectation that if he posts it there that others should see it, whether he thinks they do or not.

Discourse happens where discourse happens. And I doubt Joe Bob randomly spouts out his opinions; usually someone would insert some slight reference in everyday communications. The left does this one a lot.

But by all means, let's use this to censor?


> You completely ignored the question. How can you know that shadowbanning did not negatively affect this community?

You're the one maintaining that it does. Why try to force me to prove a negative, when you've not supported your own assertions in the first place?

> But, I'll let you move the goalposts. HN is not nearly the size of FB however.

No goalposts were moved by me. If someone takes a specific situation and makes a statement about it that generalizes across everything, I think it's pretty clear that's when the goalposts are moved, and claiming the goalposts were moved later when people actually use examples from the wider world is just a rhetorical tactic to cover that up. If you don't like having to defend the claim against a wider set of cases, simply back off that portion of the claim and say maybe it doesn't apply to everything, and don't defend it. It's simple, it doesn't mean you're wrong, it just means you're willing to revise your argument to be more specific and stronger in the face of facts.

> Pressure was exerted, censorship continues.

Presumably it wasn't enough to matter then? Should one customer control a service? Should one percent?

> Given that no login is required one could argue it is public, given that recent scraping judgement.

Publicly available is not the same thing as a public resource, but if you want to actually use that as some sort of evidence, some indication of what you're referring to would be useful, as I'm not going to google "recent scraping judgement" and assume we're working off the same facts and referring to the same thing. I would read a provided link though.

> You didn't change the outcome, we're all still the same.

I don't think so. Your claim is now that aligning everyone (on behavior, since I clarified that obvious point and you stuck by your statement) means everyone is the same. That's obviously wrong. Just because you and I agree not to yell epithets at each other here and converse in a respectable manner does not mean we are the same, as is obvious by how we disagree on things.

> Like censorship? Yea that would certainly streamline things.

Yes. One of the definitions of censorship is the prohibition of extreme things. I suspect there are a great many places you support censorship, as almost everyone does. I don't let my children speak to me or their siblings in certain ways in my house. I would be disappointed in them if they did so outside my house, and depending on which child and their age, might enact some punishment. I am censoring them, and I believe it's both for their own good and for the good of those they interact with. I would argue the vast majority of parents do this. Every community does this in some manner (even if punishment is just ostracization).

The problem with taking a hard stance of "censorship is bad" is that then you feel compelled to think of something as bad when it's called censorship and is censorship, but when viewed on its merits isn't actually bad. It's similar to how "ads" have been so vilified today that people take nonsensical positions, because to them the word embodies something. Names of businesses above and on doors are also advertising, but they're clearly useful and informative advertising of what's inside. Similarly, there are types of censorship, such as a community enforcing its norms, which often but not always are mostly beneficial, at least for the community overall.

Does this result in some people feeling like they've been targeted unfairly? Yes. Were they actually targeted unfairly? That depends quite a bit on the circumstances, which includes the personal responsibility of the person and what the community is trying to enforce. Whether a person that's breaking the rules of a community gets to do so regardless of their wishes should depend on the behavior and the norms. Breaking codes of conduct in a purely opt-in community, where it was a choice to do one thing over another? That sure seems like someone abdicated their own personal responsibility to me. Being punished because of some aspect of your birth such as gender and race? That seems like a problem, unless the community's purpose is a place for people such as that and you don't belong.

> What you're going to end up with is a bunch of people who can't think for themselves who require someone telling them exactly what to do and think.

As if shadowbanning is mandatory across all aspects of life and everyone will use it with the same criteria? That seems fairly far fetched given what we're talking about.

> It's been known for a while that the left wants the population following exactly everything they say.

Please, don't even start with assigning blame to one side. It's trivial to find instances of bad behavior on both sides along this spectrum, and the only reason you wouldn't see it is if you hadn't bothered to look (but many people haven't). All the sources of the right do exactly what you complain about, any person that doesn't toe the party line is not given the opportunities to have their position heard, and you're getting a curated version of facts like everyone else.

Preventing shadowbanning, or banning, or anything else won't help this. There are natural effects of communities that do this well for most cases; those are just useful to weed out the persistent bad actors that aren't acting in good faith anyway. If you actually care about different views and getting information through to people, you need to branch out and actually engage different communities. If those communities silence even coherent, within-the-rules interactions, then find a different one, since that one won't be receptive to that content no matter what, and forcing them to not exclude it won't help that community; it will just make it entirely dysfunctional for the things it was functional for previously.

> Whether you are knowingly complicit or not, I'm not sure, but you're fighting to bring that style of government back.

I think your vision is too constrained. You seem to think forcing everyone to hear everyone else is the solution. It's not. You forcing civil rights militants (for lack of a better term) and white supremacists to interact in their own home territory is not a plan to make things better, it's just a way to radicalize them further.

The solution is not to allow anyone to say what they want and make sure it's heard regardless of the community, it's to make people want to actually understand each other, so they're willing to meet people on their own terms and following the rules of the location they're at.

> Discourse happens where discourse happens. And I doubt Joe Bob randomly spouts out his opinions; usually someone would insert some slight reference in everyday communications.

Not in my private forum, not without my say, just as not in my house without my say. I reserve the right to kick out or ignore anyone. And it doesn't matter if that person hears or sees a reference. They know the rules or they shouldn't be there, and "not being able to control myself" is not a valid excuse in the real world.

> The left does this one a lot.

I'm flabbergasted that you seem able to suggest this seriously. To think the left is the only side that censors, when the right has also had a long and storied history of it, is baffling. Both sides censor when it suits their purposes. Notable times the right has taken up that mantle in the past include McCarthyism, censoring portions of climate change science (during both GWB and Trump)[1], and the many times religious organizations aligned with the right wanted to ban certain books.[2]

> But by all means, let's use this to censor?

There's a difference between controlling what the community you control (because you provide what makes it possible) can do and what the wider public is allowed to do. Censorship when applied to all aspects of life is definitely a problem, and why we have freedom of speech. But we also have freedom of association[3] as an important part of that, and if you look into that closely, being able to police your own community is an important part of it (if you can't express what you want in your community, you don't actually have a community).

As I noted earlier, if you want to make a case specifically about Facebook and how it should be treated as a public resource (which I'll clarify doesn't just mean something publicly accessible, but something people have a public stake in that the state should ensure has additional protections and requirements, like water or clean air), then go ahead and make that case. I might even agree with you on something like Facebook. But what was stated earlier by rnd0, and which we've ostensibly been discussing, is shadow banning and communities in general, not public resources or even Facebook in isolation.

1: https://en.wikipedia.org/wiki/Censorship_in_the_United_State... (the citation goes into the detail linking Trump)

2: https://www.au.org/blogs/christian-nationalists-censor

3: https://en.wikipedia.org/wiki/Freedom_of_association


Way too long of a post; I don't care about your position anymore and maintain mine.


I've got bad news for you about HackerNews then, HN has 'dead' users who can post but are only shown if you specifically opt-in and they can't be replied to at all.


But those comments from dead users can be "vouched" for if they appear to be in good faith, and one or more of those seems to take people out of dead status, I think. I'm not sure what points threshold grants that ability, or how long it has existed, but I do it every few months or so when I see a comment that seems to be perfectly fine but is dead.

I think that might be a pretty good mechanism, all said and done. Someone acts in a way the community doesn't accept enough times, they go dead, and then if they act acceptably they start getting people responding to and engaging with them as they come out of dead status. They only get attention, and positive attention, from acting responsibly.


HN would be a useless trove of violence and hate if all [dead] comments were automatically unhid. Folks who are into 8chan might be into a community like that, but I personally much prefer the moderated approach taken today.


> HN would be a useless trove of violence and hate if all [dead] comments were automatically unhid.

I read with showdead=yes and it is nowhere near that description. In fact dead posts are not even that common on most articles. And most of the dead posts, while on average low quality, are very rarely hateful or anything like that.


The question is: are they few and not quite as bad because of the shadowbanning, or in spite of it? Those comments can't be responded to without vouching for them, and I doubt most people are going to vouch for them just to respond, unless the comment has some merit. How many people just stop commenting because they are getting no interaction? How many additional posts are prevented because people can't respond to those comments?

Like many policies that might actually have an effect, I'm hesitant to say exactly how much it achieves without data comparing what it's like without it.


I read with showdead=yes as well and can vouch for your experience. You are not contradicting the parent post, IIUC, however: I take it seattle_spring was talking about how HN would be if it lacked the system or something like it at all.


There used to be one dead user that would post... wild racist stuff due to mental illness. But lately yes, I agree with this characterization.


Vouch takes the comment out of dead status. I then see other dead comments by the same user so I don't think it takes the user out of dead status.


I think it depends. I've vouched for a comment and then seen subsequent comments not be dead. I suspect there's a bit more to the mechanism, but I think it can lead to not being dead anymore, even if it just triggers a mod re-review of user comments to decide if they get a second chance.


I'm not even remotely surprised, and that doesn't really change my point.


Shadow moderation became the norm ages ago to deal with spam. You can debate about its limits, but somebody with a hardline position smacks of somebody who has never dealt with the real-world issues but is cocksure about how things should be done.


>You can debate about its limits, but somebody with a hardline position smacks of somebody who has never dealt with the real-world issues but is cocksure about how things should be done.

The canard about what happens when you assume applies in this case. Very much so.

I've moderated forums and I've never had a problem with simply deleting spam out-right when it came up. Same with problematic users and posts.


It's one thing to moderate something in the hundreds to thousands of users, and something entirely different in the tens of millions. When you start having multiple groups of attackers and whole underground markets aimed at taking advantage of your users, simply banning individuals / deleting individual posts no longer cuts it.


Then what? You’ve basically said you have a job you can’t do. And choose the lazy method of banning those, infringing on a very important right. If you can’t keep up with volume then quit?

So many people in this thread are talking about death threats from being a moderator. If being a moderator is such a dangerous job then why do people do it? One has to wonder, they’re either getting paid very well, or they like the power. I’m guessing it’s the latter in almost all cases.


I'd maybe do that for the manual spamming but it's foolhardy against anything machine generated. You're just helping people who couldn't care less about anything but increasing clicks understand how they get detected.


>I'd maybe do that for the manual spamming but it's foolhardy against anything machine generated. You're just helping people who couldn't care less about anything but increasing clicks understand how they get detected.

I haven't had to deal with it recently (since 2015 or so) but at the time captchas seemed to help with the automated stuff and what slipped through we deleted by hand.

Times change, of course, so I see your point


The assumption that you are making is that all the users, and specifically the users that need to be shadow banned, are just normal, mentally healthy individuals. Mostly, they are not.

If you want to waste precious minutes of your life dealing with the mentally deranged for free -- that's your call. But you shouldn't be disturbed by those who don't.


On HN, shadowbans are implemented via automatic flagging.


> Next thing you know, FB will silently(or not) start censoring political opponents, activists they don't like, etc

They already do. [1]

1. https://crimethinc.com/2020/08/19/on-facebook-banning-pages-...


Is there any better idea for how to stop people deliberately harming each other with bad information? Particularly in light of confirmation bias and the level of effort required to disprove vs the amount of effort required to make stuff up.


It will happen in the US. BigTech is a reaction to the internet. The ruling oligarchy lost control of the narrative. They need to control information flow again.


It's not a public square for being successful, don't use Facebook if you don't like it. I don't either.


Our legal system (fortunately) hasn’t accepted the theory that the penalty for being popular is that you get to be nationalized. How would you feel if you threw a lot of parties in your house, and then one day the government came to you and said you now have to allow everyone to party at your house?


You're making an interesting point which resonates with me, but I want to elaborate and crystallize my thoughts a bit more. I am a staunch supporter of civil rights, and we should examine the extremes.

Extreme 1: A social network with everyone on the planet addicted to it becomes impossible to avoid. Even if one doesn't want to use it, it will become a necessity for communicating with others. Twitter is an example, where it is the official source of information for everything from government agencies to local fire departments, embassies, and many politicians, etc. Recently they're requiring an account (and phone number) to access what seem to be the official channels of information for many nations. With millions if not billions of users, it is a significant chunk of the entire world's population. Just like any other industry (big oil, big tobacco), big social media and big tech need to be regulated. There is a spectrum between a private forum of 10k users and a megacorp able to even control election outcomes. What's even more egregious is that the algorithms that govern the engagement of 1B+ users are developed by a tiny group of people in Silicon Valley (probably 0.000001% of the US population, or around ~300 people give or take). In private. Behind closed doors. We ought to see how this is terrifying (even if I agree with their stance and moral principles).

Extreme 2: Big tech merges with government in the business of law enforcement. A small party where the government has absolutely no business gets interrupted by cops who were alerted by the "thought algorithms". These thought algorithms were developed by government + big tech by fusing various channels of information, and it was deemed that this party is too pick-your-fav-niche-topic and cannot be tolerated. It is my business, my party, my friends, and my agenda. No one should have the right to interrupt unless I am violating laws, and I am granted due process.

Obviously both extremes encroach on people's civil rights to privacy and access to information.


Why does it matter? You can't argue by an analogy, because first you need to argue why running a social network with 1B+ users is the same thing as throwing a house party.

With great power comes great responsibility. You don't get to be "popular" for free.

And if you don't like it, you don't have to play that game.


> You can't argue by an analogy

Sure I can. I’m doing it right now. Of course, analogies are imperfect, but they make good teaching and argumentative tools.

> You don't get to be "popular" for free.

Fortunately for society, the law disagrees with you. (Also, your way has been tried in the past, and was not especially popular - to the point of revolution in certain Eastern European countries in the 1960s-1980s.)


Because I didn’t throw a party every night for multiple years in a row that invited everybody in the country, and then once it started getting super popular only allow in those that talked like me.

Free speech was so important it became the First Amendment. It’s valued even higher than protection.


The Constitution constrains what the Government can do. It doesn’t constrain what private actors like Facebook can do. It never has.

Also your analogy is flawed. Facebook doesn’t do what you suggest today (“only allow in those who talk like me”). They allow almost all speech, with opposing viewpoints, different policy perspectives, different cultures, etc. Arguments happen there all the time.


> The Constitution constrains what the Government can do.

Yes, thanks for the reminder. That’s why this isn’t the government doing it, and instead it is the left via their means of control. That’s why there are pending FOIA requests for this very thing.

> Also your analogy is flawed. Facebook doesn’t do what you suggest today (“only allow in those who talk like me”).

Only if you take it quite literally. Otherwise are you saying they don’t censor anybody that doesn’t reiterate the same thing the left is saying?


> are you saying they don’t censor anybody that doesn’t reiterate the same thing the left is saying?

Yes, and there are plenty of examples of it, too. Republican governors and politicians, and right-leaning people of all levels of fame, with very few exceptions, continue to have a presence on Facebook and communicate there with their followers. Pick any Republican governor or Senator: they will be there.


So if I go on and post “misinformation”, which is defined by the left as anything anti-narrative (see all the “you ain’t black” claims), then what?


Why don't you just stick to characterizing things as "facts" that are actually demonstrably true? Then you don't have to even worry about this stuff.

Facts don't have a well-known liberal bias, Colbert Report joking aside.


Facts and “misinformation” have nothing to do with each other. At least not what the left has defined it as.


Usually? Nothing.

Occasionally, if it's a common piece of misinformation that's been widely-shared enough to justify manual intervention, a fact-check box may appear underneath. This happens to "facts" with a conservative political slant, as well as "facts" with a liberal political slant. If you run a page, your page can get "strikes" for this sort of thing that will cause you to down-rank in people's pages, but if anything, Facebook has been caught out for tipping towards the conservative side of that scale, suppressing that penalty on some popular conservative pages (https://www.engadget.com/facebook-overruled-fact-checkers-to...).

Rarely, you'll share "misinformation" that is also in violation of community standards (I'll leave that to the reader's imagination) and get a time-out proportional to how often that happens.

To understand Facebook's behavior, it's useful to remember that their goal is growth and retention. They want everybody using the service. I suspect, based on observation of their behavior, that they've discovered for themselves that without those measures, growth and retention are being harmed more than they're harmed via time-outs and fact-checking (i.e. organized boycotts over Facebook being a place where falsehoods spread wildly, people encouraging their more vulnerable friends and relatives to stay off FB so the misinformation parade doesn't convince them to take horse dewormer, etc.).


> This happens to "facts" with a conservative political slant, as well as "facts" with a liberal political slant.

Yes, we've all seen this "fair" fact checking. Hint: if the left is OK with it, the right is furious about it. You cannot be fair to both in a way that is acceptable to liberals.

> Facebook has been caught out for tipping towards the conservative side of that scale, suppressing that penalty on some popular conservative pages

As they should, for both sides. But interestingly this makes my point even more: everybody is quite aware of what the censorship is about and that it is taking place. Yet they put on this falsity and get shills to help defend them because?

> To understand Facebook's behavior, it's useful to remember that their goal is growth and retention.

Right. They are censoring and aligning everybody on FB to the same thought process, which happens to be a majority of Americans.

> I suspect, based on observation of their behavior, that they've discovered for themselves that without those measures, growth and retention are being harmed more than they're harmed via time-outs and fact-checking

When censorship is the result, do you think we should have more than a suspicion? What about the people on the right leaving? They don't seem to care about retaining those? Interestingly, the people left align with the CEO's own political beliefs.


> What about the people on the right leaving? They don't seem to care about retaining those?

Right, and usually they would. Growth and retention is the lifeblood of everything Facebook does; Zuck would rather cut off his own right foot than intentionally lose users.

Which is why the only rational conclusion I can find with the evidence I see is that Zuck is either afraid that continued inaction was eventually going to land him in jail (or put him through years of Congressional investigations) or they have hard numbers showing that for every X people on the right they retain, they are losing N*X other users, N > 1.


Or, and in my mind more plausibly, this is your typical democrat trying to cancel and persuade others to align to their point of view. How much social flack do you think people get for not being on FB? This entire thing is an effort in persuasion, and if that fails, exile.


The screenshots show two red exclamation points showing it wasn’t delivered


You just defined shadow banning.


It is censorship. Calling it shadow banning doesn't make it okay. Bigger and bigger groups of citizens are revolting over censorship: Bill Maher and tons of Americans. This is building into a revolt against Facebook.


Right or not, I think charitably, the person you are responding to is pointing out that this is not a new thing in social media sites. It's a decades old technique at this point.


I recall vBulletin 3 software having a mode to specifically make the website slow and frustrating for certain users.

https://www.managingcommunities.com/2009/09/14/troll-hack-gl...


"miserable" mode. It was a mod introduced in vbulletin 2 by community members.

When installed, it extended the idea of banning or shadow banning by degrading the user experience for the person in question: basically giving the impression that the site was malfunctioning, to the point that it was frustrating for the user and they left the site for a while. This included random delays to page loading, and sending them to a different page, ignoring their intent.

https://www.vbulletin.org/forum/showthread.php?t=93258
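The degradation described above can be sketched roughly like this (a minimal illustrative sketch; all names and numbers are hypothetical, not the actual mod's code, which was PHP):

```typescript
// Hypothetical knobs for a "miserable mode" style degradation of a
// flagged user's experience: random page-load delays plus occasionally
// ignoring the user's intent and redirecting them elsewhere.
interface MiserableConfig {
  delayMsMin: number;      // minimum artificial page-load delay
  delayMsMax: number;      // maximum artificial page-load delay
  redirectChance: number;  // probability of redirecting to the wrong page
}

// Pick a random delay within the configured window. The random source is
// injected so the behavior is testable.
function pickDelayMs(cfg: MiserableConfig, rand: () => number): number {
  return cfg.delayMsMin + Math.floor(rand() * (cfg.delayMsMax - cfg.delayMsMin));
}

// Decide whether this request gets sent to a different page entirely.
function shouldRedirect(cfg: MiserableConfig, rand: () => number): boolean {
  return rand() < cfg.redirectChance;
}
```

The point of the design is plausible deniability: each symptom on its own looks like an ordinary slow or buggy site, which is exactly why the targeted user tends to blame the software rather than the moderators.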


I believe Hacker News does this too (disclaimer: I have never experienced this firsthand, and the people I hear it from are obviously not impartial sources).


I’ve experienced this. If you are downvoted in quick succession you will get a “you are posting too quickly” error if you try to reply to anything… even if you didn’t post anything except the original post. Seems to last about a day.


I think that's just the standard rate-limiting; there is supposedly also a thing where Hacker News will throttle the connection as well.


They did that to me years ago, like ten years or so. Took me weeks to figure out the site wasn't slow when logged out! They don't do it anymore (and thankfully, because if they did it to me now I'd slowloris the hell out of this site).


> I'd slowloris the hell out of this site

I think most people who have the technical skill to do this effectively don't telegraph their intentions on what is, to all intents and purposes, social media.


I said they don't do it anymore. I'm not telegraphing any intentions. It would be like saying I'll shoplift in a store that no longer exists.

But saying people don't broadcast their criminal intentions on social media... I'm sure that's more common than you think :P


That's rate limiting. This is enabled on accounts if you ever post something "controversial". Shadow banning, where the site lets you think you are posting successfully, has definitely been used in the past on this site and probably still is.


Shadow banning isn’t really something used in private messages, no?



Facebook has been doing this for a while. Pretty sure they always checked privately sent links. Some years ago I sent a .tk domain to a few friends and it got shadow banned after a few minutes (it was a harmless meme site).


Hackernews does shadowbanning as well.


It was done at the unnamed social network I worked at, over 10 years ago.

To combat spam, mostly. People posting links to scam sites to watch sports games.


When faced with software/malware that was appending links to users' instant messages back in 2005, back-end code to silently drop those users' messages was put in place. This was a messaging service with ~200 million users.


Apparently by Facebook.


There is no mass revolt going on against Facebook. Get real.


Are you kidding?

My entire friend's list left Facebook just to figure out that tragically, Insta was owned by the same dickheads.

The prisoners are perfectly happy here.


It takes a certain amount of hubris to think that because you and your friends left Facebook that this constitutes a “revolt.”

To move that needle requires hundreds of thousands or millions of people to act in concert. Scale matters.


Maybe it is hubris but you know the saying “If shoeshine boys are giving stock tips, then it's time to get out of the market."


Hell, I am out of Facebook, but it’s not because they’re preventing me from sending links to Crazytown, USA to my friends. Five years later and still no revolt.


Quite excellent advice, especially for early adopters: we sometimes lose track of what is really happening with sites that were good and just "playful" at the beginning...


It's not censorship. A guarantee of reach on a post has not existed on FB for 10+ years, if ever.

If it did, FB would be a spammer's heaven.


Facebook is still a spammer's heaven. Of course, the spam you find on there is a little more sophisticated than "you have won the lottery" emails.


But nobody is banned here. It's banning certain links. When we hear ban we think accounts, not content.


The censorship is intentional, but the silence is a bug in the React code. You can look at the console in Chrome devtools to see: there is an error message informing the viewer that their message was censored. The message is incorrectly hidden when you send additional messages.

Also, if you refresh the page, the messages disappear for the viewer.
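To make the described bug concrete, here is a hypothetical model of the logic (this is an illustration of the reported behavior, not Facebook's actual code): the failure indicator is tied to "is this the last message?" rather than to each message's own send status, so the marker on a blocked message vanishes as soon as you keep typing.

```python
# Hypothetical model of the reported Messenger bug: the "failed" indicator
# is rendered only against the last message, so an earlier blocked send
# silently loses its error marker once more messages arrive.

def render_thread(messages):
    """messages: list of (text, failed) tuples, oldest first."""
    rendered = []
    for i, (text, failed) in enumerate(messages):
        is_last = i == len(messages) - 1
        # Bug: the marker depends on position, not on the message's
        # own send status.
        marker = " [!]" if failed and is_last else ""
        rendered.append(text + marker)
    return rendered

# A blocked link alone shows the marker...
print(render_thread([("gnews link", True)]))
# ...but sending one more message makes the marker disappear.
print(render_thread([("gnews link", True), ("hi", False)]))
```

The fix would be trivial (drop the `is_last` condition), which is consistent with the sibling comment's claim that this is an implementation error rather than deliberate concealment.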


This is the correct explanation (thank you). There’s no intention to hide the fact that certain links are blocked. The bug is that the status of failed message sends (not limited to blocked links) only shows an exclamation mark against the last message on web.


But it's that silence which deceives the user into thinking that the message never arriving is a technical malfunction and not intentional censoring. How many times do you think the user would see the censoring message and be cool with it? Maybe once or twice; then they will stop using FB, likely forever, and tell all their friends. But does FB want that? Hell no, they want to retain the user for data collection and ad-showing purposes. So how about just not showing the censoring message, and making the user believe there was just a technical glitch and not insidious shadowbanning?

I think this "bug" is intentional and done for plausible deniability in case of a lawsuit: "See, there is transparency. It was just a bug in the code. Human error."


Re your last paragraph. Would that really work though? You subpoena the person who wrote the code and ask them under oath. I wouldn’t perjure myself to protect the company, would you?


How do you know the bug is unintentional?


As good as they are at 99.99% of what they do, almost every time a friend brings me an example of Facebook doing something malicious, it turns out to be a simple implementation error.


So they've got winners of the Underhanded C Contest working there?


They do it because it is effective. Otherwise people would be more likely to try to circumvent it.


It seems to me FB has been doing this a long time with news feed posts. Often I'll post something and my friends who I check with don't see it in their feed.

Posts with links tend to be censored out more often. For example if I post a link to a charity donation website often my friends don't see it in their feed, but if I don't post the link and only state the name, they see it.


Malicious?


Sort of. It's like shadowbanning/hellbanning. It solves a lot of problems for moderators and prevents angry people from escalating the situation into actually being banned.

I've used it before as a moderator. For the above reasons, but mostly because I'm lazy and don't want to deal with angry people. Still censorship though. I think that ideally censorship ought to be communicated.


>because I'm lazy

Yeah, you described it well.

Start by understanding, Mr Moderator, there is a hell of a lot of difference between public posts and private messages.


Not if your goal is to censor a viewpoint. They are one and the same, whether the viewpoint is made in public or in private - the main point of censorship is to not allow you to communicate certain thoughts. It amazes me that some posts here (not yours, but like the one you're responding to) have already seeded the censorship ground. For them it seems the question isn't whether it's ok to censor opposing viewpoints, but whether they should be told they've been censored. The censorship part is a-ok, but not if you're not informed?


*ceded


That was important, thanks.


I've moderated a forum before. Some people on the Internet are just crazy. If you ban them, they will come after you—maybe even in person. Shadowbanning on forums (setting aside the FB case) is sometimes abused, but it serves a vital purpose: preserving the time and safety of volunteer moderators.


Don't they just get twice as annoyed when they realize that not only were they banned, but they were silently banned and wasted time writing posts that went straight to /dev/null? The only time shadowbanning seems like a good solution is outright commercial spam and link farming, where slowing them down is a win in itself.


Surprisingly, if you browse HN with "showdead" on, you'll find quite a few people don't realise. Others realise but keep posting comments they know most people won't read (you can tell in some cases because occasionally someone will mention their ban in the comments).

(I browse with showdead on because the volume is small on HN, and every now and again you come across people in the former category who were shadowbanned for one or a few incidents relatively far in the past, and who since then seem to have acted reasonably, where it's worth pointing it out so they can appeal their ban - mostly the flagged comments or shadowbans are well deserved, though)


I’ve run a forum for 15+ years and found shadowbanning vital in dealing with nutbags. As OP said, some people will pursue you personally and get obsessive. They have one forum they are obsessed with. You have thousands of users that are potentially like this.


Is it difficult to run a forum without the users learning your real identity? That seems like the way to go, but I have no experience with that sort of thing.


It would be possible, though if you participated a lot, you'd leave a lot of writing to be analysed. That would at least deter the casuals.

In my case, I post using my first name and most regulars would know who I am. It's a sports forum and people have recognised me at games, etc. I've had people look up my phone number and call me aggressively about things. I've had countless legal letters and threats. Someone doxxed my parents in the early days. You quickly learn how many unhinged members of society we have.


I think at some point you have to ask if moderating a forum is this important.


They get annoyed but not in the same way; the craziest want an audience and if you don't give it to them they generally move on.


"They get annoyed but not in the same way"

I would be careful about that.

"and if you don't give it to them they generally move on."

The craziest have lots of stuff going on. Lots of anger and hate. So if they decide that you are their mortal enemy for shadow banning them, they might not move on.


It's not worse than just directly modding them -- that gives them more direct rage fuel.


It's a normal part of censorship. During wars, some countries have employed censors to intercept letters from soldiers. The soldiers would never know what was censored en route, or indeed if the letter made it at all.


For another example of such totalitarian shenanigans consider the practice of "shadowbanning" : https://en.wikipedia.org/wiki/Shadow_banning

Ostensibly used for spam/troll countermeasures. In practice used to dick with whoever they choose for whatever reasons please their black little hearts.

Reddit and Twitter are notorious for it. Other "social media" organisms too.

AND ANOTHER MORE IMMEDIATE EXAMPLE

Consider the way that downvotes progressively remove a post from view.

It's a way of crowdsourcing the task of censorship.

Many hands make light work.

And the blame is neatly spread around.

And the actual shape of the censorship, the form being conformed to, is only indirectly controlled by the admins. Thus more blame neatly escaped.

It's goddamn elegant is what it is!
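The "downvotes progressively remove a post from view" mechanism the comment describes can be sketched in a few lines. This is a hypothetical illustration of score-based fading (the specific thresholds and the linear mapping are assumptions, not any site's documented algorithm): each downvote pushes a comment's opacity toward the background until it is effectively invisible.

```python
# Hypothetical sketch of crowdsourced removal-by-fading: map a comment's
# score to a display opacity. Positive or zero scores render at full
# opacity; negative scores fade linearly down to a near-invisible floor.

def fade_level(score, floor=-4):
    """Return an opacity in [0.2, 1.0] for a given comment score.

    The floor value and 0.2 minimum are made-up parameters for
    illustration.
    """
    if score >= 0:
        return 1.0
    if score <= floor:
        return 0.2
    # Linear fade from 1.0 at score 0 down to 0.2 at the floor.
    return 1.0 - (abs(score) / abs(floor)) * 0.8

for s in (1, 0, -1, -2, -4, -10):
    print(s, fade_level(s))
```

Note how no single voter, and no admin, decides the cutoff: the visibility of any given comment emerges from the aggregate, which is exactly the blame-spreading property the comment is pointing at.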


Moderation is not the same as censorship.


Only if you're going to take an absolutist stance that "censorship only applies to governments".

But that's not the only definition. wikipedia defines censorship thus:

"Censorship is the suppression of speech, public communication, or other information. This may be done on the basis that such material is considered objectionable, harmful, sensitive, or "inconvenient".[2][3][4] Censorship can be conducted by governments,[5] private institutions, and other controlling bodies. " (https://en.wikipedia.org/wiki/Censorship)

You can argue whether or not moderation is necessary, but it most definitely is censorship.

The way it is being applied by FB is most certainly censorship and highly inappropriate and destructive.


Do you think HN would be a better place if it didn't have moderation (or, as you put it, "censorship")?


That's beside the point I was making, as I said;

>You can argue whether or not moderation is necessary, but it most definitely is censorship.

Moderation is censorship, and censorship is evil. It may be a necessary evil (because of liability, if nothing else) but it needs to be seen and treated as evil, meaning it should be used as sparingly as possible.


Censorship is contextually a good, not an evil. No matter how divergent our positions, there is almost certainly content that both of us, if asked to rate it, would rate negatively. That is to say, the net value of the forum increases as we delete more of it.


What's your definition of evil?


That's an almost impossibly broad question to answer.

In the context of this conversation? Moderation is evil because it is destructive, repressive, exploitable for the cause of tyranny (e.g. silencing dissent), and is the opposite of that which is good (freedom of expression, freedom of thought, free flow of information).

Like any other violence, there are times when it's warranted, but it's always a balance. The greater good is to keep the forum open, so it's necessary to remove illegal materials. And that's an easy call when what is illegal is child pornography; but what about when what is illegal is a political message, or even just a series of seemingly random numbers?

Sometimes it's the lesser evil (child porn is always the greater evil) and sometimes moderation is the greater evil (repressing political messages or being used in the service of propping up totalitarian regimes).

I apologize if this doesn't make things clearer for you. As I said at the beginning; the nature of evil is a broad question that has been wrestled by philosophers for millennia.


Some would say that what is true is good, and what is false is evil, when it comes to communicating or sharing knowledge. In that case Facebook is doing the right thing.


Evil is a fundamental attribution error.

Sure, people do bad things, but it’s not ”evil people” who do bad things — it’s just people, the same people who have the capacity to do good things also do bad things.


How is this different from booing you for saying something stupid/wrong (or otherwise disliked even when smart/true) in a public venue. If anything, downvoting seems more honest and less "censorish" than booing, because you still get a chance to respond.


Downvotes scale much better, both temporally and spatially. Boos take more energy, presence and surprisingly even investment.


Well for one, you get to see who's booing you.


HN does that a lot too... using shadow bans and other techniques...


Hacker News does it. Reddit does it sitewide; individual moderators do it within their own communities. Shadowbanning is an essential tool for moderation, for dealing with trolls, spammers, and abusive people.

You can argue the tool is being used too widely by Facebook, but it is silly to pontificate against the practice as a whole.

You reaaaaallly would not like the uncensored internet.


Hold your horses. It is not "the internet". It is private chat.

You need to understand difference between communication that is meant to be public from one that is meant to be private.

Law makes clear distinction.


Facebook is not a common carrier. You aren't having a private conversation on any of its properties. You're communicating with Facebook and they relay non-public messages to recipients.


It is private conversation because it is not meant to be made public by its users. Not because Facebook isn't "common carrier".

A piece of paper isn't a common carrier. If I write a note to my wife and give it to her myself, it is private because I wanted and expected it to be private. If I taped it to a billboard, that would mean I no longer expect it to be private, and it is public.


You’re mixing up two legal concepts: expectation of privacy and common carriers.

Despite phone companies being common carriers (and thus being required to carry communications without censoring them), the Supreme Court has made it clear that people should have no reasonable expectation of privacy when they relay communications through a pay phone. The reasoning was that you give up your right to privacy when you involve a third party to your conversation. See Katz v. United States, 389 U.S. 347 (1967).

So the two can coexist.

Only common carriers are forced to carry speech unimpeded, regardless of its content. Expectation of privacy is irrelevant to common carrier status. The former is about whether it can be intercepted. The latter is about whether it can be impeded.


Katz v. United States held the opposite: “The Government’s activities in electronically listening to and recording the petitioner’s words violated the privacy upon which he justifiably relied while using the telephone booth and thus constituted a ‘search and seizure’ within the meaning of the Fourth Amendment.”

More recent decisions suggest that the third party doctrine, according to which there is no legitimate expectation of privacy in information voluntarily turned over to third parties, is unlikely to apply to private messages on Facebook:

https://en.wikipedia.org/wiki/United_States_v._Warshak

https://en.wikipedia.org/wiki/Carpenter_v._United_States


Crap, my bad. You’re right. Clearly I misremembered the case and should have reviewed it first. I withdraw my point about that.

As you said, Katz did say that we do have a reasonable expectation of privacy with respect to person-to-person communications. However, that has no bearing on whether the carrier can or cannot block the communication. That still hinges on whether the carrier is a common carrier.


I would suggest blocking and selectively editing messages are different things.

A conversation on Messenger is multiple messages; blocking would be preventing the entire conversation, while removing some messages selectively changes the meaning of the other messages.


It's not a private conversation. It's content sent to, and hosted on Facebook's servers. Why should Facebook (or HN, or any site) be compelled to host content that it doesn't want? If you ran a web site, should you be disallowed from moderating what user content is available?


> It's not a private conversation.

It is according to Facebook. Here is a quote from their own website:

> We’re dedicated to making sure Messenger is a safe, private, and secure place for you to connect with the people who matter.


Reddit shadowbans in private chat too. Gmail does shadowban equivalent. HN doesn’t have private chat, but I imagine they’d use the tool if they could.

If you’ve never moderated anything halfway popular you have no idea what “uncensored” means in 2021.

It isn’t early 90s usenet. It’s endless spam, porn, etc


Facebook is not an outlier in this case: Reddit shadowbans affect DMs too.

Not sure why you brought the law up, as far as I know what FB is doing is 100% legal.


Do you not understand what private communication is and how it is different from Reddit?


Serious question: What's the difference, for your definition of "private communication", between a FB Messenger message and Reddit DM?

I can't think of any reason one would be considered private and the other not.


The same way I can expect my phone operator not to change the meaning of texts between me and my wife, while still allowing me to block messages from strangers I don't want to be bothered by.


This doesn't answer my question, which I will restate:

Why is FB Messenger 'private' while Reddit DMs are 'not private'?


No, I actually really would. I grew up with the uncensored internet and it was, on the whole, less abusive and manipulative than today's internet culture.


It felt like an actual tool, or a big box filled with information you could dig through to find the stuff you needed. It wasn't restricted by corporate greed or policed by a variety of groups trying to impose opinions, policies or "culture" on other people while claiming it was for everyone's sake. It wasn't a perfect place, of course, but it wasn't commercialized as it is today - and that's what happened: the years of simplicity were done, and commercialization was necessary to make the Internet appealing and accessible to average people, at a certain price.

Seeing the monopoly of Google in browser sector, the whole predatory ads sphere, manipulative social media and the attitude corporations and politicians have towards ordinary people regarding the technology, I don't think I want to see what will be the next big thing in the evolution of the Internet and related tech.


It was a tiny, tiny elite who were online compared to today.

As late as 2003, 90% of the world was not on the internet.

Internet culture represents the average person now whereas it didn't at all up through the first Internet bubble and bust.


Not by today's standards. Typical messaging services full of teenagers were utterly filled with what we'd now call 4chan-ish content, hate spouting, or cyberbullying. That culture isn't really compatible with the prim and proper normal parents and other parent-like people who are now on the internet and throwing their weight around.


It’s way more likely to me that this is simply because the Internet used to be the (nearly) exclusive domain of people who were culturally like you: educated, technologically sophisticated, libertarian-leaning affluent white males. Not that you’re necessarily any or all of these things, but if you were around for “the uncensored Internet” you likely felt comfortable with that as your cultural in-group. I say this as someone who is many of these things, and was also around in the “good old days”.

These days people of every cultural background get a seat at the table. That changes things. And that’s to say nothing of the advent of professional, anonymous astroturfing and disinformation campaigns.

Going back would be an abject disaster.


You didn’t make a point in your reply.


You may have missed it in your rush to downvote me.


I did not downvote you at all; I replied because I wanted to discuss it with you. Others have replied, and I think it's really interesting that you think an unruly playground needs to be moderated. Others have pointed out you just "go to your corner" or don't engage, so to speak. But you seem to think it needs to be moderated. I am not saying you are wrong; I'm curious as to why.


The problem is that I can “go to my corner” but I can’t stop people with no intention of playing by any set of rules from coming with me.

We’ve gone to our own corner here on Hacker News. And yet there’s moderation to keep discussion on-topic and respectful. In my estimation you likely choose to participate in this corner of the Internet because of the moderation of both link submissions and comments. Without it, HN would devolve into a cesspool over time like every other attempt at unmoderated forums that’s been tried. Well-meaning participants would be driven out by trolls, spammers, and angry people with an axe to grind.


I disagree; I come here because the articles I see and the comments I read are reached by consensus. Bad comments are normally at the bottom of the page and easy to ignore.

There is shadow banning on HN and I disagree with it, but I should be able to make the above generalisation of why HN seems to work with less moderation than expected.

On sites like 4chan, there is some moderation, but again, the few interesting comments that exist get automatically highlighted by the engagement that occurs within the page. Moderation does not allow this to exist; consensus does. Moderation just helps, but I argue the site would work without it, and that's how the internet used to work. Even newsgroups, which used the wrong mechanic to handle consensus-based uploading and suffered from spam, had content that was good and easy to find, without any moderation that I could see.


> There is shadow banning on HN and I disagree with it, but I should be able to make the above generalisation of why HN seems to work with less moderation than expected.

I respectfully suggest that HN has significantly more moderation than you believe it does.


I had not realised but I will take your word for it - thanks.


I think I missed it too. I didn't even downvote.


Conversation at dinner with a group of your friends and their friends is going to function a lot more smoothly than one at a table of atheists, christian fundamentalists, Jews, Sunni and Shia Muslims, Americans, Russians, Chinese, Tibetans, Israelis, Pakistanis, Indians, liberals, conservatives, socialists, capitalists, libertarians, fascists, anarchists, environmentalists, trans people, cis people, homophobes, racists, sexists, “the woke”, narcissists, sociopaths, frauds, manipulators, trolls, geniuses, average people, and imbeciles.

The Internet might have seemed “better” back then and like it didn’t need moderation. From many people’s perspective it probably was. But that was likely a function of the reality that most people on the Internet at the time had a lot in common with one another.

We don’t get the benefit of that luxury today.


I disagree with your take -I don't see it being a question of moderation. You had many of those groups online (if not all of them) in the 90's and 00's. Unlike today, if you didn't like the forum you were on you could go to another one where you fit in.

There was a greater diversity of social gathering places.

Now -there's facebook, twitter, reddit and HN.

Moderation has ALWAYS been a factor and that's not what made the old internet better.

What made the old internet better was that it was open and more diverse (in terms of viewpoint and choices). The new internet is basically (on a social level) a few social media corporations which are becoming increasingly sterile.


Being on the internet and unmoderated doesn't mean you have to engage with everybody. The idea is that you get to choose what sort of conversation to engage in, and who to /ignore. Hardly different from "real life", except that the internet gives you a larger pool to engage with and better tools to deal with it.


The problem isn't you choosing whom to engage with. It's everybody else choosing to engage with you and with those you want to engage with.

If every discussion on HN was derailed by bad actors, you'd go somewhere else. If there was nowhere else moderated to go to, you'd probably just stop engaging altogether.


I don't think you’re wrong in pointing out that the internet is likely more homogenous the further back in time we go. It’s a hard thing to be wrong about with respect to any scene. But I don't think I agree that it means that we must bend on principles of freedom of expression, digital civil liberties, and anti-censorship and anti-central control. And it definitely doesn't mean the internet wasn't possibly still the most diverse place on the planet even shortly after its inception. Now that the internet has taken the world stage, we must continually fight and push for the society we want to build, for what’s right, really. We must educate and in certain ways indoctrinate. Otherwise we risk losing our values to the swaths of normies—especially now that other cultures are vying for their own version of comfortable.


>You reaaaaallly would not like the uncensored internet.

Respectfully, speak for yourself.


People don't seem to understand the ramifications.

I prefer truth to being enslaved into conformance by "the algorithm".


Every platform that tries "we're 100% free speech, no censorship or moderation" winds up learning the same lesson, which is that the problem isn't moderation itself; it's that you don't like the kind of moderation decisions being made. Parler ran this gauntlet on fast forward [0], Gettr did it too [1], and places that have tried holding on to the free speech absolutist position eventually burn to the ground in one way or another, like the various children of 4chan.

[0] https://www.techdirt.com/articles/20200630/23525844821/parle...

[1] https://www.techdirt.com/articles/20210825/17204647438/trump...


I think the anonymous internet was way better. People tend to group anyway, so it's not like everyone is forced into the same room.


If the_donald proved anything in 2015 and 2016, it's that the loudest, most malicious folks intentionally ram themselves into all communities. The suggestion that bad actors will keep to themselves is the absolute opposite of reality.


>You reaaaaallly would not like the uncensored internet.

I remember the uncensored internet -it was vastly superior to what we have now.

Vastly.


It was not because of the uncensored content. It was vastly superior because there was a lack of commercialization. Before SEO, before targeted ads: an internet made by educational institutions and people with hobbies.

Nowadays, allow everybody to post anything and you will only see spam for the rest of your life; it will become impossible to see anything of value.


I'm not sure if I agree or disagree with you -you definitely have a point,though.

From my point of view it's less commercial vs non-commercial (though I think most of the problems either stem from, or are aggravated by commercial factors) as much as it is diversity versus consolidation.

Ten, fifteen years ago there was a greater number of varied forums and social gathering places, with a greater variety of viewpoints. Now, while there's the odd vBulletin board here and there, you have fewer and fewer populated forums - everything seems to have coalesced (socially speaking) into Twitter and Facebook.

Basically, from my point of view, Social Media has consumed the rest of the social internet (meaning forums). I think there's more than one cause for that but that's what I see as the main thing making the past superior to the present -the ability to not just have your own place, but to have a chance at it reaching out to more people than just you and your friends.


> From my point of view it's less commercial vs non-commercial (though I think most of the problems either stem from, or are aggravated by commercial factors) as much as it is diversity versus consolidation.

There is a case where I'm pretty sure commercialization is the problem. Phone app stores are modeled on Linux repositories. Linux repositories are good. App stores are awful. The problem is the level of interest from bad actors, and the reason they're interested is the commercialization.


>There is a case where I'm pretty sure commercialization is the problem.

It's a problem, and yes -a major one. But I think that it's just one of many factors contributing (negatively) to the situation.


> everything seems to have coalescened (socially speaking) into twitter and facebook.

And Reddit and Discord.

It is a monumentally difficult task to create a forum these days, from a social PoV.


>It is a monumentally difficult task to create a forum these days, from a social PoV.

and legal, and probably technical PoV as well; yes.

The large platforms have sucked all of the oxygen out of the room, and have gained what I feel is an unhealthy control of what can and cannot be said.

I have no idea what can be done to fix the situation; but none the less, there it is.


From a technical perspective, forum software is easy enough to use.


A technical perspective also includes mitigating denial of service attacks and other security issues, including scaling.


For one, you could tell idiots (whoever you decided they were) exactly what you thought of them, without fearing retribution IRL.

Keeping online identities separate from the real world worked both ways. It protected rational critics, trolls and shitposters alike.

The problem never was anonymity or freedom of expression. No. Social media is when life online went into freefall. Curiously, a pillar of its business model is melding real life with internet identity.


So were the people who used it. Times have changed. Maybe the last time you went to a bar you saw the bouncer eject some hulking meth-head, drool dribbling down his bib, for getting handsy with strangers and throwing a bottle at the bartender. Well, that meth-head has a social media presence today too.


Not really -usenet of old had stalkers and harassers; that's how the killfile was invented.

That is a very, very good analogy, however.

The internet of old had a wider variety of establishments that you could go to. If you wanted a rough-and-tumble dive bar experience it was there. If you wanted a polished experience it was there too.

The effect facebook and other social media has had on the internet is the same effect that walmart had on small mom and pop stores -you can still find one here and there but no, not really.

Incidentally, for all of the censorship and increased barriers to entry (for creating forums), your drooling meth head still presents a danger in the form of doxxing, inciting riots (Jan 6th, anyone?) and harassing people.


Sure, Usenet had its antisocial users, and quite a few seedy groups, too. My point is that it's worse today... much worse. In the early 90s the net was mostly students, profs, tech workers and professionals.

Think of the scariest person you've ever met (really, take a moment and actually do it) and ask if he or she would have been online in 1995; the chances are they would not. But they are now.


The uncensored internet then isn’t the same as an uncensored internet now. Everyone is online and everything is trivially automatable.

Mountains of spam, porn, violence, etc

If you’d ever moderated anything halfway popular you’d know. Shadowbans are an essential tool.

They can be way overused. But there is a reason everyone used them. Constant arms race between sites and spammers.


Vaaaaaastly!


> Shadowbanning is an essential tool for moderation

Does it really work? A lot of people have no trouble working out that it has happened to them. The theory is that a troublemaker won't realise they've been banned, and so will continue sending their trouble into the void instead of coming back as a new account to cause more trouble that everyone else can see. In practice, it doesn't take long for the troublemaker to work out what is going on, so this only slows them down slightly at best. For some major sites, people have even created shadowban detection tools, so even troublemakers who are too clueless to work it out for themselves can discover it.

This site only really has half-shadowbanning. A lot of people have showdead on, read the dead comments too, and vouch for interesting/constructive ones. Maybe that works for this site; but even if that is true, it doesn't really tell us how well a pure shadowban implementation, without showdead, works. Very few sites with shadowbanning have an equivalent to showdead.
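The detection tools mentioned above generally reduce to one comparison: fetch a thread as the author (logged in) and again as the public (logged out), then diff the visible posts. Here is a hypothetical sketch of just that core logic; the data shapes and names are assumptions, not any real tool's API.

```python
# Hypothetical core of a shadowban detection tool: a shadowbanned user's
# posts appear in their own (logged-in) view of a thread but are absent
# from the public (logged-out) view.

def detect_hidden_posts(own_view_ids, public_view_ids):
    """Return the set of post IDs visible only to the author.

    own_view_ids:    post IDs the author sees when logged in.
    public_view_ids: post IDs anyone sees when logged out.
    A non-empty result is consistent with a shadowban (or with
    ordinary per-post removal).
    """
    return set(own_view_ids) - set(public_view_ids)

# The author sees p1..p3; the public only sees p1 and p3.
hidden = detect_hidden_posts(["p1", "p2", "p3"], ["p1", "p3"])
print(hidden)
```

This is also why pure shadowbans are fragile: the check requires nothing more than a second, unauthenticated request, which is exactly what the public detection tools automate.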


Surprisingly yes. It does work. There are some who figure it out eventually, but not before they have spent months wasting their time and saving mine.

I reserve it for people I believe will ban evade or have in the past.


>You reaaaaallly would not like the uncensored internet.

Respectfully disagree. Watching things get disappeared on Reddit sent me in to an anxiety spiral.

I'll take goatse links over that shit any day of the week.


Goatse links would be the least of your problems. I’m imagining my Gmail account without the spam filter. Everything showing up in my inbox, not just the stuff in my spam folder, but also all the stuff they silently drop that I don’t even see. Email would be unusable.


> I’m imagining

Imagination often does not match reality. I have a (non-Gmail) account that is littered all over the web, in git repositories and in multiple leaks. I don't silently drop any emails - the only server-side anti-spam measure I have is a regexp to reject the usual spam subjects at submission time, which for normal senders will result in a notification to the user. The amount of spam is hardly at a level that would be anywhere close to making email unusable.
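The approach described (reject at submission time rather than silently drop) can be sketched as follows. The patterns here are made up for illustration; the point is that a match produces an explicit rejection, so a legitimate sender whose subject trips the filter gets an immediate error instead of silence.

```python
import re

# Sketch of subject-based rejection at submission time. A 5xx-style
# refusal means the sending server notifies its user, unlike a silent
# drop. The pattern list below is a placeholder, not a real blocklist.
SPAM_SUBJECTS = re.compile(
    r"you have won|free\s+money|act now|miracle cure",
    re.IGNORECASE,
)

def accept_subject(subject: str) -> bool:
    """True = accept the message; False = reject with an SMTP error."""
    return SPAM_SUBJECTS.search(subject) is None

print(accept_subject("Meeting notes for Friday"))
print(accept_subject("You Have WON the lottery!"))
```

Rejecting at submission keeps the transparency property the commenter is arguing for: every filtering decision is visible to the sender.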


I am sorry, but the uncensored internet is the only type of acceptable internet. Anything else is just gov or corp interests.


An uncensored internet would become an unusable cesspit.


I miss the uncensored internet, before it went all 2.0-shaped.


I can't think of any direct messaging applications that moderate content.


This is private chat. Get off your high horse.


Secretly blocking specific messages is very different from a shadowban, though.


People use these things as alternatives to SMS or even the telephone (for audio chat). It would be absolutely ludicrous if telecoms started shadowbanning their customers, if Facebook wants to act like one it should play by the same rules.



