FB messenger silently censoring links, claims they were sent (twitter.com/kylejohnmorris)
608 points by votick on Aug 28, 2021 | 447 comments



Any user of any Facebook property needs to understand, well in advance, that the company engages in active psychological manipulation of and experimentation on its users, and then decide whether this clashes with their personal autonomy.

Remember them talking about how they experimented on their users by algorithmically showing different groups different types of content on their news feeds, trying to influence their mood? Or when they intentionally kept crashing their Android app to see what their user retention rate would be even on faulty software? Or just open the app on two phones of the exact same make and model and compare the app menus?

My personal anecdote is that they also use random errors to disguise bans. I have a personal account and a dev account, which I made the first time I accidentally got banned for 24 hours, because the ban also reset my OAuth app keys. The next time I got banned, I logged in to my dev account and tried to make a post; since they had managed to associate the two accounts, I ended up banned on that one, too. But it didn't say so: it kept throwing random errors as if it were only a backend issue. These disappeared once my main ban was over.

So, all in all, my point is: people should understand that the platform outright does these kinds of things, and use it or find alternatives based on that.


I agree with you. I feel caught in a trap, though: I really like the Oculus VR platform. I use it every day to play quick games of ping pong or racquetball when I need to move around after sitting in my desk chair. Sometimes I spend more time on the Star Wars games, etc. But, as you say, I try to use their VR platform with some awareness of their business model.


I'm no better. I use it to keep in contact with friends, most of whom have moved to other countries, but I try to maintain a backup.

I also own an Oculus, and I have given the Quest credit in the past as the closest thing yet to mainstream VR. It's unfortunate that it operates under Facebook, but it's probably the only hardware business they'll have that holds any interest for me. I have a separate account for mine because I don't trust or like them (and I really dislike "social" features in games). That being said, I expect it to get caught up and destroyed eventually, so I hope that at that point the hacker community will have an answer to keep it working outside the Facebook ecosystem. (There may already be one; I have not checked.)


It's one thing to censor, but it's another to intentionally deceive your users into believing that they sent something.


When sending torrent links (even Ubuntu / Arch ISOs), FB blocks them. Usually they show a red exclamation point on the message for me, so this is quite weird. Of course, this is one of the reasons I don't use FB anymore.


This one also shows a red exclamation point, but because of a bug the exclamation point goes away after you send more messages. You can see it in the Twitter screenshots, or replicate it yourself by sending any gnews link on Messenger.


> It's one thing to censor

!!!

Are we nonchalantly accepting that it's OK to censor and moving on to the argument about deceiving users?

Edit: I should add more context to this. I don't think we should be casually OK with censorship. Next thing you know, FB will silently (or not) start censoring political opponents, activists they don't like, etc. I am quite unsettled by how the West is doing the same as China. Censorship is just casually taken for granted now. I don't think the counter-arguments hold for "private platforms". At some point, it becomes a public square when a billion-plus people use your platform.


>Are we nonchalantly accepting that it's OK to censor and moving on to the argument about deceiving users?

Personally, I'm disturbed by the nonchalant attitude taken towards shadow moderation in this thread. Shadow banning and shadow moderation may make one's job easier, but it has a disastrous effect on the forum or community by injecting a definite level of deceit into the moderator/poster relationship.

In any forum where shadow bans or shadow moderation are practiced, the moderators by definition cannot be trusted.


> Shadow banning and shadow moderation may make one's job easier but it has a disastrous effect on the forum or community by injecting a definite level of deceit into the moderator/poster relationship.

You say that as if it's fact, but I'm not sure it is. HN has a method of shadow banning that is reversible if you are noticed posting things that contribute and follow the guidelines, and even when you are shadow banned, people can opt in to seeing your contributions (I've had showdead on from day one, I think).

There are places on the internet you can go to make sure you are heard and can say whatever you want. If you prefer to post here instead of there, then some examination of why that is might be warranted, and specifically whether the thing you are complaining about helps or hinders making this place somewhere you feel is worth spending time.


>You say that as if it's fact, but I'm not sure it is.

How is it not? Shadowbans are by definition deceptive; their whole intent is to deceive the poster into thinking that their message was sent and is readable.

If you post and your message appears to show up in the thread but only you can read it, then... you are being deceived. The message board software is lying to you.
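The mechanic being described can be sketched in a few lines. This is a hypothetical illustration, not any real forum's code; every name in it is invented. The software filters each viewer's copy of the thread, so the banned poster sees their own comment while everyone else's view silently omits it:

```python
# Hypothetical sketch of the shadowban mechanic: the banned poster's
# comments render normally for the poster themselves, are hidden from
# everyone else, and can be revealed by an opt-in flag (like HN's
# "showdead"). All names here are invented for illustration.

SHADOWBANNED = {"spammer42"}

def visible_comments(comments, viewer, show_dead=False):
    """Return the subset of `comments` that `viewer` would see."""
    return [
        c for c in comments
        if c["author"] not in SHADOWBANNED  # normal comments: everyone sees
        or c["author"] == viewer            # authors always see their own
        or show_dead                        # opt-in to see hidden comments
    ]
```

From the banned poster's perspective the thread looks identical to everyone else's, which is exactly the deceit being objected to.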

Since it is the moderators who decide that, it's the relationship between you (the poster) and the moderators that determines whether your post actually goes through or you're being lied to.

That is what I mean when I say that shadowbans inject a factor of deceit into the poster/moderator relationship.

It's possible to argue that it doesn't, but it's also possible to argue that the moon is made of green cheese. Neither argument is the truth.

>There are places in the internet you can go to make sure you are heard and you can say whatever you want.

And there are places where, if you are banned, you know it; if your comment is removed, you know it. My qualm in this thread is with the deceptive practice of shadowbanning.

> If you prefer to post here instead of there, then some examination on why that is might be warranted, and specifically whether the thing you are complaining about helps or hinders making this place somewhere you feel worth spending time.

May I ask: was it your intent to come off as condescending and borderline insulting? If not, you ought to be aware that that is how that sentence came across. And I'll tell you for nothing that being talked down to after spending 25 years on the internet does NOT contribute positively towards HN being a place worth spending time in.

It's also a hand-wave, meaning a distraction from my primary point, which is about shadowbanning and the fact that it injects a note of deceit into the moderator/user relationship.

There are pluses and minuses to everything; good points and bad points. Obviously, since I will continue posting on HN, I see more positive points than negative ones. But that does not change the fact, and it is a fact, that I believe (and have no reason not to believe) that shadowbans have a negative effect on the larger community they're practiced on. They make HN a place less worth coming to, and the fact that there are positive reasons to come to HN does not change that.

So, the too-long-didn't-read summary: shadowbans undermine posters' faith in moderators and serve to undermine the credibility of moderation as a whole.


I think more precisely, the nature of the deceit is this: Shadowbans lie to a specific individual about their participation in the community.

This is because the user is not considered part of the community. A shadowban is something you deploy against a user who is presumed to be a malicious actor in the system. It's deceit, but in a similar category of deceit to telling a user the resource is 404 Not Found instead of 403 or 401, because you don't even trust them enough to let them know that they found a resource that exists. It's deceit with the purpose of stymieing further malicious action.
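The 404-masking pattern mentioned here can be sketched roughly like this. The resource table, paths, and names are all made up for illustration; the point is only that an unauthorized caller gets the same answer as a caller asking for something that doesn't exist:

```python
# Sketch of masking 403/401 as 404: callers without permission cannot
# distinguish "exists but forbidden" from "does not exist", so probing
# tells them nothing. All paths and names here are illustrative.

RESOURCES = {"/reports/q3": {"owner": "alice"}}

def status_for(path, user):
    """HTTP status code a given `user` gets when requesting `path`."""
    resource = RESOURCES.get(path)
    if resource is None:
        return 404              # genuinely missing
    if user != resource["owner"]:
        return 404              # forbidden, but reported as missing
    return 200
```

A shadowban applies the same idea to a person instead of a URL: the system's answer is shaped to deny the distrusted party information.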


> It's deceit, but in a similar category of deceit to telling a user the resource is 404 not found instead of 403 or 401 because you don't even trust them enough

Is it the same? Showing a 404 is a security measure that applies equally to anyone lacking sufficient privileges. Ideally an app should prevent you from getting these, or show a message like "Not found, or no access".

> It's deceit with the purpose of stymiing further malicious action.

Shadowbans are powerful moderation weapons. Too powerful, I'd say. The moderator is prosecutor, judge, and executioner all at once. Who is the lawyer? Who reviews, after a time, whether the shadowbanned person has redeemed themselves? Maybe their bad behavior was just a temporary fit, or even a mental breakdown they have since recovered from. Or they just became wiser grown-ups over time. All very human things that happen, but now they have a harsh sentence applied to them, and they may not even know about it.

Especially at scale, these things should be handled properly. And, importantly, there should be transparency about how the procedures work, what the status is, and what one can do about it.

I also wonder to what degree even more subtle measures than all-out shadowbans exist on the big social platforms. Like setting metrics on a sliding scale to define the extent to which the AI should limit a person's influence on the network.

Maybe this already has a name, but let's call it "shadow suppression". With such measures in place, a person's voice can be diminished for a lifetime without them ever knowing. They get a Like or a Comment here and there, but they will never go viral or even reach the audience they think they are addressing, no matter how good their content and behavior become.
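As a hypothetical sketch of how such "shadow suppression" might work (nothing here reflects any platform's actual code; every name and number is invented), a per-author multiplier would quietly scale down how many followers a post is fanned out to:

```python
import random

# Hypothetical "shadow suppression": rather than an all-out shadowban,
# a per-author factor between 0.0 and 1.0 silently shrinks a post's
# reach. All names and numbers here are invented for illustration.

def audience_for(author, followers, suppression):
    """Return the subset of `followers` the post is delivered to."""
    factor = suppression.get(author, 1.0)   # 1.0 = full, unsuppressed reach
    keep = int(len(followers) * factor)
    return random.sample(followers, keep)
```

A suppressed author still gets the occasional Like, as described above, but their effective audience is a fraction of what they believe it is.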


Normally you don't get any of that process, because you don't have any right to use someone else's service. They can ban you for having red hair or blue eyes if they please; they would be foolish, but not in the wrong.


"They can ban you for having red hair or blue eyes if they please and they would be foolish but not in the wrong."

I am not sure if that is true anymore, with the implementation of anti-discrimination laws.

I mean, practically, it is still the internet, but if your service is public and beyond a certain size, someone might indeed sue you for doing that.


Sure, but scale that lack of rights to platforms with millions or billions of users, and you get effective dictatorship: platforms wielding too much power that extends well beyond their walled garden, impacting society as a whole. The question is often raised whether e.g. FB should be considered a utility service and have its policies tightly regulated.

Edit: I submitted a separate Ask HN on "shadow suppression": https://news.ycombinator.com/item?id=28344492


It's an odd dictatorship that someone voluntarily submits to.

Opting out of Facebook's decisions on what you may say is as simple as closing your account.


People’s desire to fit in will make them give up all sorts of rights. Pay attention to history and this is pretty apparent.


It's a bit of a catch-22 though, isn't it?

If people can make sound independent judgment, we don't need a law constraining what private companies must allow on their services because private users will find the censorship reprehensible and will voluntarily leave. We need the First Amendment to constrain what government can do because people cannot voluntarily leave a government; the same is not true for their Facebook account.

And if people cannot make sound independent judgment, and their desire to fit in will make them give up all sorts of rights... How can we trust them with the freedom to say and hear unfiltered information? The first demagogue who comes along and convinces them they must do what he says or they don't fit in will have the whole user base in their power, right? In such a context, it is perhaps necessary to allow the media its freedom of the press to transmit what is true and refrain from transmitting what is false.


> How can we trust them with the freedom to say and hear unfiltered information?

Why does this always come up? Like this exact example: were there no others? Do you know how I read this? "I think I'm smarter than everyone, and everyone should look to me for the truth".

> In such a context, it is perhaps necessary to allow the media its freedom of the press to transmit what is true and refrain from transmitting what is false.

This is mostly what we have now. Compare the divide between the parties to the Grand Canyon. This is the result. Why? Because of opinions. And what does the left do when they can't disprove an opinion? Cancel/censor.

We also need the First Amendment to apply to corporations. The left found the loophole and is giddily exercising it.


I'm not sure what you mean by "the left found the loophole," can you clarify?


The government cannot directly limit speech except in certain, very limited, cases. Instead they get corporations to do it for them, which is currently very legal.


Well said.


Perhaps it's time we change these laws, so that anybody creating a place to talk with others can't have any rules banning anybody. Perhaps it's time we pass laws that force companies to abide by the First Amendment.


You foster and shape a community by first telling people what you will accept, and then by banning people whose behavior doesn't comport with those standards. Here we sit in a functional community with such standards, discussing this matter politely with one another, and your argument would be to remove the standards that make this community worth chatting in.

Just because you and I are reasonable doesn't mean the overarching internet is. It's manifestly not. It's extremely likely that a large part of the signal that makes this community valuable would depart if it were filled with 10x as much noise.

Let people make their own communities with different standards if they don't like this one.


> It's extremely likely that a large part of the signal that makes this community valuable would depart if it was entirely filled with 10x as much noise.

What does this say about those participants who do leave? The entire point of different opinions is different points of view on the same subject. Those leaving simply because they don't like the general narrative do not abide by this principle and probably shouldn't have been in the community in the first place. Those leaving because they experienced daily abuse, that's reasonable. I still don't think they should leave, but instead grow thicker skin.


I don't want to read through 10x as much stupid, offensive, or otherwise bad content in order to read the same amount of insightful content. Life is finite. Overwhelmingly, the sort who get themselves banned are sharing crazy. I don't think I'm missing much, or indeed anything. Why shouldn't I embrace less crazy in my life?


> Overwhelmingly the sort to get themselves banned are sharing crazy. I don't think I'm missing much or indeed anything. Why shouldn't I embrace less crazy in my life?

Because when someone is censoring you can't know if it's crazy or not. You just assume it is, like you are now.


Nobody's obligated to grow thicker skin to use Facebook. In fact, Facebook would prefer to not put that obligation on their users; it decreases adoption and retention numbers.


And decreases the ability for people to hear harsh communications. You want to take over the US? Just stand right in the middle and start saying mean things.


There are very few online fora I am aware of that have any kind of arbitration or check on the exercise of administrator or moderator authority.

Facebook is one (which is why they will be reinstating Trump's account in the future: https://about.fb.com/news/2021/06/facebook-response-to-overs...).


Yes, but that example refers to a very high-profile account. I wonder if the average person will get the same amount of attention (or at least a reasonable amount) in similar cases. Also, this refers to a very noticeable account suspension, not a shadowban that may stay under the radar.


Some users are simply not valuable participants. The kind that get shadowbanned can include those spamming Viagra, scams, and other nonsense.

I think that failing to get rid of that content and those malicious participants (who are not community members) undermines faith in moderation.


Other nonsense? As in viewpoints that are disagreeable to the moderators in charge?

There’s a clear difference between male enhancement spam and a guy with a poorly thought out comment that most people find disagreeable. A middle schooler can spot the difference.

Back in my day we welcomed those poorly reasoned or flat-out wrong comments: you would attack them with logic and use them as opportunities to find the truth of a matter. And the community reading it would revise their priors, and the needle would move closer to the truth.

Today those comments get censored, leading to much of the awful discourse you see on FB and Reddit these days. You can’t discuss sacred cows without getting banned from a community or shadowbanned altogether.

The ridiculous rhetoric on the left these days is a direct result of this phenomenon.


No, just the kind of stuff that our customers didn't want, or that was illegal.

Pirated movies, scams that purported to offer pirated movies, people trying to steal passwords, people trying to hack our users.

I think your accusations are aimed at the wrong person.


Read the parent comments; this isn't about your specific situation. You defended censoring, saying it's sometimes useful to essentially protect others from what you view as nonsense. This is the problem. What is "other nonsense", and how much of the moderator's bias is in it? If someone is posting information, leave it alone. If it's wrong, it'll be dealt with in short order by someone refuting it with more information. If it's a spam account just posting marketing links, then ban it. These users were not posting marketing links in their private messages.


> How is it not? Shadowbans by definition are deceptive

The question is not whether it's deceptive, but whether "it has a disastrous effect on the forum or community". It is deceptive. Does it have a disastrous effect on the community? It doesn't appear so to me, so I think that needs supporting evidence.

> May I ask -was it your intent to come off as being condescending and borderline insulting? If not, you ought to be aware that is how that sentence came off.

How a sentence is interpreted has a lot to do with the context the person reading it is in. I didn't intend to be dismissive, but I think the point still stands, because I was actually trying to make a point. Why not just use one of the sites that doesn't do shadowbanning or censorship instead? This isn't some sly insinuation you should leave. It's an actual question about what is different here from there in the rules and moderation, and how and why that might contribute to how the community acts at each place respectively.

> It's also a hand-wave -meaning a distraction from my primary point which is about shadowbanning and the fact it injects a note of deceit into the moderator/user relationship.

No, it's a real question that you decided wasn't relevant so then went out of your way to avoid.

> but that does not change the fact ...and it is a fact... that I believe (and have no reason not to believe) that shadowbans have a negative effect on the larger community that they're practiced on.

It hasn't been shown to be a fact. You haven't really shown how it could be negative to the community, much less provided evidence. "It erodes faith" is unsubstantiated, and even if it were true, it hasn't been shown how that actually results in a problem (I'm not sure there don't exist forums with hated moderators that still function fairly well).

Shadowbanning is censorship and it is deceitful, but it's also aimed squarely at those the moderators think are not part of the community, or at least not productive parts. The whole point of it is to keep those people from realizing their account is banned right away and starting a new account to continue the behavior.

I would argue that for the most part, this has a positive effect. There are plenty of dead posts which you can see too if you want which provide little or no useful contribution to the discussion or the community.

> shadowbans undermine posters' faith in moderators and serve to undermine the credibility of moderation as a whole.

I haven't observed that. I think most people here trust the moderators to use that sparingly, and there's also a system which appears to let the community see and reverse shadowbans in the case it's incorrectly applied or the banned person in question provides more meaningful content later.


Most people aren't aware that shadowbanning occurs, how it is used, or to what extent. By the very definition, they don't see shadowbanned comments or content.

I’m not sure how you can think people approve of something they aren’t even aware is happening.

It’s not just deceptive to the shadowbanned user, it’s deceptive to the whole community, too. How can an average user trust that the content they’re viewing isn’t entirely astroturfed? Or alternatively, 100% organic?


> I’m not sure how you can think people approve of something they aren’t even aware is happening.

Because people often trust people based on reputation, results, or other proxy measures rather than reviewing every action they take.

> It’s not just deceptive to the shadowbanned user, it’s deceptive to the whole community, too.

It is. At the same time I think most people don't care, and are fine not having every moderation action announced to them.

Additionally, as I've stated multiple times here, and so have others, HN allows anyone to opt into seeing these comments from shadowbanned users, so HN doesn't even follow the same ideal of the problem case you and others are putting forth. You'll get a lot farther pushing this idea of shadowbanning being bad and a problem if you actually respond to the reality of the situation you're arguing about here, and the counter-arguments being put forth, rather than just blindly repeating the same thing.

> How can an average user trust that the content they’re viewing isn’t entirely astroturfed? Or alternatively, 100% organic?

How is that anything to do with shadowbanning or moderation? That's a problem entirely separate and that shadowbanning and moderation of the type we're discussing has nothing to do with.

But if you really want an answer to this ridiculous question: you can go into your profile here, find the "showdead" option, turn it on, and then you'll see all the comments you're complaining are deceptively hidden from the users. And if you don't trust that these are all the dead comments, you can't really trust the moderators at all (and there's no more or less trust here than on any other site that says it does or doesn't do something).


For the record, I’m not talking about HN, I’m referring to Facebook and Reddit particularly. HN handles this issue better than many platforms. Apologies if that wasn’t clear.

So users are okay with being deceived and ultimately don’t care? Let me know if I’ve misunderstood your argument.

That may be true. And knowing human nature you may be right. I disagree that it’s a morally acceptable way to moderate a community or forum, though.

People lose trust when they find they’ve been shadowbanned, or discover that the practice is used. They rightfully understand that the discourse is being manipulated in an underhanded, opaque way.


> So users are okay with being deceived and ultimately don’t care? Let me know if I’ve misunderstood your argument.

Users who expect moderation expect a certain level of housekeeping and don't care whether they are exposed to all of it, and I would guess a great many don't want to be bothered with it.

The important distinction here is that being shadowbanned is being excluded from the community, and it is done to keep these people away from and out of the community even more effectively than regular banning would. The community expects to see content from the rest of the community; these people aren't part of that group, regardless of whether they still have an account and can log in, so the amount of deception to the community is little to none.

The alternative to shadowbanning isn't letting these people post; it's banning. The desired outcome in both cases is that this person is no longer allowed to be part of the community. The actual outcome is that regular banning often just results in people making a new account and continuing with the same behavior. The only deception of shadowbanning towards the community is that you're not explicitly saying "hey, these posts we tried to stop from being made? We're letting them be made but just hiding them from view, since we don't want them here in the first place."

If I'm going to get mad about that, I might as well get mad at the police or a business owner for taking down some banner that was hung illegally across another business, one that I would have seen on my commute if they hadn't acted fast enough. Would I feel deceived that some random, unwanted message someone decided to put up was removed without me being able to see it? No.

> I disagree that it’s a morally acceptable way to moderate a community or forum, though.

If every forum was like that I might agree. But groups of people have the right to choose how to police themselves and there are plenty of other communities out there that do it differently.

> People lose trust when they find they’ve been shadowbanned

Those people are being kicked out. The community, or the mods, want them to lose trust and leave in most cases. I don't worry that someone I think deserves to be in jail is disillusioned because they don't think they deserve to be in jail. Or, if I worry about it, it's not in a way that makes me think they don't deserve the punishment.

> or discover that the practice is used.

I think that's very few people.

> They rightfully understand that the discourse is being manipulated in an underhanded, opaque way.

All moderation is this, whether done by the few with extra powers or outsourced to the crowd. Moderation is censorship. Censorship isn't always bad; like most things in life, it's a matter of extremes being the problem. Too little or too much both have issues, so we generally fluctuate as a society around a sort of middle area.

You can call it underhanded and opaque all you want, but it's only that to those that have already been excised from the community. To everyone else, it's really no different than if that person was banned, except they're less likely to come back for a while.


You notice how all of your examples are about HN and not FB? Where's the option on FB to turn on banned posts? And where's the option turned on by default on HN?


That would be because A) this thread diverged from that when the assertion was made that all shadowbanning in any forum is harmful to the community, and B) as noted elsewhere in this discussion, FB isn't shadowbanning; they're denying sends, and a UI bug later removes the indication that the send failed.

> Where’s the default option turned on on HN?

I'm not sure what you're asking. If you're asking if showdead is on by default on HN, no, it's not, and I don't think it should be.

The whole point is to make it so people who are not contributing usefully to, or worse, are actively harmful to, the discussion and community are less likely to cause a disturbance. Any one of those people can easily tell if they are shadowbanned, if they decide to check, just by opening up an anonymous browsing session and looking at the same discussion. They then have the option of trying to contact the moderators to make a case, continuing on and trying to align better with the community in future comments in the hope that someone vouches for them (if they're aware of that mechanism), or making a new account and starting over.

But ultimately, even if they don't know they're shadowbanned, as long as they try to align their behavior with the community, I think there's a good chance they'll be vouched for enough to be un-shadowbanned after a while. When that doesn't happen, from what I've seen, it's because they haven't really made an effort to change. For example, a dead user fit2rule posted yesterday about how they've been dead for a few months. I looked at their comment history, and the vast majority of their comments are single-line responses. Those aren't necessarily bad, but they probably don't cross the threshold that makes people decide they contribute enough to vouch for them. At the same time, the occasional comment of theirs with more than a sentence that seems to actually engage often doesn't show as dead, as someone must have vouched for it because they thought it was worth being expressed. If they had kept up that type of engagement, I think they would not be dead at this time.


> the assertion that all shadowbanning

How can you know this? The community is what it is as a result of the discourse that happens here. If you remove one person, that discourse, and therefore the community, has changed. So whether shadowbanning is harmful is more a matter of opinion about whether all voices should be heard or not.

> I'm not sure what you're asking. If you're asking if showdead is on by default on HN, no, it's not, and I don't think it should be.

I know it's not, and I think it should be. It's quite easy these days to have a pop-up say "want to see all of the conversation? turn off this setting" and guide the user through it, with an easy way to turn it back on. In fact, ideally, this censoring would happen on the client, configured by rules created by the owner.

> as long as they try to align their behavior with the community

You do realize that everybody aligning means everybody is the same, right? What happened to diversity? (Cue the "some opinions don't matter" statement.)


> How can you know this?

What? You asked me why my comments are about HN and not FB, and I said it's because up-thread a wider assertion was made about all shadowbanning, which is what we're talking about at this point.

> If you remove one person, that discourse, and therefore the community, has changed. So whether shadowbanning is harmful is more a matter of opinion about whether all voices should be heard or not.

The people who own and/or run the forum ultimately control the community. They may let the community exert pressure on them, but bottom line, they control it, and anyone who's under some other impression needs to come to terms with reality. Those people want different types of communities, and will take different steps to ensure they get what they want. People who do not like those steps, or the communities that develop, have the choice of exerting pressure on the people who run it or on other community members, or they can choose to go elsewhere.

HN is not a public resource. It's a private resource that admits the general public. If you want to make a case that FB is special because it's so big and everyone is on it, that's one thing: clearly define why and what the criteria are, and we can discuss whether that makes sense, breaks down in practice, or could be gamed. But suppose I stand up a little forum with dozens of users to discuss the specifics of care and restoration of the Datsun 240Z, I'm contending with a persistent forum spammer who's posting gratuitous messages about penis enlargement, and I find that banning works for minutes while shadowbanning seems to actually solve the problem in most cases. I think it's ridiculous to expect that most of the small community would be up in arms over how I was being deceitful to them by not showing them this spam.

I get that you might think FB and HN and Reddit are big public resources and might have to obey a different standard. If you do, please outline what makes them different and how we determine that. If you don't, and think everyone should obey this standard, please explain why my theoretical little car forum, entirely maintained and moderated by me, should even care what the forum members think if I'm happy with the community as-is, without any of the people that might have left because they don't like how I run it?

> You do realize that everybody aligning means everybody is the same right?

Why'd you even bother to post this? A half moment's thought would make it obvious that I'm talking about aligning a very specific subset of items, namely behavior, as I specifically stated.

Communication is only possible with a shared understanding of some base. At the lowest level, that's generally language, but it can be extended to other norms to make communication easier and less error prone.

> (cue “some opinions don’t matter” statement)

It's not that they don't matter, it's that they don't matter in some contexts to some people, and are thus inappropriate. Does Joe Bob's weekly rant about whichever politician currently has his ire matter? Possibly. Does it matter, and is it appropriate, on a small car enthusiast forum? Nope, and he has no right to have it shown there, nor any expectation that if he posts it there others should see it, whether he thinks they do or not.


> What? You asked me why my comments are about HN and not FB, and I said it's because up-thread a wider assertion was made about all shadowbanning, which is what we're talking about at this point.

You completely ignored the question. How can you know that shadowbanning did not negatively affect this community? You cannot know that. Where does the HN that never shadowbanned exist for you to draw your comparison from? This is opinion, not fact. But, I'll let you move the goalposts. HN is not nearly the size of FB however.

> People that do not like those steps, or the communities that develop, have the choice of exerting pressure on the people that run it or on other community members, or they can choose to go elsewhere.

Pressure was exerted, censorship continues. What recourse does one have against the largest social media company in the world? The left is outright giddy at this thought.

> HN is not a public resource.

Given that no login is required one could argue it is public, given that recent scraping judgement.

> It's a private resource that allows the general public. If you want to make a case that FB is special because it's so big and everyone is on it, that's one thing.

I do want to make that case, but not related to size. Any, ANY, forum or anywhere where discourse happens cannot censor anything but spam. Repeated messages, messages that are clearly marketing, etc. Instead you have individuals who are not qualified playing judge, jury and executioner. Judges recuse themselves when they can't be neutral. How many forum moderators do the same?

> But suppose I stand up a little forum with dozens of users to discuss the care and restoration of the Datsun 240Z, I'm contending with a persistent forum spammer posting gratuitous messages about penis enlargement, and I find that banning works for minutes but shadow banning actually solves the problem in most cases. I think it's ridiculous to expect that most of the small community would be up in arms over how I was being deceitful to them by not showing them this spam.

Absolutely nobody but the scam artists will care about this type of censorship. But note that you're using marketing spam to justify censorship of others opinions.

> Why'd you even bother to post this? A half moment's thought would make it obvious that I'm talking about aligning a very specific subset of items, namely behavior, as I specifically stated.

You didn't change the outcome, we're all still the same.

> Communication is only possible with a shared understanding of some base. At the lowest level, that's generally language, but it can be extended to other norms to make communication easier and less error prone.

Like censorship? Yea that would certainly streamline things.

> It's not that they don't matter, it's that they don't matter in some contexts to some people, and are thus inappropriate.

And therefore we should censor them because they absolutely cannot be useful for anybody else? How about this one:

Someone's BS filter is tuned mostly OK, but it still needs some work. You censor a false post and they miss an opportunity to do exactly that tuning. What you're going to end up with is a bunch of people who can't think for themselves who require someone telling them exactly what to do and think. It's been known for a while that the left wants the population following exactly everything they say. And given that all things are cyclical, we can easily see that it's a reversion to the style of government the US came from. Whether you are knowingly complacent or not, I'm not sure, but you're fighting to bring that style of government back.

> Does Joe Bob's weekly rant about whichever politician currently has his ire matter? Possibly. Does it matter, and is it appropriate, on a small car enthusiast forum? Nope, and he has no right to have it shown there, nor any expectation that if he posts it there others should see it, whether he thinks they do or not.

Discourse happens where discourse happens. And I doubt Joe Bob would randomly spout out his opinions; usually someone would insert some slight reference into everyday communications. The left does this one a lot.

But by all means, let's use this to censor?


> You completely ignored the question. How can you know that shadowbanning did not negatively affect this community?

You're the one maintaining that it does. Why try to force me to prove a negative, when you've not supported your own assertions in the first place?

> But, I'll let you move the goalposts. HN is not nearly the size of FB however.

No goalposts were moved by me. If someone takes a specific situation and makes a statement about it that generalizes across everything, I think it's pretty clear that's when the goalposts are moved, and claiming the goalposts were moved later, when people actually use examples from the wider world, is just a rhetorical tactic to cover that up. If you don't like having to defend the claim against a wider set of cases, simply back off that portion of the claim and say maybe it doesn't apply to everything, and don't defend it. It's simple, and it doesn't mean you're wrong; it just means you're willing to revise your argument to be more specific and stronger in the face of facts.

> Pressure was exerted, censorship continues.

Presumably it wasn't enough to matter then? Should one customer control a service? Should one percent?

> Given that no login is required one could argue it is public, given that recent scraping judgement.

Publicly available is not the same thing as a public resource, but if you want to actually use that as some sort of evidence, some indication of what you're referring to would be useful, as I'm not going to google "recent scraping judgement" and assume we're working off the same facts and referring to the same thing. I would read a provided link though.

> You didn't change the outcome, we're all still the same.

I don't think so. Your claim is now that aligning everyone (on behavior, since I clarified that obvious point and you stuck by your statement) means everyone is the same. That's obviously wrong. Just because you and I agree not to yell epithets at each other here and converse in a respectable manner does not mean we are the same, as is obvious by how we disagree on things.

> Like censorship? Yea that would certainly streamline things.

Yes. One of the definitions of censorship is the prohibition of extreme things. I suspect there are a great many places you support censorship, as almost everyone does. I don't let my children speak to me or their siblings in certain ways in my house. I would be disappointed in them if they did so outside my house, and depending on which child and their age, might enact some punishment. I am censoring them, and I believe it's both for their own good and for the good of those they interact with. I would argue the vast majority of parents do this. Every community does this in some manner (even if punishment is just ostracization).

The problem with taking a hard stance of "censorship is bad" is that you then feel compelled to think of something as bad when it's called censorship and is censorship, but when viewed on its merits isn't actually bad. It's similar to how "ads" have been so vilified today that people take nonsensical positions, because to them the word embodies something. Business names above shops and on doors are also advertising, but clearly useful and informative advertising of what's inside. Similarly, there are types of censorship, such as a community enforcing its norms, which often (though not always) are mostly beneficial, at least for the community overall.

Does this result in some people feeling like they've been targeted unfairly? Yes. Were they actually targeted unfairly? That depends quite a bit on the circumstances, which include the personal responsibility of the person and what the community is trying to enforce. Whether a person that's breaking the rules of a community gets to do so regardless of the community's wishes should depend on the behavior and the norms. Breaking codes of conduct in a purely opt-in community, where it was a choice to do one thing over another? That sure seems like someone abdicated their own personal responsibility to me. Being punished because of some aspect of your birth such as gender and race? That seems like a problem, unless the community's purpose is to be a place for such people and you don't belong.

> What you're going to end up with is a bunch of people who can't think for themselves who require someone telling them exactly what to do and think.

As if shadowbanning is mandatory across all aspects of life and everyone will use it with the same criteria? That seems fairly far fetched given what we're talking about.

> It's been known for a while that the left wants the population following exactly everything they say.

Please, don't even start with assigning blame to one side. It's trivial to find instances of bad behavior on both sides along this spectrum, and the only reason you wouldn't see it is if you hadn't bothered to look (and many people haven't). All the sources on the right do exactly what you complain about: any person that doesn't toe the party line is not given the opportunity to have their position heard, and you're getting a curated version of the facts like everyone else.

Preventing shadowbanning, or banning, or anything else won't help this. There are natural effects of communities that do this well for most cases; those are just useful to weed out the persistent bad actors that aren't acting in good faith anyway. If you actually care about different views and getting information through to people, you need to branch out and actually engage different communities. If those communities silence even coherent, within-the-rules interactions, then find a different one, since that one won't be receptive to that content no matter what, and forcing them not to exclude it won't help that community; it will just make it entirely dysfunctional for the things it was functional for previously.

> Whether you are knowingly complacent or not, I'm not sure, but you're fighting to bring that style of government back.

I think your vision is too constrained. You seem to think forcing everyone to hear everyone else is the solution. It's not. Forcing civil rights militants (for lack of a better term) and white supremacists to interact in their own home territory is not a plan to make things better; it's just a way to radicalize them further.

The solution is not to allow anyone to say what they want and make sure it's heard regardless of the community, it's to make people want to actually understand each other, so they're willing to meet people on their own terms and following the rules of the location they're at.

> Discourse happens where discourse happens. And I doubt Joe Bob would randomly spout out his opinions; usually someone would insert some slight reference into everyday communications.

Not in my private forum, not without my say, just as not in my house without my say. I reserve the right to kick out or ignore anyone. And it doesn't matter if that person hears or sees a reference. They know the rules or they shouldn't be there, and "not being able to control myself" is not a valid excuse in the real world.

> The left does this one a lot.

I'm flabbergasted that you seem able to suggest this seriously. To think the left is the only side that censors, when the right also has a long and storied history of it, is baffling. Both sides censor when it suits their purposes. Notable times the right has taken up that mantle include McCarthyism, censoring portions of climate change science (during both GWB and Trump)[1], and the many times religious organizations aligned with the right have wanted to ban certain books.[2]

> But by all means, let's use this to censor?

There's a difference between controlling what a community you run (because you provide what makes it possible) can do and controlling what the wider public is allowed to do. Censorship applied to all aspects of life is definitely a problem, and why we have freedom of speech. But we also have freedom of association[3] as an important part of that, and if you look into it closely, being able to police your own community is an important part of that (if you can't express what you want in your community, you don't actually have a community).

As I noted earlier, if you want to make a case specifically about Facebook and how it should be treated as a public resource (which, to clarify, doesn't just mean something publicly accessible, but something people have a public stake in that the state should ensure has additional protections and requirements, like water or clean air), then go ahead and make that case. I might even agree with you on something like Facebook. But what was stated earlier by rnd0, and what we've ostensibly been discussing, is shadow banning and communities in general, not public resources or even Facebook in isolation.

1: https://en.wikipedia.org/wiki/Censorship_in_the_United_State... (the citation goes into the detail linking Trump)

2: https://www.au.org/blogs/christian-nationalists-censor

3: https://en.wikipedia.org/wiki/Freedom_of_association


Way too long of a post. I don't care about your position anymore and maintain mine.


I've got bad news for you about HackerNews then, HN has 'dead' users who can post but are only shown if you specifically opt-in and they can't be replied to at all.


But those comments from dead users can be "vouched" for if they appear to be in good faith, and one or more of those vouches seems to take people out of dead status, I think. I'm not sure what points threshold grants that ability, or how long it has existed, but I do it every few months or so when I see a comment that seems perfectly fine yet is dead.

I think that might be a pretty good mechanism, all said and done. Someone acts in a way the community doesn't accept enough times and they go dead; then, if they act acceptably, they start getting people responding to them as they come out of dead status. They only get attention, and positive attention, from acting responsibly.
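The dead/vouch flow described above can be sketched in a few lines. This is a toy model, not HN's actual implementation (which isn't public); the vouch threshold and field names here are assumptions for illustration:

```python
# Toy model of the dead-comment/vouch mechanism. HN's real rules are not
# public; VOUCH_THRESHOLD and these field names are invented assumptions.
VOUCH_THRESHOLD = 2  # vouches assumed needed to revive a dead comment


class Comment:
    def __init__(self, author: str, text: str, dead: bool = False):
        self.author = author
        self.text = text
        self.dead = dead       # shadow-moderated: hidden from most readers
        self.vouches = 0


def visible(comments, showdead: bool = False):
    """Ordinary readers skip dead comments; showdead=yes reveals them."""
    return [c for c in comments if showdead or not c.dead]


def vouch(comment: Comment) -> None:
    """Enough vouches from established users takes a comment out of dead status."""
    comment.vouches += 1
    if comment.vouches >= VOUCH_THRESHOLD:
        comment.dead = False
```

The point of the design shows up in `visible()`: the dead author can still post, but gets no engagement until other users actively vouch the content back in.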


HN would be a useless trove of violence and hate if all [dead] comments were automatically unhid. Folks who are into 8chan might be into a community like that, but I personally much prefer the moderated approach taken today.


> HN would be a useless trove of violence and hate if all [dead] comments were automatically unhid.

I read with showdead=yes and it is nowhere near that description. In fact dead posts are not even that common on most articles. And most of the dead posts, while on average low quality, are very rarely hateful or anything like that.


The question is: are they few and not quite as bad because of the shadowbanning, or in spite of it? Those comments can't be responded to without vouching for them, and I doubt most people are going to vouch for them just to respond unless the comment has some merit. How many people just stop commenting because they are getting no interaction? How many additional posts are prevented because people can't respond to those comments?

Like many policies that might actually have an effect, I'm hesitant to say exactly how much it achieves without data comparing what it's like without it.


I read with showdead=yes as well and can vouch for your experience. You are not contradicting the parent post, IIUC, however: I take it seattle_spring was talking about how HN would be if it lacked the system or something like it at all.


There used to be one dead user that would post... wild racist stuff due to mental illness. But lately yes, I agree with this characterization.


Vouch takes the comment out of dead status. I then see other dead comments by the same user so I don't think it takes the user out of dead status.


I think it depends. I've vouched for a comment and then seen subsequent comments not be dead. I suspect there's a bit more to the mechanism, but I think it can lead to not being dead anymore, even if it just triggers a mod re-review of the user's comments to decide if they get a second chance.


I'm not even remotely surprised, and that doesn't really change my point.


Shadow moderation became the norm ages ago to deal with spam. You can debate its limits, but taking a hardline position smacks of somebody who has never dealt with the real-world issues yet is cocksure about how things should be done.


>You can debate its limits, but taking a hardline position smacks of somebody who has never dealt with the real-world issues yet is cocksure about how things should be done.

The canard about what happens when you assume applies in this case. Very much so.

I've moderated forums and I've never had a problem with simply deleting spam out-right when it came up. Same with problematic users and posts.


It's one thing to moderate something in the hundreds to thousands of users, and something entirely different in the tens of millions. When you start having multiple groups of attackers and whole underground markets aimed at taking advantage of your users, simply banning individuals / deleting individual posts no longer cuts it.


Then what? You’ve basically said you have a job you can’t do, and you choose the lazy method of banning people, infringing on a very important right. If you can’t keep up with the volume, then quit?

So many people in this thread are talking about death threats from being a moderator. If being a moderator is such a dangerous job then why do people do it? One has to wonder, they’re either getting paid very well, or they like the power. I’m guessing it’s the latter in almost all cases.


I'd maybe do that for the manual spamming but it's foolhardy against anything machine generated. You're just helping people who couldn't care less about anything but increasing clicks understand how they get detected.


>I'd maybe do that for the manual spamming but it's foolhardy against anything machine generated. You're just helping people who couldn't care less about anything but increasing clicks understand how they get detected.

I haven't had to deal with it recently (since 2015 or so) but at the time captchas seemed to help with the automated stuff and what slipped through we deleted by hand.

Times change, of course, so I see your point


The assumption you are making is that all the users, and specifically the users that need to be shadow banned, are just normal, mentally healthy individuals. Mostly, they are not.

If you want to waste precious minutes of your life dealing with the mentally deranged for free -- that's your call. But you shouldn't be disturbed by those who don't.


On HN, shadowbans are implemented via automatic flagging.


> Next thing you know, FB will silently(or not) start censoring political opponents, activists they don't like, etc

They already do. [1]

1. https://crimethinc.com/2020/08/19/on-facebook-banning-pages-...


Is there any better idea for how to stop people deliberately harming each other with bad information? Particularly in light of confirmation bias and the level of effort required to disprove vs the amount of effort required to make stuff up.


It will happen in the US. BigTech is a reaction to the internet. The ruling oligarchy lost control of the narrative. They need to control information flow again.


Being successful doesn't make it a public square. Don't use Facebook if you don't like it; I don't either.


Our legal system (fortunately) hasn’t accepted the theory that the penalty for being popular is that you get to be nationalized. How would you feel if you threw a lot of parties in your house, and then one day the government came to you and said you now have to allow everyone to party at your house?


You're making an interesting point which I resonate with - but I want to elaborate and crystalize my thoughts a bit more. I am a staunch supporter of civil rights and we should examine the extremes.

Extreme 1: A social network with everyone on the planet addicted to it becomes impossible to avoid. Even if one doesn't want to use it, it becomes a necessity for communicating with others. Twitter is an example, where it is the official source of information from government agencies down to the local fire department, embassies, many politicians, etc. Recently they're requiring an account (and phone number) to access what seem to be the official channels of information for many nations. With millions if not billions of users, it is a significant chunk of the entire world's population. Just like any other industry (big oil, big tobacco), big social media and big tech need to be regulated. There is a spectrum between a private forum of 10k users and a megacorp able to even control election outcomes. What's even more egregious is that the algorithms that govern the engagement of 1B+ users are developed by people in Silicon Valley (around ~300 people give or take, a vanishingly small fraction of the US population). In private. Behind closed doors. We oughta see how this is terrifying (even if I agree with their stance and moral principles).

Extreme 2: Big tech merges with government in the business of law enforcement. A small party, where government has absolutely no business, gets interrupted by cops who were alerted by the "thought algorithms". These thought algorithms were developed by government + big tech by fusing various channels of information, and it was deemed that this party is too pick-your-fav-niche-topic and cannot be tolerated. It is my business, my party, my friends, and my agenda. No one should have the right to interrupt unless I am violating laws and I am granted due process.

Obviously, both extremes encroach on people's civil rights to privacy and access to information.


Why does it matter? You can't argue by an analogy, because first you need to argue why running a social network with 1B+ users is the same thing as throwing a house party.

With great power comes great responsibility. You don't get to be "popular" for free.

And if you don't like it, you don't have to play that game.


> You can't argue by an analogy

Sure I can. I’m doing it right now. Of course, analogies are imperfect, but they make good teaching and argumentative tools.

> You don't get to be "popular" for free.

Fortunately for society, the law disagrees with you. (Also, your way has been tried in the past, and was not especially popular - to the point of revolution in certain Eastern European countries in the 1960s-1980s.)


Because I didn’t throw a party every night for multiple years in a row, inviting everybody in the country, and then, once it started getting super popular, only allow in those who talked like me.

Free speech was considered so important it became the First Amendment. It’s valued even higher than protection.


The Constitution constrains what the Government can do. It doesn’t constrain what private actors like Facebook can do. It never has.

Also your analogy is flawed. Facebook doesn’t do what you suggest today (“only allow in those who talk like me”). They allow almost all speech, with opposing viewpoints, different policy perspectives, different cultures, etc. Arguments happen there all the time.


> The Constitution constrains what the Government can do.

Yes, thanks for the reminder. That’s why this isn’t the government doing it, and instead it’s the left via their means of control. That’s why there are pending FOIA requests for this very thing.

> Also your analogy is flawed. Facebook doesn’t do what you suggest today (“only allow in those who talk like me”).

Only if you take it quite literally. Otherwise are you saying they don’t censor anybody that doesn’t reiterate the same thing the left is saying?


> are you saying they don’t censor anybody that doesn’t reiterate the same thing the left is saying?

Yes, and there are plenty of examples of it, too. Republican governors and politicians, and right-leaning people of all levels of fame, with very few exceptions, continue to have a presence on Facebook and communicate there with their followers. Pick any Republican governor or Senator: they will be there.


So if I go on and post “misinformation”, which is defined by the left as anything anti-narrative (see all the “you ain’t black” claims), then what?


Why don't you just stick to characterizing things as "facts" that are actually demonstrably true? Then you don't have to even worry about this stuff.

Facts don't have a well-known liberal bias, Colbert Report joking aside.


Facts and “misinformation” have nothing to do with each other. At least not what the left has defined it as.


Usually? Nothing.

Occasionally, if it's a common piece of misinformation that's been widely-shared enough to justify manual intervention, a fact-check box may appear underneath. This happens to "facts" with a conservative political slant, as well as "facts" with a liberal political slant. If you run a page, your page can get "strikes" for this sort of thing that will cause you to down-rank in people's pages, but if anything, Facebook has been caught out for tipping towards the conservative side of that scale, suppressing that penalty on some popular conservative pages (https://www.engadget.com/facebook-overruled-fact-checkers-to...).

Rarely, you'll share "misinformation" that is also in violation of community standards (I'll leave that to the reader's imagination) and get a time-out proportional to how often that happens.

To understand Facebook's behavior, it's useful to remember that their goal is growth and retention. They want everybody using the service. I suspect, based on observation of their behavior, that they've discovered for themselves that without those measures, growth and retention are being harmed more than they're harmed via time-outs and fact-checking (i.e., organized boycotts over Facebook being a place where falsehoods spread wildly, people encouraging their more vulnerable friends and relatives to stay off FB so the misinformation parade doesn't convince them to take horse dewormer, etc.).


> This happens to "facts" with a conservative political slant, as well as "facts" with a liberal political slant.

Yes, we've all seen this "fair" fact checking. Hint, if the left is OK with it, the right is furious about it. You cannot be fair to both in a way that is accepting to liberals.

> Facebook has been caught out for tipping towards the conservative side of that scale, suppressing that penalty on some popular conservative pages

As they should, for both sides. But interestingly this makes my point even more. That everybody is quite aware of what censorship was about and is taking place. Yet they put on this falsity and get shills to help defend them because?

> To understand Facebook's behavior, it's useful to remember that their goal is growth and retention.

Right. They are censoring and aligning everybody on FB to the same thought process, which happens to be a majority of Americans.

> I suspect, based on observation of their behavior, that they've discovered for themselves that without those measures, growth and retention are being harmed more than they're harmed via time-outs and fact-checking

When censorship is the result, do you think we should have more than a suspicion? What about the people on the right leaving? They don't seem to care about retaining those? Interestingly, the people left align with the CEO's own political beliefs.


> What about the people on the right leaving? They don't seem to care about retaining those?

Right, and usually they would. Growth and retention is the lifeblood of everything Facebook does; Zuck would rather cut off his own right foot than intentionally lose users.

Which is why the only rational conclusion I can find with the evidence I see is that Zuck is either afraid that continued inaction was eventually going to land him in jail (or put him through years of Congressional investigations) or they have hard numbers showing that for every X people on the right they retain, they are losing N*X other users, N > 1.


Or, and in my mind more plausibly, this is your typical democrat trying to cancel and persuade others to align to their point of view. How much social flack do you think people get for not being on FB? This entire thing is an effort in persuasion, and if that fails, exile.


The screenshots show two red exclamation points showing it wasn’t delivered


You just defined shadow banning.


It is censorship. Calling it shadow banning doesn't make it okay. Bigger and bigger groups of citizens are revolting over censorship: Bill Maher and tons of Americans. This builds into a revolt against Facebook.


Right or not, I think charitably, the person you are responding to is pointing out that this is not a new thing in social media sites. It's a decades old technique at this point.


I recall vBulletin 3 software having a mode to specifically make the website slow and frustrating for certain users.

https://www.managingcommunities.com/2009/09/14/troll-hack-gl...


"Miserable" mode. It was a mod introduced in vBulletin 2 by community members.

When installed it extended the idea of banning or shadow banning as it degraded the user experience for the person in question. Basically giving the impression that the site was malfunctioning to the point that it was frustrating for the user and they left the site for a while. This included random delays to the page loading, and sending them to a different page and ignoring their intent.

https://www.vbulletin.org/forum/showthread.php?t=93258
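A degraded-experience hack like that boils down to a few lines of request handling. Here's a minimal sketch, not the actual mod's code; the flagged-user set, delay range, and fake-error rate are all invented for illustration:

```python
import random
import time

# Sketch of a "miserable users" mode: flagged accounts get random page-load
# delays and intermittent fake errors instead of an outright ban. The user
# set, delay range, and failure rate below are illustrative assumptions.
MISERABLE_USERS = {"spammer42"}


def serve_page(user: str, render) -> str:
    """Serve a page normally, or degrade the experience for flagged users."""
    if user in MISERABLE_USERS:
        time.sleep(random.uniform(0.2, 1.5))   # random page-load delay
        if random.random() < 0.3:              # pretend the site is broken
            return "500 Internal Server Error"
    return render()
```

Normal users never hit the slow path, so to them the site works fine; the flagged user just experiences a site that is inexplicably slow and flaky, which is exactly the "malfunctioning" impression described above.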


I believe Hacker News does this too (disclaimer: I have never experienced this firsthand, and the people I hear it from are obviously not impartial sources).


I’ve experienced this. If you are downvoted in quick succession you will get a “you are posting too quickly” error if you try to reply to anything… even if you didn’t post anything except the original post. Seems to last about a day.


I think that's just the standard rate-limiting; there is supposedly also a thing where Hacker News will throttle the connection as well.


They did that to me years ago, like ten years or so. Took me weeks to figure out the site wasn't slow when logged out! They don't do it anymore (and thankfully, because if they did it to me now I'd slowloris the hell out of this site).


> I'd slowloris the hell out of this site

I think most people who have the technical skill to do this effectively don't telegraph their intentions on what is, to all intents and purposes, social media.


I said they don't do it anymore. I'm not telegraphing any intentions. It would be like saying I'll shoplift in a store that no longer exists.

But saying people don't broadcast their criminal intentions on social media... I'm sure that's more common than you think :P


That's rate limiting. This is enabled on accounts if you ever post something "controversial". Shadow banning, where the site lets you think you are posting successfully, has definitely been used in the past on this site and probably still is.


Shadow banning isn’t really something used in private messages, no?



Facebook has been doing this for a while. Pretty sure they have always checked privately sent links. Some years ago I sent a .tk domain to a few friends and it got shadow banned after a few minutes (it was a harmless meme site).


Hacker News does shadowbanning as well.


It was done at the unnamed social network I worked at, over 10 years ago.

To combat spam, mostly. People posting links to scam sites to watch sports games.


When faced with software/malware that was appending links to users' instant messages back in 2005, back-end code to silently drop those users' messages was put in place. This was a messaging service with ~200 million users.


Apparently by Facebook.


There is no mass revolt going on against Facebook. Get real.


Are you kidding?

My entire friends list left Facebook, only to discover that, tragically, Insta was owned by the same dickheads.

The prisoners are perfectly happy here.


It takes a certain amount of hubris to think that because you and your friends left Facebook that this constitutes a “revolt.”

To move that needle requires hundreds of thousands or millions of people to act in concert. Scale matters.


Maybe it is hubris, but you know the saying: "If shoeshine boys are giving stock tips, then it's time to get out of the market."


Hell, I am out of Facebook, but it’s not because they’re preventing me from sending links to Crazytown, USA to my friends. Five years later and still no revolt.


Excellent advice, especially for early adopters: we sometimes lose track of what is really happening with sites that were good and just "playful" at the beginning...


It's not censorship. Guaranteed reach on a post hasn't existed on FB for 10+ years, if ever.

If it did, FB would be a spammer's heaven.


Facebook is still a spammer's heaven. Of course, the spam you find there is a little more sophisticated than "you have won the lottery" emails.


But nobody is banned here. It's banning certain links. When we hear ban we think accounts, not content.


The censorship is intentional, but the silence is a bug in the React code. You can look at the console in Chrome devtools to see: there is an error message informing the viewer that their message was censored. The message is incorrectly hidden when you send additional messages.

Also, if you refresh the page, the messages disappear for the viewer.


This is the correct explanation (thank you). There's no intention to hide the fact that certain links are blocked. The bug is that the failed-send status (not limited to blocked links) only shows an exclamation mark against the last message on web.


But it's that silence which deceives the user into thinking that the message never arriving is a technical malfunction and not intentional censoring. How many times do you think the user would see the censoring message and be cool with it? Maybe once or twice; then they will stop using FB, likely forever, and tell all their friends. But does FB want that? Hell no, they want to retain the user for data collection and ad-showing purposes. So how about just not showing the censoring message, and making the user believe there was just a technical glitch rather than insidious shadowbanning?

I think this "bug" is intentional, done for plausible deniability in case of a lawsuit: "See, there is transparency. It was just a bug in the code. Human error."


Re your last paragraph. Would that really work though? You subpoena the person who wrote the code and ask them under oath. I wouldn’t perjure myself to protect the company, would you?


How do you know the bug is unintentional?


As good as they are at 99.99% of what they do, almost every time a friend brings me an example of Facebook doing something malicious, it turns out to be a simple implementation error.


So they got winners of the Underhanded C Contest working there?


They do it because it is effective. Otherwise people would be more likely to try to circumvent it.


It seems to me FB has been doing this for a long time with news feed posts. Often I'll post something and the friends I check with don't see it in their feed.

Posts with links tend to be censored more often. For example, if I post a link to a charity donation website, my friends often don't see it in their feed; but if I omit the link and only state the name, they see it.


Malicious?


Sort of. It's like shadowbanning/hellbanning. It solves a lot of problems for moderators and prevents angry people from escalating the situation into actually being banned.

I've used it before as a moderator. For the above reasons, but mostly because I'm lazy and don't want to deal with angry people. Still censorship though. I think that ideally censorship ought to be communicated.
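The core trick of shadowbanning, as described above, is that the banned user's posts are stored normally but hidden from everyone else, so the user still believes they are posting successfully. A minimal sketch of that visibility filter (names and data layout are invented for illustration):

```python
# Hypothetical shadowban list; a real site would store this per-account.
SHADOW_BANNED = {"spammer99"}

posts = [
    {"author": "alice", "text": "hello"},
    {"author": "spammer99", "text": "buy pills"},
]

def visible_posts(viewer, posts):
    """Return the posts this viewer should see.

    Shadowbanned authors' posts are hidden from everyone except the
    authors themselves, so they never notice the ban.
    """
    return [
        p for p in posts
        if p["author"] not in SHADOW_BANNED or p["author"] == viewer
    ]
```

The key design choice is the `or p["author"] == viewer` clause: without it, this would be an ordinary delete; with it, the ban is invisible to its target.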


>because I'm lazy

Yeah, you described it well.

Start by understanding, Mr Moderator, there is a hell of a lot of difference between public posts and private messages.


Not if your goal is to censor a viewpoint. They are one in the same whether it's a viewpoint made in public or private - the main point of censorship is to not allow you to communicate certain thoughts. It amazes me that some posts here (not yours, but like the one you're responding to) have already seeded the censorship ground. For them it seems the question isn't whether it's ok to censor opposing viewpoints, but whether they should be told they've been censored. The censorship part is a-ok, but not if you're not informed?


*ceded


That was important, thanks.


I've moderated a forum before. Some people on the Internet are just crazy. If you ban them, they will come after you—maybe even in person. Shadowbanning on forums (setting aside the FB case) is sometimes abused, but it serves a vital purpose: preserving the time and safety of volunteer moderators.


Don't they just get twice as annoyed when they realize that not only were they banned, but they were silently banned and wasted time writing posts that went straight to /dev/null? The only time shadowbanning seems like a good solution is outright commercial spam and link farming, where slowing them down is a win in itself.


Surprisingly if you browse HN with "showdead" on, you'll find quite a few people don't realise. Others realise but keep posting comments they know most people won't read (you can tell in some cases because occasionally someone will mention their ban in comments).

(I browse with showdead on because the volume on HN is small, and every now and again you come across people in the former category who were shadowbanned for one or a few incidents relatively far in the past and who have since acted reasonably, where it's worth pointing it out so they can appeal their ban. Mostly, though, the flagged comments and shadowbans are well deserved.)


I’ve run a forum for 15+ years and found shadowbanning vital in dealing with nutbags. As OP said, some people will pursue you personally and get obsessive. They have one forum they are obsessed with. You have thousands of users that are potentially like this.


Is it difficult to run a forum without the users learning your real identity? That seems like the way to go, but I have no experience with that sort of thing.


It would be possible, though if you participated a lot, you'd leave a lot of writing to be analysed. That would at least deter the casuals.

In my case, I post using my first name and most regulars would know who I am. It's a sports forum and people have recognised me at games, etc. I've had people look up my phone number and call me aggressively about things. I've had countless legal letters and threats. Someone doxxed my parents in the early days. You quickly learn how many unhinged members of society we have.


I think at some point you have to ask whether moderating a forum is really this important.


They get annoyed but not in the same way; the craziest want an audience and if you don't give it to them they generally move on.


"They get annoyed but not in the same way"

I would be careful about that.

"and if you don't give it to them they generally move on."

The craziest have lots of stuff going on. Lots of anger and hate. So if they decide that you are their mortal enemy for shadow banning them, they might not move on.


It's not worse than just directly modding them -- that gives them more direct rage fuel.


It's a normal part of censorship. During wars, some countries have employed censors to intercept letters from soldiers. The soldiers will never know what was censored en route, or indeed whether the letter made it at all.


For another example of such totalitarian shenanigans consider the practice of "shadowbanning" : https://en.wikipedia.org/wiki/Shadow_banning

Ostensibly used for spam/troll countermeasures. In practice used to dick with whoever they choose for whatever reasons please their black little hearts.

Reddit and Twitter are notorious for it. Other "social media" organisms too.

AND ANOTHER MORE IMMEDIATE EXAMPLE

Consider the way that downvotes progressively remove a post from view.

It's a way of crowdsourcing the task of censorship.

Many hands make light work.

And the blame is neatly spread around.

And the actual shape of the censorship, the form being conformed to, is only indirectly controlled by the admins. Thus more blame neatly escaped.

It's goddamn elegant is what it is!
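The progressive removal described above can be sketched as a simple mapping from a post's score to how it is rendered. This is a hypothetical illustration; the thresholds are invented, not the real values any site uses:

```python
def render_state(score):
    """Map a post's vote score to a display decision.

    As the crowd downvotes, the post is rendered fainter, then
    collapsed, then removed from view entirely. Thresholds here are
    made up for illustration.
    """
    if score <= -5:
        return "hidden"      # removed from everyone's view
    if score <= -2:
        return "collapsed"   # folded away; click to expand
    if score < 0:
        return "faded"       # lighter text color
    return "normal"
```

Because the admins only set the thresholds while the crowd supplies the votes, no single actor decides which post disappears, which is exactly the blame-spreading the comment above describes.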


Moderation is not the same as censorship.


Only if you're going to take an absolutist stance that "censorship only applies to governments".

But that's not the only definition. wikipedia defines censorship thus:

"Censorship is the suppression of speech, public communication, or other information. This may be done on the basis that such material is considered objectionable, harmful, sensitive, or "inconvenient".[2][3][4] Censorship can be conducted by governments,[5] private institutions, and other controlling bodies. " (https://en.wikipedia.org/wiki/Censorship)

You can argue whether or not moderation is necessary, but it most definitely is censorship.

The way it is being applied by FB is most certainly censorship and highly inappropriate and destructive.


Do you think HN would be a better place if it didn't have moderation (or, as you put it, "censorship")?


That's beside the point I was making, as I said;

>You can argue whether or not moderation is necessary or not but it most definitely is censorship.

Moderation is censorship, and censorship is evil. It may be a necessary evil (because liability if nothing else) but it needs to be seen and treated as evil. Meaning to be used as sparingly as possible.


Censorship is contextually a good, not an evil. No matter how divergent our positions, there is almost certainly content that both of us, if asked to rate it, would rate negatively. That is to say, the net value of the forum increases as we delete more of it.


What's your definition of evil?


That's an almost impossibly broad question to answer.

In the context of this conversation? Moderation is evil because it is destructive, repressive, exploitable for the cause of tyranny (eg silencing dissent) and is the opposite of that which is good (freedom of expression, freedom of thought, free flow of information).

Like any other violence, there are times when it's warranted, but it's always a balance. The greater good is to keep the forum open, so it's necessary to remove illegal materials. And that's an easy call when what is illegal is child pornography; but what about when what is illegal is a political message, or even just a series of seemingly random numbers?

Sometimes it's the lesser evil (child porn is always the greater evil) and sometimes moderation is the greater evil (repressing political messages or being used in the service of propping up totalitarian regimes).

I apologize if this doesn't make things clearer for you. As I said at the beginning, the nature of evil is a broad question that philosophers have wrestled with for millennia.


Some would say that what is true is good, and what is false is evil, when it comes to communicating or sharing knowledge. In that case Facebook is doing the right thing.


Evil is a fundamental attribution error.

Sure, people do bad things, but it’s not ”evil people” who do bad things — it’s just people, the same people who have the capacity to do good things also do bad things.


How is this different from booing you for saying something stupid/wrong (or otherwise disliked even when smart/true) in a public venue. If anything, downvoting seems more honest and less "censorish" than booing, because you still get a chance to respond.


Downvotes scale much better, both temporally and spatially. Boos take more energy, presence and surprisingly even investment.


Well for one, you get to see who's booing you.


HN does that a lot too... using shadow bans and other techniques...


Hacker News does it. Reddit does it sitewide. Individual moderators do it too. Shadowbanning is an essential tool for moderation, for dealing with trolls, spammers, and abusive people.

You can argue the tool is being used too widely by Facebook, but it is silly to pontificate against the practice as a whole.

You reaaaaallly would not like the uncensored internet.


Hold your horses. It is not "the internet". It is private chat.

You need to understand the difference between communication that is meant to be public and communication that is meant to be private.

The law makes a clear distinction.


Facebook is not a common carrier. You aren't having a private conversation on any of its properties. You're communicating with Facebook and they relay non-public messages to recipients.


It is a private conversation because it is not meant to be made public by its users, not because Facebook isn't a "common carrier".

A piece of paper isn't a common carrier. If I write a note to my wife and hand it to her myself, it is private because I wanted and expected it to be private. If I taped it to a billboard, it would mean I no longer expect it to be private, and it is public.


You’re mixing up two legal concepts: expectation of privacy and common carriers.

Despite phone companies being common carriers (and thus being required to carry communications without censoring them), the Supreme Court has made it clear that people should have no reasonable expectation of privacy when they relay communications through a pay phone. The reasoning was that you give up your right to privacy when you involve a third party to your conversation. See Katz v. United States, 389 U.S. 347 (1967).

So the two can coexist.

Only common carriers are forced to carry speech unimpeded, regardless of its content. Expectation of privacy is irrelevant to common carrier status. The former is about whether it can be intercepted. The latter is about whether it can be impeded.


Katz v. United States held the opposite: “The Government’s activities in electronically listening to and recording the petitioner’s words violated the privacy upon which he justifiably relied while using the telephone booth and thus constituted a ‘search and seizure’ within the meaning of the Fourth Amendment.”

More recent decisions suggest that the third party doctrine, according to which there is no legitimate expectation of privacy in information voluntarily turned over to third parties, is unlikely to apply to private messages on Facebook:

https://en.wikipedia.org/wiki/United_States_v._Warshak

https://en.wikipedia.org/wiki/Carpenter_v._United_States


Crap, my bad. You’re right. Clearly I misremembered the case and should have reviewed it first. I withdraw my point about that.

As you said, Katz did say that we do have a reasonable expectation of privacy with respect to person-to-person communications. However, that has no bearing on whether the carrier can or cannot block the communication. That still hinges on whether the carrier is a common carrier.


I would suggest that blocking and selectively editing messages are different things.

A conversation on Messenger is multiple messages; blocking would prevent the entire conversation. Removing some messages selectively changes the meaning of the other messages.


It's not a private conversation. It's content sent to, and hosted on Facebook's servers. Why should Facebook (or HN, or any site) be compelled to host content that it doesn't want? If you ran a web site, should you be disallowed from moderating what user content is available?


> It's not a private conversation.

It is according to Facebook. Here is a quote from their own website:

> We’re dedicated to making sure Messenger is a safe, private, and secure place for you to connect with the people who matter.


Reddit shadowbans in private chat too. Gmail does the shadowban equivalent. HN doesn't have private chat, but I imagine they'd use the tool if they could.

If you’ve never moderated anything halfway popular you have no idea what “uncensored” means in 2021.

It isn’t early 90s usenet. It’s endless spam, porn, etc


Facebook is not an outlier in this case: Reddit shadowbans affect DMs too.

Not sure why you brought the law up, as far as I know what FB is doing is 100% legal.


Do you not understand what private communication is and how it is different from Reddit?


Serious question: What's the difference, for your definition of "private communication", between a FB Messenger message and Reddit DM?

I can't think of any reason one would be considered private and the other not.


The same way I can expect my phone operator not to change the meaning of texts between me and my wife, while still allowing me to block messages from strangers I don't want to be bothered by.


This doesn't answer my question, which I will restate:

Why is FB Messenger 'private' while Reddit DMs are 'not private'?


No, I actually really would. I grew up with the uncensored internet and it was, on the whole, less abusive and manipulative than today's internet culture.


It felt like an actual tool, or a big box filled with information you could dig through to find the stuff you needed. It wasn't restricted by corporate greed or policed by a variety of groups trying to impose opinions, policies or "culture" on other people while claiming it was for everyone's sake. It wasn't a perfect place, of course, but it wasn't commercialized as it is today. And that's what happened: the years of simplicity were done, and it was necessary to make the Internet appealing and accessible to average people, at a certain price.

Seeing the monopoly of Google in the browser sector, the whole predatory ads sphere, manipulative social media, and the attitude corporations and politicians have towards ordinary people regarding technology, I don't think I want to see what the next big thing in the evolution of the Internet and related tech will be.


It was a tiny, tiny elite who were online compared to today.

As late as 2003, 90% of the world was not on the internet.

Internet culture represents the average person now whereas it didn't at all up through the first Internet bubble and bust.


Not by today's standards. Typical messaging services full of teenagers were utterly filled with what we'd now call 4chan-ish content, hate speech, or cyberbullying. That culture isn't really compatible with the prim and proper parents and other parent-like people who are now on the internet and throwing their weight around.


It’s way more likely to me that this is simply because the Internet used to be the (nearly) exclusive domain of people who were culturally like you: educated, technologically sophisticated, libertarian-leaning affluent white males. Not that you’re necessarily any or all of these things, but if you were around for “the uncensored Internet” you likely felt comfortable with that as your cultural in-group. I say this as someone who is many of these things, and was also around in the “good old days”.

These days people of every cultural background get a seat at the table. That changes things. And that’s to say nothing of the advent of professional, anonymous astroturfing and disinformation campaigns.

Going back would be an abject disaster.


You didn’t make a point in your reply.


You may have missed it in your rush to downvote me.


I did not downvote you at all; I replied because I wanted to discuss it with you. Others have replied, and I think it's really interesting that you think an unruly playground needs to be moderated. Others have pointed out that you can just "go to your corner" or not engage, so to speak. But you seem to think it needs to be moderated. I'm not saying you are wrong; I'm curious as to why.


The problem is that I can “go to my corner” but I can’t stop people with no intention of playing by any set of rules from coming with me.

We’ve gone to our own corner here on Hacker News. And yet there’s moderation to keep discussion on-topic and respectful. In my estimation you likely choose to participate in this corner of the Internet because of the moderation of both link submissions and comments. Without it, HN would devolve into a cesspool over time like every other attempt at unmoderated forums that’s been tried. Well-meaning participants would be driven out by trolls, spammers, and angry people with an axe to grind.


I disagree. I come here because the articles I see and the comments I read are reached by consensus. Bad comments are normally at the bottom of the page and easy to ignore.

There is shadow banning on HN and I disagree with it, but I stand by the above generalisation of why HN seems to work with less moderation than expected.

On sites like 4chan there is some moderation, but again, the few interesting comments that exist get automatically highlighted by the engagement that occurs within the page. Moderation doesn't make that possible; consensus does. Moderation just helps, but I argue the site would work without it, and that's how the internet used to work. Even newsgroups, which used the wrong mechanic to handle consensus-based uploading and suffered from spam, had content that was good and easy to find, without any moderation that I could see.


> There is shadow banning on HN and I disagree with it, but I should be able to make the above generalisation of why HN seems to work with less moderation than expected.

I respectfully suggest that HN has significantly more moderation than you believe it does.


I had not realised but I will take your word for it - thanks.


I think I missed it too. I didn't even downvote.


Conversation at dinner with a group of your friends and their friends is going to function a lot more smoothly than one at a table of atheists, christian fundamentalists, Jews, Sunni and Shia Muslims, Americans, Russians, Chinese, Tibetans, Israelis, Pakistanis, Indians, liberals, conservatives, socialists, capitalists, libertarians, fascists, anarchists, environmentalists, trans people, cis people, homophobes, racists, sexists, “the woke”, narcissists, sociopaths, frauds, manipulators, trolls, geniuses, average people, and imbeciles.

The Internet might have seemed “better” back then and like it didn’t need moderation. From many people’s perspective it probably was. But that was likely a function of the reality that most people on the Internet at the time had a lot in common with one another.

We don’t get the benefit of that luxury today.


I disagree with your take -I don't see it being a question of moderation. You had many of those groups online (if not all of them) in the 90's and 00's. Unlike today, if you didn't like the forum you were on you could go to another one where you fit in.

There was a greater diversity of social gathering places.

Now -there's facebook, twitter, reddit and HN.

Moderation has ALWAYS been a factor and that's not what made the old internet better.

What made the old internet better was that it was open and more diverse (in terms of viewpoint and choices). The new internet is basically (on a social level) a few social media corporations which are becoming increasingly sterile.


Being on the internet and unmoderated doesn't mean you have to engage with everybody. The idea is that you get to choose what sort of conversation to engage in, and who to /ignore. Hardly different from "real life", except that the internet gives you a larger pool to engage with and better tools to deal with it.


The problem isn't you choosing to engage with everybody. It's everybody else choosing to engage with you and those you want to engage with.

If every discussion on HN was derailed by bad actors, you'd go somewhere else. If there was nowhere else moderated to go to, you'd probably just stop engaging altogether.


I don't think you're wrong in pointing out that the internet was likely more homogeneous the further back in time we go. It's a hard thing to be wrong about with respect to any scene. But I don't agree that it means we must bend on principles of freedom of expression, digital civil liberties, anti-censorship and anti-central-control. And it definitely doesn't mean the internet wasn't still possibly the most diverse place on the planet even shortly after its inception. Now that the internet has taken the world stage, we must continually fight and push for the society we want to build, for what's right, really. We must educate and, in certain ways, indoctrinate. Otherwise we risk losing our values to the swaths of normies, especially now that other cultures are vying for their own version of comfortable.


>You reaaaaallly would not like the uncensored internet.

Respectfully, speak for yourself.


People don't seem to understand the ramifications.

I prefer truth to being enslaved into conformance by "the algorithm".


Every platform that tries "we're 100% free speech, no censorship or moderation" winds up learning the same lesson: the problem isn't moderation itself, it's that you don't like the kind of moderation decisions being made. Parler ran this gauntlet on fast forward [0], Gettr did it too [1], and places that have tried holding on to the free-speech-absolutist position eventually burn to the ground one way or another, like the various children of 4chan.

[0] https://www.techdirt.com/articles/20200630/23525844821/parle...

[1] https://www.techdirt.com/articles/20210825/17204647438/trump...


I think the anonymous internet was way better. People tend to group anyway, so it's not like everyone is forced into the same room.


If the_donald proved anything in 2015 and 2016, it's that the loudest, most malicious folks intentionally ram themselves into all communities. The suggestion that bad actors will keep to themselves is the absolute opposite of reality.


>You reaaaaallly would not like the uncensored internet.

I remember the uncensored internet -it was vastly superior to what we have now.

Vastly.


It was not because of the uncensored content. It was vastly superior because of the lack of commercialization. Before SEO, before targeted ads: an internet made by educational institutions and people with hobbies.

Nowadays, allow everybody to post anything and you will only see SPAM for the rest of your life; it will become impossible to see anything of value.


I'm not sure whether I agree or disagree with you; you definitely have a point, though.

From my point of view it's less commercial vs. non-commercial (though I think most of the problems either stem from, or are aggravated by, commercial factors) and more diversity versus consolidation.

Ten or fifteen years ago there was a greater number of varied forums and social gathering places, with a greater variety of viewpoints. Now, while there's the odd vBulletin board here and there, you have fewer and fewer populated forums; everything seems to have coalesced (socially speaking) into Twitter and Facebook.

Basically, from my point of view, social media has consumed the rest of the social internet (meaning forums). I think there's more than one cause for that, but that's what I see as the main thing making the past superior to the present: the ability not just to have your own place, but to have a chance at it reaching more people than just you and your friends.


> From my point of view it's less commercial vs non-commercial (though I think most of the problems either stem from, or are aggravated by commercial factors) as much as it is diversity versus consolidation.

There is a case where I'm pretty sure commercialization is the problem. Phone app stores are modeled on Linux repositories. Linux repositories are good. App stores are awful. The problem is the level of interest from bad actors, and the reason they're interested is the commercialization.


>There is a case where I'm pretty sure commercialization is the problem.

It's a problem, and yes -a major one. But I think that it's just one of many factors contributing (negatively) to the situation.


> everything seems to have coalescened (socially speaking) into twitter and facebook.

And Reddit and Discord.

It is a monumentally difficult task to create a forum these days, from a social PoV.


>It is a monumentally difficult task to create a forum these days, from a social PoV.

and legal, and probably technical PoV as well; yes.

The large platforms have sucked all of the oxygen out of the room, and have gained what I feel is an unhealthy control of what can and cannot be said.

I have no idea what can be done to fix the situation; but none the less, there it is.


From a technical perspective, forum software is easy enough to use.


A technical perspective also includes mitigating denial of service attacks and other security issues, including scaling.


For one, you could tell idiots (whoever you decided they were) exactly what you thought of them, without fearing retribution IRL.

Keeping online identities separate from the real world worked both ways. It protected rational critics, trolls and shitposters alike.

The problem was never anonymity or freedom of expression. No. Social media is when life online went into freefall. Curiously, a pillar of its business model is melding real life with internet identity.


So were the people who used it. Times have changed. Maybe the last time you went to a bar you saw the bouncer eject some hulking meth-head, drool dribbling down his bib, for getting handsy with strangers and throwing a bottle at the bartender. Well, that meth-head has a social media presence today too.


Not really -usenet of old had stalkers and harassers; that's how the killfile was invented.

That is a very, very good analogy, however.

The internet of old had a wider variety of establishments that you could go to. If you wanted a rough-and-tumble dive bar experience it was there. If you wanted a polished experience it was there too.

The effect facebook and other social media has had on the internet is the same effect that walmart had on small mom and pop stores -you can still find one here and there but no, not really.

Incidentally, for all of the censorship and increased barriers to entry (for creating forums) your drooling meth head still presents a danger in the form of doxxing, inciting riots (Jan 6th, any one?) and harassing people.


Sure, Usenet had its antisocial users, and quite a few seedy groups, too. My point is that it's worse today... much worse. In the early 90s the net was mostly students, profs, tech workers and professionals.

Think of the scariest person you've ever met (really, take a moment and actually do it) and ask if he or she would have been online in 1995; the chances are they would not. But they are now.


The uncensored internet then isn’t the same as an uncensored internet now. Everyone is online and everything is trivially automatable.

Mountains of spam, porn, violence, etc

If you’d ever moderated anything halfway popular you’d know. Shadowbans are an essential tool.

They can be way overused. But there is a reason everyone used them. Constant arms race between sites and spammers.


Vaaaaaastly!


> Shadowbanning is an essential tool for moderation

Does it really work? A lot of people have no trouble working out that it has happened to them. The theory is that a troublemaker won't realise they've been banned, and so will continue sending their trouble into the void instead of coming back as a new account to cause more trouble that everyone else can see. In practice, it doesn't take long for the troublemaker to work out what is going on, so this only slows them down slightly at best. For some major sites, people have even created shadowban detection tools, so even troublemakers who are too clueless to work it out for themselves can discover it.

This site only really has half-shadowbanning. A lot of people have showdead on, read the dead comments too, and vouch for interesting/constructive ones. Maybe that works for this site; but even if that is true, it doesn't really tell us how well a pure shadowban implementation, without showdead, works. Very few sites with shadowbanning have an equivalent to showdead.
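The mechanics can be sketched in a few lines (a toy model, not HN's actual implementation; the `Forum`/`visible_to` names are made up for illustration): a shadowbanned author always sees their own comments, while everyone else sees them only with showdead enabled.

```python
from dataclasses import dataclass, field

@dataclass
class Comment:
    author: str
    text: str

@dataclass
class Forum:
    shadowbanned: set = field(default_factory=set)

    def visible_to(self, comment: Comment, viewer: str, showdead: bool = False) -> bool:
        if comment.author not in self.shadowbanned:
            return True          # normal comment: visible to everyone
        if viewer == comment.author:
            return True          # the banned user never notices anything
        return showdead          # others see it only with showdead on

forum = Forum(shadowbanned={"spammer"})
c = Comment("spammer", "buy my stuff")
assert forum.visible_to(c, viewer="spammer")                 # looks posted
assert not forum.visible_to(c, viewer="reader")              # silently hidden
assert forum.visible_to(c, viewer="reader", showdead=True)   # vouchable
```

The last branch is exactly the "half-shadowban" point: with showdead, hidden comments stay recoverable, which a pure shadowban would not allow.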


Surprisingly yes. It does work. There are some who figure it out eventually, but not before they have spent months wasting their time and saving mine.

I reserve it for people I believe will ban evade or have in the past.


>You reaaaaallly would not like the uncensored internet.

Respectfully disagree. Watching things get disappeared on Reddit sent me into an anxiety spiral.

I'll take goatse links over that shit any day of the week.


Goatse links would be the least of your problems. I’m imagining my Gmail account without the spam filter. Everything showing up in my inbox, not just the stuff in my spam folder, but also all the stuff they silently drop that I don’t even see. Email would be unusable.


> I’m imagining

Imagination often does not match reality. I have a (non-Gmail) account that is littered all over the web, in git repositories and in multiple leaks. I don't silently drop any emails - the only server-side anti-spam measure I have is a regexp to reject the usual spam subjects at submission time, which for normal senders will result in a notification to the user. The amount of spam is hardly at a level that would be anywhere close to making email unusable.
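That kind of subject filter is simple to sketch (the patterns below are hypothetical; a real setup would wire this into the MTA, e.g. a Postfix header check, so the rejection happens at submission time and the sending server bounces the mail back to a legitimate sender instead of silently dropping it):

```python
import re

# Hypothetical spam-subject patterns; rejecting at submission time means
# a real sender gets a bounce notification, unlike a silent drop.
SPAM_SUBJECTS = [
    re.compile(r"(?i)\bviagra\b"),
    re.compile(r"(?i)you have won"),
    re.compile(r"(?i)urgent business proposal"),
]

def reject_subject(subject: str) -> bool:
    """Return True if the Subject header matches a known spam pattern."""
    return any(p.search(subject) for p in SPAM_SUBJECTS)

assert reject_subject("URGENT BUSINESS PROPOSAL!!!")
assert not reject_subject("Re: dinner on Friday?")
```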


I am sorry, but the uncensored internet is the only type of acceptable internet. Anything else is just gov or corp interests.


An uncensored internet would become an unusable cesspit.


I miss the uncensored internet, before it went all 2.0-shaped.


I can't think of any direct messaging applications that moderate content.


This is private chat. Get off your high horse.


Secretly blocking specific messages is very different from a shadowban, though.


People use these things as alternatives to SMS or even the telephone (for audio chat). It would be absolutely ludicrous if telecoms started shadowbanning their customers, if Facebook wants to act like one it should play by the same rules.


This isn't even about freedom of speech anymore, but about a product constantly and intentionally manipulating and deceiving its users. It must be pretty shitty people working there, who relish playing games with humans as if they're monkeys. Even their automated systems sound like total crap. I have had a game on Facebook for a decade where users sometimes try to send a link to their friend through Messenger. FB blocks it every few weeks, and users have to beg them to unblock it every time. What a terrible company to invest in


As another comment points out the screenshot clearly shows two red exclamation marks indicating it wasn’t delivered.

You can take issue with the censoring (and I do) but it’s not silently censoring which is much worse.


What screenshot are you looking at? Because there are no red exclamations in the image on the linked tweet


Next tweet in the thread shows them. First pic is cropped (not sure why)


the tweet appears to be intentionally deceptive. They cropped the first picture just to remove the exclamation mark.


Is this about the exclamation mark, though? Or about the censorship?


“Silently censoring” is the key here. The censoring has been well-known for years.

Most of the conversation I’ve seen in the comments has been about shadow banning which just isn’t related to what’s actually happening here.


This has been happening for years. Just try sending a message containing the string `joebiden.info` and see what happens.

We’re way past the Rubicon. Get your information as best you can and just hope you’re somewhat aligned with the mainstream.


Just tried it and it sends without an issue


I suspect they have flagged users that get filtered differently. I've been in comment threads where people say very similar things as myself with all the same keywords and I'll get a message days later about a submission being hidden due to some violations.


Depends on how you do it. If it’s E2E then according to this comment from March 2020 it goes through: https://news.ycombinator.com/item?id=23110664


I tried it during the election and it didn't work. Have uninstalled it since then so don't know if things have changed.


Seems a bit random. That worked for me and also a 1337x.to link worked but I've had the latter blocked in the past - it's a torrent tracker.


[flagged]


What? 90% of the garbage circulated on facebook is republican qanon conspiracy theories at worst, and tabloid reactionary garbage at best.


What flavor of think do you suppose they filter for?


Probably the flavor that leads to burning black churches and attempting to overthrow US democracy?



Neither of your links really presents any kind of counterpoint. Not sure what argument you’re trying to make.


Reddit was also silently removing comments containing joebiden.info

Not sure if they still are


One of my comments on Reddit was changed by admins. It was something related to d...

Not directly related to my comment, but this is proof: https://techcrunch.com/2016/11/23/reddit-huffman-trump/


You were one of the people who had a comment edited in that 1 hour period mentioned in that news article?

Or are you saying there was a different comment editing incident? If so, that would be big news. He promised not to do it again, so if he did, he's breaking a promise, and I think that would be pretty newsworthy.

> "As the CEO, I shouldn’t play such games, and it’s all fixed now. Our community team is pretty pissed at me, so I most assuredly won’t do this again."

https://www.theverge.com/2016/11/23/13739026/reddit-ceo-stev...


> You were one of the people who had a comment edited in that 1 hour period mentioned in that news article?

No.

> Or are you saying there was a different comment editing incident?

Yes. I had commented that they should try https://en.wikipedia.org/wiki/Cocaine_(PaaS) but I guess they didn't click the link, and they altered the meaning of my comment. At that point in time I was aware from the news that Huffman had edited some comments on Reddit in the past, so I was not surprised.


Have you tried contacting anyone about this? Maybe post about it in a subreddit, or talk to a journalist that previously covered the other editing? Or contact reddit pr or an admin?

This seems like a pretty big deal, especially given his previous promise to stop editing comments.


I posted it in the same subreddit and it received a few replies; one of the redditors said he remembered my original comment. Nothing happened. I did not contact other people because I had no proof.


They stopped blocking it shortly after the new year. I've been checking that link every couple months or so.


This post seems to literally be propaganda.


What post? Are you talking about TechBro8615's comment? What is it propaganda for?


sending a message with thedonald.win triggered the same behavior at some point


I just tried and it worked?


100% did not work during the election. I was able to convince my siblings to leave Messenger (where we had our “sibling” chat) because of it. Censoring public FB posts is one thing, but I don’t need zuck telling me what I can say in a private conversation. That’s too far.


If it was so common why not provide documented evidence? It would seem like a pretty important thing to document, assuming it actually happened.


It happened. This website has a blurb about it https://censortrack.org/case/joebideninfo.

I tried it a few months ago and it was blocked. Here's a screenshot I made of it: https://i.imgur.com/KZa60db.jpg


[flagged]


Do you think I'm making this up? I experienced this. Apparently from sibling comments it no longer occurs, and I do not possess a time machine.

I'm not going to jump through hoops to prove this for you - please do some cursory online research.

This is bewildering. I have no agenda nor care enough to comment about a foreign country's election.


joebiden.info is allowed currently, but thedonald.win is blocked currently.

If you want me to screen record the blocking of thedonald.win I can, but I'd rather not screen record my FB messenger page because there's private information there, and editing that out of the screen recording would take some extra work.


Also, you cannot always convince everyone.

If you had an actual recording, one could always question whether it was created using an elaborate mock-up or something.


After big tech stepped in to ban news articles on Biden selling political influence to foreign nationals most people stopped even considering these kind of things conspiracy.

But to OP's point on joebiden.info:

https://www.reddit.com/r/conspiracy/comments/hr30p3/reddit_f...


[flagged]


I don't really see how what you're saying is relevant. It's possible for someone to be validly elected and also for negative stuff about that person to be blocked on some websites.


In the greater scheme biden's corruption isn't really noteworthy. Everyone in DC has their loser kid on corporate boards, or are running a phony charity, or taking in speaking fees from those they are regulating, or taking campaign contributions in exchange for political favors.

The alarming part of the laptop story was the rapid response big tech took to censor it.


Probably because it's all bullshit spun up as a foreign intelligence operation.


It's not Zuck, it's Zuck and his entire national military.

It's still peacetime so the list is 100% Zuck contents right now.


I find it confusing that a community that frequently acknowledges that free services treat their users as the product simultaneously has expectations that the free service should amplify their speech.

This, like many similar posts here, isn’t censorship. If you want to spread your idea (any idea at all) without hindrance, don’t do it on an advertisement platform.


What exactly is your argument that this isn’t censorship? They’re providing a private chat platform and preventing certain messages from being sent. As far as I can tell this is the exact definition of censorship.


I don't need to make an argument that it isn't censorship. I'm not making the affirmative claim. If you're curious about how I arrived at a conclusion so easily: we're here discussing it openly on the public Internet.


I don't think censorship means what you think it means. The medium and scale at which the censorship is taking place is irrelevant.


Every publication that has ever had any editorial control over its platform has exercised it. It’s entitled as hell to think you can use other people’s resources to publish your own thoughts unconstrained.


It's a private chat, how does "editorial control" possibly play any role here?


Well it turns out it’s not that private.


> I don't need to make an argument that it isn't censorship

Yes you do. This is plain as day censorship. You're not allowed to say certain things to your friends. If you're going to claim that that isn't censorship you are absolutely making the affirmative claim.


> Yes you do.

No, I don’t. I’m not making the claim. Look, still not making a claim! Still not making an argument. It’s up to “Facebook is censoring me” to make the case that warrants a defense. And I’m not Facebook so I don’t have do defend that. I can just… have this conversation without censorship. And if I get moderated here in a way I hope I won’t be but don’t expect… I can go to my blog or Twitter or whatever.


They did. They showed how Facebook censors[1] content in DMs. That has already been supported.

Now, you are claiming it’s not censorship and have provided no basis for that.

[1] https://www.merriam-webster.com/dictionary/censor


Facebook isn’t a censor. Proof: we’re having this conversation. I don’t expect to convince you, but I find it ridiculous that I’m being asked to prove a negative because an affirmative position was extended and you’re not willing to critically examine it.

If FB is a censor, they’re extremely bad at it, and not authorized to be good at it.


"It's not censorship if we can still whisper in the back alleys."

So if Facebook isn't ubiquitous and omnipotent they cannot be considered a censor? You're deliberately applying an impossible requirement that you made up. By this requirement there is nobody that can censor, "China doesn't censor because you can still talk about what you want in America." It's absurd.

FB censors private messages between two people on Facebook Messenger. That's the claim. I'd say it's pretty well backed up with proof and facts in this thread.


"Censor" is a verb. They censor (remove content they deem objectionable) communication on their platforms. That is the claim. You’re setting up some kind of straw man here, saying they aren’t “a censor” because other platforms exist. By your definition, no true censor exists, because someone could always communicate directly 1:1 with someone.


It's censorship, and more specifically it's corporate censorship. When it comes to ranking a feed, you have more of an argument, but blocking direct and supposedly "private" messages on the basis of their content is censorship.


Okay. I'll concede, Facebook is controlling what content is sent on Facebook's own network and services. How. Dare. They? I should be able to send whatever I see fit on their DMs right? Child porn? Coordinated mass violence? Pictures of patterns of holes that upset people with trypophobia? Credit scams? How dare they silence this Nigerian prince?


You're moving goalposts. First you claimed that this wasn't censorship, now you're saying okay it is but it's fine for them to do so here. Censorship is unacceptable in private messages, there's already reporting functionality.


I’m saying none of those cases is censorship, Facebook has no obligation to send your private messages to anyone, and I was glib enough in saying so to include my own personal squick about patterns of holes in things to emphasize the point.

If Facebook doesn’t want to send my message to you, that’s their choice. It’s not my right to send messages through Facebook and expect them to be delivered to you. Even if they contain horrific circles in a pattern that makes me squirm just describing them.


Maybe I'm misreading your message, but you seem to be writing as if your counterpart in the discussion believes these things, despite saying nothing of the kind.

Am I misreading you?


No I’m saying “censorship” as a rallying cry around moderation of a platform is effectively demanding permission for those. If you say a private company can’t moderate private messages it hosts, you’re saying it must allow private exchange of child porn. And harassing people with pictures of patterns of holes they find unnerving. And basically any harmful behavior you can imagine.

If Facebook wants to provide private messaging that’s actually so private they can’t know its content, that’s another story. But if they’re hosting messaging they know about, they’re entirely within their rights to decide what content is appropriate on their own resources.


>No I’m saying “censorship” as a rallying cry around moderation of a platform is effectively demanding permission for those.

I don't think so. The 2 specific situations you list are different. For child porn, it's illegal. So if Facebook blocks it, it's the government choosing what to censor, not Facebook.

And I think it's possible to prevent "harassing people with pictures of patterns of holes they find unnerving" without Facebook censoring content. Facebook has a block functionality, and the receiver could block the sender. As for the sender making multiple accounts, if Facebook blocks that, that's not censorship of content, that's enforcing a 1 account policy.


I wasn't saying any of those things. I was responding to the outrageous claim that this wasn't censorship.

Facebook is also allowed to censor child porn in private messages, and that they do so is a good thing, yet it's still censorship.


> No I’m saying “censorship” as a rallying cry

Is it possible to use the word censorship without it being a rallying cry (which then seems to somehow ~bend reality)?


> expectations that the free service

Well, it can't really be free or the service wouldn't survive, can it? It's like a club with free entrance where you pay in other ways.


Addressed in the snipped text


Facebook is not a free service. Users pay with their time, attention and mindshare. Mindshare is a limited and valuable resource.


Addressed in the snipped text


One-to-one messaging is not "amplification". Without Facebook's network effects, they would probably be using a different messenger, so you can't even claim that Facebook is enabling a communication that would otherwise not take place.

And the fact that it's not unexpected doesn't make it not censorship, or not worth pointing out.


> One-to-one messaging is not "amplification".

Tell that to the dozens of SMS I receive on a regular basis trying to scam me I guess?


I’d prefer a more detailed analysis than two screenshots from a random conversation.


I just tried with the same url and there's a message telling me it won't send


But if you send another message after that, the indicator changes to delivered. Maybe a bug from adding a new feature, with the developer only caring about the not-sending-the-message part.


That messenger is subject to facebook's community guidelines is nothing new, Zuckerberg said so himself publicly in an interview a few years ago

https://www.businessinsider.com/facebook-confirms-scans-mess...


Your phrasing seems to imply that

- we should know that messenger being "subject to community guidelines" means silent censorship

- Zuckerberg saying something publicly means it is justified

I would disagree with both of those implied assumptions


You should know it because you signed the terms of service which includes the statement, (in the beginning) that Messenger is subject to Facebook's community guidelines, which at the top of the page tell you:

"As people around the world confront this unprecedented public health emergency, we want to make sure that our Community Standards protect people from harmful content and new types of abuse related to COVID-19. We're working to remove content that has the potential to contribute to real-world harm, including through our policies prohibiting coordination of harm, sale of medical masks and related goods, hate speech, bullying and harassment and misinformation that contributes to the risk of imminent violence or physical harm. As the situation evolves, we continue to look at content on the platform, assess speech trends, and engage with experts, and will provide additional policy guidance when appropriate to keep the members of our community safe during this crisis."

https://www.facebook.com/communitystandards/


It's e2e but we can read your messages before you send them.

Edit : it appears that it is not e2e by default


That messenger is subject to facebook's ~~community guidelines~~ world view ...


Youtube does this with comments - your UI gives the impression the comment is posted, but it actually is not.

(Sometimes the comment can also disappear from your UI as well - I've tried editing a comment, but it fails to save, and after a page refresh the comment disappeared. Reproduced that multiple times.)


Speaking of FB censoring: they now censor my ability to "Like" or react, and it is fairly random. I will click like on something, then Facebook opens a popup saying that I cannot like or react to posts because my liking has been harmful to others. Facebook has become unusable. I haven't posted in over a year, but they have found new ways to censor.


> I will click like on something, then Facebook opens a popup saying that I cannot like or react to posts because my liking has been harmful to others

Do you have a screenshot of this by any chance, or can you make one? It seems way too ridiculous and out of place - but then, I stopped using fb a few years ago, so things could "change"


Here it is. They seem to call the "Like" function "Reactions"

https://photos.app.goo.gl/QseWerYMd2jVbfpa6


This is a form of shadow banning and it’s more common than you think. As a matter of fact, HN and Reddit do this too.

If HN/Reddit think that you are posting promotions, *HN silently censors your post, claims it was posted*


We call that a dark pattern.


No, a dark pattern (at least in terms of UX) is when you intentionally mislead someone in order to gain something.

The purpose of shadow banning is to make it less obvious to the spammer that they triggered the spam filter, so they get less signals to work around.

Same as websites who, instead of blocking access to (computer) users they think are scrapers, change slight details in pages like for prices and whatnot.


HN reports that they no longer shadowban. Reddit absolutely does shadowban still, and rather capriciously and randomly.


Does that mean:

A - They have unShadowbanned all previously Shadowbanned users?

OR

B - They are no longer actively shadowbanning users, but previous victims of Shadowbanning are going to stay that way?

Am asking since there is a big difference between the two.


Both are false. They still actively shadowban users and already banned users have stayed banned.


I'm interested in hearing reports, please email me if you can show me specific examples.


I'll see if I can remember to mail you in a few days when this account I'm using gets banned.


To be clear: I'm not claiming they don't ban users. They do. I've been told they don't shadowban users: they don't make it look like you're not banned while in reality your messages are all invisible to everyone but you... or so they've recently claimed.


AFAIK, "ban" and "shadowban" look the same: you can post comments, but they automatically go [dead]. The difference is just that with a "ban" a mod tells you that you are banned. And while it's probably a lot rarer nowadays, I believe e.g. the spam filter still automatically applies this without a message, and there seems to be a system killing accounts that go too deep into negative karma, ...


This is another reason to only use E2EE FOSS messengers, ever.


You also should be able to host your own server instance and be able to communicate with others on the network (XMPP, Matrix, ...).


Certainly agree with this!


That would not prevent this.


Assuming it's some kind of active censorship and not a bug, how would this be possible with Signal for instance?


E2EE would absolutely 100% prevent this.


Unless filtering happens on the client (see recent Apple news)


That's where the FOSS comes in.
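The point about E2EE can be illustrated with a toy model (a one-time pad stands in for a real protocol like Signal's double ratchet; the function names are invented for this sketch): the relay only ever handles ciphertext, so a server-side URL blocklist has nothing to match.

```python
import secrets

# Toy "end-to-end encryption": XOR with a random one-time pad.
# Not a real protocol - the point is only that the relay sees ciphertext.
def encrypt(key: bytes, msg: bytes) -> bytes:
    return bytes(k ^ m for k, m in zip(key, msg))

decrypt = encrypt  # XOR is its own inverse

def server_blocklist_check(payload: bytes) -> bool:
    # Server-side filtering as in the Messenger case:
    # flag any payload containing a blocked URL.
    return b"joebiden.info" in payload

msg = b"check out joebiden.info"
key = secrets.token_bytes(len(msg))
ct = encrypt(key, msg)

assert server_blocklist_check(msg)       # a plaintext relay can filter
assert not server_blocklist_check(ct)    # an E2EE relay sees only noise
assert decrypt(key, ct) == msg           # the recipient still reads it
```

Which is why the sibling comments are also right that the remaining attack surface is the client itself: if scanning happens before encryption, E2EE alone doesn't help, and only an auditable (FOSS) client lets you check.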


Why should anyone care if you can't send links to websites run by kooky nutjobs with false or highly questionable COVID information to other people via FB Messenger?


"Kooky nutjobs" do exist, but part of the problem with censorship is that undesirable people can be put into this category all too easily to be silenced, even though their agenda is not really that kooky.

Pro-gay or pro-weed activists, heck even pro-civil rights activists used to be kooky nutjobs of their time.

We haven't arrived at the end of our moral or legislative evolution. We need to hear some voices that are considered kooky.

For a recent example, see Lenore Skenazy and her free-range parenting. This definitely seems kooky to a paranoid, helicopter parent part of the population. What if this part of the population holds the Delete button?


This isn’t the same thing, and I think you know that.


I know that this isn't the same thing, but once you have a big hammer (censorship), let us bet that it will be used fairly indiscriminately.

This isn't a new discussion, and I think you know that. Bureaucracy tasked with suppression of speech and beholden to career politicians hasn't historically shown itself to be subtle and erring on the side of freedom.


Let us not, because pretty much all communication platforms have been moderated in one way or another for all time, even on the Internet (Usenet excepted, which I’d argue hastened its decline; and even then, a lot of news servers wouldn’t carry the crazy groups). There’s plenty of history here already and it seems pretty clear that it hasn’t had catastrophic effects.

As I said elsewhere, if an actual problem arises, fine, let’s do something about it. But that’s not the case today. There’s just no indication that Facebook is going to block mainstream political speech anytime soon. If you find a real example of that, raise it, and I’m sure it’ll get a lot more attention than from just the free-speech absolutists.


Because of the precedent that sets. For example, some would hold that socialism is as economically illiterate as anti-vaccine opinions are medically illiterate to you. Should socialist links be likewise censored?


The currently popular idea of "setting precedent" is overrated and misused as an argument, and shouldn't be taken to mean "this will definitely start happening all the time soon".

The Japanese "set precedent" by bombing Pearl Harbour. How many seaports of the US have been bombed by a foreign power since then?

The US dropped two nukes on Japan. Did it "set precedent" for nations to nuke each other willy-nilly in wars since then?


The precedent that privately-run internet messenger services can censor content was set a very long time ago. You’re decades late to that party.

Besides, we’re not talking about political opinions here. If they ever do that (which is unlikely), then we can become justifiably upset. That’s a hill worth dying on — this one is not.


Try sending https://thedonald.win via FB Messenger


Guess you missed all the 2020 election censorship since you're not in the present wrongthink category.


About this case in particular, one problem I have is that they seem to censor the entire domain. Most of the content on that site is bad and definitely violates Facebook's rules. A lot of it is straight up lies made up by the site owner for political purposes.

However, some articles on the site are just political opinions about the CCP. Because Facebook blocks those, I think the merits of this censorship at a coarse grain are worth discussing.


When 99% of your content makes the rest of it look good, you willingly assume that risk. Nobody should be shedding tears about this.

Also, censorship is forbidden only to the government and to common carriers (which Facebook is not). Private parties (with very rare and reasonable exceptions) have the right to control what others do with their chattels.


You know what I say -- so be it. If they want to censor literally everything that isn't pleasantries and personal conversation I say lets do it. The world will be better off almost immediately.


Why did you bother to post this message then? Shouldn't it be censored?


It's not a link to a 3rd party website containing a rant about censorship, so I'd argue that's what I'm looking for.


Private messaging platforms have been censoring content for decades and yet I can still openly discuss socialism on any platform I want. Maybe we should evaluate actions individually on the merits instead of constantly focusing on hypotheticals that rarely come to fruition?


Hypothetical applications are how you evaluate a principle. "Does the principle this precedent relies on hold up equally well when applied to socialism?" is a reasonable and useful question in that context.


What about links that call for violence / genocide? Should those be delivered to avoid setting a precedent?


I want you to imagine for a second a conversation between two people, one that goes something like this:

Person A: "I really don't think weed should be illegal."

Person B: "Oh yeah, so what about raping rural villagers to death? I guess you think that should be legal too huh?"

This is how ridiculous you sound.


You’re making up something that person didn’t say (or even imply, really). That’s not ok.


I want you to imagine for a second that someone makes the claim that blocking links for COVID misinformation is setting a precedent. An obvious follow up in order to have a constructive discussion is to establish where the line is, ie whether the debate is a) whether blocking of any kind or b) what content warrants blocking. This is specifically relevant to Facebook since Whatsapp has been used in the past to coordinate/incite violence and genocide.

So maybe stop and think for one second before commenting?


There was already a consensus on where the line should be; it's people making arguments like the one you're making that are muddying the waters, so they can "clarify" as you're doing and suppress speech. We already know where the line is.

The line is here: in a private communication between people, anything can be said and nothing should be censored. If someone gets caught planning or doing something illegal, they go to jail.


That was never the consensus or the “line,” because major chat providers never behaved in the way you suggest. They have always had rules that were codified in their Terms of Use, and have enforced them through various means.

Your absolutist position is interesting, but don’t make it sound like private carriers have ever been held to this standard.


Wait, you started this thread making it seem like it was ridiculous to equate COVID misinformation with other forms of objectionable content (such as calls for violence), implying that you felt it obvious that allowing the former would not mean also allowing the latter. Is that not your position?


Yes, I would say so.


TBH that absolutist position feels untenable. Unmoderated platforms are quickly overrun by objectionable content, and easily usurped by moderated competitors. The only way to force this would be through legislation, and I just think there is no way that the majority of people can be convinced to support/vote for that position.


That's true of communities involving large groups of people; this is Facebook Messenger, which is mostly used for one-on-one or small-group chat, like SMS. SMS is not generally overrun with objectionable content, even though it is mostly unmoderated.


> socialism is as economically illiterate as anti-vaccine opinions are medically illiterate to you

Edgy comment, but unfortunately, even with all the sophisticated jargon and advanced mathematics, economics isn't a real science with objective truths, but medicine is. Don't get me wrong, I too believe that capitalism is the only form of economic system that works, but to equate socialists (however misguided you think they are) with anti-vaxxers is a bit much.


I said there are people who would say that. Whether I am one is not the point.


In this case, the linked article contains a lie. It's not an opinion. The article claims that a study had a result that it did not have (higher viral load amongst the vaccinated). The authors of the study have already published a condemnation explaining the misunderstanding.


Should Facebook block every lie? Where exactly does that line fall? How, anyway, do you distinguish with certainty between a lie and a misunderstanding?


Why don’t we judge each action on its individual merits instead? Why can’t we simply say “yup, this site is a problem, nothing of value lost here” without worrying about whether they’re going to block speech about whether 2 or 3 teaspoons of yeast is the right amount to put in a loaf of bread, or whether my candidate for school board is better than yours?

Let me be super clear here: nobody is entitled to use other people’s stuff to make trouble or potentially harm society. Not your mother’s house; not Facebook’s messenger. If anyone gets to use anyone else’s stuff, it is at the owner’s pleasure and under their conditions.


I ran into this block last year trying to send a link with a comment like "wow this is wild, if this is what people really believe our democracy is in trouble <link>". That felt like a hindrance to even discussing the problem.


That's the very definition of concern trolling.

https://en.wikipedia.org/wiki/Internet_troll#Concern_troll


> A concern troll is a false-flag pseudonym created by a user whose actual point of view is opposed to the one that the troll claims to hold. The concern troll posts in web forums devoted to its declared point of view and attempts to sway the group's actions or opinions while claiming to share their goals, but with professed "concerns". The goal is to sow fear, uncertainty, and doubt within the group often by appealing to outrage culture.[63] This is a particular case of sockpuppeting and safe-baiting.

So no?

The person you're replying to has a 7-year-old account in good standing.

Also let me give you another example: licensed medical doctors being unable to have a technical discussion between themselves about Ivermectin in one channel and having to move elsewhere.


This edge case, while admittedly existing, doesn’t seem worth wringing our hands over. There are other channels available for that.


In some cases it's hard to tell where the line should fall. This isn't one of those.

In general you can't distinguish but fortunately that doesn't matter. Facebook doesn't block lies, it blocks "known misinformation". The information in the linked article is known to be false, and it is harmful, so it's blocked.


Wow, people on HN are really downvoting posts that take a position against censorship on this thread.


Where do you think some "people on HN" work...? I'm not surprised.


I am beginning to think that threads get bum-rushed whenever certain topics are broached.


That’s really nothing new.


Wait, I thought this was known? I remember seeing this behavior with a friend when I had FB, and I haven't had FB for 3-4 years now.


I had a similar reaction. I specifically remember sharing a pornhub link (it was not porn, but a movie someone had uploaded to the site) as a joke, but it wouldn't send. I guess the difference here is what's being censored.


Pornhub seems to be fine today.


Microsoft silently censored MSN chats 20 years ago. Couldn't talk about PHP programming easily because any message with 'index.php' was silently dropped. Blew my mind they would dare do that, but I've since learned this is common practice at all major chat apps.

If they don't like a message you are sending they are likely to silently drop it and let you think it was delivered. Just no care at all for the user experience at a base level.


I'm curious, what would be the reason behind banning index.php, especially considering it was a common feature of links at the time?


>Blew my mind they would dare do that, but I've since learned this is common practice at all major chat apps.

I'm sure Discord will be no exception, though I can't think of any current examples that apply to it.

That makes me worry about the way that it's displacing/has displaced IRC.


My hope is that the future internet will be decentralized


The past internet was decentralized (or at least, pluralistic)


Welcome back to USENET.

What good is decentralizing the transport, if the sites and destinations and applications aren't?


Not surprising. If you were a bit imaginative in the old days, and put your power hungry psychopath hat on (which is a fair description of those in power), you could imagine the ways technology could be used to silently produce net effects and results without anyone really knowing to avoid any protest.

Here’s another one: Google can use your aggregate data along with other sources to determine whether you seem like someone favorable to the regime or not. If not, the probability distribution shifts in a way that directs you toward certain jobs and so forth that keep you from what the regime doesn’t want you near. It’s like a BigTech version of providence, except unlike divine providence, they don’t care about you.

Or if they don’t like you, reveal unflattering information about you to people you know in ways that look like search results.

Lots of possibilities. People are also too complacent to protest things like these even when it’s made public.


If you’re trying to leave Facebook, going to WhatsApp might not be the best move.


MSN Messenger was censoring some words, if I remember correctly "Summer Jam" and "papaz" among them. I was unable to find those exact words, but I have found a related news article: https://circleid.com/posts/07080616_msn_messenger_censor_inf... Since Facebook was partially bought by Microsoft, I believe Messenger is a continuation of MSN Messenger.

I abandon such applications without hesitation.


Yep, it was censoring messages containing specific words and links alike. Silently.


The censorship is intentional on the part of Facebook, but the "silent" part is a frontend bug on messenger.com (possibly also the app). Normally it shows you a red "error" indicator next to the message that says something like "this goes against our community guidelines."

The error can be seen in some of the screenshots. It briefly shows "couldn't send", and if you refresh the page the messages disappear forever.


[flagged]


There are replies to another comment about it being a bug. They are saying that's just a cover. Standard conspiracy stuff.

But people are also questioning the whole idea of blocking messages at all. Everyone thinks they don't need to be protected, but the fact of it is that we're all vulnerable to manipulation. Hell, that's one of the issues that people take with media platforms. Their content selection algorithms and targeted ads are manipulative.


There's a huge difference between shadowbanning users on a public forum and interfering with private chat. Network effects keep people on these shitty centralized plaintext messengers. The only way we'll put an end to this is to force Facebook to allow interop between Messenger and other services. Then at least people could gradually move to a more secure system without losing their whole network of friends.


What do the engineers who code stuff like this think? Do they not feel bad or is it masked by the salary they get paid?


They probably haven’t thought about it to any depth, just have a look at how many HNers - a crowd I’d have previously assumed to be against everything this story implies: censorship, lack of privacy, corporatism etc - will openly support these things on these very boards.


They’re not told to build censorship tools, they were likely told to build tools to help prevent terrorism and child abuse material. The rationale for tools like this always starts from a place where we can all agree.

It’s like asking why scientists would ever invent atomic bombs: they do it to stop Hitler, and it’s only later that those in charge decide it would be a good idea to drop a couple on civilian populations.


The majority of software engineers simply do not care. They're in it for a comfortable career with a fat paycheck.


>What do the engineers who code stuff like this think?

"Another day, another dollar"


Trust me. The way activism is going, they would do it even with no money offered.


Indeed, both the "woke" and the "alt right" are united in their love of repressing what other people have to say. You are correct!


And complaining about how they're the victims of "cancel culture" when they're the ones trying to cancel people themselves.


“Maybe this feature will save a few needless deaths from a dangerous yet highly preventable disease”

I mean, I don’t work there, but that’s what I’d think.


Not too silently: it didn't allow me to send JotForms to our school parents, claiming those links are not safe.


The domain looks like fake news/sketchy to me (I could be wrong). Maybe there's a ban on the entire domain?


This is a bug. The intended behaviour is to notify the user that tried to send the link.

Doesn't make it okay.


I don't buy it.

It is always "a bug" or some other excuse. Then we find out later it wasn't a bug after all.

If you lie frequently enough, you lose the right to the benefit of the doubt.


Well considering that this behavior only appears in specific situations, I'm inclined to believe the bug claim.


Facebook always says everything is a bug. A year later, it is standard operating procedure.


This is why it is best to avoid centralized platforms like FB, Apple iMessage, Twitter DMs, etc.


It's been my experience that Google Voice also silences many of the links that I share with people.

Ultimately I stopped sharing links over several third party services that I've used for years and have transitioned to other services instead.


I think it was Sprint (?) that was blocking Signal invites over SMS messages. This sort of thing is widespread.


They've been doing this for years with porn, graphic shock sites, and conspiracy sites. Not sure why this needs to be reiterated, but if you even remotely value your freedom and privacy, never use Facebook again for any reason.


It needs to be reiterated because many people missed examples like this in the past. Every new user needs to be made aware of what is really going on, and how Facebook really regards them.


Isn’t Messenger supposedly using “end-to-end encryption”? I honestly don’t know if I’m mixing this up with WhatsApp, since I refuse to use either one. FB is a blight upon the world. Only a fool believes anything they say.


E2E is not enabled by default, because that generally provides a worse experience (e.g., no desktop/web access) for users who don't care about it. You can enable E2E encryption on any conversation by tapping on the conversation title and enabling "secret mode".


Facebook has been silently hiding messages containing targeted links for at least a decade.

I had multiple reports of censoring people discussing my depaywalling of a large cache of public domain historical scholarly papers [1], and went and checked for myself. Messages which included the link to the documents were reported as delivered but not actually delivered.

I believe they may have started it at the time of one of the high profile wikileaks document dumps.

This experience is part of why my partner and I have since refused to use their services.

[1] https://arstechnica.com/tech-policy/2011/07/swartz-supporter...


Carriers in the US do this every day. They block URLs from being sent under the "spam" claim, but no notice or information is ever provided.


I've had this happen to me for years.

Sharing torrent links, pastebin, and a few other sites that can be used for illicit purposes.


Why is anyone still using FB? They've been arbitrarily censoring content for some time now. Back when the COVID-19 lab leak theory was a "conspiracy theory" they actively removed that content. Now that it's no longer a "conspiracy theory" I guess it's okay to share links about it. What has changed? The same goes for the Hunter Biden laptop story. Back when it was "Russian propaganda" it was censored. Now that it's been verified, I guess it's okay.

Honestly, do you want FB deciding what is "truth" for you?


Dump Facebook. Don't be a victim.


As a facebook and messenger user I'm fine with them censoring stuff. I use it to send holiday pics to friends and family etc. If idiots want to cause people to die by pushing covid misinformation then fine censor away. There are other forums if you want uncensored stuff.


Try posting anything from 4chan over messenger and see what happens


Don't worry, it's another 'accident' by the automated moderation systems; brought to you by Facebook. /s

Do you really think that these companies are on your side or are your friends?


"Oh we are so sorry, please feel free to fine us $1 million. This was a rogue programmer!"


looks like it’s changed now, saying “couldn’t send”


Just tried sharing one of the links and it worked


Serious question, please help me understand: why are we still using Facebook services at all? How many more bad practices should all of us tolerate?


Not everyone has the strength to leave social media at the risk of ending up an outcast or losing friends, even if friends you'd lose over a platform might not be worth keeping.

Dopamine addiction is also real.

People should realize how dangerous it can be when the main goal of these companies is to get their users to spend as much time as possible on their platforms watching ads. But many have other things to do than think about stuff like this.


If you run a business, you don't really have a choice. Cut that out, and you lose your ability to contact potentially millions of customers.


I get “couldn’t send” when I try that URL.


Likewise, you just can't send any links with a duckdns.org domain. BUT you will get (or would have gotten) a warning about that.


Would be great if someone reverse engineered the Messenger protocol and wrapped it with a better messenger.


The protocol used to be XMPP. Back a decade ago, when I used Facebook, I used Messenger in a standard XMPP client; you could use it with pidgin/libpurple. Then they walled it off.

It's better to just use something else.


I use an alternative app to communicate over Messenger and it still relies on the same APIs. That said, the app is mostly unmaintained and actually does pretend to send messages with a banned domain. Friends were discussing antivaxxers, and I had just received a link over SMS from an old acquaintance to an antivax video. I tried to send it through Disa.IM and it "sent" as far as I could tell; then a few minutes later I was using FB in a desktop browser and that message was showing some kind of failure notice that it was against community guidelines. When I looked at the app on my phone, the message with the offending URL was just gone, like it had never been sent. The feature clearly exists server-side and the alternative client doesn't have a way to handle that part of the API.


For those looking for a no-censorship-ever-of-any-kind alternative, consider Status:

https://status.im/get/

If you don't need or want a crypto wallet or dapp browser, then simply don't use those parts of the app.

Relevant specs:

https://specs.status.im/

https://rfc.vac.dev/

Relevant repos:

https://github.com/status-im/status-react

https://github.com/status-im/status-desktop

There are trade-offs, for sure: since there's (deliberately) no integration with contact lists (address books) of the OS or other apps, your social circle probably isn't using the app already, or in any case isn't discoverable.

The public chats facility has turned out to be too spam-prone for "well known" / advertised chats, e.g. #status. However, if you create a public chat that has some unguessable component (e.g. #myfriends-a9e72ab5) and you share it with friends (even lots) in a reasonably private context, then the chances of it being spammed or randomly joined are quite low. Note that public chats, while "public", are still E2EE, using a chat's name as the basis for a symmetric key.
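To make the "chat's name as the basis for a symmetric key" idea concrete, here's a minimal illustrative sketch (my own, not the actual Status implementation; see the specs linked above for the real scheme). The point is that anyone who knows the chat name can derive the same key, which is exactly why unguessable names like `#myfriends-a9e72ab5` keep a "public" chat effectively private:

```python
import hashlib

def chat_key(chat_name: str, iterations: int = 100_000) -> bytes:
    # Derive a 32-byte symmetric key deterministically from the chat name.
    # Empty salt: the chat name itself is the only shared secret.
    return hashlib.pbkdf2_hmac("sha256", chat_name.encode(), b"", iterations)

# Two participants who know the name derive the identical key.
alice = chat_key("#myfriends-a9e72ab5")
bob = chat_key("#myfriends-a9e72ab5")
assert alice == bob and len(alice) == 32
```

A guessable name ("#status") means anyone can derive the key and read or spam the chat, which matches the spam-prone behaviour described above.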

1-to-1 and private group chats are highly secure; the latter have a max size, and depending on their size and your device, sending messages can be a little slow.

Creating a robust alternative to the existing public chats facility has involved a lot of work: the forthcoming Communities feature provides a discord-like facility whereby founders/admins of communities can take advantage of various mechanisms for moderation and governing membership. The Communities feature can already be enabled in advanced preferences of both mobile and desktop apps, but note it's a WIP.

The moderation mechanisms for Communities don't undermine the no-censorship principle of Status because:

(1) Any user can create a community.

(2) A community's rules are managed by those with a stake in the community: there's no override by Status-the-org nor anyone else.

(3) The underlying nodes of the network form a decentralized p2p network, i.e. there's no central actor/authority that controls the flow of messages.

Re: (3), running a Status node should be simple and incentivized.

The "incentivized" aspect is a challenging problem and not solved yet. Long story short, engineering an incentivized decentralized messaging network (it's not a blockchain!) is harder than incentivizing a blockchain network.

That being said, the "simple" aspect isn't too difficult to solve, sneak peek:

https://github.com/status-im/status-node

Finally, with pertinent laws and regulations in flux across the globe, there could come a day when binaries aren't readily available (from app stores, GitHub, etc.), but thankfully there's always `git clone` and `make`.

Disclosure: I'm a core contributor at Status.


Meh, WeChat has been doing this for years.

Who in their right mind would say China is a dictatorship?

/s


dictators censor their people


In Australia the government's (dubious) re-opening plan is based upon modelling by the Doherty Institute.

Facebook censors anyone from sharing that report.

Here's the link to try yourself: https://www.doherty.edu.au/uploads/content_doc/DohertyModell...


The best thing about HN is https://news.ycombinator.com/newest ... where you can find the least censorship.


[flagged]


The whole Ivermectin thing is dumb and stupid. But are you seriously saying silent censorship of links in Messenger is okay?


Sure, it's a private communication service. Why isn't it okay? You are using FB infrastructure to communicate; they absolutely should get a say in what you are allowed to share.


If you asked me to pass on a letter and I told you that I'd done so, you'd be annoyed to find out that I had actually just thrown it away. If I told you up front that I wasn't willing to pass it on, you could find someone else to do so, but the silent nature inhibits that.

Even without the silence, there's concern about the power and influence held by a few large social media companies with billions of conversations taking place through them. There's been some bipartisan interest in forcing them to either act more like a common carrier or be subject to the full responsibilities of a publisher.


Yes, I agree censoring info on a well-known, FDA-approved drug is "dumb and stupid."


So, you definitely agree then that the Post Office should be using public funds to run a messaging service (similar to messenger, WhatsApp, iMessage, etc), right?


I’m not complaining about the censorship - I’m complaining about the UI pretending to have sent the message.


[flagged]


> Ivermectin is now well-known as a prophylactic against COVID-19 and its variants if administered in appropriate dosages.

Please share some reliable source corroborating your statement.



Thanks. Not very reliable as their methods have been heavily criticized by the entire scientific community, but better than nothing. We'll see how it turns out.


Well, not the entire scientific community.

63 trials (31 RCT), 613 scientists, 26,398 patients:

https://c19ivermectin.com/

https://academic.oup.com/ofid/advance-article/doi/10.1093/of...

https://covid19criticalcare.com/ivermectin-in-covid-19/

What is needed is effective preventative and therapeutic treatments. Should a highly contagious strain like Delta mutate into something with a higher lethality that evades the current vaccines, there won't be time to develop a new mRNA vaccine before a large number of people are lost.

However, all preventative treatments seem to be strongly resisted, even censored and proscribed by the powers that be, unless they are under patent. Including the Nobel prize winning ivermectin.

https://c19early.com/

I am, btw, 80 and fully vaccinated with Pfizer. I also take quercetin, zinc, vitamin D, and melatonin daily.


Seriously why is anyone still using FB, TW, etc.?


Facebook has been suppressing links and profiles that don't make them money for years.


Why does anyone use fb? I never have had an account. I do help my mother-in-law with hers a bit.

My company asked me to create a Twitter account. I deleted it a month later.

The solution to bad speech is better speech.


>Why does anyone use fb?

If you want to communicate or reach out to people, you go where the people are. Currently that's FB.

On a separate note, the problem with saying that the solution to bad speech is better speech is that it doesn't take crapflooding by bad actors into account. Your good speech is kinda worthless if it's drowned out by disinformation, lies, or just plain nonsense.


I use Facebook because the groups I'm involved with IRL communicate on Facebook, or whatsapp (also owned by fb of course).


This is exactly how chinese censorship works. https://citizenlab.ca/2020/05/wechat-surveillance-explained/


The sooner the general public learns about steganography, the better. Unfortunately, the sentiment of anti-intellectualism and blissful ignorance propagated by corporate entities who realise knowledgeable users are more difficult to control and monetise will be a big hurdle against that happening.

Edit: steganography, not stenography.
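For anyone unfamiliar with the term, here's a toy sketch of the core idea (a deliberately simplified illustration, not a robust scheme): embed each bit of a secret message into the least-significant bit of successive carrier bytes, e.g. raw image pixels, where the one-bit change is imperceptible:

```python
def hide(carrier: bytes, secret: bytes) -> bytes:
    # Store each bit of `secret` (LSB-first per byte) in the
    # least-significant bit of a carrier byte. Needs 8 carrier
    # bytes per secret byte.
    bits = [(b >> i) & 1 for b in secret for i in range(8)]
    out = bytearray(carrier)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit
    return bytes(out)

def reveal(carrier: bytes, n: int) -> bytes:
    # Recover n secret bytes by reassembling the carrier's LSBs.
    out = bytearray()
    for j in range(n):
        byte = 0
        for i in range(8):
            byte |= (carrier[j * 8 + i] & 1) << i
        out.append(byte)
    return bytes(out)

pixels = bytes(range(256))          # stand-in for raw pixel data
stego = hide(pixels, b"hi")
assert reveal(stego, 2) == b"hi"    # message recovered from the LSBs
```

Real tools combine this with encryption so the hidden payload is indistinguishable from noise even if someone looks at the LSBs.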


I think you mean steganography (hiding secret message in plain sight), not stenography (techniques for writing/typing less).


Indeed. Thanks!


Threads about censorship consistently remind me that HN is no different than any other social media platform, subject to users pushing various agendas and misinformation in support of those agendas.

There's nothing wrong with private businesses removing content they don't deem appropriate. There is also nothing wrong with silencing users who are obviously lying. Thinking differently totally misunderstands social media as a medium.


Classic way to misunderstand the problem.

When “the private business” is a group so large and dominant in their field that they can stop threats by absorbing them, lawsuits, or other tactics, they aren’t just “a private company” anymore.

Your comment reeks of someone who doesn’t think it will ever be themselves that will be censored.


> Classic way to misunderstand the problem.

I could say the same about yours.

> Your comment reeks of someone who doesn’t think it will ever be themselves that will be censored.

Not super concerned with social media posts being taken down, no, as that's always been happening. Websites moderate their content. What's the surprise?

Also amusing name.


I'm going to go out on a limb and guess that this isn't really a case of censorship, because it's not like Facebook to not tell you directly that they've removed your content. Or they go the other route and post a disclaiming "fact check" under it. But silently removing posts and marking them as "sent" seems more likely to be a bug.

When I attempt to navigate to gnews.org the site is very slow, and I wonder if Facebook is having trouble assigning some kind of risk factor to it, then giving up but sending a "success" response anyway, whereby the front end just assumes the message was sent properly and pushes it to your message cache without attempting to fetch it over the network.
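A minimal sketch of that hypothesized failure mode (entirely my own speculation, with made-up class and domain names, not Facebook's actual code): if the server answers "ok" even when it drops a message, and the client optimistically writes to its local cache on any "ok" without re-fetching, the sender sees a delivered message that no one ever received:

```python
class Server:
    """Hypothetical backend that silently drops blocked links."""
    def __init__(self, blocked_domains):
        self.blocked = blocked_domains
        self.delivered = []

    def send(self, text):
        if any(d in text for d in self.blocked):
            return {"status": "ok"}   # claims success, delivers nothing
        self.delivered.append(text)
        return {"status": "ok"}

class Client:
    """Optimistic frontend: trusts the status, never verifies."""
    def __init__(self, server):
        self.server = server
        self.local_cache = []         # what the sender's UI shows

    def send(self, text):
        if self.server.send(text)["status"] == "ok":
            self.local_cache.append(text)

server = Server(blocked_domains=["banned.example"])
client = Client(server)
client.send("look: https://banned.example/story")
assert client.local_cache and not server.delivered  # "sent", never delivered
```

On a page refresh the sender's UI would re-fetch from the server, where the message never existed, matching the "disappears forever" behaviour others describe.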

edit: since this has been downvoted, could you explain why you think I'm wrong?


Probably downvoted because IMO that's a very weak limb you're going on, and the behaviour you're describing probably seems unbelievable to many users.

But well, maybe it is shitty coder(s) mishandling timeouts.


That's not really explaining why (or how) I'm wrong, just that you don't believe me.. which is fine, but not constructive.

It seems generally less believable that Facebook would censor people without telling them.

Other commenters [0] appear to believe the same as I do, and note that FB typically informs the user that their action/behavior is against the site's terms of use.

[0] https://news.ycombinator.com/item?id=28342263


Shrugs, I didn't downvote you.

Since people don't know whether your speculation is wrong or right (unless there's an FB developer around), they can only judge whether your speculation is plausible or not. And I guess they mostly thought it wasn't plausible.


When I wrote "could you explain why you think I'm wrong", I was expecting that people who had reasons to believe I'm wrong would respond, and would explain those reasons, not that someone would explain to me how speculation works.

If you think I'm wrong on a technical level, could you please explain how?



