The Twitter Files Part 2: Twitter's Secret Blacklists (twitter.com/bariweiss)
317 points by quyleanh on Dec 9, 2022 | 509 comments



I have been responsible for a bit of content moderation myself, and have interacted with mods on various forums. It's possible that this is responsible for some of my views, summarized here.

- The "secret" aspect of all this can be largely ignored. Twitter is under no obligation to tell its users exactly how their Tweets are moderated. I am sure most moderation tools are secret, including whatever Twitter (or for that matter, HN) uses today.

- There are aspects of user privacy that are important (e.g. how user data is gathered, where is it stored, whether it complies with various data governance regulations) which would be interesting, but have been largely ignored by these Tweets so far.

- I don't think Weiss understands what "shadow ban" means. It means that you post Tweets, but no one ever sees them. None of the practices she describes amount to what is usually described as a shadow ban or a hellban. Preventing a post from showing up in trends or searches is not a shadow ban. Google SafeSearch does not "shadow ban" explicit images or links.

- All companies have hierarchies of escalation. It is not surprising to me that there is a "top people" level of escalation in the form of SIP-PES that Weiss mentions. In fact, I'm reasonably sure something similar probably exists at Twitter today, consisting of Musk and some close advisors. They might even be using the exact same group!

- A lot of the discussion Weiss quotes as if it's top-secret information seems to show...individuals behaving reasonably while struggling with the extremely hard problem of content moderation at scale? Roth and others seem to be essentially saying, "We don't want to kick these users off the platform. Maybe if we limit their reach a bit, we won't get them inciting howling mobs that target and disturb other users?" Post-Musk Twitter seems to be struggling with the same thing, as the Kanye West debacle shows.

Overall, I don't really see too much that would cause me to gasp with horror. All I can really see is a company that never grew to Facebook scale trying to grapple with its content moderation problem without having the resources or perhaps temperament to hire hundreds of cheap content moderators like Facebook does.


> I don't think Weiss understands what "shadow ban" means. It means that you post Tweets, but no one ever sees them. None of the practices she describes amount to what is usually described as a shadow ban or a hellban. Preventing a post from showing up in trends or searches is not a shadow ban. Google SafeSearch does not "shadow ban" explicit images or links.

Colloquially, I think many information suppression tactics get lumped under the term "shadow-ban" and that your definition is very narrow. It reinforces your position so I understand why you're defining it as such but it's not very convincing to anyone who doesn't share your position. Of course, that's assuming you're genuinely interested in persuasion instead of yelling into the echo chamber.

I do agree with your points about this not being a surprise and how some are over-exaggerating the severity of these leaks but I'm also slightly concerned over how blase "hackers" here on this site are treating the news. All around us are these huge, easy-to-abuse, global influence platforms with cozy backchannels to the most powerful bureaucratic state (US govt) that we've ever seen. And yet many comments I've seen so far are along the lines of, "No worries, this stuff happens all the time and is totally normal and okay. Move along now!" I would have expected more skepticism, cynicism and backlash from this crowd. Am I wrong?


I agree that colloquially its definition can be looser depending on context, but it's in bad faith that Weiss[0] quotes Twitter's then Head of Policy and Trust as saying Twitter doesn't shadow ban people when, in the very blog post[1] she's quoting, they clearly define shadow banning as, "deliberately making someone’s content undiscoverable to everyone except the person who posted it, unbeknownst to the original poster."

[0]: https://twitter.com/bariweiss/status/1601013855697588224

[1]: https://blog.twitter.com/en_us/topics/company/2018/Setting-t...
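For concreteness, that definition boils down to a render-time filter: the banned author still sees their own posts, everyone else doesn't. A minimal sketch (all names invented here, not Twitter's actual code):

```python
# Hypothetical sketch of the 2018 blog post's definition of shadow banning:
# a shadow-banned user's posts render only for that user.

def visible_posts(posts, shadow_banned, viewer):
    """Return the posts a given viewer can see.

    posts: list of (author, text) tuples
    shadow_banned: set of shadow-banned author names
    viewer: name of the user requesting the thread
    """
    return [
        (author, text)
        for author, text in posts
        if author not in shadow_banned or author == viewer
    ]

posts = [("alice", "hello"), ("bob", "world")]
banned = {"bob"}

# bob still sees his own post, so nothing looks wrong to him...
assert visible_posts(posts, banned, "bob") == posts
# ...but everyone else sees a thread without it.
assert visible_posts(posts, banned, "carol") == [("alice", "hello")]
```

The "unbeknownst to the original poster" clause is the key part: the author's own view is deliberately left untouched.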


> "deliberately making someone’s content undiscoverable to everyone except the person who posted it, unbeknownst to the original poster."

This happened to my tweet here [1]. It doesn't show up under the parent tweet [2]. It appears to be some sort of domain-based shadow filtering.

It's probably because of the cheap .win domain I bought for $2. Why would Twitter censor that link? One could speculate it was because patriots.win moved there from r/The_Donald. But they "don't shadow ban based on political viewpoints or ideology," right?

For the record, I built Reveddit because I perceived that r/The_Donald made heavier use of shadow moderation than the rest of Reddit. However I later learned that its use is common across Reddit and other sites.

Shadow moderation is nobody's friend. It was purportedly built to deal with bots, but bots know how to check the visibility of their content. It hurts genuine individuals the most.

[1] https://twitter.com/rhaksw/status/1594103021407195136

[2] https://twitter.com/TheFIREorg/status/1594078057895063553
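The check a bot would run is trivial, which is the point: fetch the public view of the thread (logged out, or from a second account) and see whether your post appears. A sketch with invented names:

```python
# Illustrative sketch of why shadow moderation is weak against bots:
# the bot compares what it posted against what an anonymous viewer sees.

def is_visible_to_public(my_post_id, public_thread_ids):
    """public_thread_ids: post IDs in the thread as fetched anonymously."""
    return my_post_id in public_thread_ids

# The bot posted id 42, then fetched the thread while logged out.
assert is_visible_to_public(42, [41, 42, 43]) is True   # not filtered
assert is_visible_to_public(42, [41, 43]) is False      # shadow-removed; repost
```

A genuine user has no reason to run this comparison, so they're the ones left unaware.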


Shadow banning was not built to deal with bots. The practice was popularized in various communities long before bots were a serious problem.

It was built to deal with jerks who, when moderated, would sign up under a new account, repost the same content over and over again, or invade discussions and derail them with vitriol or out-of-context nonsense – even if each individual post was original.

The whole point of the concept was that the jerks would FEEL like they were contributing to the discussion while actually being ignored, all while maintaining a static history and login that made them easier to moderate.


I think that might have been its original purpose, but I can't help but believe that this is evil.

People are quick to jump to the "It's their website, they don't have to do anything yada yada" argument, but there is an implied social contract between the users and the owners: whatever tier of service you're using (free, paid, ad-supported, whatever), you'll receive the same amount and quality of service as every other user on the platform.

I'm not against moderation, I'm against secret moderation that wastes the time of other people's lives. Just be forthright and tell people they are muted but can still participate.


Platforms say it's for spam [1], which most people interpret to mean bots.

I've never seen a platform provide an explanation like the one you give here.

[1] https://old.reddit.com/r/TheoryOfReddit/comments/8k5qh3/if_y...


You can look up the Wikipedia article on the practice: https://en.wikipedia.org/wiki/Shadow_banning

It has a fairly rich history, and evolution, as well as counter-measures, and counter-counter-measures and counter-counter-counter-measures.


That article barely scratches the surface of what's going on. Reddit alone has tons of secretive ways to remove or hide content, some of which I list here [1]. Others I mention in my talk [2].

[1] https://www.reddit.com/r/reveddit/comments/sxpk15/fyi_my_tho...

[2] https://cantsayanything.win/2022-10-transparent-moderation/


I'm talking about the origination and evolution of a specific practice, called, specifically, "shadow banning".

Yes, other techniques exist. Sometimes people, being imprecise, ignorant and/or lazy, call those techniques "shadow banning", but that isn't what they are.


> Sometimes people, being imprecise, ignorant and/or lazy, call those techniques "shadow banning", but that isn't what they are.

What umbrella term would you use for moderator actions that are kept secret from the author of the content? "Moderation" on its own can be secret or transparent, so by itself isn't an apt description.


You are vastly stretching the definition of shadowbanning if you're posting a public link that everyone (even people who aren't logged in to Twitter) can see, and then claiming that it's suppressed because it doesn't show up in the replies of a totally different account's Tweet.


It's not a stretch, and it fits the definition Twitter provided. You wouldn't be able to find the tweet unless I'd linked it. It's been shadow-detached from the parent.

Personally I prefer the term shadow moderation.


FYI, I think your tweet must be hidden if the user is logged out.

I'm logged in, and can see your tweet in the replies just fine. Still not shadowbanning.


You're mistaken. I tweeted twice, once with the .win link, once without. The one with the .win link does not appear even when logged in as a different username. It does appear for me when I'm logged in as the same username.

So there is shadow moderation going on. I'm not so bothered about whether it's called a shadow ban or shadow moderation. Also, I can see why some former Twitter employees want this distinction to be understood given the post they previously put out. I just think that's a hard sell for the public who is just coming to understand this issue. Even I didn't know about Twitter's 2018 post.

Finally, there was already substantial dispute over whether or not Twitter's initial claims were accurate. See "'Shadowbanning is not a thing': black box gaslighting and the power to independently know and credibly critique algorithms" by Kelley Cotter:

https://kelleycotter.com/wp-content/uploads/2021/11/Shadowba...


I just want to thank you for creating reveddit.


They explicitly say they don't shadowban people. Pattern matching for moderation is a completely different thing.


The public is still going to interpret that as shadow moderation of people's content.


Shadowbanning people means banning people, it's obviously separate from moderation of individual tweets. AFAIK the only subreddit that has banned me was the_donald, but I can obviously have my content removed if I post to a news subreddit from domains they consider unacceptable.

It is honestly silly to be surprised. Any community with just a few thousand users will ban entire domains, it's not surprising that they resort to banning entire TLDs at the billion user level.


Would most people consider an account shadow banned when it can post as normal, yet its posts are secretly set to have near zero visibility? The answer is self evident. Twitter tried to redefine shadowbanning to something they do not do, and then put out a post saying they don't shadowban. That is engaging in extremely bad faith. Using the definition as normally understood is not.

There was a post on Hacker News when this blog was released: https://news.ycombinator.com/item?id=17623306

One post there sums it up pretty well: Is it just me? Did I just read a blog post that said “We don’t do this thing I’m going to explain how we do the thing we don’t do.”


It doesn't matter what "most people" would consider, because Twitter provided their definition of the term at the time. People disagree about all sorts of terms, and that's totally fair, but what Weiss is doing is retconning her own personal definition of the term on Twitter's policy statements, which were unambiguous. She is, in turn, being unambiguously dishonest.

It's interesting how selective people are about the supposed Gell-Mann Amnesia effect.


Right, Twitter provided their own definition which delicately tiptoes around the unsavory issue and Weiss expanded the definition to suit her own self-interests.

How does that make what "most people" would consider not matter? Seems like a non-sequitur to me but maybe I'm not understanding your point.


Agreed, it's rare for any service to outright say "we shadow ban" as done here [1] for example.

And, most services still use a form of shadow moderation [2]. So it's perfectly fine to discuss here. Attempts to shut down conversation are just that.

[1] https://getstream.io/blog/feature-announcement-shadow-ban/

[2] https://news.ycombinator.com/item?id=33916414


Because the thread is about Weiss's claims. If you want to talk about how Twitter's definition of shadow-banning is problematic, or their moderation practices in general, start another top-level thread.


> yet its posts are secretly set to have near zero visibility

Whoever is following you and using the chronological timeline will see your tweets just fine as they happen.

Twitter deciding not to promote your tweet in the trending-ish/follows timeline doesn't seem close to shadowbanning to me.


Isn't near zero visibility the norm for most users?


I have been shadowbanned on Twitter and it is very different, even with no followers. I don't even know what caused it but it might have been reporting a person who was inciting violence (who subsequently committed said violence). Ever since then Twitter has been blocked on my network.


Do they have anything about how they were blacklisting accounts, unbeknownst to the original poster?


> Colloquially, I think many information suppression tactics get lumped under the term "shadow-ban" and that your definition is very narrow. It reinforces your position so I understand why you're defining it as such but it's not very convincing to anyone who doesn't share your position. Of course, that's assuming you're genuinely interested in persuasion instead of yelling into the echo chamber.

Reasonable criticism. I suppose Twitter representatives have probably used this same definition to squirm out from questioning before, and it's not a great look. Although sibling comments are pointing out that Twitter did, in a blog post, define it as identical to what I'm saying; so at least they were consistent in what they said pre-Musk.

On the persuasion front – I'll give it a try!

I think the reason that Twitter hid behind the narrower definition of a shadow ban is to avoid saying the harder-to-stomach truth that "information suppression tactics", as you put them, are essential to content moderation. Think about this – is HN's (supposed) voting ring detector an "information suppression tactic"? Is flagging posts? Why is Bari Weiss not posting breathless stories about them? I think the only difference is that zero people in Weiss's circle use this orange web site. I would love to be convinced otherwise, though.

> All around us are these huge, easy-to-abuse, global influence platforms with cozy backchannels to the most powerful bureaucratic state (US govt) that we've ever seen. And yet many comments I've seen so far are along the lines of, "No worries, this stuff happens all the time and is totally normal and okay. Move along now!" I would have expected more skepticism, cynicism and backlash from this crowd. Am I wrong?

No, but I think we had our moment of outrage around widespread surveillance by the US government in 2013 and the years after it. Also, almost none of Weiss's or Taibbi's breathless revelations are about intervention by the US government – a few are about the Democratic Party and most are about Twitter itself. The only response that I see by a sitting Government official anywhere is to suggest that the Twitter ban on a story is against the 1st Amendment [1].

----------------------------------------

[1] https://twitter.com/mtaibbi/status/1598838041371516929


It's not reasonable criticism, because Twitter provided their definition of "shadow-ban" at the time, and it's unambiguously not what Weiss is talking about.

You can disagree with the definition Twitter chose, but you can't retcon your own definition onto Twitter's policy statements.


I'll be frank - I couldn't give a rat's ass about Weiss or Taibbi. I'm not familiar with them and they're obviously trying to squeeze as much juice out of this story as they can.

However, I see a lot of people here dismissing the wider and (imo) more valid complaints about veiled moderation tactics by beating up these two and calling it a day, case closed. I don't care what specific narrow definition twitter chose on a whim in some blog post and I'm not concerned with strictly "shadow-banning". The fact is, these global influence platforms are easy to subvert by immensely powerful interests (state actors, billionaires, etc) and the power we're placing in these people's hands is incredible, way beyond what any authoritarian dictator of yesteryear could even dream up. And when how the sausage is made is ever so slightly exposed, it's dismissed and hand-waved away.

What could go wrong?


Basically every forum uses veiled moderation tactics, on the argument that if they unveiled their techniques, those techniques would be defeated. That's what dang argues on Hacker News; in fact, you can't even see whether there are penalties on your HN account.


The thread is about the false claims Weiss is making, not about your own opinions on moderation. What you should do, if you want to share those, is start a new top-level thread on this story. I'm sure what you have to say is germane to the story. It's just not germane to the thread you're commenting on.


OP made many points and I responded to one in particular with my opinion. Seems germane to me, although point taken about starting a top-level thread for better visibility/discussion.


You seem very set on keeping this top level thread on the topic you believe you won the argument on, and are refusing to discuss anything else in a very condescending tone. Just thought you should know your thinly veiled attempts at controlling the conversation to "win" aren't going unnoticed.

I'll also add that you're wrong about the topic of the thread, bavell's point is absolutely on topic and the only reason you're refusing to engage and call his argument a red herring is because you know he's right and don't want to admit it.


If a person makes an argument, and you rebut it with a bunch of unrelated arguments, the original arguer isn't obligated to address the unrelated arguments. You think they are, but in fact, that's the coercive argument, not mine.

The assumptive close might sell a used car, but "you just won't admit it" isn't especially persuasive on a message board.

If I wanted to control the debate (or however you'd choose to put it) on this benighted subject, all I'd have to do is keep posting on the thread, which immediately and thankfully got yanked off the front page by the flamewar detector. Oh, wait. You caught me! :)


I very much appreciate your good-faith response even though I'm not entirely convinced (yet?). Getting late for me so I have to bow out for now. I hope we as a community continue to discuss these issues, as it seems to be central to a lot of the problems we face today and will be facing tomorrow.


Twitter used a similar definition to GP of shadow-bans previously, when it became clear that some accounts might not show up in search results:

https://blog.twitter.com/official/en_us/topics/company/2018/...


If the rules are not being applied equally across the political spectrum, then your comment is simply obfuscation and misdirection, completely missing the point.


To the extent that rules don't get equally applied by social media platforms, they're generally biased in favor of the right.

However, the right, due to its inherent animus for large sections of society, breaks the rules so much more often and so much more egregiously (eg Babylon Bee) that actual enforcement decisions are applied against the right more often.


Nonsense. You just define the rules such that this is the conclusion ("everyone that disagrees with me is a nazi etc...")


The rules used to prohibit deliberate misgendering, which any reasonable person would regard as cruelty to a marginalized group. The right (such as the Babylon Bee) disproportionately engages in this action, so it used to get banned more for this.

This is also why anti-trans actions by right-wing state governments in the US have been consistently shut down by the courts (e.g. [1]). There is no evidence to support their actions -- they're driven by sheer animosity.

Nothing "nonsense" about this.

The way the right plays the refs is by claiming that the fact that the right gets banned disproportionately ipso facto means that the rules are biased. That's what's "nonsense" here.

[1] https://www.aclu.org/legal-document/bongo-productions-llc-et...

> Although at least one key supporter of the Act in the General Assembly justified its requirements in relation to supposed risks of sexual assault and rape, there is (1) no evidence, in either the legislative record or the record of this case, that there is any significant problem of individuals’ abusing trans-inclusive restroom policies for that purpose and (2) no reason to think that, if such a problem existed, the mandated signs would address it.


Again, nonsense. You could make these kinds of claims along any lines; for example, Christians are the target of discrimination for their beliefs, etc. You just choose the ones that align with left-wing ideology, and everything else is "hate" and deserving of censorship.


Noticing a startling lack of evidence in your posts here. You're in good company with those right-wing state governments over there.


Why do (extremely) simple concepts like this need a 3rd party to support them?

The kind of "evidence" that is normally presented is typically provided by 3rd-rate activist researchers who are attracted to dogma and simply come to whatever conclusion perpetuates that dogma.


So you still don't have any evidence for your claims then?

I've made claims here and presented clear evidence. You called it "nonsense" with no substantiation other than vague gestures toward Christians (which Christians?) being targeted (how?) for their beliefs (which beliefs?)

I understand that you're trying to drag me down to the meta level so you can beat me with experience, but I'm not willing to take the bait here, sorry.


Neither of us has provided meaningful evidence. We are already on the same level; I haven't dragged you anywhere.

You've just pulled up a bit of dross from a highly partisan organisation and are claiming that this is proof of something.

It would be like me quoting fox news as proof.


A judgment made by a federal court in the United States is "a bit of dross", apparently.

I checked your post history. It's pretty clear you are deep in an alternate reality. I hope you can find your way out of it some day.


Lol


> I'm also slightly concerned over how blase "hackers" here on this site are treating the news

There's a long standing phenomenon where the people working in tech have more visibility (or just insight) to witness business models slowly taking more and more advantage of the end user over the course of years, and don't notice that they've slowly hollowed out the standards they think they still hold.

For example: we all just sorta accept that some of our friends and relations work - or want to work - at places like Facebook and Google. There are so many of them that we wind up treating it like a morally neutral job, or else we're seen as weirdos.

When a layperson talks about their standards/expectations, they haven't had their barometer worn down over the years. All they hear back from those of us in the know is some version of the "I'm not touching you" game.

"No we'd never do X, we do a lot of things that are in the same category you place X. Things that bother you for the same reason you don't like X. Things that would make you seethe if the tech wasn't impenetrable to you. But we'd never, ever, X."

https://www.youtube.com/watch?v=6_HBmZuJlHs


> I'm also slightly concerned over how blase "hackers" here on this site are treating the news. All around us are these huge, easy-to-abuse, global influence platforms with cozy backchannels to the most powerful bureaucratic state (US govt) that we've ever seen. And yet many comments I've seen so far are along the lines of, "No worries, this stuff happens all the time and is totally normal and okay. Move along now!" I would have expected more skepticism, cynicism and backlash from this crowd. Am I wrong?

I don't think you are wrong. I do think this is a powerful tool that should be scrutinized. But I also can't think of any better alternative. So, in the meantime, ISTM that pushing back on how it's talked about (as a great conspiracy and revelation, when it's well-known and the best-known solution to a problem we have) is the right way to go about it.

And note that not so long ago Musk himself said that this solution is what he wants for Twitter [1].

I just think when talking about the "Twitter Files", the more interesting story is how Musk is spreading disinformation and propaganda to increase his own wealth and power and how supposed journalists are uncritically helping him do it.

[1] https://twitter.com/whstancil/status/1601020232994201601


> It reinforces your position so I understand why you're defining it as such

The same could be said of Weiss or (by implication) you. "Shadow banning" sounds scary, which is precisely why people are flinging it around carelessly but also precisely why they shouldn't.


Elon wants to do this very thing. He wants bad actors to not be promoted and instead he wants for people to have to go to their profile.


She explained in the thread:

What many people call “shadow banning,” Twitter executives and employees call “Visibility Filtering” or “VF.” Multiple high-level sources confirmed its meaning.

“Think about visibility filtering as being a way for us to suppress what people see to different levels. It’s a very powerful tool,” one senior Twitter employee told us.

“VF” refers to Twitter’s control over user visibility. It used VF to block searches of individual users; to limit the scope of a particular tweet’s discoverability; to block select users’ posts from ever appearing on the “trending” page; and from inclusion in hashtag searches.

All without users’ knowledge.

“We control visibility quite a bit. And we control the amplification of your content quite a bit. And normal people do not know how much we do,” one Twitter engineer told us. Two additional Twitter employees confirmed.
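Mechanically, the "VF" described above reads like a set of per-account flags consulted at query time for search, trends, and hashtags. A hedged sketch of that idea, with flag names invented here (the thread doesn't disclose Twitter's actual schema):

```python
# Speculative sketch of per-account "Visibility Filtering" flags as the
# thread describes them. All names are illustrative, not Twitter's code.
from dataclasses import dataclass

@dataclass
class VisibilityFlags:
    search_blacklist: bool = False    # account excluded from search results
    trends_blacklist: bool = False    # tweets excluded from the trending page
    hashtag_exclusion: bool = False   # tweets excluded from hashtag searches

# Flags live server-side; the flagged account is never notified.
FLAGS = {
    "@example_user": VisibilityFlags(search_blacklist=True, trends_blacklist=True),
}

def filter_search_results(accounts):
    """Drop accounts whose flags exclude them from search."""
    return [
        a for a in accounts
        if not FLAGS.get(a, VisibilityFlags()).search_blacklist
    ]

assert filter_search_results(["@example_user", "@other"]) == ["@other"]
```

The point of modeling it this way: none of the flags delete content or ban the account, which is exactly why both "we don't shadow ban" and "they suppressed visibility" can be argued from the same facts.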


> The "secret" aspect of all this can be largely ignored.

I bet you would not like for your comment here to be secretly removed. In fact, many users are shocked to discover when this happens, as I share here [1].

[1] https://news.ycombinator.com/item?id=33916519


Reddit and HN are...not Twitter? Guess which one of the three is regularly used by Taylor Swift.

The big difference is that Twitter links are routinely used by journalists and such who will notice and specifically call out Tweets as being deleted. A shadow ban / hellban on Twitter would be much more easily noticed by the media.

On the other hand, no one in the media to a rounding error cares about shadow bans on Reddit or HN.


Ah right, only Taylor Swift and journalists matter. Nevermind where the next Taylor Swift and journalists come from. The platforms will decide for us!


Twitter is not "deciding" anything for you. Platforms like Truth Social and Mastodon exist. The next Taylor Swift and journalists are free to use them.

In fact, many have! It's going to be some Mastodon instance in my case, but not saying it couldn't be Truth Social for you or other folks that think they do a better job at content moderation than Twitter does. This is more productive than holding up Twitter's (admittedly imperfect) content moderation as part of some thought-molding conspiracy.


> Twitter is under no obligation to tell its users exactly how their Tweets are moderated.

Twitter has no legal obligation to do so, but I think a lot of people would hesitate to engage with a platform that actively has its thumb on the scale. I would really rather not have the thought in the back of my mind that invisible, unaccountable forces are the reason my views aren't popular, rather than simply that I'm an asshole with stupid opinions.


Well, in that case, I have some unfortunate information for you about Facebook, Instagram, hackernews, and much more.


> I think a lot of people would hesitate about engaging with a platform that actively has its thumb on the scale.

I think a lot of people would hesitate about engaging with a platform that doesn't actively have its thumb on the scale. Those platforms exist, they are abandoned.


WhatsApp has a similar internal dashboard for moderation/managing abuse. WhatsApp is E2E encrypted, so it's entirely based on visible user actions. I'm sure Snapchat, TikTok, Tumblr, YouTube, Pornhub and others that have any moderation tools are in the same boat. There is no smoking gun here, despite the breathlessness from some quarters (including Musk himself[1]). That "Show DMs" link did send chills down my spine though; sometimes I forget Twitter DMs are not E2E encrypted.

This has a lot in common with Mudge's whistle-blowing case: there is nothing tangible to those familiar with the arts, but it appears to be (or ends up being) red meat "Aha!" gotcha material for those who have never worked in similar environments.

1. Which raises the question: does he really think Twitter's mod tools are unusual, or is he feigning ignorance for some reason known to him?


> I'm sure Snapchat, TikTok, Tumblr, YouTube, Pornhub

They do, I list some here [1]

> There is no smoking gun here

Secretive moderation being common doesn't make it normal or right. It's certainly not widely known. For example, I was on a podcast on Aug. 24 this year talking about shadow moderation [2], and the host says around 27:50 that Twitter is not doing this, excepting only "the algorithm". His point, I believe, was that humans were not manipulating content, and I agreed that I was unaware of whether that was happening.

Plus, if the use of such secretive tooling were widely known, there would be no need to keep them secret.

The claim has always been that secretive moderation is used to deal with spam. Yet, spammers are the most likely to check the visibility of their content. So the net effect is that secretive censorship hurts genuine individuals the most.

[1] https://news.ycombinator.com/item?id=33916414

[2] https://www.wholewhale.com/podcast/what-is-shadow-moderation...


> So the net effect is that secretive censorship hurts genuine individuals the most.

From first principles, the moderators would not consider shadow-banned individuals as 'genuine'.

Since you brought up the subject of spammers - what's your take on email spam transparency, and the use of spam folders? Should MTAs (like GMail or Exchange Server), in the interest of transparency, immediately reject messages categorized as spam rather than accepting them and quietly placing them in the spam folder, never to be read by a human? This would absolutely solve the problem where emails from genuine users end up blackholed in the spam folder.


Good question. No, I wouldn't support that. The current system works and there is still transparency. One of the two people in the conversation has a chance to review the content.

It may be that inboxes are different from public or open-ended conversations. What often happens in the moderation of public conversations on social media is that neither person is aware that any removal or demotion occurred. So, even the user(s) whose content was not removed does not have a chance to rescue the content from spam.
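The two policies being contrasted can be sketched like this (names and reply strings are illustrative; real MTAs use SMTP reply codes along these lines, e.g. 550 for a permanent rejection):

```python
# Sketch of the tradeoff discussed above: reject at SMTP time (the sender
# gets a bounce, so the decision is transparent to them) vs. accept-and-file
# (the recipient can still rescue the message from the Spam folder).

def handle_message(is_spam, policy):
    """policy: "reject" bounces spam at SMTP time; "file" accepts everything."""
    if is_spam and policy == "reject":
        return ("550 rejected", None)     # sender is told; message never stored
    folder = "Spam" if is_spam else "Inbox"
    return ("250 accepted", folder)       # recipient can review the Spam folder

assert handle_message(True, "reject") == ("550 rejected", None)
assert handle_message(True, "file") == ("250 accepted", "Spam")
assert handle_message(False, "file") == ("250 accepted", "Inbox")
```

Either way, at least one party to the conversation learns what happened, which is the transparency property the parent comment says public-thread shadow moderation lacks.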


Yes, the secrecy does matter. I can immediately think of two big reasons:

1) We should censor for how things are said, instead of what is said. When censorship is carried out secretly, it will inevitably trend towards abuse. One of the examples given in these leaks is that @DrJBhattacharya [1] was secretly put on a trends blacklist because of what he was saying. He's a well-spoken doctor and Stanford epidemiologist, but he publicly spoke against the political measures being taken against COVID, such as lockdowns. So he was blacklisted. That's just so incredibly wrong, and enabled only by secrecy.

2) Again with the goal of adjusting how things are said, instead of what is said: if you think somebody is an idiot, saying so accomplishes absolutely nothing besides sending discourse to the level of a dumpster dive at a cheap seafood joint on a hot summer day. By contrast, engaging in good faith and simply discussing the topic not only keeps the conversation healthy but might actually change some minds. Secretly censoring the shitposter does not encourage him to engage in a more productive fashion, because he may not even realize he's behaving in a way that isn't considered acceptable, let alone attempt to correct it.

---

Literal spam is a different topic, and I think conflating the two is disingenuous. Doing things like blacklisting doctors because of what they say has little to do with banning somebody trying to impersonate other accounts and spamming 'give me 1 and I'll give you 2' crypto scams.

[1] - https://twitter.com/DrJBhattacharya

[2] - https://www.nytimes.com/2022/01/04/briefing/american-childre...


You are using phrases like "blacklisting doctors" and "censorship" while discussing (essentially) a message board that is used to sell advertisements. I don't think that's a reasonable position.

Twitter is a private company that wants to sell advertisements by enabling people to have happy conversations. Fundamentally, they are going to optimize for that, and not whatever high-minded ideals of censorship that we hold dear to our hearts. If a scientist is yelling about mysterious conspiracies and causing arguments on the site, they will use whatever least harmful tool is available to them to shut up said scientist.

It's certainly possible that Musk-owned Twitter will similarly pivot to selling advertisements for MAGA hats and guns or whatever; and censor all talking points that run counter to their views. This would also be a reasonable course of action for them, and many of us have prepared our migrations away from the site in case this happens.


Well, as long as you are in fact consistent about the position that it is "(essentially) a message board that is used to sell advertisements", with the presumable implication that it is unimportant, replaceable, and/or not an important venue for political discussion, and that it is therefore not a big deal if it is run into the ground or flipped toward the cause of a political group you oppose, then your position is sensible enough. However, this whole discussion is happening against the backdrop of a seemingly overwhelming majority of people in this community and in the wider US political sphere asserting that it is in fact a big deal, which is inconsistent with dismissing allegations of wrongdoing via an argument from fundamental irrelevance. The majority position on the importance of Twitter is surely worth addressing; if you disagree with it, you'd do better to raise your relevance objection against that majority, rather than against a minority arguing with the majority about a different aspect under the shared premise of relevance.


> However, this whole discussion is happening against the backdrop of a seemingly overwhelming majority of people in this community and the wider political sphere in the US asserting that it is in fact a big deal

I disagree with this; you are lumping two groups together here. Yes, the wider political sphere in the US might be concerned because a billionaire can upend their carefully orchestrated social media campaigns. And I personally couldn't care less about that. For that matter, do you care how well your favorite politician's next Twitter ad campaign goes?

Users in this community, on the other hand, are almost certainly technically proficient enough to back up their Tweets and quickly migrate to Mastodon or Tumblr or their favorite social network, should the need arise. In fact, dozens of us have already done so! For us, Twitter turning into Truth Social tomorrow would not be a Big Deal. This is consistent with my statement that the (probably) slightly left-leaning moderation on old Twitter was also not a Big Deal.


I’m fairly certain that the majority of people don’t give a damn about Twitter. The numbers I’ve seen say that less than a quarter of adults in the US use Twitter. It does appear to be very popular among journalists and politicians though.


> Twitter is a private company that wants to sell advertisements by enabling people to have happy conversations.

No, Twitter's mission statement is "to give everyone the power to create and share ideas and information instantly without barriers" [1]

Nothing in there about 'happy conversations' and, while selling advertising has been and may continue to be a means to fund this mission, it is not part of their mission. Means are not ends. Companies must run at a profit to survive but they must also work towards and for the mission they serve and the reason they were formed. The Twitter Files have shown that they were failing at this mission (especially the 'without barriers' part), either because they lost focus or were corrupted, or both.

[1] https://mission-statement.com/twitter/


Enron's company values were "Respect, Integrity, Communication and Excellence". https://www.nytimes.com/2002/01/19/opinion/enron-s-vision-an...


So what's your point? That Twitter's mission statement has always been purely empty PR?


Your previous comments are also correct. This has been the case with YouTube's empty 'mission statement':

> Our mission is to give everyone a voice and show them the world.

> We believe that everyone deserves to have a voice, and that the world is a better place when we listen, share and build community through our stories.

Despite this, they issue ridiculous bans to users who never broke their ToS, and YouTube never explains which ToS provision was supposedly broken.

[0] https://about.youtube


> If a scientist is yelling about mysterious conspiracies and causing arguments

"Mysterious conspiracies" according to whom? The moderators who couldn't spell mitochondria or tell the difference between a gene and an allele? Insults aside, hasn't history given us many "conspiracies" that turned out to be true? Like heliocentrism, like climate change (in case people forgot how many denied it decades ago), like the theory of continental drift, like Darwin's theory of evolution, like capitalism in a communist state, like the opposing view of McCarthy in the 50s.

I mean, how can Twitter's moderators, many of whom could barely understand college-level STEM or any subject, be so sure that they are on the right side of the history?


> I mean, how can Twitter's moderators, many of whom could barely understand college-level STEM or any subject, be so sure that they are on the right side of the history?

They don't need to be, in order to preserve the peace on their platform (for whatever definition of peace).

If you start yelling about mitochondria or climate change at Disneyland, you will be unceremoniously ejected for disturbing the peace at the Magic Kingdom. On the other hand, you can yell about mitochondria or climate change to your heart's content in the local town square all you want, courtesy of the First Amendment.

I don't understand why people think that Twitter, a private company, should be more like the town square than Disneyland.


> I don't understand why people think that Twitter, a private company, should be more like the town square than Disneyland.

In a way I agree with you. Twitter is a private company, and it does not necessarily have legal or moral obligation to be neutral or be a town square. Its staff should be perfectly safe to be biased towards left or right, or moderate content with or without facts or consistency.

I also agree with you that Twitter's staff needed to consider brand safety and how to retain advertisers. That's also the theory of Stratechery and why Ben Thompson thought that Elon needed to find revenue stream other than ads to really push his ideal of free speech on social media.

On the other hand, I do wish that Twitter were a town square and that it had neutral, or at least consistent, moderation policies, for practical reasons: millions of people get news or information from Twitter on a daily basis. Millions of people engage in discussion on Twitter every day. As a result, Twitter has become essential for facilitating and spreading information. In that regard, moderating debates, especially the controversial ones, can be harmful to the public.

Imagine what it would be like if you were Boltzmann yet you couldn't discuss your theory, or you were John Snow yet you were silenced for thinking cholera was not spread by miasma. Miasma was the mainstream theory in the mid-1800s, no? In the eyes of the people of that time, Snow was spreading "misinformation" that "could endanger people's lives", no? We could safely bet that if there had been a Twitter then, its staff would have sided with the miasma theory, no?

Another reason is in the spirit of the "veil of ignorance". Society changes. People change. We're progressive today, but we can be conservative tomorrow. We can be left-leaning today and love projecting and attacking people's motives, but we can be right-leaning tomorrow and declare heresy this and evil that. I bet Robespierre believed that no one he guillotined was innocent and that he could stay in power long enough. I bet many people believed that the Protestants of St. Bartholomew's Day Massacre deserved death. I bet the elementary school students in China believed they were righteous when they beat their teachers and school principals to death with knuckled belts. Without the belief that we can express our opinions in public safely, there is no guarantee that the truth will come out quickly, and no guarantee that your group will be safe, no matter how dominant you are for now.


> Yes, the secrecy does matter.

Okay, good start.

> We should censor for how things are said, instead of what is said.

What? No. Platforms can censor whatever they want. The point is that users should be told when their content is actioned. Mod actions shouldn't be kept secret from the authors of content.

Therefore, we should be concerned with how platforms are censoring, and not what they are censoring. That is, a platform that tells you when it censors is more desirable than one that keeps its censorship secret from you.


> We should censor for how things are said, instead of what is said.

No, there are certain topics that are simply outside of the bounds of acceptable discourse. For example, Holocaust denial is inherently uncivil and should be off limits from every general discussion forum, regardless of the wording the denier chooses to use.


> I don't think Weiss understands what "shadow ban" means.

She very well may understand, but is assuming (correctly) that her audience does not.


Who is her audience and why do you think her assumption is correct?


> - The "secret" aspect of all this can be largely ignored. Twitter is under no obligation to tell its users exactly how their Tweets are moderated.

Sure, but maybe they should have some obligation? Maybe the people "gasping in horror" at this information actually did have more expectations around transparency and impartiality than you did, and maybe they're right to have those expectations?

> I don't think Weiss understands what "shadow ban" means. It means that you post Tweets, but no one ever sees them. None of the practices she describes amount to what is usually described as a shadow ban or a hellban. Preventing a post from showing up in trends or searches is not a shadow ban. Google SafeSearch does not "shadow ban" explicit images or links.

You're splitting hairs. Laypeople take shadow banning to be various forms of suppression. This is the same as the hacker/cracker debate of 20+ years ago. Terms are defined by their usage, and once laypeople start using a term in a slightly incompatible way to how you've understood it, you've already lost.

Overall, I don't quite get HN sometimes. Lots of people are saying this is no big deal, that every site has secret blacklists, that every site does moderation and suppresses things, and so on, so what's the problem?

Isn't it obvious that the way you do things can matter?

For instance, most techy people I've spoken to think that no-fly lists are problematic. Why were you put on a no-fly list? How can you petition for a review of your status if you think your placement was unjust? Who gets to decide who goes on that no-fly list? Is there any periodic review of such decisions to ensure there is no systemic bias or that the no-fly list isn't being abused for personal or political ends? There are many documented examples of people being targeted using the no-fly list for political reasons, for instance.

All of these questions apply to secret suppression lists at social media companies. If Twitter is to become a central place critical to democratic discussion, as Musk has stated is his intent, these questions are just as relevant as the government deciding who can and cannot fly. The tweets Weiss posted have a number of examples of politically motivated suppression, and you shouldn't so casually dismiss that.

Combined with Taibbi's thread showing the government had some active involvement in moderating content, this is setting up a troubling precedent. This is a conversation that needs to take place.


> The "secret" aspect of all this can be largely ignored. Twitter is under no obligation to tell its users exactly how their Tweets are moderated

Let's play a hypothetical game. If restaurants were under no obligations to tell you what ingredients were in their food, but some did anyway, would that change your restaurant eating habits? If so, how?


> Overall, I don't really see too much that would cause me to gasp with horror.

I didn't "gasp", but silently blocking an esteemed scientist is indeed horrible.

He, and who knows how many others, presented sensible analyses against extended lockdowns in schools. This dissenting view, like the lab-leak theory, was systematically blocked from ever reaching journalists' eyes, and thus from reaching parents' minds.

The harm done to an entire generation of children is very real. The opportunity cost from wasting vaccinations in children is very real. The discussion was warped, and the narrative was rammed down people's brains. There are second-order effects with that kind of brain power-washing at mass scale.

This wasn't an isolated incident either. Literally millions of posts were removed and shadowbanned to various degrees. No one at any of these companies has suffered any semblance of accountability in any form.


> This dissenting view, like the lab-leak theory, was systematically blocked from ever reaching journalists' eyes, and thus from reaching parents' minds.

I would hope that journalists conduct their research through better means than solely reading Tweets. That would be an exceptionally lazy standard, one which would merit a failing grade for a high school paper.

That being said, "systematically blocked from ever reaching journalists eyes" seems basically false given that this person co-authored a story that was published in the Wall Street Journal! [1]

> Literally millions of posts were removed and shadowbanned to various degrees. No one at any of these companies has suffered any semblance of accountability in any form.

Being punished for exerting control over what is published on your own private web site seems like massive government overreach to me. I don't think anyone would administer web forums if the government started telling them how they could and could not moderate posting.

----------------------------------------

[1] https://www.wsj.com/articles/is-the-coronavirus-as-deadly-as...


Yes someone published a story in the Wall Street Journal, but this thread is talking about Twitter. Twitter is not exonerated because some other outlet did something differently.

The point here is that they were specifically responding to government requests to censor information. That is truly government overreach, but in the opposite direction to what you claim.


>Being punished for exerting control over what is publish on your private web site seems to me like massive government overreach to me. I don't think anyone would administer web forums if the government started telling them how they could and could not moderate posting.

Just wait until you realize that corporations at a certain scale become a form of government themselves!


> I would hope that journalists conduct their research through better means than solely reading Tweets.

That's not a good reason to suppress scientists' tweets.

Also, many of the best journalists of the past decades have been loudly warning of being shadowbanned, suppressed, deranked, etc.

> I don't think anyone would administer web forums if the government started telling them how they could and could not moderate posting.

A large part of these Twitter reveals show just how much government connections pushed extreme "moderation" (read: censorship) of specific issues.


Not really. Biden just asked Twitter to take down his son's dick pics.


> silently blocking an esteemed scientist is indeed horrible

My understanding is that anyone using the chrono timeline could still see his tweets as they happened. Twitter deciding not to promote his tweets in the algo timeline seems far away from blocking.


[flagged]


Yes, that's the one.

Are you implying that he's not esteemed? He is.

He was right about lockdowns on school children having huge potential for harm. And it was very, very wrong to censor such posts.

And again, this was far from an isolated incident. Millions of posts were removed; tens of millions were shadowbanned, delisted, deranked, hidden, suppressed, etc.

Views questioning natural origin, official policy, vaccinating children, or school lockdowns were smeared as anti-science and sinister... The irony might be funny if it weren't so harmful.


Twitter should have been more aggressive with taking down COVID misinformation, not less. I saw "#hydroxychloroquine" trend far too often, with posts advocating using it as a substitute for vaccines at the top of the results upon clicking it.


Are you calling Dr. Bhattacharya's warning about the harmful effects of school lockdowns on children "COVID misinformation"? How so?

... Did Dr. Bhattacharya promote hydroxychloroquine? I don't think he did. Even searching for those terms, I'm not getting anything.


I don't like the term misinformation. I don't think Twitter should have restricted his tweets. However, it's important to remember Bhattacharya was wrong.

In the op-ed posted above he guessed that the predicted death toll of 2-4 million people was off by multiple orders of magnitude. He said that dramatic interventions like lockdowns would be justified to prevent that kind of death toll, but that COVID was nowhere near that deadly.

So far COVID has killed about a million Americans. So by the logic he lays out in that op-ed his policy proposals were wrong.


We can never know how many people Covid killed in America, because the rubric in the US is to count as a Covid "death" any death where the deceased tested positive for Covid. That group may include people who would not have died had they not had Covid, but it also includes many people who would have died regardless of Covid positivity.

The rubric also counts as a Covid hospitalization anyone hospitalized who tests positive for Covid while in the hospital.

In both cases, all we can say is that somewhere between none and all of them are actually in that condition because of Covid.


We can (and have) validated that measurement by comparing the total number of deaths of all causes to analogous past years.


He was responding to an estimate that placed the death toll at 3.4%.

The highest estimates now seem to be around 0.28%.

That's more than an order of magnitude. He wasn't all that wrong.

Also, he advised focusing efforts on the most vulnerable; the elderly and people with comorbidities. He was very right there too. All those vaccines into kids and lockdowns of schools after the elderly were triple-vaccinated was a stunning misallocation of resources.


While I don't think many of Jay's points were wrong, he sort of established himself as an anti-establishment person. I think if he had acted slightly differently, we'd all be praising him for keeping us from making a big mistake.

But the reality is that the medical establishment has an immune system honed by decades of vaccine denialism (remember "vaccines cause autism because mercury") and tends to overreact when people- acting in good faith- question its findings publicly.


The medical establishment brings on a lack of trust due to its close ties with pharmaceutical companies, whose aim is simply to make profits from the "medicine discovery of the day", no matter the consequences.

If the medical establishment (doctors, scientists, researchers, etc.) took no funding from pharmaceutical companies, it might help their position. Taking funding from companies that have so often been found guilty of wrongdoing just poisons people's view of the whole system.


[flagged]


Why are you being so condescending?


That really was the only question I had.


> Preventing a post from showing up in trends or searches is not a shadow ban. Google SafeSearch does not "shadow ban" explicit images or links.

SafeSearch is an adjustable setting; is that really a good analogy for this?


If we had insight into why the restrictions were applied, it would likely be more alarming. But this response in particular is a product of desensitization. You can't argue that morals and views haven't shifted, even if you were a mod at some point, especially now, e.g. post-Covid and post-Trump.


Why is anyone surprised that the major/globally operating social media companies have extensive internal systems for controlling the content that appears via their platforms?

Irrespective of whether or not you agree with individual decisions, the fact that such systems exist should not be a surprise. These companies operate within legal frameworks across multiple jurisdictions that have endless variations of restrictions on speech and content.

What surprises me is that there are people that whine about this without proposing or creating an alternative.


It’s not the fact that these systems exist, it’s that Twitter denied that they did for years, despite all the evidence to the contrary.


> Twitter denied that they did for years

You yourself literally posted a link from 4 years ago that demonstrates the opposite of this claim. How is this comment so high? I feel like I'm at a UFO convention and people just want to believe.


> people just want to believe

Accurate. There’s a strong contingent of folks here that believe “the leftists” are oppressing them and “Twitter silences our voices” is a big part of the conspiracy theory.

Musk is throwing fuel on the fire, because he also just wants to believe.


The hilarious part too is that the same contingent also ignores the screenshots showing that the "Libs of Tiktok" account had special rules around moderating it. So much for "silencing conservative thought" (if you can call that Twitter account "thought").


@libsoftiktok is not a left-wing account. The name is derogatory.


That's the point. The special rules were "don't moderate this account at all without getting high level approval".


We've had two releases of documentation that prove what you are mockingly-dismissing here, and you pretend that it's still all in people's heads. What am I missing?


They don't prove that. This reasoning is like saying the law oppresses men because they are disproportionately convicted of violent crimes.


Don't read anything into this, but can you see the next step here if you replace "men" with "black men"? Is this actually the point you want to make?


I don't see how that changes anything except to make the analogy politically fraught and easier to derail. The logic remains the same. Certainly, conservatives have no problem with the logic if you modify the analogy as you have suggested.


My only point is that, as with anything (and especially with crime statistics), "it's just not that simple" and nuance matters: in the modified analogy, saying that no systemic racism at all shows up in the way the prison stats play out seems naive at best, and it seems similarly naive to assume there's no bias in this case.


> My only point is that, as with anything (and especially with crime statistics), "it's just not that simple" and nuance matters

If this is actually your point then you are supporting root's argument, not rebutting it. Root was pointing out poor causal relationships determined from lack of recognition of confounding variables. You just listed another (substantially more famous) example of such a relationship.


Obviously nothing is "just that simple", if that's your only point so be it, but that isn't telling us much since this is true of literally everything.


That was such a bad faith argument/read that I don't even know where to begin.


I would like to see some actual numbers. I am sure they have enough data that could either prove this claim or not.


And I think everyone here is hung up on the literal definition of shadow banning, and not the CONCEPT of monkeying with people's posts without their knowledge or awareness. It's the same thing, and I feel like everyone wants to give them a pass because of pedantry.


Their user agreement says they may limit the visibility of tweets. Is their moderation transparent? No. Do they promise that it will be? Also no. I'm fairly sure my own account has had some visibility limits on it for ages, but I don't think about it much.


I think it's largely a disagreement on definitions - tweet 7 in the thread even hints at this:

>>>7. What many people call “shadow banning,” Twitter executives and employees call “Visibility Filtering” or “VF.” Multiple high-level sources confirmed its meaning.

My definition of "shadow banning" (or hellbanning) is that the user's tweets/posts are not visible to anyone except them; Twitter was not doing that. Twitter was clear in saying "oh, we limit visibility, don't allow certain things in trending topics, etc." and apparently some people think of that as shadow banning.
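To make the definitional difference concrete, here is a minimal, purely hypothetical sketch (none of this reflects Twitter's actual code; the `User`/`Tweet` types and the surface names are made up for illustration) of a shadow ban versus visibility filtering:

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    shadow_banned: bool = False        # hypothetical flag: hellban
    visibility_filtered: bool = False  # hypothetical flag: "VF"

@dataclass
class Tweet:
    author: User
    text: str

def visible_tweets(tweets, viewer, surface):
    """Return the tweets `viewer` can see on a surface
    ('timeline', 'search', or 'trends')."""
    out = []
    for t in tweets:
        # Shadow ban: the tweet is invisible to everyone except its author.
        if t.author.shadow_banned and viewer is not t.author:
            continue
        # Visibility filtering: the tweet still shows on timelines,
        # but is excluded from search results and trending topics.
        if t.author.visibility_filtered and surface in ("search", "trends"):
            continue
        out.append(t)
    return out
```

Under this toy model, a visibility-filtered author's tweets still reach followers' timelines but vanish from search and trends, while a shadow-banned author's tweets are visible to no one but themselves; that is the distinction being drawn above.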


> Abusive and spammy behavior. When abuse or manipulation of our service is reported or detected, we may take action to limit the reach of a person's Tweets. Learn more about actions we take, including temporary and permanent account suspensions, and limiting account functionality.

https://twitter.com/ashleyfeinberg/status/160101566099344998...


By all means, everyone, feel free to be pedantic about the language here and defer to their weasel words, in the direct light of the documentation that has been released, because you agree with the editorial direction. It has been SPECIFICALLY revealed that they frobbed the knobs when they didn't like what was being posted, while admitting that their own policies weren't being triggered and they had no justification.


> It has been SPECIFICALLY revealed that they frobbed the knobs when they didn’t like what was being posted, while admitting that their own policies weren’t being triggered

Where was this revealed? I saw Bari Weiss claim it happened but the screenshot she posted as evidence specifically says the account violated their policies.


It's literally in the same thread!

https://twitter.com/bariweiss/status/1601020845224128512

I guess you're reading this and saying that "indirect" violations are the same thing as direct, i.e., defined, violations? Is that what's happening? People downvoting me are being absolute pedants about the definition of "shadow banning," and then just give this part a pass and say it's fine?

I guess this is the world we live in now, in every respect. There's so much STUFF going on, and so much written about the stuff, that people can choose to focus on whichever half they want, and find plenty to support their position. But I still live in a world where the general IDEA is the main thing, and Twitter has been shown to do things outside of their stated policy. And, sure, as someone else pointed out, they're allowed to do whatever they want. But they didn't have to hedge and lie about it.


They literally had a press conference about most of these features in 2018.


> Twitter denied that they did for years

Twitter has denied what for years? If you're implying they denied that they moderate, delete, or derank content, that is absolutely not true.


> it’s that Twitter denied that they did for years

When? I haven't seen any evidence of this.



That's not shadow-banning

They fucked up and admitted it (what algo is infallible?)

The fuckup wasn't targeted, it was algorithmic, and it affected both sides of the aisle.

Your link literally demonstrates the opposite of your claim.


Unless I’ve missed something, there’s still no evidence of Twitter shadow banning users. Weiss incorrectly calls a bunch of things “shadow banning”, but nothing that matches the actual accepted definition.


Limiting reach is not the same as shadow banning which has a specific meaning.


"Limiting reach" is what Twitter users think shadowbanning is. I've seen major accounts declare that, because fewer people are retweeting them than they think is normal, they must be shadowbanned. Obviously, if they were actually shadowbanned, nobody would be retweeting them.


That's not a good definition. If I have 0 followers I have very limited reach. You need a consistent definition. No one thinks downweighting content is shadowbanning either (unless that weight is 0, but then we're back to the normal definition).


This seems to say what they did wasn't how Twitter defines shadow banning.

https://twitter.com/bendreyfuss/status/1601019554980761600?s...


Reminds me how certain governments redefined mass surveillance then denied it.


Is it really redefined though? IIRC it's how Reddit also implements shadow bans: no one else can see your messages, but to you everything looks normal.


And the systems were overwhelmingly used to throttle one type of political viewpoint.


Indeed, and here is their tweet saying they do not:

https://twitter.com/twitter/status/1022658436704731136?s=46&...

Isn't there a fiduciary duty to be honest with their investors?


Fiduciary duty is a responsibility to act in someone's best interests (the investors in this case)[1]. The requirement to not make misleading statements affecting investors is a different requirement [2].

To the substance of your statement, they don't shadow ban, so that's not a misleading statement. The fact that a lot of people take an extremely broad view of what shadow banning is doesn't change that. Twitter actually define shadow banning in the link in the tweet you posted, and explain what they do and why that's not shadow banning.[3]

[1] Very good explanation here https://www.investopedia.com/ask/answers/042915/what-are-som...

[2] Not to "misrepresent" material facts, which is a form of securities fraud https://www.investopedia.com/terms/m/misrepresentation.asp

[3] The original link is down but I found it in the internet archive. https://archive.vn/m4bT4


Musk went through with the deal because he couldn't prove Twitter lied to investors, so I see no reason to trust him now.


It's generally assumed investors can read. https://blog.twitter.com/en_us/topics/company/2018/Setting-t...


> without proposing or creating an alternative

They have an alternative in mind. It's called unfettered free speech.

Problem for Twitter is that advertisers aren't interested in a fancier 4chan.


If Twitter content becomes similar to 4chan content, advertisers will treat Twitter the way they treat 4chan.


> Why is anyone surprised that the major/globally operating social media companies have extensive internal systems for controlling the content that appears via their platforms?

No one is. These Twitter Files are just building on a right wing prior of them being shadow banned. But they have yet to demonstrate that right wingers are specifically targeted. They don't show why these accounts got flagged (the crux of the issue and none of this means anything without demonstrating this knowledge) but just show prominent right wingers that did get flagged. We just get left with more questions. Do prominent left wingers also get flagged? Do prominent left wingers act in ways that would similarly be flagged? Is flagging biased? (I'm sure it is, but let's measure it)

It is a common statistical manipulation: leading someone to causal conclusions via selection bias (sometimes called "malinformation"). While not directly said, these tweets (and the previous ones) are targeted to make the reader believe that Twitter silences right wing voices. This may very well be true, but they have yet to provide strong evidence of this. Just samples. Problem is, we don't know anything about the distribution of samples, so we have no means of determining if they are cherry-picked or not.


This is exactly right.


If this[0] is accurate, then 99.7% of political contributions made by Twitter employees went to Democrats in 2022.

Is it unreasonable to suspect people on the right are targeted more often than the people on the left? Wouldn't that be the default assumption and the opposite position would need evidence to prove otherwise?

[0] https://twitter.com/mtaibbi/status/1598829996264390656?s=20&...


> If this[0] is accurate, then 99.7% of political contributions made by Twitter employees went to Democrats in 2022.

You're jumping a step here because you're making a sampling error. Like I said before, we know nothing about the distribution.

Let's put this in an extreme example. Suppose there are 100 people. 99 of them have the favorite color green. 1 of them has the favorite color red. Does that mean the logo will be green? What assumptions are implicit here to say yes? (this is where the logic fails) If a singular person controls the color of the logo and that person is not chosen at random then it really doesn't matter how many people have a favorite color green. The logo will be the favorite color of the person that makes the choice. Everyone else is noise.

Coming back to Twitter it would be naive to assume that the choices are being made democratically or uniformly at random. So what does the median employee's opinion matter? We know that higher incomes tend to lean more conservative but this is hard to say here because those studies generally aggregate out all salaries >$100k/yr, which might as well be a random sample. If the trend continues (reasonable assumption, but naive to take as fact) then it is reasonable to assume that managers (who we are assuming make more money than the median employee) are more conservative. But again, we just don't know.

The data you are linking to is actually irrelevant to the conversation at hand (it also has a huge yearly variance, making it even more noisy).


Yes, it is unreasonable. This is not how companies work. The personal preferences of employees are not what govern the operations of their employer.


Sure would be easier if the world was that black and white but unfortunately there are a lot of gray areas employees encounter which are left up to their subjective judgement.


Tesla employees gave 93% to Democrats; do you think the cars work better if the owner is a Democrat?

Employees of Trump Entertainment Resorts gave 100% to Democrats!


It isn't an unreasonable leap for people in our position with very limited information, but it is still just a guess. It's a shame that both of these reporters, with eyes inside the machine, seem to want to HINT that there is systemic bias without providing any supporting evidence.


> It isn't an unreasonable leap for people in our position with very limited information

No, this is exactly WHY it is unreasonable. This is how conspiracy theories work. Humans want causal understandings but they have limited information and their imaginations fill in the gaps. If you have little information you should also have low confidence. Look around here at how many people are certain of conspiracy. Look in the Twitter thread too. Much of what people are saying is highly unreasonable and this is not healthy for us as a community.


That tweet is as good as worthless imo. For all we know the entire $167k was from a single executive who had no moderation power.


No, the vast majority of mass killings in the US are by right wing extremists, resulting from the vast number of calls for violence from right wing leaders and social media posters. Misinformation and mass delusion like QAnon is way more prevalent on the right. And so is denial of reality, like Alex Jones' attacks on Sandy Hook parents. Where are the equivalents of QAnon and Alex Jones on the left?


What is your source for this? As far as I understand the vast majority of mass killings in the US have no political motive.


Source? Most mass shootings were related to gang violence last I checked.


The vast majority of mass killings are not politically motivated and are in fact gang related. The vast majority of politically related mass killings are committed by people with severe mental illness and the political leanings of the perpetrators are irrelevant. Of the politically related mass killings done by people of sound mind, the distribution is fairly even politically speaking.


They surreptitiously colluded with congressmen and FBI spooks to suppress political speech before an election. I think you’re missing the bigger story here.


The current releases haven't established anything even close to that. Can you cite which tweet(s) you are referring to? Honestly, if they do have evidence of something in that ballpark, these two have done a terrible job exposing it.


source?


Ok, let's say they did. It's a private company; the 1st Amendment doesn't apply in the manner you seem to think. In fact it's the opposite: they as an entity/company have 1st Amendment rights to do as they please with content on their platform.

That being said, yes, they suppressed propaganda and disinformation that could lead to bodily harm (as on J6). People like Fox News hosts pushing disinformation that could lead to people dying of COVID easily crosses over into legal territory. Why would they want that shit on their platform? Is it their fault all the disinformation comes from right wing kooks?

This argument is ridiculous. It's like Nazis complaining their propaganda was being removed from the radio and their voice was being silenced.


> It's a private company

Twitter benefits from network effects, which makes them immune to an extent from the normal rules of private-sector competition. Telephone carriers have to carry certain speech, your power company can't shut off your power because they don't like your speech, etc… Arguably the major social media platforms are in a similar position.

Twitter also colludes with government officials, as revealed in the previous "Twitter files" installment.

> they suppressed propaganda

I just checked the Twitter Rules [1], the Rules don't forbid "propaganda".

And I just checked the definition of "propaganda" [2], seems like it includes many things that nobody alleges is against the Rules (for example, any political ad, no matter what it advocates, is "propaganda").

[1] https://help.twitter.com/en/rules-and-policies/twitter-rules

[2] https://www.merriam-webster.com/dictionary/propaganda


What the congressmen did was illegal on their part. What Twitter did was at best unethical, but ironically it amounted to severely tampering with our election, which the fake left seemed to be so upset about. The tweets weren't propaganda; some of them were about Joe Biden being involved in illegal deals with e.g. China. The source information for this, his son's laptop, has been verified as authentic by, for example, the Washington Post.

There is no way to bend far enough to excuse what happened here.


> What Twitter did was at best unethical, but ironically, severely tampering with our election…

Huh? Twitter does not administer US elections last I checked. I am sure people could read this hugely consequential information you're claiming on Reddit, 4chan, OANN, AM radio, Fox News, the Wall Street Journal, or various other outlets that right-leaning voters typically seem to use. In fact, I seem to recall all of those running several stories about it at the time.

Or are you claiming that Twitter is the sole source of media that the entire United States consumes?


If the story circulating on all the major social media platforms didn't make a difference they wouldn't have suppressed it.


I'm not sure where to even start with this clearly agitated post. If you genuinely think all disinformation comes from "right wing kooks" you might be in an information bubble.

I would have thought at least the Hunter Biden laptop story pierced that bubble for most people recently by demonstrating these "right wing kooks" were the only ones NOT spreading misinformation.


I think it’s because they said repeatedly they were doing no such thing.


You really think someone would do that? Just form a corporation and tell lies?


"New Twitter policy is freedom of speech, but not freedom of reach.

Negative/hate tweets will be max deboosted & demonetized, so no ads or other revenue to Twitter.

You won’t find the tweet unless you specifically seek it out, which is no different from rest of Internet."

How is this any different than Elon's stated goals from 3 weeks ago?


A shadowban has the property that it's hidden from the user. Elon seems to want to make these kinds of actions transparent, like, for instance, deboosted tweets being visible as deboosted by their creator/other users, which is a pretty big difference. This difference is already visible in initial intent: Twitter hid what they were doing https://twitter.com/Twitter/status/1022658436704731136, while Elon is explicitly saying what he will do.


That tweet is referencing this[0] blog post.

> We do not shadow ban. You are always able to see the tweets from accounts you follow (although you may have to do more work to find them, like go directly to their profile). And we certainly don’t shadow ban based on political viewpoints or ideology.

It specifically mentions factoring in user behavior:

>What actions you take on Twitter (e.g. who you follow, who you retweet, etc)

> How other accounts interact with you (e.g. who mutes you, who follows you, who retweets you, who blocks you, etc)

How was Twitter hiding this?

[0]: https://blog.twitter.com/official/en_us/topics/company/2018/...


Twitter's visibility filtering wasn't based on "your" user behavior. It was based on what their moderation team decided to filter or not filter.


Sure, there was human involvement. Seems irrelevant to the comment I was responding to - Twitter did not hide that they were deranking some Tweets and there doesn't seem to be any difference between their current policy 'revealed' in the Twitter Files and what Elon proposed.


Problem is that there is a reason shadow bans exist.

It prevents nefarious actors from easily probing the limits of their content moderation processes.

It really is funny how Musk is just taking Twitter back to the start.


He'll end with the exact same practices, but with a different set of enemies.


He's going to speedrun learning moderation from "first principles". And we get to watch him learn (just like when he started banning people who changed their handle to mock him).


It's already easy to check to see if you or your post is shadowbanned.


I think the issue is not that this feature exists, but that it was abused to silence criticisms primarily from the right, while at the same time twitter denied this.


If it was done in secret how could you know it was abused to silence criticisms primarily from the right?


Because people can see that their message that used to get x retweets and likes, is now only getting y which is far less. It is very common to see people complain about being shadow banned on twitter, and not knowing for what, or why.


> people can see that their message that used to get x retweets and likes, is now only getting y which is far less.

If we go up to the top comment in this chain we see

> Negative/hate tweets will be max deboosted & demonetized, so no ads or other revenue to Twitter.

- Elon Musk

So... you're in agreement?

(btw, this isn't shadowbanning. Shadow banning would be 0 likes and 0 retweets and 0 views)

> It is very common to see people complain about being shadow banned on twitter, and not knowing for what, or why.

Actually this is my entire complaint with these Twitter Files. They show examples of people getting delisted but do not show the tweets that led to these decisions. That is a CRITICAL element of the story. We can't determine if Twitter was acting in good faith or not without this knowledge. We also have no idea if these examples are selection biased or not. Probably since there's only right leaning stuff and thefp.com is a right wing organization. Maybe Twitter does have a left bias (it probably does) but we sure aren't getting a fair shake.


Okay, so that gets us to “it seems that my account has been shadowbanned.”

Has someone compiled a dataset of users that appear to be shadowbanned, that tweet political content at least some of the time, as well as the political lean they have?


The main issue here is not that shadowbans exist but the fact that Twitter officials explicitly denied their existence.


> but the fact that Twitter officials explicitly denied their existence

I haven't seen anything here that indicates that it ever did exist.

Limiting reach is different from people not seeing your content at all.


A shadowban is not a total hiding of content from everyone; it can also imply selectively hiding it[1].

[1]: https://en.wikipedia.org/wiki/Shadow_banning


> Twitter is working on a software update that will show your true account status, so you know clearly if you’ve been shadowbanned, the reason why and how to appeal

https://twitter.com/elonmusk/status/1601042125130371072


The OP twitter thread doesn’t seem to be describing shadow banning.


> How is this any different than Elon's stated goals from 3 weeks ago?

It is different because Elon's goals are at the level of tweets, and the screenshots here depict action at the level of users.


Like Kanye West?


You're right it's not. I gather if you want to get two sides to talk who aren't talking, i.e. those who support that message vs. those who don't, you need to appeal to both.


I think there might be a reason two sides aren't talking.

Conservatives: Rope, tree, journalist: some assembly required

Fuck Joe Biden

Fuck your feelings

These are all t-shirt slogans you can buy at conservative political events, and they're quite widespread rather than exceptional.

Also conservatives: Why am I being shadow banned, this is viewpoint discrimination!

To be clear, there are obnoxious people and outright assholes all over the political spectrum. But conservatives have made outrage and offensiveness their brand in recent years. I remember being astonished and disgusted by the gleeful antagonism of Rush Limbaugh back in the 1990s when conservative talk radio first became A Thing following the Reagan administration's abolition of the 'Fairness doctrine.'


> I think there might be a reason two sides aren't talking.

Both sides can certainly find a reason not to speak to each other. The left claims the right is hateful, the right claims the left is obscene. It's always been like that, with one having the power to censor the other.

Here's an example [1]. One side, a well known entrepreneur/investor, is arguing for no censorship, and the other side, a well known disinformation researcher [2], argues for limiting the reach of certain content. One has blocked the other, so they no longer communicate directly.

Five years ago, someone at Twitter might have observed this interaction and decided, "I'm going to secretly action content. That will satisfy both the disinfo labeler and the anti-censorship crowd." Yet in doing so, as we can see, nobody is satisfied. They're still not talking to each other, and they don't understand why. The reason is due to the secrecy built into all of these platforms where an unknown third party is actioning content without anyone involved in the conversation knowing about it.

See also Free speech for me--but not for thee : how the American left and right relentlessly censor each other [3] (Nat Hentoff, 1992)

[1] https://twitter.com/noUpside/status/1599532506252214272

[2] https://www.wired.com/story/free-speech-is-not-the-same-as-f...

[3] https://archive.org/details/freespeechformeb0000hent/page/n9...


Interesting though these points are (and I'm familiar, as I am personally acquainted with DiResta), I think you are looking at this issue in too abstract a manner.

There's no obligation to have dialog with others who are actively threatening to kill you. You don't owe them a conversation or even a hearing.

I don't disagree with your characterization that 'the left claims the right is hateful, the right claims the left is obscene.' But there is a clear qualitative difference between the two sides in terms of the willingness to insinuate, actually issue, and finally carry out death threats, and it's not a wholly new phenomenon. I put it to you that the people who are amused by and choose to promote messages like 'rope, tree, journalist: some assembly required' do not actually give a shit about free speech but are opportunistically employing the grievance for political leverage.

Get back to me on this when the right stops openly calling to murder people on such a frequent basis and I'll be happy to consider it in more abstract terms. I see why you're concerned about this, but having extensive first hand experience of political violence I think shadow bans are a relatively minor issue by comparison.


> Ther's no obligation to have dialog with others who are actively threatening to kill you.

I never said there was. Even free speech law has limits and may punish such speech.

What I advocate is for authors of secretly removed content to be able to discover the removal, because even if the comment is vitriolic, the offending author may perceive the lack of a response as tacit approval of their words.

> But there is a clear qualitative difference between the two sides in terms of the willingness to insinuate, actually issue, and finally carry out death threats

Left wing extremists are currently saying that words are violence [1]. This is wrong, short of something that "in context, directly causes specific imminent serious harm", a definition from Nadine Strossen [2].

[1] https://archive.ph/1NeiV

[2] https://books.google.com/books?&hl=en&id=whBQDwAAQBAJ&q=in+c...


> Left wing extremists are currently saying that words are violence [1].

It’s the Republican, Trump-appointed, FBI director’s assessment that extremist right-wing political violence (read homegrown right-wing terrorism) is far and away a bigger problem than left-wing violence.

Jan 6 I hoped would have demonstrated that clearly once and for all. After that day, it has become impossible to “both sides” political violence in the US — it really is asymmetrically coming from the right.


It's not my intent to pick sides. I highlighted the left there only because parent focused on the right.

Regarding shadow moderation, I can promise you that every side of every issue in every geography and on every major platform uses it. See the talk linked in my profile for examples.


It's getting very boring hearing Americans going on about "Jan 6th". Some protesters entered a building. It's not that big a deal, no "political violence" happened.

It's been 20 years and Americans have finally recently stopped harping on about "9/11", after decades of them murdering orders of magnitude more innocents. It's incredible to see them committing some of the worst atrocities in the world while still loudly yelling about their own tiny issues.


Leader of Oath Keepers was found guilty of seditious conspiracy, so you’re wrong about that.


I'm aware of what you're advocating because you repeat it so often. I have no objection to your points about discoverability (indeed I mentioned elsewhere that sufficiently obnoxious tweets could be highlighted as ban-worthy, rather than simply being removed, although this wouldn't mitigate the harm done by posting slurs etc.).

It's a little odd to me that you contemplate hypotheticals like 'perceiving the lack of response as tacit approval.' It's not that this is incorrect, but that you're overlooking evidence of the alternative: when platforms like Twitter or FB leave posts up (to gather both argument and support) but attach some sort of note saying 'this post might be disinformation' or words to that effect.

That is what you are asking for, no? A clear signal that the social media post is disfavored by the platform operator in some way, such that the author is notified and given some context, and so is everyone else. I would appreciate if you would clarify whether or not this meets your desired standard of transparency.

The reason I bring this up is that when this approach is applied, authors of the controversial posts tend to hate it and still scream that they're being censored by being publicly shamed, or having words inserted into their social media post by the platform operator (notwithstanding the extremely obvious distinction). Twitter has gone farther again by allowing users to add meta-commentary in the form of notes (previously Birdwatch), but on controversial topics said notes are often railed against by the original author or become the subject of meta-controversy by people trying to spam the note system with negative characterizations of the notes themselves.

If you're going to make transparency of moderation into your political cause (and you very much seem to approach it this way), then I think you should go all the way and address the question of why people are not happy even when they get what you are advocating for, and instead just pivot to a slightly different variation of the same argument about how they're being censored and it's a terrible injustice, etc.

Incidentally the definition in your 2nd link is not from Nadine Strossen; it's just a restatement of the prevailing legal standard for incitement (from Brandenburg v. Ohio) and indeed seems to be offered as such in the text. Like the 'true threat' doctrine, this definition is being interpreted in increasingly elastic fashion in our era of instantaneous mass communication.

I'm familiar with Strossen's book but also consider it to be written from a comfortable suite in an ivory tower. Like many well-intentioned idealists, she acknowledges the possibility of violence but argues that it must be met by reasoned debate and nonviolent resistance. I reject this posture, because it basically says people who are the target of violence should accept their role as punching bags (or targets of gunfire) in exchange for the possibility of moving the conscience of elites who review circumstances, form policy, render decisions, and recognize peers (eg accepting or rejecting the validity of other states). In this mode of argument, willingness to passively sacrifice oneself is the threshold of acceptability - becoming famous for your advocacy and then dying for it like Christ, King, or Gandhi is the way to go. And conveniently, once people are dead they can be cited as moral exemplars without the troubling possibility of them reappearing and critiquing subsequent outcomes.

Oddly, I don't see Strossen or her peers throwing themselves in front of violent mobs in an attempt to bring them to moral clarity. Having at various times been arrested, attacked, beaten by a mob, and beaten by cops while engaged in wholly non-violent political activity, I do not give much weight to pure idealism that isn't grounded in cold hard reality.

This brings me back to your first link, which complains about transgender extremism and was ironically published on the first of April this year. Just in the last three weeks, we've seen a mass shooting at an LGBT club leaving 5 dead and a further 18 injured (Colorado Springs); multiple militant groups, openly armed, demonstrating against a drag queen story hour event while police looked the other way (Columbus, Ohio); well-organized attacks on electrical substations that left a whole county without power for days and happened to coincide with a drag show in the county seat (Moore County, North Carolina).

Now, my strategic assessment is that opposition to drag shows is often a convenient excuse for militant organizing and action, and if all LGBT people magically teleported away to Planet Fabulous tomorrow, the militant organizing and action would quickly pivot to some other scapegoat.

But please, don't waste my time with links about 'transgender extremism' and free speech issues unless you are willing to address the fact that those who oppose the acceptance and freedoms of LGBT folk are themselves engaged in the suppression of free speech and indeed life. I have yet to see any of the conservative/libertarian champions of free speech address the abundant right-wing attacks upon it, probably because they're scared of their own side.

And by address, I don't mean dismissing it with a truism like 'violence is illegal of course.' Rather, try providing evidence of how your preferred approach would mitigate actual and ongoing harms, and why havens of largely unrestricted speech (eg parts of 4chan) are not utopias of thoughtful discussion but rather cesspools of bigotry that celebrate and incite violence as a means to drown out discussion. It's hard for me to take you seriously when the issue you complain about at such length involves so little hardship.


> It's hard for me to take you seriously when the issue you complain about at such length involves so little hardship.

It seems you feel that secretive online censorship is not a big deal. I don't see the point in trying to convince you. Plenty of other people, such as those I quoted here [1], do care.

If you'd like me to respond further, please narrow it down to a question or two.

[1] https://news.ycombinator.com/item?id=33916519


Is this information new (as of #27)? Doesn't every social media platform limit reach for troublemakers? For example, here is Elon's new policy[0]:

>New Twitter policy is freedom of speech, but not freedom of reach.

>Negative/hate tweets will be max deboosted & demonetized, so no ads or other revenue to Twitter.

>You won’t find the tweet unless you specifically seek it out, which is no different from rest of Internet.

None of this info seems to warrant such a spectacle.

[0]: https://twitter.com/elonmusk/status/1593673339826212864


> Is this information new (as of #27)?

Yes, it seems like Twitter employees lied when asked directly about shadow banning and other issues.


Spectacle is how Musk brands and motivates his companies. Tesla was originally climate change and lately has been self-driving. Space X is colonizing Mars. Twitter will be "radical transparency".

Even if Twitter takes the exact same actions, Musk will frame them all as "radical transparency".


I think it's funny that the outrage tourists also managed to be angry that libs of tiktok was also put on a "no action" whitelist. They're not even reading the disclosures; they're just screaming.

https://twitter.com/bariweiss/status/1601018810495995904


"no action" whitelist meaning in this case that nobody can reverse previous actions (even those possibly found to be incorrect) without taking additional steps?

ie "Even if this is wrong don't touch it"


Meaning no matter how egregiously this account violates the content policy, do not suspend until first consulting with higher ups.


Yes, and also (extremely likely in this case to be the reason - rightly or wrongly) meaning "do not lift these restrictions until first contacting this other group"?

Not my first thought when reading the words "no action whitelist." Yours?


At Twitter's volumes, lifting of restrictions manually doesn't scale. Speaking as an outsider: how I would design it would have the lifting of restrictions being automated and set up at the time restrictions are applied (set on a timer or deletion of an offending tweet). Lifting of restrictions manually would be far less frequent than applying them.
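A minimal sketch of that outsider's design, with hypothetical names (nothing here is Twitter's actual system): the lift condition is set at the moment the restriction is applied, so "lifting" requires no manual step and simply stops applying.

```python
# Hypothetical sketch of automated restriction expiry (not Twitter's
# real system): each restriction records the time at which it lapses,
# set when it is applied, so enforcement is a lazy check at read time.

import time

class Restrictions:
    def __init__(self):
        self._expiry = {}  # account -> unix timestamp when restriction lapses

    def apply(self, account: str, duration_s: float) -> None:
        # The lift time is decided up front, when the restriction is applied.
        self._expiry[account] = time.time() + duration_s

    def is_restricted(self, account: str, now=None) -> bool:
        now = time.time() if now is None else now
        expiry = self._expiry.get(account)
        return expiry is not None and now < expiry

r = Restrictions()
r.apply("someuser", duration_s=7 * 24 * 3600)  # e.g. a one-week timeout
print(r.is_restricted("someuser"))                                   # True
print(r.is_restricted("someuser", now=time.time() + 8 * 24 * 3600))  # False
```

Under this design, the rare manual lift (the "contact this group first" path in the screenshots) is the exception handled by people, while the common case expires on its own.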

With that in mind, I get the distinct impression that the LoTT account was being treated with kid gloves/white gloves.


"Kid gloves" seems like a reasonable inference to me. There are sufficiently prominent cases where scaling doesn't matter a damn, eg the decision whether or not to suspend the president. Taibbi claimed there was a lot of back-channel manual intervention going on from both Repubs and Dems. One would imagine there was a lot also not from partisan politics. Back-channel who-you-know seems like one of the ways things were done. Did it get out of hand? Should it be done at all? Plenty to discuss there.

The other inference that occurred to me is that multiple Twitter departments pulled these levers and didn't always have one view on it - hence the "don't flippin' touch this" sign with the message on who to talk to. That's not there as an end run around automation.


> There are sufficiently prominent cases where scaling doesn't matter a damn, eg the decision to suspend or not the president

Does @LoTT belong to such distinguished company? I don't know, but I know this is not an instance of Twitter "silencing the voices on the right" despite the context of the reporting


Belong? Well, they clearly were being treated as a special case.

This is clearly a case of tipping the scales against a non-liberal voice, and it is being provided as an example of how these things worked.

Was the treatment deserved, justified, correct or not? That is the question. How arbitrary is it? How could that be abused to lose Hillary (or whoever) an election? Was it? Most importantly, how does this process fare when government officials and politicians make direct threats along with their censorship demands?

I find this case a lot less interesting than finding out about the involvement of the FBI etc. I'm far more interested to see if and how Wikileaks stories were de-boosted or not, as one controversial example. People claimed that blogging about Assange's court case suddenly got no Twitter referrals. Possible? If so, was it true? If so, why? Did Twitter employees put their thumb on the scale to boost (or bury) Bernie?

All these things, I guess we'll see. I strongly doubt Taibbi would be in on a cover-up unless the CIA had him by the balls or similar; don't know much about Bari. We'll have a lot more confidence in whatever comes out. Perhaps some of the people repeating (on Twitter) how uninteresting it all is are worried it's too interesting? I'm fascinated to see how much or little was going on, how and why.


What are you saying? LibsOfTikTok didn't have a blanket protection. It required the special group to sign off on any action.

Who's angry? Who's not reading what disclosures? What are they angry about?


"Take, for example, Stanford’s Dr. Jay Bhattacharya (@DrJBhattacharya) who argued that Covid lockdowns would harm children. Twitter secretly placed him on a “Trends Blacklist,” which prevented his tweets from trending."

What. Why?


"Still trying to process my emotions on learning that @twitter blacklisted me. The thought that will keep me up tonight: censorship of scientific discussion permitted policies like school closures & a generation of children were hurt."

https://twitter.com/DrJBhattacharya/status/16010379837793894...


I don’t know how other people think about this, but in my mind there is a large difference between censoring and not promoting, and the trends blacklist feels more like the latter.


IIRC the trending topics on Twitter are a mixture of automated and manually-selected. Not manually selecting him would be "not promoting", but putting him on a blacklist means he was prevented from naturally showing up - that's censoring.


Is this an answer to GP's question? If so, I'm having trouble figuring out what it is.


He was blacklisted due to his opposition to school closures and other intrusive NPI's during the pandemic.


[flagged]


So instead it's better to stunt children's development with lockdowns and give them early heart attacks with jabs? Regardless of yours or my position however, this debate shouldn't have been censored in the first place.


People had pointed out how stupid his proposal was when he made it, and he ignored them completely. At that point, he was just a crank (https://sciencebasedmedicine.org/jlockdowns/). The debate was never censored: he was allowed to communicate with policy makers if he had any evidence to support his proposal. What was censored (or rather, not promoted to people who weren't his followers) was his wild rants to fellow Christian libertarians who believe the invisible hand of the free market is the gentle hand of the baby Jesus himself, which we should not shackle with regulations.

Slowing some children's development (while drastically accelerating others' because they were tutored by their parents who had to stay at home with them) is not even on the same scale as mass deaths and people dying outside hospitals due to lack of capacity.


Let's not let numbers, humanity and sound judgement get in the way of a good conspiracy yarn.


Well, first of all, “tweets” don’t “trend”. Terms do. So this description feels off, already.

Next, I wonder, but don’t care enough to look it up, if this guy is adequately summarized by only mentioning his concern for children?

Screw that, I did look him up: https://en.m.wikipedia.org/wiki/Jay_Bhattacharya#COVID-19_pa.... He was one of the people behind the “Great” Barrington Declaration and, in the early months of the pandemic, argued, among other things, that COVID is rather harmless. He also took money from the airline industry without disclosing as much in his publications.

It’s arguable if Bhattacharya’s reach needed to be limited. What’s really hard to argue is that the thing about children is an adequate characterization of his statements during the pandemic. This is prime evidence that this story is not presenting anything close to a fair interpretation of the documents they have been given, and that you have, unfortunately, fallen for it.


He's a doctor and a Stanford professor of medicine. The Great Barrington Declaration was signed by almost a million doctors. They were warning about the harm lockdowns would do to kids, and they were correct.

Even if they weren't, this would still be unacceptable.


The Great Barrington Declaration was signed by almost a million doctors

You're off by an order of magnitude and then some. Also, none of the numbers they do claim are verifiable. https://gbdeclaration.org/view-signatures/

You know, when you make a specific claim like that it's really worth the 30 seconds it takes to check it again to verify your memory is correct.


According to its own website, the Great Barrington Declaration was signed by around 15,000 doctors.


Thanks. I didn't realize there was a further breakdown from the 934,094 total and can't edit it now. Here's the exact breakdown.

medical & public health scientists: 15,989

medical practitioners: 47,278

concerned citizens: 870,827


FYI, anybody who wants to can sign the "Great Barrington Declaration," and check whatever box they want: https://gbdeclaration.org/#sign

The list of medical practitioners and scientists includes a number of homeopaths and obvious fake names.


Sounds like appeal to authority, a logical fallacy.


Exactly, you can't trust any authority (Twitter moderation team included) to determine truth for you.


As I said, it’s debatable if the guy’s reach needed to be limited, the point you are arguing.

What’s not debatable is that the breathless outrage-bait under discussion misrepresented the case for limiting the Dr’s reach with a straw-man argument, and so did you.


> misrepresented the case for limiting the Dr’s reach

Could you help me understand what the argument is that this person should have been put on any form of blacklist?


[flagged]


That article points to a study that supports her claim, testing healthcare workers to demonstrate that they weren't infected, not merely asymptomatic.

No Googling necessary. It's right there in the article you linked to.


There was data to suggest that. The data that suggested that was linked in the article you posted:

https://www.cdc.gov/mmwr/volumes/70/wr/mm7013e3.htm?s_cid=mm...


From your link "Estimated mRNA vaccine effectiveness for prevention of infection, adjusted for study site, was 90% for full immunization"

Does 90% equal 100% now?

Her statement was false, even based on the limited data set you're referencing. It's even more false now.


People sometimes speak in hyperbole as a point of emphasis.

Behold, humanity.


If we can now expect the leading health experts in the world to be potentially speaking in hyperbole, then we can't expect the public to trust what they say to be accurate.

That's a problem.


If you take someone casually rounding up from 90% to 100% as a deep reason for concern, I have some bad news for you regarding weather forecasts.


No expert in any field would tell you that "casually rounding from 90% to 100%" should ever be done in any serious setting. Ever.

Idiots on the Internet are free to do so, but not the Director of the CDC or the US President. That gets people killed.


Surely, you can't be serious. The setting isn't a paper but a news interview, where they want the general idea of a study, not the details. In a news interview, saying "people who smoke get cancer" is perfectly acceptable (and not likely to "get people killed"), even though not 100% of them do.


All of that is irrelevant. Twitter had a secret blacklist. Gross behavior from a gross company.


EVERY social network has a "secret" blacklist. Every. single. one.


So then the problem is worse than people thought.

I am glad that you agree that this is such a huge problem, and it is a good thing that more people are now aware of how extensive it is.


Guess what, Hacker News does too.

Try working in integrity for a few years, you'll realise why these things are necessary, because people just won't stop being jerks.


Twitter being one of the largest communication platforms in the world is something that I am much more concerned about than a much smaller internet forum.

So my point stands.


A “secret blacklist” that’s been known forever, or at least for two years: https://twitter.com/dancow/status/1601017194456186881?s=20&t...


... and I've worked on Reveddit for four years. Less than 1% of Redditors know that all comment removals are done secretly.

It's hard to make people care about this stuff. Most of us only start to see the harm of something when it's shown in context. Just showing that something can be theoretically harmful isn't enough.


TYVYS shadowbanning and silently removing comment is such cowardly behavior. Against spam bots, fine. But against what are obviously real people? It's wrong.


TYVYS?


"Thank You For Your Service", but with a typo on the middle letter.


Oh, thanks for making an account just to explain that!


No problem!


Here is an announcement from 2018[0]:

>“It’s shaping up to be one of the highest-impact things that we’ve done,” the chief executive, Jack Dorsey, said of the update, which will change how tweets appear in search results or conversations. “The spirit of the thing is that we want to take the burden off the person receiving abuse or mob-like behavior.”

> The new system will use behavioral signals to assess whether a Twitter account is adding to – or detracting from – the tenor of conversations...

> The updated algorithm will result in certain tweets being pushed further down in a list of search results or replies, but will not delete them from the platform.

[0]: https://www.theguardian.com/technology/2018/may/15/twitter-r...
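To make the mechanism from that announcement concrete, here's a minimal sketch in Python of demotion-by-reranking: flagged accounts sink in search results or reply lists but are never deleted. All names here (`Tweet`, `behavior_penalty`) are hypothetical illustrations, not Twitter's actual code.

```python
# Hedged sketch, assuming a per-author behavioral penalty score exists.
from dataclasses import dataclass

@dataclass
class Tweet:
    author: str
    relevance: float  # base search/reply relevance score

def rank(tweets, behavior_penalty):
    """Sort tweets by relevance minus a per-author behavioral penalty.

    Tweets from flagged accounts sink in the ordering but are never
    removed from the result list, matching the announcement's claim.
    """
    return sorted(
        tweets,
        key=lambda t: t.relevance - behavior_penalty.get(t.author, 0.0),
        reverse=True,
    )

results = rank(
    [Tweet("alice", 0.9), Tweet("troll", 0.95), Tweet("bob", 0.5)],
    behavior_penalty={"troll": 0.5},  # flagged account: demoted, not deleted
)
# "troll" had the highest raw relevance but now appears last
```

The key design point of the quoted approach is visible here: the demoted tweet is still in `results`, just ordered below the others.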


IMO that's fine, but if you want me to trust such a system, the author should see on their tweet some indication that it's been demoted.


> The updated algorithm will result

These screenshots are about actions taken by Twitter employees, not an updated algorithm.


Why is that gross? It's their speech to decide what they want to promote or not promote. Why are you anti-free speech?


Not when the platform (a) benefits from network effects that make it immune from private sector free market competition (b) actively colludes with government officials


It's not trustworthy when a system actions a user's content without telling them. Free speech principles are also a thing.

The question is not what is getting demoted, it's how. In this case, it's being done secretly, and Twitter isn't the only one doing it:

https://news.ycombinator.com/item?id=33916414


This outrage is hilarious coz HN shadowbans people too. Which increases site quality which is why all the complainers are on here.


Transparently removing content is the normal way to moderate a forum. This research [1] suggests it reduces mod workload because users learn the rules. Discourse doesn't secretly remove content and is popular.

It isn't accurate to say secrecy increases site quality. No such qualitative study has been done.

[1] https://www.reddit.com/r/science/comments/duwdco/should_mode...


That's talking about article submissions, not comments. Couldn't read the PDF because the link is broken.

More than 95% of the time I see a flagged account on HN, they post complete garbage that leads to more flaming replies if not removed promptly. HN has a very limited set of moderators, like one or two, who cannot police every comment 24/7.

>Discourse doesn't secretly remove content and is popular.

Popular where? In corporate and niche business use cases? What are some public Discourses that allow everyone to post?


> That's talking about article submissions, not comments

Shadow moderation was implemented without doing any research. I agree it's about time more studies are done on all types of content and all platforms in order to assess whether or not this functionality furthers the platforms' goals.

> Couldn't read the PDF because the link is broken

Good call. Blog post summarizing [1] and pdf [2]

> Popular where? In corporate and niche business use cases? What are some public Discourses that allow everyone to post?

All of them that don't use the ShadowBan add-on, I guess.

Indeed shadow moderation appears to have made platforms more popular. I won't disagree there. But I also think it's clear it has contributed to echo chambers and increased isolation and tribalism.

I think we're reaching a point where the public wants to know what's going on in social media. Its harmful nature is not just driven by preference-driven news feeds, which we already know can be toxic, it's also driven by shadow moderation. That's the other shoe that may be dropping here.

[1] https://medium.com/acm-cscw/does-transparency-in-moderation-...

[2] https://shagunjhaver.com/research/articles/jhaver-2019-trans...


https://dl.acm.org/doi/10.1145/3359252

You could have just used the menu to find it; it only took a few seconds. There's a preprint available there if you need it. https://shagunjhaver.com/research/


HN has an option to be able to see the removed content.


No. I don't assume I have any rights other than vis-a-vis the government. I've dealt with a lot of corporate bullshit from tech companies, it's an annoyance but I just handle it and don't make a career out of whining about it as some do.

I've been arrested and kept in jail overnight on false charges for being a political activist, people complaining about being in Twitter or facebook jail don't impress me much (especially when almost all of them have a backup account).


Presumably you eat at restaurants whose food you like and buy hardware whose quality you like. It's the same with social media. You can give attention to systems you support and share information about them. The alternative you propose sounds like cowering to company overlords.

And where are you that it is illegal to be a political activist?


I live in the good old USA, where I have the privilege to be a member of the underclass.


> Twitter is working on a software update that will show your true account status, so you know clearly if you’ve been shadowbanned, the reason why and how to appeal

https://twitter.com/elonmusk/status/1601042125130371072


> so you know clearly if you’ve been shadowbanned

A shadowban that the user clearly knows about sounds like an oxymoron.


I’m not sure I agree.

Perhaps it’s an issue with the definition of “shadowban”, but I understood it to mean “you tweet, but no one sees”. Letting a user know that their account is in this state for this_reason actually seems like an improvement to me, not an oxymoron.


Letting a user know defeats the point of doing it (also, this is not what Twitter is doing).

The point of shadowbans/hellbans as they exist on, say, Reddit, is to deal with what criminal law would term a fixated person. If they know they're banned, they just make a new account, and another one after that, and another; if they think they can't post, they engage in ever more extreme measures to keep doing it.

A hellban is a solution here, or at least a relief valve, because of course this person doesn't actually engage with the community or care to; they're there to grief it. Letting them continue to think they're posting normally while being invisible keeps them occupied until they either lose interest or somehow figure it out.

Moderators are a finite resource, and spending all their resources chasing down someone who won't take the hint is exactly what a hellban is for.


>Letting a user know defeats the point of doing it

Why? Isn't this a signal to the broader population that certain behaviors result in this kind of downranking?

>Moderators are a finite resource, and spending all their resources chasing down someone who won't take the hint is exactly what a hellban is for.

Again, isn't it useful to signal that accounts that violate the rules (or are right up at the edge of violating them) are in this state?


Have you ever moderated anything? A discord server? A forum? IRC channel? Literally anything at all?

You didn't actually address the OP's claims or the reason why shadowbanning exists. Sometimes you get people who are literal stalkers and harassers, people who chase out other individuals. The problem is that when you ban them, you give them a clear signal that they are unable to post. So what do they do? They create a new account. Then they repeat the process.

Not informing them that they violated the rules is the entire point. The broader population already knows that these behaviors are bad. The problem is dealing with specific users that refuse to give up and refuse to go away. To them their account has zero value; a ban means nothing to them. Their value is based on dealing damage to other users of your community.


It's your definition that is misunderstood, not the oxymoron.

> Shadow banning, also called stealth banning, hellbanning, ghost banning or comment ghosting, is the practice of blocking or partially blocking a user or the user's content from some areas of an online community in such a way that the ban is not readily apparent to the user

https://en.wikipedia.org/wiki/Shadow_banning


Okay. I guess I'm saying that I think it's dumb when you don't tell the user that their account is in this state.


HN doesn't tell you the state of your account. In fact, I don't know of any service that does.


That's the whole point though. Otherwise they'll just make a new account.


Why?


I assumed shadow banning was invented for forums to hide low quality users without giving them a signal to create a new account.


But when they do eventually figure it out, they end up hopping from one newly created account to another. Unless new accounts are prohibitively expensive, and they usually aren't, you'll end up just replacing one problem with another.

The correct way to moderate is to be civil and treat "low quality" users as human beings. Reserve inhuman bans for inhuman users (bots).


What do you do when those 'low quality' users are sending rape threats to specific users? Or stalking them in this scenario? The police aren't going to do anything because it's barely an actionable threat and hard to trace.

The entire reason shadowbans exist and have existed for decades is because there's an inherently unsolvable problem with certain users that extremely fixate on certain users or communities. If you think they can be treated by being civil then I don't think you've been a moderator for any extensive period.


Ban evasion was a thing long before shadow bans. I've seen persistent users disrupt IRC channels for years, maybe decades. You can't reason with or befriend somebody who holds a grudge that long. Moderators will just have to deal with the problem repeatedly. Shadow bans at least minimize recurrence by taking away the immediate feedback.


IRC generally has no barriers to account registration while Twitter generally requires a phone number. In the former, bans are meaningless; in the latter, they pose an insurmountable hurdle to all but organized and funded campaigns. The real problem with shadowbans is "us" - humans. Shadowbans are a great tool for helping to minimize the impact of things like literal spammers and scammers, but they're also a great tool for minimizing political dissent, creating false impressions of the popularity (or not) of various concepts, or generally grossly manipulating and distorting public discourse.

As public online discourse becomes a more ingrained part of normal society, shadowbans may even end up being an action that is, in and of itself, regulated and controlled. Shadowbans can help create a better and freer society for everybody, but they can also be used to create a dystopia. In some ways it's akin to a real-world weapon. A firearm can be used to uphold and protect great values, but it can also be used to impose horrific ones.


> IRC generally has no barriers to account registration while Twitter generally requires a phone number. In the latter they pose an insurmountable hurdle to all but organized and funded campaigns.

Since when is getting a new phone number that receives SMS an insurmountable expense? There are literally dozens, perhaps hundreds, of services selling this. For example, [1] https://www.textmagic.com/sms-pricing/ seems to cost 7c per text received with no account fees, and I only need it to receive a sign-up message.

That is far from "organized and funded". People paid $8 to make troll accounts on Twitter when it was possible.


Just to clarify …

You are correct that burning a SIM card is not that expensive.

However, your example of a text-capable VOIP number is not a good one. FAANGs require a real mobile number, and these texting services (Twilio, for instance) will not work.

All of this to say:

It’s a bit more expensive than you think. If you really care about succeeding, you’re going to need to burn a SIM card.


It sounds desirable but we shouldn't have to rely on whoever's in charge. Elon could say he's no longer shadow moderating and simply swap out who gets moderated. Or, if he really does get rid of it, the next guy could quietly bring it all back.

We should be building external systems that can check the visibility of our own content. It can be done with an extension, for example.
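As a sketch of what such an external check could look like: compare what the author sees with what a logged-out session sees. The fetch functions below are injected stand-ins for real HTTP requests; none of the names are real Twitter endpoints, just illustrations of the idea.

```python
# Hedged sketch of an external visibility checker, assuming we can fetch
# a post both as its author and as a logged-out viewer.

def find_shadow_moderated(post_ids, fetch_as_author, fetch_logged_out):
    """Return IDs visible to the author but missing for a logged-out viewer.

    That mismatch is the signature of shadow moderation: the author is
    never told, but the content is hidden from everyone else.
    """
    return [
        pid for pid in post_ids
        if fetch_as_author(pid) is not None and fetch_logged_out(pid) is None
    ]

# Simulated sessions standing in for real network fetches:
author_view = {"p1": "hello", "p2": "world", "p3": "hi"}
public_view = {"p1": "hello", "p3": "hi"}  # p2 silently hidden

hidden = find_shadow_moderated(
    ["p1", "p2", "p3"],
    fetch_as_author=author_view.get,
    fetch_logged_out=public_view.get,
)
# hidden == ["p2"]
```

The point is that this check needs no cooperation from the platform, which is why an extension can do it regardless of who is in charge.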


Depriving features of their core purpose seems to be the Musk M.O. to date, starting with paid checkmarks that then forced creation of Official tags. Making shadowbans not shadowbans means there's no real reason to have them at all. So just get rid of them?


This way reveals to people whether or not they were impacted.


Musk will now speed run the reasons shadow banning became a popular practice.


One of those reasons is that people didn't know it was happening. Without that, bots will have a harder time taking over the system. Shadow banning only benefits bots because they're far more likely to check for the visibility of their content than genuine individuals.


Love to see non-employees of Twitter having access to my DMs. I don't think that's consistent with the user agreement or the privacy laws in the state of California.


I don't think they even have access to the dashboard. I assumed they were given prepared data. An earlier tweet by Musk or one of the journalists mentioned that lawyers would deliver the data to them directly from Twitter.

Even if they did, it's clear that whoever took the LibsOfTikTok screenshot doesn't even have access to see that account's email. Like all other tech admin panels, there are access controls. Just because the 'delete website' button is visible to you doesn't mean your account has permission to use it.


Weiss's co-author @AbigailShrier commented: 'Our team was given extensive, unfiltered access to Twitter's internal communication and systems.' I'm gonna take this at face value pending clarification.

Just because the 'delete website' button is visible to you doesn't mean your account has permission to use it.

Well, nothing looks greyed out. Again, I'm going to assume they had access unless there's specific information to the contrary.


Now the head of Trust & Safety is admitting the screenshots were "requested" from her[0]. This was hardly a balanced investigation.

[0] https://twitter.com/ellagirwin/status/1601084794288640000


It's a bit of a mess really. Musk fired the deputy general counsel the other day, commenting shortly afterward that he had only learned about the man's previous stint in a similar role at the FBI on Sunday. People quickly pointed out that Musk had actually commented on the tweets about that person and his FBI connections last April.

I feel that what we're seeing is largely investigation theater designed to validate a particular outcome rather than a real inquiry. If you consider this from an information/hybrid warfare point of view and think of Twitter as territory, then it's reminiscent of highly engineered plebiscites that precede or follow invasions to give them an aura of legitimacy.


That's exactly what it is, and I'm beginning to believe that the only winning move is not to play.


Here's a hint to a further revelation for those who continue to pretend that the Twitter Files are a 'nothingburger'.

Ian Miles Cheong :

> So here’s a question for @elonmusk and @bariweiss: were any political candidates — either in the US or elsewhere — subject to shadowbanning while they were running for office or seeking re-election?

Elon :

> Yes

https://twitter.com/elonmusk/status/1601123235755487233


Come now, this could be verified with evidence. I'm sure these actions exist in a database and email trail. Musk could release the evidence himself, or use an actual journalistic process and have someone impartial review it. But there is none, and so he hides behind his disdain of "the MSM" and posts yes/no answers to leading questions that only serve to rile up his base.


Your comment seems to me to be full of bad faith.

> use an actual journalistic process

There is a journalistic process being followed - two established journalists have been given comprehensive access to internal files and are documenting what they have found, while putting their discoveries in context. Elon is also promising that these files will be released publicly for other journalists and the interested public to pore over at a later date.

> this could be verified with evidence

And who's saying it won't be? This was not Elon volunteering this information himself. This was simply him replying in the affirmative to a question put to him. (And no, it was not a 'leading question'.)

> he hides behind his disdain

> questions that only serve to rile up his base

Not worth replying to these. Just quoting as examples of your bad faith.


>two established journalists have been given comprehensive access to internal files and are documenting what they have found, while putting their discoveries in context.

Ahhh yes, posting tweets on Twitter is "putting their discoveries in context". Normally journalists provide context by writing in-depth articles after they have formed all their thoughts... and maybe even have them reviewed by an editor.

>And who's saying it won't be? This was not Elon volunteering this information himself. This was simply him replying in the affirmative to a question put to him. (And no, it was not a 'leading question'.)

Then say the name. If he knows it happened, take the half second to type out ten or more characters and name a name. Or say something like "Yes, I'll get a list shortly." But no, he gives a snide 'yes', which only serves to rile up the mouth-frothers. It's so immature.


Given Elon's recent behavior, is it reasonable to worry that, after answering in the affirmative, no additional information will be forthcoming? Or is this a bad faith criticism?


Yes, it is reasonable to worry that no additional information will be forthcoming... Elon has a habit of lying. Is that a controversial statement?

Is Elon's true motivation to shine sunlight on the old Twitter, or is it to put on another dog and pony show? Because he is failing at the former and not even trying to disguise the latter. I am interested in understanding more about Twitter's past and, most importantly, CURRENT policies, ideally in a more neutral tone. Elon's stated new policies seem pretty similar to the old ones, with the one difference I am currently seeing being that sometime in the future, one might be able to see whether their account is being "shadowbanned" or not.


Elon's given unprecedented access to journalists, and is considering releasing all that information to the public, but you're worried that he's going to remain secretive regarding a topic that he voluntarily decided to tease? Elon, whom you characterize as running a dog and pony show, is going to miss another opportunity for drama?

No, sorry, but there's no way I can see how that stance is reasonable.


The Twitter Files are much less earth-shattering than what Elon implied with the response we are talking about. That is, the planned, deliberate muzzling of political figures _for their political views_, ordered by other political figures or in-power government figures.

I have no doubt Part 3 will be forthcoming, and if it unequivocally points to that, I will sit up and take note.

That said, all of this drama is for something that is now dead. That is, Twitter is dead as a platform for political discourse. Even given the previous Twitter administration's apparent overmoderation of rightwing voices, it was still the loudhailer for what was almost an overthrow of the US government. For better or for worse, those shackles have been removed. November 2023 and on may see a replay, and the loudhailer has no filter anymore.


> it was still the loudhailer for what was almost an overthrow of the US government.

Baseless conspiracy theory.


Handpicking journalists and then laying down requirements for them to follow, vs. something like the Panama Papers, where every news agency was granted access. Those things are not the same.


Elon is smart; he's breaking all this news on Twitter to increase user engagement on his platform.

I'm pretty sure he'll release this publicly, à la WikiLeaks, to everyone after the exclusivity on Twitter has worn off.


The only requirement was that the story be broken on Twitter. Hardly the most chafing restriction a journalist has ever been subjected to.


The only requirement we've been told ...


I think others have already replied with basically what I would have replied with, but to be clear: I was not talking about "Twitter Files Part 2", I was talking about the comment thread you posted.

My hyperbole was not necessary, you're right, but one can't say EM doesn't have disdain for MSM - he says it almost every day.


Political candidates can spread lies and hate. I don’t see why they shouldn’t be subjected to the same rules as everyone else.

Musk feels the same way about Alex Jones — who was part of a civil and not criminal action. What he did was immoral but not illegal. I’m glad we have a system to address the harm he did but disappointed in Musk that he can’t apply the same standard beyond Jones.


There must be a ton of ex (or current) Twitter employees on HN: can one of you verify if the user management tool shown in the photos is an actual thing and provide more information on how it works?

As another sibling comment states, other platforms also do similar things. It's easy to see how such systems get started and evolve. In fact, given the severe downvoting I see on this and similarly polarizing threads on HN, I think if someone built a browser extension that hides comments by users on a user-created list, many HN users would probably use it.


It is, and none of this is new information, given this article from 2020:

https://www.vice.com/en/article/jgxd3d/twitter-insider-acces...

A more pressing question, IMO: one of the screenshots shows a “direct messages” button. Do Weiss and/or Taibbi have unrestricted access to read users' DMs right now?


The 2020 article that you linked only mentions an internal tool that was used to change email addresses, it does not mention visibility filtering at all, which I believe is the new thing here.


The images included have "Trends Blacklist" and "Search Blacklist" tags on accounts which seem to be the 'smoking gun' for Bari Weiss


Which is kind of funny since it’s been known these things exist since at least 2010:

https://www.nydailynews.com/entertainment/music-arts/justin-...


"Does Weiss and/or Taibbi have unrestricted access to read users DMs right now?"

Now that would be interesting.


[flagged]


Or anyone he considers his enemy/detractor. Anyone who has ever exchanged words with Elon, or even looked him the wrong way better have deleted their DMs - and asked their counter-parties to do the same - before Musk walked in the door at Twitter HQ. Let that sink in.


Deleted DMs likely mean nothing; they probably just get marked in the db as 'deleted' and not displayed in normal clients... but an Elon/admin client would have no such restrictions.
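This soft-delete pattern is easy to sketch. Below is a minimal illustration using an in-memory SQLite table; the schema is entirely hypothetical and is not a claim about Twitter's actual database, only a demonstration of the "mark as deleted, filter in the client" technique the comment describes.

```python
# Hedged sketch of soft deletion: a "deleted" flag hides rows from the
# normal client's query, while an unrestricted admin query still sees them.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE dms (
        id      INTEGER PRIMARY KEY,
        body    TEXT NOT NULL,
        deleted INTEGER NOT NULL DEFAULT 0
    )
""")
db.execute("INSERT INTO dms (body) VALUES ('hello'), ('secret')")

# "Deleting" from a normal client just flips the flag; no row is removed.
db.execute("UPDATE dms SET deleted = 1 WHERE body = 'secret'")

# Normal client: filtered view excludes soft-deleted rows.
user_view = [row[0] for row in
             db.execute("SELECT body FROM dms WHERE deleted = 0")]

# Admin/unrestricted client: same table, no filter.
admin_view = [row[0] for row in db.execute("SELECT body FROM dms")]
```

Here `user_view` contains only 'hello', while `admin_view` still contains both messages, which is exactly why "deleted" content can remain readable to anyone with raw database access.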


Why do you want to distract readers with BS like access controls imposed on the journalists?

The head of Twitter security testified to Congress about how Twitter had poor security controls and auditability across the company. If that button does allow the ability to read DMs, Twitter employees have surely abused it for ages without repercussions.

Let’s stay focused on the important details rather than worrying about established journalists exposing the power the former twitter employees wielded to sway public opinion


> If that button does allow ability to read DM, Twitter employees have surely abused that for ages without repercussions.

How many employees have access to the tool? Do all users see that button? Seems pretty likely they’d have tiered access levels, no? What if access to user DMs was a hugely restricted piece of functionality and Musk just said “screw it” and gave it to two non-employees on a whim?

You’re writing off a potentially huge breach in user trust, not to mention legal repercussions.


Twitter employees were seen bragging about this long ago, as exposed by Project Veritas - https://twitter.com/Project_Veritas/status/16010371927882915...

you are pushing fake news to distract from the core abuses, all to protect liberal interests


You’re running some tired schtick here. How is what I said “fake news”? If Musk opened up DM access to two non-employees that is a huge legal liability. You haven’t actually countered that point with anything, you’ve just waffled.

You’ve also not explained why an audience like HN is incapable of discussing multiple issues in a discussion forum. Do you really think everyone here is so dumb?

Let’s be real here: you don’t want any discussion that strays from the talking points dished out by Weiss because such discussion would show her talking points are largely nonsense. The Twitter Files is a house of cards but as long as the right people keep saying it isn’t the audience will keep huffing it like glue.


https://twitter.com/ellagirwin/status/1601084794288640000

the new head of twitter trust & safety notes she provided screenshots of the internal service to the journalists

please continue trying to distract with the fake news; the liberal house of cards are falling down and I’ve got my popcorn out


> "Our team was given extensive, unfiltered access to Twitter's internal communications and systems"

One of the journalists in question. Also the wording in your tweets leaves some wriggle room:

> For security purposes, the screenshots requested [...]

Are we going to get some transparency from Twitter about exactly what tools/communications were accessible by the journalists in question? Otherwise this seems to be an exact repeat of what Twitter has already been doing for years, except now it's a private company and accountable to only one absurdly rich individual.


Incredible how openly Twitter gaslights users when it comes to policy enforcement.


I mean, that's not news, is it? Virtually every social network does that (even HN shadow bans people). (Not saying it's right, only unsurprising.)


It's surprising when you see it happen to your own content. Check out the reactions from people whose content has been secretly moderated that I quote here [1]

These people aren't far right or left; it's everyone. Likely all of us have been secretly moderated at some point without our knowledge.

When I discovered this happening on the platform I used, I built a tool to let me know when it happened, and I published that tool for others to use. Now it sees 500k users per month. Similar tools could be built for all other sites that do this.

[1] https://news.ycombinator.com/item?id=33916519


> even HN shadow bans people

HN also gives you an option to see this content in your account settings (showdead = yes).


That's different from letting the person it was done to see it.


Fair. It would be trivial to add this functionality, I would assume.


If you defined "that" in "every social network does that" for HN and the other major social media sites, you'd realize just how different the definitions are. The differences matter.


Not to mention the commenters here as well:

"There's no evidence people and tweets are being suppressed"

<Shows Evidence with receipts>

"This is old news, nothing new here."

Is there a mailing list where these things are coordinated? I remember there was JournoList in the late 2000s, so is there a new one for internet commenters of a certain political persuasion? How do they get the talking points memo?


> "There's no evidence people and tweets are being suppressed"

Wait, where is this one?

I'm sorry, but it doesn't take coordination to know that shadowbans exist. It's spoken openly on Twitter, there are tools to test accounts, etc. It's not surprising because it's open knowledge. No political persuasion necessary to be in the know here.


> Twitter denied that it does such things. In 2018, Twitter's Vijaya Gadde (then Head of Legal Policy and Trust) and Kayvon Beykpour (Head of Product) said: “We do not shadow ban.” They added: “And we certainly don’t shadow ban based on political viewpoints or ideology.”


I bet a large fraction of HN didn't know anything about Twitter's moderation; this is normal. I also bet a large fraction of HN did know.


> I bet a large fraction of HN didn't know anything about Twitter's moderation; this is normal. I also bet a large fraction of HN did know.

I'm sure this is true. But the claim is that there's some politically-inclined motivation to it all.


Are you sure those are the same people? It's not like HN is a hivemind...


The same coordination network which was used to agree that the first drop's response had to include "doing PR for the world's richest man", the same phrase magically repeated by a dozen establishment journalists.


[flagged]


What are the political leanings of random Twitter employees...?


A glance at the political contributions of Twitter employees, 99.7% to one party, gives you a very strong prior on what the political leanings of random Twitter employees are.


The same thing happened with negative coverage of the tech industry by the mainstream media. Journalists kept acting like tech people were just being paranoid about the media. Then recently Yglesias casually tweeted that there was a topdown directive for negative coverage, and acted like it was no big deal and everyone should have known.


All platforms secretly action content, not just Twitter. See my other comment listing some:

https://news.ycombinator.com/item?id=33916414


It seems like the dispute here is around the definition of "shadowban". We can all agree it includes situations where the user posts and 0 other people see the post.

Twitter admits to limiting visibility of posts.

So the dispute becomes a spectrum/threshold question. Most people would say that if 10% of the people who would normally see your posts get to see them, that's pretty shadowban-y. And most people would agree that if 90% get to see your posts, that's more like "limiting visibility" than shadowbanning.

The key questions are: (1) to what extent was visibility reduced, and (2) at what threshold should we consider "limiting visibility" to be tantamount to a shadowban?

There are also indirect effects to consider: if your posts are shown to only 60% of the people of a normal post, then you will get fewer RTs and fewer new followers in the long run. So consistent "visibility limitation" has a compounding effect that could substantially slow your account growth and reach over time.
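The compounding point is easy to illustrate with made-up numbers. If follower growth is roughly proportional to how many people actually see your posts, even a mild visibility cap widens the gap over time (all parameters below are illustrative assumptions, not measured Twitter behavior):

```python
def reach_after(periods, growth_per_impression=0.01, visibility=1.0, start=1000):
    """Toy model: followers grow each period in proportion to visible impressions.
    All numbers are hypothetical, for illustration only."""
    followers = start
    for _ in range(periods):
        followers += followers * visibility * growth_per_impression
    return followers

normal = reach_after(52)                   # a year of weekly periods, full visibility
limited = reach_after(52, visibility=0.6)  # same account at 60% visibility

print(round(normal), round(limited), round(limited / normal, 2))
```

In this toy model the limited account ends the year with about 81% of the normal account's followers, and the ratio keeps falling the longer the cap stays on, since each period's lost growth compounds into the next.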


I’m not really shocked. Interesting to see it all finally come out though.

People were certainly made fun of for implying that they were shadow banned (in the sense that they were more suppressed than the average user) and tons of people would outright deny it was even happening.


In the early 00s, forums dealt with “trolls”, as we labeled them. Moderation was used to keep things on topic and deal with people who did not add to the community, made threats, and generally made being a moderator annoying. Total free speech gives you 4chan or Kiwi Farms; there needs to be some balance of content moderation to keep out literal hate speech and threats. Twitter would not be a billion+ dollar company without moderation. So if Elon wants to make the process more open, I am all for that, but anyone who has dealt with the cesspool that forums and social networks can become knows content moderation is needed, and moderators are human in the end. Sometimes you simply can’t have a conversation with a troll to correct the behavior.


Total free speech gives you a place where openly discussing a topic is easier than without it? OMG! Sign me up!!!


When Musk announced his new policy towards moderation (speech, not reach), I pointed out it sounded no different in practice than "shadow banning", which was already something that certain segments of people considered censorship.

The reaction to this drop, which just describes how any moderation system at scale works/would need to work, is a nice confirmation of that belief.

"They have a secret blacklist!!"

It's a database.

They have a database.


One important change coming from Elon:

A screen for each user showing their shadowban status, and the ability to appeal it.


They will 100% eventually, if not soon after, have a special status, that isn't visible to the end user, and isn't appealable, because moderation at scale requires it.

It'll be called something different, or framed in some different way of course.


HN after post on Twitter's shadow bans: Its all normal. Move on.

HN after post on some user getting booted off Stripe: This is not normal. Why is Stripe doing this?


Twitter's current head of Trust & Safety has admitted that the posted screenshots were "requested"[1] from her. This clearly isn't a balanced report.

https://twitter.com/ellagirwin/status/1601084794288640000


I think you're misreading that tweet. What she is saying there is that the reporters did not have access to Personally Identifiable Information (PII) and that they had to get information on the various filters applied to an account via screenshots of the control system, provided by the internal Twitter official who has this access and is also responsible for keeping PII private.

Nevertheless, thank you for linking that tweet. It's an important one for clarification of the process that is being followed, and the care being taken.


Twitter calls it "visibility filtering" [1]

Facebook gives mods a "Hide comment" button [2]

TikTok calls it "visible to self" [3] [4]

Truth Social does it [5] [6] [7]

Reddit shows all removed comments to authors as if they're publicly visible [8]

Open source tools are built to do it [9] [10]

Textbooks advocate "disguise a gag or ban" [11]

I call it Shadow Moderation [12]. The system intentionally does not show users the ways in which their content has been actioned. The solution is simple: provide users with the same view that the moderating system has. Whenever their content has been actioned, let users see it.

It may be the result of two groups who fail to connect. Those who don't want any censorship at all, and those who want disinformation to be handled by the platform. If there is no olive branch and no concession made between these two positions, then platform designers may seek to satisfy both by secretly actioning content.

If there is now wide understanding that this happens everywhere, maybe we have a chance to build a platform whose express goal is to not withhold censorship actions from the author of the content.

[1] https://twitter.com/bariweiss/status/1601014175366402048

[2] https://www.agorapulse.com/blog/hide-comments-on-facebook/#o...

[3] https://www.theguardian.com/technology/2019/sep/25/revealed-...

[4] https://netzpolitik.org/2019/cheerfulness-and-censorship/

[5] https://www.tiktok.com/@cheyenne.l.hunt/video/71115053899502...

[6] https://www.citizen.org/article/truth-cant-handle-the-truth/...

[7] https://www.tiktok.com/@cheyenne.l.hunt/video/71118581026385...

[8] https://www.reddit.com/r/CantSayAnything/comments/zdubov/wri...

[9] https://getstream.io/blog/feature-announcement-shadow-ban/

[10] https://meta.discourse.org/t/discourse-shadowban/85041

[11] https://kraut.hciresearch.info/wp-content/uploads/2020/02/ki...

[12] https://cantsayanything.win/2022-10-transparent-moderation/


>If there is now wide understanding that this happens everywhere, maybe we have a chance to build a platform whose express goal is to not withhold censorship actions from the author of the content.

I want to know what rock people have been living under all this time, that this sort of thing is news to them.


To a lot of people who pay attention to this sort of thing (e.g. the crowd already familiar with tools like Reveddit), this might not be a shock, but it is a rare case of it being admitted in clear terms, with specific examples that correspond to the sorts of claims often dismissed as rumors by third parties. While not anywhere near as high profile, it's kind of like how anyone who was paying attention probably had a decent idea of the kind of surveillance governments might be engaging in, but it was still a big deal when Snowden released the documents making it undeniable.

As the thread says, Twitter used a bit of trickery by calling it "visibility filtering" so they could lean on the technicality that "we do not shadow ban", because they too preferred to leave in some plausible deniability.


It's a shock to most people [1]:

> ...what is the supposed rationale for making you think a removed post is still live and visible?

> ...So the mods delete comments, but have them still visible to the writer. How sinister.

> ...what’s stunning is you get no notification and to you the comment still looks up. Which means mods can set whatever narrative they want without answering to anyone.

> ...Wow. Can they remove your comment and it still shows up on your side as if it wasn't removed?

[1] https://www.reveddit.com/about/faq/#react


Every person I introduce to reveddit has the exact same reaction. Shock and righteous indignation.


Shadowbanning only works because it is news to people. That's the whole point. People aware of shadowbanning can easily check their posts from a private browser window or similar to see if they exist.

If it wasn't news to people, then no companies would use it, because it would be ineffective!


The existence of shadowbanning, much less fine-grained moderator manipulation, 'secret blacklists', or opaque moderator actions, shouldn't be news to people, certainly not here. Much of the outrage around this, like much of the outrage in the 2000+ comment dumpster fire that is the previous and still ongoing "Twitter Files" thread, is manufactured for political purposes.

Like yes, social media platforms can moderate content and decide how it appears, or doesn't appear, under whatever arbitrary criteria they want, and they don't need your approval first. That isn't news at eleven, I guarantee it was spelled out in the terms of service no one ever bothered to read.


Shadowbans have existed for literally 40+ years. There have been multiple forums, platforms, channels, and BBSes that tried to make an egalitarian system where moderation is clear and transparent. These systems all universally fail: they don't function at scale, they get easily gamed by bad-faith posters, or they fall apart when the truly fixated and toxic posters tear your community apart.

The problem is you assume every actor is rational. That they all want to simply post in good faith. The moment you make that assumption you become a fool, someone that will be manipulated and controlled. You will get people gaslighting you into thinking they're the heroes while they've been sending threats of violence to other members of your community.

There is no good solution. I doubt there will ever be a good solution because it's trying to solve an inherent problem in the human condition.


You're right the existence of shadow bans is old, but you're wrong that it's critical for a forum. Discourse doesn't use it, and Mastodon I believe is reviewable. So was Usenet. Kill files were controlled by users.

Also, Reveddit is used extensively on Reddit without subreddits falling apart. Mods of many groups will often link it to provide transparency into their work.


> Reddit shows all removed comments to authors as if they're publicly visible [8]

This is not true. There are cases where comments are silently removed without any public indicator: to the author, it appears as if the comment was posted normally, while to other accounts there is no indication that the comment was ever posted.


There may be some wires crossed here. Reddit does show all removed comments to the people who wrote them as if they're not removed. I built Reveddit to reveal this. You can lookup a random user with this link [1]. There's a 50% chance they have a removed comment in their recent history, and chances are they weren't told that it was removed.

> To other accounts however, there is no indication that the comment was ever posted.

This is another important detail, so thank you for bringing it up. Removed leaf comments (comments with no children) do not show any marker at all. I mention it in my talk at 1:23 [2],

> Note that other users only see one marker here. They are not shown a marker for replies that have no children. So there is no indication to them that there was even a second comment here. The vast majority of removed comments fall into this category. Anything removed automatically for containing a phrase or link from a subreddit's ban list has no chance of receiving a reply.

[1] https://www.reveddit.com/random

[2] https://cantsayanything.win/2022-10-transparent-moderation/
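The core of what a Reveddit-style tool does can be sketched as a simple diff: fetch the author's own listing of their comments (which includes removed ones), fetch the publicly visible thread, and flag anything present in the first but missing from the second. The data shapes below are hypothetical stand-ins, not Reddit's actual API responses:

```python
def shadow_removed(author_view, public_view):
    """Return IDs of comments the author can see but the public cannot.

    author_view: comments as they appear on the author's own profile
    public_view: comments as they appear to a logged-out visitor
    (field names here are illustrative, not a real API schema)
    """
    public_ids = {c["id"] for c in public_view}
    return [c["id"] for c in author_view if c["id"] not in public_ids]

author_view = [
    {"id": "t1_a", "body": "normal comment"},
    {"id": "t1_b", "body": "silently removed comment"},
]
public_view = [
    {"id": "t1_a", "body": "normal comment"},
]

print(shadow_removed(author_view, public_view))  # ['t1_b']
```

This also explains why removed leaf comments are so hard to notice casually: nothing in the public view hints that a comparison is even needed, so only a tool that checks both views will catch them.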


Thank you for pointing this out, I never realized just how many of my comments had been removed in this way! Pretty freaky considering how many of them were simply just citations / sources for things. I was honestly under the impression that my 10yo account never had a comment removed until now.

I use reveddit a lot on threads but never thought to use it on my own account. Thank you for making such a useful tool!


> I use reveddit a lot on threads but never thought to use it on my own account. Thank you for making such a useful tool!

No sweat. FYI there's also an extension for alerts, linked below. My main goal was to alert people to their own removed content. I'd love to know if there's a better way to route users in this direction. There's currently a little note in a blue box that appears if you've never visited a user page.

https://www.reveddit.com/add-ons/direct/


This may be tangential to the topic at hand, but thank you for carrying the torch for true transparency on Reddit.


Thanks for your support! I think it's self evident that transparency is a better idea, at least the case of content moderation. The other way, full control via opacity, doesn't work out. You have to remove what you don't like, then remove comments that discuss what was removed, and so on.


Add HN to that list.


Thank you for your terminology. I will now go out into the world and discuss your term, "shadow moderation", at length with my fellow peers. We need more independent thinkers like you in the world.


> .win domain

lol, you had me going for a moment there.


It's my blog's domain, which I got for $2/year, what's wrong with that? We should turn up our nose at everyone who bought the cheapest domain?

Interesting you bring this up though because tweets mentioning .win domains still get shadow removed, even today with the new owner.

This Tweet [1] only appears when directly linked. It does not appear beneath its parent [2] for anyone but me.

[1] https://twitter.com/rhaksw/status/1594103021407195136

[2] https://twitter.com/TheFIREorg/status/1594078057895063553


That's likely because .win has a terrible reputation due to the number of shitty websites on that TLD. I would personally prefer to spend a few $ more to avoid creating an unnecessary negative impression.


The point is people won't notice this because Twitter doesn't tell you your tweet was hidden.

Does it make sense for Twitter to shadow remove tweets from a whole TLD? The fact that it needs to be done secretly is evidence that most users don't know that using such a TLD will "create a negative impression".


I don't think it 'needs' to be done secretly so much as big companies cannot be bothered to get into fights over every moderation action, and find it more economical to avoid legal liability by not saying anything at all. They don't bind themselves with any promises to do so in their user agreement (which is naturally tilted fully in their favor, since their most avid users are not their customers, as such), and a calculated indifference means less overhead than clear communication. Is that infuriating to some people? Sure, but it's how a lot of businesses operate because it's in their economic interest.

Again, you're not a customer. The advertisers and data brokers are the customers. Keeping this in mind is why I lose little sleep over being shadowbanned and having my visibility limited on social media.


> I don't think it 'needs' to be done secretly so much as big companies cannot be bothered to get into fights over every moderation

And then you get into a situation where it can be alleged that a whole political ideology, half the country, has been sidelined by your secretive work.

Short term, that worked for a while. It is not a long-term strategy.

> you're not a customer

Users are part of the product. Lying to them en masse is not a good way to build trust.

> The advertisers and data brokers are the customers. Keeping this in mind is why I lose little sleep over being shadowbanned and having my visibility limited on social media.

I guess you've never tried to say something that the majority didn't want you to say, as has happened many times throughout history, from the Spanish Inquisition to Anthony Comstock to Communism to McCarthy to the Civil Rights Era.

Free expression is held in regard not just when dealing with government, but also when dealing with each other. Shouting over each other isn't considered civil discourse, and neither is censorship wherever it occurs, in private or public places.


> I guess you've never tried to say something that the majority didn't want you to say

I've been jailed twice, stabbed, kicked in the head, had my ribs broken, and hit with a wide variety of chemical agents, at the hands of both police and mobs, all while engaged in non-violent political activity. To me, you sound naïve, at best.


Did you really just judge a blog you didn’t read on their TLD? That’s petty.


No. I read it and then I made fun of the TLD.


It's newsworthy because Twitter has been publicly denying that shadowbanning has been in place, and they've denied that preferential treatment based on the preferences of internal teams has been in place. Both of those are now being shown to be false statements. Glossing over that fact is a great disservice to the conversation.


Also newsworthy because it's used to hold certain accounts down and to hijack ideas and talking points from less popular accounts on a regular basis as well. These features regularly can be used to keep certain accounts popular and profitable while others get blacklisted unfairly and with no awareness.

There is nothing to stop large platforms from using shadowban features to fuel corrupt activity for profit and influence if they continue to operate in non-transparent ways like this.


That's not the only newsworthy element. Saying that would gloss over the pervasive use of this secretive feature.

The use of this stuff is probably in the companies' terms of use, patents etc. The public is still largely unaware that it happens, as evidenced with Twitter.


Let me quote Elon:

"New Twitter policy is freedom of speech, but not freedom of reach.

Negative/hate tweets will be max deboosted & demonetized, so no ads or other revenue to Twitter.

You won’t find the tweet unless you specifically seek it out, which is no different from rest of Internet."


These “Twitter Files” have been so disappointing. I was really expecting to see some crazy stuff, but all I’m seeing is a company that was moderating their content. Even the incident that Weiss points out of someone being doxxed and Twitter doing nothing about it happened under Elon’s rule, and was probably not moderated because of the chaos that Elon created. Even the people that were “shadow banned” makes sense, I’m actually surprised that they didn’t moderate accounts like Libs of TikTok even more considering that they’re actively shutting down children’s hospitals and directing hate and violence towards transsexual people. All in all, a big nothingburger.


libsoftiktok has not been actively shutting down children's hospitals. They have been reposting public information from the hospitals that they find objectionable, such as encouraging young children to have life-altering surgery, which has been stopped in other countries because of the potential for harm[1].

Their readers have also expressed outrage at the objectionable activities and raised concerns with whoever would listen.

[1] https://www.bbc.com/news/uk-62335665


Exactly what I’m saying, Libs of TikTok has been using their followers to direct hate and violence towards the LGBTQ+ community and as a result they’ve put children’s lives in danger by shutting down hospitals. Twitter’s advertisers don’t want to be associated with such hateful conduct. I’m glad we agree.


Do you have an example of them advocating for or encouraging violence against anyone?



In other words, Twitter was moderating user content according to a systemic far-left political bias at the company. But since your bias matches, you don't think it's a big deal.


If Twitter had a far left moderating bias, explain how Donald Trump remained on the platform for years while repeatedly and unapologetically violating the TOS. It took him using the platform to launch a failed coup to get him banned.

Very hard to buy that the platform was a political tool of Democrats when it was being used effectively as a political tool by the leader of the Republican party.


He was POTUS, that's why. And still they got rid of him as soon as they could. Less well known figures got booted for less.


He was POTUS when they got rid of him too. He also wasn’t POTUS while he violated the TOS before he got elected. So your reasoning doesn’t follow

All you’ve done here is confirmed that Trump, a conservative right-wing politician, got treated better by the supposedly biased left-wing Twitter mods than anyone else on the platform. He violated the Twitter TOS daily and got a free pass over and over again by Twitter mods. Really cuts against the “left-wing bias” narrative you’re pushing here imo.


If the political bias you’re seeing is “far-left” then you’re very far to the right. Advertisers don’t want to advertise on a website that is being used to direct hate and violence towards the LGBTQ+ community, advertisers are the main source of revenue for Twitter, and thus content directing hate and violence towards the LGBTQ+ community gets moderated. Similar arguments could be made for other ways in which Twitter was moderating its content. Those are decisions to boost revenue and to make Twitter a safer place for people to hang out; I don’t see how those are “far left” biased decisions.

If employees were pushing a “far left” agenda I’d expect to see every conservative shadow banned; instead, conservatives were protected (e.g. Libs of TikTok not being kicked off). Last year Twitter even admitted to boosting conservative content in order to try to appease conservatives claiming a bias [0]. I’m sorry, but I just don’t see anything that proves there was a “systemic far left political bias” at Twitter. If anything, I’m seeing a far-right systemic bias growing at Twitter with Elon at the helm.

[0] - https://amp.theguardian.com/technology/2021/oct/22/twitter-a...


The average person would not agree a far right systemic bias is now growing. What you're seeing is a correction towards the center.


Just more outrage theater by the right.


A priori, I think the idea of someone buying a prominent company and releasing internal communications about controversial decisions of the past is super interesting. Is it unprecedented?

It's a shame that he's choosing to go about it in this way. Releasing things in a slow drip via two, uhhh, let's say highly polarizing pundits is bad for transparency. Everything people are seeing is first filtered through and framed by these two people. Wouldn't it be better to simply release the raw data all at once and let people comb through it? I know he says he's going to do that later, but why not now?


I don’t understand some commenters’ reasoning here: it is not news, so it is not true?


There must be a ton of ex (or current) Twitter employees on HN: can one of you verify if the user management tool shown in the photos is an actual thing and provide more information on how it works?



I wish they'd release the whole list of banned topics so people could decide for themselves what was suppressed and why, instead of having it drip fed to them.


I think Austin Allred gets to the heart of the matter, which is that the Twitter statements were crafted to give the impression that deboosting / deamplification was purely algorithmic, rather than based on unappealable human tagging of accounts: https://twitter.com/Austen/status/1601032626004930560


Why is this flagged ??


Users flagged it.


This post has got 75 upvotes in the last 1 hour. Surely there must be some merit to keeping it up.


It may not even be those with strong opinions on the topic that are flagging this. I think some may flag topics they see as potentially inflammatory as a means to avoid flamewars, uncivil discourse, etc.


Because a large part of HN users prefer to live in denial about facts they don't like.

It's also fact there are brigades and bots operating on HN and NGAF.


"Please don't post insinuations about astroturfing, shilling, bots, brigading, foreign agents and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data."

https://news.ycombinator.com/newsguidelines.html

https://hn.algolia.com/?sort=byDate&dateRange=all&type=comme...


Always funny to see these new revelations, while at the same time reading https://help.twitter.com/en/using-twitter/debunking-twitter-...

"Does Twitter shadow ban? Let’s discuss this one upfront. Simply put, we don’t shadow ban! Ever. We do rank Tweets to create a more relevant experience for you, however, and you’re always able to see Tweets from people you follow. Check out our company blog post for more details."

and news articles from earlier, such as

2022-05-03 https://www.wral.com/fact-check-hannity-says-elon-musk-bid-e...


Wow this post is #208 on HN: https://i.imgur.com/e4lQko4.png

It's only an hour old. The post must've been flagged, which I know has a pretty heavy influence on the ranking here.

For those flagging, this conversation is worth having. Whether platforms secretly demote people's content is important to them.


> For those flagging, this conversation is worth having. Whether platforms secretly demote people's content is important to them.

It is an important topic but the Twitter thread linked is not a particularly useful resource. For one, none of this is new information:

https://www.vice.com/en/article/jgxd3d/twitter-insider-acces...

That it’s being brought out now with such fanfare smells like corporate propaganda.


> none of this is new information

It is news to many when social media takes action against content without telling the author: https://news.ycombinator.com/item?id=33916519


It’s also not new news that Twitter did this. Elon recently pledged to do exactly this!


Indeed he did suggest "free speech doesn't equal free reach".

If we're now saying that statement is bad then we should hold his feet to the fire. Conflicting statements doesn't mean there is no principle to espouse; it just means there are conflicting statements.


Exactly! I though this thread was part of Elon's pledge.


I noticed all the twitter posts disappeared on HN after the trump ban was lifted. Not sure if it's a coincidence or active policy.


you should get Musk to organize a brigade of his followers against dang and HN.


Great suggestion, I'll call him up now.


...which is why I avoid HN homepage and read it through Serializer.io (set only HN content) which also allows syncing through unique URL across devices.

HN homepage is incredibly brigaded, but since it's in line with the admins' opinions they don't do anything about it.


Or shadowbanned, like me.


Sadly, many don't even notice full account shadowbans [1] [2].

But account-level shadowbans are sort of known to exist. What's being reported here is harder to detect. That is, the secretive removal of individual pieces of content, or reducing the reach of an account's content.

[1] https://www.reddit.com/r/tifu/comments/351buo/tifu_by_postin...

[2] https://www.reddit.com/r/AMA/comments/8z3sd4/i_have_used_red...


How did you even find it? I only found it trying to submit it because I didn't see it. And HN loves posting dirt on tech companies here - understandably so - because a lot of it is highly technical. So this being absent is really suspicious.

Like, I'd love to know how they implemented the "Trends Blacklist". In the leak from years ago I had assumed that it was applied to the trending terms themselves, but Weiss revealed that it was targeting people and tweets.

Or the "Search blacklist". That's basically sabotaging your own search function so that things that are popular/relevant/trending are hidden. That makes no sense for a platform to even do for anything other than censorship of people on purpose.

Or what does "Do Not Amplify" do? Does that mean they do amplify other things, but never want these accounts to be amplified by that process? How was that implemented?
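For what it's worth, one plausible mechanism is per-account visibility flags consulted by each surface (trends, search, recommendations) at query time. This is a purely speculative sketch, not Twitter's actual schema; every name here (`AccountFlags`, `eligible_for_trends`, etc.) is invented for illustration:

```python
# Hypothetical sketch of per-account visibility flags. None of these
# names come from Twitter; they illustrate one way a "Trends Blacklist",
# "Search Blacklist", and "Do Not Amplify" flag could plausibly work.
from dataclasses import dataclass

@dataclass
class AccountFlags:
    trends_blacklist: bool = False   # tweets never count toward trends
    search_blacklist: bool = False   # tweets indexed but hidden from results
    do_not_amplify: bool = False     # excluded from recommendation surfaces

# Invented example data: a flag store keyed by account.
FLAGS = {
    "user_a": AccountFlags(trends_blacklist=True),
    "user_b": AccountFlags(search_blacklist=True, do_not_amplify=True),
}
DEFAULT = AccountFlags()

def eligible_for_trends(author):
    # A trending-topics pipeline would drop flagged authors' tweets
    # before counting them toward any trend.
    return not FLAGS.get(author, DEFAULT).trends_blacklist

def search_results(tweets):
    # A search backend could still index everything but filter flagged
    # authors out at query time; the author sees nothing unusual.
    return [text for author, text in tweets
            if not FLAGS.get(author, DEFAULT).search_blacklist]

def amplification_candidates(tweets):
    # "Do Not Amplify" plausibly means: followers still see the tweets,
    # but they are never injected into recommendation surfaces.
    return [text for author, text in tweets
            if not FLAGS.get(author, DEFAULT).do_not_amplify]

tweets = [("user_a", "tweet A"), ("user_b", "tweet B"), ("user_c", "tweet C")]
print(search_results(tweets))            # user_b's tweet is missing
print(amplification_candidates(tweets))  # user_b's tweet is missing
print(eligible_for_trends("user_a"))     # False
```

The key property of a design like this is exactly what the thread is asking about: the content itself is never deleted, so nothing looks wrong to the author, only the distribution changes.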

Musk just tweeted: "Twitter is working on a software update that will show your true account status, so you know clearly if you’ve been shadowbanned, the reason why and how to appeal"

That'll be good to see. Exposing this stuff to the people affected is just...good faith interaction with your users. I'm sure a lot of people are going to be justifiably mad. I wonder what this will do to twitter's stock price...


> How did you even find it?

When you submit with the same link, HN brings you to the post.

I'm pretty into this topic, having spent the last four years trying to raise awareness about it. First I thought it was only one political side abusing it, then I thought it was only Reddit, and this summer I realized it was happening on every platform. I suppose others have known for longer but FWIW this is a talk I gave on what I know [1], and an HN discussion on it [2]. See also my other comment in this thread about where else it happens [3].

It's hard to figure out how to tell the story about what's going on because it keeps expanding. I've learned something new about this almost every day for a long time. I think it does boil down to a single point though, which may be: treat others how you want to be treated. That doesn't always win out in the real world, but it does when the issue is big enough.

[1] https://cantsayanything.win/2022-10-transparent-moderation/

[2] https://news.ycombinator.com/item?id=33475391

[3] https://news.ycombinator.com/item?id=33916414


>How did you even find it?

I saw it in the HN Telegram feed. Stories that get taken down here remain on the feed.

https://t.me/hacker_news_feed


Discrimination, as alleged by conservatives but pooh-poohed by others as merely delusional, is now clearly proved.

How can tech self-regulate itself, or should there be external agencies who would watch the growing power of tech companies? And who would watch them?

Decentralization never seems to take off, so we seem to be left with these existing players like twitter, meta, google ...who are de-facto monopolies....


To me the evidence seems to point to the opposite, like the screenshot of LibsOfTikTok: that account was being shielded from moderation action by a warning note saying not to act without speaking to a superior first. Sounds like the account was at benefit, not detriment.


You think that was to protect them? I figured it was because they didn’t want any heavy handed actions to be “misinterpreted” as them putting their finger on the scale. Those actions required more political finesse.

They specifically said in their communications that LibsOfTikTok hadn’t violated any rules directly. Why the suspensions then?


> How can tech self-regulate itself

Would it be a stretch to suggest 'the government'?


Outside of removing things like CSAM wouldn't it be illegal for the government to tell social media how it must moderate?

I'm from the US, though, so not an expert on its laws.


DMCA and court orders in the US. Many more in other countries.


True. Still nothing about banning or shadow banning though, or even needing to tell users they do so.


Telephone companies are forced by the government to carry certain speech.

Similar laws could be expanded to other major communication platforms.


Right, so now that this thing has been proved we're moving on to the "They did and it's a good thing" phase.


[flagged]


“The people I disagree with politically can have free speech in the mediums that have the least reach while my political tribe will freely exercise our speech on the mediums with the most reach” is a bad faith argument I keep hearing from fake leftists who are really hardline authoritarians.


The mediums which have the most reach have it explicitly because they have less spam, porn, hate, and crazy. If Twitter gets more "democratic" (more crazy, more hateful, less desirable, and less popular), shall we suggest the next big platform welcome an equal number of undesirables?


The problem is that for a lot of people “hate and crazy” is a euphemism for “political ideas I disagree with”.

This isn’t new or original. Anyone can classify their political opponents as crazy, dumb, misguided, “hateful” is the new one, ignorant, *ist.

I think a healthier framing is, people who disagree with me politically have their reasons for doing so and the best for everyone is if we have a dialog.


I think the problem is that you can't accept that Twitter users don't want to engage with your bs, and that that led to Twitter shadow banning it.


They banned political speech they disagreed with, and it sounds like you're happy they did.

> your bs

I have no dog in the fight.

> twitter users don't want to engage with

The country, however, is pretty evenly divided along ideological lines so there is no homogeneous "twitter users" group.


You are free to run your own website if you don't like your placement in trending topics or users' feeds, kind of like you can share your videos on your blog if ABC won't carry them.


> You are free to run your own website

I am, but that doesn’t solve the problem with Twitter manipulating our elections, does it?


There's a point where this "all opinions have merit" argument is just nihilism and lazy thinking.


So I see the term fascist being thrown around against conservatives frequently on sites like Reddit and I just chalk it up to people on sites like that not really knowing what the term means.

HN to me is different. I fully expect people to understand and mean what they say here.

So I'd like to know from a fellow HNer what exactly do you mean by calling "these" people fascists and who exactly are these fascists? Do you literally believe these people whoever they may be are actual textbook definition fascists? If so, why?


It's a strange hyperbole isn't it. Same as when people are accused of being "hateful" and "spreading hate" just for voicing a dissenting opinion on some controversial topics, without any hate involved at all.


You summed up what is really going on well.


There are literal fascists within the alt-right, and the conservatives as a whole have failed to distance themselves sufficiently from them.

“If there's a Nazi at the table and 10 other people sitting there talking to him, you got a table with 11 Nazis.”

Nick Fuentes is a fascist. He has complimented Hitler and Putin - even saying the media comparing Putin to Hitler “as if that wasn't a good thing.” He holds a variety of other repugnant views which are tightly associated with fascism and Nazis.

When Ye and Nick Fuentes (an actual fascist) have dinner with a leader of a party, and that party didn't immediately abandon that leader, they all qualify under the 11-Nazis category. See also: no abandonment or even major pushback against the leader saying "very fine people" were chanting "blood and soil" at Charlottesville, etc.

[edit] Also, in the context of social media moderation, most people moderated hold extreme views, so when talking about conservatives who have been moderated, they are more likely to be fascist. Same as how moderated left users are more likely to be Stalinists or members of other extreme groups when compared to the left population as a whole.

The difference is, sometimes when conservatives have complained about moderation, the examples given were of prominent alt-right, fascist-adjacent figures being banned. Christian Nationalists and White Nationalists are fundamentally fascists, but some of them are defended in culture wars by a good chunk of conservative voices. E.g. Alex Jones received conservative op-eds in his support when he lost a lawsuit.


> “If there's a Nazi at the table and 10 other people sitting there talking to him, you got a table with 11 Nazis.”

Wow, this Nazism stuff is super contagious! I shudder to think that most of the world has caught it by now.


You jest but fascism was and is super contagious.

The Nazis went from a fringe political movement to a solid minority voice (with 20% of the seats and about 33% of the vote). Then came the Reichstag Fire and heavy repression of opposition - getting them to 44% of votes in a “free” election. And then came the Enabling Act and there were no more votes.

What this saying means, ultimately, is that enabling fascism via silence makes people anti-democratic. It does not take many people passively sitting out of the way for authoritarian regimes to take power, and they are only removed by the bullet, not the ballot box.

I will note I can't find the provenance for this saying actually existing in post-WW2 Germany; it is likely a modern invention. It is nonetheless a true sentiment of some of the soul-searching German and other formerly fascist-ruled societies made after WW2.


Fascism isn't a fad like bell bottom jeans. It forms under specific material conditions.


> a political philosophy, movement, or regime (such as that of the Fascisti) that exalts nation and often race above the individual and that stands for a centralized autocratic government headed by a dictatorial leader, severe economic and social regimentation, and forcible suppression of opposition.

Let's take Trump, since you asked for a specific example. Certainly these criteria apply by definition to Trumpists, Libs of TikTok, etc.:

[x] America first (nationalism)

[x] Attempts to discredit elections and strong arm state officials (centralized autocratic government headed by a dictatorial leader)

[x] Build the wall (severe economic and social regimentation, racist)

[x] Muslim ban (severe economic and social regimentation, racist)

[x] Undermining the free press (forcible suppression of opposition)

[x] January 6th (forcible suppression of opposition)

[x] Telling the Proud Boys to "stand back and stand by...Somebody's got to do something about antifa and the left" (forcible suppression of opposition, racist)


Pretty hyperbolic. When I think of a dictatorial leader, Hitler, Mussolini, Stalin, Pol Pot, NK's Kims, Putin come to mind.[0]

Did Trump (or any US leadership past or present) wield that kind of violent and absolute power over government and people?

> severe economic and social regimentation

Again, I think of NK, China's treatment of Uighurs, their Covid lockdown policy, etc.

The left and right have different ideas on what is good for the nation and they have some commonalities too. I think this is healthy. What's not healthy is demonizing the other side because of different perspectives and beliefs.

Ironically, this demonizing of other groups is actually more akin to fascism (not calling anyone fascist, just pointing out the irony in this case) as that is the strategy used against the opposition by real fascists. E.g. Hitler towards Jews.

On Reddit, the language used against anyone not aligning with the Left is downright scary and dehumanizing. [1]

[0] Dictator: a ruler with total power over a country, typically one who has obtained control by force.

[1] See r/politics


Visibility filtering happens at Google and other big tech companies as well, probably even at a larger scale. And they have a right to do so, but once the complexity reaches a certain level, they should be considered publishers and be fully responsible for content, just like newspaper editors are.


The American philosopher John Rawls had this interesting concept, the "Veil of Ignorance"[1]. It basically says that only when you don't know who you are will you think clearly about social justice. That is, pick whichever principles you think the society should stick to, but do so only when you don't know in which position you'll end up. So, you can think that justice means killing the damn rich so you can live in one of those waterside properties, but what if you turn out to be one of the rich? You may think that it's only fair to let poor people survive by themselves, but then you will be surprised if you are born into a struggling family. Or, hypothetically, you may think it's totally fair to ban the hell out of this damn Dr. Jay Bhattacharya because apparently he's spreading misinformation, until one day you find that you are shadow banned on Twitter for warning people of climate change, simply because the employees there didn't believe in climate change. So, yeah, unless we believe that all dominant companies will always be run by Californian-style lefties, or that our lives will never be impacted by right-wing organizations or policies, these Twitter Files should concern us.

[1] https://en.wikipedia.org/wiki/Original_position


Yes.

Random thought: this would likely make for a fun board game concept.

Players are tasked with designing societal rules... and then, whoops, they roll the dice and are now confronted by the rules they set for themselves.


Beautifully said


The problem with these "Twitter Files" is that it is being released in the form of "bombshells" when in reality they are exposing nothing. But by doing it in the form of a bombshell the right wing is making it look like they have been oppressed all these years, when it is their voices that are mostly promoted on these platforms.


Either you're kidding, or you're so far left that everything else looks right wing.


Can you post a quick breakdown of what is actually being revealed? I'm honestly shocked that anyone at least a bit technically literate didn't understand the types of tools social media companies use.

Elon and others seem to be pitching these releases as huge bombshells about conservatives being silenced. I haven't seen any evidence for that claim at all. The most detailed analysis we've got so far was a spreadsheet of employee political donations. Maybe they are building up to it?


Exactly. Elon is going for theatrics but none of the releases are big enough to break through to the average person. All of them together might but these scraps of news aren't doing it.

Also they cite these right wing personalities being suppressed but honestly there are so many of them, of course some will be flagged. I'm pretty left leaning and I'm struggling to even think who would be the leftwing equivalents of some of these names. We don't really coalesce around political personalities so who would there be to get flagged?

The presentation of these details is being done in such an overtly gleeful right-wing manner that it's hard to take it in objectively.


I don't understand how this is difficult to understand.

Twitter lied about their activities, and they targeted conservatives and the political right. Those are facts. Inexcusable.

Elon is pro-regulation in a non-deceitful manner. He's solving the problem.


And then some people downvote my comment. Probably the same intolerant cancel culture activists that plagued twitter.


Sorry for going off topic, but am I banned from the top pages of HN? Or is this topic filtered?

Recently I didn't see this thread in the first 10 pages. Now it's appeared on page 7. It should be more visible given the number of upvotes and comments, shouldn't it?


Topic is being downvoted/flagged, I assume. Normally HN loves tech company drama. And there are screenshots of twitter accounts from the employee/moderator tooling. It's super interesting stuff, even just technically.


It appeared at #27 and then got flagged. Just wow, HN.


It's just the normal up-and-down of sensational submissions on HN. The 'wow' factor is a function of the attention you're paying to it.


There is nothing normal about this kind of censorship, dang.


This is hardly censorship:

The Twitter Files - https://news.ycombinator.com/item?id=33838556 - Dec 2022 (1549 comments)

The Twitter Files Part 2: Twitter's Secret Blacklists - https://news.ycombinator.com/item?id=33915734 - Dec 2022 (481 comments)

... and it's quite normal. It happens with every sensational or inflammatory story, especially the Major Ongoing Topic kind: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu....

We don't have any interest in censoring this—it's being moderated the same way we moderate anything else in this category, regardless of which way the political vectors point.

You guys are cherry-picking a tiny and arbitrary sample of datapoints—i.e. what you happen to notice, when you happen to be looking, which is (1) a tiny time sample, and (2) strongly influenced by how you feel about the story. Other people who look at different times and/or feel differently notice different things and come to completely different conclusions. Then you (some of you, at least!) turn those unreliable, sample-biased and feeling-influenced observations into flaming arrows and fire them at the mods—the same mods, btw, who turned off the flags on the story in the first place.


This is interesting... I believe you that no one at HN is putting their thumbs on the scale here. But I suspect what's happening is community censorship by "hivemind", like a post critical of Democrats getting downvoted to 0 in r/politics because many of the community members are Democrats. Or actual Nazis getting praise on 4chan...

The media is ignoring or downplaying this story. Reddit is ignoring or downplaying this story. https://www.allsides.com/story/free-speech-second-twitter-fi... shows clear bias in covering the story.

It's kinda hard to argue that there's not a massive bias in big tech: https://i.imgur.com/taGzsZP.jpg

Why is tech overwhelmingly Democrat-donating/leaning? Even more interesting, perhaps.. There are 4-5 companies with a significant Republican presence, but at first glance I can't figure out what they have in common. Oracle? Old company. Intel? Old company. HP? Old company. Uber. WTF? And not Salesforce?! Not even eBay. Neither tech nor company age explain the majority.

There's a sociological phenomenon here I lack the skills/data to figure out, but it is very interesting.


Read HN through Serializer.io; the HN homepage has been heavily brigaded for years and is completely unusable, full of nonsense. You would be better off even reading the top posts of the last 24 hours through besthackernews.herokuapp.com, though I prefer the chronological order at Serializer to see new items, and it can also sync across devices through a unique URL.


Quite ironic that this post itself is shadowbanned.


Please define "shadowbanned."


It would be useful to have a somewhat in-depth visual of Twitter's engineering architecture along with an explanation of exactly where in the system these 'visibility filters' were implemented. There's also mention of an 'amplification block' which makes one wonder just how natural the phenomenon of a post 'going viral' really was in the past.

Twitter claimed to be a user-driven platform (not counting bots, of course) where user actions would dictate what was popular and what went viral, but now there are some questions about just how controlled such events were. Questions like "How much of this was stage-managed?" and "How much was bot-driven?" deserve answers.
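One common way such a 'visibility filter' could sit in a ranking pipeline is as a late-stage multiplier on an engagement score, so a deboosted tweet still exists and is technically reachable but rarely surfaces organically. This is a speculative toy sketch; the scoring formula, the 0.1 penalty, and all names are invented, not Twitter's actual system:

```python
# Toy sketch of an "amplification block" as a late-stage score penalty.
# All weights and names are hypothetical, purely for illustration.

def base_score(likes, retweets, replies):
    # Invented linear engagement score; a real ranker would use a
    # learned model over many more signals.
    return likes + 2.0 * retweets + 1.5 * replies

def ranked_feed(tweets, deboosted):
    """Rank tweets, applying a heavy multiplicative penalty to
    deboosted authors. tweets: (tweet_id, author, likes, rts, replies)."""
    scored = []
    for tweet_id, author, likes, rts, replies in tweets:
        score = base_score(likes, rts, replies)
        if author in deboosted:
            score *= 0.1  # visibility filter: tweet survives, reach doesn't
        scored.append((score, tweet_id))
    return [tid for _, tid in sorted(scored, reverse=True)]

tweets = [
    ("t1", "alice", 100, 10, 5),   # base score 127.5
    ("t2", "bob",   500, 50, 20),  # base score 630.0
    ("t3", "carol", 80,  5,  0),   # base score 90.0
]
# With bob deboosted, his far-more-popular tweet ranks last.
print(ranked_feed(tweets, deboosted={"bob"}))   # ['t1', 't3', 't2']
print(ranked_feed(tweets, deboosted=set()))     # ['t2', 't1', 't3']
```

If virality works by repeatedly feeding high-scoring tweets back into more candidate feeds, a multiplier like this compounds at every stage, which is why even a modest penalty can make 'going viral' effectively impossible for a flagged account.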


The title "The Twitter Files" is totally wasted on this. And what's with the photographs of screens for dramatic effect instead of screenshots?


I love "The Twitter Files" name, it's hilarious and totally keeping in character with the conspiratorial nature of those involved (sounds like The X Files, very mysterious and suspicious!).

The episodic format of this - alongside being forced into exclusive twitter threads when it would be much more easily understood as a longer article or series of paragraphs - also comes across as a pretty blatant and desperate attempt by Musk to make twitter "the #1 source of news"



[flagged]


[flagged]


Am I right in believing that the only supporting evidence (and this is very weak evidence) for this claim in the two parts so far is that Twitter employees donated more money to Democrats than Republicans? Has there been any other evidence that enforcement was biased? In this latest story, Weiss said the teams were handling hundreds of cases per day and then gave a total of four right-wing examples.


[flagged]


The 4 stages.

It's not happening.

It's happening, but it's equally applied.

It's not equally applied and that's a good thing.

We already knew about it. Why is this news?


Some things that would be good if they happened don't happen


[flagged]


I feel like HN is pretty close to being the most balanced 'techie' social platform I've come across. I very often see discussions here which surprise me in that talking about them on Twitter, Reddit (in a similar community), etc. would handily get your post deleted/banned/hidden. One that comes to mind is discussions about vaccine effectiveness data without either polarized extreme being dominant (i.e. no one acting like it's a sin to even question it and no one acting like it's some secret plot to spread autism for some reason).


It only seems balanced to those that agree with its bias.

Which is kinda proven given that I currently can't reply to you because HN is rate limiting me...

FYI: this comment took 3 hours to make


> Why does big tech, including HN all have the same auth-left wing bias?

I think intelligent people are abnormally capable of self-deception.


Everyone is heir to the same inherent flaws that plague humanity, but smart people are less likely to practice self-deception.

If you find yourself, on average, on the opposite side of the vast majority of smart people, the first thing in order is self-reflection. While being smart doesn't make them correct, the last thing that would be useful is to dismiss the majority of smart folks.


Smarter people are deceived by different things, but they are no less subject to propaganda. One of the propaganda points that works against all people is "you are in the club of smart people, let me let you in on some things most people either don't know, or don't want to know!"


> but smart people are less likely to practice self deception

Smart people are better at contriving complex justifications that seem right but are in fact wrong.

People don’t usually set out with the intention of deceiving themselves.


Do understand that if/when you get banned from here, it will be because you're being rude and disrespectful.

Not because we are all trying to push some left wing agenda.


> you're being rude and disrespectful.

I don't see any of that in the above comment. Did they edit it to make it less so?


[flagged]


>apparently it has to be on the front page or you are violating the first amendment rights of those who want it on the front page!

This is a ridiculous strawman fallacy. HN loves gossip about tech companies getting caught doing things all of the other times it has happened. Hundreds of comments and upvotes on each one. But not this time? What's the difference?


> Hundreds of comments and upvotes on each one. But not this time?

These so called Twitter files have gotten over 2000 comments combined on HN. Not sure what your problem is.


[flagged]


So since their concerns have been proven justified, they don't get to talk about them because of their tone when they first raised the concern? They weren't crying wolf. They were right the whole time, which honestly justifies the howling to me.


Sorry, I don't agree with your premise. When I see a quantitative analysis backing up conservatives' claims that they're unfairly discriminated against, I'd consider it valid.

And yes, I do have a problem with their tone. Conservatives are way overdrawn at the good faith store, having made a years-long habit of hyping things up. This example is from 2005: https://zfacts.com/zfacts.com/metaPage/lib/Weekly_Standard_M...


It doesn't require your agreement to be true: https://twitter.com/bariweiss/status/1601007575633305600


I wonder what Musk would say about the idea of limiting Tweets instead of banning them.

Oh, wait!

  New Twitter policy is freedom of speech, but not freedom of reach.

  Negative/hate tweets will be max deboosted & demonetized, so no ads or other revenue to Twitter. 

  You won’t find the tweet unless you specifically seek it out, which is no different from rest of Internet.
Elon Musk, Nov 18th, 2022: https://twitter.com/elonmusk/status/1593673339826212864


This all seems like a major publicity stunt. We all know that Twitter is filtering users. The question is why they are filtered. There's a clear bias toward specifically showing right-wingers being banned, but other independent sources say it is roughly even. I don't really want to see political leanings so much as I want to see the content that caused them to be flagged in the first place. If someone promotes Nazism or some other universally hated sentiment, I don't care which party they belong to.

I want to know WHY these people were flagged.

Without that being said I can't read these (and the previous posts) as any more than propaganda. It has a right wing bias and that feeds into a prior narrative but we all know that social media is amazing at causing selection bias. If they really want to show that Twitter's prior policies favored the left wing then they need to show more data and fewer direct examples. There's been this narrative that most employees at Twitter are left wing therefore filters target right wingers but that is naive unless we assume all Twitter employees' voices are equal and equally contribute to the creation of these filters.

This isn't so much whistleblowing as it is propaganda. Musk wants to drive eyeballs, and the master of hype is doing what he does best: parading around a nothing burger and promising the moon. He's just exploiting the right-wing prior that many have, using selection bias without actually demonstrating that there is a bias.


It is there in the thread

> 3. Take, for example, Stanford’s Dr. Jay Bhattacharya (@DrJBhattacharya) who argued that Covid lockdowns would harm children. Twitter secretly placed him on a “Trends Blacklist,” which prevented his tweets from trending.


> who argued that Covid lockdowns would harm children.

Using science or using propaganda? The issue, as I'm trying to make as abundantly clear as possible, is that we still don't know what the offending tweet says. Therefore we cannot actually conclude if the down-weighting was appropriate or not. If he said "I have a paper which shows that under lockdown children are performing worse on tests" then I'd be upset. But if he said "The lockdown is harming our children and turning them into literal Nazis" then yeah, I do want that taken down. I'm just unwilling to speculate as to what the tweet actually said. You're putting a lot of trust in The Free Press[0], a new (self-admitted) press organization that has little history[1]. In an age of misinformation we have to be constantly asking ourselves "do I believe this information because it is accurate or because it fits my preconceived notions (priors)?" Be critical of these people too. If what they are saying is true, then fair critique, as I've laid out, will roll off of them and truth will be on their side. If they aren't saying what is true, then do we also not have a duty to speak up when we find flaws in their arguments? If you want a free and fair media, we have to be able to criticize and evaluate our internal biases.

[0] https://www.thefp.com/

[1] Google them. Literally googling "thefp" doesn't come up with them. They do not seem to be associated with the advocacy group The Free Press. And one of their sources uses MD in their username but claims in the bio that they aren't a doctor and it is their initials; ShellenbergerMS is not a taken username... Looking these individual people up, we do find a specific political bias, not a uniform one like their website claims.


Bari Weiss was a NYT columnist for years; she resigned when her colleagues bullied and harassed her for not toeing the party line. She started a Substack called Common Sense with Bari Weiss, which she's been sending out daily for the better part of two years now. She apparently has over 250k subscribers; just today she announced they were rebranding that newsletter to The Free Press and officially coming out as a media org (whereas before it appeared to be mostly Bari Weiss with regular guest columnists).

What exactly are you insinuating?


> Using science or using propaganda?

I just checked the Twitter Rules [1], they don't forbid "propaganda".

And I just checked the definition of "propaganda" [2], seems like it includes many things that nobody alleges is against the Rules (for example, any political ad is "propaganda").

[1] https://help.twitter.com/en/rules-and-policies/twitter-rules

[2] https://www.merriam-webster.com/dictionary/propaganda


This is a bad faith read of my comment and the chain of conversation[0]. Spreading false and dangerous information does in fact violate Twitter's rules. As I've repeatedly said, we can't draw any strong conclusions without knowing the actual post that resulted in the ban. Without that we are just speculating and indulging in our personal biases. Please stop perpetuating this witch hunt and help demand real proof and strong evidence.

[0] I should mention that bad-faith responses also violate HN's own rules. Though this rule is not often enforced (as evidenced by this entire post), it exists to encourage high-quality discussions. Let's try to have one.


Simple falsehood doesn't appear to be against the rules either, only deliberate lies/manipulated media are.

> We can't draw any strong conclusions without knowing the actual post that resulted in the ban.

We don't know what the post in question is, because Twitter is not transparent and shadowbans people without telling them why. With no evidence to the contrary, we should presume Dr. Bhattacharya innocent.


> Simple falsehood doesn't appear to be against the rules either, only deliberate lies/manipulated media are.

So Twitter made this claim. FP didn't deny it either fwiw.

> With no evidence to the contrary, we should presume Dr. Bhattacharya innocent.

While I'm a big fan of Blackstone's Ratio you need to recognize that I am not accusing him of anything. He's not on trial here, the FP is. My claim is that these Twitter Files are propagating controversy without providing any substantial evidence. Dr. Bhattacharya is only part of this conversation because YetAnotherNick claimed that the post about him was direct evidence of manipulation. Which, if you read back, I said that this sample is not informative since we do not know what led to the ban. Without that evidence we don't know if Twitter acted in good faith or not.

Again, Dr. Bhattacharya is not on trial. Proof of his innocence would help The FP's argument, so why are they hiding it? The same is true for __every single one of the samples given__. We have ZERO information about the content that was posted that led to the actual down-weighting and bans. Because this information is being _intentionally_ hidden from us we must be suspicious of The FP's motives here. If you visit their website I'm sure you'll find a bias that strongly correlates with the claims they are making here: Twitter devalues right wing voices. Problem is, the claim still has no evidence. Just correlations. But these are not the same thing and that's why people are fighting in the comments. There is absolutely no information given that can allow us to draw accurate _causal_ conclusions.

Don't confuse criticism with political hackery.


It's kinda ridiculous that 'bad tweets' are hidden so nobody is any the wiser unless they screenshotted them in time. I think it would be better if they were given a frame or put in a different color with a tag like 'user was banned for this tweet.'


Hearing people who weren’t censored - and who in fact benefited from censorship - tell those who WERE censored that censorship is no big deal is infuriating beyond belief.


Fascinating to see how the comment section is skewed only by the people who are invested enough in this melodrama to have come running to HN to discuss the thread. Will be interesting to see how it shakes out when the rest of the world wakes up.


No offense, but pretty much everyone outside America has been awake for a long time already, especially the biggest part of the world's population, living in Asia. All of America has about 1/5 of Asia's population.

I think you should refresh your time zones knowledge.

But yeah, North American leftists are not awake yet.


Online platforms should not be allowed to moderate content according to whim or unwritten bias.

If a piece of content is moderated, it should remain in place and be clearly labeled with an explanation from the TOS.

Exceptions: If the moderated content is spam it can be collapsed. If it's illegal, it can be removed.


None of this is new information. There is an entire help page about this:

https://help.twitter.com/en/rules-and-policies/twitter-reach...

The only new info I learned was that Libs of TikTok apparently got special protections not afforded to regular users.


The Four Stages:

1. It's not happening.

2. It's happening, but it's equally applied.

3. It's not equally applied and that's a good thing.

4. We already knew about it. Why is this news?

^---- You are here.


5. TruthSocial does it too


Exactly!

"Fascists do it, but it's justifiable when we do it, because we are moral and the ends justify our means. We're moral because our name is literally antifascist!"


It’s marketing material; content moderation and free speech are super hot topics.


