This is a core critical problem. I suspect reddit has significant automated processes on top of this.
In my original comment I mentioned mods issuing hate-speech bans in an abusive or ironic manner (i.e., cases where nobody is truly worried for their safety).
I think Reddit automatically forwards reports in this category for admin review, probably due to a legal requirement. Given the mass quantity of reports versus staff, I suspect mod reports are automatically flagged as credible. The result is automatic admin superbans.
Admin-reviewed bans are a higher tier of ban. These superbans use every possible privacy exploit to go far beyond basic account or cache banning, far beyond even IP banning.
These seem to be intended for truly extreme, severe, violent actions: credible threats of imminent mass violence, coordinating real terrorism, promoting genuinely exploitative adult content, etc.
Instead, they map hate speech and some other serious report types into this same category. It may make sense on paper, but it is abused so heavily that petty meme reports end up treated nearly like grounds for an automatic SWAT raid on someone's home.
I'm guessing these bans are summarized to staff only under a broad category like: "Mod reported + Admin reviewed. User banned for performing or conspiring terrorism / active police-verified death threats / extreme exploitative adult content / active police-verified mass murder / hate speech."
That last item could arguably fit alongside the rest if applied correctly. But as a chain of OR'd reasons, someone banned ironically because a mod "hated that they disagreed with the mod's decision to ban Scumbag Steve memes from the meme subreddit" is suddenly flagged in the same category as actual terrorists, as if Bin Laden himself had run a subreddit and was posting live content of an active mass public attack.
This is one reason they want people to download their app. Associations such as the unique combination of apps installed, or other accounts (including non-Reddit accounts), can be linked to a user even when they opt out of traditional tracking. Reddit can then sell this data or use it in a multitude of ways.
This is a specific core problem I've been tracking, but it also serves as a template for many of Reddit's problems I've strongly disagreed with, problems that go beyond debatable business choices or matters of preference.