
> Although I do foresee Reddit “super mods” forcibly removing the blackout through their backend, removing mod status from the “rebels” and inserting their own friendly mods until they can find replacements.

I agree this is likely, which will extend the negative reactions further and alienate even more users. Unless Reddit execs do a 180 in the very near future I think the platform will be forever crippled.

Large networks rarely die overnight and whatever happens they will still have a number of users for the foreseeable future, but the steady fade to irrelevance will begin in earnest.

Power users drive a huge amount of content and moderation, and they are especially livid right now for good reason.



There are not enough volunteers to handle the load. Companies depend on free labor for moderation and could never afford to pay its actual cost.

I currently moderate 2 subs (down from 5). I'm looking to offload those 2 without having them get banned for being unmoderated (again).


You say this, but it’s only a matter of time until we get AI moderation.


I can't wait to make a seemingly-innocuous post on some subreddit, only to have that post removed for "failing to meet community guidelines" and receive absolutely zero intelligible response from the AI moderators.

To be fair, that's still possible (and sometimes the case) with human moderators, though I find that experience to be exceptionally rare, and the remedy (just don't use that subreddit) easier to perform. At least if the mods of some sub go haywire, users can always create another sub. With widespread use of AI moderation, there'd be nowhere to go to escape.


> I can't wait to make a seemingly-innocuous post on some subreddit, only to have that post removed for "failing to meet community guidelines" and receive absolutely zero intelligible response from the AI moderators.

That's how their whole "appeal" process is now. We had accounts suspended permanently for daring to question the mods of a large subreddit, and they nuked every account that logged in from our home IP. My wife's account, which was innocent, was permanently suspended. Same with the housemate behind us who used our WiFi.

The appeal process? Within 5 minutes we got a useless canned message along the lines of "After careful review, it has been determined that... blah blah blah."

We've since made new accounts, but it was 100+ days later, and all of our reddit usage is way down as a result. My account alone was 8+ years old. Gone, no appeal, nothing.

Reports in /r/help were that they used AI to handle appeals. Not sure if it's true, but it feels like it is.


Strange, I had all of my accounts banned but my spouse's weren't. Same WiFi. My appeal was processed over 48 hours later and I was unbanned. If they really were banning by IP, it seems any account that ever used Tor or a VPN would be banned.

Maybe they saw your "it was my neighbor on my wifi, I swear!" and reflex-nuked.

Also, mods don't suspend accounts; Reddit employees do, based on user reports. Then admins either reinstate or don't.

I wouldn't put it past them to auto-nuke any appeal with the word neighbor, though. That's the "my dog ate my homework" equivalent of trying to get out of a warranted ban.


FWIW, I've played around with PaLM a bit, and if you feed it some rules it's pretty good at explaining which rule a "bad" post violates. It could be a good initial triage step.
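A minimal sketch of that triage idea, for concreteness. Everything here (function names, the rule list, the verdict format) is hypothetical, and the model reply is canned; a real deployment would substitute an actual LLM API call and route flagged posts to a human mod queue:

```python
# Hypothetical LLM-based moderation triage, per the comment above.
# The rules, prompt format, and verdict convention are illustrative assumptions.

RULES = [
    "1. No personal attacks or harassment.",
    "2. Posts must be in English.",
    "3. No link shorteners or affiliate links.",
]

def build_triage_prompt(post: str) -> str:
    """Assemble a prompt asking the model to cite the specific rule violated."""
    rules = "\n".join(RULES)
    return (
        "You are a moderation assistant. Given the community rules below, "
        "reply with 'OK' if the post violates none of them, or with the "
        "number of the first rule it violates plus a one-sentence "
        f"explanation.\n\nRules:\n{rules}\n\nPost:\n{post}"
    )

def parse_verdict(reply: str) -> tuple[bool, str]:
    """Map the model's reply to (flagged?, reason) for a human review queue."""
    reply = reply.strip()
    if reply.upper().startswith("OK"):
        return (False, "")
    # Flagged: keep the cited rule and explanation so a human mod
    # (and the affected user) gets an intelligible reason, not silence.
    return (True, reply)

# Example with a canned reply standing in for the model's output:
flagged, reason = parse_verdict("1. The post insults another user directly.")
```

The point of keeping `parse_verdict` separate is that the model only triages; the final remove/approve decision, and the explanation shown to the user, stays auditable by a human.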


Kind of happens already with auto moderator.


We already have AI moderation at places like Facebook, and they still need a small army of moderators. Zuckerberg has been trying to automate moderation for years.


I wonder if they will still make the AI sign an NDA so it can’t talk about the PTSD it has from work.

Because they make all their content mods sign one that also absolves them of all responsibility.


Re: Zuckerberg, something like GPT-4 didn't exist 5 or 10 years ago. It won't take much further improvement for, e.g., OpenAI's models to handle moderation effectively by learning from instructions tailored to a given community's needs.

There's no scenario where the AI moderators don't become superior to human moderators in a broad sense (only in select, very narrow cases will humans still be better at it). Moderation is one of the easier areas of human activity for AI to take over.

This isn't then. The game has changed, permanently and dramatically. Anybody evaluating what I'm saying based on GPT-3.5 or 4 is doing it very wrong; that's a comical failure to look ahead at all. Look at how far OpenAI has come in just a few years. Progress in that realm is not going to stop anytime soon.

Nvidia is unleashing some extraordinary GPU systems at the datacenter level that will quite clearly enable further leaps in LLMs, and they'll mostly trivially handle moderation tasks.


Pretty arrogant to call it a comical failure when OpenAI is openly stating they're not training GPT-5. People who think we can do much better than GPT-4 without insane cost scaling and a nuts amount of data (which will only get harder to collect once regulations kick in) probably don't know what they're talking about. We're deep into diminishing returns here.


I do think we can do much better than GPT-4. Size isn't everything: small models can outperform large ones when trained and fine-tuned in the right way. And transformers are hardly the be-all and end-all of language AI either; there's plenty of reason to believe they're both inefficient and architecturally unsuited for some of the tasks we expect of them. The field is brand new, and now that ChatGPT has turned the world's figurative Eye of Sauron on developing human-level AI, we're going to see a lot of progress very quickly.


Yeah, but if the tech keeps improving at the rate it has been then more and more of it is going to get automated. And tbh, that’s a good thing. Moderating is a horrible job that frequently causes mental health issues for the people who have to do it.


I expect that AI will increase the need for human moderation. LLMs are better at producing passable spam than identifying it, so any gains in AI moderation will be more than offset. I actually worry that AI spam will become too powerful for even good human+AI moderation and we'll see the death of high quality open discussions online.


Where will reddit get enough mods to police the hundreds (thousands?) of subreddits each with tens of thousands of members that are rebelling against the API changes? Blocking/firing the existing mods would make the site descend into chaos.


They are really only going to care about the top-tier subs; the rest will be replaced by the community. So really we're only talking about 20-100 subreddits, depending on definitions.


Depends on what you consider "top tier". 26 subs with over 20 million subscribers are private or restricted, as are 86 subs with over 5 million subscribers. I was too lazy to count, but there are several dozen more with over 1 million subscribers and a ton with over half a million.

I'm skeptical they can afford to lose all those subs or replace their mods.

source: https://reddark.untone.uk/


8000 subs went dark.


[flagged]


Wow, very condescending!


I don't know... I've been on there on and off all day, and it's honestly better today than it has been for a long, long time.


Well they’ve already started doing it


Which subreddits?


source?


[flagged]


I recently learned this is actually something that college aged people do for extra cash.


Besides directly alienating the users even further, forcibly replacing the mods with sycophants will also put the subs under people who don't quite get their local cultures, don't know what's considered "normal" there, and who'll be overburdened and outright despised by userbases that will degrade the subs even further.

It'll be hell.


It already happened



