The manner in which they rolled it out is a very real concern, but moderating based on how the moderators think you wrote the content is a dangerous policy that is ripe for abuse, and SE has a history of overly aggressive moderators. I don't blame SE for wanting to nip that practice in the bud.
If a post isn't constructive, they can and should moderate it away, whether or not it was AI generated. If they want to have a rate limit on number of answers per day, they can do that. But they need to moderate based on observable metrics, not guesses at what's happening behind the scenes.
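For what it's worth, that kind of observable-metric moderation is trivial to enforce. Here's a minimal Python sketch of a per-user sliding-window answer limit (all names are hypothetical, not anything SE actually runs):

```python
from collections import defaultdict, deque
import time

# Hypothetical limits for illustration only.
MAX_ANSWERS_PER_DAY = 5
WINDOW_SECONDS = 24 * 60 * 60

# user_id -> timestamps of that user's recent answers
_answer_times = defaultdict(deque)

def may_post_answer(user_id: str, now: float = None) -> bool:
    """Return True if the user is still under the daily answer limit."""
    now = time.time() if now is None else now
    times = _answer_times[user_id]
    # Drop timestamps that have fallen outside the 24-hour window.
    while times and now - times[0] > WINDOW_SECONDS:
        times.popleft()
    return len(times) < MAX_ANSWERS_PER_DAY

def record_answer(user_id: str, now: float = None) -> None:
    """Record a posted answer against the user's window."""
    _answer_times[user_id].append(time.time() if now is None else now)
```

Nothing in that check requires guessing how the text was produced; it only looks at behavior the site can directly observe.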
Consider a user who has been writing (bad) answers in broken English for months, and now suddenly writes answers in polished, GPT-esque prose while the technical substance is still bad.
The moderation burden for that second kind of answer is much higher, and other users are far more likely to mistake it for a good one. If the policy is "no AI-generated content allowed", should moderators be allowed to suspend the user?
All that example shows is that native-English bullshitters have long had an advantage over ESL bullshitters. We don't need to take away tools that good ESL contributors could use to get more recognition; we need an actual solution to the bullshitting that has always plagued SO.
I think this is partially true, but there is more to it than that.
Careful, clear writing is hard even for native speakers, and it serves as a kind of proof-of-work: an answer that reads as carefully written signals that its author put in real effort, and that effort correlates with higher-quality answers.
Also, trying to hold back the tide on AI is dumb and doomed to fail. People need to find some way to coexist with these tools. The editors at Stack Exchange are not the kind of powerful cabal that is going to roll back the last three years of advances in computing.
An excellent use case for AI at Stack Overflow would be to integrate it so that existing questions and answers help people solve their problems. A terrible use is feeding generated answers back into the system, because that degrades the value that both humans and, eventually, the AI itself provide.
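To make the first half concrete, surfacing existing answers could start as plain similarity search over the corpus. A toy Python sketch, with TF-IDF standing in for whatever embeddings a production system would use (the data and function names are made up):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Stand-in for the site's corpus of already-answered questions.
existing_questions = [
    "How do I reverse a list in Python?",
    "Why does my C pointer segfault?",
    "How to center a div with CSS?",
]

vectorizer = TfidfVectorizer()
question_matrix = vectorizer.fit_transform(existing_questions)

def closest_existing_question(new_question: str):
    """Return the most similar already-answered question and its score."""
    scores = cosine_similarity(
        vectorizer.transform([new_question]), question_matrix
    )[0]
    best = scores.argmax()
    return existing_questions[best], float(scores[best])

print(closest_existing_question("how can I flip the order of a python list"))
```

The key property is that this only routes people to answers humans already wrote and vetted, rather than generating new, unvetted text.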
As someone who reads Stack Overflow answers, it's also awful to wade through the long-winded AI ones. If there are mistakes in an answer, there's no way to get the submitter to fix them, because they likely didn't understand the answer in the first place.
There are ways to do this right, but inexperienced developers opening ChatGPT and blindly copy-pasting its output to harvest karma isn't one of them.
Regardless of whether it's doomed to fail, the content these "computer advances" are creating on the site is definitely garbage at the present time. I've flagged hundreds of posts, mostly brand-new users attempting to rep farm. A huge percentage of the answers are flat-out incorrect or have little to do with the question.
Banning moderation of suspected AI content without consulting the community is not a step toward coexistence; it's a step toward turning the site into a dumping ground for stale LLM spew in the name of engagement.
Companies like Stack Overflow seem to care little about the quality and accuracy of the site's content as long as the engagement numbers look good. Unfortunately, a lot of users don't seem to care about quality either, which is how we've wound up with the current state of affairs, where just about every social platform is increasingly flooded with bot spam, misinformation, scams, and junk.
By banning low-quality posts, including unverified LLM answers, the site can secure a future for itself as a bastion of quality. Or it can turn its back on the experts that built the content that's used to train the LLMs and hope that LLM quality improves enough that expert humans aren't necessary.
Even if that gamble works out for them and LLMs do mostly replace human experts, as some commenters optimistically expect, there will be no need for Stack Overflow at all: one could simply ask an LLM directly. That's why effectively dumping human experts for LLMs looks like a failed business model either way. The better path forward is to carve out a space that provides unique value LLMs can't, and ride that until LLMs make their next significant advance.
Disclosure/context: I'm a daily SO answerer on strike, among the top ~4k users by rep overall. I'm pretty sure most of the folks dismissing the problem don't follow tag feeds or work the review queues enough to see the flood of blatantly wrong LLM answers spamming in from new accounts.