The meta post linked here is targeted more towards an internal audience of active users.
There are two big parts to this issue. The first is that the company is overriding the decisions of the communities and essentially preventing them from moderating AI-authored content at all. The second is the way this was done: with no community feedback, extremely quickly, and with vast differences between the public policy and what the moderators were told privately.
The manner in which they rolled it out is a very real concern, but moderating based on how the moderators think you wrote the content is a dangerous policy that is ripe for abuse, and SE has a history of overly aggressive moderators. I don't blame SE for wanting to nip it in the bud.
If a post isn't constructive, they can and should moderate it away, whether or not it was AI generated. If they want to have a rate limit on number of answers per day, they can do that. But they need to moderate based on observable metrics, not guesses at what's happening behind the scenes.
A user has been writing (bad) answers in broken English for months, and now suddenly writes answers in perfect GPT-esque style while the technical aspect is still bad.
The moderation burden for the second kind is much higher, while other users are much more likely to mistake it for a good answer. If the policy is "no AI-generated content allowed", should moderators be allowed to suspend the user?
All that that example shows is that native-English bullshitters have long had an advantage over ESL bullshitters. We don't need to take away tools that good ESL contributors could use to get more recognition, we need to find an actual solution to the bullshitting that has always plagued SO.
I think this is partially true, but I'd insist there is more to it than that.
Careful, clear writing is hard even for native speakers, and good writing serves as a proof-of-work: answers that read as having had more effort put into them signal that more work was actually done, i.e. they are more likely to be higher-quality answers.
Also, trying to hold back the tide on AI is dumb and doomed to fail. People need to find some way to coexist with these tools. The editors at Stack Exchange are not the kind of powerful cabal that is going to roll back the last three years of computer advances.
An excellent use case for AI at Stack Overflow would be to integrate it and use existing questions and answers to help people solve problems. A terrible use would be to feed AI answers back into the system, because that degrades the value that both humans and, eventually, the AI itself bring.
As someone who reads Stack Overflow answers, it's also awful to see the long-winded AI answers. If there are mistakes in an answer, there's no way to get the submitter to fix them, because they likely didn't understand the answer in the first place.
There are ways to use these tools correctly, but inexperienced developers opening ChatGPT and blindly copying answers to harvest karma isn't one of them.
Regardless of whether it's doomed to fail, the content these "computer advances" are creating on the site is definitely garbage at the present time. I've flagged hundreds of posts, mostly brand-new users attempting to rep farm. A huge percentage of the answers are flat-out incorrect or have little to do with the question.
Banning all moderation without consulting the community is not a step towards coexistence, it's a step towards turning the site into a dumping ground for stale LLM spew in the name of engagement.
Companies like Stack Overflow seem to care less about the quality and accuracy of the site's content as long as the engagement numbers look good. Unfortunately, a lot of users don't seem to care about quality, either, which is how we've wound up with the current state of affairs where just about every social platform is increasingly flooded with bot spam, misinformation, scams and junk.
By banning low-quality posts, including unverified LLM answers, the site can secure a future for itself as a bastion of quality. Or it can turn its back on the experts that built the content that's used to train the LLMs and hope that LLM quality improves enough that expert humans aren't necessary.
Even if that gamble works out for them and LLMs do mostly replace humans, as some commenters optimistically seem to expect, there'll be no need for Stack Overflow: one can simply ask an LLM directly. That's why effectively dumping human experts for LLMs seems like a failed business model either way. The best way forward seems to be to carve out a space that provides unique value LLMs can't, and ride that until LLMs make significant advances.
Disclosure/context: I'm a daily SO answerer on strike, among the top ~4k users by rep overall. I'm pretty sure most folks who are dismissing the problem don't monitor tag feeds or curate queues enough to see the flood of blatantly wrong LLM answers spamming in from new accounts.
We deeply believe in the core mission of the Stack Exchange network: to provide a repository of high-quality information in the form of questions and answers, and the recent actions taken by Stack Overflow, Inc. are directly harmful to that goal.
Unfortunately, this seems a naive take; the core mission of the network is to serve the commercial purposes of the business.
That's a very snarky way to put it. It's well within the rights of the SE community to express their perspective and demand that it be respected. That's not naive; naive would be assuming this is guaranteed to work. But if they actually carry out a moderation strike, then SE is going to fall apart sooner rather than later. So it's not as if they have no leverage.
Expecting a business - with infamous episodes of contempt towards moderators - to behave in the idealistic way presented here is naïve.
Might it submit to pressure? Perhaps. But the wording of this letter in its presumption of the motives of a private organisation misunderstands the reality of the commercial world.
It's not a statement of expectation, it's a statement of value. Your uncharitable reading does not change at all what the statement is trying to convey: there's a mismatch between what the moderators perceive as the business' value and how the business acts to protect and expand on that value.
Very true. Throwing up your hands once you realize that corporations tend to be amoral profit maximizers is not helpful. Expecting corps (even non-profits) to behave altruistically is naive, so we have to take real steps to ensure their survival depends on them being good actors.
Specifically, organization of contributors actually sounds like a very effective way of holding these morally dubious “content” middle men accountable. The consumers are too heterogeneous, numerous, and uninvested to realistically coordinate.
Of course, but strikers are paid workers operating within a legal process, they can't just be easily supplanted by free alternatives as can moderators.
But that's getting off the topic of whether or not the wording of the quoted paragraph reflects reality.
The idea of a strike being 'legal' or not is an unfortunate yoke that labor movements have allowed to be fitted to them. It's a silly idea and should be addressed as such. Labor has power. The collective masses have power over the few who oppress them. It doesn't matter if those people decide to call certain labor actions legal or illegal, they should happen (and must happen) all the same to keep the balance of power equitable.
> they can't just be easily supplanted by free alternatives as can moderators.
That's an incredibly naive take. Do you think there is an abundance of moderators just waiting to fill the ranks? Several SE sites are struggling with a lack of moderators already.
Similarly, there is no abundance of community members curating questions and answers. It's all volunteers, and it's not like there's a huge untapped pool that they can access on-demand.
It's kinda like saying that the entire community can be easily replaced because it's free. No, the community is the business value!
Lastly, the main anti-spam tool on the site (smokedetector-se.org, community-built and -hosted) is also offline as part of the strike. The tooling built by SE itself is in no way, shape, or form adequate for combating the amount of spam the site receives. Sure, the community tooling isn't irreplaceable, but it isn't "free" either.
My original point was about an incorrect reading of what the network is there for; the distraction into strikes and whatnot might be of interest but not on that specific point - which holds regardless of your rather silly supposition.
Are we heading towards a Soviet Union 1.2? It seems like there's writing on the wall that isn't clear enough for my eyes. It seems that a temporary mass destruction of the free-market economy is anticipated.
Not to get too tinfoil hat about it, but an economy controlled by a number of equity firms one can count on a single hand, with fingers to spare, hardly seems a "free market".
Hence recent developments of large brands committing billion-dollar acts of seppuku in near-realtime: somebody is pulling strings in non-obvious, non-"free market" ways.
It's a zany time to be alive, and one hopes that sites like this one can help redistribute a "free market" sensibility that seems on the wane.
> Not to get too tinfoil hat about it, but an economy controlled by a number of equity firms one can count on a single hand, with fingers to spare, hardly seems a "free market".
Is this the index fund conspiracy theory again? Where companies like Vanguard supposedly form some sort of shadowy evil cabal that secretly controls the entire economy?
I don’t fundamentally disagree, just I’m feeling uncomfortable watching the world slowly converging into “a great war for a great reset” type of thinking.
Sure, which is why it is not helpful or interesting to point it out.
When companies talk about their "mission", they are referring more to how they intend to deliver value to their owners, usually by identifying some social need and satisfying that need in exchange for value.
The core mission of the system that is Stack Overflow is in the eye of the beholder. If you ask the army of unpaid volunteers who maintain it, or one of the even larger army of contributors who actually provide expert-level content for free, I think you'll get a different answer than the C-suite will give you.
The business is a helpful abstraction layered over the top of all these people doing the actual work. It's useful as long as it keeps the lights on. When it stops doing that the community has the right and the responsibility to move the work product of the huge quantity of people actually doing the thing here somewhere else.
Well then maybe "the business" shouldn't depend on the gratis effort of hundreds of thousands of unpaid volunteers. If I read between the lines, it would seem to me that management has decided they can make more money by allowing AI-generated content. I don't think this is true. Yet. But whether it is or isn't, this is a revocation of the implied contract we, the users, have had with the site.
To the extent that this is true, it's a corruption of the purpose of companies in our society.
The reason we created "corporations" as a concept was not simply to have a vehicle to make as much money as possible, any way possible. It was to serve society by providing services and making products, which would then be sold, and, if they were good enough, would make the company a profit.
This idea that no business should ever be expected to do anything but what makes them the most money the fastest is toxic and is ruining our economy and our society.
Well, this is off the topic of the quoted passage in my comment, but your use of "unquestioningly" is a strange choice of word, and it's reflected elsewhere in the letter in "unchecked". I found that quite dubious wording as well.
That's exactly what moderation is for: checking, questioning, etc. It's not an argument against AI-generated content.
There seems to be an emotional reaction, but people will still value the highest quality content, whatever its provenance.
If the argument is that the workload will be too high, well make that argument - don't sidetrack with misleading and idealistic appeals to emotion.
Unfortunately (again), there are plenty of people who will buy from and work for businesses that don't subscribe to the same views as those in the letter.
The problem is that the public policy as posted by SE is simply different than the private communication from SE. Those private rules were shared in places that have an expectation of confidentiality, so the moderators are not entirely free to just post those without violating that expectation.
The public policy by SE is misleading, it makes the rule appear a lot different than it actually is. I am a mod on a small SE site, so I have seen the internal communication and it does essentially prohibit moderating AI-generated posts except in some very narrow circumstances.
https://openletter.mousetail.nl/