Hacker News

I don't think they can win, and I don't mean just the moderators here; SO in general can't win a fight against AI with bans of any kind.

I see one possibility for them: embrace AI and generate a quick, automated first reply, marked as AI-generated (with a disclaimer), for every post. It should be subject to the same voting system.

The error of the moderators and SO here is to discard AI-generated answers because of the wrong (but right-sounding) answers AI can generate... when AI answers are often correct, and at times even outdo what a single human would have found/answered.

If you can harness the existing human knowledge and correct the "bad" answers (a badly rated AI answer is given less weight) to feed the models of the future, it seems like a win for everyone in the long run. A ban misses this opportunity and generates even more work for moderators, who will inevitably also ban some innocent users and valuable content.
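To make the "badly rated answers get less weight" idea concrete, here is a minimal sketch (all names and the weighting formula are hypothetical, not anything SO actually does) of turning community votes into per-sample weights for future fine-tuning:

```python
def vote_weight(upvotes: int, downvotes: int) -> float:
    """Map community votes to a training-sample weight in (0, 1).

    Badly rated AI answers get less weight, well rated ones more.
    Uses a simple Laplace-smoothed ratio purely as an illustration.
    """
    return (upvotes + 1) / (upvotes + downvotes + 2)

# Example: build a weighted dataset from (question, ai_answer, up, down) records.
records = [
    ("How do I reverse a list in Python?",
     "Use list.reverse() or slicing [::-1].", 42, 1),
    ("Why is my loop infinite?",
     "Because Python caches integers.", 0, 17),  # wrong, heavily downvoted
]
weighted = [
    {"prompt": q, "completion": a, "weight": vote_weight(up, down)}
    for q, a, up, down in records
]
```

Any real pipeline would need far more care (vote brigading, low-vote answers, accepted-answer signals), but the point is that the voting data the ban throws away is exactly the supervision signal a future model could use.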




> I don't think they can win, and I don't mean just the moderators here, SO in general can't win a fight against AI with bans of all kinds.

> I see one possibility for them, embrace AI and generate a quick/automated first reply by AI marked as such (with a disclaimer) for every post

Problem is, most contributors would probably stop contributing (answering questions) if that were the case. If there's an automatic answer that is correct 2/3 of the time, that would mean lots of time spent reviewing automatic answers and lots of time "wasted" (where a contribution isn't needed), which would probably discourage most of them.


This doesn't track. The OP would vote on the automated answer before the question goes public. If it solves their problem, that's a major reduction in low-quality question spam; if it doesn't, the AI post is already hidden, so it doesn't waste mod time.


But is the OP best placed to know whether the AI answer is correct? They're obviously not an expert, and even if something seems to work initially, that doesn't mean it's the correct way of doing it. Especially in the context of code, which I think makes up the majority of Stack Exchange's traffic, you could easily have some code that seems to work but has a bug/edge case/insecure practice/etc.


> correct 2/3 of the time

You are an optimist


Indeed. For me the most infuriating part is the number of low-quality questions that could easily have been answered by AI before being submitted. They definitely should embrace AI and generate an answer. If you think the question isn't properly answered, explain why the AI-proposed solution is wrong and let humans answer it.

As for the correctness of the solution: that's why the voting system and checkmarks are there. Wrong solutions would ideally be downvoted and never marked correct; I don't really see how AI makes a big difference here. Moderators today aren't running the answered code either.


The difference is in how much effort it takes to create a correct-looking/sounding answer without an LLM versus how little effort it takes with e.g. ChatGPT. It's a force multiplier on the side of people creating bad answers. Worse, actually: "bad but convincing-sounding answers", which drive-by voters are especially prone to misevaluate.


> The error of the moderators and SO here is to discard AI-generated answers because of the wrong (but right-sounding) answers AI can generate... when AI answers are often correct, and at times even outdo what a single human would have found/answered.

But why should SO exist for this? If people wanted an LLM to answer their questions, they could just go and have that. There is no purpose in caching stale LLM answers on SO.



