Still, if the answer happens to be factually correct, is it an issue?

Say a person has an answer, but English is not their native language, and they manage to steer ChatGPT into writing a good answer. Would we prefer to have that posted, rather than leaving the question hanging without any answer at all?

The only issue I can see with AI use is the rate of new content generation. Recent models are quite OK at giving a decent answer. SO is not a pinnacle of exceptionally well-thought-out answers from people either. There are great, detailed, and well-sourced answers, but more often than not you get an incomplete, outdated, or just plain wrong one. Bespoke artisanal hand-crafted ethically sourced answers from fully organic free-range humans that still lead to stack overflows and misaligned elements on webpages.




"But what if they were correct answers?" is largely an irrelevant hypothetical side-issue.

In practice, as reported in comments at https://meta.superuser.com/q/15021/38062 and in many other Meta Q&As, the answers that people are lazily machine-generating at high volume are far from correct; and the consequent upvotes that they garner reveal the unsurprising fact that there are a lot of people who vote in favour of things based upon writing style alone.


> "But what if they were correct answers?" is largely an irrelevant hypothetical side-issue.

Why? If an account is spamming, sure, delete everything regardless; that makes it irrelevant. But what if I have a generated and correct answer in my otherwise pristine account?

What I'm trying to say is that the fact that some people use it to spam shouldn't make it a simple ban condition. Otherwise that'd be like banning email to fight spam.


"if I have a generated and correct answer in my otherwise pristine account?" is just a re-phrasing of the irrelevant hypothetical side-issue.

You haven't. People aren't. This is a hypothetical that isn't the reality, and an irrelevant distraction.

Go and read the comments at the link I just posted, then read the months of back-discussion on this in the other Meta Q&As that I mentioned, starting with the likes of https://meta.superuser.com/q/14847/38062 right there on the same Meta site, and a lot more besides on many of the 180 other Stack Exchange sites, continuing with the likes of https://math.meta.stackexchange.com/q/35651/13638.


> You haven't. People aren't. This is a hypothetical that isn't the reality, and an irrelevant distraction.

How can you be so sure? If a 100% reliable way to detect all AI-generated responses comes along one day, how can you be sure that the good ones won't also get deleted in one major sweep?

Yes, I see there are many people who despise the AI-generated spam on many sites. But nothing you posted proves that all (or even a significant portion of) AI-generated content is spam.

I don't see anything wrong with letting the AI generate an answer and editing the rough/wrong parts if necessary.


> I don't see anything wrong with letting the AI generate an answer and editing the rough/wrong parts if necessary.

And that is not what the previous moderation policy was trying to prevent. What it was trying to prevent is answers from people who skip that second step.


I was objecting to this part:

> "But what if they were correct answers?" is largely an irrelevant hypothetical side-issue.


I'm sceptical of the premise. ChatGPT doesn't watermark its answers. There's no decent way to detect what is "OpenAI garbage" and what is not. One of the comments says: "I detect such answers by the fact that they simply make no sense, although they seem well-written." I feel like this is subject to survivorship bias. Would the commenter be able to tell a good ChatGPT answer from a human-produced one?

A separate question: if quality is the goal, why is there still so much crap among the questions/answers on SO? There are plenty of low-effort and incorrect answers made by real people that are not penalised in any way.


You can ask all of these questions while also empowering moderators to use the tools at their discretion. To refuse to allow moderation while not providing any solutions is the worst of all options.



