
Stack Overflow recently put a ban on ChatGPT-generated answers because they often sound correct but end up being wrong.

https://meta.stackoverflow.com/questions/421831/temporary-po...



Human-written answers on SO have the same problem; they are frequently persuasive, confidently written, and wrong. Isn't that what comments, voting, etc., exist to help sort out?


If you read the post about it, the issue is the ease with which an AI can be abused to write up a few thousand confidently incorrect posts.

The problem exists on both ends, but there's a rough limit to the number of person-hours dedicated to writing confidently incorrect answers on SO, whereas someone could task a machine with pumping out answers 24/7 and quickly inundate the site.


> there's a rough limit to the number of person-hours dedicated to writing confidently incorrect answers

A personal limit, sure. But in aggregate they add up.


How will they know? I don't think they'll be able to enforce this except in the most obvious cases (where answers are posted by bots without any human review). The cat's out of the bag.


Stack Overflow can first warn users that posting machine-generated text will get them banned.

SO can then train a model to detect output from GPT-3 and similar models, and run it over all recently submitted questions and answers.

https://huggingface.co/openai-detector is already fairly good at detecting ChatGPT-generated text: in my experience so far, when it says text is fake, it is fake, though it can be fooled into thinking generated text is “real”.

Although that would perhaps be a little unfair to people using GPT just to tidy up their spelling and grammar.
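
A minimal sketch of what such a scan might look like, using the RoBERTa-based classifier behind that page. The label names and threshold here are assumptions to be calibrated on real data, and the hosted model was trained on GPT-2 output, so treat its scores on ChatGPT text with extra caution:

    # Hedged sketch: flagging a post with the detector model behind
    # https://huggingface.co/openai-detector. Label names ("Real"/"Fake")
    # and the 0.9 threshold are assumptions; calibrate before acting on it.
    from transformers import pipeline

    detector = pipeline("text-classification",
                        model="roberta-base-openai-detector")

    def looks_generated(text: str, threshold: float = 0.9) -> bool:
        result = detector(text, truncation=True)[0]  # stay within the model's token limit
        return result["label"] == "Fake" and result["score"] >= threshold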


Text written by a human but tidied up with GPT is not detected as GPT-generated. I've tested this extensively with large volumes of human-written text.


It’d be neat if ChatGPT encoded a signature via some form of steganography, sort of like a watermark.
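
Nothing public says ChatGPT does this today, but a toy version of one proposed scheme gives the flavour: a keyed hash of the previous token splits the vocabulary into "green" and "red" halves, the generator quietly favours green tokens, and a verifier holding the key just counts them. The key and green fraction below are made-up parameters:

    # Toy watermark sketch, not how ChatGPT actually works: watermarked text
    # should show a green-token fraction well above the ~0.5 of ordinary text.
    import hashlib

    KEY = b"hypothetical-shared-secret"
    GREEN_FRACTION = 0.5  # share of the vocabulary deemed "green" at each step

    def is_green(prev_token: str, token: str) -> bool:
        digest = hashlib.sha256(KEY + prev_token.encode() + token.encode()).digest()
        return digest[0] < 256 * GREEN_FRACTION

    def green_score(tokens: list[str]) -> float:
        hits = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
        return hits / max(len(tokens) - 1, 1)

The appeal is that the signature lives in the statistics of the word choices rather than in any visible marker, though paraphrasing can wash it out.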



I wonder how soon we'll be able to fingerprint a running computer system in a way that lets an answer be automatically tested against a [sanitised] virtualized machine.

Within a limited domain, like Ubuntu installs, this seems doable -- though maybe in a future of essentially unlimited bandwidth and storage we can just upload a virtualized version of our computers.

Like, here's my computer, my Chat AI keeps swearing at me, how do I fix it? Then solutions posted on Stack Overflow could be interpreted and tested by an AI against the virtualized copy of the machine.
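
A hedged sketch of the testing half, assuming the asker uploads a Docker image of their environment and answers arrive as shell commands (both assumptions, and a real system would need far stronger sandboxing than this):

    # Replay a proposed fix against a disposable copy of the asker's machine.
    # The image name and command format are hypothetical.
    import subprocess

    def answer_works(image: str, fix_commands: list[str], check: str) -> bool:
        script = " && ".join(fix_commands + [check])
        result = subprocess.run(
            ["docker", "run", "--rm", "--network=none", image, "sh", "-c", script],
            capture_output=True,
            timeout=300,  # don't let a bad answer run forever
        )
        return result.returncode == 0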


I started using AI to write unit tests for me, and it gives better results for unit tests than for application code, because tests are simpler.

I believe it will keep getting better, and it pairs naturally with TDD.
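
For illustration, this is the kind of small, mechanical test an assistant tends to get right; slugify and its module are hypothetical stand-ins for whatever you're actually testing:

    # Parametrised tests are easy to generate and easy to verify by eye.
    import pytest
    from myproject.text import slugify  # hypothetical function under test

    @pytest.mark.parametrize("raw, expected", [
        ("Hello World", "hello-world"),
        ("  padded  ", "padded"),
        ("Mixed CASE", "mixed-case"),
    ])
    def test_slugify(raw, expected):
        assert slugify(raw) == expected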


When asking ChatGPT yourself, I presume most of us are aware that its answers could be wildly wrong. Given that we know this, and it's at the front of our minds when asking, I think it has the potential to be very useful.

Putting those answers on Stack Overflow is perhaps concerning for the reasons outlined in that link, in a way that it isn't when we deliberately ask ChatGPT ourselves.


Isn't the main issue attribution? I mean, you've got people who put in hours a week over years to build rep and demonstrate their knowledge, and then some script kiddie blows them out of the water. I'd be kind of pissed. I think AI-generated answers would be more acceptable if they were attributed to the AI.



