
The garbage is in the source material used to create the model, not the questions.


More likely due to a lack of "good" data than to the existence of "bad" data. ChatGPT is known for its ability to "hallucinate" answers to questions it wasn't trained on.


Same comment still applies: ChatGPT sometimes gives good answers and sometimes bad ones.


In fact, ChatGPT doesn't know anything about true and false. It's just generating text that most closely resembles the text it's seen on similar subjects.

E.g. ask it for the molecular description of anything. It'll start with something fundamental like the CH3N4 etc., then describe the bonds. But the bonds will be a mishmash of many chemical descriptions thrown together, because similar questions had that kind of answer.

The worst part is, it blurts all this forth with perfect confidence. I liken it to a blowhard acquaintance who will make up crap about any technical subject they have a few words for, as if they were an expert. It's funny, except when somebody relies on it as truth.

I don't think GPT-3 at its heart is an expert at anything, except generating likely-looking text. There's no 'superego' involved anywhere that audits the output for truthfulness, and certainly no logical understanding of what it's saying.
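A toy sketch of that point (the corpus and names here are made up for illustration, and a real LLM is vastly more sophisticated): even a bare bigram model produces fluent-looking text purely from "which word tends to follow which", with no representation of truth anywhere that could audit the output.

```python
import random

# Tiny made-up corpus; the "model" is just next-word frequencies.
corpus = ("the bond is covalent the bond is ionic "
          "the molecule has three atoms the molecule has four atoms").split()

# Build a bigram table: word -> list of observed next words.
bigrams = {}
for a, b in zip(corpus, corpus[1:]):
    bigrams.setdefault(a, []).append(b)

def generate(start, n, seed=0):
    """Emit up to n words, each sampled from what followed the previous word.
    Nothing here checks whether the resulting claim is true."""
    random.seed(seed)
    words = [start]
    for _ in range(n):
        options = bigrams.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the", 8))
```

Depending on the random seed this can emit "the bond is ionic" or "the bond is covalent" with equal confidence; it can also splice fragments into statements never present in the corpus, which is roughly the "mishmash" failure mode described above.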


I love ChatGPT for simple tasks. It is currently wreaking havoc on some communities, though, including one I created on Reddit.

https://www.reddit.com/r/pinescript/comments/1029r7p/please_...

People have taken to asking ChatGPT to create entire scripts to trade money. When the scripts don't work, they go into chatrooms or forums and ask "why doesn't this work?" without mentioning it was made by ChatGPT. People open the post, read it a bit, and only after a minute or two of wasted time realize the script is complete nonsense.


I'd argue that level of ambiguity counts as garbage out, although I'm confident it will get better.



