
I don’t understand how this is a surprise to so many people. The model can’t reason. If you point out an error, even a real one, the GPT doesn’t suddenly “see” it. It constructs a reply out of the likeliest series of words in a conversation where someone told it “you did X wrong”. Its training material probably contains more forum posts admitting a screw-up than “nuh-uh” responses. And on top of that, it is trained to be subservient.

Because it doesn’t _understand_ the mistake in the way a human would, it can’t reliably react appropriately.
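
A minimal sketch of what’s mechanically happening, assuming the Hugging Face transformers library (“gpt2” here is just a stand-in for any causal language model): the “correction” from the user is simply more context to condition on, and the reply is assembled from whichever next tokens score highest, one at a time.

    # A minimal sketch, assuming the Hugging Face transformers library;
    # "gpt2" is a stand-in for any causal language model.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # The user's "correction" is just more text to condition on.
    prompt = "User: You did X wrong.\nAssistant:"
    ids = tok(prompt, return_tensors="pt").input_ids

    with torch.no_grad():
        logits = model(ids).logits[0, -1]  # scores for the next token only

    probs = torch.softmax(logits, dim=-1)
    top = torch.topk(probs, k=5)
    for p, i in zip(top.values, top.indices):
        # The reply is built from whatever tokens score highest here;
        # nothing re-examines the earlier answer for an actual bug.
        print(f"{tok.decode(int(i))!r}: {p:.3f}")

Whether the continuation is an apology or a pushback depends entirely on which pattern was more common in the training data, which is the point.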


