
That applies to humans as well.


No, it doesn't. Humans can recognise when they don't know something; current language models usually can't (yet).

Their training objective, which is to predict the next piece of text in their training data, does not incentivise them to respond that they don't know something: there is no relation in the training data between the model not knowing something and the correct next text being "I don't know" or similar.
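
To make that concrete, here is a minimal sketch (in PyTorch, with made-up tensor shapes) of the standard next-token-prediction objective. The loss only measures how much probability the model assigns to the token that actually followed in the training text; there is no term anywhere that rewards expressing uncertainty.

    import torch
    import torch.nn.functional as F

    vocab_size = 50_000
    batch, seq_len = 8, 128

    # Hypothetical model outputs: one logit vector per position.
    logits = torch.randn(batch, seq_len, vocab_size, requires_grad=True)
    # The "labels" are simply the training text shifted by one token.
    targets = torch.randint(0, vocab_size, (batch, seq_len))

    # Cross-entropy against the observed next token is the entire
    # training signal; saying "I don't know" is just another wrong token
    # unless the training text itself happened to say it there.
    loss = F.cross_entropy(logits.reshape(-1, vocab_size),
                           targets.reshape(-1))
    loss.backward()  # gradients push probability toward the observed token

So a model that honestly outputs "I don't know" where the corpus contains a confident (even if wrong) answer is penalised relative to one that guesses.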



