
An additional difference is that humans know what they don't know. LLMs will happily and confidently make up nonsense.

I'm not entirely cynical about the value of LLMs, but I've yet to see one say "I don't know", or "I'm not sure, but here's my best guess".

> I'm not entirely cynical about the value of LLMs, but I've yet to see one say "I don't know", or "I'm not sure, but here's my best guess".

I've used LLMs to build form autofilling from unstructured documents. The model correctly leaves blank any field it can't find a value for, and doesn't try to guess. It has been pretty much error-free.

It's all about the prompting. You're right that without explicit guidance on when not to answer, they'll never say they don't know.

If you've never actually seen that happen, I encourage you to experiment more with LLMs; there's a lot that can be achieved with the right prompting (rough sketch below).
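For the curious, here's roughly what that can look like. This is a minimal sketch, assuming the OpenAI Python client; the model name, field names, and the autofill helper are illustrative placeholders, not anything from the parent comment:

    # Sketch of abstention-by-prompt for form autofilling.
    # Assumes the OpenAI Python client; model and field names are
    # illustrative only.
    import json
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    FIELDS = ["full_name", "date_of_birth", "passport_number"]

    SYSTEM_PROMPT = (
        "You extract form fields from a document. "
        "Return a JSON object with exactly these keys: "
        + ", ".join(FIELDS)
        + ". If a field's value is not explicitly stated in the "
        "document, set it to null. Never guess or infer missing values."
    )

    def autofill(document_text: str) -> dict:
        """Ask the model to fill the form, leaving unknown fields null."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": document_text},
            ],
            response_format={"type": "json_object"},  # force valid JSON
            temperature=0,  # reduce the urge to improvise
        )
        return json.loads(response.choices[0].message.content)

    # The document names the applicant but gives no passport number, so
    # the instructions should make the model return null for that field
    # instead of fabricating one.
    print(autofill("Applicant: Jane Doe, born 1990-03-14."))

The key move is the explicit "set it to null / never guess" instruction plus a fixed output schema: given a sanctioned way to abstain, the model takes it, instead of filling every slot with its best confabulation.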
