
What's the misunderstanding? The hallucinations I've seen from AI aren't that different from humans guessing. They will guess that a function like that exists. The difference is that humans can check their work and see whether it's actually true by referencing some existing source of truth.



An additional difference is that humans know what they don't know. LLMs will happily and confidently make up nonsense.

I'm not entirely cynical about the value of LLMs, but I've yet to see one say "I don't know" or "I'm not sure, but here's my best guess".


> I'm not entirely cynical about the value of LLMs, but I've yet to see one say "I don't know" or "I'm not sure, but here's my best guess".

I've used LLMs to build form autofill from unstructured documents. It correctly leaves blank the fields it doesn't know rather than guessing at them, and it has been pretty much error-free.

It's all about your prompting. You're right that without explicit guidance on when and how not to answer, they will never say they don't know.

If you've never actually seen that happen, I encourage you to experiment more with LLMs; there's lots that can be achieved with the right prompting.
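For illustration, here's a minimal sketch of the kind of prompting described above: extract fields from an unstructured document and tell the model to return null for anything not explicitly present. The parent comment doesn't say which API or model they used; this assumes the OpenAI Python SDK, and the model name, field list, and sample document are placeholders.

  # A hypothetical sketch, not the parent's actual implementation.
  # Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
  from openai import OpenAI

  client = OpenAI()

  # Placeholder document; in practice this would be the unstructured text.
  document_text = "Invoice from Acme Corp, dated 2024-03-01. Total due: $420."

  # The key part: give the model an explicit out for unknown fields.
  system_prompt = (
      "Extract the fields 'vendor', 'invoice_date', 'total', and "
      "'purchase_order' from the document and respond with a JSON object. "
      "If a field is not explicitly stated in the document, set it to null. "
      "Do not guess."
  )

  resp = client.chat.completions.create(
      model="gpt-4o-mini",
      messages=[
          {"role": "system", "content": system_prompt},
          {"role": "user", "content": document_text},
      ],
      response_format={"type": "json_object"},
  )

  # Expect something like:
  # {"vendor": "Acme Corp", "invoice_date": "2024-03-01",
  #  "total": "$420", "purchase_order": null}
  print(resp.choices[0].message.content)

The point is simply that the instructions give the model a sanctioned way to say "I don't know" (null) instead of forcing it to produce a value for every field.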





