
I bet there's a similarity between this and what happens with LLM hallucinations.

At some point we will realize that AI will never be perfect; it will just have much better precision than us.



I honestly see hallucinations as an absolute win: the model is attempting to predict (or 'reason' about) information from the training data it has.


I think this is a misuse of the term hallucination.

When most people talk about AI hallucinating, they're referring to output which violates some desired constraints.

In the context of chess, this would be making an invalid move, such as promoting a knight to a queen.
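Chess is the easy case because the constraint is formally checkable. A minimal sketch of such a legality check, assuming the python-chess library (my choice of tooling, not something mentioned above):

    import chess  # pip install python-chess

    def is_valid_move(board: chess.Board, uci: str) -> bool:
        """True only if the text parses as a move AND is legal in this position."""
        try:
            move = chess.Move.from_uci(uci)
        except ValueError:
            return False  # not even well-formed UCI, e.g. "Nf9"-style garbage
        return board.is_legal(move)

    board = chess.Board()
    print(is_valid_move(board, "e2e4"))  # True: a legal opening move
    print(is_valid_move(board, "e2e5"))  # False: well-formed, but illegal here

The examples below are harder precisely because "valid" has no such oracle.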

In other contexts, some real examples are fabricating court cases and legal precedent (several lawyers have gotten in trouble here), or a grocery store recipe generator recommending mixing bleach and ammonia for a delightful cocktail.

None of these hallucinations are an attempt to reason about anything. This is why some people oppose using the term hallucination: it is an anthropomorphizing term that gives too much credit to the AI.

We can tighten the band of errors with more data, more compute, or better efficiency, but in the search for general AI, this is a dead end.


It’s weird because there’s no real difference between “hallucinations” and other output.

LLMs are prediction engines. Given the text so far, what’s most likely to come next? In that context, there’s very little difference between citing a real court case and citing something that sounds like a real court case.

The weird thing is that they’re capable of producing any useful output at all.
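To make the "prediction engine" point concrete, here is a toy sketch in pure Python (a deliberately crude stand-in for a real LLM, with a made-up corpus): a bigram model that greedily picks the most likely next word, with no notion of whether the continuation is true.

    from collections import Counter, defaultdict

    # Tiny corpus standing in for training data (hypothetical, for illustration).
    corpus = ("the court ruled in smith v jones . "
              "the court ruled in doe v roe . "
              "the court cited smith v jones .").split()

    # Count bigrams: for each word, how often does each next word follow it?
    nexts = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        nexts[a][b] += 1

    def continue_text(prompt, n=4):
        words = prompt.split()
        for _ in range(n):
            candidates = nexts.get(words[-1])
            if not candidates:
                break
            words.append(candidates.most_common(1)[0][0])  # greedy next-word pick
        return " ".join(words)

    print(continue_text("the court cited"))           # -> "... smith v jones ." (real case)
    print(continue_text("the court ruled in doe", 3)) # -> "... doe v jones ."

"doe v jones" never occurs in the corpus; it is a statistically plausible blend of two real citations, i.e. a hallucination. Nothing in the mechanism distinguishes the real citation from the fabricated one; both are just high-probability continuations.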


I don't think I see them as a win, but they're easily dealt with. AI will need analysts at a later stage to evaluate the outputs, but that will be a relatively short-lived problem.


> I don't think I see them as a win

Unavoidable, probably

> but they're easily dealt with. AI will need analysts at a later stage to evaluate the outputs, but that will be a relatively short-lived problem

That only solves the problem to a degree. Hallucinations can happen at the evaluation stage too, so either a correct answer gets rejected or a false one passes through.
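As a rough illustration (all rates hypothetical): if the generator is correct with probability p, and the reviewer wrongly rejects correct answers at rate alpha and wrongly accepts false ones at rate beta, the accepted output is still imperfect.

    # Hypothetical error rates, purely illustrative.
    p     = 0.90  # generator produces a correct answer
    alpha = 0.05  # reviewer rejects a correct answer (false reject)
    beta  = 0.20  # reviewer accepts a false answer (false accept)

    accepted_correct = p * (1 - alpha)  # 0.855
    accepted_false   = (1 - p) * beta   # 0.020
    precision = accepted_correct / (accepted_correct + accepted_false)
    print(f"share of accepted answers that are correct: {precision:.3f}")  # ~0.977

Under these made-up numbers the reviewer helps a lot, but some false answers still pass and some correct ones are lost.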


> it will just have much better precision than us.

and much faster, with the right hardware. That's enough if AI can do in seconds what takes humans years. With o3, it looks like price is the only limit.



