Hacker News

Only if it is reliably correct.

Google does offer an AI summary for factual searches, and I ignore it because it often hallucinates. Perplexity has the same problem. OpenAI would need to solve that for this to be truly useful.



IME Google's summary is not actually hallucinating; the problem is that they force it to quote the search results, and they're surfacing bad/irrelevant search results because Google's actual search hasn't worked in years. It's a RAG failure.

For instance, the other day I searched for the number to dial to set up call forwarding on carrier X, and it gave the wrong answer because the search returned results for carrier Y.
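A toy sketch of that failure mode (not Google's actual pipeline, and all carriers, codes, and documents here are made up): the "summarizer" quotes whatever the retriever surfaces, so a weak ranking step produces a confidently wrong answer even with zero hallucination.

```python
# Hypothetical document store: call-forwarding codes per carrier.
DOCS = {
    "carrier-x-forwarding": "Carrier X: dial *72 to enable call forwarding.",
    "carrier-y-forwarding": "Carrier Y call forwarding guide: dial **21* to "
                            "forward calls. This code works for Carrier Y plans.",
}

def tokens(text: str) -> set[str]:
    """Crude tokenizer: lowercase, strip basic punctuation, split on spaces."""
    return set(text.lower().replace(":", " ").replace(".", " ").split())

def retrieve(query: str) -> str:
    """Naive keyword retriever: returns the doc id sharing the most words
    with the query. Generic filler words count just as much as the carrier
    name, so a wordier off-topic page can outrank the on-topic one."""
    q = tokens(query)
    return max(DOCS, key=lambda doc_id: len(q & tokens(DOCS[doc_id])))

def summarize(doc_id: str) -> str:
    """Stand-in for the LLM: quotes the retrieved document verbatim,
    i.e. it is perfectly 'grounded' in its source, just the wrong source."""
    return DOCS[doc_id]

# The query names Carrier X, but the keyword-stuffed Carrier Y page
# overlaps the query on more words and wins retrieval.
answer = summarize(retrieve("call forwarding code for Carrier X"))
print(answer)  # quotes Carrier Y's code: grounded, but wrong
```

The "hallucination" users see in this scenario is really a retrieval/ranking bug: fixing the generator can't help when the wrong document is handed to it.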


This is why my most-used LLM after code suggestions is Bing. I like that it cites lots of references for the things I ask, so I can double-check and read more; at the same time it helps me dig deeper into a subject rapidly, better formulate the exact question I'm trying to ask, and gives me a link to the actual data it's getting its info from.


Agreed, hallucinations can be pretty bad and can hurt trust a great deal.



