
I remember asking for quotes about the Spanish conquest of South America because I couldn't remember who said a specific thing. The GPT model started hallucinating quotes on the topic, while DeepSeek responded with something like, "I don't know a quote about that specific topic, but you might mean this other thing," and then cited a real quote on the same topic, after acknowledging that it couldn't find the one I had read in an old book. I don't use it for coding, but for more obscure topics I find it more precise.


I wonder if Conway's law is at all responsible for that, in an analogous sense: regionally trained data carries conceptual biases, which the model then sends back in its responses.


Was that true for GPT-5? They claim it's much better at not hallucinating.



