
> humans will say that they're not sure and might be wrong

Is that so? https://innocenceproject.org/dna-exonerations-in-the-united-... These people were convicted on the testimony of people who were 100% convinced their memory was correct. The DNA evidence, which is "harder" evidence, said otherwise, and in these cases was exonerating. (There are hundreds, possibly thousands, of other cases like this, by the way, where the imprisoned innocent has NOT yet been exonerated, all resting on overconfident eyewitness testimony that nonetheless managed to convince a judge or jury.)

There is also the well-known Dunning-Kruger effect, the cognitive bias whereby individuals with limited knowledge or expertise in a particular area tend to overestimate their competence and confidently assert their opinions. We've seen this countless times just since the 2016 US election; watch almost any Jordan Klepper interview https://www.youtube.com/watch?v=LoZ2Lt_aCo8 (honestly, this is a little too political for me to use as an example, but I ran out of time seeking out unbiased ones... the Mandela Effect? How common it is to misplace keys?)

I'm afraid you're a little off here in your faith that humans don't hallucinate memories and knowledge.




Ironically, if you agree with pmarreck above, scarblac's comment can be seen as an example of a human hallucinating with confidence, precisely what they were arguing is less likely to occur on the organic side of the internet.


That “if” is doing a good bit of lifting though. Nobody is talking about the hallucination rate.

How many times have innocent people been wrongly convicted? The Innocence Project found 375 instances over a 31-year period.

How often do LLMs give false info? I hope they never get used to write software for avionics, criminology, agriculture, or any other setting that could impact huge numbers of people…


Yeah I was defin... perhaps guilty of sounding ver.. somewhat confident myself.

Luckily I only said humans add some doubt to what they say some of the time :-)


I think this is overall a good criticism of the current generation of LLMs: they can't seem to tell you how sure (or not) they are about something. A friend mentioned to me that earlier today they gave ChatGPT a photo of a Lego set with an Indiana Jones theme and asked it to identify the movie reference, and it meandered on for two paragraphs arguing different possibilities when it could have just said "I'm not sure."

I think this is a valid area of improvement, and I think we'll get there.


They never said humans always do it, rather that they do it "some of the time". Whereas I've yet to see an LLM do that.

Also, experts tend to be much more accurate at evaluating how knowledgeable they are (this is also part of the D-K effect). So I'd much prefer to have a 130 IQ expert answer my question than an LLM.


Fair enough, but given that access to a 130 IQ expert on the subject matter at hand may be either expensive or impossible to obtain in the moment, while ChatGPT is available 24/7 at nominal cost, what do you think is the better option overall?



