> So I thought this would be a fun way to demo the extent to which different models hallucinate random things.
I think it’s really important to distinguish between hallucination (perceiving what is not there) and confabulation (producing fabricated memories). Applying the former to LLMs presupposes that they perceive at all, while the latter is much more applicable.