I get the strong impression that even people with extensive experience using LLMs have astoundingly little insight into what they are actually witnessing. This is often paired with just as little insight into what's actually going on in their own cognitive processes.
Somehow, it's still not clear to most people that LLMs and even vector databases create knowledge that wasn't in the original data.
In fact, that's most of what they do! Verbatim, non-novel quotation of the source data is the exception, not the rule.
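Even the humble vector search illustrates the point. Here is a minimal sketch (assuming the sentence-transformers package and the all-MiniLM-L6-v2 model; the example texts are made up for illustration) in which a query retrieves a document it shares no words with, because the connection lives in the embedding model's learned geometry rather than in the stored text:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# Tiny "database": none of these share keywords with the query below.
docs = [
    "The patient was given ibuprofen for swelling in the knee.",
    "Quarterly revenue grew by twelve percent year over year.",
    "The hiking trail closes at dusk during the winter months.",
]
doc_vecs = model.encode(docs, normalize_embeddings=True)

query = "anti-inflammatory medication for arthritis"
query_vec = model.encode([query], normalize_embeddings=True)[0]

# Cosine similarity (vectors are already unit-normalized).
scores = doc_vecs @ query_vec
best = int(np.argmax(scores))
print(f"best match ({scores[best]:.2f}): {docs[best]}")
# The ibuprofen sentence ranks first even though it contains none of the
# query's words: the link between "ibuprofen" and "anti-inflammatory" was
# never stated in the documents themselves.
```

The answer it surfaces was not written down anywhere in the stored data; it was synthesized from relationships the model learned elsewhere.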