If you will excuse the analogy and the anthropomorphism, the human analogue of what we do and don't understand about LLMs is, I think, this: we understand the quantum mechanics, the cell chemistry, and the overall wiring (perceptrons, activation functions, and architecture), as well as the group psychology (the general dynamics of the output), but not specifically how any given belief is stored (in either humans or LLMs).