> People do use them and they trust them, unfortunately.
Yep, and it’s hard to communicate that to them. It’s hard to describe accurately even to someone who’s familiar with the context.
I don’t think “trust” is the right word. Sitting here on 19 Nov 2025, I do in fact trust LLMs to reason. I don’t trust LLMs to be truthful.
If I ask for a fact, I always consider what I’d lose if that fact were wrong.
If I ask for reasoning, I provide the facts I believe are required to make the decision. I then double-check that reasoning by inverting the prompt (asking for the case against instead of the case for) and comparing the two outputs. For more critical decisions, I use different models, from different providers, with completely separate context. If I’ve done all that, I think I can honestly say that I trust it.
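For the curious, here’s a minimal sketch of what that looks like in practice, assuming the `openai` and `anthropic` Python SDKs; the model names and the facts string are placeholders. The point is the shape of it (inverted prompts, separate providers, separate contexts), not these exact calls.

```python
# Minimal sketch: cross-check LLM reasoning across two providers.
# Assumes OPENAI_API_KEY and ANTHROPIC_API_KEY are set in the environment;
# model names below are placeholders, not recommendations.
from openai import OpenAI
import anthropic

FACTS = "Team of 4, 6-week deadline, legacy Postgres 11, no ops budget."

def ask_openai(prompt: str) -> str:
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def ask_anthropic(prompt: str) -> str:
    client = anthropic.Anthropic()
    msg = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return msg.content[0].text

# Inverted prompts, each in a fresh context on a different provider.
forward = ask_openai(f"Given these facts: {FACTS}\nArgue why we SHOULD migrate now.")
inverse = ask_anthropic(f"Given these facts: {FACTS}\nArgue why we should NOT migrate now.")

# The comparison is the human's job: if the "should" case can't survive
# the strongest "should not" case, the reasoning doesn't hold.
print("FOR:\n", forward, "\n\nAGAINST:\n", inverse)
```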
These days, I would describe it as “I don’t trust AI to distinguish truth.”