I believe some conspiracy theories because I've verified that 2 of them are true (no, I'm not going to get into any details). That made me wonder: how many of the others are also true?
Could chatting with an LLM-based AI convince me otherwise? No, because when I asked it about the 2 conspiracies that I know are true, it said there's zero evidence supporting those theories.
Google has lists of topics it can't serve to users in certain countries, regardless of whether it's a search result or an AI answer. Other LLM-based AIs presumably have to follow the same rules. Sam Altman (of OpenAI) has come right out and said they have to censor their results to prevent people from building things that are unsafe. Well, knowledge of certain things can be dangerous, too.
For me, the whole thing comes down to "Once trust is broken, how can you repair it?" -- For many of us, it can't be rebuilt. Once a liar, always a liar.
LLMs are specifically trained to reject fringe beliefs and to rely on authoritative sources. Post-training usually covers most of this, and a system prompt handles the rest. Even if an LLM hypothetically discovered a contradiction and concluded that an alternative viewpoint is more likely, it would still follow the guidelines from post-training and the system prompt and deliver the "official party line": that can be anything from "current-year cultural values are inviolable axioms" to "CCP doctrine is the ultimate truth", with the LLM hallucinating whatever filler it needs to paper over the logical gaps.
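To make the mechanism concrete: in a typical chat-style API, the operator's system prompt is prepended to every request, so the model sees its guidelines before it ever sees the user's question. A minimal sketch below, assuming a generic role-tagged message format; the prompt text and function name are hypothetical illustrations, not any vendor's actual prompt.

```python
# Minimal sketch of the instruction hierarchy in a chat-style LLM API.
# The operator's system prompt is prepended on every turn, so whatever the
# user asks, the "guidelines" ride along in front of it.
# Prompt text and helper names are hypothetical.

SYSTEM_PROMPT = (
    "You are a helpful assistant. Rely on authoritative sources. "
    "Do not endorse fringe or unverified claims."
)

def build_messages(user_question: str, history: list[dict] | None = None) -> list[dict]:
    """Assemble the message list sent to the model on every turn."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    messages.extend(history or [])
    messages.append({"role": "user", "content": user_question})
    return messages

# The system prompt always precedes the user's turn, and post-training
# teaches the model to weight it above whatever the user says.
print(build_messages("Is there any evidence for <fringe theory X>?"))
```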
I’m sure those 2 things are topics that spiral into unproductive threads on a message board like HN, so without mentioning details of what they are, could you explain how you “verified” them, or how you ended up being the only person to find the evidence to support them? Have you ever, even briefly, considered that you might be wrong about them instead of the rest of the world/Google being wrong?