Don't have time to watch a 42m vid now, but I can see how people are starting to view ChatGPT (and similar models) as some miraculous oracle, of sorts. Even if you start using the models with your eyes wide open, knowing how much they can hallucinate, with time - it is easy to lower your guard, and just trust the models more and more.
To get a reality check, open up 3-4 different models (ChatGPT, Claude, Gemini, etc.), and ask them about topics you know really well, questions you already know the answers to. You'll find that maybe a quarter of the answers fail in some way. Some topics are of course easier for these models than others.
Oracle is a better word than religion for what you are talking about. Maybe people should remember how notoriously tricky oracles were, even in their believers' eyes (the "an empire shall fall" story).
This video is about people who believe ChatGPT (or another LLM) is a sentient being sent to us by aliens or from the future to save us. An LLM saviour is pretty close to a religious belief. A pretty weird one, but still.
> To get a reality check, open up 3-4 different models (ChatGPT, Claude, Gemini, etc.), and ask them about topics you know really well, questions you already know the answers to. You'll find that maybe a quarter of the answers fail in some way.
I have tried this a bit with ChatGPT, and yes, there are a lot of issues: literally true but misleading answers, incomplete information, and a lack of common sense.
Besides, the debate on oracularizing AI is much more fun than endlessly debating the meaning of consciousness.
People place plenty of trust in astrology, tarot, and the I Ching without requiring that they have a subjective experience.
If anything, technologists tend to have a blind spot when it comes to identifying AI as such. The dismissal of, and sometimes contempt for, divination makes it genuinely difficult to recognize when it's not decked out in stars and moons.
It's interesting that the Barnum effect applies in both cases.
The internet is full of pure nonsense, quack theories and deliberate fake news.
Humans created those.
The LLMs essentially regurgitate that, and on top of it they hallucinate the most random stuff.
But in essence the sort of information hygiene practices needed are the same.
I guess the issue is the delivery method. Conversation intrinsically feels more "trustworthy".
Also, AI is for all intents and purposes already indistinguishable from magic. So in that context it is hard for non-technical people to keep their guard up.
Moreover, once they get onto the wrong track, they just dig in deeper and deeper until they've completely lost it, all the while telling you how clever and perceptive you are for spotting their fuck-ups before getting it wrong again. It seems like if it doesn't work pretty much the first time (and to be sure, it does work right the first time often enough to activate the "this machine seems like it knows its stuff" neurons), you're better off closing it and doing whatever it is yourself. Otherwise, before long you're neck-deep in plausible-sounding bullshit and think it's only ankle-deep. But in a field you don't know well, you don't know when you're dropping below the statistical noise floor into la-la land.