Yep, the "success" of LLMs starkly shows how little real thinking people actually do when writing or talking. Mostly it is just stringing together something that sounds OK.
The LLM true believers demonstrate a kind of inversion of the trajectory of science: instead of looking at history and seeing how things are never quite what they seem at the surface level, the believers look at the surface of humanity and proclaim that there is in fact less to things than what is commonly believed. And then they claim that LLM/AI will be able to easily surpass these primates because why not?
It’s not God of the gaps (the fantastic things you cannot explain must be God's doing); it’s AI of the mundane.
Humans absolutely say things without understanding them. They do that all the time. What is religion but a discussion of the ineffable? Does the concept of "understanding" even meaningfully apply there?
Look, if you're going to make a claim about LLMs and human minds being intrinsically different, you're going to have to lay out a testable hypothesis.
Saying "LLMs will never be able to solve programming problems with variable renaming" would be a testable hypothesis. "LLMs cannot reason about recursion" would be a testable hypothesis.
Something like "LLMs can act as if they understand but they don't truly understand" is NOT a testable hypothesis. Neither is "LLMs are different because we possess qualia and they don't". In order for these to be actually saying something, you would need to bring them to conclusions. "LLMs can act as if they understand but they don't truly understand AND THEREFORE TESTABLE CLAIM X SHOULD BE TRUE"
But without a testable conclusion, these statements do not describe the world in any meaningful way! They are what you accuse LLMs of producing - words strung together that seem like they have meaning!
It seems like right now the testable hypothesis is "LLMs generate text that is fundamentally different from text written by human beings." That's effectively the Turing test. Current LLMs do a better job of passing the Turing test than earlier systems did, but it still doesn't take a lot of effort to tell them apart.
It's difficult to generalize from that, though, because an LLM can be tuned to do any one thing. It's the whole process of handling whatever comes up that requires the full apparatus of a human being, and there is no sign that LLMs are approaching that any time soon.
Which is the other problem. We're talking about what LLMs might one day do, rather than what they currently do. It's entirely possible that one day LLMs will be as flexible as human beings, training themselves for every new scenario. I have reason to doubt it, but the basis of that doubt is only noticing the mechanical difference between brains and LLMs. I cannot prove that the limit cases will remain different.
An LLM will parrot what it has learned without any understanding, unlike a human.