Look, if you're going to claim that LLMs and human minds are intrinsically different, you're going to have to lay out a testable hypothesis.
Saying "LLMs will never be able to solve programming problems with variable renaming" would be a testable hypothesis. "LLMs cannot reason about recursion" would be a testable hypothesis.
Something like "LLMs can act as if they understand but they don't truly understand" is NOT a testable hypothesis. Neither is "LLMs are different because we possess qualia and they don't." For these statements to actually say something, you would need to carry them through to a conclusion: "LLMs can act as if they understand but they don't truly understand, AND THEREFORE TESTABLE CLAIM X SHOULD BE TRUE."
But without a testable conclusion, these statements do not describe the world in any meaningful way! They are what you accuse LLMs of producing - words strung together that seem like they have meaning!
It seems like the best testable hypothesis right now is "LLMs generate text that is fundamentally different from the text human beings generate." That's effectively the Turing test. Current LLMs pass it far better than anything before them, but it still doesn't take much effort to tell their output from a human's.
It's difficult to generalize from any single test, because an LLM can be tuned to do any one thing well. It's the whole process of doing everything at once that requires the full apparatus of a human being, and there is no sign that LLMs are approaching that any time soon.
Which is the other problem. We're talking about what LLMs might one day do, rather than what they currently do. It's entirely possible that one day LLMs will be as flexible as human beings, training themselves for every new scenario. I have reason to doubt it, but the basis of that doubt is only the mechanical differences I notice between brains and LLMs. I cannot prove that the limit cases will remain different.
Just because someone sometimes says something without understanding it does not in the slightest mean that this is the common occurrence.