Humans make sense of the world with an extremely limited amount of information. I have not read all of GitHub. I have not read even a tiny fraction of it, and I could not read all of it if I wanted to.
However, I do not need to read all of GitHub, because, as a human, I am capable of understanding.
Current-generation "AI" is not capable of understanding. It merely computes the probability of any given word appearing next in this sentence, based on an utterly absurd amount of raw data.
That is not what humans do at all. We do not need that volume of information; we could not even process it.
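To spell out what I mean by "computing the probability of the next word": stripped of its scale and architecture, the mechanism is next-token prediction from counted co-occurrences. Here is a deliberately tiny sketch; the corpus and names are invented for illustration, and a real model is vastly larger and works very differently internally.

    import random
    from collections import Counter, defaultdict

    # Made-up toy corpus; a real model ingests unimaginably more text than this.
    corpus = "the cat sat on the mat the dog sat on the rug".split()

    # Count how often each word follows each other word.
    bigram_counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigram_counts[prev][nxt] += 1

    def next_word_distribution(prev):
        """Probability of each candidate word appearing next, from raw counts."""
        counts = bigram_counts[prev]
        total = sum(counts.values())
        return {word: n / total for word, n in counts.items()}

    def sample_next(prev):
        """Pick the next word according to those probabilities."""
        dist = next_word_distribution(prev)
        words, probs = zip(*dist.items())
        return random.choices(words, weights=probs)[0]

    print(next_word_distribution("the"))  # {'cat': 0.25, 'mat': 0.25, 'dog': 0.25, 'rug': 0.25}
    print(sample_next("the"))             # one of those four words, weighted by count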
We do not know whether that is what humans do, or how qualitatively different it is from what humans do, because we don't know all that much about how human reasoning works.
Put another way: some Markov models are Turing complete, so a system that "merely" computes probabilities can, with only minor additions, be Turing complete. Trying to downplay the potential capabilities of models like this by handwaving about things we don't know is foolish.
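To make that concrete with a toy of my own (a finite automaton, not a Turing machine, so an illustration rather than a proof): when every "next state" probability is 0 or 1, computing probabilities is just executing a deterministic transition function, which is ordinary computation.

    import random

    # A two-state parity automaton, written entirely as "next state" probability
    # distributions. Every distribution is degenerate (all mass on one state),
    # so sampling from it is deterministic computation in disguise.
    transition_probs = {
        ("even", "0"): {"even": 1.0, "odd": 0.0},
        ("even", "1"): {"even": 0.0, "odd": 1.0},
        ("odd",  "0"): {"even": 0.0, "odd": 1.0},
        ("odd",  "1"): {"even": 1.0, "odd": 0.0},
    }

    def step(state, symbol):
        """Sample the next state from its probability distribution."""
        dist = transition_probs[(state, symbol)]
        states, weights = zip(*dist.items())
        return random.choices(states, weights=weights)[0]

    def parity(bits):
        state = "even"
        for b in bits:
            state = step(state, b)
        return state

    print(parity("10110"))  # "odd" -- three 1s, and the "sampling" never varies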
We don't need that scale of input, but we also don't know whether LLMs need that much input to do well, or whether our current training protocols are simply poor. Given the ongoing work on reducing training costs, it is clear, at a minimum, that current training methods are far from optimal.
You make a logical leap: "machines learn differently from humans, therefore they cannot understand." My definition of "understanding" isn't just "whatever humans (and nobody else) do."