Humans' ability to do conceptual modeling is entirely constrained by their perception. Humans can't conceptualize a tesseract. All of their conceptualizations are in some form related to their perception.
When you're thinking of a pen, are you thinking of the atoms, the molecular composition, and all the quantum effects going on? No. Your mind, just like ML, is working from a template constrained by your perception (visual, haptic, auditory).
Humans are more advanced than ML, but they are nothing special.
Humans can't visualize a tesseract, but they can conceptualize the Idea of a tesseract in symbolic space: through math, physics, or, in other words, through Reason.
The Symbolic is a radical simplification of all the complexity that can never be fully sensed. Even though the simplification is always particular, contingent, and full of ambiguity (human languages) and often inaccuracy (Newton's laws vs. relativity), without that simplification, without Reason, ML systems are probably like animals, eventually succumbing to the full force of the complexity of reality.
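To make the tesseract point concrete, here is a minimal sketch (plain Python, standard library only; the vertex/edge construction is my own illustration, not something from this thread) of holding a tesseract entirely in symbolic space as the 4-cube {0,1}^4:

    from itertools import product

    # The 4-cube {0,1}^4: sixteen vertices, held purely as symbols.
    vertices = list(product((0, 1), repeat=4))

    # Two vertices share an edge iff they differ in exactly one coordinate.
    edges = [(u, v)
             for i, u in enumerate(vertices)
             for v in vertices[i + 1:]
             if sum(a != b for a, b in zip(u, v)) == 1]

    print(len(vertices), len(edges))  # 16 32 -- the tesseract, no visualization needed

Nothing here is seen or imagined; the object exists only as a definition, which is exactly the kind of grasp Reason gives that perception can't.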
Perception in animals changes the structure of their bodies as they coordinate with their environment, such that they acquire motor techniques and hence new ways of structuring their perceptions.
Perceiving the world isn't a passive activity in which facts strike your eyes. The world doesn't contain such facts; there is no data, nothing from which to simply "average to a template".
When light strikes your eye, there is no "keyboard" in it, nothing in it from which to derive even a template of a keyboard.
There are only templates in the datasets we prepare -- and we don't prepare them by actually "encountering datasets in reality". Rather, we arrange reality and measure it so as to make it templateable.
What animals have is the ability, with effort, to engage in this dynamic -- to arrange the world to make it knowable. It is this process of arrangement that requires intelligence, not "taking the average after it has happened".
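For what it's worth, here is roughly what "averaging to a template" amounts to once that arrangement has already been done (an illustrative Python/NumPy sketch; the names, shapes, and synthetic data are mine, not anything from this discussion):

    import numpy as np

    def build_templates(samples, labels):
        """Average the already-prepared, already-labeled vectors into one template per class."""
        return {c: samples[labels == c].mean(axis=0) for c in np.unique(labels)}

    def classify(x, templates):
        """'Perceive' x by picking the nearest class template."""
        return min(templates, key=lambda c: np.linalg.norm(x - templates[c]))

    # Tiny synthetic example: note that the "world" here is already arranged
    # into fixed-length labeled vectors -- the preparation has been done for the system.
    rng = np.random.default_rng(0)
    samples = np.vstack([rng.normal(0, 1, (20, 8)), rng.normal(3, 1, (20, 8))])
    labels = np.array([0] * 20 + [1] * 20)
    templates = build_templates(samples, labels)
    print(classify(rng.normal(3, 1, 8), templates))  # almost certainly 1

The averaging step is trivial; all the work is in getting from the world to those vectors and labels in the first place.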
There is nothing in the world to be "perceived" in the ML sense, as in: ready for analysis. That's an illusion we construct: we make "perceivable data".
It is our bodies, and our concerns, that make the world itself perceivable. The world itself is infinitely dense with infinities stacked on infinities. There isn't "data" to be templated.
And this isn't theoretical: no ML system will ever work. All that works is our data preparation.