All learning amounts to remembering. A machine learning model is basically remembered weights. Everything we've learned is what we remember, as solidified in our neurons, etc.
Note that learning is not about discovering things yourself. You also learn if you're capable of remembering (and optionally applying) something that somebody else taught you. Students, e.g., learn a formula for X, the periodic table, etc.
And this contraption not only remembers (that is, learns), but also has an algorithm to figure out the solution in the first place (that is, it tries and discovers).
So the term "machine learning", or even "AI", is perfectly legitimate. It's not GAI, but it's also not 2020 (not that we have GAI in 2020).
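To make that concrete, here's a toy sketch (plain Python, made-up data, not any real library's API). The "discovering" part is the training loop; what the model has "learned" at the end is nothing but the weight it remembers:

    # Fit y = w * x by gradient descent on made-up data (true w is 2).
    data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

    w = 0.0    # the model's entire "memory"
    lr = 0.05  # learning rate
    for _ in range(200):  # the "discovering" part: guided trial and error
        # gradient of the mean squared error with respect to w
        grad = sum(2 * x * (w * x - y) for x, y in data) / len(data)
        w -= lr * grad

    print(round(w, 3))  # ~2.0: the learned model is just this remembered number

Dump w to disk and you've saved everything the model learned; load it back and the model "remembers".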
>All learning amounts to remembering. A machine learning model is basically remembered weights. Everything we've learned is what we remember, as solidified in our neurons, etc.
That's "not even wrong", as Pauli would say. Your paragraph suffers from using a shaky non inversible analogy:
Machine learning borrows an analogy from the brain: neurons, activation functions, etc. That analogy sacrifices accuracy about the real thing in exchange for being useful, for giving us something to reason with, and for a shared taxonomy. We accept the loss for the sake of being productive and for lack of actual equivalents.
What your first paragraph does is take that brain analogy, shaky to begin with, and run it in reverse: using it to reason about the biological brain as if we did not have the actual thing.
In other words, we had a biological brain, we clumsily modeled it to get work done in ML, and the paragraph then used that model to reason about the brain itself. Similar to translating French --> English --> French and getting a different output than the input.
Remembering certainly plays a role in learning, but it is only one component. If remembering were all that learning is, it would only ever work when every new instance is exactly the same as a remembered one.
To use the analogy: a machine learning model returns predictions/outputs for instances it has not necessarily seen. Our brain, likewise, produces outputs for situations that have never happened before, at least not in this 'anisotropic time' universe.
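A toy example of what I mean (plain Python, made-up numbers): the fitted model answers for an input it was never shown, which pure retrieval could not do.

    xs = [1.0, 2.0, 3.0]
    ys = [2.1, 3.9, 6.0]

    # least-squares slope through the origin: w = sum(x*y) / sum(x*x)
    w = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

    print(w * 2.5)  # ~4.98: a prediction for x = 2.5, which is not in the data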
>Your paragraph suffers from using a shaky, non-invertible analogy
That's neither here nor there, though. The brain doesn't have to follow any specific model (e.g. the shaky one ML uses, which I alluded to) for the analogy to work.
It just has to have memory and remembrance as a core attribute of learning, which it does.
Whether this happens through neurons and weights or some other mechanism, it's still remembering. Your counter-argument focused on implementation details.
>Remembering certainly plays a role in learning, but it is only one component. If remembering were all that learning is, it would only ever work when every new instance is exactly the same as a remembered one.
Well, remembering doesn't just "play a role", it plays the biggest role in learning. It's not even remotely optional: without remembering there's no learning.
And inversely, discovering what we learn ourselves, or applying it, are both optional. I can listen to a professor tell me about something someone else discovered, and never apply it, but as long as I remember that information, I have learned it.
And, of course, as I already wrote, the contraption doesn't just remember but also discovers the solution (and can even re-apply it).
>Well, remembering doesn't just "play a role", it plays the biggest role in learning. It's not even remotely optional: without remembering there's no learning.
This is different from the phrase in your first paragraph that stated:
>All learning amounts to remembering.
There is a difference between saying that remembering is a necessary condition for learning, and saying that remembering is learning.
Memory plays a role in learning. Does it play the biggest role? Let's assume it does. Is learning only memory? I don't think so. Did you perhaps not mean to say that all learning is remembering, but wrote it that way even though your thoughts on it are more nuanced? Probably.
>And inversely, discovering what we learn ourselves, or applying it, are both optional. I can listen to a professor tell me about something someone else discovered, and never apply it, but as long as I remember that information, I have learned it.
See, here again I'll have to disagree. I'm looking at it from the standpoint of a system's output and outcome, where that output is not simply information retrieval.
Let's say the information is about driving. I can have a driving manual memorized. Can I remember that information about driving? Yes. Have I "learned" driving? No.
>Memory plays a role in learning. Does it play the biggest role? Let's assume it does. Is learning only memory? I don't think so.
The end result of learning (having learnt) is, I'd say, only memory.
Whether the memory is of static information or of dynamic state (the difference between a file with data and a program's runtime loaded with state), it's still just memory.
What else would it be?
Sure, the process of learning, on the other hand, can take different forms. Those would be analogous to loading a memory store via different mechanisms.
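A rough sketch of what I mean, in plain Python (toy facts I made up): two different loading mechanisms, one copying a fact in directly and one deriving it, both ending in the exact same memory state.

    # Mechanism 1: memorize static information by copying it in directly.
    memory_a = {"capital_of_france": "Paris"}

    # Mechanism 2: "discover" the same fact by deriving it from something else.
    atlas = {"france": "Paris", "japan": "Tokyo"}
    memory_b = {"capital_of_france": atlas["france"]}

    # Different processes of learning, identical end state in the store.
    print(memory_a == memory_b)  # True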
>Let's say the information is about driving. I can have a driving manual memorized. Can I remember that information about driving? Yes. Have I "learned" driving? No.
Let's say we can distinguish between two types of learnable things.
When it comes to information like the digits of pi, the lyrics of a song, the periodic table, and many other things, having them memorized is enough for us to say we've "learned" them.
Your example with driving, however, is not about merely learning some information, but about learning an activity. In your example we can surely say you've learned the contents of the driving manual. That's not the same as learning to drive, but nobody said it is. It is still learning the information within the manual, though.
Now, for learning to drive, one would need to learn some or all of the same information (perhaps in another form: simpler, more colloquial, etc.), and also the motions involved, how to respond to various road situations, etc. All of this, however, is still part of "loading the memory".
Isn't the end result the same, though? Information (how it's stored isn't really relevant) in the brain of the person who learned to drive, including things like "muscle memory". The process is not the same as memorizing static information (e.g. a piece of text), but the end result is still a brain with a new state, similar to RAM or an SSD with a new state.
See also my point above regarding static vs dynamic memory.
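In code terms (again a toy of my own, with made-up situations): even a skill acquired by trial and error ends up as plain stored state, serializable like any memorized text.

    import pickle
    import random

    random.seed(0)  # deterministic for the example

    correct = {"red_light": "brake", "green_light": "go"}
    actions = ["brake", "go"]

    policy = {}  # the "muscle memory" being built up
    for situation in correct:
        # learn by doing: keep trying actions until the right one sticks
        while policy.get(situation) != correct[situation]:
            policy[situation] = random.choice(actions)

    blob = pickle.dumps(policy)  # the acquired skill is, in the end, just bytes
    print(policy, len(blob), "bytes")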