Learning Is Remembering (saveall.ai)
422 points by p-christ on Sept 26, 2022 | 229 comments



There is a difference between remembering something (like a fact) and a deep understanding of a concept.

Let me give you an example from programming: I sometimes have to google the names of functions I've already used a hundred times, sometimes extremely simple ones (just a while ago it was list.size() in Java), only because I can't remember the name of the function, even though I of course understand the concept - the length of a list/array. I just can't remember whether it's length(), size(), count(), or whatever.
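
For what it's worth, the confusion is baked into Java itself: the language uses three different spellings for the same "how many elements" concept. A small runnable illustration (these are all real standard-library APIs, nothing invented):

    import java.util.List;

    class LengthDemo {
        public static void main(String[] args) {
            int[] array = {1, 2, 3};
            int a = array.length;            // arrays expose a field named length
            int b = "abc".length();          // String uses a method named length()
            int c = List.of(1, 2, 3).size(); // collections use a method named size()
            System.out.println(a + " " + b + " " + c); // prints "3 3 3"
        }
    }

No wonder the concept sticks while the name doesn't.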

This deep understanding feels quite different from simple memorization. It feels more like something in my body, similar to muscle memory or like playing a piano piece. It's something I feel I will never forget.

I'd love to learn more about this kind of understanding. For years I've tried to find good reading about it, but much has been disappointing.


Your comment reminds me of a paper by Betsy Sparrow [1] in which she hypothesizes that "Our brains rely on the Internet for memory in much the same way they rely on the memory of a friend, family member or co-worker. We remember less through knowing information itself than by knowing where the information can be found." [2]

From the paper's abstract:

"No longer do we have to make costly efforts to find the things we want. We can “Google” the old classmate, find articles online, or look up the actor who was on the tip of our tongue. The results of four studies suggest that when faced with difficult questions, people are primed to think about computers and that when people expect to have future access to information, they have lower rates of recall of the information itself and enhanced recall instead for where to access it."

[1] Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips - https://www.science.org/doi/abs/10.1126/science.1207745

[2] https://news.columbia.edu/news/study-finds-memory-works-diff...


Your comment immediately made me think of the book "Thinking, Fast and Slow". I don't know if you're familiar with it, but it talks about two "systems" that guide our thinking. From the description of the book: "System 1 is fast, intuitive, and emotional; System 2 is slower, more deliberative, and more logical."

The more you do something, or think about something, the more it becomes a "System 1" operation (fast, intuitive). Take typing on a keyboard, for example: when you first started using a keyboard you had to think about every letter you wanted to hit, but now you probably don't even think about it for a millisecond. I guess what I'm trying to say is: Intuitive understanding = Action * Repetition.

Read the book if you haven't, it might be the kind of material you're looking for. I'm not a big fan of psychology and I think it's 90% mumbo jumbo, but this book hit a few nails for me.


Thanks for the suggestion! I've actually read TFS, and I would even consider myself a "fan" of psychology; I definitely don't think it's mumbo jumbo. But in my opinion the book was quite shallow, only scratching the surface of the matter.

Edit: To clarify, shallow is maybe the wrong word. I think the book gets its main point across very well, and it's a good point. But the majority of the book is just further evidence supporting that main point. Which is fine, but I wish it would develop this main point a bit further. The book could be summarized in a short paper and wouldn't lose too much of its value, in my opinion.


Note that there is quite a bit of shaky evidence used to further that point.

https://replicationindex.com/2020/12/30/a-meta-scientific-pe...


> I sometimes have to google the names of functions I've already used a hundred times

This is why I have always been terrified of having to do a standard 'whiteboard' coding interview, and really don't want to ever participate in administering them to others. I know that if I were ever tested this way, I would be unable to remember the basic function names in languages I have used daily for years... and yet I have developed sophisticated software tools in these languages with a large global userbase. I fear that in an interview people would decide I am a fraud, that I couldn't really have done what I have done, since it appears I don't even remember the basics. But my mind just doesn't work that way: I can't remember length vs size either, even though I can design a complex algorithm and turn it into a good piece of software under the actual conditions of real-world work.


In plenty of whiteboard interviews I've written something resembling Java and decided that I needed a tuple class, which for most practical purposes doesn't exist in Java. So I just used the concept of a tuple. I can say that most people on a whiteboard interview panel don't really care if you borrow an abstraction like that; the most I've been asked to do is define an interface for it.
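
A minimal sketch of what that whiteboard tuple amounts to (the class itself is invented for illustration, since the JDK ships no general-purpose tuple type):

    // A bare-bones immutable pair, the kind of thing you'd write on a whiteboard.
    class Tuple<A, B> {
        final A first;
        final B second;

        Tuple(A first, B second) {
            this.first = first;
            this.second = second;
        }
    }

    // Usage: Tuple<String, Integer> entry = new Tuple<>("answer", 42);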

Honestly if they care that much about rote memorization of syntax, it's a red flag.


Good whiteboard interviewers will happily stipulate that something like the thing you can't remember the name of definitely exists and you can call it whatever you want in your solution.


> This is why I have always been terrified of having to do a standard 'whiteboard' coding interview

I too don't like the idea of these, but when you're solving a problem in front of someone, I feel that as long as you explain your thought process you'd generally be fine. If you can't remember whether the method is length() or size() or something else, just pick one and explain what you're doing. Intelligent interviewers would likely understand that you're familiar with the concepts and would accept your result, or let you compile/execute and fix the issue.


Back when whiteboards were a thing, most interviews were not testing exact syntax...


I think it feels different because that knowledge is more deeply ingrained with what you already know. That knowledge has many connections and relationships with other knowledge you have. Whereas the name of a function is more like a leaf node: it isn't connected to very much, and I think this is the source of the different feelings you describe.

To expand on this a bit: our memories maintain "schemas" of knowledge, and whenever we learn something new it gets slotted into our schema. The better something fits into our schema, the easier it is to remember. I think if you google "schemas" you'll be able to find more material on this.


I'm not sure I agree. I think what I tried to describe above couldn't be classified as "knowledge" at all.

To be honest, I think we (humans) don't understand this type of understanding very well. Which is somewhat ironic :-)

Edit: But thanks for the suggestion about searching for schema, will definitely take a look!


The keyword embodied may be fruitful — there is a lot of noise around the term but embodied understanding is closer to what you want, I think. You may have to relax some ideas about how things work for a while in order to absorb/integrate it.

The knowledge defined in the article is very rational and reductionist in its presentation. I don’t mean that derogatorily, but it does mean it can’t account for all the emergent features of our existence. The reductionist approach can be useful as a set of signposts for locating reliable mechanistic constraints.


Thank you for that keyword! A quick search brought me to the wikipedia article [1], which sounds quite promising.

I agree completely with your analysis of the article by the way. Beautifully said!

[1] https://en.wikipedia.org/wiki/Embodied_cognition


Understanding is just knowledge of how or why something works


Intuitively, I don't believe it's that simple. But I'd love to learn more about it.


There may be more to it, but at a minimum understanding usually requires you to remember a how or why, so understanding is rooted in memory as well. Perhaps also the memory of a broader cognitive context of the world and how a given object fits in that context. Without memory there would be no capacity for understanding.

Have you ever gained a solid understanding of a complex topic after a lot of study? When you start to forget the relevant details of the context you’ve created, your understanding fades as well. “Understanding“ is just a complex function of memory to a large degree.


I think you've touched on two different ideas:

(1) That a deeper understanding of something can make it simpler/easier to remember. One way to look at this is that you remember a model of the world. The complexity of that model determines the quantity of information you need to remember. If you can find a simpler model, your mind needs to store less information to cover everything you need to know.

(2) That a concept and its name are different things. You understand the concept of an array-length-giving function, but you don't remember the name attached to it. It could go the other way: you could know there is a function with a certain name but not remember what it does. Or you could know what it does and what it's named. (This seems at least tangentially related to linguistic relativity (https://en.wikipedia.org/wiki/Linguistic_relativity), a.k.a. the Sapir-Whorf hypothesis. The question is basically to what degree our minds build ideas on words/language and to what degree we form ideas independently of language.)


Reminds me of Feynman's anecdote about the names of a bird.

> See that bird? It’s a brown-throated thrush, but in Germany it’s called a halzenfugel, and in Chinese they call it a chung ling and even if you know all those names for it, you still know nothing about the bird. You only know something about people; what they call the bird. Now that thrush sings, and teaches its young to fly, and flies so many miles away during the summer across the country, and nobody knows how it finds its way.


The name is first and foremost a way to attach that deeper knowledge of the bird to a symbol so it can be communicated to others. The name is necessary, but usually tells you very little about the thing it refers to.


Counter-anecdote time from Murray Gell-Mann, who did think names were important. [1]

[1] https://writings.stephenwolfram.com/2019/05/remembering-murr...


I don't think anyone in computing can claim that names are important. Shell's sort, Bloom's filter, awk, C++, Java vs JavaScript...

The number of names that are misleading or almost willfully obtuse is absurdly high. (I'm actually cheating with the first two in my list, as I typically see them written as just "shell sort" and "bloom filter", which hides that each is named after someone.)


At least Shell invented the sort named after him. In the sciences you have Stigler's Law (no scientific discovery is named after its original discoverer). https://en.wikipedia.org/wiki/Stigler%27s_law_of_eponymy


That is hilarious! Yeah, my qualm is less about Shell's sort being named after him, and more that many places present it as "shell sort", where you could easily think some form of shell game is involved. Akin to thinking a Bloom filter must be something about growth or flowers blooming.

I also don't mind algorithms named after people, and can understand how that gives provenance, at the least. The Knuth-Morris-Pratt algorithm is a good example of that. What makes things difficult is that many of the other sort algorithms were not named after people. Unless there is a Merge I am unaware of. :D


It's interesting you mention playing the piano as an example, because that's a very good example of how the opposite of what you're saying is true. Try to learn a piano piece simply by playing it many times, and it'll never happen. However, try to commit the notes intentionally to memory, and reading the notes becomes unnecessary very quickly. Same with trying to memorize text; if you read without explicitly trying to memorize it, you can read it infinite times and never manage to recite it back.

I think your example about list.size doesn't contradict this. It's the difference between "familiarity recognition" memory vs "consolidation and retrieval".

You do not bother committing "size()" to memory because it's unnecessary (or because there's a lot of interference from other languages that do the same thing, so the cost of committing that fact for later retrieval isn't worth it).

But you have committed the "concept" to memory, and this can be retrieved even without the need for the label.

"Understanding" is only part of the equation. But I do generally feel that "having the basic building blocks readily available helps you build complex understanding on top of them, which then helps you consolidate the complex stuff better" is probably the main direction, rather than "understanding a complex concept helps you better remember the basic building blocks that made it up in the first place".

This is also why Anki works so well for learning theoretical subjects (even if it has traditionally been more popular for pure memorization tasks, like language learning).


You still have to remember your understanding. That's something you realize when it has been a long time since you took a class on something back in college.


True, there's a large difference between information and knowledge, but I think the intrinsic part of knowledge is the ability to recall information, i.e. remembering.

I started using Anki (a spaced-repetition tool) to improve my long-term memory by creating cards for everything I'd like to remember, no matter how trivial. Then the act of reviewing the Anki cards became a chore.

I'm currently trying to solve the problem of reviewing Anki cards with an always-on review system [1] which lets me review cards whenever I want instead of on a rigid schedule.

[1] https://memoryhammer.com


My guess: the concept of "a length of an array" is the thing you use to think with when you need it.

Why?

Because it is a single entity (through abstraction). And because it is a single entity and you use it a lot, it gets connected to many other concepts.

And because you've solved problems with it, it has more emotional value.

This means you can more easily retrieve it and it has more "meaning"/context to you.

list.length gets used less often, and it's just a concrete implementation of the concept, so maybe it's just linked to the concept.

And it's also more complex to remember and use because it's made of 10 items.


> google the names of functions

Copilot is a lifesaver here. I almost never have to google function names anymore because it just fills them in for me, while usually generating exactly the code I wanted around it.


I've resisted Copilot and autocomplete because when interview time comes and you have to write code outside of your toolset, you realize you haven't committed things so well to memory. This bit me earlier in my career after working in Visual Studio for a long time. I now code almost exclusively in vim.

I'm not saying you shouldn't use it, but before you go into technical parts of interviews I would recommend you do some coding without all the bells and whistles tools.


Why would you give up such big productivity multipliers just to make the rare event of interviewing a little easier? If an interview is coming up then just spend a few days practicing, or just don't - I've never had an interview go badly because I didn't memorize java.util.date.


Not saying that your advice is in any way wrong, but it's a bit sad that the only thing holding people back from such a productivity multiplier, innovation, or even a completely new way of coding is the inane interview process of this industry...


The paradox of copilot is that anyone who is skilled enough that they can be responsibly trusted to use it doesn't need it.


Completely agree.


If only there was a way to know which methods the object has. We could call such concept something like auto-discovery or magic-completion.


I use that too, but Copilot is just better much of the time.


Great. If I ever feel like my impostor syndrome could use a bump I'll be sure to throw Copilot into the mix.


Curiosity got to me and I finally tried the GitHub Copilot free trial. My conclusion is that it's good for the stuff I would have googled, like "I can't remember, how do I get the date again?" or "Is it array.count() or count(array)?" Instead of typing it into Google I just type it in a comment. It's like having Google in the IDE.


It's even harder to remember specific standard library functions after learning several programming languages. I know the languages themselves but still have to look this stuff up every time. One reason why Ruby is such an ergonomic language is there are aliases for all those names: you can guess length, size and count and chances are all three will work.


I have the same feeling, "knowledge" is more about knowing what is available, what options are there and broadly what each option does. You don't have to remember exactly how each option works, just know that it exists. When you need it, you can just search how it works, but knowing what to search for is crucial.


On the other hand, to the point of the article, you can't get a deep understanding of a concept without remembering the concept, other concepts that it relates to, and how.


> There is a difference between remembering something (like a fact) and a deep understanding of a concept.

This is basically the Searle argument against AI, right?


Yes! This is the case for me. I have a hard time remembering exact things, but have a very good understanding of their concepts.


As someone who has been using Anki for the past 4 years to learn Japanese and now Chinese, I've recently found that the initial "learning" step is not that hard. (i.e., given 辞書, I remember "dictionary") What is hard is keeping the content I've learned, _learned_, for more than a day.

So now I'm wondering: is my initial "learning" process wrong? Definitions are fairly simple, and I find them easy to remember at first but hard to hold onto. My process in Anki for New cards is to rep them like reviewed cards until I pass the card; so really, I'm just staring at the kanji + definition until I remember it.

This brings me to my question: is there more Anki can do when it comes to learning new cards? Is there something we can do when learning new information that will help make it "sticky"? Especially in regards to "simple" facts, like basic kanji -> definition mappings, where there are no mechanics to understand, just simple mappings.

Edit (an addition):

Also, I love SRS, but I don't understand how it can be advertised as completely different from rote memorization. When you learn a new physical flashcard, you're learning it the same way as you would a new Anki card. The only difference is that Anki will show it to you again at a more efficient time in the future, rather than at some regular interval.
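
To make that "more efficient time" concrete, here is a toy sketch of the scheduling idea; the constants are invented for illustration and are not Anki's actual SM-2-family parameters:

    // Each successful recall multiplies the gap before the next review,
    // whereas a physical flashcard box would reuse the same fixed interval.
    class TinySrs {
        static int nextIntervalDays(int currentDays, boolean recalled) {
            if (!recalled) return 1;                       // lapse: see it again tomorrow
            return Math.max(2, (int) (currentDays * 2.5)); // otherwise grow the gap
        }

        public static void main(String[] args) {
            int gap = 1;
            for (int review = 1; review <= 5; review++) {
                System.out.println("Review " + review + ": next in " + gap + " day(s)");
                gap = nextIntervalDays(gap, true);
            }
            // Gaps grow 1, 2, 5, 12, 30: five short reviews span about 50 days.
        }
    }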


The book Fluent Forever has helped me a lot with language learning. It shows you how to learn a language mainly using flashcards, among other things.

Let me recommend some things that have completely changed my experience using Anki for language learning:

- Don't use translations. Use images instead. For learning how to say "dictionary", put in a picture of a dictionary. It will be easier to remember. Also, you usually won't find a perfect translation for a word.

- Use cloze cards for grammar. Instead of learning the rules, understand them, then add four sentences with placeholders and repeat them. This process will make that way of constructing sentences stick in your head.


clozemaster.com is a website that does exactly this (your second point).


Anki is great for higher level learning as well. Instead of having just individual words, have full sentences or even paragraphs. If you can read it aloud and understand it, then you can mark it as learned, but if you need to use the definition or explanation for any of it, mark it as unlearned. Once you have a decent initial vocabulary and grammar, this is far more effective at pushing things into long-term memory. Most of language is contextual, so anything to increase the amount of context on an individual card will help with getting better holistic knowledge of the language.


I don't understand why some people still go by brute-force memorization when there are better methods, the ones our brain naturally uses. You won't see memory champions using Anki as their main memorization tool for competitions, that's for sure.

I've studied a few languages, and by far the easiest, fastest and most durable way to memorize words is free association.

It takes some initial effort to be able to create these associations, but when done, it's done. There are some words I now basically can't forget, because the association is just too strong.

Example:

Waarschuwing = the Dutch word for warning. With free association I did it like this:

The "waar" part of the word immediately made me think of the English word "war".

schu, shoe

wing, wing

So I vividly imagined a flying shoe with a trumpet warning about an upcoming war. Done. Unforgettable for me. The more ridiculous, emotional (funny, stupid, ...) and obvious, the better.

Some words are easier than others, and the ones where an association doesn't jump out quickly are at a higher risk of being forgotten. But even then, they will stay far longer, and with much less effort, than with brute force, which is basically just creating memories out of nowhere, with no associations to anything in your brain. Our brain loves to relate things to one another. Unrelated things = useless things. Everything that is useful has a relationship to something else.

So when you try to memorize in a way that is not based on association, indeed, only brute force works. That is a way of telling your brain: "welp, I can't really tell you what the usefulness of knowing this is, but somehow it keeps coming back into consciousness, so I guess I need to keep it in here somewhere. A self-sustaining island, a dot of knowledge with no connection to anything else I know or care about."


> I don't understand why some people still go by brute-force memorization when there are better methods, the ones our brain naturally uses. You won't see memory champions using Anki as their main memorization tool for competitions, that's for sure.

That's because Anki isn't for memorization, it's for time optimization and scheduling. Nothing about Anki makes you remember a fact better; that's still on you, and the memory champions use ALL those tricks and techniques.

Anki just schedules things so that non-memory-champions mostly remember most of what they need to, and it minimizes the time taken to try and do that.

Said as a multi-year daily Anki user getting really close to a 1000-day streak.


I learn a lot. The way I've found I learn best is to mimic children, especially small children.

They focus intently on seemingly incomprehensible complexity. Then they play with the small fragments they can get hold of: "ba-ba-ba-ba" when they're learning to speak.

They play. They combine two (or more) different things they're interested in into a single experiment which we think of more as play. Mushy food? I wonder what happens when I throw it on the floor? Oh look how it landed all funny! Look how mum and papa reacted!

Grinding through cards learning languages never really did anything for me. Immersion and regular (but not exhaustive!) play stretched over time makes the neurons of my mind much more reliable.


Arguably, the whole point of play is learning.


For some, yes. And their secondary benefit is joy.

For others, no. It's joy. And their secondary benefit is learning.


Learning is often a byproduct of play, but the actual point of play is highly subjective to whoever is doing the playing. Although, in my experience, if you need to learn something, making it into a game is a highly effective and fun way to go about something that might otherwise be pretty dull.


To learn a word or character, you need to be able to understand it immediately and be able to form a sentence with it without delay (at least not more than the occasional "uh..."). Remembering the meaning after some fiddling is not good enough.

That has to be achieved by actually practicing the language, either via listening, speaking, or writing.

Anki is there to help after that step, by keeping you refreshed on the exact tones, strokes, multiple pronunciations of the more ambiguous CJK characters, etc., but it can't be the motor behind the process.

For spaced repetition: although it is a form of rote memorization, the key insight is that memory is formed through recall, not review. The system then (purportedly) hits you at the very moment you are about to forget the card. It's counter-intuitive, but it's easier to encode things in long-term memory by recalling them at that point, as opposed to when they're still fresh in your mind.

This is analogous to studying in university via explaining things to yourself versus just highlighting and rereading the textbook. Every student knows the former is more effective.


A friend who is amazingly good at languages made the analogy: you can't read about how to play a musical instrument and get better at playing it. You actually have to play it.

Anki might help some, maybe with reading, but speaking and listening require actually speaking and actually listening.


The analogy can be taken too far; if you can recognise a note by sight on sheet music, you've learnt everything there is to directly memorise in playing an instrument. All the rest is practice. This is not the case in language learning. A language which is not closely related to one you already know has tens of thousands of words, the meanings of which can't be deduced from other things. There is no use doing listening practice if you do not understand almost all the words being used; you will be stuck looking them up one at a time in a dictionary. It is also very difficult to correctly pick out words you don't know in a language you are new to.


> Is there something we can do when learning new information that will help make it "sticky"?

For me, a big part of learning a new word involves creating chunks. Those chunks then form bigger chunks. Let's use 辞書 as an example: 辞 is talking/words (and is also used as the kanji for quitting) and 書 is books. So word book = dictionary.

The kanji are also broken down into radicals. 辞 is 舌 (tongue) + 辛 (spicy). There isn't a logical connection between the combination of radicals and the kanji, so I make a memorable story instead [1]. Thus the only things I have to purely memorize are the radicals.

I go through the above process when I'm creating a card, and I believe the process itself helps me learn the word much faster than rote memorization alone would. Of course, you need to start using the word in your daily life too!

Edit:

[1] When I create a story, I close my eyes and try to imagine the story happening as vividly as possible. I also have a rule that the radicals in the story must appear in the same order as in the kanji.


It seems like you've gone pretty far. I assume you started with something like radicals (constituent strokes) and built up to more complex kanji/pictograms. I know from speaking to native Mandarin speakers that they see stroke order in the characters, and they were intrigued that I don't see anything like that in English. Often we just learn the sound/typed elements (e.g. pinyin), which I guess might be like modern Asian kids.

Have you looked at something like wanikani.com with its mnemonics, or at some of the more historical derivations of the different characters? Those might help build an internal story for why the characters mean what they do. As an adult, that helped me.


Stroke order sounds like a burden to Westerners but it actually helps with memorizing characters, and makes your handwriting less awful.


Yeah it needs to help you do better "encoding". This is what determines how sticky information is straight after learning it.

One way that Save All (a company I run that's like Anki, https://saveall.ai/) helps you with encoding is that it suggests alternative ways we can quiz you on a card you just made. This helps you engage with the card a bit more as you're making it, which makes it more sticky.

Other than that, a good way to improve encoding is to link new knowledge to existing knowledge. Tools like Roam help you do this with their backlinks.


I thought active and free recall were both superior to mixed encoding and interleaved practice. Is there a study linking more encoding to enhanced active recall? From what I understand, the suggestion is to add desirable difficulty (which I guess leads to better encoding, but different encodings don't necessarily add desirable difficulty).


Not sure about the relative importance, but Save All also has active recall & interleaving. The encoding happens when you first learn something & create a card, and the active recall, spaced repetition & interleaving happen later when you review cards.


Anki is great, but it doesn't hold a candle to full sensory integration. If you moved to Japan or China and rid yourself of English, you'd be forming all sorts of associations a white screen with text can't replicate.

SRS is great for the characters -> definition pathway, but we need so much more. Listening, speaking, interacting, using the language to describe the world around us in conversation and navigate through it.

Don't stop using Anki though.


To answer your edit:

I think you are understating the difference this "more efficient time" makes. SRS outclasses every other approach we have for long-term memorization by several zeroes.

Bringing your car to the shop when the "check engine" light is on and bringing it in after every single drive are superficially the same, but only one of them makes actual sense...


The answer is yes. I’m working on it with https://reader.manabi.io which aims to enrich memorization with more colorful contextual anchoring. Working on a SwiftUI rewrite with Anki integration currently.


Calling learning remembering is like calling a computer a hard drive. Sorry, but remembering is just one part, although an important part, of learning. There is loads of difference between remembering the words of a foreign language and fluently speaking it.

The process of learning is complicated. For example, good teaching is not giving the final answers to the learner, which would seem faster, but letting the learner arrive at their own answers, even sometimes arriving at a wrong answer at first.

And the point about reducing forgetting: that's also a terrible way of learning. If you want to minimize forgetting, you have to learn in baby steps, repeat everything using some kind of spaced repetition algorithm, and endure the pain of doing all this for a considerable amount of time. Instead, learning the content rapidly (meanwhile forgetting a lot of it), getting the whole picture, and gradually gaining more understanding by repeating the process with different textbooks/courses is a much faster way of learning.

The brain is more used to BFS (breadth-first search) than DFS (depth-first search), and the whole process is much more stimulating. This is because new knowledge needs to be encoded with connections to pre-existing knowledge, and most knowledge is intertwined: in calculus, for example, limits, derivatives, integrals, infinite series, etc. Learning limits in isolation from the whole picture, optimizing for a lower percentage of forgetting, often leaves learners confused: why am I even learning this, and what is this useful for?

Even though the percentage of forgetting is much higher in the holistic BFS process, the overall content mastered in the same week or month is higher. So forgetting is actually normal, and you shouldn't panic when you forget things, since you will gain more understanding each time you learn and re-learn the content.

For the last ten years I've been making products for learning, and every now and then I'm still learning something new about learning.


I think the point being made is that remembering is a huge, if not the key, aspect of learning anything, and people underestimate the importance of memorization. Maybe the title is a bit misleading; it could've been "No serious learning can happen without memorization", but that doesn't flow as well.


If anything, the problem used to be that we overestimated the importance of memorization. Maybe it's changed now, but freeing up cognitive load is important. It's the same reason why literate societies beat oral ones: the emphasis on memorization was reduced.


I think it flows pretty well, fwiw. Strikes a better balance between readability and ambiguity.


yes exactly


There's an old saying in Asian cultures about learning.

- You do not truly understand until you "forget" what you've learned.

I know, it's weird and the opposite of what Feynman said. But it makes sense.


A less eloquent but probably more helpful translation/phrasing might be:

You have not truly mastered something until you've forgotten the step-by-step process of it (and presumably had to reconstruct it by doing it and paying attention to how you do so).

(This is also why it can be hard to teach something you're very good at - often, you literally don't know (well, remember) "how to do it" in a form that's actually useful to them.)


Does this mean you've internalized it?

I think that is generally a characteristic of someone who's mastered something.

It does make sense, but I don't think it's strictly true. You can understand something without making it second nature.


> good teaching is not giving the final answers to the learner, which would seem faster, but letting the learner arrive at their own answers

Why is that "good" teaching? It's because if they arrive at the answer themselves, then both the answer & the process for figuring it out will be more ingrained in their long-term memory!

> Instead, learning the content rapidly (meanwhile forgetting a lot of it), getting the whole picture, and gradually gaining more understanding by repeating the process with different textbooks/courses is a much faster way of learning.

I agree to an extent. It's inefficient to try and remember everything beyond a point. But spaced repetition is very efficient e.g. with Save All it might only take you 5 minutes to remember something for 10 years... so it is much more efficient than you think to try and remember more things.


Learning is a lot of things, fact collection is one small part.

Learning is mostly model building. Building a model to predict future sensory inputs from past sensory inputs. Building a model to predict future sensory inputs from past sensory inputs and control inputs.

Learning is not just recording sensory inputs for future recall, it is taking them and building useful abstractions with them.


This is why abstraction is so important (came here to say the same thing).

Wikipedia is factually correct but often lacks insight. It puts the learner at the wrong level of abstraction, limiting how much more can be learned.

For example, a sine wave looks complex, and has a great deal of inherent complexity around things like transcendental functions. But it's just a spiral: the side view of a radius arm turning through time along the x axis, with a period of 2 pi radians and a radius of 1.
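
To spell that picture out (standard trigonometry, nothing beyond the claim above): a point rotating on the unit circle sits at

    (x(t), y(t)) = (cos t, sin t)

Plotting only the vertical coordinate against time gives y(t) = sin t, the familiar wave; keeping time as a third axis gives the curve (cos t, sin t, t), a helix, which is exactly the "spiral" seen from the side.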

But if readers don't know that, they get stuck at the abstraction of trigonometry instead of the far deeper relations between things like complex numbers and higher dimensions.

That's why I think it's difficult to learn quantum mechanics without a teacher. It just ends up being a bunch of matrices and handwaving that makes little sense intuitively.

This is why the debate around higher education is silly IMHO. Sure, someone can avoid college and get hands-on experience in application. But they'll miss out on the theory and abstraction that allows them to transcend their area of expertise. That's good enough for most people, but most likely won't result in true mastery. No schooling is not better than schooling if one wants to do important work.


Your brain is neither BFS nor DFS. Your brain is a biological neural network. It is associative; that is, it follows paths of relationships. The brain actually is "like calling a computer a hard drive" in that memory and processing are wrapped up in the same mechanism. "Thinking", "knowing" and "remembering" are not separate things for a brain; they are all flavors of the same thing.

You wrote "learning limits in isolation from the whole picture, optimizing for a lower percentage of forgetting, often leaves learners confused". That is true. The reason is that, if you do it in isolation, you're not forming connections to related knowledge.

The main split in brains is short-term memory vs long-term memory. And I suspect (speculating here) that forming more connections to items in long-term memory helps in moving a fact from short-term memory to long-term memory.


> There is loads of difference between remembering the words of a foreign language and fluently speaking it.

And alternatively put, there's a difference between learning to read and learning to speak. Yet both boil down primarily to rote memorization, as languages tend to do.

Language is not a good example to prove your point on.


I 100% agree. This is true in my experience as well. Is your content available in English? If not, can you recommend some books for derivatives/integrals or statistics, please?


I agree with most of what's written here, but I think there is a major point being overlooked: there is simply too much information we're expected to know nowadays. You could argue that the amount of information we have to cram into our brains in such a short amount of time makes it very difficult to build good long-term memories. High school and university curricula are forever expanding, jamming more and more into the same period of time. The same goes for professional settings, such as being a web developer or data scientist, where new things are invented by the second and then become the standard.

At some point you'll need to offload long-term memories to external sources. So while you may forget the actual content of what you need to know, you could, for example, remember what to google and a summary of the content you expect. Essentially our long-term memory is transformed into pointers into external information banks. We only keep what's absolutely essential to perform our functions.


The problem is that if you don't commit information to long-term memory you can't use it to reason effectively in other contexts, and you have to add it back to your short-term memory every time you look it up -- so outsourcing your memory to a knowledge bank limits the complexity of the tasks you can handle.

So there might be more information you're expected to know in modern jobs -- but if you spend a bit more time consolidating rather than acquiring new information, you can build the foundations on which more advanced skills can rest.

For transparency: I work with the OP


A key point here is that our brains don't work like computer hard drives. Our brains are a lot closer to how, in biology, a single cell stores the entire DNA "data" needed to replicate using just a few base pairs.

We likely store information more in some type of loose graph structure, where we recall / "remember" something by re-creating links to that piece of information. There seems to be very very little "storage cost" for the billions of pieces of information we keep in our brains.


> There is simply too much information we're expected to know nowadays. You could argue that the amount of information we have to cram into our brains in such a short amount of time makes it very difficult to build good long-term memories.

I would also argue that the time we spend memorizing "facts", of questionable utility, takes away from the time that could be spent on learning better methodologies for thinking.


Andy Matuschak's thinking / independent research work here is very relevant:

- https://andymatuschak.org/

- https://www.patreon.com/posts/71081197

- Literally a related quantum computing / physics example (with Michael Nielsen): https://quantum.country/


It may also be interesting to consider this in the context of learning physical skills. When you're learning something new, there are a lot of things that you are doing wrong. A not-so-good coach will see them and start telling you things that you need to change. But you can't remember all these things. A good coach will find one key thing which you can keep in mind to work on.

The Inner Game of Tennis frames this as a difference between the two selves. Maybe that could be considered through the lens of decreasing the number of things you need to remember in order to improve.


I’m glad you brought up this point. I think there are analogues to “muscle memory” when it comes to learning concepts, too. For example, I don’t remember all the techniques to compute integrals explicitly, but I know that I’ll be able to pick it up quickly, because of my “muscle memory”. I feel like this falls into “remembering”, but not “memorizing” when it comes to definitions.


The best coaches I have worked with have cues they use instead of, or with, the explanations. Things like "pinch your shoulder blades" and "push the floor away". These give the practitioner an understanding of how it should feel, physically, without the need to go into details about which muscles and joints are involved.


> Maybe that could be considered through the lens of decreasing the number of things you need to remember in order to improve.

This! Learning materials that master this way of giving information according to its importance are key to effective learning, and I think a lot of them don't pay attention to it.


Yes! I think that's completely right. It takes so much energy & working memory to adapt to feedback that it's really easy to over-instruct people and give them too much feedback to handle.


I have a couple of questions...

How do you trust your memory when (as I understand it), every time you recall something your brain changes in ways that can affect recall later?

Is there a distinction between learning and knowing? For example, in my undergrad days I learned about Simpson's rule for numerical integration. I jammed enough of it into my memory to pass the test and quickly forgot all the details. Now, 30+ years later, I needed to calculate the volume of a pretty complex space. I remembered the existence of Simpson's rule but absolutely nothing else about it. Looking it up on Wikipedia I was able to re-learn it well enough to apply it in my job and move on to the next problem.

If, over the past 30 years, I had been using flash cards to remember the details of Simpson's Rule, I would have wasted a lot of time. Re-learning it when needed also means I don't have to rely on my faulty, dynamic memory. For me, it seems like there's a sweet spot to remembering enough to know the concept and then relying on the internet to fill in the details as needed.
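
(For anyone else about to re-learn it, the rule itself is one line. With an even number n of subintervals of width h = (b - a)/n, Simpson's rule reads

    \int_a^b f(x)\,dx \approx \frac{h}{3}\left[ f(x_0) + 4f(x_1) + 2f(x_2) + 4f(x_3) + \cdots + 4f(x_{n-1}) + f(x_n) \right]

i.e. alternating interior weights of 4 and 2, which come from fitting a parabola through each consecutive triple of sample points.)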


> every time you recall something your brain changes in ways that can affect recall later?

Yeah that phenomenon is called "proactive & retroactive interference".

> For me, it seems like there's a sweet spot to remembering enough to know the concept and then relying on the internet to fill in the details as needed.

I think I agree with you to an extent. For example, there is no point memorising all the digits of pi so that you can use it when programming; it's much more efficient to just remember what pi is at a vaguer, higher level than to put the energy into memorising all its digits.

But I would say that most people go too far the other way and remember less than the efficient amount.


Firstly, in terms of the distinction between learning and knowing -- the thing that matters most is the strength of the encoding in the brain. If you just memorise something with 0 understanding, the connections in the brain aren't as strong -- so they disappear. Whereas if you know something thoroughly, the connections are much, much stronger.

These strong connections are why when you go back and look at it, you recognise it and you know how to apply it - because you still have some of the residual memories from this strong encoding. But in the meantime, you probably haven't been able to apply it in an analogy for example.

Secondly, there's a classic on the topic of spaced repetition written by Gwern [0]. Gwern calculated that, given the average amount of time you spend testing yourself on something and the exponential increase in how long you remember it, if you would spend more than 5 minutes per 10 years looking something up, you should use spaced repetition to remember it.

For transparency I work with OP on Save All.

0: https://www.gwern.net/Spaced-repetition


> Firstly, in terms of the distinction between learning and knowing -- the thing that matters most is the strength of the encoding in the brain.

I'd vote for the ability to perform a skilful epistemic analysis of the retrieved information as being more useful. I prefer this because it can overcome any natural, immutable shortcomings in the underlying process.


Learning is as much forgetting as remembering.

When I study something, I go for a while, but eventually it becomes difficult and confusing, hard to see the forest for the trees. Particularly with technical information and skills like programming (or natural) languages.

When I come back a little bit later, I find that I only remember the things that made sense; my confusions are forgotten, and there is fresh mental space and energy to master a bit more of the terrain before I need another break.


I'm still staggered by how potent a post-effort pause can be. I can spend hours and days trying to improve something without being able to see the easiest spots. Then I come back 4 days later and everything just jumps out, as obvious as day. No confusion, no fatigue, lots of ideas, enthusiasm, creativity...


Our subconscious mind often works on our problems while we aren't thinking about them; I think that's more what's going on here than forgetting. Your brain figured it out for you while you were doing something else.


That's sometimes true but I think my point still stands.

When I'm studying a foreign language, I learn some words and they stick, but I'm exposed to a bunch more that I don't remember next time. I forget those meanings, but the ones that stuck are now vivid and with me, brighter.

When I'm studying Kubernetes, I end up reading a ton of information that's irrelevant to the task at hand, and lots of it doesn't make that much sense because I'm new to it. The next day, when I come back, the things that I actually understood remain, ready to be the foundation for new learning, which they couldn't have been when they were mere data points in an overwhelmed brain. I don't remember the parts I was confused about yesterday, just this stuff that now makes sense.


To unite our points, I might be thinking of something like: the immensity of sensory and cognitive data that pass through (sub)consciousness during the learning task are sifted and sorted in the unconscious while not learning; one might call the sifting "forgetting" and the sorting "figuring out".


Learning is a graph. Remembering many nodes will create the higher abstraction of learning the shape and the sometimes abstract properties of the graph.

If you read a book you might not remember the previous pages, but if you start in the middle you won't understand anything.

The nodes of knowledge might be forgotten, but that's OK if you have the graph. The nodes of knowledge that were available to you initially might have been biased; then you need to unlearn, which is often harder.


The unsuccessful learning technique the author describes at the beginning of the article is essentially the LIFO "stack" data structure/algorithm: you learn until you encounter a gap in your understanding, upon which you push your current material onto the stack, fix the knowledge gap, then pop the parent from the stack (with additional pushing/popping along the way, as needed).
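
A toy rendering of that loop, using the topics mentioned later in this comment (the class and variable names are mine; the structure is the point):

    import java.util.ArrayDeque;
    import java.util.Deque;

    class StackLearning {
        public static void main(String[] args) {
            Deque<String> toLearn = new ArrayDeque<>();
            toLearn.push("Quantum Mechanics");    // the original goal
            toLearn.push("Vibrations and Waves"); // gap found: push the prerequisite
            toLearn.push("Calculus refresher");   // a gap inside the gap

            while (!toLearn.isEmpty()) {
                System.out.println("Studying: " + toLearn.pop());
            }
            // Pops in LIFO order: the calculus refresher resolves first,
            // then the waves course, and finally the original goal.
        }
    }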

Leaving the "working-memory-max=4" hypothesis aside (I'm sure it's all true), the "stack learning" paradigm has been hugely successful for me post-classroom, and (amusingly) specifically with Quantum Mechanics.

About ten years ago, I tried to understand what QM was about, by doing exactly what the article describes, starting at Wikipedia's "Quantum Mechanics" page. I got the exact result described: no understanding of the material.

However, the search eventually led me to ocw.mit.edu, where the same LIFO-based stack learning was super effective. Along the way, a refresher calculus course (also from OCW) got pushed onto and then popped from the stack, as did the course "Vibrations and Waves" and the entire Walter Lewin lecture series (he has since been removed from OCW, but can be found on YouTube).


In the 1600s, educators in Nuremberg (Germany) were proud of a pedagogical technique that they had developed, called the "Nuremberg funnel" (https://en.wikipedia.org/wiki/Nuremberg_Funnel). In this "hydraulic" model of learning, they were proud to have developed an effective technique for "pouring" knowledge into the empty brains of students. Centuries of research have shown that this model is flawed in many ways (as many commenters here have pointed out): people never have an "empty" mind and learning involves effortful integration of new concepts with existing knowledge; for the most important types of learning it requires application, exercises and social interactions.

This author seems to believe in the Nuremberg funnel and the hydraulic model of learning, simply with a limited "flow rate" for the brain. It's disappointing to see such a simple-minded idea, which has been so profoundly debunked by huge amounts of research, on the front page of HN.


> It's disappointing to see such a simple-minded idea, which has been so profoundly debunked by huge amounts of research, on the front page of HN.

My read is that the author actually agrees with you. The author points out that all students have a starting non-empty state (long term memory), and the goal of learning is to build on top of that. It's just defining the space of "learnable" things as distance from long term memory. This doesn't seem that controversial?

They even call out all the things you call out as effective:

> Everything we know about learning efficiently is directly related to memory - "good" teachers, "good" explanations, images, diagrams, maths problems, essays, practical assignments all are good for learning because they help move things into your long-term memory.

I suspect the thing upsetting you is the call out to spaced repetition?


I’m not sure what you mean. I agree that learning often requires things like applications and exercises, where did I imply otherwise?

I’m saying the reason why applications and exercises are useful are that they lead to a change in your memory. Not that they aren’t useful.


I'd offer a similar criticism. The problem with "our working memory has a maximum capacity of roughly 4 [chunks of information at once]" is that the unit ("chunks" in the original) winds up undefined, and moreover it tempts one to just divide whatever a person seems to be able to process into four things and call them units.

And the way I'd see your "funnel" criticism applying is that each student can easily begin with a different set of mental tools, some of which let them take an idea as one relatively small "chunk" and some of which might process the idea as several "chunks".


I am utterly baffled that no response in this thread so far has taken issue with the statement "As you probably know intuitively, it won't work."

For me, this _does_ work and I have proven it many many times over the years by adding entire categories of technical knowledge to my repertoire. And not superficially, either - I get paid very well to do things professionally that I taught myself by reading Wikipedia.

If my experience were commonplace, the "it won't work" statement would be highly contentious in the comments here. Since it isn't, I guess I can deduce that I must be an outlier.


Maybe you're really good at working with uncertainty?

This article focuses on the number of new things overwhelming our working memory, but I'd argue another big problem is the difficulty of leveraging abstractions even if you don't understand them.

In this exercise, both I and the article got sidetracked into understanding the basis of all the abstractions as well, rather than just letting the abstractions be, with all of their uncertainty. But if I were better at the latter, I could see this approach working.


I did the exercise mentioned and was happily bouncing around between 3 and 6 levels deep in Wikipedia before I thought I should come back to the article to see what else it said, at which point I became very confused.

Perhaps "comfortable with knowing that I don't know things, yet proceeding anyway" is a better description.


Maybe you already know quite a lot about physics? Or maybe you have an unusually large working memory and didn't go far enough down the Wikipedia hole to get lost?

The working memory limit explained in the article must apply to you, unless you have some sort of very special form of photographic memory.


That's the thing though, I never get "lost".

I've never studied quantum mechanics aside from snippets I picked up through pop science articles or such, so it's not like I started with a big body of knowledge already in place.

It's hard for me to imagine what feeling "lost" in Wikipedia would be like. Maybe reading some article about a complex historical battle would do it - lists of people with important names would probably become a jumble without really clear identifiers of who did what and why.


Ok how about this more difficult experiment to prove the hypothesis:

I'm going to send you a Wikipedia page written in Chinese. You're going to read it once. For every Chinese word you're allowed to look up the meaning on Google Translate once as you read. You're not allowed to do any other form of learning besides reading.

Do you think in this case you'd also not get "lost"?


I think I would get lost, but only due to the vocabulary and translation load and the "once" restrictions - not due to the content of the article itself.

In practice I end up reading the same wiki articles dozens of times as I go up and down the link trees, with each pass connecting various ambiguous ideas together.

If I could annotate the pages with arbitrary amounts of translation notes as I read them, I could probably get through the article though it would take a very long time.


Ok, but the point stands that there is a limit to your working memory. Your limit sounds a lot higher than most people's, but it is still limited; otherwise you'd never get lost, even if you could only read things once.


I don't _feel_ like my working memory is particularly large, but that's also a hopelessly subjective assessment.

Maybe it's more like my working memory can hold more things by aggressively discarding details until I need them - same capacity, but with better lossy compression. Not sure if I could measure that either, but it feels closer.


Selection bias. People read “learning is memory” and choose not to waste their time reading the piece. If the headline is silly, usually the rest is too.


Reasonable, but it's literally the first point the article makes. Of the fraction of commenters that did actually read the article, I still would've expected at least a few to object.


I wrote this and would love to know what you think about it. Am particularly interested in hearing any reasons why it's wrong


Learning is memory, in a sense. More broadly, learning is the physical structure that the process leaves on your brain. When you truly learn, you do less remembering and more simulating. The process of simulating actions (multi-step addition algorithms in elementary school, or rote memorization of math facts) is the process that changes the structure of your mind in a way that isn't dependent on "memorization".


That's really interesting, can you explain this a bit more? Or let me know what to google to learn more about it?

"The process of simulating actions is the process that changes the structure of your mind"


The mind, in this sense, is composed of mental models that we use to predict how our various actions might change the state of the world. A good example is driving a car [0] - when first learning, we lack an intuitive grasp of how turning the wheel might affect the car, but as we gain more experience we can simulate just how the car will respond right before we take the action, jarring us if we're wrong. This process can be applied to robots as well [1].

0. https://www.sciencedirect.com/science/article/pii/S187705092...

1. https://www.creativemachineslab.com/uploads/6/9/3/4/69340277...


I would point you to two different motivating examples:

* Childhood development. The mind of a child who has had adverse early-childhood experiences is affected: an fMRI or chemical detection can show physical differences in the brain. But the brain is plastic (it can change) over time. Works on this subject have been published by Dr. Dan Siegel https://drdansiegel.com/books among others. The key insight here is that academic learning is not significantly different at its core than behavioral learning.

* Physical sports/martial arts depend on a reaction time much smaller than what is afforded by going through the full frontal cortex. "Muscle memory" isn't real (it isn't "memory" as you think of it). What you have in these cases are "short-circuits" (this implies structural changes) that are able to act before you are consciously aware of what is going on. The same applies to math facts and other fundamentals: you move things away from memory that needs to be retrieved and into reaction. Reading C syntax for programmers, or signing your name, is something that has been turned into structure that doesn't need "memory".

I think your initial premise is correct. We have limited memory. How do we overcome that? We write. Writing is important because with it we can overcome our natural memory limitations. You cannot think about complexities (well or clearly) if you cannot write.

The danger of writing is that you can produce something that is both irrational and nearly impenetrable to the casual reader. For example:

> Foucault's use of the concept is descriptive, that is, analytical and explanatory, and at the same time normative and critical: he describes the grip biopolitics have on individuals through technologies of power in a way that makes manifest the repression at work in these biopolitical processes.

The above, taken directly from an "academic" published journal, could be said to have meaning. Unfortunately, each one of these words has an alternate meaning that is not normative to English, making the entire (actual) meaning opaque. "Concept", "analytical", "explanatory", "normative", "critical", "biopolitics", "individuals", "technologies", "power", "manifest", "repression", "processes" are all defined differently than in a standard English dictionary. So even if you can get past the convoluted sentence structure, the intended meaning will still elude you.

But the answer to memory limitations is clear writing using common definitions of words. I bring this up because the further you extend your memory beyond what it innately holds, the more likely you are to fool yourself (and others) with sophistry.


> Writing is important because with it we can overcome our natural memory limitations.

Very interesting. Is the theory here that by writing things out on a page we are then able to manipulate the ideas in our head without the usual limits of our working memory? Working memory is still limited, but because all the information is nearby and within view, we can quickly swap things in and out of it, so its limit doesn't impede us as much?


Yes indeed. You are engaging your visual cortex as a retrieval tool and paper as storage. Visual is fast.


Thank you for writing it. I have been feeling something similar. The ability to truly understand requires knowing a few terms and what they mean. I thought I was the only one who felt this way. Most of my friends who venture into creative tasks are able to recall a lot of things because of practice, and without being distracted!

But does it affect a field like programming? When programming, if I can remember the context, then I can easily search for it (research papers, books, documentation, forums, etc.). Now, with Co-Pilot, isn't it effectively beneficial to understand a topic and develop a general problem-solving framework for ourselves so that we can let the AI do its thing?

What are your opinions regarding it?


Even with programming I would argue you'd be a much better programmer if you can remember more. Obviously sometimes you're going to have to look things up but the more you remember the more problems you'll be able to solve.

Specifically, you won't be able to solve a programming problem if the answer requires combining more than 4 things you don't have in your long-term memory (even if you can look them up). This is the main reason why Jeff Dean is a better programmer than me even though we both have access to Google: he has more knowledge and experience of programming in his memory, so even though we can both look things up, he is able to solve way more problems than I can.

Co-pilot slightly changes the type of thing that's valuable to remember, but it doesn't change the importance of remembering things. I think, as you implied, co-pilot probably makes remembering some types of syntax or boilerplate less important.


Ohh, that's a cool opinion!!! Thank you ^_^


Creating a tool to help others learn is awesome.

I think what the article misses is that there is a ton of knowledge we can't 'write down', and therefore memorization is not enough. For example, learning to solve integrals. Yes, there are some rules and tricks one can memorize, which helps, but I would argue the only way to get good at solving integrals is to interact with them, i.e. actually solve them.

Another point is the second-order effects of how one learns, for example on curiosity and resilience. There might be a long-term negative effect on motivation for a topic when there is too much focus on 'memorization' (it certainly was that way for me in my French class ;)).

I really enjoyed the 'bad reputation' part and agree that it is sadly viewed as not important enough by many.


The spacing effect can also be applied to solving problems, like "distributed practice" as opposed to "massed practice" with calculus homework [0].

Speaking completely personally, forgetting a lot of information is the easiest way to ruin my motivation to learn a topic.

[0] https://link.springer.com/article/10.1007/s10648-022-09677-2


It's not bad, but you're limiting yourself "out the gate", so to speak. If you really want to explore memory and learning you must study and practice (self-)hypnosis. Most of what we think we know about how the brain and mind work is wrong. E.g. memory: "our working memory has a maximum capacity of roughly 4" is not an absolute rule. Your brain, right now as you read this, is tracking hundreds of thousands of variables and recording a trace of everything you are seeing, hearing, smelling, tasting, touching, and doing. Almost all of this is done unconsciously, and for good reason.

Anyhow, you can learn to have better "interfaces" to your automatic unconscious abilities and leverage them to e.g. remember instantly and durably any mathematical equation, etc.

- - - -

As an aside, the strategy you sketched out at the start of your article would be workable if you have great recall; however, an even better strategy would be to recapitulate the discoveries of physics in roughly historical order. Your understanding of quantum mechanics would be much deeper and richer, and you would be following a story (an epic story made up of so many fascinating smaller stories, and one that is still going on! Albeit things have calmed down this last century or so, but no one thinks we have reached the climax yet.)

That would be the way to do it: start with the Greeks and the Alchemists and proceed to follow the trail(s) of how we as a species sussed out the mysteries of the physical Universe.


There are plenty of different ways to learn, and the optimal one probably depends on the material to be learned.

For example, in machine learning there is something called stochastic gradient descent, where, to learn, you present a single random element from the dataset at a time. In the end the model will have learned all of the concepts, by becoming more and more confident about each individual one.

For example, to learn QM, you pick a random QM Wikipedia article and try to push through it, even though there are some things you don't understand. Then you do the same thing for a different, unrelated QM article.

For learning tennis, you don't learn specifically the forehand, then the backhand; you alternate them at random, so that you develop a single unified way of playing with smooth transitions, instead of having to switch between different "modes" of thinking.

Sure, more memory can allow some speed-space trade-off in learning ability, but relying on your memory too much may make you miss some fluency that would otherwise have emerged. For example, the old school of machine learning used databases and k-nearest neighbors, which used a lot of memory and was slow. The new school of machine learning uses constant-memory algorithms that compress the data, and it can learn to generate all the pictures in the world with only ~4 GB of weights.

Learning is imagining: once you bootstrap your imagination, its bandwidth for synthesizing new examples to learn from is much greater than the bandwidth of looking up new material to learn from.


Assuming our brain is a CPU, the 4-item working memory is basically our L1 cache. We need L2 and a hard drive for longer-term memory, otherwise the system cannot really function well.

However, long-term storage is just one vital factor; another is the 'deep learning' neurons that understand the content being stored and, more importantly, connect the dots among various neurons.

We need both: understanding and storage. Neurons do both for us.


yes, we need understanding - the best way to get things into memory is to make connections between neurons through understanding. And this keeps it in memory for longer than if you just memorise it — and it will enable you to deploy the information in new contexts.

My takeaway from this article is that if you only focus on understanding (and so do not commit it to memory), you cannot reason using this information in unfamiliar contexts later on, once you've forgotten it.

So the best thing to do is to:

1. Make sure you understand something thoroughly

2. Test yourself on it using spaced repetition to ensure you keep it forever

For transparency: I'm one of the cofounders of Save All (linked site) alongside Petros the author


Copy-pasting my top-level comment:

I am utterly baffled that no response in this thread so far has taken issue with the statement "As you probably know intuitively, it won't work." For me, this _does_ work and I have proven it many many times over the years by adding entire categories of technical knowledge to my repertoire. And not superficially, either - I get paid very well to do things professionally that I taught myself by reading Wikipedia.

If my experience were commonplace, the "it won't work" statement would be highly contentious in the comments here. Since it isn't, I guess I can deduce that I must be an outlier.


There's probably something to be said about batch sizes and stability.


Slash-Dotted!? Site doesn't load. (shrug)


how you mean? this link doesn't load for you?

https://saveall.ai/blog/learning-is-remembering


Just loaded, after 20 attempts. ;)


hmmm very interesting, not sure why that could be happening! I can see loads of people are reading it now, but I would have thought our server (hosted on Vercel) could handle it


I experienced this myself when I was a kid. I wanted to do mental arithmetic like my dad, but I couldn't do multiplications of large numbers even after my dad explained how he did it. Turns out I had to memorize the times tables first, otherwise I would run out of swap space.

If you want to experience this yourself but you already know the times tables, you could try to memorize log tables; then you can do those fancy arithmetic tricks that old-school engineers used to do with slide rules.


yeah mental arithmetic and running out of swap space is such a relevant / good example of the phenomenon, might add that into the article


It seems like learning how to learn, and learning what matters in the world (and therein how to be happy), are the two most important things to learn. Yet, for some reason, our standard school curricula don't touch on either subject. Why not?


> learning how to learn

This is taught, at least at German schools. It‘s just that nobody pays attention.

> learning what matters in the world (and therein how to be happy)

This is way too subjective to be part of a curriculum. Trying would border on indoctrination.


> This is taught, at least at German schools. It‘s just that nobody pays attention.

Interesting, is the curriculum anywhere online? Curious to take a look.

> This is way too subjective to be part of a curriculum. Trying would border on indoctrination.

I see it as the opposite of indoctrination. Teaching students how to choose their own values (and how happiness is a function of your values) gives them more freedom not less. You wouldn't want to push any specific value framework, but you do want to teach students that it's possible to choose your own adventure, instead of getting indoctrinated by your society's prevailing values. A curriculum such as:

- What are values?

- The connection between happiness and values

- Value traits (e.g. extrinsic vs intrinsic, behavioral vs situational)

- Value prioritization and hierarchies

- Different cultures and their distinct value frameworks

- Practical applications (e.g. learning how to change your values; trying out different values for a day/week)


Sorry for the late reply, I hope you still see this.

> Interesting, is the curriculum anywhere online? Curious to take a look.

They are, but in German: https://www.mk.niedersachsen.de/startseite/service/rechts_un...

Also, it’s been a while since I went to school. Not sure what changed in the meantime.

> Teaching students how to choose their own values (and how happiness is a function of your values) gives them more freedom not less. You wouldn't want to push any specific value framework, but you do want to teach students that it's possible to choose your own adventure […]

I fully agree with that. I apparently just misunderstood your original comment.


Reminds me of this article I just read: http://paulgraham.com/lesson.html


> There are now ways to get rich by doing good work,

That sounds like a different test too


i wish school taught me how to learn instead of me having to wait until i was 25 and doing a free coursera course


Have you seen any SaaS or consumer products that focus on the "how to learn" and "digesting information" aspects? I'm legit asking because my team is working on that exact idea with mental models and AI.

If you are curious, we'd appreciate candid feedback from you. Just launched pre-beta on Product Hunt today.

https://www.producthunt.com/posts/neble

Thanks


> Have you seen any SaaS or consumer products that focus on "how to learn?" and "digesting information" aspect?

Yes Save All helps with that in a way

https://saveall.ai/


Remembering is required for the initial stage of learning. The whole point of learning is to understand things, to remove remembering from the scene, and to be able to pull the knowledge out of thin air. Therefore the title of the entry is misleading, very much so :-).


Please explain how you can understand something without remembering the information that the understanding is made up of?


Spaced repetition is already a well-understood and well-accepted idea. Why feel the need to debate it?

The more important thing is the product, Save All, which uses AI to make the spaced repetition process much more efficient and effective. I never got into Anki because creating decks seemed like a tedious process, but this new approach using AI is much more attractive.

But I guess people like to argue/exchange ideas and this post is getting much more traction compared to the Show HN.


I think the software that is promoted here is actually detrimental to "remembering".

The core thesis here, "learning is intertwined with memory", was indeed hashed out by psychologists in the 1970s (Craik and Lockhart); it was precisely these two who put forth the "levels/depth of processing" idea, which emphasized the importance of the encoding process. A more "meaningful" encoding enhances memory.

The author not so secretly wants to sell us an app for spaced practice. And yes, spaced practice is indeed important and has been well studied in memory research. But I feel that the tool they are selling is the antithesis of good learning, especially if you want to check "Quantum Mechanics" off of your list. If I got it right, the main difference between their app and Anki or any other flashcard program is that you do not need to spend so much time drafting your questions, because an AI can do it for you. But this is precisely where deep encoding would come into play. You need to decide how to phrase that question and write it down! You need to draft that answer! The classic crib-sheet phenomenon.

Regarding computerized support for such human "deep learning", I can recommend "concept mapping" tools (VUE from Tufts or IHMC CmapTools). But a pen and paper is great, too.


You review the AI's suggestions, which is itself a form of encoding (one that is arguably more powerful than the encoding you describe)


Well, it shouldn't be too difficult to test these hypotheses: if the AI approach is better, then actually run a study and show that!


Learning is remembering and understanding. You have to progressively build layers of understanding (and be able to recall it quickly).

This article focuses mostly on the second part, about remembering.

You could recall things but not fully understand them, which is going to make it harder to learn.

And of course, you might understand something but eventually forget it (in either a few days or a few years, depending on how often you use the related memory). When you forget, it becomes hard to learn.


If storing information and creating associations between bits of it were 'learning', then databases would be very smart indeed.

No, learning is not 'storing pieces of information in long-term memory and recalling them'. It's not the ability to recall information. At the very least, learning is information+behavior change+understanding+values and attitudes associated with the information. It's much more complex than memorization and recall.


Thanks for sharing this; I've certainly been thinking along the same lines.

To add to this, I've known many folks that can accomplish certain tasks almost automatically and creatively. If you asked them to recall exactly what they did to achieve it they couldn't. And this usually isn't action on concrete information either but on intuition alone.

If humans worked primarily on memory we'd have been toast a long time ago. There's too much variation in the natural world to confront it solely on the basis of memory. I'd say we're more experientially oriented as opposed to memory oriented


I mean... it's true if you limit your definition of learning...


in what way is the definition limited / what should it be?


IMO, such a viewpoint about "learning" is limited to learning 'knowledge'; i.e. this sort of 'learning' is limited to repeating (replicating) external facts.

I wonder, if learning is remembering, then what is "understanding"??

from my own viewpoint, learning is about something external; for example "what's the word for such and such concept?" ..in english or in spanish?

point being that you need a corpus of consensus about what the specific linguistic-culture calls the learned concept.

but then, what does it mean to understand?

I think the way towards making sense of this (answering it) needs to consider learning of physical (do-able) actions. Because when considering such skills as learned/understood, the distinction between learn/understand seems to vanish.

So then maybe understanding has more to do with having learned something to a proficiency level that allows one to teach (show/explain) to another how to do that action?

finally, to throw a proverbial wrench into my own attempts to make sense, what does it mean to perceive something complicated, such as the meaning out of arbitrary alphabetic glyphs? how is the meaning out of a text understood? what did we have to learn to be able to do it? is it just a matter of knowing most of the contents of a dictionary??


This is cognitive science in a nutshell. Most early philosophers argued exactly what "wisdom" means both externally and internally. Look to the Socratic philosophers for simplified explanations.


I think learning mathematics for example is not about just "remembering".


I was brought up with this distinction - my family always put more pressure on understanding than remembering - but now, after many years, I start to suspect that there is really no difference between the two: understanding is probably just remembering the proper models that are useful for solving the problems you want to know how to solve.


remembering vs understanding is uncompressed vs compressed storage:

If I tell you a sequence of numbers: 1,2,4,8,16,32,64,128

If you try to remember them, having never seen these numbers before, you have to remember each one individually, and it will not be so easy.

But if you, before trying to remember them, apply a little computing power to figure out that it's a sequence of powers of two, starting from 2^0 and going up to 2^7, then you have compressed the information I gave you. You understood (presumably) the source of the information, and you will be able to remember the numbers much more easily.

One strategy I see people applying to unknown data they want to remember is to try to establish links to already-known information or made-up stories. For example, when given the sequence of numbers above but not knowing about exponentials, some people would try the following:

  * 1: the first number is one (as on the number line)
  * 2: the second number is two (same)
  * 4: I have four friends
  * 8: I ate lunch with my friends
  * 16: my sister was also there, she is 16 years old
  * 32: the house number of the restaurant was 32
  * 64: we ate sushi, my dad also likes sushi, he is 64 years old
  * 128: one-two-eight sounds a bit like "want to eat" and yeah I also like to eat
By doing so, some people seem to achieve quite good recall of an unknown topic. But from my point of view they are only re-encoding the information to sort it into already-existing bins in their memory, instead of compressing it. The amount of information is not reduced but increased, and it seems harder to reconstruct the original encoding/information. Additionally, without compressing the numbers into their generating algorithm, it is not possible to use the "learned" knowledge for anything but reciting.

This all leads to Solomonoff's theory of inductive inference.


“[…] the purpose of memory isn’t to remember the past. The purpose of memory is, at least in part, so that you don’t repeat the same errors that you repeated in the past [..]”

– Jordan B. Peterson

I like to think that Spaced Repetition learning pre-empts these errors, so we are apt to recall when we productively need that memory.

[1] https://m.youtube.com/watch?v=yJ6_cV_RtQU


Whoa. That quote is kind of amazing. My brother committed suicide, and sometimes memories of the past really haunt me. That quote is actually really comforting.


This is the worst possible example.

Learning basic QM is 90% complex differential equations and 10% physical intuition.

There is a reason why you generally don't get to introductory quantum mechanics until second year after (or while) you're doing calculus, differential equations and linear algebra.

To quote a lecture I once saw: the deepest point of any state is a mine shaft somewhere. You don't find that shaft by gradient descent.


why does that make it the worst possible example?


Because learning quantum mechanics isn't about learning quantum mechanics, it's about learning a bunch of maths to learn quantum mechanics.

It doesn't matter how much of the wiki page you remember, without understanding a few key pieces you're wasting your time.


Not really.

There are different learning styles. I find memorization very difficult (poor passive recall), yet my comprehension is superior (aiding deduction).

I find many with “photographic memory” to be sadly inadequate at comprehending and exploiting interrelationships.

There is something like dexterity which develops in the mind, an impression left which is not exactly remembering, yet definitely stored in the brain.


> I find memorization very difficult (poor passive recall), yet my comprehension is superior (aiding deduction.)

So, the [learner's paradox](https://fs.blog/the-learning-paradox/) is that students do things that are easy and gratifying but have no lasting impact. Baseball players practicing batting should be practicing a random mix of pitch types but vastly prefer 'massed practice' sessions. I.e. 20 fastballs then 20 curveballs then 20 changeups or whatever. Yes, you can do very well on that 20th pitch, and perfecting it will be gratifying, but you come in tomorrow closer to pitch 1 than the person who mixed it up does.

It's worth pointing out that "photographic memory" isn't durable (typically), and may in some ways impair a person -- like a guitarist who is so good at listening to and replicating a song that they never bothered to learn sheet music; they may end up using short-term memory as a crutch and commit fewer things to long-term memory in a way that impairs them later in life, especially versus someone who has studied other similar networks of concepts in the past.



Myths are a myth

I’m not speaking of trends, I’m speaking of the architecture of our minds.

There are definitely differences in how various minds work, and these inconclusive pseudoscience studies making wild assertions are as junk science as the junk science they feed upon.


> There are definitely differences in how various minds work

Yes, but we still all learn in roughly the same way. The differences aren't big enough.


The good news is that memory techniques have never been as accessible as they are now.

I am fully in agreement with the conclusions drawn in the article, but the bad news is that even with those techniques it can be a slog. Any Anki user will tell you that maintaining dedication and avoiding burn-out is your Achilles' heel, not the limits of your human ability. Understanding something for the first time (in the Feynman technique sense of being able to explain it well), and doing that multiple times per day, takes up a lot of mental energy even if you are smart and naturally talented.

In conjunction with using memory techniques, we need to add dietary practices where we become much more selective with the information we take in. Places like HN give the illusion of learning (and to a great extent help broaden your mind about certain topics), but the actual utility of all that random knowledge butts up against the opportunity cost. There are already many more worthwhile pursuits than can fit into your lifespan.


All it takes, then, is to move from one method of remembering to a more efficient and enjoyable one. I can illustrate what I am trying to say using your Anki example: once a language learner reaches a certain level in the language, after acquiring a sizable set of vocabulary, they can move to "abundant reading", memorizing words through sheer frequency of exposure, as a more effective method of learning new vocabulary.

So it's all about remembering, but what differs is how someone approaches it depending on their level and understanding of the topic they are trying to learn.


This is definitely true. The snowball effect and the networking of information helps a lot.

I'm trying to look at the problem with a wider lens, however. For example, whether learning that language is actually something we want to do, as opposed to all the other things we could be doing. In the context of public education, since this is the theme the author focuses on, we don't just study certain topics; we study a certain spin on some topics that is determined by a range of government officials.

In other words, you have to deal with severely limited energy and interest compared to all that is available in life so cutting chaff is probably even more important than boosting your ability to remember. In fact, selecting what to learn is life, just like a sculptor removes the parts of the marble that aren't in the end result.


Completely agree on Anki requiring too much effort; it's the main reason why I'm making Save All (https://saveall.ai/). Save All is a simpler version of Anki that uses AI to try to make it less effortful.


It's not wrong, it just doesn't cover much of the breadth of learning, nor how humans learn over time. Also, it doesn't consider different learning styles. It sees learning as just chunks of conceptual information, which reduces the problem too far.

It's not just some simple "move from short term memory -> long term memory" to make more room. If it was, we would all optimize for that outcome. People would have written books about learning that glorified this concept. Teachers would be teaching it in schools to have students score better on tests.

Learning is much more a lifelong mindset akin to the famous Socrates quote of "I know that I know nothing". Or even the idea that we change through the books we read / things we learn, but don't remember much of what we did.

So I don't agree it is all about remembering because like GI Joe said, knowing is half the battle.


Learning styles are basically a myth actually. It's been shown we all learn in roughly the same way. Check out some of the research mentioned here

https://www.techlearning.com/news/busting-the-myth-of-learni...

> People would have written books about learning that glorified this concept.

There are lots of books on it actually. Make It Stick is my favourite book on this, you should try it!


> Learning styles are basically a myth actually.

You have a preference of how you learn, no? It's not a myth if there's perennial truth to it.

> There are lots of books on it actually. Make It Stick is my favourite book on this, you should try it!

I've read many titles, including this one. Not everything is going to be "learned" with spaced repetition, interleaving, retrieval, and varied practice. These are great modern methods for learning effectively in the short term, but they are not by any means practices you're going to continue past your formal education (i.e. Anki flashcards for everything you want to learn).


> You have a preference of how you learn, no? It's not a myth if there's perennial truth to it.

I might have an emotional preference but there's no good evidence that different people learn better in very different ways


Perhaps, but if you have a preference for one way of learning over another, you're probably more likely to stick with it and keep up the habit of learning. If it's a method you don't enjoy, you'll probably just drop the topic and not learn it – not necessarily because the topic itself was the issue, but because of how you went about learning it.


You can remember something without fully understanding it. There is more to learning than just remembering.


I agree with the core idea of your comment, but I think the author, as far as I understood it, thinks of learning as "fully understanding", so if you didn't fully understand it then you didn't learn it yet.


If you don't understand it, then you don't remember the information that comprises an "understanding" of the thing


The words used make the whole thing very confusing. Understanding and remembering are two very different things in the cognitive literature. Understanding is deeper, about connections; remembering is more the recollection of facts or knowledge. It is fair to say that you cannot understand something without having some knowledge, but you can certainly know a lot of facts without having the slightest understanding.

They also pose wildly different challenges, where the metrics by which one judges the "degree of having attained knowledge", in the broader sense of the word, depend entirely on which of those aspects one values.

Some literature refers to this dichotomy as "instrumental" versus "relational" understanding. You see this very clearly in math, where students can recall the facts of equations, but they don't understand them.

It can very well be that the author of this article is aware of and appreciates this distinction. But the phrase and title "learning is remembering" will certainly evoke suspicion. Just because it is much easier to remember something one understands does not mean that by remembering, one understands.


In math people apply formulas they don't understand to numbers that they don't understand to get answers they don't understand.

For some reason we then give them degrees and call them educated if they remember all those things they don't understand in an exam.


This was Socrates’s position too as related in Meno[1]. I don’t see any reference so I wonder if the author rediscovered that position independently.

[1] https://en.m.wikipedia.org/wiki/Meno


i had not seen this before, thank you very much for sharing!


When I was 25, I had a scarily stellar memory. I never used a calendar. Never missed any appointments. I could remember exactly what people had said in meetings several months earlier. I never forgot names. I still almost never forget names, but mostly my memory is gone. Well, because of life.

My advice. Organize yourself. Use as much cognitive offloading as possible. Keep a diary, a document, wiki or whatever. Use calendars, reminders, alarms, Kanban boards and on. Make sure to understand concepts and how things really work. Practice how to spot what is important. Take your time on important things. Learn the tools. Use the tools. Learn how to pick the right tools.


Some other things to consider. A working memory of 7 is a median number for the population. Some folks will have a greater working memory (some outliers are in the 80+ range). Some folks will have a smaller working memory.

Specifically, for someone with ADHD (a topic close to my heart), the size of working memory is typically around 3 items. Relatedly, long-term memory is also a bit more chaotic for those with ADHD; when you can only hold three linked concepts at a time in working memory, information is going to be encoded with fewer overall links.

That all is to say, if you're building a tool to help people remember things, don't just build it with the median in mind.


Per the author, they’re launching a service called “Save All”, which is a simplified version of spaced repetition; might be worth explicitly stating this in the intro or footer of the linked blog post.


Yeah, the author mentioned it in a reply to a comment mentioning Anki.


Are you saying we should mention it for our own business benefit? Or that it's bad to not state it in the article for some other reason?


Largely because it explicitly narrows the context, but also because launching a service like that increases the odds of bias, in my opinion.


I didn't mention it in the article because the article is hosted on the same domain (with a link to it at the very top of the page), so I thought it was quite obvious.

Also, I wanted it to stand alone as an essay to see what people thought. If I linked to the product within the article, it would turn the whole article into an advert rather than an essay.


Okay, but you don’t even mention spaced repetition until the end and only in a single sentence.

And obviousness is relative; I did not even notice the logo and base domain. I read the title and scanned the text for core concepts, and spaced repetition is only mentioned once, even though in my opinion it's literally both the topic and your solution to the issue, which was not obvious until I read your comments. Think: if you were to ask random people who are not aware of the company what the post is about, then define spaced repetition and the reasoning behind the company's name, and then ask them again, the answers would be noticeably different; you could even hide the context and ask whether the concept of "save all" was mentioned on the page and what it means.


Spaced repetition is not the topic. The topic is memory, and there are many approaches to remembering more; spaced repetition is just one of them. There are also things like elaboration, dual coding, mnemonics, memory palaces, etc.


From the linked article:

>> Well, new technologies (Save All link) leveraging techniques like spaced repetition mean it's much easier to remember what you learn so its time to rethink that. You don't have to forget what you learn anymore.

Might be wrong, but at the point the topic is covered and a single solution is presented, that solution becomes the topic.


Confused. Did the domain change? It’s currently hosted on the product website. Seems like pretty clear, standard content marketing.


thanks. and yeah the domain didn't change


Maybe "remembering in long-term memory" means understanding, internalizing, and intuiting? I got the same problem as the author described when taking physical chemistry and linear algebra. All of sudden, there were so many concepts per page and the concepts were so abstract to use that my usual M.O of getting very good at derivation and proofs by manipulating equations stopped working. Only then did I realize that I needed explicitly focus on intuitively understanding each concept.


I think remembering is not the key to learning. Learning is understanding, in the sense mentioned in cloogschicer's comment + making connections with existing concepts: this is usually called "transfer". Transfer enables us to solve novel problems. Transfer is the real test, and probably the final goal, of learning.


Given comments about using memorization for code etc, wanted to share this other article about how that can be incredibly effective:

https://senrigan.io/blog/chasing-10x-leveraging-a-poor-memor...


I think the Wikipedia system of learning mentioned in the article would work. I call it "just in time learning". You click the link, you read it, if you decide that it is useful then you learn it by repeating it and understanding its relation to what you already know.



Is the author Petros Christodoulou, the computer scientist cited here: https://scholar.google.com/citations?user=mWdm_bkAAAAJ&hl=en ?


yes that's me, how come?


He's moving it from working memory to long term memory :D


lol oh dear, sounds ominous


Um, there’s like 300(0) years of research on this, and, really, it isn’t anything like this simple. For “recent” science you can start with volume 1 of the JEP journal of learning and memory, which is 50+ years of continuous monthly publication.


Yeah, some simplification is required to condense 300(0) years of research into a digestible article, but is any of it misleadingly incorrect?


“Some simplification”, even if it is technically correct can be misleadingly incorrect in the sense that it misleads you into thinking that the object of analysis is simple, which is incorrect. Analogous example: A computer is a machine that you stick numbers into, and it does stuff to them, and other numbers come out. But you can only put in a limited number of numbers because it can only do stuff to a handful of them at a time. The rest need to get saved someplace else, called RAM. So machine learning is RAM lookup. Is that misleadingly incorrect in the sense I described?


Related, the Ebbinghaus forgetting curve

https://en.wikipedia.org/wiki/Forgetting_curve

See plot with spaced repetition halfway down.


yep that's the one! good old ebbinghaus


How they describe learning quantum mechanics at the beginning is literally L. Ron Hubbard's Study Tech. It's how Scientology teaches you how to learn.


A long time ago a professor described learning as "getting used to".

Funny coincidence to the article, it was at the beginning of a lecture about quantum mechanics.


I've always felt there is... 1.) Things you know you know 2.) Things you don't know you don't know 3.) Things you know you don't know.


I would argue memorizing an idea is not learning. Being able to memorize an idea doesn't mean you can explain it using different concepts.


saveall.ai looks really cool.

I wish it were a paid product that I could download my cards from.

I don't want to be in a position where I put in lot of effort to input my cards and then you shut it down.

If the founder is here, do you plan to make this a paid service? If so, when? And will you provide the ability to export my cards? If this is not going to be a paid service, do you plan to open source it?


It is partly paid already; you can pay for faster life generation in the quiz. We're almost profitable and very well funded, so we aren't going to be shutting down. If we ever did shut down, we would automatically export everyone's cards and send them to you; we will also probably be adding an export feature so you can do that yourself early next year.


no it isn't, come on. If learning were just remembering, then the existence of Wikipedia would mean that you already know quantum mechanics, but of course that contradicts the premise. There is something more you have to do other than literally being able to access the information; at the very least an installation step is required.


not sure you read the article


no, not at all. Learning means you don't have to remember, because you have understood it and can recreate it at will


learning is about connecting the dots, not just gathering dots. e.g. based on what i know, what should i learn next?


This is true as far as it goes with respect to short-term / working memory.

But I think it glosses over the most important detail: long-term memory comprises many distinct sub-systems. "Remembering" is typically (IME) associated with explicit/conscious long-term memory processes, but I think the most important thing for learning complex domains like Quantum Mechanics is implicit memory, specifically procedural memory.

https://en.wikipedia.org/wiki/Long-term_memory#Implicit_memo...

In order to understand higher-level concepts like the Schrodinger Equation, one must first thoroughly understand lower-level concepts, starting with mathematical theories like calculus and physics models of energy. It's not possible to simply fill out an Anki card for these lower-level concepts and memorize the Wikipedia page; one must practice performing those mathematical operations until they are unconscious, like riding a bike.

This is recursive; one can't understand higher-still domains like Quantum Chromodynamics or Quantum Chemistry until one has worked with the Schrodinger Equation (and other QM formulations) enough that it's deeply lodged in your procedural memory.

This is why it takes ten years to train a physicist (and even then one could argue that the physicist is at the beginning of their journey); one must spend the undergraduate years working out a bunch of problems by hand, gradually layering new concepts on top of old ones. It's not like pushing a software engineer through a dev boot camp, where a dev can produce something simple but useful after a few months. There is no way of usefully abstracting away the lower levels of complexity in physics.

The other side of that observation is useful too; software bootcamps are possible because we successfully abstracted away a lot of complexity. Good abstractions (even when leaky) can be immensely valuable. Imagine having to do a PhD to have enough expertise to build a web application? I think this achievement is perhaps under-rated, particularly in this community that takes joy in peeling back the layers.

I suppose the other observation I'd make is, don't overuse Anki; it's good for memorizing facts, but not necessarily helpful for procedural memory. You still need to practice doing the thing in order to master complex domains. (I'm open to the argument that having facts memorized might make the procedural acquisition faster, but I'm not certain that it's more effective than just spending the memorization time on focused procedural learning practice instead.)


I wish it hadn't pointed to another silver-bullet app at the end of the article.


Why? I almost didn’t include a link so am interested to hear your answer



