"One method comes from the physicist John Wheeler (the PhD advisor of Richard Feynman). Wheeler recommended that, after we solve any problem, we think of one sentence that we could tell our earlier self that would have 'cracked' the problem. This kind of thinking turns each problem and its solution into an opportunity for reflection and for developing transferable reasoning tools."
That always struck me as the natural way to compress knowledge for exams. I've got nowhere near the intellect to remember whole derivations and proofs, so I have to boil them down just to have any hope of retaining them.
Doesn't always work, though. Turns out that memorizing the form of the Lorentz transformation will let you solve arbitrary length-contraction / time-dilation problems, but nowhere near as quickly as the physics GRE demands. C'est la vie.
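(The GRE shortcut is to skip the transformation and jump straight to the factored-out results. A quick sketch of the two standard formulas, with the function names being my own labels, not standard notation:)

```python
import math

C = 299_792_458.0  # speed of light, m/s

def gamma(v, c=C):
    """Lorentz factor: 1 / sqrt(1 - v^2/c^2)."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

def dilated_time(proper_time, v, c=C):
    """Time dilation: a moving clock's proper time, seen from the lab frame."""
    return gamma(v, c) * proper_time

def contracted_length(proper_length, v, c=C):
    """Length contraction: a moving rod's rest length, seen from the lab frame."""
    return proper_length / gamma(v, c)
```

At v = 0.6c, gamma is exactly 1.25, so a 4-second proper interval dilates to 5 seconds and a 1-meter rod contracts to 0.8 meters.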
"For chess, deliberate practice includes deep analysis of grandmaster games."
If this is true, could studying (reading) good code do the same for programming? One could, for example, begin to write an application and find code for a similar application, then check as they go along why things are implemented the way they are. The tricky part here is deciding which code is worth studying.
I believe "read good code" is already standard advice for improving your coding skills. However, I think there are many differences compared to studying chess by analysing games.
One way that it works for programming is doing small exercises and then checking the answers. For instance, the 99 Problems in Prolog (or its translations to Lisp, Haskell, etc):
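For a taste of the format, here are two of the early problems sketched in Python rather than Prolog (the problems translate easily; the function names are mine):

```python
def my_last(xs):
    """P01: find the last element of a list (error on an empty list)."""
    if not xs:
        raise ValueError("empty list")
    head, *tail = xs
    return head if not tail else my_last(tail)

def my_reverse(xs):
    """P05: reverse a list without the built-in, accumulator-style."""
    acc = []
    for x in xs:
        acc.insert(0, x)  # prepend each element, as a Prolog accumulator would
    return acc
```

The value is less in the problems themselves than in comparing your solution against the published one afterward.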
But as you say, for larger problems it's harder to find good "answers". Even for something relatively small like unix utilities. Say you want to write cat or echo. You get source code from BSD and GNU and Solaris and they're quite different from each other (I recall seeing a comparison somewhere, mainly putting the GNU code in a bad light, perhaps unfairly. Does anyone have the link?).
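Part of what makes the comparison interesting is how little the core of cat actually is; nearly all the divergence between implementations is in flag handling and buffering policy. A minimal Python stand-in for the core loop (no flags, "-" means stdin, and the `out` parameter is my addition for testability):

```python
import sys

def cat(paths, out=None):
    """Minimal cat: copy each named file (or stdin for "-") to out."""
    out = out if out is not None else sys.stdout.buffer
    for path in paths or ["-"]:
        src = sys.stdin.buffer if path == "-" else open(path, "rb")
        try:
            while True:
                chunk = src.read(65536)  # bounded reads, so pipes work too
                if not chunk:
                    break
                out.write(chunk)
        finally:
            if path != "-":
                src.close()

if __name__ == "__main__":
    cat(sys.argv[1:])
```

Everything beyond this (e.g. GNU's -n, -E, -v options) is where the real implementations start to disagree.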
This is why I haven't yet read all of Peter Norvig's coding essays -- you can depend on him to publish code that's really hard to improve on (by the metrics he's writing for, like clarity). I like to write my own first before reading someone else's, because yes, it really does help you suck the juice out of a learning opportunity. Feynman and Turing both seem to have emphasized this too.
Really polished code is hard to find. I've been playing around off and on with Ken Thompson's regular expression search paper most recently.
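Thompson's paper compiles the expression down to machine code, but the core idea survives simplification: track every reachable NFA state in lockstep with the input, so there's no backtracking. A toy sketch of that idea (my own stripped-down version: only literals, '.', and postfix '*', with no grouping or alternation, and anchored at both ends):

```python
def match(pattern, text):
    """Anchored match, Thompson-style: simulate the set of reachable
    pattern positions in lockstep with the text (no backtracking)."""
    # Tokenize into (char, starred) pairs.
    toks, i = [], 0
    while i < len(pattern):
        starred = i + 1 < len(pattern) and pattern[i + 1] == "*"
        toks.append((pattern[i], starred))
        i += 2 if starred else 1

    def closure(states):
        # Starred tokens may match zero characters: also reach past them.
        out = set(states)
        for s in states:
            j = s
            while j < len(toks) and toks[j][1]:
                j += 1
                out.add(j)
        return out

    states = closure({0})
    for ch in text:
        nxt = set()
        for s in states:
            if s < len(toks) and toks[s][0] in (ch, "."):
                nxt.add(s if toks[s][1] else s + 1)  # star stays put
        states = closure(nxt)
    return len(toks) in states  # accepted iff we can be past the last token
```

The running time is bounded by pattern length times text length regardless of input, which is the property the paper is really about.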
I feel that the end result of polished code isn't useful unless you can see the process they went through to get there. It seems difficult to retrace their footsteps.
I've found that extra info to be interesting and useful, it's true, when I can get it. For instance, Norvig's Lisp-in-Python essays came out with the code still in some flux and I could compare his improvements to mine.
Did he really read only those 2 books? I'm not a chess player, but still I find that incredible. Surely he didn't learn all his openings only by playing and reading one collection of games?
I'm not really surprised. I went through Josh Waitzkin's chessmaster tutorials, and one thing that stuck out was him talking about how his coach had him focus on the mid- and end-game, and hardly at all on openings.
The idea, I think, is that by not knowing openings very well, you quickly get out of the opening book, neutralizing your opponent's preparation and steering the game toward where you're strongest.