TLDR: The author independently re-discovered what you may know as Old Code Syndrome.
I think that's because mathematical papers place too much value on terseness and abstraction over exposition and intuition.
This guy's basically in the position of a fairly new developer who's just been asked to do a non-trivial update of his own code for the first time. All those clever one-liners he put into his code made him feel smart and got the job done at the time. But he's now beginning to realize that if he keeps doing that, he's going to be cursed by his future self when he pulls up the code a few months later (never mind five years!) and has zero memory of how it actually works.
I'm not intending to disparage the author; I've been there, and if you've been a software developer for a while you've likely been there too.
Any decent programmer with enough experience will tell you the fix is to add some comments (more expository text than "it is obvious that..." or "the reader will quickly see..."), write unit tests (concrete examples of abstract concepts), give variables and procedures descriptive names (the Wave Decomposition Lemma instead of Lemma 4.16), etc.
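To make the analogy concrete, here's a minimal Python sketch with hypothetical names (riffing on the "Wave Decomposition Lemma" above): a descriptive name and docstring instead of "it is obvious that...", and a unit test standing in for a worked example.

# Hypothetical sketch: the name and docstring carry the exposition.
def decompose_wave(signal, window_size):
    """Split a signal into fixed-size windows, zero-padding the last one."""
    padded = signal + [0] * (-len(signal) % window_size)
    return [padded[i:i + window_size] for i in range(0, len(padded), window_size)]

def test_decompose_wave_pads_final_window():
    # The concrete example a reader can check by hand.
    assert decompose_wave([1, 2, 3, 4, 5], window_size=2) == [[1, 2], [3, 4], [5, 0]]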
It would be really nice if all it took to understand difficult mathematics were some easy programming tricks.
The problem with looking at old code is you forget what is going on or what the purposes of the different components are. The problem with looking at old mathematics is that it is genuinely very difficult to understand. You work very hard to become an expert in a field and get to a level where you can read a cutting-edge research paper. Then if you let that knowledge atrophy, you won't be able to understand it without a lot of re-learning when you look at it again.
Unfortunately cute tricks like comments and concrete examples won't save you here (if concrete examples even exist -- oftentimes their constructions are so convoluted that the abstract theorem is far easier to understand, and oftentimes they are misleading: the theorem covers all cases, but all of the easily understandable concrete examples are completely trivial and don't require the theorem at all).
Programming has existed for, say, 50-100 years. We have recorded mathematical history going back thousands, with contributions from most of the most brilliant human beings to ever exist. Do you think perhaps there's a reason why a simple and easy trick like commenting and renaming lemmas has been discovered and solidified as standard practice in programming, but hasn't been adopted in mathematics? Are mathematicians really just SO dumb?
The answer is those tricks just aren't good enough. Mathematicians do exposition. They do plenty of explaining. Any textbook and even many research papers spend a huge amount of time explaining what is going on as clearly as is possible. As it turns out the explanation helps, but the material is just plain hard.
> Programming has existed for, say, 50-100 years. We have recorded mathematical history going back thousands, with contributions from most of the most brilliant human beings to ever exist.
Mathematics with a solid logical foundation has also existed only for the past century, and writing programs is, as some have already pointed out, actually equivalent to doing a mathematical proof.
The actual problem is that when programming you are talking to computers, so you have to lay out each step, or the computer will not know what to do. Sure you can use libraries to do very complex things all at once, but then that's because those libraries have already laid out the steps.
But when doing mathematics you are talking to humans, and you often skip steps in a very liberal way. There is indeed a library of theorems that is written down, but there is also an unwritten library of "it trivially follows" -- and a mathematician asked to actually write it down might feel humiliated.
When you think something is so trivial that a computer must be able to do it, you still have to find or invent a library for it. When you think something in mathematics is so trivial that it must be true, you just need to convince your audience.
One day all mathematical papers will come with a formal proof, but that day has not yet arrived.
To clarify the last sentence: I am not criticizing mathematicians for not doing formal proofs. Often mathematical publications are not final "products" but explorations into new methods. Requiring formal proofs for every publication, with the status quo of computer-assisted proving, would surely impede the development of mathematics. What I am saying is that I hope one day, with the development of computer-assisted proving, the chores involved in doing formal proofs will be reduced to such a degree that mathematicians are more inclined to do them than not.
Mathematical papers will not come with formal proofs until formal proof systems have an "auto-prove" button strong enough to do everything for them, so that the proof can come as an afterthought that takes five minutes.
Formal proofs do nothing to help understand the mathematics, all they do is help to check for errors in logic -- have you ever read a formal proof? They are leagues more obtuse and difficult to understand than any piece of mathematics from any field.
A program is technically truth-value equivalent to a proof, but the resulting documents are MILES apart. The proof is an explanation, in plain language, of why something is true, written for others who understand the topic. The formal proof is a sequence of opaque inference rules that the computer promises is equivalent to a real proof.
It's the difference between understanding a formula and plugging it into your calculator. The calculator is more likely to compute correctly, but it gives you no help when it comes to actually understanding mathematics.
Mathematicians as a community completely reject the notion of using formal proofs (while sometimes begrudgingly accepting they may be useful to check our work) because they completely miss the point of what we are actually doing.
True, but the point is that in principle you can write very detailed, fully formal proofs, in exactly the same manner programs are written. The rest of my comment then discusses why it is not written that way.
Doing formal proofs manually is like writing Turing machine instructions. Computer-assisted proving gives you higher-level tools, but apparently it is still not powerful enough to be accepted in the mainstream. And unlike programming, where you get stuck if you haven't invented higher-level languages, mathematicians have the widely accepted tool of "it trivially follows". Bourbaki might disagree, though.
Users of high-level languages typically do not think about what the compiler is doing, and if in fact they do not know what the compiler does, often the only way to find out is by reading the compiler's source code.
This is akin to looking at a math paper, seeing "it trivially follows...", and finding that your only recourse for learning why it so trivially follows is to get a mathematics degree.
With programming the information is always there. If you don't understand a higher level language but do understand the language in which the compiler is implemented, you can always read the compiler.
The "it trivially follows", the information is simply missing. Gaining a mathematics degree will often give you the intellectual power of finding the missing pieces yourself, but the only sure way is to ask the authors themselves.
At least in programming it's possible to step into a function and/or work at multiple levels of abstraction.
Getting a math degree just to be able to fill in the gaps created where 'it trivially follows' is equivalent to being required to memorize the API for a framework because clearly defined documentation is hidden behind a paywall.
So-called 'intellectual power' has little to do with it. What you're describing is familiarity with navigating a minefield of poorly structured information and tapping academia (i.e., at a very high cost of entry) for 'special access' to resources that bridge the gaps.
If software development suffered from the same informational constraints and lack of innovation, we'd still be playing Pong.
Well said. Memorizing the API is fundamental to success in most branches of math - but this erects a barrier that a lot of able individuals are unable to cross. I've always thought this was a problem but I almost never hear of anybody complaining about this so I figured it was just me. Languishing in the established system is usually interpreted as "sour grapes".
It strikes me as a collective means for mathematicians to reduce entry into their field and thus to increase their own salaries. Maybe you don't have to get a "math license" to practice math, but you have to somehow acquire a huge body of knowledge that is almost never spelled out fully in the mathematics literature. This will prevent many people from becoming mathematicians and thus make math a more exclusive and lucrative endeavor.
To be cynical about it.
I guess that's not much different from most fields' use of specialized vocabulary, etc. There are many informal methods of dissuading people from competing in your own little corner of the labor market.
> Do you think perhaps there's a reason why a simple and easy trick like commenting and renaming lemmas has been discovered and solidified as standard practice in programming, but hasn't been adopted in mathematics? Are mathematicians really just SO dumb?
No, they aren't, but the example in question would have been easier to understand if he had tried to explain what was going on, instead of just saying "it is trivial that this or that follows". I think the comparison with "old code syndrome" is pretty much spot on, to be honest.
... especially if code is to be considered in terms of information theory. Context is important, and context can't be encoded without raising the entropy. Also, speech and to a large degree mathematical symbolism are sequential, so multi-dimensional problems have to be broken down into one sequential dimension, raising the entropy dramatically or losing information. Can't help it.
As far as I can tell, nobody has really adopted any of his proposals yet, and most papers continue to be written in pseudo-prose. Remember, mathematics used to be written in full prose (e.g. "the square of the hypotenuse is equal to the sum of the squares of the other two sides") and it took hundreds of years for us to realise that "x^2 + y^2 = z^2" was a better notation.
While mathematics may be "genuinely very difficult to understand", so is any complex piece of software. Software engineering techniques might be useful to mathematicians, even if they are all new-fangled and modern.
I remember years ago telling a coworker that I was an ACM member and subscribed to the SIGPLAN proceedings. He looked at me and with all sincerity asked, "You can understand those things??"
To which I responded, "About half," but I totally sympathized with his question. Both Math and CS need the reincarnation of Richard Feynman to come and shake things up a bit. There's too much of the 'dazzle them with bullshit' going on. It's no wonder that it takes so long for basic research to see application in real scenarios. You people bury your research under layers of obfuscation about half the time. Does it really help anybody to do that? Why do you do that?
"If you can't explain it simply, you don't understand it well enough." is my new favorite Albert Einstein quote.
Thing is, I did understand it. Hell, I looked through my notes from my maths degree (5 years ago) and guess what - most of it seems like nonsense. The worst bit is that these were notes to myself - jam-packed with comments of "so obviously" followed by a transformation I can make no sense of at all.
It makes me pretty sad to think what a waste of time that learning was. Also the flip side of "hehe - I was well smart" is "shit - I'm now a moron"
>"If you can't explain it simply, you don't understand it well enough." is my new favorite Albert Einstein quote.
Yes and no.
I spend a fair amount of time explaining things to children. Not exactly five year olds, so no ELI5. More like ELI13. But to do this often requires oversimplifying points to the degree that you either hand-wave or even sometimes give incorrect examples that are 'good enough' at the level you are aiming at.
For example, consider explaining gravity as mass attracting mass. That is over simplified and breaks down at certain points, but for explaining to a kid why objects fall when you drop them, and even giving an opening to explain things like acceleration of falling objects, it is good enough.
So a better way of saying it is that if you understand both the subject matter and your audience well enough, you will be able to give simple explanations that increase the audience's understanding.
> "If you can't explain it simply, you don't understand it well enough." is my new favorite Albert Einstein quote.
It’s also one of his more moronic quotes. Certainly on a very abstract, dumbed-down level everything can be explained simply; sure, if you cannot give your parents a rough idea what you’re doing, you might want to look into more examples. But there are plenty of things which require a very extensive basis to be understood thoroughly.
For example, it is very easy to summarise what a (mathematical) group is, and for anyone with a basic understanding of abstract maths, it will be understandable. It's also very simple to find some examples (the integers with addition, ℝ\{0} with multiplication, etc.) which might be understandable by laypeople, but you will either confuse the latter or only give examples and not the actually important content.
Further, when you have "simply explained" what a group is, can you go on and equally "simply explain" what the fundamental representation is and how irreducible representations come about? You just need a certain level of knowledge (e.g., linear algebra) already, and not every paper can include a full introduction to representation theory.
"You just need a certain level of knowledge (e.g., linear algebra) already and not every paper can include a full introduction into representation theory."
Then why is paper still the overwhelmingly preferred medium?
Using hypertext it would be trivial to link to an external source describing the specific concept used from linear algebra.
Not providing supporting links is only good for an audience that holds the entirety of mathematical knowledge in their heads (ie mathematicians in academia).
The rest of the world, including those who have since moved on like the author, don't fit into that category.
Therefore, the work can only be accurately read and understood by the tiny minority of specialists capable of decoding the intent of the work.
Limited reach = limited value to society.
Is the intent of a PhD really to advance the field of mathematics? Or is it just another 'measuring stick' for individuals to prove to others how 'smart' they are?
While "easy programming tricks" are not all it takes to understand difficult mathematics, writing a math paper like good code makes it drastically easier to understand what's going on.
E.g., encapsulation: given a theorem with a highly technical proof whose ideas are not super important, hide the proof in an appendix. This is directly analogous to private functions which exist only to power a public one.
Naming lemmas is also helpful, although not sufficient.
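To sketch the encapsulation point in Python (hypothetical names, and a crude heuristic used purely for illustration): the underscore-prefixed helper plays the role of the proof tucked away in an appendix, and the descriptively named public function plays the role of the theorem statement.

def _bound_successive_differences(sequence, tolerance):
    # "Proof in the appendix": technical detail a reader can skip on first pass.
    return all(abs(b - a) <= tolerance for a, b in zip(sequence, sequence[1:]))

def looks_convergent(sequence, tolerance=1e-9):
    """Public "theorem": the named statement callers actually rely on."""
    # Crude check for illustration only: has the tail of the sequence settled down?
    return _bound_successive_differences(sequence[-5:], tolerance)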
Mathematicians aren't dumb. But they do have a set of traditions and a rhetorical style which is often blindly copied. Also, most mathematicians haven't been exposed to good software engineering practices - consider how few use git and instead just email around "paper_v3_chris_edits.tex". Having worked in both the math world and the CS world, math folks can learn quite a bit from software.
A paper cannot teach a layman everything they need to understand the topic, but there are many papers out there that I have difficulty understanding because of how they are written, yet can comprehend when reading them alongside a companion exposition. It is a balance of getting one's point across while targeting a wide enough audience.
It is the same with code. No one should be writing production code at the level where any non-programmer fluent in the language of the comments could understand it. But it should be written simply enough that performance isn't impacted and anyone maintaining the code can understand it without spending extreme amounts of time digesting it. And sometimes a key performance boost will turn into a 'here be dragons'.
I do think mathematicians are optimizing a bit too strongly for similar-level peers, but please don't think I'm saying they are dumb for doing so.
Since when are standard tools of good education 'cute little tricks'? Mathematics is hard. Mathematicians are some of the smartest people in the world. Yet the current tendency to not do these things, or not do them enough, is a consequence of a certain culture, efficiency constraints, and even the near-universal use of LaTeX.
Programming and math _are_ essentially the same thing (the Curry-Howard isomorphism), and the problems are indeed equally complex.
The difference is that programming is driven by economic concerns, hence agility, flexibility, etc. have been developed.
Mathematics is driven in the university sphere, where it is mostly intrinsic motivation that drives the work. Not many mathematics professors have the urge to sit down and learn Scrum, read manifestos of coding practice, etc.
The math culture will eventually see itself be forced to go these ways to keep up.
I don't think that programming and math are the same in a practical sense. I am studying math and CS and they're fairly different. Programs deal with specific things, types and data that you manipulate and see with your eyes. Maths deal with abstract concepts for which finding examples can be pretty difficult.
Also, while programming, you can design your functions and their interfaces before writing them down. In maths, that's impossible: you're constrained by what can/cannot be done.
Scrum or coding practice doesn't apply here. Maths don't have to "keep up"; they're already way ahead of their applications. It also doesn't need to be fast or flexible, just rigorous. And while motivation and intuition help (especially when learning), some concepts cannot be motivated or given an intuition, and even in that case you still have to learn the small details and formalizations.
> Programs deal with specific things, types and data that you manipulate and see with your eyes. Maths deal with abstract concepts for which finding examples can be pretty difficult.
Have you forgotten the pains that you went through trying to understand the difference between values, pointers and references, lexical and dynamic scoping, static and dynamic typing, and the like? Can you see them with your eyes? Have you ever tried fully explaining any of those to a non-CS major in half an hour? :)
Abstractness is pretty subjective. For non-programmers, even the idea of CPU and memory can be abstract. And it doesn't help to open up a computer and point to the hardware; that is like claiming a mathematical paper is not abstract by pointing to the very concrete paper and ink that embodies it.
I'm not saying those concepts are not difficult. But values, pointers and references are things that relate to memory. You can simulate on paper how a value or a pointer is managed in a program. Same with scopes and static typing. There are specific examples for all of those.
Some concepts in mathematics are way above the abstraction level that you mention. For example, the projective plane, or nowhere-differentiable functions, or geometry in higher dimensions; those are concepts that not only do not have any physical equivalent, but are also very difficult to grasp and imagine in your head.
True. But still, I think the problem lies more in the fact that mathematicians skip too many steps in their proofs (see another comment of mine) than in the inherent abstractness of mathematics.
I've done mathematical research in the past and I actually think a lot of the practical methodology I've learned from software development could be incredibly useful to mathematicians. I'd love to work full-time on a selection of related research problems with a group of coworkers following a sort of "agile" process with quick morning standups, a centralized repository of works and proofs in progress, "proof reviews", Trello boards, and so on.
Would it actually work? I honestly don't know, but I'd seriously love to try it.
Programming is a subset of math, namely, the part of math that deals with algorithms. There are many parts of math that are not contained in programming. For example, floating point numbers are finite representations of real numbers, but since floats are finite, without some math theory beyond just algorithms there is no way to understand them. There are many similar examples.
The differences you note are what I consider cultural differences.
Math does _indeed_ have the idea of interfaces and implementations. They are just called something different (existential types, from a PL point of view).
I recommend you look into type theory; in particular, the book "Types and Programming Languages" provides a very good introduction.
In CS we have two areas that make it completely clear that math and CS are coupled, namely complexity theory and programming languages. After having taken a course in each, and pondering a bit, it should indeed be possible to see that they are the same.
> I think that's because mathematical papers place too much value on terseness and abstraction over exposition and intuition.
If it only happened in mathematical papers... It is all over the place: in books, classes, etc. Few professors tell you what they are doing and why; they just throw formulas, theorems and symbols at you. Intuition is not given its necessary importance, first because it's hard to grasp some concepts, and second because sometimes there's no easy intuition behind them unless you have deeper knowledge of the subject.
Regarding "unit tests", or descriptive names... That's very difficult once you get into advanced math. Examples are probably going to be trivial or too contrived to be useful. And if you have to give a name to a theorem or lemma, you'll end up describing its full contents.
In Germany you move on to your advanced courses for the last two years of high school. I picked Math and Physics.
We did a bunch of linear regression (with Maple, I believe) and some manual differential equations. I was one of the best in that class, but when I asked what we needed this stuff for, the other people in the class looked at me and said the following:
"if you ask this question you're in the wrong class"
I went on to study engineering, and I guarantee you the other two still don't know what that stuff is good for. They learned the formulas by heart and then went on with their lives.
I mean it was fun for me, it was basically a coding exercise. And I was happy because I was faster than everyone else, but I didn't get why we were doing it.
The same thing actually bothered me about my studies. Just one example: folding is a fairly easy concept. But for some reason they first want to drill it into your head; you learn a bunch of techniques, and then, if you're lucky and you stick with it, eventually it clicks and you realize why in a completely unrelated class.
Why can't we just provide a simple real life example first and then go on explaining the details?
Aren't things easier to grasp when you have a real understanding of, and connection to, them? Isn't that precisely why people who learn coding at home tend to be better than those who studied it only because they were told it's a solid profession?
I think with a good example and visual representation you can probably teach most of the stuff that's taught in Uni to young kids. But then you would be forced to admit that you wasted a lot of time during your own life, who'd want to admit such a thing.
> Why can't we just provide a simple real life example first and then go on explaining the details?
That's a very good approach, which I try to follow in all my writings. Basically, I imagine a reader that is generally uninterested in the material, so the first thing to do is to "pitch" the mathematical concept using a simple example, or just say why the concept is useful. In the remainder of the lesson, I assume the reader might lose interest and stop reading at any point, so this is why I put the most valuable content first (definitions and formulas), followed by explanations, and finally a general discussion about how the material relates to other concepts.
> I was one of the best in that class, but when I asked what we needed this stuff for, the other people in the class looked at me and said the following:
"if you ask this question you're in the wrong class"
I am always so disappointed when I hear people express this. Being able to place an effort in a broader context is so helpful in being able to approach the work well. A few teachers in the Literature department in high school had the same attitude and it was incredibly demoralizing and left me kinda directionless in their classes. I wish things like https://youtube.com/watch?v=suf0Jdt2Hpo had existed back then to give me some idea of what useful and interesting literary analysis looked like and could do.
I think there are teachers for whom rote repetition is teaching.
I used to know a math lecturer, and his attitude was very much that it was his job to throw proofs at his students, and the bright ones would put the rest together for themselves.
He wasn't even remotely interested in the less bright ones, and certainly not in presenting the material in a way that made it easier for them to follow.
Digital has real potential here, because you can build animations and virtual math labs to explore concepts and give them a context, and suddenly math becomes practical and not just an excuse for wrangling abstract symbols for the sake of it.
> Why can't we just provide a simple real life example first and then go on explaining the details?
This. People learn differently, in my case, if I can't get the 'Why' first, I'm not that excited to learn it. I guess making 'simple' real life examples in many cases is hard.
I also tend to learn things much better if they came from a real problem/need I have. There was a good discussion about a 'project based university' here: https://news.ycombinator.com/item?id=10989341
Yeah, teaching the basics of an abstract concept without first explaining how it fits into the bigger picture is IMO not the best way to motivate some people. I'm also a person who wants to understand why it's important instead of just trusting someone that it'll be useful "later".
It'd be cool if, once you start your major, there was basically an overview class explaining why each of your courses is important and what concepts (at a high level) you should be grokking with each course and semester; basically some context to frame your learnings.
I took an automata class where the professor talked into the chalkboard and refused to explain why we were required to learn any of the material. It wasn't until later in the compilers course that we had a teacher who actually took the time to explain how all that mysterious theory actually had a place in the real world. So many light bulbs went off in my head during that class.
I had a similar experience from a different perspective -- I took a course on Theory of Computing, which was 50/50 gate-level CPU design and mathematical work on computability, automata and so on. I found this interesting as an intellectual exercise and to get a grasp on what is computable and what isn't, but then the next semester it proved super useful in the compiler design course.
I am also much more happy with an answer along the lines of "it has no practical use currently but is interesting because of...." than no answer at all.
I actually think you're in the norm. The "why" helps create a belief; a belief is something that ignites action. Without it, someone's belief as to why they should learn [X] is too often defined as "to get a good grade."
When I asked some professors in high school what math was for, they said it was more to 'improve your thinking' than to use in 'real life'.
Now, a math course designed to be progressively easy to grasp, personalized when needed, entertaining and showing lots of examples from real life (e.g. relating algebra with 3d and video games) requires a very talented educator; teaching well is really hard and usually there's not enough time or resources to do it. That's why I think one of the most important skills to develop is to learn how to learn.
Yup. I don't recall actually learning anything in math class from 7th grade through my senior year.
I learned trig and geometry from my shop and programming courses... Trying to do graphics in QBasic and determine lengths and angles in carpentry gives concrete examples. It's not as though most math sprang spontaneously from pure thought-stuff - at some point, architects, inventors, astronomers and others in concrete endeavors discovered these rules.
It's not difficult to explain the first few topics in calculus in terms of distance, speed and acceleration. Other examples I remember were washing lines (a catenary), the path traced by a steam train's wheels, and rocketry.
My teacher in England always had a real world example, but students with the other teacher didn't.
> Why can't we just provide a simple real life example first and then go on explaining the details?
But sometimes there is no real life example. Fundamentally, math is abstract. Yes, mercifully, its models often have analogues in nature, making its purpose utilitarian and intuitive. But sometimes no analogue exists. As in much of modern physics, in math, often you have only abstraction.
I think that's why math is difficult to learn. Without compelling illustrations based in the physical world the student must follow the concepts and proofs using the rigor of math's legal transformations, fortified only by the faith that these formalisms will sustain truth. But too often the practitioner must remain oblivious to the utility and implications of both the end and the means.
I even asked at university (in a statistics course), regarding some specific test, "what do I need this for?" The professor looked at me and said "you'll never need this". I packed my things and left (and finished the course about three years later with a different professor).
I don't know... I'm taking a theory of computability class right now and I'm sure glad that the pumping lemma for regular languages is called "the pumping lemma for regular languages"
But isn't that an attempt at a "descriptive" name?
The person who discovered this lemma clearly envisioned the underlying process as a device pumping out new strings belonging to the language.
I took a brick-and-mortar automata class a long time ago, and a classmate who clearly wasn't reading the text before lecture insisted the prof called it the "plumping lemma" because it was about growing wider strings. That might have been the only funny thing that ever happened in automata class, unfortunately.
Education by satire / humor is a sadly under-explored field. "A satirical approach to linear algebra"... not sure if I'd kickstart support that, or turn around and start running. In the eternal spirit of "anything that sounds interesting and impossible is a great startup opportunity" I propose that someone ... etc etc, you know.
I think you missed the point of the article, which was that students, parents, standards-setters, educational theorists, and legislators have a distorted idea about what education is for and why certain subjects are taught. We don't teach history so that children can recite the years in which various battles happened. We don't teach algebra in order that everyone in society knows how to factor a simple polynomial.
The author can no longer understand his dissertation, but that doesn't mean he failed, or that the educational system failed by granting him a PhD for the work. Rather, the dissertation was about proving to the system and himself that he had learned how to tackle a complex problem and generate a solution that would pass muster with his academic mentors. In the process he learned many skills indirectly, became a more effective problem-solver in general, and had fun, all of which are far more valuable and far more important than the topic of his dissertation and whether or not he still understands it.
I started taking readability very seriously once I started going back to extend old code and finding I couldn't immediately understand what it was doing.
Now if I have that problem, it's two problems: the original problem and the readability problem. The readability problem gets solved first; only then can the original problem be solved.
I don't see comments helping me; I can spend the time better making better method names, better local variable names, extracting methods, and ensuring that lines do not run off the screen.
Maintaining my old code got significantly easier after I started doing this.
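For what it's worth, here's the kind of before/after I mean, as a small Python sketch with invented names:

from dataclasses import dataclass

@dataclass
class Order:
    total: float

# Before: a comment props up an opaque expression.
def price(order):
    # apply 10% discount for orders over 100
    return order.total * 0.9 if order.total > 100 else order.total

# After: the names carry the explanation, no comment needed.
BULK_DISCOUNT_THRESHOLD = 100
BULK_DISCOUNT_RATE = 0.10

def qualifies_for_bulk_discount(order):
    return order.total > BULK_DISCOUNT_THRESHOLD

def discounted_price(order):
    if qualifies_for_bulk_discount(order):
        return order.total * (1 - BULK_DISCOUNT_RATE)
    return order.total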
I'm fixing someone else’s code right now and a few single line comments would have saved my client thousands of dollars.
Today I put in a log statement to see why the code deletes data from the database if there is no new temperature data for the time period. The cron job has been running all day and so far every time it attempts to delete the data, there is no data to delete.
Another line runs a different script if the time is 15:00. No idea what is magical about that time. I added a bunch of log statements to see what happens at 15:00 that is different from every other hour of the day. So far I have no clue.
I’m sure the original coder had a reason for inserting these bits of code, but damned if I know what it was.
There are dozens of instances like this in the code. A one line comment would have saved me hours of work and the client several thousand dollars.
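Something as short as this would have done it. The sketch below is hypothetical (invented names, and an invented reason in the comment): the real reason is exactly the thing only the original coder could have recorded.

from datetime import datetime

def run_reconciliation():
    ...  # stand-in for the mystery script

def maybe_run_reconciliation(now: datetime) -> None:
    # Hypothetical reason, for illustration: the upstream temperature feed only
    # finalizes the previous day's readings at 15:00, so running any earlier
    # would reconcile against incomplete data.
    if now.hour == 15:
        run_reconciliation()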
I'll go one better: I've got servers in my machine room that I don't know the purpose of. Literally in some cases the way I've found out what they do is shut them off and wait for someone to complain.
I've seen stuff like that before, typically it's intended to be temporary code put in as a way of troubleshooting or achieving a non-standard result but wasn't cleaned up properly. I see that so often that I simply assume it's the case and not even try to run it down any further. Just make it work properly and move on.
Deleting from an empty database could just be a sanity check. If it's logically supposed to be empty at a given time, it's a perfect time to clean up database errors...
I don't know why people say this type of thing, as if there's some choice you have to make between good names and comments. You can have both, and there are absolutely times when comments are necessary. Too many comments may be a "smell", but code that doesn't require any comments at all is very unlikely.
I think a lot of coders jump to comments as a first resort rather than trying to make the code clear. In theory it's not a tradeoff, but in practice I've seen a lot of heavily-commented code with single-letter variable names. My rule of thumb would be something like "never write a comment until you've spent at least 5 minutes trying to make the comment unnecessary".
The problem is the bug-catching cruft that creeps in around DataInput and mathematical operations.
I think it would really help to have a language that distinguishes a method's original raw body (displaying the pure intentions) from the various filters and exception-catching wrapped around it.
Such a function, with good names, could be almost as good as a comment - and is usually there.
Comments can be useless too:
/* Function taking these arguments, returning this type */
Comments are not for how or what; the code does that just fine.
Comments are for why.
Sometimes the why is obvious, then you don't need a comment. The rest of the time, add that comment. Even if it's something like "Steve in accounting asked me to put this in."
Good comments do frequently answer the why question and, sometimes, the non-obvious what.
It's generally true that the code expresses the what, but it's also true that it can take the reader time to discern. A simple comment here and there can be a shortcut to this discernment which, over thousands of lines of code can save serious time.
Such comments are best added in commit messages. If you add a JIRA task number to each commit, it's almost always already there (because the task will contain your discussion with Steve).
And when you change the line because Tom asked you to change it again - you don't need to remember to remove the comment about Steve.
I have recently started working on an existing project which is all new to me. Having to go through git commits looking for relevant comments is just a silly suggestion. It's far more useful, when I am browsing through the code trying to understand it, to see a comment beside the relevant piece of code, not hidden away in git commit messages.
IMHO comments aren't there for newcomers. You are a newcomer for a month or two; you are a developer for years, most often. Besides, how is
//Steve in accounting asked me to put this in
foobifyTheBaz(bazPtr, foobificationParams);
more helpful for newcomers than just
foobifyTheBaz(bazPtr, foobificationParams);
Comments answering "why" are very important when bugfixing/changing stuff. You want to know if sth was intentional or just accidental, and if it was part of the change that you have to override, or some independent change, so you know which behaviour should stay, and which should change. You should always look at the git blame of the region you change anyway before making a change (it often turns out your change was there before and commit message has the reason why you shouldn't change it back).
And you don't look for git commits. You enable git blame annotations in your IDE and hover over the relevant lines to immediately know who, when, and why changed that particular line. That's why commit messages are as important as good naming IMHO.
By the way, when you have code like:
//Steve in accounting asked me to put this in
foobifyTheBaz(bazPtr, foobificationParams);
barifyTheBaz(bazPtr, barificationParams);
if (mu(bazPtr)) {
    rebazifyTheBaz(bazPtr);
}
How do you know which lines the comment refers to and if you should delete it or not when Tom asked you to change barifyTheBaz invocation? That's the main advantage of commit messages over comments - they are always fresh and encode the exact lines they refer to.
// We foobify the baz in order to ensure that both
// sides of the transaction have their grinks froddled.
// This was a request from Steve in Accounting;
// see issue #4125 for more details.
so that (1) if you'd otherwise be wondering "wait, what the hell are they doing that for?", you get an answer (and, importantly, some inkling of what would need to have changed for removing the code to be a good idea), (2) there's an indication of where you can find more details (hopefully including the original request from Steve in Accounting), and (3) if the code around here changes, you can still tell that the relevant bit is the foobification of the baz.
I strongly approve of putting relevant info in your VCS commits too, of course. But, e.g., if someone changes the indentation around there then your git blame will show only the most recent change, which isn't the one you want, whereas the comment will hopefully survive intact.
Changed formatting isn't a problem IMHO. You can ask git to skip whitespace changes, and you can (and should) enforce consistent formatting anyway to make history cleaner. And even if for some reason you don't want to do either - you can just click "blame previous revision" if you do encounter "changed formatting" revision.
I encountered it a few times and it was never a big problem.
The problem with comments in the code is - they have to be maintained "by hand", and they are often separated from the context after a few independent changes.
When you change a function called inside the foobify function because of another change request, you will most probably forget to check all the calling places all the way up the call stack and fix the comments referring to the foobify function. And then the comments may start to lie.
I encountered lying comments a few times and it usually is a big problem. I started to ignore comments when debugging, and I'm not the only programmer that I know that does that.
I do think there is a place for comments in the code, for example for documentation of some API, but IMHO commit messages are the perfect place for explaining reasons of particular change, and I prefer not to repeat that.
In which case it's one click away. And anyway you should enable checkstyle and autoformat on saving files, and you can use "ignore whitespace changes" in git view for legacy code.
While I don't agree with the general sentiment of vinceguidry (e.g. I think comments answering "why?" are very important), for your specific counterpoints I do tend to try to solve it in code if possible:
2) Vector3 unitVectorWithDirection(Vector3 originalVector_mayBeZero); vs. Vector3 unitVectorWithDirection(Vector3 originalVector_throwsIfZero); (if in a supporting language, throws declaration would also clarify).
3) float ArcTan(float x_throwsIfZero); or float ArcTan(float x_returnsNaNIfNot0to1), etc.
This assumes:
(a) You're not working in a language or environment that supports range constraints in the first place, because if you are then that's ideal.
(b) You're working on personal code or a small team. If you're writing code for public consumption you have no choice but to add detailed API documentation if you want to be successful, even if it's not DRY.
The sibling comment about renaming is good - particularly with regard to the first function. However I think the other two are better suited to documentation rather than trying to make the function signature explain itself. Sooner or later you'll just remember that it takes a float, not the variable name.
Certainly calling things like "x_returnsNaNIfNot0to1" works, but I find it a bit ugly and it gets complicated if you have multiple or more complex constraints.
This is where languages with docstrings are nice. In Python all I would do is add a """ comment describing the inputs and the return values.
You then have a) a comment describing the code; and b) documentation that's standard so people can pull it up with pydoc or ? in Ipython. When I'm working in Jupyter, I often hit shift-tab to check what a function is expecting.
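For example, a minimal sketch of the pattern with a made-up utility function (this docstring is what pydoc, ? in IPython, or shift-tab in Jupyter would surface):

def clamp(value, low, high):
    """Limit value to the closed interval [low, high].

    Args:
        value: a number (anything supporting comparison).
        low: inclusive lower bound.
        high: inclusive upper bound; assumed to satisfy high >= low.

    Returns:
        low if value < low, high if value > high, otherwise value.
    """
    return max(low, min(value, high))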
Was just looking at the Oculus SDK math library, and they have some really neat ideas...
For one, handling rotation sign (is positive clockwise or counter-clockwise?), left-handed vs. right-handed coordinates, etc. through C++ template parameters.
I prefer to put that kind of stuff in the javadocs/xmldocs, but I mostly work in C#/Java, with IDEs that have tooling baked in that make generating that kind of documentation A.) really easy to generate and B.) very useful in auto-complete lists, mouse-over popups, etc.
Ideally, you'd have some tests as well that would test those kind of corner-cases and illustrate failure cases and expected outputs.
Sometimes you are using a particular method that is required by the context of your work rather than code you have control over. Perhaps the method name in the framework you are using is non-obvious, or there is a bug in the way it works.
Sometimes comments are just to save future you some time or help other developers find context.
Suffice to say any project of significant complexity will probably require comments at some point. That complexity can come from the code, the task, the stakeholders or the dependencies.
Unless you are happy with a method name like getSimpleProductInstanceThroughCoreFactoryBecause ModuleRewriteWillBreakCompatibilityWithSecurity PatchSUPEE5523 ()
In that case I guess you're right.
Edited to stop the method name stretching HN layout.
That introduces unnecessary complexity in the code for something that could just be explained in a comment.
There were no fixes performed in the method; the method just gets a product instance from a factory. The comment gives the reasons why it was using a core factory directly instead of using the automatic object resolution mechanisms.
The real method name was getProductInstanceFromCoreFactory. Still long, still clear as to what it is doing. But making the context clear would be more code than it is worth.
Avoiding comments by writing clearer code isn't a bad habit, but my point is that comments are very useful for providing context where encoding that context in the code itself would be erroneous or cumbersome.
A good marker for when comments are going to be useful is when something stumps you, and then you fix it and/or figure it out. Chances are, when you read the code again it will stump you again - or on re-reading you might not even realize there was an issue. So I comment the change. It's usually something simple like, "This cast avoids a special case in method X", or along those lines. The comment is the "why" where the code is the "what".
If I run into that situation, I refactor the code so that I can better understand it the next time. To not do so is to waste all that time you spent understanding it.
> It's usually something simple like, "This cast avoids a special case in method X", or along those lines.
If I had an issue like that, I'd fix method X to be more accepting of unclean inputs.
The problem is that method X is a remote invocation into another system that has different change-control procedures, a different ticketing system, and a different release schedule.
I'd use a gateway class and intention-revealing method names, even if all that method was doing was casting a value. I'd call it ".edge_case_fix", (but described better than "edge case") and do the same for every weirdness in the external system that requires workarounds.
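A rough Python sketch of that gateway idea (hypothetical names throughout; the remote client and its string-typed totals are invented stand-ins for whatever quirk the external system actually has):

class ExternalBillingGateway:
    """Keeps every workaround for the remote billing system in one place."""

    def __init__(self, client):
        self._client = client

    def invoice_total(self, invoice_id):
        raw = self._client.get_invoice(invoice_id)
        # The workaround, named and explained here instead of scattered at
        # call sites: this remote system returns totals as strings.
        return float(raw["total"])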
I find one place where comments are absolutely critical is in external facing APIs. Summary of the purpose of the API, purpose of each method, valid arguments and possible return values. Think how often you need to read the comments for whatever libraries and APIs you use in your work.
I deal with two types of APIs. Well-documented and otherwise. For well-documented APIs, I can simply re-read the documentation to figure out the other side of my code.
For the rest of it, I try to write as cleanly as possible, and as robustly as possible the first time, so I'm not in the dark the next time I have to look at it. If I have to take time out to re-understand how it works, I'll just take the time and not feel guilty about it. The company wanted me to use that dreck, I'm not going to feel bad about having to take more time with it.
I don't think comments would help much in the second situation given decent engineering the first time around. If it's my code, I'll usually recall the idiocy I had to work around when running down whatever issue had me looking at it again.
Comments are incredibly useful for that bit of code that you spent hours trying to make work, where you couldn't figure out a way to name things well or to improve it. That's where a comment is priceless.
The downside of course is that comments aren't compiled/run - so they often become out of date. "Often" might actually be an understatement.
I can't tell you how many times I've been reading a comment that runs quite contrary to what the code really does. You then end up reading the code to reason about it anyway, paying two taxes. After too many of those experiences, you just end up going straight to the code as the source of ground-truth.
This all depends on the readability of the code / and how soon these comments get out of sync. Maybe my experience is atypical, but I doubt it.
Names for classes, methods, parameters, and variables all suffer from this same problem. I think the solution to align code and comments is code reviews. Not writing comments also works but fails to solve related issues.
Where I work, I've always required that code that's submitted for review has a descriptive commit message. That is, the commit should explain what the problem was and how it was fixed, and why it was done that way.
This way, I can run git blame (or the equivalent VCS command) on the file and get a reasonable set of comments that usually match up with the code.
Maybe it's because I write in Ruby, and so never have to do any tricky optimizing, but if solving a particular problem starts to run over a half hour, I look to re-architect the project, either by reaching for a gem or telling my boss that X is too hard and we should do Y instead. My patience for going down rabbit holes has mostly gone over the last year.
Also, again perhaps because I use Ruby, it never takes me more than a few seconds to name something. If it does, then it's a code smell and I go looking for the missing class.
I don't see how Ruby saves you from optimising (to my limited knowledge, it's not exactly a fast language), but I agree with you that naming stuff sensibly and extracting functions (which you can then name sensibly) is most important for maintainability and can make "in-code" comments unnecessary in many cases. However, I strive to always document what a function does if it's not obvious -- though I'd call that "documentation" and not a "comment".
Example for obvious: Int add(Int x, Int y) in a typed language. Example for not obvious: add(x, y) in an untyped (or "dynamically typed") language (Does it auto-coerce? How? Can it add complex numbers? In my particular representation? ...).
Someone mentioned that sometimes comments are useful e.g. to document a quirk/bug in library function you call, and I have some examples of that in my own code. But most often you should be able to rectify that by wrapping said function in your own one that omits the problem.
If it seems impossible to give a function a sensible name that isn't ridiculously long, split it up. The clearer code of the individual parts, and how they are combined, should give a hint of what is actually computed. Maybe it's a new concept in your business logic, in which case providing a clear and exact definition makes sense anyway (put it into the appropriate place in your documentation). This whole procedure can take a significant amount of time, but it will be worth it in maintenance!
> I don't see how Ruby saves you from optimising (to my limited knowledge, it's not exactly a fast language)
That's precisely why you don't optimize. If you find you need fast code, you use a different language. Ruby is the language you use when maintainability and extendability take priority over speed. It's excellent for web development, where any speed improvements you make will ultimately be dwarfed by network latency.
> Example for obvious: Int add(Int x, Int y) in a typed language. Example for not obvious: add(x, y) in an untyped (or "dynamically typed") language (Does it auto-coerce? How? Can it add complex numbers? In my particular representation? ...).
That's a good observation, and it's made me think about how I deal with this in Ruby. First, generally you can tell by looking at a method's code what it's expecting you to pass to it.
Second, you don't generally pass around complex types to library functions; you use basic Ruby value objects like strings, symbols, hashes, and arrays. A gem will often define its own classes (money, phone numbers) whose objects you might pass around; these will often be the primary focus of the gem, and how to use them will be written right there in the documentation. These classes will typically have "parse" functions that take random input and turn it into a more useful object.
In Ruby, you generally only pass around complex objects in your own code, using JSON or some other format to interact with external systems.
> That's precisely why you don't optimize. If you find you need fast code, you use a different language.
Ah, now I get you :)
> First, generally you can tell by looking at a method's code what it's expecting you to pass to it.
Here we might differ; I would always prefer to state clearly in the function doc what types of parameter values are allowed. For example, even if you have a really simple little wrapper in JavaScript:
function log(x) {console.log(x);}
Without documentation, you have to know what console.log can do for different types. So I'd definitely prefer this:
/**
* Logs x to console.
* @param x a value of a primitive type (other types are not guaranteed to be logged in a readable manner).
*/
function log(x) {console.log(x);}
> A gem will often define its own classes (money, phone numbers) whose objects you might pass around; these will often be the primary focus of the gem, and how to use them will be written right there in the documentation.
Yes exactly, it will be documented as any public API should be. I have no problems using opaque types. But a function call(x) which expects x to be some object representation and not any old string (for which the library has constructors, e.g. PhoneNumber(string)) should surely document this, no?
In my book it goes 1. types 2. tests 3. method/variable names 4. comments. If you can't make the code clear any other way then add a comment, but it should be a last resort.
Kind of like old code syndrome. I would say it is more like switching from Java to Ruby on Rails full time, then looking at your old Java code in a now-rusty language, with rusty memories of the class libraries and frameworks you used. Even with beautiful code it will be a struggle.
"Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?"
-Brian W. Kernighan
I read a book by Brian Kernighan a few years back, "The Elements of Programming Style" and for me it has so much good advice. The book may not be as relevant in the days of Ruby, C99 and the like, but valuable advice nevertheless.
I wonder if reading the literature (countless books written by the early adopters and renowned engineers) would mould the programmers to do things differently. For instance, reading "JavaScript: The Good Parts" completely changed the perspective with which I look at the language. It made me fall in love with it.
When you design anything - a rocket engine, an immunoassay product, whatever - it should be part of the work to document the assumptions, calculations, and rationale for the design to a degree that somebody skilled in the art could follow your work when you're gone.
That doesn't help. Arthur has written a very nice hundred-page introduction to the trace formula. You can read that, and it won't tell you why you should care about the trace formula. But if you know Langlands-Tunnell and how it was proven, or how Jacquet-Langlands was proven, you already know to care about the trace formula because you know that it will prove similar results.
The abstraction is necessary: you cannot do algebraic geometry with varieties alone. But while the core of the subject is a multithousand page jewel of abstraction, Vakil is a perfectly readable, example filled introduction to algebraic geometry, covering most of the contents of the glory that is EGA.
Only the rare lemmas are intrinsically interesting. Most often it's the theorems that are worth knowing as steps on the path to understanding something much more. Those lemmas that get used again and again are named, often descriptively.
The one thing that has always bugged me about lots (most?) of open source software is the almost total lack of comments. I consider that bad programming. I have projects dating back 15 years, in both FPGAs and software, that I've had to go back into and maintain or borrow from. Comments, for me, have always been part and parcel of code writing.
I sort of have a conversation with myself as I document code, where I tell myself why something needs to be done and, if necessary, how. As a result, anyone can go into my code for any project at any time and find their way around. In fact, they don't even have to be domain experts to understand domain-specific code, because I often take the time to document such things pretending I am just learning about them (to a point).
I recently picked up an Angular-like JavaScript framework I built from scratch after about 9 months away (working on it now actually), so I know exactly what you're talking about.
Over the years, I've stopped and started about 3 pretty major personal projects and through that I progressively got better at writing and documenting code in a way that make it easy to pick back up after a long break.
I've brought up this issue in mathematical circles before and have encountered incredible resistance. The culture is very deeply entrenched, and the current stakeholders are not going to allow it to be changed.
Just to briefly establish that I have some qualifications here, while I do not have a PhD and am not nor have I ever been a professional mathematician, I did have a lot of success in mathematics as an undergraduate, publishing research in the intersection of algebraic topology, differential geometry, and analysis with a professor at an Ivy League university.
I was myself a practitioner of the school of It Is Clear That..., until I realized that my mentor for the above research, a famed professor and former Putnam Fellow who even has inequalities named after himself, didn't follow some of the stuff I'd written in some notes I showed him. And if he didn't follow it, then I knew that meant I probably didn't understand it as well as I thought I did. It wasn't a big surprise to me then that, after a few weeks of effort, I was completely unable to take my approach further. His approach, of course, worked.
He represents one of few mathematicians I've personally known who emphasized clarity just as much as correctness and technical depth, and he had no desire to appear smart. But at this point in time he already had his career mostly behind him. There was nobody left for him to impress.
Part of the problem, ultimately, is that phrases like "it is trivial that..." do serve a purpose when used appropriately. As a result, it's hard to argue that they should categorically be excluded from mathematical writing. But that then opens them up to abuse, and in a culture so obsessed with appearing smart, that abuse can be quite extreme sometimes. Because instances of abuse are often motivated by the desire to appear smart rather than by legitimate concerns about economy of space, bringing up the issue with the author amounts to a moral accusation against them, which makes their defense of the phrases even more impassioned. They're not going to admit that they're just trying to puff up their ego.
The worst part is that, in some circles, if you criticize this sort of writing you'll just be greeted by a bunch of people who are proud to exclaim that, of course, they are smart enough to fill in the blanks, and if only you were too then you surely couldn't possibly still have any objections to it.
For me, transitioning from mathematics to software development was a huge breath of fresh air, as the culture is (generally) the exact opposite. Nearly every good programmer I've met has valued clarity and maintainability equally alongside correctness.
I kind of developed that style organically as I went along, in both PHP and JS.
I tried to make the output readable as well. I've seen many frameworks output a bunch of gibberish, but with today's bandwidth it's not such a big deal to add a little bit of whitespace and make everything readable. Here's a fairly complex application, you can view the source of any page:
I can give you some feedback since I tend to write a lot of PHP.
Mostly, code style is about consistency. You can have infinite arguments about whether opening braces should go on the same line as the operator/function, but at the end of the day as long as you stay consistent it doesn't really matter.
I much prefer 2 space indents to tabs. To me, it's much more readable. I noticed your HTML output is also tabbed, which (arguably) needlessly increases the horizontal width you need to see a page of code, especially with modern deeply-nested structures.
Overall the project looks well thought out, and you may have valid reasons for any/all of the above. Just throwing in my initial reactions as a project outsider.
Also, dunno if you know about PSR, but here's a good starting point that many people (potential contributors) are familiar with: http://www.php-fig.org/psr/psr-2/
>> of a fairly new developer who's just been asked to do non-trivial update of his own code for the first time
Fairly new has nothing to do with it. I've been struggling to read my own code since 1998, and I'm constantly in battle with myself to strategically place comments at the right places. I doubt that I'll ever correctly estimate my own ability to comprehend my sh1t no matter how many years of experience I have. In fact it seems to have gotten worse with experience. The more I know, the less I can trust myself. Not sure if I'm alone in that.
I've been writing code professionally for 25 years. You're not alone. It does get worse. Junior programmers just don't get it and jump into management before they have a chance to.
Mathematics papers (dissertations included) are meant for active researchers. Perhaps that implication really is obvious to an active researcher in the field. The fact that it is not obvious to someone outside of its intended audience is not a mark against the paper. If I said to someone in machine learning that by assuming zero-mean Gaussian priors for the coefficients, we are encouraging small values of the coefficients, that would be really really obvious to someone in ML/stats/data science (I've restated a really basic statistical learning concept), but not to a random educated person.
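To spell out that particular example (standard textbook material, not tied to any one paper): with a zero-mean Gaussian prior on the coefficient vector w, the MAP estimate is exactly an L2-penalized fit, which is where "encourages small coefficients" comes from.

    % MAP estimate with likelihood p(D|w) and prior w ~ N(0, \tau^2 I)
    \hat{w}_{\mathrm{MAP}}
      = \arg\max_w \bigl[ \log p(D \mid w) + \log p(w) \bigr]
      = \arg\min_w \Bigl[ -\log p(D \mid w) + \tfrac{1}{2\tau^2} \lVert w \rVert_2^2 \Bigr]

A smaller prior variance \tau^2 pulls the coefficients harder toward zero. Obvious once you live in the field, genuinely not obvious from outside it, which is the parent's point.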
My response is that for every 100 of these types of papers, one of them may prove to be pivotal or inspirational in something truly groundbreaking and functionally useful. For this reason, I am all for 100 different people spending their time doing things like this, because eventually one of them will make an impact that is greater than 100x the efforts of 100 normal men.
It's just a different kind of "brick in the wall" - only the diamonds in the rough can turn out to be hugely important for something else in the future.
Great point. I think this applies to scientific research in general, which is why the constant emphasis on only funding research with clear and immediate economic payoff seems a bit shortsighted.
In reality, chances are that most research won't lead to anything significant - but the 1% that does will have outsized impact that will more than pay for the rest. And we don't know which 1% this will be in advance.
Unfortunately, you can very easily tell that what is excused as basic science is often just 'solutions looking for a problem'. Most "basic science" these days is just professors looking to get paid with no accountability.
Although there have been a few Fouriers across history, the most compelling brilliant scientists had one foot in applied science while working on theory in their spare time: Euler, Gauss, Faraday, Langmuir. Most of the best basic science had goals "to explain something pressing" (which is not the same as "do something because there will be no payoff") anyway, like Planck, Einstein, Peter Mitchell.
There's something to be gained from cutting your teeth on problems with results, instead of just lollygagging about in theoryland.
Feynman talked about this in an exercise where he described the motion of spinning discs - a very "applied" problem, and remarked that later those insights proved useful in a separate, unrelated problem.
Exactly. I have the broadly same issue with my doctorate: I know essentially what I showed, but I only ever really completely understood my thesis's more abstruse passages around the time I was writing them (and being examined on them). But I don't find this surprising or concerning. The overriding motivation for doing a PhD was to try to contribute some original knowledge and insights to a field I find interesting. All this article talks about is, what did my PhD do for me personally, and tangentially how can the PhD experience help to train the mathematical brain. To me those are secondary issues, my reward is that my thesis, although hardly earth-shattering, gets checked out of the library every so often, and I get emails from interested and interesting people asking me about it.
I don't necessarily agree with that point of view. Even if a paper brings nothing new to the pot, it might help others get into a field or a problem. I often find myself reading the introduction and first sections of useless (to me) papers just because they do a really good job of explaining things that are confusing. These last few years I've really come to appreciate papers as a learning tool, instead of class notes or books.
Who's to say it doesn't bring anything new? If it's never been written before, it's new.
I often see technology growing in a "staircase" fashion. Things plateau, and then there's a period of many small 'chink in the armor' improvements... and then one of those ideas inspires something that floods into a whole new area.
Sometimes it takes a lot of smart brains throwing time at 'useless' junk before that insane theory is found. Other times we just await an Einstein-like brain to come on the scene and activate new stuff.
Math was always extremely easy for me growing up. Up through my first differential equations class I found almost everything trivial to learn (the one exception is that I always found proving things difficult).
I made the mistake of minoring in math and that slowly killed my enjoyment of it. Once I got to differential geometry and advanced matrix theory it all just became too abstract and I just wanted to get away from it.
For several years after college I would routinely pull my advanced calculus text out and do problems "for fun". After a while I stopped doing that. Within a few years of no longer being exposed to math, I found it all incredibly foreign and challenging, to the point where I would say I have a bit of an aversion/phobia to it.
I'm trying to reverse that now by tackling a topic I'm interested in but have previously avoided due to the math-heavy nature of it - type theory.
Hopefully I can find the joy in math again through this.
I think my point is that you can lose competence in math very very quickly through lack of constant exposure.
The same is probably true of programming but I hope to never end up in that position.
Not in the American school system, at least. AFAICT grade school "math education" is mostly about lodging a particular calculation algorithm implementation into students' heads and making them repeatedly run it, on a variety of test inputs. A large majority of Americans will never do a single bit of mathematics in their lives, cradle to grave.
To be fair, there are lots of efforts to change it. But (being pretty harsh here) a significant number of math teachers really don't have the ability to understand math themselves, let alone teach it.
When you apply that to proofs it's awful. To use a gaming analogy, the real world is a sandbox game where you can build things any way you want as long as you follow the rules, but in K-12 school, proofs are either for exact memorization, or crazy contrived things on rails that you're only supposed to solve the one correct way.
Sort of a Perl "there is more than one way to do it" vs a Python "there should only be one way to do it". Nothing inherently wrong with either, other than if you and your educational system philosophically disagree, it's going to go extremely badly for you.
I don't care about K-12, since I am an unAmerican. But I easily believe you if you say that the subject called 'mathematics' in school is horrible. (And has nothing to do with 'mathematics' proper.)
“Proofs are to mathematics what spelling (or even calligraphy) is to poetry. Mathematical works do consist of proofs, just as poems do consist of characters.”
I very much like the related point Paul Lockhart makes in "A Mathematician's Lament"[1]: that mathematics is an art form and ought to be taught like one.
> Up through my first differential equations class I found almost everything trivial to learn
Mankind hasn't yet figured out a good way to teach the first Differential Equations course.
The first serious Calculus course introduces some level of rigor and formalism, but this alone is not sufficient for Diff Eqs. And the DE courses are typically leaning towards applications, so they tend to succumb to the "fiddle these dx's and dy's around like it's magic, in the end it might work out" approach.
The business is well founded, which you will discover later. But no one has figured out how to massage that into the first course.
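For what it's worth, the justification for the separable case is short (standard calculus, just rarely shown in the first course):

    % separation of variables, with the substitution rule doing the work
    \frac{dy}{dx} = g(x)\, h(y), \quad h(y) \neq 0
    \;\Longrightarrow\;
    \int \frac{1}{h(y)} \frac{dy}{dx}\, dx = \int g(x)\, dx
    \;\Longrightarrow\;
    \int \frac{dy}{h(y)} = \int g(x)\, dx

which is exactly what the "move the dx across" shorthand produces; the chain rule is quietly doing the work the notation hides.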
It's been a while but I think this is true of a lot of introductory courses in general, like Organic Chemistry. "Trust us and memorize these rules the basis of which will become clearer down the road."
During my PhD I found I had to occasionally lock myself in a conference room with a bunch of papers and all the whiteboards to do a sort of "deep dive", in order to get everything into my head and push things forward. Things that were a bit rusty quickly came back after a little bit of effort.
At the end I'd emerge with a bunch of photos of whiteboards with clear intermediate steps, and then I'd write it up as a short report in LaTeX. Of course, as the report turned into a paper, I took out all the useful 'trivial' steps - which always feels a bit wrong. I think it's also useful at this stage to drop everything for a month or two and come back and see if you still understand what you wrote.
I think the "deep dive", putting prolonged effort in, and writing it up are all key aspects.
When I was studying, I went to two schools, one using semester system and one using quarter system.
With the semester system I had plenty of time to read the material and really explore the mathematics behind what I was learning. With the quarter system I was so busy running around and catching up in different classes that I never really had the time available to sit down and do that. It sucked. I think the quarter system kind of 'beat' the interest out of me because now it is hard to get back into that state of mind.
The quarter system is pretty non-optimal, I think. The college I went to ran on ten-week quarters, and even though it was normal to only take three courses, when each of those three is trying to jam an entire semester's worth of material into ten weeks, it's hard to even keep up with a surface level coverage of the material, much less dive deeply into it. It was pretty standard to have about 500 pages worth of assigned reading, plus extra research and writing, studying for exams, writing and debugging code. Fortunately, I was not an engineer or a hard-science major, so I didn't have the additional overhead of weekly lab work. When people took organic chemistry, it was a running joke that they were going on the "campus foreign-study program", since you'd see them about as often as you'd see friends that had gone off to Argentina or France for the term.
I can totally relate and to this day I describe one of my greatest failures as a manager as analogous to how "easy" math was to me. Until Calc II I really found nothing about math challenging at all. The result of this was that everyone wanted me to tutor them in math. The problem was that I didn't solve math problems like other people did, in fact I had no idea how I did what I did. I just did it. As a result when I tried to teach someone else it was an unmitigated disaster.
Fast forward to the real world, one of my biggest challenges is helping people figure out how to get from A-Z. A is obvious to me. Z is obvious to me (although I'm probably wrong as much as I'm right), but getting someone to understand my reasoning is almost impossible.
Study "false proofs", try to see where they go wrong. This will teach you what details need to be spelled out in detail to easily tell the difference between a false proof and a true proof. You will learn how to demonstrate you have nothing up your sleeves so to speak.
I can relate to this. My major was EE and I had similar experiences, but I kind of liked diffeq. Fast forward some 25+ years later, I spend all day coding and look to my old math and some new comp sci books for fun. I keep an engineering notepad with different sections in it so I can just open it up and attack problems, such as problems from Concrete Mathematics by Graham, Knuth, and Patashnik, or physics problems. Even though I code through business problems on a daily basis using X framework and Y API, I find the rigor involved is not as satisfying as struggling through finding the closed form of some equation. I want to solve a deeper problem, if that makes any sense.
It isn't the answer that I seek in doing this; it's the desire to know why the answer is what it is. I think I have become more interested in learning far after graduating from university.
General skill of problem solving seems to stick with you forever.
It's probably because most of your problem-solving knowledge is your own invention. Whereas math and programming are only your own invention if you solve all the problems yourself, if you code everything from scratch. That is what gives you the understanding of the problems, and the reasoning/tools to solve them.
I've been dropping in and out of problem-solving fields through the years, and I've noticed that my knowledge practically disappears, but my problem-solving skills can adapt to any framework (mathematics, physics, programming); it just takes a little bit of time to re/learn the terminology.
The same is indeed true of programming, at least in my experience. I took a management/consulting detour that lasted about 18 months, and getting back into programming was quite tough. I could not focus, could not hold the problem in my head, could not go from design concepts to implementation details. It took me a solid 6 months to get back to my former level.
> I think my point is that you can lose competence in math very very quickly through lack of constant exposure.
This is not that important in my opinion. If you're lacking exposure, it must be because you don't apply math every day, and it won't be a big deal. If you apply math every day? Then you will quickly catch up with whatever field you're using.
If you're in pure math, then none of that matters as well, only proofs do. And lack of exposure will often not harm any of the tricks you learned while constructing/understanding them.
Not sure if it's related to your username, but I have a book (that was recommended to me here, in fact) on this topic, which I was curious about due to its implications in the programming field.
>(the one exception is that I always found proving things difficult)
Same here and I have developed a kind of a complex around it. I used to spend hours practicing proofs just so nobody comes to know that I'm terrible at them. I used to find numerical proofs to problems that had simple general proofs.
However, looking at the other replies it seems we both haven't the faintest idea what Math is ;)
An interesting read. But I think the author should have explicitly written out the point he is really making: you can't be too careful about making your writing clear, even to yourself. I recall reading (I'd point to the book with a link if I could remember in what book I read this) that mathematicians who occasionally write expository articles on mathematics for the general public are often told by their professional colleagues, fellow research mathematicians, "Hey, I really liked your article [name of popular article] and I got a lot out of reading it." The book claimed that if mathematicians made a conscious effort to write understandably to members of the general public, their mathematics research would have more influence on other research mathematicians. That sounds like an important experiment to try for an early-career mathematician.
More generally, in the very excellent book The Sense of Style: The Thinking Person's Guide to Writing in the 21st Century,[1] author and researcher Steven Pinker makes the point that the hardest thing for any writer to do is to avoid the "curse of knowledge," assuming that readers know what you know as they read your writing. It's HARD to write about something you know well without skipping lots of steps in reasoning and details of the topic that are unknown to most of your readers. This is one of the best reasons for any writer to submit manuscripts to an editor (or a set of friends, as Paul Graham does) before publishing.
And, yes, if you think what I wrote above is unclear, as I fear it is, please let me know what's confusing about what I wrote. I'd be glad to hear your suggestions of how to make my main point more clear. I'm trying to say that anyone who writes anything has to put extra effort into making his point clear.
>We miss the mark by several major grades of expertise. Aiming for outside academics gets you an article that will be popular among specialists in your field. Aiming at grade school (admittedly, naively so) will hit undergraduates. This is not because your audience is more stupid than you think, but because your words are far less helpful than you think. You're way way overshooting the target. Aim several major gradations lower, and you may hit your mark.
I've been working with Haskell* for a couple of years and it is quite often that I work with code that I don't fully understand. I'll come across a terse bit of code, then carefully take it apart to see what it does (by taking bits and pieces out and giving them names instead of passing in using point-free notation and also adding type annotations). Once I see the whole picture, I make my own change and then carefully re-assemble the original terse bit of code. One could ask the question: wasn't the verbose version better? I'm going to lean on the side of no. If I left this verbose and other bits verbose then it would be hard to see the whole picture.
I think doing maths would be better if it was done interactively with software. If equations were code then you could blow them up and look into the fine details, then shrink them back to a terse form, while the software keeps track of the transformations to make sure what you write is equivalent. Maybe it's time to add a laptop to that paper pad?
* not arguing anything language-specific here, except that Haskell makes use of a variety of notations that make the code shorter and more like maths. More so than most languages.
> If I left this verbose and other bits verbose then it would be hard to see the whole picture.
I really can't sympathize with this. How exactly is this helping anyone at all, if you have to struggle with it yourself? Is it a bunch of dense monolithic code? Decompose it into smaller methods / separate files. Set up your text editor/IDE in an effective way for quickly navigating across large chunks of related code. Imho there is a world of difference between terseness that helps readability and refactoring vs. terseness that makes you want to bang your head against the monitor.
That said, I think the requirements, or say the qualities which define good code and a good dissertation, are quite different. Code needs to be maintained, refactored and altered throughout its lifetime; a dissertation might only need to be built up and understood once to prove a particular result, which can be re-used after that.
The way I see it, I have limited capacity to build a mental picture of whatever I am working on. When I'm looking at pages and pages of verbose and repetitive code, it is quite hard. What does this bit do? Just checking the error condition and re-throwing the error. What does that bit do? Same boring stuff. Where is the meat?
When I'm looking at few lines of terse but complicated code, it is easier; it is all meat and little fat. Just enough to make a good steak.
But this only works if I understand the mechanics of that terse code. So when I work on something else for a while and I come back to some code for which I no longer have an accurate mental picture in my brain I need to refresh my memory.
I think mathematics is the same way. Imagine a full A4 page of equations. It is really hard, at least for me, to hold in my brain a mental model of what it all means. Sure, there's a ton of background that I need to be familiar with, but it's not in my mental picture. Imagine this: suppose you wrote out the rules for how addition works, and then multiplication, and then built it all up so you can do linear algebra. That's too much!
When I advocate terse code I don't mean it in a "here's my obfuscated C code sense". I mean that when I write "f . g . h" there might be more going on here than meets the eye, but as long as you know the rules of what . means in this context, it is super easy to follow.
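Not the parent's code, just a made-up micro-example of the trade-off being described (hypothetical names):

    import Data.Char (toLower, isAlphaNum)

    -- terse: readable in one glance once you know (.) composes right to left
    slugify :: String -> String
    slugify = map toLower . filter isAlphaNumOrSpace . take 80

    -- the "blown up" version you might temporarily expand it into while working on it
    slugifyVerbose :: String -> String
    slugifyVerbose s =
      let truncated = take 80 s
          cleaned   = filter isAlphaNumOrSpace truncated
      in  map toLower cleaned

    isAlphaNumOrSpace :: Char -> Bool
    isAlphaNumOrSpace c = isAlphaNum c || c == ' '

Same behaviour either way; the question is only which form is easier to hold in your head once you trust the rules of (.).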
I find there's a huge difference between code that fits on a single screen and code that doesn't (and I've heard claims this is backed by research). So I'd far rather have lines that I have to stare at for a while to unpack than lines that are individually simple but I have to scroll up and down or jump back and forth to see the whole method.
Proof assistants are in some ways very similar to what you described. Coq [1] is a popular example. It helps control complexity of larger proofs and verifies that everything that is derived is correct.
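For flavor, here's roughly what a machine-checked statement looks like, sketched in Lean (a proof assistant in the same family as Coq); the point is that the checker, not the reader, verifies every "trivial" step:

    -- a tiny machine-checked proof: addition on naturals is commutative
    example (a b : Nat) : a + b = b + a := Nat.add_comm a b

    -- even the "obviously" true facts get checked
    example : 2 + 2 = 4 := rfl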
I think it's about balancing various things. Having less code is good unless there's a hidden catch you need to be aware of or it takes a few weeks to unravel what the code does.
I prefer code that is understandable right away, is consistent and doesn't have any surprises.
A good example is Go. Many have written blogposts describing how great it is that it's such a simple language because the code is easy to read. And I can't deny that. The code is simple to read.
But then I read through pages and pages of such code, all with little meaning. Here's a loop, here we check for an error condition, here's another loop, here we check for another error condition. It makes it harder to see through all that and answer the question "what does this code try to accomplish?". At least for me, the more code there is, the harder it is to see.
All of these arguments are arguments for replacing the mathematics curriculum with video gaming. Games require generalized problem solving (arguably better-generalized than math, and arguably better-transferrable to other domains). Games build character: grit and tenacity, cautious optimism, etc blah blah etc. And games are fun (for many more people than find math fun).
Guess math teachers should start learning to play League of Legends and Pokemon.
Alternatively, I guess we need better reasons than those to teach a subject.
Don't laugh it off. Personally, I learned to code by building Magic: the Gathering decks and playing at tournaments at high school. I learned things about resource management, reducing solutions, integrating disparate components into a functional system and so on, and even a bit of probabilities along the way. Not to mention what it did for my ability to concentrate and analyse an adversarial situation.
If you think about it, a lot of education is really a kind of game and games themselves are often educational, usually by accident.
Frex, I think a lot of people would recognise the value of teaching kids to play chess in order to improve their concentration and problem-solving skills. Well, why not more modern board games?
Video games are designed for grinding or following a fixed story, and your creativity is completely limited by the possibilities programmed for you. The exceptions to this (minecrafters designing Turing machines, hacking pokemon) are an insignificant fraction that are entirely discovered and engineered by those who understand mathematics and computer science and apply it to the video game.
Why do you think the kinds of problem solving in video games is "better generalized" than math?
Math seems to have a very ephemeral lifetime in the brain. I skipped a year of college once, and when I returned I realized I had to basically abandon any major with a math requirement, because I had seemingly forgotten everything.
I'm currently struggling with an online Machine Learning class (the Coursera one... at the tender age of 43), and I can only take it (so far, at least... just failed my first quiz... fortunately I can review and re-take) because I was rather obsessed with matrices, oh, about 28 years ago. "You mean I can rotate things in an n-dimensional space without using trig?"
I'm truly shocked by the multiple people in the thread who claim that Math knowledge can be completely erased through as little as a year of non-practice.
For me, Math has always resembled riding a bike more than anything else. Sure, the first few moments, the path is a bit overgrown and all the weeds need to be cleared off but it was always significantly easier revisiting a topic than understanding it for the first time.
For those who forget so quickly, I wonder if you felt like you truly understood it in the first place?
I recently took on a calculus problem (multidimensional real optimization), I guess for the first time since I was a student.
While your description has some merit, there's a huge amount of trivia in the format of "I can do X, I just have to do Y first", "X has no known solution, try something else", and "operation X is very useful, try it". That goes away, and everything gets way harder.
I've always felt that if I had done all of my calculus with Mathematica I would have left college with an excellent grasp on how to use higher level functions provided by Mathematica that would have largely abstracted away all of this.
Of course, the higher level functions might get covered in cobwebs - but I suspect not the same way; I would have kept these higher level skills up to date because:
- I recently went though a couple of books on Bayes and computer vision. I would have used Mathematica - refreshing my memory.
- I sometimes need to do some stats / analysis - Refresh...
- I recently picked up a Student's guide to Maxwell's equations - Refresh...
- I need to help my children with Calculus...
If I had been using a high level tool my whole life I think I actually would make use of calculus and other mathematics.
Yeah, you should always, always code what you're thinking about, IMHO. I once turned in a take-home differential geometry final in the form of an IPython notebook because I found computing curvature coefficients so tedious. Debugging the thing to pass all my unit tests (not to mention solving the test question) probably gave me the best understanding of anyone in the class.
I had a similar experience - I learned symbolic differentiation largely because I happened to pick up a book on Prolog about the time we started covering it at school, and the book gave symbolic differentiation as an example. Not having a Prolog interpreter, I rewrote the thing in Pascal, and then wrote an expression parser for it. Debugging my Pascal translation really hammered home the rules for me at the time (and subsequently writing an expression parser for it was what got me interested in compilers).
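The core of that exercise is small; here's the same idea sketched in Haskell rather than the original Prolog/Pascal (hypothetical type and names):

    -- symbolic differentiation over a tiny expression language
    data Expr = Const Double | Var | Add Expr Expr | Mul Expr Expr
      deriving Show

    -- d/dx, applying the sum and product rules literally
    deriv :: Expr -> Expr
    deriv (Const _) = Const 0
    deriv Var       = Const 1
    deriv (Add f g) = Add (deriv f) (deriv g)
    deriv (Mul f g) = Add (Mul (deriv f) g) (Mul f (deriv g))

    -- e.g. deriv (Mul Var Var)
    --   ==> Add (Mul (Const 1.0) Var) (Mul Var (Const 1.0)), i.e. 2x before simplification

Translating and debugging something like this is exactly where the rules stop being incantations and start being obvious.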
That's roughly what I did, and it worked out about as well you predict. As soon as I understood what was going on with some kind of math, I'd use whatever tools I had to automate it: calculators at first, then computer algebra systems, numpy, whatever. No regrets; it was a great time-saver with very few drawbacks, and I made good educational use of the time it freed up.
It speaks to how finite we are around "knowledge." At the moment we reach understanding, we experience a sophomoric feeling of confidence. But as it fades farther and farther from our working memory, we become less fluent and more hesitant. The emerging pattern becomes one of "I can understand these interesting concepts, but it takes a lot of work and they don't last, so I have to choose to understand practically and situationally." And then in the end our bodies and personalities turn out to control our minds more than we might want to believe, as we turn away from one problem and towards a different one on some whim, never able to view the whole.
As I recognize this more in myself, I am more inclined to become a bit of a librarian and develop better methods of personal note-taking and information retrieval, so that I lose less each time my mind flutters. At the moment that's turned into a fascination with mind maps - every time I need to critically-think through a problem I start mapping it. In the future I might look into ways of searching through those maps.
You might also check out spaced repetition software. I've started putting anything I want to remember for the long haul into org-drill, an SRS package for Emacs, though Anki and Mnemosyne are more well known. I even schedule articles that I want to return to.
I took machine learning courses and read machine learning textbooks a few years ago, and I have fond recollections of the derivations from Tom Mitchell's textbook.
Where other textbooks tended to jump two or three steps ahead with a comment about the steps being "obvious" or "trivial", Mitchell would just include each little step.
Yes, you could argue it was my responsibility to remember all of my calculus and linear algebra. But it is kind to the reader to spell out the little steps, for those of us who maybe forgot some of our calculus tricks, or maybe don't even have all of the expected pre-requisites but are trying to press on anyway. Or actually know how to perform the steps but have to stop and puzzle through which particular combination of steps you are describing as "obvious" in this particular instance.
I just remember how nice it was to have those extra steps spelled out, and how much more pleasant it made reading Tom's book.
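As an illustration of the kind of little step that is nice to see written out (a generic least-squares gradient, not quoted from the book): for a linear unit o_d = w . x_d and error E(w) = (1/2) sum_d (t_d - o_d)^2,

    \frac{\partial E}{\partial w_i}
      = \frac{1}{2} \sum_d \frac{\partial}{\partial w_i} (t_d - o_d)^2
      = \frac{1}{2} \sum_d 2\,(t_d - o_d)\, \frac{\partial}{\partial w_i} \bigl( t_d - \vec{w} \cdot \vec{x}_d \bigr)
      = \sum_d (t_d - o_d)\, (-x_{id})

Every equality is "obvious", but having the middle chain-rule step on the page is what saves the stop-and-puzzle moment described above.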
> I have attempted to deliver [these lectures] in a spirit that should be recommended to all students embarking on the writing of their PhD theses: imagine that you are explaining your ideas to your former smart, but ignorant, self, at the beginning of your studies!
I find it gets harder and harder to do as you become progressively more immersed in the topic. I like to think I'm pretty good at explaining concepts in physics to lay audiences, but the more real physics I do, the more I think like a physicist, and the less I can see what an explanation looks like to the layman.
Quantum mechanics is the worst. I dislike a lot of the "popular science" language and analogues used to describe it, but the real (academic) pedagogical material is completely inappropriate for regular people. I'm worried that I'll be an inscrutable physicist before I grok it well enough to explain to a highschooler, though.
First, I am not sure Functional Analysis is as obscure as some other areas. But, second, this just shows, once again, that one ought never to use "clearly," "obviously" etc in proofs.
It is the same principle as writing programs so they are easier for the next programmer to read. That person may be you.
I don't read maths papers, but in CS papers too, words like "clearly" and "obviously" serve as red flags that scream "hand-waving or big gaps coming up".
Often they're understandable short-cuts, but often it also turns out that the author has left out very substantial chunks of knowledge, or clearly doesn't understand why they got the results they did.
In CS papers there's an additional red flag: maths. Outside of the few maths-heavy areas of CS where it is justified, if a CS paper is full of equations, it's a good sign the authors will have glossed over a lot of essential information, such as parameters that often turn out to be essential for replicating their results. Not always, but often enough for me to be wary.
I'm guessing it is because in the instances that include pseudo-code or working code, it is instantly obvious that something is missing, both to the author and to reviewers. When it's obscured in equations, it takes more effort to identify the same flaw: so many steps are legitimately left out by convention that it's non-trivial for someone not steeped in the same notation to determine which bits should be defined and which are unnecessary. I'm sure most of the time it's not intentional. But taking that shortcut seems to make it a lot easier to forget which additional information is actually necessary. And the irony is that I've seen plenty of examples where the equations have taken up just as much space as pseudo-code or even a working-but-naive implementation would have.
Wait, let me get this straight. It's a red flag when a computer science paper has math in it? Computer science (in the asymptotic limit) is math. And the papers ideally should read like math papers. Otherwise it's not CS.
> Beyond scarcely stretching the boundaries of obscure mathematical knowledge, what tangible difference has a PhD made to my life?
The same thing a bachelors degree does for everyone else. You've proven that you can start, stick with, and complete a task that takes multiple years and a complicated set of steps.
Which feels like a 'life is suffering' weirdness. I did it and never needed my diplomas for anything. Maybe I benefited somehow but I was a coder and entrepreneur before that; I had a software company before uni. It was over 20 years ago and in hindsight I find it pretty pointless and a waste of time. Maybe I became a better problem solver on some level but unless you are going into research or are not a self starter I would not recommend it.
I went into taking my MSc explicitly to be able to add the letters to my resume - I started not long after the dot-com bubble burst, as a precaution. I don't regret it; I learned some interesting things during my thesis (the rest of it was regurgitating stuff I already knew; but it was distance learning so I didn't have to put in much effort), but similar experience - I don't really use it much. If I'd done it full time, it would have been a tremendous waste of time, though.
But a lot of the reason for this experience for me at least was that I started uni after having spent about 15 years learning to program already, and by the time I picked up again and did my masters, I'd had another 10 years of commercial software development experience.
These things are not really geared for people like us that came to them with a lot of pre-existing knowledge, but at people like my class-mates first time around that had hardly touched a computer before, and that did need a lot of hard work to come out with a good understanding of the problems.
As a hiring manager this is why I rarely care whether someone has a degree or not if they can demonstrate experience. And on the other side of the table, I only took that degree because in the UK there are still sufficiently many employers that have an obsession with degrees regardless of experience...
It also supplies the degree holder with a social signal which says: I can buy into the establishment. I say this with my newly minted BSc. in hand. In the process of obtaining it I realised that at a minimum, all you have to do to get a degree is satisfy the course requirements. I took some shitty courses that I knew were largely wastes of time [0]. Doing them anyway equips me with proof (degree) that I can submit to a system I disagree with if I have to. This excessively cynical attitude is the product of being a cog in a massive degree making machine whose graduates are on average, I'd say, mediocre [1].
[0] I'm looking at you, three years of 'numerical methods', which reduced to memorising algorithms to perform by hand.
[1] I count myself amongst the absolutely useless 'applied mathematicians' from my class.
As a PhD in applied math, I must say I concur wholeheartedly with the author. The true value of a PhD in a quantitative field is less about specific domain knowledge, and more about the set of general problem-solving skills you pick up.
This raises an interesting question. Is the PhD process the best way to acquire those general problem solving skills? Or is there a better way to learn them?
From my experience of both academia and life in general, I'd say traditional tertiary education -- meaning lectures and research work from undergraduate upwards in a university setting -- is actually quite a poor way to learn anything.
I'd say the ideal way to develop knowledge, understanding and skill in almost any field is a combination of systematic practice and receiving personalised guidance and training from someone who thoroughly understands the field itself to at least the level you are trying to reach and is able to share that understanding effectively based on your current level of understanding.
Sadly, this is usually hopelessly unrealistic, because there are nowhere near enough suitable trainers around to give everyone close to 1:1 training in that format. But the further we drift from it, the more impersonal and generic training becomes, the more isolated individual practice becomes, the less immediate and detailed feedback becomes, the less effective the training regime as a whole will be.
Given that neither undergraduate-style mass lectures nor postgraduate-style research are particularly efficient at conveying useful information, guiding practice, or promoting rapid and actionable feedback, I personally don't rate either particularly highly on an idealised scale. Perhaps a more practically useful question would be whether there are ways to improve university-level training that are realistic given the time and money constraints and, beyond a certain point, the lack of many if any people who actually are more knowledgeable or skillful in increasingly specialised fields than the research student who is dedicated to exploring them.
Both the CS and maths undergrad courses at my uni were structured with voluntary lectures coupled with compulsory group study with 10-15 students led by a post-grad TA for many of the larger courses. I skipped most of the lectures, and focused on the group study, and it was far more rewarding.
On the subject of lectures, it does slightly surprise me that in 2016 we still have researchers with neither much interest in teaching nor the presentation skills to do it well being asked/compelled to deliver undergraduate lecture courses at individual universities. You'd think with the easy access to video presentations and supporting materials now offered by the Internet, universities might have collaborated by now to build the personal elements of tuition around video lectures given by academics who do have the interest and are gifted presenters.
On the other hand, I suppose that would expose how little personal attention many students actually receive in return for the fees and debts they take on, and universities don't want to encourage potential students to question how much real value they provide. Surely it would be more reliable and efficient as an education method if they focused their efforts on small group tuition and individual guidance, though.
An interesting question indeed, but too broad to make much progress on. If you have ideas for a) an incremental change to the PhD process that would improve this or b) an alternative that might work better then please do pursue them. (Remember that the credential aspect is important as well as the actual learning).
In general (and in my opinion), this is not the best way to acquire such skills. But it is a wonderful way to spend 5–7 years in a relatively stress-free environment while you exercise your brain, explore different avenues, and pick up some hard skills and stick-to-itiveness along the way.
Math is the shadow universe of physics. Most theorems may not look like they are useful for anything in the real world until someone is able to peg all the variables to the real world. And then, as if by magic, we realize we already know how the real world behaves. Until someone does this pegging, the theorems sit idle, waiting for problems to solve. I believe this is actually a good thing. We are letting people find solutions before someone finds problems to use them for.
I disagree. A mathematical result doesn't have to have direct application to the real world for it to be useful. Drawing analogies between the real world and mathematical concepts is very powerful. I have a good example:
Number theorists (amongst others) are seemingly obsessed with bounding things. There are entire books written about obtaining and then refining bounds - which appear to be nothing more than inequalities. There is great real-world value to be derived from seeing 'inequalities' as tools to leverage. Brian Kernighan once commented that controlling software complexity is the essence of programming [0]. I believe similar thinking applies to other aspects of software engineering, and product and business development. If you can take a hard problem, and bound its complexity, then you can say "the problem is no more complex than this". This is very useful. The chief value proposition of many SaaS businesses is the trivialisation of the upper bounds of complexity of hard problems. For instance, for many developers, Heroku makes the complexity of deployment very low.
> I believe this is actually a good thing. We are letting people find solutions before someone finds problems to use them for.
This may lead to a few problems too:
(1) Prior knowledge can serve as a blinder. Our conceptual toolkits may end up consisting of unwieldy theorems that while useful are cumbersome to use.
(2) The number of theorems with no immediately evident practical use is so large that they may end up being "forgotten" anyways, and this will result in us having to do the "mental heavy lifting" all over again.
If you find yourself saying that you gained nothing from your education other than soft skills, maybe you should have passed over the functional analysis part and put the effort directly into learning said soft skills. I'm in the same boat, and I can see how it can be hard to admit this.
My PhD is in physics, from 20+ years ago, and I would not be able to explain or defend it today without studying it for a while. I've even forgotten the language (Pascal) that I wrote my experimental control and analysis code in.
My experiment formed the basis of a fairly productive new research program for my thesis advisor, so at least it lived on in somebody's brain, but not in mine. ;-)
I think it's hard to generalize about PhDs because of the huge diversity of experiences. A PhD student should have a lot of freedom to define for themselves what they get out of their education. They are responsible adults and if they wanted a "marketable skill," they would have finished with a BS or MS. Predictably, the flexibility of PhD education doesn't always happen, and even when it does, it's both a blessing and a curse.
I've also forgotten basically all high level math from school. And have to re-learn when the occasion comes to use some of it. But one thing that occurred to me is that in school I just learned how to make the calculations, so I never got a deep understanding on how things worked anyway. And that's fine.
That's fine if you don't mind the time that could have been spent on something you actually cared about.
Sometimes it happened that school attempted to teach me something I was interested in and I ended up understanding it. At other times, however, it all went to /dev/null.
It's good that we forget stuff, or everyday tasks would be like querying from a fully saturated disk.
But we might have a tiny fraction of it in CPU cache, which will let us make heuristic decisions.
If someone doesn't understand his own work five years later to this extent, that is a strong indication that the work is actually garbage, and the prior understanding five years ago was only a delusion brought on by the circumstances: the late nights, the pressure, and so on.
Perhaps it doesn't make sense today because it never did, and the self deception has long worn off, not because the author has gone daft.
Several weeks ago, on the last work day before going on vacation, I submitted fixes for nine issues I found in one USB host controller driver. The last time I looked at the code was more than a year ago. I had refactored it and really improved its quality. Looking at some of the code now, I couldn't understand it that well. But that's because it wasn't as good as I thought it was. I was still relying on the fictitious story of how I thought certain aspects of the code worked really well thanks to me, and it wasn't meshing with the reality emanating from freshly reading it with more critical eyes. And, of course, I was also confronted by a reproducible crash. As I'm reading the code, I'm forced to throw away the false delusions and replace them with reality. This is because I'm smarter and fresher today, not because I've forgotten things and gotten dumber! It's taking effort because something is actually being done.
Perhaps a similar problem is here: he's reading the paper with more critical eyes and seeing aspects that don't match the fake memory of how great that paper was, which was formed by clouded judgment at the time of writing. Maybe that obscure notation that he can't understand is actually incorrect garbage. His brain is reeling because it's actually digging into the material and trying to do proper work, perhaps for the first time.
If you can show that your five year old work is incorrect garbage, that suggests you're actually superior today to your former self from five years ago. So that could be the thing to do. Don't read the paper assuming that it's right, and you've gone daft. Catch where you went wrong.
By the way, I never have this problem with good code. I can go back a decade and everything is wonderful. Let's just say there is a suspicious smell if you can't decipher your old work.
Good work is clear, and based on a correct understanding which matches that work. There is a durable, robust relationship between the latent memory of that work and the actual work, making it easy to jog your memory.
This post inspired me to re-read my thesis (well browse through it). Although it has been 16 years since I last looked at it, I didn’t have any problem understanding it and I didn’t even really cringe reading it. I guess it depends on your field how bad this effect is.
I was about to write much the same thing... Last year, I found a hard copy of my dissertation (which I defended in 2004). I skimmed through it and had absolutely no problem whatsoever understanding what I had written 11 years prior. And I've been out of academia since 2008.
The dissemination of knowledge is at least as important as its discovery. Accessibility (i.e. clarity of exposition, availability to the public, etc.) needs to become a cardinal virtue in research.
The author almost realized the much more important conclusion of the experience he describes. He shouldn't have concluded the article by asking "what is the purpose of studying maths?" and then giving three stupid answers.
He should have asked: is this actually "knowledge" as they say academia brings to society? Is the money researchers earn being well spent? Did I actually deserve to be remunerated by this piece of work no one understands -- and, in fact, no one has read except for maybe three people?
Except that some % of PhDs go on to be professors (who do a real service), and every once in a while you get a PhD student whose work probably contributes hundreds of millions of value to society. And every couple years or so, you have a PhD student who contributes billions to society in value.
Not to mention PhDs usually have to TA (teach students), which accounts for some of their pay.
There's no example of a PhD work that contributed hundreds of millions of value to society.
(I could say there's no way to measure "value to society", in fact this concept means nothing, but I agree to settle on "an enormous amount of productive capital to someone, not exactly society").
> what is the purpose of studying maths?
Mathematics is an excellent proxy for problem-solving /
Mathematics embeds character in students /
Mathematics is fun
Those may be reasons to study maths (although, studying anything seriously probably yields comparable benefits) but doing a PhD and writing a thesis is not only about yourself: it's supposed to advance the field. It's something you do for the general community.
As someone who left academia after a PhD in math (been working as a quant in HFT for the last few years, which mostly involves coding in one form or another), I can totally relate! Back then, all those stochastic integrals and measures made much more sense. However, it doesn't seem totally alien -- I'm pretty sure I could go to hacking math if required, but it would require at least several months to get into the flow.
I also left academia after a math PhD. Though in my case, my area was numerical linear algebra, and I entered the aerospace engineering field. A lot of what I was working on was immediately relevant, but I would say only a fraction of what became my dissertation would be counted among it.
> Mathematics is an excellent proxy for problem-solving
In my experience, earning a PhD in [redacted] was excellent training in problem solving. And in developing working expertise in new areas. I suspect that the choice of field is indeed irrelevant.
> Mathematics embeds character in students
I'd say that actually finishing a PhD does that.
> Mathematics is fun
Whatever you pick for your dissertation topic had better be fun ;)
I had a similar problem with a crypto presentation. Basically, I was angling for a free ticket to an expensive conference. The trick was to propose something that is plausible, but too arcane for practical use. The consolation was a free ticket. Problem was that they accepted the talk. Damn!
So, I started to read crypto journals. Basically anything co-authored by Chaum, Micali, Goldreich, Wigderson. After a few weeks, I started to get the hang of it. Sort of like learning a new language. So, I gave the presentation and then forgot about it.
A few years later, I decided to show my powerpoint to someone and describe the process. WTF? How did this lead to that? Didn't understand half of it. Was really embarrassing.
Could part of it just be that mathematical notation is so bad? It's more of a shorthand than an actual tool for conveying meaning. So much context goes into establishing what a notated equation means, and that context is now gone.
I always complain about this. While I'd say I have an affinity for math, it was much more difficult to catch up with my peers in university than in, say, programming: there was so much context and assumed knowledge required to make sense of what the professor was stating that it took me forever to understand even the most trivial things. To be mathematically mature, constant exposure is imperative. As for how to improve mathematical notation so that it is not so subjective or context-dependent, I haven't a clue; it seems to me like almost everyone is alright with the status quo. Not a good thing in my opinion.
The Sheetrock was the last step. I myself would do the exterior and interior painting. I told Ted I wanted to do at least that much, or he would have done that, too. When he himself had finished, and he had taken all the scraps I didn't want for kindling to the dump, he had me stand next to him outside and look at my new ell from thirty feet away.
And then he asked it: "How the hell did I do that?" --Kurt Vonnegut, Timequake
I find the experience common when I look back on things I write or design or build. As Bill Gates said, “Most people overestimate what they can do in one year and underestimate what they can do in ten years.”
It seems to me that most commenters are ignoring the fact that the author is a guy who basically left high-level mathematics after completing his PhD.
So basically he went off doing other stuff that wasn't functional-analysis-related, and his functional analysis got rusty.
It seems quite reasonable to me. Call it old code syndrome, call it "my math got rusty", it seems quite normal to me.
He actually finished the PhD in 2011, but started in 2007. Not sure whether that's a significant enough difference to change the point that you're making though.
This happens to me all the time. I have a very popular illustrated post on Monads titled "Functors, Applicatives, and Monads in Pictures"[1]. When I wrote it I thought it was the best monad guide ever. Now, reading back, I can see that some parts are confusing. I still see a lot of people liking it, but three years later I wish some parts of it were better.
It's quite good actually. The problem with monads is that no tutorial will make them less confusing. But writing code that does useful stuff with them builds the intuition.
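For instance (a made-up snippet, not from the post in question), even something this small does more for the intuition than another analogy:

    import qualified Data.Map as Map

    -- chaining lookups that can each fail: the Maybe monad handles the
    -- "stop at the first missing key" plumbing for you
    dbUrl :: Map.Map String String -> Maybe String
    dbUrl cfg = do
      host <- Map.lookup "host" cfg
      port <- Map.lookup "port" cfg
      pure ("db://" ++ host ++ ":" ++ port)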
I now question the need to fully understand things up front. Sometimes, why not use libraries as in the example, cargo-cult a little to build up intuition, and then later get a more formal understanding?
"Mathematics is an excellent proxy for problem-solving... Mathematics, by its concise and logical nature, lends itself to problem-solving (it is not unique in this regard)."
But how can we be sure this is true if he is unable to read what he wrote?
Maybe I'm thinking about the way Clojure programmers tend to use the word "concise" -- concise is meaningful only if it contributes to readability. Otherwise the more accurate description is "terse". And terse does not lend itself to problem solving.
> Mathematics is an excellent proxy for problem-solving
I went to a special, maths-focused high school class and this rings true on that lower level too. I am a reasonably successful programmer/architect today and I have -- repeatedly -- attributed my successful attitude toward problem-solving to the 1200 or so problems we solved during those four years. Our maths education was literally nothing but solving one problem after another.
I am reading up on stats after 15 years away from the subject, and I have forgotten even the very basic stuff. The 'muscle memory' is there, though, so perhaps it is a bit easier than when it was totally new.
What I also find is that I am now more interested in the application and intuition behind something than in the mechanics of the formulas. Maybe that has to do with a different aim, i.e. usefulness vs. passing an exam.
To make an admittedly bad metaphor, it's likely a lot of that knowledge has been moved from main memory to cold storage, and it would take some time to bring it back. It certainly makes the case for why we write things down! Although the part about having to dig for the main result makes me think the abstract could be improved...
Math is twiddling with formal systems and discovering how they behave. Some of it has uses, some doesn't, and some of what presently doesn't will, in the fullness of time, result in further islands of usefulness as yet not even imagined. But ultimately, it needs no more justification than orchestral music.
We are clearly approaching the point where unassisted human intelligence is becoming insufficient to continue to master even specific domain expertise.
I took a course on formal logic. I put all of the questions and exercises into Anki, a spaced repetition program. This ensures I will always remember it and keep it in my head at an intuitive level.
Basically it's like flash cards whose review intervals grow exponentially: the first review is after one day, the second two days later, then four days, and so on.
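The doubling schedule described there is simple enough to sketch. Here is a minimal Haskell version (only the basic shape: Anki's actual scheduler is more involved and adjusts each card's interval based on how well you recalled it):

    -- Gaps between successive reviews: 1 day, 2 days, 4 days, ...
    reviewGaps :: Int -> [Int]
    reviewGaps n = take n (iterate (* 2) 1)

    -- Days, counted from when the card is learned, on which reviews fall.
    reviewDays :: Int -> [Int]
    reviewDays n = scanl1 (+) (reviewGaps n)

    main :: IO ()
    main = do
      print (reviewGaps 6)  -- [1,2,4,8,16,32]
      print (reviewDays 6)  -- [1,3,7,15,31,63]

With the gaps growing exponentially, a card you keep answering correctly ends up costing only a handful of reviews per year.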