The paper mentions how Erdos accomplished so much while knowing relatively little machinery. This is, I think, a more general paradox of mathematical research, not limited to Erdos: it's a misconception that to do important original mathematics you need to be light-years out there. The reality is, there's so much low-hanging fruit it's almost silly. But if you want to make a career picking low-hanging fruit, you'd better be financially independent, because as the paper also points out, mathematical power brokers don't appreciate low-hanging fruit-pickers!
Somebody launch the Journal of Low Hanging Fruits: an interdisciplinary journal where people from research field A pick the low-hanging fruit of research field B, using knowledge from A. And, conversely, people from B reply by showing those from A that their most cherished result should have been discovered earlier, because it is a trivial application of an old technique from B.
In mathematics this happens all the time, but there is no dedicated place for this show.
The paper comes across as trivializing how much Erdos knew and, more importantly, understates his mathematical genius. Though the author probably didn't intend to, and does note how important his work is today.
You probably don’t know anything about higher category theory. Should I criticize you as a mathematician because you don’t know about it or don’t find it important?
Oh wait, you aren’t a mathematician, but you claim to be a software developer? Do you know anything about type theory? No? Then how could you claim to be a software developer?
Are you a simple-minded programmer? Well, you must only solve low-hanging software problems! Wow! It is amazing how many software development problems you can solve with such limited theoretical knowledge of programming! That’s amazing!
But despite all of that, your work is still not considered deep and worthy of respect because you do not meet the standards of higher level theoretical computer science knowledge! But good for you despite that! Good job!
>> But if you want to make a career picking low-hanging fruit, you'd better be financially independent, because as the paper also points out, mathematical power brokers don't appreciate low-hanging fruit-pickers!
For low-hanging fruit one should most likely join machine learning, where much of the current work consists of trivial results that advance the field only in the most marginal sense but are straightforward to obtain: get more data, use more computing power, produce state-of-the-art results, even if they are only half a percent better than the previous state of the art.
Here's one example. In Darwin's "Origin of Species", a key principle which Darwin talks about at great length is the so-called Knight-Darwin Law, the observation that even in apparently asexual species, sexual reproduction still occasionally occurs, if only after long spans of time. In an article in Acta Biotheoretica, I argued this is in fact an infinite graph-theoretical principle. If G is the directed graph of all organisms, where an edge from x to y means that x is a parent of y, then (Darwin theorizes that) there is no infinite directed path through G all of whose vertices have indegree 1. (A vertex has indegree 1 if it has exactly 1 parent.) The low-hanging fruit (some of which I picked in my paper): Investigate the mathematical implications of the above property (esp. if G also has other biologically motivated properties).
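To make the property concrete, here is a toy sketch (my own illustration, not the paper's formalism): given a finite parent-of edge list, it finds the longest directed path all of whose vertices have indegree 1, i.e. the longest run of purely uniparental descent.

```python
from collections import defaultdict

def longest_indegree1_path(edges):
    """Length (in vertices) of the longest directed path in which
    every vertex has indegree exactly 1, i.e. exactly one parent."""
    indegree = defaultdict(int)
    children = defaultdict(list)
    nodes = set()
    for parent, child in edges:
        indegree[child] += 1
        children[parent].append(child)
        nodes.update((parent, child))

    def walk(v):
        # The path may only pass through vertices with exactly one parent.
        if indegree[v] != 1:
            return 0
        return 1 + max((walk(c) for c in children[v]), default=0)

    return max((walk(v) for v in nodes), default=0)

# Chain of uniparental descent a -> b -> c -> d: b, c, d each have one parent.
print(longest_indegree1_path([("a", "b"), ("b", "c"), ("c", "d")]))  # 3
# Give d a second parent x: the uniparental run is cut short at c.
print(longest_indegree1_path([("a", "b"), ("b", "c"), ("c", "d"), ("x", "d")]))  # 2
```

The Knight-Darwin Law, in this formalism, says such runs cannot be infinite; on a finite graph that is vacuous, which is why the interesting case is the infinite one.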
The Knight-Darwin Law seems to have vanished completely from the literature around the start of the 20th century, until I cited it. Almost as if no one had actually been reading Darwin in all that time.
To generalize: If you want to find low-hanging fruit, read non-fiction classics like Darwin and constantly be asking, "Is this some unrecognized mathematics in disguise?"
I would further generalize; if you want to find low-hanging fruit, know about relevant subjects that aren't presently very popular (such as the Knight-Darwin law) and find relations between disparate fields (such as evolutionary biology and graph theory) where few others are well-versed in both.
In this vein of low-hanging fruit, my former anesthesiology chair once remarked that you could build a first-class academic career in the specialty if you knew German and simply went through the 1900-1920 German physiology literature and gleaned significant findings unknown even now in the non-German-speaking world. Alas, my other language was Japanese.
I definitely have the impression that many of the "low hanging fruits" are found at the intersection of disparate fields.
There are far more people highly proficient in a single field than people even mediocre in two different fields, so if you want to discover something new and important, look at a new field through the lens of one you're proficient in.
> there is no infinite directed path through G ...
I'm out of my depths here, but isn't this trivially true regardless of means of reproduction?
You can't be your own parent so there's no directed cycles, and without directed cycles you can't have infinite directed paths in a finite graph.
Maybe he meant there's no path that starts from a root and ends in the leaf that has these properties? The leaf being the first life on earth in this example.
I know the tree of life isn't a tree, but it surely is acyclic, right?
You're right, the claim is trivially true if G is finite. In the paper, I argue that Darwin must have implicitly allowed at least the possibility of G being infinite (i.e. life going on forever), or else, as you rightly point out, the Knight-Darwin Law would be trivial.
Even if life will eventually die out, we can pretend G is infinite, as a simplifying assumption, similarly to how physicists and chemists pretend crystals fill up infinite space in order to simplify talking about their translation symmetries. Under this simplifying assumption, "finite" and "infinite" can serve as concrete alternatives to vague slippery predicates like "small" and "large".
The following quote from Origin of Species reinforces that Darwin definitely didn't take for granted that life will someday end: "Nevertheless so profound is our ignorance, and so high our presumption, that we marvel when we hear of the extinction of an organic being; and as we do not see the cause, we invoke cataclysms to desolate the world, or invent laws on the duration of the forms of life!"
I would assert more strongly that in general, essentially nobody actually goes back and reads the classic papers in any field, and those few who do usually end up with new results.
Is this true? I mean, consider the infinite directed graph on ℤ with an edge from each n to its successor n+1: every vertex then has indegree and outdegree one, so there is such an infinite path. Are there some further restrictions on this graph?
> You know, people think that mathematicians have been working for hundreds of years, and now there's tens of thousands or hundreds of thousands of mathematicians all spending every day working on problems, so how could you possibly still find a simple down-to-earth problem that hasn't already been studied, you know, way too much? And the answer is: those problems aren't rare at all, they keep coming up several times a year, and this is an example where even this very basic problem of rectangle into rectangles has all kinds of stones not yet turned.
You can see in the talk where he starts simple and just keeps working on it, until he gets something that seems new (a conjecture, which he proposed as a problem in the American Mathematical Monthly). Of course, this particular problem is now solved, but if you just keep exploring you'll pretty soon hit the limits of what's known. (Potential exercise that I haven't tried: load random sequences on OEIS http://oeis.org/ until you hit one where there are only conjectures, and not much research.)
Well, I can't tell you low-hanging fruit that's still out there, because if I knew it, I'd be working on it instead of telling you. :) But I can tell you some low-hanging fruit that I've done!
I'll stick to just one example, this[0] paper of mine was, like... I was astonished this was not already in the literature -- everything in this paper is easy to prove once you think to ask the question -- but as best as I and anyone else could tell, somehow nobody had thought to ask the question despite having asked other extremely similar ones. It's especially astonishing that Jacobsthal's exponentiation was independently rediscovered multiple times, but what I call "super-Jacobsthal exponentiation" seems to have only been considered once before and only briefly as a tool for something else (nobody thought to write down its algebraic laws). So, I wrote it up to make sure it was out there.
(Honestly I'd say a lot of my work has been LHF, I got through grad school while learning hardly any of the heavy machinery one normally does, unfortunately most of it isn't written up yet and so I can't really easily point to it.)
There are plenty of examples indeed if you just dig around. I have a colleague who published a theoretical paper [0] in one of the top fluid mechanics journals 5 years back. That paper originated from his lecture notes for a course he gave for the first time, where in one part he followed the classical theory of waves behind a ship, worked out by Lord Kelvin and others back in the olden days. Then he thought, "Hmm, how do we extend this so it works if there is a current in the water?" And it turned out both that it was possible to solve analytically, and that nobody had done it before. Nobody had thought it was possible to do. It's an example of "low-hanging fruit" that had been sitting there since 1887.
I think the reason why there's "lots" of LHF is that science has become so incremental, iterating on recent results. If you go back 20-50-100 years and look at the road less traveled, there's plenty of LHF, but going down that path and shaking the trees requires more effort per initial publication than most academics can afford under the current system. But if you can afford (or get lucky enough) to find that first LHF, it usually gives you enough material to work on for quite some time such that it pays off over time.
Particularly the Collatz Conjecture, afaik basically zero progress has been made on it, no "advanced" math topics are known to apply at all. So perhaps a unique angle is all you need. (Or maybe it's false, in which case all you need is a counterexample. Sometimes it happens, see e.g. https://www.ams.org/journals/mcom/1988-51-184/S0025-5718-198...)
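For anyone who wants to poke at it, the iteration itself is only a few lines (a minimal sketch; the hard part is of course proving anything about it):

```python
def collatz_steps(n):
    """Number of steps for n to reach 1 under the 3n+1 map."""
    steps = 0
    while n != 1:
        # Halve even numbers; map odd n to 3n + 1.
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(6))   # 8: 6 -> 3 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1
print(collatz_steps(27))  # 111, despite 27 being so small
```

The erratic behavior of even tiny starting values like 27 is a good hint of why elementary attacks have gone nowhere.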
Here's an old post by Terry Tao explaining why the Collatz conjecture is hard, and giving a heuristic argument that it's unlikely to be solved with purely elementary techniques. The essay is surely worth reading if you want to make an attempt on that problem!
I don't think that a list of historically unsolved problems in math, and the Collatz Conjecture in particular, count as low-hanging fruit. The fact that they have remained unsolved for so long suggests that they are rather high-hanging.
My favorite example of (formerly) low-hanging fruit is the formula for determining how often you can fold a piece of paper in half. Is there anyone who didn't grow up "learning" that you can't fold a piece of paper in half more than six or seven times? Until high school student Britney Gallivan found the formula, with basic algebra, and extended the record to twelve folds. [0]
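If I have it right, Gallivan's single-direction formula says a sheet of thickness t needs length at least L = (πt/6)(2^n + 4)(2^n − 1) to be folded in half n times. A quick sketch of what that growth looks like:

```python
import math

def gallivan_min_length(t, n):
    """Minimum paper length (same units as thickness t) needed for n
    single-direction folds, per Gallivan's formula (as I recall it)."""
    return (math.pi * t / 6) * (2**n + 4) * (2**n - 1)

# For ordinary 0.1 mm paper: required length in mm for 7 vs 12 folds.
for n in (7, 12):
    print(n, round(gallivan_min_length(0.1, n)))
```

The required length grows roughly like 4^n, so twelve folds of ordinary paper already demand a strip most of a kilometre long, which is why she needed such a famously long sheet for the record.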
Another fun one (from the many examples above!) is a new paper I am working on at the moment as well, but in the intersection of physics and math: it turns out you can phrase a bunch of physics problems as optimization problems, then use some very simple tools in optimization to derive fundamental limits of how well any device could ever perform.
I think it's not just a matter of low-hanging fruit, but low-hanging fruit that's interesting to other people and applicable (to other research or to the real world)
Perhaps people who think mathematics is full of low hanging fruit are the same ones who grew up looking for binomial coefficient identities and calculating the 3n+1 sequence. It might not seem low hanging to everyone, but maybe to mathematicians.
Very recently, the superpermutation problem (a.k.a. the "Haruhi" problem). Obscure enough that an anime-loving anon posted a proof of an improved bound that went unnoticed for some time.
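For context, the anon's result was a lower bound on the length of a superpermutation on n symbols; as I understand it (my paraphrase, worth double-checking against the write-ups), the bound is n! + (n−1)! + (n−2)! + n − 3:

```python
from math import factorial

def superperm_lower_bound(n):
    """The anon's lower bound on the length of a superpermutation
    on n symbols (n >= 2): n! + (n-1)! + (n-2)! + n - 3."""
    return factorial(n) + factorial(n - 1) + factorial(n - 2) + n - 3

# The bound is tight for small n: the minimal superpermutations for
# n = 3 and n = 4 have lengths 9 and 33.
print([superperm_lower_bound(n) for n in range(2, 7)])  # [3, 9, 33, 152, 867]
```

Matching the bound against the known minimal lengths for small n is a nice sanity check on the statement.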
A great article. One of the prime examples of the disdain that mainstream mathematicians had for "Hungarian"-style mathematics is the fact that Szemeredi did not get the Fields Medal (apparently Erdos was very angry about this), despite the fact that a lot of deep mathematics, at least two Fields Medals (Gowers and Tao), and a Wolf Prize (Furstenberg) have since resulted from Szemeredi's work.
After the wider acceptance of combinatorics, Szemeredi was awarded the Abel Prize.
Another area of mathematics generally ignored by mathematicians is mathematical logic. I've often heard the opinion that it is not really "relevant" to general mathematical practice.
There's an age limit on the Fields medal (40), so it's quite common for the importance of someone's work to be recognized too late for them to be eligible. This is somewhat intentional as the medal is meant to be given to mathematicians early enough in their careers to have an impact (c.f. Nobel prizes, which are typically only awarded when someone is famous enough that getting a Nobel prize hardly matters).
Sure. But this was not the case with Szemeredi's work. It was recognised as great immediately, and Szemeredi was below 40 at the time. The problem was that his combinatorial approach was not considered to be deep enough to merit the medal.
It's a great book, probably the best (or most delightful) biography I've read of anyone, in any field. It not only gives a picture of the subject as a person (as any biography should I guess), but actually engages with the person's work and gives a flavour of it — what's the point of a biography of an artist or poet that does not show their art or poetry? The "love" in the title of this book is very apt: not only did Erdős love numbers, but this a biography written with love, and the reader will love it too.
The article mentions two paradoxes that are worth considering:
> How could a great mathematician not want to study these things? [these things = Lie groups, Riemannian manifolds, algebraic geometry, algebraic topology, global analysis, or the deep ocean of mathematics connected with quantum mechanics and relativity theory]
>
> This suggests the fundamental question: How much, or how little, must one know in order to do great mathematics?
> [...]
> The second Erdos paradox is that his methods and results,
> considered marginal in the twentieth century, have become
> central in twenty-first century mathematics.
I never had the feeling that his results were 'marginal', but the fact that he never got a full position anywhere got me thinking: maybe Erdos just wasn't interested in such positions (which, given his personality, seems likely), or maybe he did not 'sell' his work well enough. As someone who hates advertising their own work, I can see how tragic this would be.