‘Outsiders’ Crack 50-Year-Old Math Problem (2015) (quantamagazine.org)
160 points by 18nleung on Dec 6, 2017 | 28 comments



Spielman was my algorithms professor in college and legitimately one of the smartest guys, in terms of raw intelligence, I've ever met. He was a great teacher, able to explain the most complicated of problems in language that I could intuitively understand and get excited about. Before that class I had a lot of problems with the math side of CS (still definitely do), but he really made the subject approachable for me. I started the class getting C's, but after visiting his office hours nearly weekly, by the end of the semester I was getting 100s on my problem sets.

That class, and the way he taught it, convinced me that advanced math, whilst hard for me, is not beyond my grasp or really that of anyone. It’s just a matter of breaking problems down into manageable chunks and recognizable sub-problems, a skill at which Spielman is especially adept.

I knew he was smart then (he had just received a $500k MacArthur "genius" grant), but reading this article really put into perspective how effective he is and how lucky I was to have him as a professor. The same skill for breaking down problems that helped me succeed in his class seems to be the skill that enabled him to solve this problem.


I remember watching a (maybe MIT?) youtube lecture where someone described talking to Claude Shannon the same way. You'd talk about some idea and he'd simplify and throw things away, until you had just a kernel of an idea that was so simple it was just obvious.


Quick note that the article is from 2015, and the MSS result dates back to 2013.

There's a blog post by Nikhil here that explains the proof of the discrepancy result that implies Kadison-Singer: https://windowsontheory.org/2013/07/11/discrepancy-graphs-an...

The main result is a theorem that replaces a Chernoff bound (which here says that a random partition has discrepancy logarithmic in the number of vectors, with high probability) with a bound on the discrepancy that is independent of the number of vectors, at the cost of replacing the "with high probability" with "there exists".

The proof uses some really beautiful techniques from the geometry of polynomial roots and is a pretty fun read: https://arxiv.org/abs/1306.3969
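
For a rough feel of the shape of the statement (paraphrasing from memory, with the constant deliberately hedged as O(...); see the blog post or the paper for the precise version), the partition form of the result says something like:

    % Sketch of the partition statement; constants are approximate, not the paper's exact ones.
    % If v_1, ..., v_m \in \mathbb{C}^d satisfy \sum_i v_i v_i^* = I and \|v_i\|^2 \le \epsilon for all i,
    % then there EXISTS a partition [m] = S_1 \cup S_2 such that, for j = 1, 2,
    \Bigl\| \sum_{i \in S_j} v_i v_i^* \Bigr\| \;\le\; \tfrac{1}{2} + O(\sqrt{\epsilon}),
    % with no dependence on m, whereas the Chernoff-style argument gives a bound that
    % degrades with the number of vectors, albeit "with high probability" rather than "there exists".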


"He guessed that the problem might take him a few weeks. Instead, it took him five years"

Perhaps the key to their success was typical programming work estimation ;)

More seriously, I think that if you think something is easy for you, or that you are almost there, you may be more willing to put in the necessary time than if you knew it would take 5 years from the outset.

"Spielman realized that he himself might be in the perfect position to solve it. “It seemed so natural, so central to the kinds of things I think about,” he said. “I thought, ‘I’ve got to be able to prove that.’”"


Related but different advice: build the habit of learning things by trying to figure it out yourself. Richard Feynman did this when learning quantum mechanics, and invented the Feynman diagram as a result. While not a different model or theory per se, the diagram made apparent the connection to functional integrals and offered a different method of calculation that made previously intractable problems (relatively) simple.

Funny story: he "invented" this technique in his school days and only found out that it wasn't standard during the Manhattan Project, when a colleague complained that a problem wasn't tractable and Feynman went to the blackboard to illustrate the "obvious" solution.

Sometimes optimal solutions aren't found because nobody bothered to look, assuming (if they think about it at all) that any remaining work must be more complex than the already known suboptimal solution.


> Related but different advice: build the habit of learning things by trying to figure it out yourself.

This can be a wonderful habit, or it can create cranks. As you mention, Feynman used it to great success, but he was Feynman. I had a few classmates in graduate school—I nearly was one, until my advisor set me straight—who couldn't make any progress because they couldn't take any results for granted; they wouldn't do anything until they could prove everything. This is intellectually admirable, but a sure recipe for stagnation (at least for me and these classmates).


What I took from Feynman's biography is that he would periodically attempt to rebuild his understanding of all of physics from first principles, i.e. from basic arithmetic through algebra to calculus, mechanics, etc. Looking for new insights and simplifications, he would apply new tools to reformulate and solve older problems. His seminal work in Quantum Electrodynamics used the Principle of Least Action to great effect. Despite initial scepticism, he came to appreciate the power of least action after seeing that it could be used to elegantly reformulate Classical Mechanics.

Doing this even after becoming a world-famous physicist led to The Feynman Lectures on Physics - written as much for his own benefit as for his students. However, I am not sure he used this strategy when learning entirely new concepts for the very first time.

It makes sense to do this as practice. If you want to build new theoretical structures, re-deriving the stuff we already know is a great way to sharpen a toolbox of problem-solving techniques and to reveal potentially shaky theoretical foundations.


I think we can generalize the last two comments to: when you look at a problem, learn about it and its structure until you think you can resolve part of it. Keep repeating that until you get there. Assume that what everyone knows is good until your next steps push you to reexamine assumptions, and don't be reluctant to do so.

Happy ground between 'must build stack myself' and 'too orthodox to ever question the masters'.


> Assume that what everyone knows is good until your next steps push you to reexamine assumptions, and don't be reluctant to do so.

This is a fantastic summary. I like it a lot, and may steal it for future use.


> "He guessed that the problem might take him a few weeks. Instead, it took him five years"

>> Perhaps the key to their success was typically programming work estimations ;)

No, you should always normalize programmer estimates by increasing both the unit and the measure by one. So "a few weeks", say 4 weeks, will actually take: 5 months! So he still took 12 times longer than the properly normalized estimate! :-)


Y'know, if you take "a few" to mean 3 instead of 4, then do that same transformation twice, it lands right on 5 years...
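
Just for fun, a toy Python sketch of the rule (the unit ladder here is my own assumption):

    # Tongue-in-cheek "normalize a programmer's estimate" rule from above:
    # bump the number by one and move the unit up one rung.
    UNITS = ["hours", "days", "weeks", "months", "years"]  # assumed ladder

    def normalize(amount, unit):
        return amount + 1, UNITS[UNITS.index(unit) + 1]

    print(normalize(4, "weeks"))              # -> (5, 'months')
    print(normalize(*normalize(3, "weeks")))  # -> (5, 'years'): "a few weeks", applied twice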


Yes, great catch! Perhaps Tom said "3 weeks" to Dick, Dick said "4 months" to Harry, and Harry said "5 years" to Jane! :-)


Great story that reminds me of a thought I've had. There's just so much knowledge out there - and growing - that researchers have to be increasingly specialized. Isn't it inevitable that 4 years of undergrad + 1-2 yrs of grad school classes won't prepare you sufficiently to do novel research? Imagine when people have to be in school for something like 10 years before they know enough to do "real" work! And people will necessarily have to become more and more specialized, making situations like the one in the article even rarer.

Just a thought.


I was always taught that a Masters wasn't intended to prepare one to be competent at novel research; that's what the PhD is for.

A Masters, based on this view, is designed to instill expertise in a field's main body of theory and techniques.


I think, like many things, this varies a lot by field. In computer science, which is still comparatively "new", it is not that uncommon (at top 5 schools, it may even be common) for grad students to publish real research in good venues in their first or second year of grad school (what is nominally coursework time). Even in areas like theory where you're not really hacking away at experiments or building stuff.

Contrast this with say pure math, where (I believe) most students even at the strongest schools do not publish or even produce publishable research in their first few years.


This could again be a consequence of the amount of knowledge out there in both fields. Math is a very mature field compared to CS, though of course CS inherits some of that maturity due to how closely related the two are. First-year grad students are publishing CS research because there's still a lot of "low hanging fruit" out there. Lots of open problems have the right lack of popularity, ease of solving, and importance (coupled with the fact that IME many CS grad students actually aren't that good at higher proof-based math) for this to be occurring. Over time the portion of problems that are both easy to solve and important will decrease, and I bet theoretical CS's research landscape will come to look more like math's.


I have also wondered about that. It seems to me that as we better understand an area we are able to develop more powerful abstractions that allow people to reach the limits of current knowledge faster.


> Isn't it inevitable that 4 years of undergrad + 1-2 yrs of grad school classes won't prepare you sufficiently to do novel research?

I think it depends what you call "novel research". Most of the time, novel research means making a small contribution to an existing paper (and for a graduate student, the paper to extend and the contribution are usually suggested by their supervisor).


One thing that helps: as in the OP, do field crossing.

Another thing that helps: start with a problem that is significant in practice. If you get a solution with new, correct results, then you can say that the results are significant because the practical problem was.



This is a real phenomenon by the name of credential creep:

https://en.wikipedia.org/wiki/Credentialism_and_educational_...


It has very little to do with credential creep. The problem is that, in order to do work at the frontier of a field, you first have to get to the frontier - you have to learn what's going on there, which means that you have to learn all the stuff you have to learn in order to actually understand what's going on there. The learning to actually understand enough to get to the frontier is taking longer and longer. That's not credential creep, that's creep in how far away the frontier is from the foundations.


That sounds like an abstraction problem. I’m totally willing to accept I’m crazy. But my gut feeling is there is a lot of technical debt in representation, so it’s hard to move to the frontier.


In this case, you're probably just wrong. Consider for half a second how many programmers fail Fizz Buzz, and then think about how much work it takes to get from that to the frontier of computational science (i.e. not just installing another javascript library, but actual computation).

There's a lot of low hanging fruit out there, but you still have to actually know how to find it.
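
(For anyone who hasn't run into it, the Fizz Buzz screening exercise referenced above is usually stated as: print the numbers 1 to 100, but "Fizz" for multiples of 3, "Buzz" for multiples of 5, and "FizzBuzz" for multiples of both. A minimal Python version:)

    # The classic Fizz Buzz screening exercise, as commonly stated.
    for n in range(1, 101):
        if n % 15 == 0:      # multiple of both 3 and 5
            print("FizzBuzz")
        elif n % 3 == 0:
            print("Fizz")
        elif n % 5 == 0:
            print("Buzz")
        else:
            print(n)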


I want to point out that Maxwell had something like 20 equations for electricity and magnetism that wound up being 4. I suspect that many fields have rules, but those rules aren't written in the most concise representation.

It will always be hard to get to the frontier, but that path should be easy to travel. We can save explorers years and years by building good roads.


Regardless, a well-written undifferentiated paper is way better than a paper-mill paper, a badly written one, or, worse, a lying one. These happen all the time in algorithms.

You cannot even rely on citation indices, as it is common practice to cite papers from your surroundings without quality control, or ones that are barely on topic.


This is the opposite of credential creep ... the mathematicians with the credentials were really open minded and excited that someone from the "outside" had a solution! They didn't give a rip about the math pedigree.


That's because CompSci and mathematics are definite. It doesn't matter if someone has PhDs up the wazoo or didn't finish high school. If the theorem solves something, and it has a proof that shows it, it's accepted.

Other sciences are much more "soft" and have error bars and such. Someone with a "lesser pedigree" just couldn't cope with the special sciences. Nor could they afford the lab.



