Hacker News

The idea that academic work should be grounded in "applicability" is a lazy canard. Nobody was much convinced that decades of work in number theory was worth much of anything until applications in cryptography were discovered. We just don't know what we'll find.

Academic research in every field is undertaken for its own sake. If you want to work on "applicability", go get a job in industry. Finance is still hiring quants, for example.



People are welcome to do whatever they like. I'm expressing an opinion, my value judgement, that the math done for its own sake is just not as beautiful to me as math that has applications.

You call it laziness, and I call it a refined view built up over decades of study and repeatedly hearing mathematicians bullshit and be misinformed about how applicable their work is.


>that the math done for its own sake is just not as beautiful to me as math that has applications.

The point the person you're replying to is making is that you don't know the distinction between maths done for its own sake and maths that has applications until potentially hundreds of years later.


If you want to read a more thorough defence of the value of pure mathematics research, I recommend that you read A Mathematician's Apology by G.H. Hardy. Hardy witnessed first-hand the beginnings of the applications of his field, number theory, to cryptography. At that time, the only use for cryptography was in warfare so as a committed pacifist you can well imagine how much this upset him.

misinformed about how applicable their work is

I think I know exactly what you're talking about and it's unfair. I've also seen it happen to scientists across the spectrum. A question from a journalist or other layperson -- about the applicability of their research -- asked to catch them off guard and put them on the spot, designed to humble them and make the questioner feel superior. You never hear this sort of question levelled at painters, sculptors, musicians, writers, game designers, or any other artist!


It's different. Painters, sculptors, musicians, etc. are all doing something which is ultimately in service of making others happy and fulfilled. A mathematician or scientist working on something terribly narrow and abstruse is arguably only working on something in service of making themselves happy and fulfilled. Since they are frequently supported by taxpayer money, asking whether what they're working on is worth that expense is totally legitimate. The idea that we can't question the legitimacy or applicability of research because we can't predict how useful that research will be in 300 years is dumb.

If you want to be taken seriously, it's better to spend some time dwelling on whether you are responsibly fulfilling your obligation to society. For many professors, I think this could easily be addressed by shifting the focus away from research back to where it belongs: teaching.


On the other hand, if there are potential commercial applications 10-20 years in the future, it may not be the best idea for the government to fund the research. The topic is already concrete enough for the market, and it could be a waste of tax money to fund it. There is a lot more money in commercial research than publicly funded research. The government should focus on funding topics where the applications are too uncertain or too far in the future to make sense in the market.


Since they are frequently supported by taxpayer money, asking whether what they're working on is worth that expense is totally legitimate.

That's not the deal they signed up for. They were told that they were going into a purely theoretical field. To turn around and demand applications is called a bait-and-switch.

For many professors, I think this could easily be addressed by shifting the focus away from research back to where it belongs: teaching.

Same as above. The universities hired these professors on the basis of their research. If they want teaching staff they should hire teachers. The researchers will seek greener pastures elsewhere.

As for taxpayers demanding applications? Vote for it! Give the money to DARPA or Johns Hopkins instead of Princeton IAS. But to demand that theoreticians come up with applications on the spot is disingenuous.


Most people don't have the luxury of tenure. Frankly, most people don't have the luxury of a single career that's well paying throughout their professional life. If funding dried up tomorrow and every number theory professor was out on their ear, that wouldn't be a bait and switch. That would be a uniquely nice and pleasant situation for them coming to an end.

It's probably also good to contextualize this historically. The NSF has only existed since 1950. The high degree of funding for pure math, let alone the rest of science and engineering, could easily be a historical blip.

I don't really care what happens to people who are currently employed doing some kind of useless research in academia. Indeed, they got hired for it. I'm simply pointing out that it's totally fair and reasonable for anyone to ask whether it's a good idea to employ people this way. Your earlier comment seems to suggest that you don't think it's OK for people to call this into question.

I'm also not particularly worried about it. I'm not sure how much money IAS gets, but my guess is that DARPA and the DOE get a hell of a lot more, which (at least in my opinion ;-) ) is good.


It may be fair but it's definitely not reasonable. Progress in mathematics has foreshadowed practical applicability sometimes by centuries. It's short-sighted to consider applications outside of mathematics in the hiring of mathematicians.

In that context, it's worth noting that research mathematicians almost always mean "applications to other fields of mathematics" when they talk about the applicability of their research.


I don't agree. You're basically sticking your head in the sand saying that because there have been unanticipated benefits sometimes hundreds of years later, we can't even attempt to decide what research is valuable.

If the public is paying for research, it is both fair and reasonable to ask questions about its utility. It's as simple as that.


> it is both fair and reasonable to ask questions about its utility.

What questions, though? How do we know that these questions are well-calibrated for the long-tail cases that end up being valuable centuries later?


It’s totally fine if you want to say “defund pure mathematics!” I was complaining about the bait-and-switch people do when they take someone who was hired to do pure mathematics and then demand applications from them. It’s disingenuous and unfair. Likewise for the superstar researcher who is forced to teach!


I never said we should defund pure math.

How many pure mathematicians have gotten tenure in math departments and were subsequently forced to work on "applications"? Do you have any actual examples?

How many superstar researchers have been forced to teach against their will? If they're working as a regular, tenure-track professor, then teaching is part of their job. The word doctor comes from the Latin word for teacher!


You didn’t say “defund pure mathematics” but you implied it with your original statement:

A mathematician or scientist working on something terribly narrow and abstruse is arguably only working on something in service of making themselves happy and fulfilled. Since they are frequently supported by taxpayer money, asking whether what they're working on is worth that expense is totally legitimate

To me this reads as “pure mathematics is a taxpayer-funded hobby that provides no value to society.” Is that an unfair reading? I don’t know how else to parse the statement that mathematicians are “arguably only working on something in service of making themselves happy and fulfilled.” If that is what they’re doing then why shouldn’t they be defunded?

As for teaching, plenty of pure mathematicians do teach graduate students. Asking a world-leading number theorist to teach first year algebra class is silly though. Universities hire dedicated teaching faculty for those courses. Many of the expert researchers are terrible at teaching.

I also want to add that the reason universities covet superstar researchers (in any field) is because they bring in grant money (paying for themselves and then some) and because research elevates the profile of the school in the world rankings. The best schools have the best researchers but that doesn’t mean they provide the best education. Trying to fix this is an enormous challenge because you’ve got to tackle the issue of schools competing against each other for students. The losing schools are going bankrupt [1].

[1] https://www.cbc.ca/news/canada/ottawa/queens-university-budg...


I'm not saying we should defund math. I'm saying that it's perfectly fair and reasonable to think through how much sense it makes to allocate research funds for pure math. More specifically, I'm pushing back on the idea that it isn't okay to discuss this because past math successes have been useful in other areas. People really seem to cling to this argument and it doesn't make any sense to me. Maybe, at the end of the day, after thinking about it, it will make sense to continue funding pure math or even fund it more.

I agree that having a top number theorist teach Algebra I is a waste, but it's not a waste having them teaching undergraduate abstract algebra or number theory. The bigger issue is that in many places, there is very little expectation of teaching quality. I'm not sure this is connected to the issue of competition over students. The average math undergrad isn't picking a school based on how many Fields medalists it has.

I should say that I took number theory in undergrad from a top number theorist, and he was a phenomenally good teacher. I could have taken abstract algebra from him, too, if I had timed it right. Taking the craft of teaching seriously and being a good researcher aren't mutually exclusive!


> catch them off guard and put them on the spot

No, it's the researchers who first claim their work is applicable, in their papers, in lectures, etc. I have read entire books with titles like "Applications of X" and later discover all the claimed applications were unrealistic.

Hardy was a prickish snob who claimed in that book that any math that can be applied is dull.


> that the math done for its own sake is just not as beautiful to me as math that has applications.

I think the reverse. In general, the math done for its own sake will be more beautiful.

Think of it this way: mathematicians doing math to get someplace will accept a long path that’s ugly and/or extremely dull, as long as it gets them there; mathematicians doing math for math’s sake will prefer to walk nicer paths.

But of course, beauty is in the eye of the beholder; those who define ‘beautiful’ as ‘has direct application’ will disagree.


This is the reason I prefer the intuitionist view of mathematics vs the more common Platonic view. Something as simple as the Pythagorean theorem is true because it has found so many ways to solve practical problems; the Sumerians and Egyptians basically worshiped that formula.


I really don't think that intuitionism has that much to do with practical application—rather, it's a view about what it means for something to be "true" in the first place. That is to say, whether an intuitionist accepts a piece of mathematics is not contingent on whether it has any practical applications.

I suppose one could argue that something proven in an intuitionistic manner would have a higher probability of being practically useful (since it could e.g. be extracted into an algorithm), but IMO any such correlation is pretty weak.


This is simply wrong. There is loads of academic research born out of industrial collaborations which is absolutely focused on how useful it is. Applied research exists in academia.

On the flip side, if you think the only place you can do something "applicable" is by working as a quant... again, wrong!

If anything, the idea that research is free to disregard applicability (read: "relevance to anyone outside academia") is what's a lazy canard. :-) People who are wrestling with whether their research is applicable are struggling to figure out how to connect it to the rest of society, which is valuable. It's responsible of them to do so.


You didn't engage with the argument of the comment above you at all, you engaged with imagined arguments: that there is no academic research that is focused on applications, and that the only place to do something applicable is finance. The above comment said nothing of the sort; it only claimed that research did not have to be motivated by applications, and provided evidence: such research in the past has been incredibly applicable.


'If you want to work on "applicability", go get a job in industry.'


decades of work in number theory

Centuries of work. To Galois and at least all the way back to the Chinese Remainder Theorem, and to Euclid as well! Mathematicians have been fascinated by this stuff for a very, very long time.


The Chinese Remainder Theorem had explicit applications to astronomical cycles and calendrical calculations right from the start. See https://people.math.harvard.edu/~knill/crt/lib/Kangsheng.pdf

The "Euclidean" algorithm was also directly relevant to both plane geometry and astronomical calculations. Many of the relevant ideas were worked on by Plato and his contemporaries at the Athenian Academy, and then in the next generation by Eudoxus and his followers. (Unfortunately most of the precursors to the Elements have been lost.)


Fun fact sort of about the Chinese Remainder Theorem:

It appears to have been named for its appearance in a medieval Chinese mathematical treatise. But it always reminds me that in old Chinese texts it is frequently necessary to determine a number given its remainders modulo 10 and 12.

Why? Because that is the formal way to record dates! It has been for more than 3000 years. There is a Chinese cycle of 10 sequence markers, the "heavenly stems", and a separate cycle of 12, the "earthly branches". Because Chinese characters have no real ordering (there are ordering systems, but only specialists would bother learning or using them), the stems and branches are traditionally used when you need to give arbitrary names to things. (Examples of this survive in modern Chinese -- the two parties to a contract are 甲方 and 乙方, after the first two celestial stems 甲 and 乙 -- but modern Chinese are likely to use Western letters like A and B for their arbitrary names.)

The stems and branches together form the cycle of sixty, from 甲子 (1-1, meaning one) up to 癸亥 (10-12, meaning sixty). The system is so old that instead of incrementing one place value at a time for a cycle of 120 the values of which can be easily calculated in your head, incrementing bumps both place values, so the value following 甲子 is 乙丑, 2-2, meaning two. What does 乙卯 refer to? Well, 乙 is 2 (of 10) and 卯 is 4 (of 12). A little work with the Chinese Remainder Theorem will get you there!
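The lookup described above can be sketched in code (the function name and brute-force search are illustrative, not any standard library). Since gcd(10, 12) = 2, only stem/branch pairs of matching parity ever occur, which is why the cycle has lcm(10, 12) = 60 terms rather than 120:

```python
def sexagenary_index(stem, branch):
    """Cycle position (1-60) for a stem (1-10) / branch (1-12) pair.

    The n-th term of the cycle pairs stem ((n-1) % 10) + 1 with
    branch ((n-1) % 12) + 1, so we just search the 60 possibilities.
    """
    for n in range(1, 61):
        if (n - 1) % 10 == stem - 1 and (n - 1) % 12 == branch - 1:
            return n
    return None  # parity mismatch: only 60 of the 120 pairs ever occur

print(sexagenary_index(1, 1))   # 甲子 -> 1
print(sexagenary_index(2, 2))   # 乙丑 -> 2
print(sexagenary_index(2, 4))   # 乙卯 -> 52
```

So 乙卯 is the 52nd term of the cycle, exactly the kind of answer the Chinese Remainder Theorem guarantees is unique within one cycle of 60.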


Astronomy had no applications for centuries (unless you count astrology but do you really want to go there?). Saying that mathematics applies to it is disingenuous in the context of a discussion around the applicability of mathematics to the daily lives of an ordinary taxpayer.

Same goes for much of the theoretical work in geometry. Very little of it is applicable to daily life, with the notable exception of trigonometry and its uses in surveying and engineering.

But the results I was alluding to by mentioning Euclid were those from his work in number theory, including his namesake lemma and his proof of the infinitude of the primes, and the fundamental theorem of arithmetic. Those results had no application to daily life until the advent of modern cryptography.


It seems like you are unfamiliar with technology before the past few centuries. Beyond astrology (which was important!), people used these theoretical tools from geometry and number theory for time keeping, calendars, cartography, navigation, surveying (you mentioned), city planning, architecture, construction and analysis of machines, hydrology, water distribution, crafts such as metalworking and carpentry, accounting, optics, music, any kind of engineering involving measurements, ....

Obviously mathematicians were also excited about numbers and shapes for their own sake, and some of their activities (e.g. trying to square the circle or trisect an angle using compass/straightedge) were more like puzzles or games than engineering. But that doesn't mean the broad bulk of mathematics (even deductive proof-based mathematics) was practically worthless. Some of the most famous mathematicians of antiquity were also engineers.

Fluency with prime numbers per se (and prime factorization) is pretty useful when you are trying to do substantial amounts of (especially mental) arithmetic.

I'm not sure which "taxpayers" you are thinking about, but establishing an effective tax system and doing tax assessment and accounting is one example where having a more robust understanding of elementary arithmetic and number theory is pretty valuable. The development of the number and metrological systems and work on number theory in ancient Mesopotamia had quite a lot to do with tax accounting.


Who do you think was making the calendars around the cycles of which agricultural societies were based ?

And while we may think that astrology has no value, that certainly wasn't the opinion of most premodern societies !

(Though it all tended to be a bit lumped together, as the term "natural philosopher" indicates.)


> Nobody was much convinced that decades of work in number theory was worth much of anything until applications in cryptography were discovered. We just don't know what we'll find.

There are plenty of examples of this; Bézier curves are one such.


"Bézier curves" were separately invented by De Castlejau at Citroën (1959) and Bézier at Renault (early 60s) and were explicitly developed for their application to computer-aided car modeling right from the start.


that's not the history as I understand it but I also don't claim to be an expert.


You can read about it here, §3 https://web.archive.org/web/20170830031522id_/http://www.far...

Of course both de Casteljau and Bézier were building on other tools developed previously: most importantly, Bernstein polynomials, which were developed by Bernstein in service of a constructive proof of the theorem that every continuous function can be approximated arbitrarily well by a polynomial. You can read about that history here https://doi.org/10.1016/j.cagd.2012.03.001 https://www.researchgate.net/profile/James-Peters-3/post/Sin...


From memory, I recall reading that there was work done by a French mathematician that didn't seem to have much applicability until the advent of 3d modeling.

I'm probably wrong here, it's mostly from memory some 20 years ago. But having said that, I think the overarching point is still valid, it's often the case that someone does something to explore and an application for it is discovered, sometimes posthumously.


De Casteljau and Bézier were both French mathematicians paid by different French car companies to work on systems for doing computer-aided design for cars, because of advantages vs. building physical scale models.

Bernstein was a Jewish Ukrainian mathematician who studied in Germany and later worked in Moscow, and his main research was about partial differential equations, probability, and function approximation, three "applied" topics. (Of course all of these topics also have theoretical aspects; such is the nature of mathematics.)


This assumes that the parts of number theory that ended up being useful could not have been developed after people realized you could do public key cryptography with primes.

If the work is undertaken for its own sake, there should not be a need to argue about how it will be useful in the future.


No, we could not have waited to start doing number theory until after we discovered that it's useful for cryptography. It took thousands of years to get to the point where we understood it well enough to use it. Its use would never have occurred to us if we had not discovered it beforehand. That's completely the wrong causal direction.

What completely undiscovered branch of mathematics do you think we should explore based on an immediate need that we have right now? Not so easy, is it?


What specific, useful things in cryptography would never have happened if we had not been studying number theory for thousands of years? Even if there are some examples, would we be significantly behind in useful capability if we didn't have those specific results?

It's more efficient to work backwards from the problems you have and build out the math. That's what they did with a lot of linear algebra and functional analysis when quantum mechanics came about. I am not saying discovery-based exploration would never work; I am saying it's inefficient if the goal is technological progress.


It just feels like asking which bits of a human wouldn't have been possible without having evolved for billions of years. It's an interconnected body of work that made cryptography possible at all. So ... all of it? I know it sounds like I'm copping out of the question, and maybe I am, because it's a really complicated question you're asking. I just don't know how you're imagining humanity came up with the ideas for:

- Diffie-Hellman key exchange without a deep understanding of quotient groups (and their properties, and proofs of their properties), large (co)prime numbers, and computability

- Quotient groups and its applicability to this problem without a deep understanding of group theory, equivalence classes, isomorphisms, etc.

- Large (co)prime numbers without work by Euler, calculations of GCD, proofs of infinite primes, understanding their densities and occurrence on the number line, etc.

- Computability without tons of work by Turing, von Neumann, Church, Babbage, Goedel, etc. relying on ideas on recursion, set theory, etc.

- Ideas on recursion and set theory without work on the fundamental axioms of mathematics, Peano arithmetic, etc.

- Group theory without modular arithmetic, polynomials and their roots, combinatorics, etc.

- Polynomials and their roots without a huge body of work going back to 2000 BC

- Calculations of GCD without work by Euclid

Most of these generalized abstractions came about by thinking about the more specific problems: e.g. Group Theory only exists at all because people were thinking about equivalence classes, roots of polynomials, the Chinese remainder theorem, modular arithmetic, etc. Nobody would have thought of the "big idea" without first struggling with the smaller ideas that it ended up encompassing.

You can't just take out half of these pillars of thought and assume the rest would have just happened anyway.
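To make the first bullet concrete, here is a toy Diffie-Hellman exchange over the multiplicative group mod a small prime. The numbers are purely illustrative; real deployments use primes of 2048+ bits and randomly generated secrets:

```python
# Public parameters: a prime modulus p and a generator g of (Z/pZ)*.
p, g = 23, 5

# Each party picks a secret exponent (fixed here for reproducibility).
a, b = 6, 15

A = pow(g, a, p)  # Alice sends g^a mod p over the open channel
B = pow(g, b, p)  # Bob sends g^b mod p

# Each side raises the other's public value to its own secret:
shared_alice = pow(B, a, p)  # (g^b)^a mod p
shared_bob = pow(A, b, p)    # (g^a)^b mod p

assert shared_alice == shared_bob
print(shared_alice)  # -> 2, the shared secret
```

The security rests on the difficulty of recovering a from g^a mod p (the discrete logarithm problem), which is exactly where the group theory and prime-number machinery from the bullets above comes in.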


I agree that it's hard to imagine an alternate history when things happened through a mixture of pure and application-motivated work. In each example, I can see how people arrive at these notions through an application-driven mind-set (transformation groups, GCD through simplifying fractions during calculations, solving polynomial equations that come up in physics calculations). Computability and complexity, in the flavor of Turing's and subsequent work, I already see as application-driven work, as they were building computing machines at the time and wanted to understand what the machines could do.

Related to this topic. I highly recommend this speech / article by Von Neumann: https://www.zhangzk.net/docs/quotation/TheMathematician.pdf



