The dirty secret of mathematics: We make it up as we go along (2018) (medium.com/q-e-d)
114 points by yamrzou on Sept 15, 2022 | 103 comments



Math is presented in a way that's supposed to be organized, compact, and categorical. If we taught math the same way math was proven and discovered, it would be so slow and inefficient that we would still be covering linear algebra in post grad.

As an analogy: The 1,000th person to climb Mt. Everest takes a well defined path that has already been mapped out as the most efficient path to the top. If every single person had to go through the treachery of finding the dead ends, cliffs, crevices, and death traps that the first few climbers endured, it would be a journey only a few could accomplish.

Most people (computer scientists, engineers, chemists, physicists) using math only need to reach the top and see the view from the peak. The few climbers that are really dedicated to climbing (ie, the math researchers who reach the frontier of math) will naturally learn about the rest of the jagged, unmapped landscape as they climb harder and unconquered mountains.


I think that's one useful way to view it. Math is an infinite "mountain", and the higher you get, the rougher the summaries.

One thing I should mention. I have been reading The Great Formal Machinery Works: Theories of Deduction and Computation at the Origins of the Digital Age by Jan von Plato. One thing I notice is that a lot of mathematicians' activity is not proving more and more complex theorems but rather producing a framework that takes a series of complex results and shows them to be much simpler within the framework.

And that's just to say, the organization of math isn't just a matter of simplification for the layman, it's part of the progress of math itself.


Good points.

This is essentially compression of knowledge at play. To make the next discovery it makes sense to get compressed information about the previous discoveries.

Historians are often more interested in the various routes attempted to achieve scientific discovery -- which failed, which succeeded etc. Scientists are interested in climbing to the next peak (of knowledge) with just sufficient knowledge of how we came to the current location.

It always helps to know a bit of history. You might encounter problems while climbing to the next peak, and knowing a bit of history might give you some additional tools to solve them.

However, you must be judicious. Learn too much about the past and you won't have much time to create the future. Also if you learn too little about the past you may not be well equipped to deal with upcoming challenges. It is a balance.


The current pedagogy and curriculum is nowhere close to the “most efficient path” (nor is it the most intuitive, best organized, easiest to extend, ...). It’s just an arbitrary history-dependent path people happened to come up with, mostly centuries ago, and “refactoring” any of it is almost impossible.

It’s more like our current public transit system: it gets some people where they’re going, somewhat on time, but it’s generally pretty crummy and full of historical inequity.


I never said it was the most intuitive, just the most streamlined and most efficient. I'm having a hard time understanding why you think it's not the best organized. Most math courses are taught with a pretty straightforward approach: start with the axioms and definitions, prove the easy and auxiliary theorems that are easily derived from the axioms, prove the fundamental theorems that make the subject useful. In other words, the shortest path from the axioms to the important theorems. I don't see a way to make it more organized or compact, but I'm open to hearing what you think is a more condensed or organized way to teach math.

And no, this is rarely the most intuitive or contextual way to learn math. Another analogy - a library doesn't sort its books by which ones were best reads or most influential, but by topic and author. Similarly, math curriculums are organized by a hierarchy of which theorems can prove the next theorem, with no explanation of which ones are important. Organization doesn't always provide intuition.


What you describe is probably the least efficient way to learn math - or anything else for that matter - because you’re trying to learn things you can’t, at that point and for a while afterwards, reason about. They have no connection to anything else. Compare that with the opposite - learning stuff you can already mentally associate with parts of known reality.


> learning stuff you can already mentally associate with parts of known reality.

This is not always feasible or effective. Sometimes it's just better to start by doing some simple reasoning about things in isolation, and build the proper connection and context afterwards.


> It’s just an arbitrary history-dependent path people happened to come up with, mostly centuries ago, and “refactoring” any of it is almost impossible.

This is absolutely not true. If anything, math education has a tendency to keep losing intuition over time as it's refactored for modern approaches and notation.


There are few (if any) important differences between algebra textbooks from 400 years ago, trigonometry textbooks from 300 years ago, and calculus textbooks from 200 years ago vs. their current counterparts. The way we teach vector calculus is more than a century old. Introductory statistics courses still often haven’t caught up with the existence of computers. Undergraduate level math textbooks from 60–90 years ago are still among the most popular course sources across most subjects, including abstract algebra, analysis, etc. Hot “new” material comes from the 19th–early 20th century. The curriculum (at least say 8th grade through undergrad level) is calcified and dead, like a bleached coral.

Once you get to math grad school you can find more material that uses approaches and notations that are only about 50 years old.

The most significant “recent” change to be found from the 20th century is the “Bourbaki-zation” of mathematics, especially sources intended for expert readers: cutting out pictures, intuition, and leading examples in favor of an extremely spare and formal style that alienates many newcomers and chases them out of the field. And I guess at the high school level, there’s the domination of pocket calculators (displacing slide rules) which came about in the 1970s–80s.

There is massive, massive room for improvement across the board.

If you read works by e.g. Euler, other than being in Latin they still seem pretty much modern (we did tighten up some of the details in the century or two afterward), because much less has changed in the way we approach those subjects than you would expect. By contrast, if you read Newton or his contemporaries/predecessors, the style is often completely different and almost unrecognizable/illegible to modern audiences, building on the millennia old tradition of The Elements and Conics.

For another serious transformation, look to the way computing is taught, which has changed quite dramatically in the past 50 years. Nothing remotely like that is happening in up-through-undergraduate mathematics.


Can you recommend a book that you think presents, say, calculus significantly better than the books commonly in use?


http://www.science.smith.edu/~callahan/intromine.html is one idea (and read that page/book for a critique of the kind of typical ~200 year old textbook/course we still use today), but this could be a lot better with a bigger budget and more support.

Just look what you can do with high-production-value video animations: https://www.3blue1brown.com/lessons/essence-of-calculus


This is a fairly well written book. But I don't see anything qualitatively better about it than the best standard math textbooks. What am I missing?


It largely dropped the "memorize this set of symbol-pushing rules then apply them to a long list of exercises" version of differential/integral calculus found in typical introductory textbooks, in favor of a "make up a model for a situation, then program a simulation into a computer and see what happens" approach. That makes for a radically different experience for students.
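To make that concrete, here's a toy sketch of my own (not from the book) of the "model it, then simulate it" style, using Euler steps on a simple growth model:

    # Toy model: a population growing at rate k, i.e. dP/dt = k * P.
    # Instead of solving it symbolically, step forward and look at the numbers.
    def simulate_growth(p0, k, dt, steps):
        p = p0
        history = [p]
        for _ in range(steps):
            p = p + k * p * dt  # Euler step: dP is approximately k * P * dt
            history.append(p)
        return history

    # Start at 100 with a 5% rate and small steps; compare against 100 * e^(k*t).
    print(simulate_growth(100.0, 0.05, 0.01, 1000)[-1])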

It’s hard to simply say this is “better”: it depends what skills and content you are trying to teach. The more computing-heavy version arguably does a lot better job quickly preparing students to engage with scientific research literature (because differential equations are a fundamental part of the language of science). But it might make it harder for students to e.g. dive into a traditional electrodynamics course intended for future physicists, full of gnarly integrals to solve.

Most of the people proposing even more significant departures (in content or style) aren’t writing introductory undergrad textbooks.


Using simple computer simulations to teach introductory math courses is definitely a change that has been slowly happening over the past couple of decades.

However, different approaches don't just teach different "skills and content" as you say, but entire paradigms of thinking. There is mathematical thinking and there is computational thinking (and other types as well), and any course helps you step up the ladders of these paradigms by different amounts.

My experience teaching undergrad math/physics/cs for several years is that computational thinking is cheap in time and effort in the short term, and this creates a fixed point in how students think. If you give them the concept of, say, differential equations, and teach them some computational methods and some mathematical methods to solve these equations, they will always lean towards just using the computational methods. This seems all fine and dandy, except that when you move on to more advanced mathematical abstractions and the students have not mastered the mathematical way of thinking in the previous step, they are lost. They simply don't have the mathematical capacity to grasp the higher abstractions. And no amount of 3B1B fixes that lack of long-term investment in an important thinking paradigm.


"Elementary Calculus: An Infinitesimal Approach", by Jerome Keisler. Learning calculus is made harder than necessary by the legacy of clumsy epsilon and delta formalism. This formalism is not the intuitive approach Newton and Leibniz used to develop Calculus, based on infinitesimals, that was shunned later because it took time until Abraham Robinson made it rigorous in the 1960s. The author made the entire book available for free online: https://people.math.wisc.edu/~keisler/calc.html See also: https://en.wikipedia.org/wiki/Nonstandard_analysis


Open any book on differential geometry and compare the treatment of differentiation with the needlessly index heavy treatment in any undergraduate calculus textbook.

The point is that we treat the differential of a real valued function as a function/vector/matrix for historical reasons. The simpler perspective that always works is that the differential of a function is the best linear approximation of the function at a given point. But for historical reasons most math textbooks restrict themselves to "first order functions" and avoid, e.g., functions returning functions.

This also leads to ridiculous notational problems when dealing with higher order functions, like integration and all kinds of integral transforms.
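To spell out that perspective (the standard definition, in my own wording):

    % Df(a) is the unique linear map L (when it exists) such that
    f(a + h) = f(a) + L(h) + o(\|h\|) \quad \text{as } h \to 0
    % The Jacobian matrix is just one way to write L down in coordinates;
    % the definition itself never mentions indices.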


> full of historical inequity.

Granted, the academic profession has historical inequality, but what about the math itself displays that?


The biggest problem is that it is very unfriendly to uninitiated newcomers and makes insufficient effort to draw people in. You end up with a culture that is unfortunately insular and has trouble engaging with even engineers and scientists, much less the general public. It’s also not very friendly to people who approach problems in different ways: symbol pushing has been elevated and anyone who has difficulty with symbol pushing (for whatever reason) ends up at least partly excluded.

Students who have a lot of practice/experience by the time they get to be teenagers (often via extra-curricular help and support) are much better prepared than those without that practice. Which is of course not a problem per se, you see the same in any field and it’s great if kids want to learn ahead of their peers. But then the content, curricular design, and pedagogy of mathematics courses leave students with the impression that those differences in preparation are due to innate differences in aptitude (“I suck at math”; “she’s just a math person”; ...), toss less well prepared students into the deep end to sink without enough support, and ultimately chase a huge number of people away who might otherwise find the subject beautiful and interesting, and could meaningfully contribute.


Well, we can't know that until we find (or fail to find) a more effective way of teaching (or a way to do math without "symbol pushing", for that matter).

Until then it will not be wise to break what works (even for a minority of students).


Current incentives are set up to make even the most trivial attempts to run against the mainstream definitions and notations extremely difficult.


I don’t think it’s fair to say they are set up to do that. They weren’t conceived with that purpose. It’s just a fact of life that once we’ve invested a huge amount of effort in one set of conventions it’s very costly to change those conventions.


I don’t mean that some secret committee got together to “set up” all of the social incentives of the entire school system, university system, textbook industry, scientific publication system, engineering fields, etc.

What I mean is that there are incentives for the people involved in those systems which are extremely difficult to reform, and as long as the current incentives prevail it is all but impossible for anyone to refactor things like basic mathematical notions and notations.

Switching and retraining costs are high, gaps in inter-operability are expensive, and there is almost nobody who will achieve any career advancement through promoting changes to the high school and early undergraduate curriculum.

Mathematicians are generally most interested in pushing on the shiny boundaries of the field rather than trying to clean up the centuries-old material for novices. Teachers have their hands full enough with their students to do much new research in pedagogy. Practitioners in industry have their own problems to solve.


My mind goes to various, unfortunate notational conventions.


Such as? Many branches of mathematics have their own mutually unintelligible dialects of notation. Many longer papers or Ph.D. theses will create notation just for the context of the paper.


An obvious one is that traditional mathematical convention requires single glyph variable names due to the unfortunate decision to save paper by denoting the product by juxtaposition. That’s an admittedly trivial example though, even though the higher order consequences are considerable.

Computing science is where notation came into its own. Younger mathematicians have taken those lessons to heart, but as the old saying goes, progress comes one funeral at a time.

Being forced to mechanically parse and interpret a syntax has a way of really bringing out any ambiguity.


> that traditional mathematical convention requires single glyph variable names due to the unfortunate decision to save paper by denoting the product by juxtaposition.

It's not to save paper or because of the product. You don't know the solution to the problem you are working on from the beginning, and most of the time is spent writing and writing and writing in a scratchpad trying to work out what you need. Anything longer than a single glyph for variables would be too tedious, so everyone evolved to use single letters. And then the papers are written with the same convention since it's natural. You do have longer variable names through the use of subscripts, with the added benefit that they can be (and are) used to elegantly group relevant variables together, giving you some sort of abstraction.

I once wrote a comment about it here on HN - the language of maths is not a programming language used to tell a computer how to go from A to B, but a natural language used to talk about maths between peers. Every natural language has idioms, inconsistencies and other quirks. Polish will not change to make it easier for you to learn; it will change in ways that let Polish people communicate better with each other, which also includes a lot of historical and cultural happenstance. Same with maths.

There are attempts like Esperanto and other artificial languages, and I think any attempt at 'codification' of maths into some programming language has about the same chance of achieving wide adoption.


> There are attempts like Esperanto and other artificial languages, and I think any attempt at 'codification' of maths into some programming language has about the same chance of achieving wide adoption.

Aren't existing programming languages already types of codified artificial math dialects which have seen wide adoption?


That’s a good point.

Programming languages are more for humans than for computers. Otherwise we’d be writing our programs in 1s and 0s, and extending our editors in Emacs Binary and VSCode BinaryScript.


> language in maths is not a programming language used to tell a computer how to go from A to B, but a natural language

Right, we're on the same page, I just think this is a bad thing and you evidently think it's a good thing. I'm well aware many mathematicians don't, because it's how they were trained and unlearning is the hardest kind of learning. The ambiguity[1] of natural language is observably ill-suited for formal reasoning, and the experience of computing science has shown this conclusively.

Do bear in mind that the pioneers in our field were virtually all trained mathematicians. They were well aware of the historic faults of the field because having to make programs actually work forced them to be.

The legacy fuzzy pencil and paper approach of traditional mathematics is going to end up being to proper formal mathematics just as what's now called philosophy is to formal logic.

[1] Let's not confuse ambiguity with generality.


> single glyph variable names due to the unfortunate decision to save paper by denoting the product by juxtaposition.

Programmers tend to have a lack of fluency with written math that they don't even notice: the concise names are not to save paper or make writing easier or anything like that. They're there because they make the structure of expressions easier to visually identify and parse. The shapes of expressions are an incredibly important feature of the language and often contain implicit structural analogies. You need to be able to see those analogies to correctly read mathematics, and long variable names would obscure that part of the language.

I suppose it's similar to having enough fluency in a natural language to mechanically translate the words of a poem, but you can't properly read things like the metre, so you've unknowingly missed half of what the author originally wrote and lost it all in translation.


I haven't encountered much resistance to n_{arbitrarily complex subscripts}


> An obvious one is that traditional mathematical convention requires single glyph variable names due to the unfortunate decision to save paper by denoting the product by juxtaposition.

Generally you'd use upright text in square brackets to denote longer variable names, the notation is often seen in applied fields. But this quickly becomes clunky with longer expressions.


> Being forced to mechanically parse and interpret a syntax has a way of really bringing out any ambiguity.

This is absolutely beautifully said user23. I as a programmer often struggle with understanding notations used in some papers.


There's a value in compact / structural notation though, to an extent of course. But I come from the world of verbose application programming :)


Like APL notation being a tool of thought.


A bit of that, even though APL arguably pushes things too far (maybe that's what you implied). Parametric types also help make the point, along with abstract combinators, recursion schemes... All very helpful for defining and manipulating ideas.


Is there something you could say about these unfortunate notations or an article you could point me to so that I can understand what they are?


For example, calculus uses notation and terminology that predates the modern limit-based field Weierstraß and others built. It's really confusing [1].

Statistics is even worse. A mix of old tricks developed to avoid computations when these were expensive. See [2].

[1] A Radical Approach to Real Analysis https://www.davidbressoud.org/aratra/

[2] The Introductory Statistics Course: A Ptolemaic Curriculum? https://escholarship.org/uc/item/6hb3k0nz


Limits are not an inherent part of calculus. You can do all calculus relevant for the physical world just fine with nilpotent infinitesimals if you but give up excluded middle.


I've heard of this constructivist approach to calculus, but hadn't made the connection with nilpotents. That's really interesting - could you explain why nilpotency and forgoing the law of the excluded middle relate to each other?


You can use nilpotents with classical logic and the excluded middle. This is called dual numbers and it's already a good model for "calculus without limits". They are like the complex numbers, but instead of an element i with i^2 = -1 you adjoin an element ε with ε^2 = 0.
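Concretely, with a nilpotent ε (my own worked example):

    (a + \varepsilon)^2 = a^2 + 2a\varepsilon + \varepsilon^2 = a^2 + 2a\varepsilon
    % and for any polynomial p,
    p(a + \varepsilon) = p(a) + p'(a)\,\varepsilon
    % the derivative appears as the coefficient of \varepsilon, with no limits.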

However, if you want to get really serious about that, you'll need that zero plus an infinitesimal be equal to zero. This is impossible in classical logic due to the excluded middle (which forces each number to be either equal to zero or non-zero).


Can you recommend an introductory calculus book that builds it up from dual numbers?


The Silvanus P. Thompson book suggested by the sibling comment is lovely and very clear.

For a more algebraic treatment, and its important applications to automatic differentiation, I'd suggest starting with the relevant wikipedia articles:

https://en.wikipedia.org/wiki/Dual_number

https://en.wikipedia.org/wiki/Automatic_differentiation
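And here's a tiny forward-mode sketch (my own toy code, not from either article) of how dual numbers give you automatic differentiation:

    # Minimal dual-number class: carries a value and a derivative (the "ε"
    # coefficient), so ordinary arithmetic propagates derivatives automatically.
    class Dual:
        def __init__(self, val, der=0.0):
            self.val, self.der = val, der

        def __add__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.val + other.val, self.der + other.der)

        __radd__ = __add__

        def __mul__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            # product rule falls out of (a + a'ε)(b + b'ε) with ε² = 0
            return Dual(self.val * other.val,
                        self.der * other.val + self.val * other.der)

        __rmul__ = __mul__

    def derivative(f, x):
        return f(Dual(x, 1.0)).der  # seed with dx/dx = 1

    # d/dx (x² + 3x) at x = 2  ->  2*2 + 3 = 7
    print(derivative(lambda x: x * x + 3 * x, 2.0))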


You could try Calculus made easy by S. P. Thompson.


My point is not that limits are an inherent part of calculus. My point is that calculus as currently taught mixes infinitesimal-like notation that predates limits with limit-based calculus.


So, like, inequality against people with visual disabilities and dyslexics?


I would counter that the current pedagogy -- at least high school through early undergrad -- is the most efficient path, or close to it, for teaching students to become electrical engineers in the analog era. Historically, that was the most math-heavy profession that had a lot of jobs (not just professors/researchers). We just haven't updated it in a long time.


That’s probably not too far off the mark... with the proviso that we are fixing the notation, terminology, and problem solving methods for electrical engineering to what was historically used in the 1950s, and not allowing any more radical “refactoring” of those ideas or methods.

I don’t think this is actually the most effective way to train analog electrical engineers, or the most effective possible set of conceptual/notational tools for practical electrical engineering.


That’s a great analogy. Thank you.


It's not a secret. It's what makes it so impenetrably annoying that the smartest people move on to problems they can have more of an impact on, casting suspicion on anyone who survives in the current establishment. There was a post the other day from stopa.io about implementing Russell and Whitehead's "Principia" in LISP, and then describing Gödel's theorems relative to it, that to me was the future of maths. I'd posit that math is in the midst of the equivalent of the AI winter of the 80's and 90's, where current establishment thinking will be subsumed by practitioners just making shit work as the result of a generation of kids with access to things like Coq and other theorem provers, and a bunch of hackers using functional languages, just like "AI" happened as the result of some adequate tooling available to people without gatekeepers. I think we're on the precipice of a renaissance.


There's been a ton of great work in math lately! I mean in number theory alone we have tools which would have made the greats of the past weep with joy, and we've only just begun to scratch the surface of what they can do. Implementing a formalism that frankly does not have a significant impact on any area of math which isn't directly based on working with it is not a sign that math as a whole is trending towards programming. Theorem provers are, obviously, getting more awesome by the day though. Not quite there yet, but damn are they close.


Indeed, it's not that math is trending toward programming, but programmers are trending toward math, which means more minds on the problems acquiring the tools.

It's exciting to read about too. Quanta has some of the best writing I've ever read anywhere. I am a layman who is enthusiastic about it and sees the most important problem to solve as being how to scale to new minds faster. The most influential people in my own life were not the best at what they did, but the ones who let on that the bar to doing those things and being a part of the conversation was closer and more achievable than the headlines about virtuosos made it appear. We're only ever 3-5 years from learning anything, imo, and I think a lot of the opportunity to have an impact will be in writing the next GEB, Chaos, Emperor's New Mind, or other popular treatment, where instead of focusing on and solving one problem, we can inspire and apply a million new minds to several of them.


There's an enormous amount of work to do before what you're saying can be possible. All of existing mathematics has to be encoded as a wikipedia-style library in whatever flavor of theorem prover will win this round (Coq? Lean?) and as someone who has taken a decent run at encoding some theorems - it is non-trivial, both in terms of learning the theorem prover language, learning the math, and learning how to translate the math into the theorem prover language. The people worldwide who currently have the skills to do this probably number in the hundreds, and all of them are faced with the choice of contributing theorems or working incredibly lucrative finance/software jobs - or just researching novel mathematics! Even once the library is complete, a huge amount of work remains to get using theorem provers to the same level of ergonomics as sketching things out on a chalkboard.
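For anyone who hasn't seen what that translation work looks like: even a trivial fact has to be spelled out explicitly, something like this (Lean 4 syntax, a throwaway example of mine, not from any library effort):

    -- A one-line statement and proof; real research-level theorems need
    -- hundreds of such lines plus exactly the right library lemmas.
    theorem add_comm_example (a b : Nat) : a + b = b + a := by
      exact Nat.add_comm a b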


There's a really good book called Burn Math Class which highlights this fact very well, and actually helps with the learning process, making advanced math less intimidating.


Thanks for this recommendation. I just ordered a copy.


There are history of math courses. I took one and enjoyed it, but it's a vast topic, reaching into ancient history. It's unrealistic to cover much of it.

And on top of that vast pile of material, there is the history of how teaching math has changed over time. My professor had some early US textbooks.


I'm an elementary school math teacher and would love to learn more about this! Is there anything you can share about this course (syllabus, etc.)? Email address is in my profile


It's been too long. It would have been this course: https://catalog.sonoma.edu/preview_course_nopop.php?catoid=6...

There are some other courses you can find by searching "History of mathematics syllabus", such as this Berkeley one that has quite a few usable links and references: https://math.berkeley.edu/~wodzicki/160/



He wrote a second book called "Measurement" which I loved!


I appreciate these kinds of exposés, and students should know these things, but I'm not sure why one would ever assume otherwise. (That is not to say, I know that people do, but I don't know why.) Novels are presented to us in tidy, precise, packaged form—well, there are some novels that experiment with the form, but certainly it's the norm—but no-one feels the need to explain that a novelist is making it up as they go along. What's different between math and English—well, a lot, obviously, but specifically, what's the difference that causes non-practitioners to assume that the former is eternal and fixed, whereas the latter is a product of human creativity?


This might be rude but... No shit? Maths is a deductive process, as in, you PRODUCE an answer given the problem. It necessitates 'coming up with' an answer.

I think the author is making a fundamental confusion between maths as a class taken in school or uni, and maths as an intellectual discipline. The class is dogmatic and over structured, but that's due to administration pressures, not because mathematics teachers aren't aware of the issue. The discipline, while still adhering to structure, allows for reframing the problem and always has.


> I think the author is making a fundamental confusion between maths as a class taken in school or uni, and maths as an intellectual discipline. The class is dogmatic and over structured, but that's due to administration pressures, not because mathematics teachers aren't aware of the issue. The discipline, while still adhering to structure, allows for reframing the problem and always has.

I think the author is not making that confusion, but rather trying to help remedy that confusion on the part of a student in a school or university class who hasn't had anyone to spell out this fact (that every mathematician knows—but not every non-mathematician!).


"Columbus did not reach India but he discovered something quite interesting."

What's going to bake your noodle is whether he discovered anything at all.


Cue Stan Freberg (do yourself a favor and listen to the whole musical):

""" Columbus: Hello there, hello there. We white men–other side of ocean. My name Christopher Columbus.

Native: Oh? You over here on a Fulbright?

Columbus: Huh? Uh, no,no, I’m over here on an Isabella, as a matter of fact. Which reminds me, I want to take a few of you guys back on the boat with me to prove I discovered you.

Native: What you mean, you discover us? We discover you.

Columbus: You discovered us?

Native: Certainly. We discover you on beach here. Is all how you look at it. """


You can tell it’s written by an American because a guy (an Italian for an extra punchline) from the 15th century mentions his “whiteness” for no reason whatsoever.


There are few useful definitions of “discover” that don’t apply to what Columbus did.

Discover doesn’t have to mean you’re literally the first human to behold something…


When I say "I discovered this great coffee place over by the river" it's apparent that I don't believe, and nor do I intend you to believe, that for example nobody else knew it was there, or even that none of your other friends knew it was there.

But when people say the Columbus "discovered" America, what useful information is that conveying? Do millions of Europeans "discover" America every year and we just forgot to mention that? If I saw a photo of Haiti in a book, did I "discover America" ? What if I see video of somebody's vacation to New York but mistakenly believe it to be in Germany, did I just "discover America" even though I don't even know the literal continent of America exists?

It is definitely not clear that Columbus ever realised he had set foot on a continent Europeans didn't previously realise existed. Maybe he knew this and pretended not to, or maybe he was too stupid to figure it out, we can't tell, it suited him better to have succeeded in finding a route to the East, by going West, which was definitely not what he'd actually done. His original plan literally doesn't work, but he got lucky.


A more accurate statement of his significance is that he started the European conquest of the Americas. That's the impact of what he did and distinguishes him from previous explorers (like Leif Ericsson) who stumbled onto the Americas earlier but didn't have a lasting impact.


Did he definitely know what he had found though? If you can discover something without knowing what it is then that opens up some interesting edge cases.


According to Merriam-Webster:

    Definition of discover
    transitive verb

    to make known or visible : EXPOSE
    to obtain sight or knowledge of for the first time : FIND
Even in other sources for the definition it is not necessary to either be the first or to 'understand' much of what you found/did. You simply have to have done 'it'.

He discovered America as in he found it for the first time for the 'western world' of that time. Whether he thought it was India or connected to it does not seem to matter. He still found it and even found it first as to the knowledge of the Europeans alive at that time.

It does not seem to matter that other Europeans (vikings) had also discovered it previously. Or that Asians had discovered it and stayed on the continent (native Americans)

Can you elaborate on what edge cases you mean and why that would be relevant?


It’s only interesting if you’re interested in arbitrarily narrow, contrived definitions of words that everyone is already pretty clear on.


What Columbus discovered was that there was land to the West of Europe, along the route he took, which was not known at the time.


"Was not known" by whom?


Let me just reverse this, re-using my words. Who do you imagine knew "that there was land to the West of Europe, along the route he took"?


Wow, he sure gets a lot of clout - his name showing up in this internet thread half a millennium later - for an achievement with so many qualifiers!


I think he was preceded by Norse navigators many times, by the way.


I think plants were actually the first to behold this great land.


I think fungi first crawled upon that great land.


This post isn't really that deep. The writer's discovery of the real nature of math is in line with essentially all disciplines. Things are usually presented as more rigid to make learning easier, but in all fields of research the lines are much more blurred.

But yeah sure it's good that more people understand this. When I used to teach math to university students, this true nature of math is something I always tried to get across. Students would tend to fight this by nature (possibly because they feel more comfortable in a world where mathematical "laws" truly are laws), but I usually pushed back since I thought the insight was important. I'm not entirely sure it was though.


I found advanced maths is like learning engineering. It's less fun than junior maths, which triggers thinking more efficiently.

So to me, the secret here is that advanced maths is presented boringly on purpose by mathematicians (like a note), rather than for educational purposes.


Advanced maths adds layers of boring bookkeeping you have to wade through before you get to anything interesting.


The spells are increasingly complex, but quite powerful.


The content of the article is good. I do feel the title is click-baity, though, and that wasn't necessary.

Progress in any field of endeavour is never a smooth process; I think most people who know a bit of science/math know that. This is not "making it up" as we go along; rather, I would say it is the usual process of experimentation. This is the normal process of discovery, and the article explains how math progresses in often non-monotonic, discontinuous steps.

So, once again, good article with good content but I wish there was some other title to it.


There's nothing wrong with presenting existing knowledge in the most organized, clear, and accessible way. Historical context is useful, but I don't think we need to spend a lot of time explaining the initial confusions.

When you try to discover new mathematics, it's always messy. But if you want to learn about discovering new mathematics, you should just try to do a research project, instead of trying to learn already known stuff pretending you are discovering it for the first time.


The dirty secret of every activity: we make mistakes and fix them. Mathematics is no exception.


Reminds me of a joke I once heard. A group of university dons are discussing next year's budget and which subjects can cut down on equipment.

The physicist says "I'm sorry but we need interferometers, telescopes, and lasers to teach anything".

The chemist says "Hah! All we need is some bunsen burners, test tubes and chemicals to run experiments".

The mathematician says "Pfft! All we need is pen and paper to explore ideas, and a trash can for when the idea turns out to be wrong."

The philosopher says "Well, all we need is a pen and paper".


It's more like an undiscovered country. It was already there; we just didn't know about it. And sometimes, when we first find it, we think it's something else entirely.


It's an open question as to whether mathematics is discovered or invented.


I'm definitely in the "math is discovered" camp after reading https://en.wikipedia.org/wiki/The_Unreasonable_Effectiveness... but clearly I have no proof.


Mathematics may be made up in the sense of what we use for notation or symbology but the underlying relations are timeless and superuniversal.

In any universe or species pi defined as the ratio of the path length traced by a set of coplanar points equidistant to a common point to the path traced by a set of points in a different plane intersecting exactly two points in the first path and the aforementioned common point will be the same as our pi.


I think you are oversimplifying in a way that eludes the point OP was trying to make.

Or rather two points. First that the process of actually creating mathematics is messy and largely made up as it goes along. I can create a new mathematical structure that turns out to not be very useful, etc. Secondly that the way math is taught typically hides this, and creates a very linear "greatest hits" approach which is misleading.

You are correct that one of the things that has come out of centuries of studying mathematics are clear definitions of abstract objects that almost have to have been found; but the day-to-day isn't that.

On the other hand, how something is taught and how it is practised often aren't that close to each other. Part of the reason the pedagogy looks the way it does is to distill centuries of thought and argument into a few credit hours.


If anyone gave a crap about your second sentence they'd point out that it's impossible to parse without punctuation or rewording or something, but the fact that everyone already knows what you're saying despite an inscrutable explanation really pokes a hole in the idea that you are bringing up anything remotely novel.


Is this still true in hyperbolic geometry, where the circumference of a circle of radius r is greater than 2πr?


Never used the word circle in my definition of pi. If planes curve in another geometry then no. I think these would be called surfaces rather than planes. You could define a plane as simply the set of points sharing a common normal vector not intersected more than once by lines oriented in directions that have a non-zero normal component (i.e. parallel planes are unique).


In hyperbolic and elliptical geometry the ratio of a circle's circumference to its radius is not constant. You can't define pi there as such.
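For concreteness, at constant curvature -1 the standard formula is:

    C(r) = 2\pi \sinh r \;>\; 2\pi r \quad \text{for } r > 0
    % so C(r)/r is not constant; it only approaches 2\pi as r \to 0.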

Anyway, pi comes up in various fields of math with no relation to the geometry of space we live in.

Ironically, pi comes from geometry (land measurement), and the land they measured has elliptical geometry.


For Pi to have any meaning requires a universe with at least two dimensions. Whether Pi would exist without a universe is a matter of debate.


> For Pi to have any meaning requires a universe with at least two dimensions

There is no such requirement. Mathematical constructs have meaning even outside of their most immediately obvious manifestation (such as the application of geometry to the understanding of physical space). Even a one-dimensional intelligent being, if such a thing is even possible, would eventually encounter pi, alongside the basics of Euclidean geometry, as soon as they start doing math on datasets measuring more than one feature, as such datasets are embedded in n-dimensional spaces, where n is the number of independent features, even if physical space has less than n dimensions.


So if a tree falls in a forest and nobody hears it, it doesn't make a sound?

Pi never has existed in a material sense.

It exists as a logical potential consequent of abstract definitions, regardless of whether the universe does or not.


Your definition of pi gives incorrect values of pi...

("a set of points in a different plane" is not constrained to be a straight line.)


The Moore method attempts to address this. We used it in some courses at UT-Austin.


> This is a taste of what mathematics looks like in practice. We make much of it up as we go.

Sorry, but I strongly disagree. This is not true at all. Math, when done well, is based on a logical foundation that makes it understood by its intended audience. NO, they are not just making it up or blowing smoke. The apparent hand-waving is still based on something which has a logical foundation, or is simply done to save space.


> Math, when done well, is based on a logical foundation that makes it understood by its intended audience. NO, they are not just making it up or blowing smoke.

I think you may be misunderstanding what the author is trying to say, or, rather, what part of a mathematician's activity they're trying to discuss. Mathematicians are not making it up, in the sense that we're not engaged in some elaborate game with shifting but meaningless rules. The final product of mathematics is fully as polished and rigorous as its weighty presentation suggests.

However, the act of doing mathematics is, or can be, messy, non-rigorous, and nonsensical on the way to discovering that perfect polish and rigor. Much like the old adage that if you never miss a flight, you're spending too much time at the airport: if every bit of a mathematician's work is up to the full standards of rigor required of a final publication, then that mathematician is not doing the best work that they could do.



