"Once in a while, I like to indulge in an informative anecdote concerning the genesis of the proof. The criterion was found by the end of 1985; then I remained more than six months making circles around the « splitting tensor ». One nice day of August 1986, I woke up in a camp in Siena and I had got the proof: I therefore sat down and wrote a manuscript of 10 pages. One month later, while recopying this with my typewriter, I discovered that one of my lemmas about imperialism was wrong: no importance, I made another one! This illustrates the fact, neglected by the formalist ideology, that a proof is not the mere putting side by side of logical rules, it is a global perception: since I had found the concept of empire, I had my theorem and the faulty lemma was hardly more than a misprint."
Complementing this situation, there are also cases where a proof hinges on a critical corollary or theorem where "no importance, I made another one!" did not or does not come as easily.
One recent example is Corollary 3.12 in Mochizuki's series of papers on Inter-universal Teichmüller Theory. This single corollary is the main topic of "Why abc is still a conjecture" by Peter Scholze and Jakob Stix:
"We are going to explain where, in our opinion, the suggested proof has a problem, a problem so severe that in our opinion small modifications will not rescue the proof strategy."
An argument can be made that, hence, this is no proof at all, and indeed that is the conclusion of the paper. However, mistakes in other suggested proofs have also been found in the past, and they could sometimes be salvaged with significant new insights and additional work. An example of this is Andrew Wiles's proof of Fermat's Last Theorem. Quoting from https://en.wikipedia.org/wiki/Wiles%27s_proof_of_Fermat%27s_...:
Wiles states that on the morning of 19 September 1994, he was on the verge of giving up and was almost resigned to accepting that he had failed, and to publishing his work so that others could build on it and find the error. He states that he was having a final look to try and understand the fundamental reasons why his approach could not be made to work, when he had a sudden insight that ...
Mochizuki’s paper seems like a debatable example. Many mathematicians have been baffled by the proof or ended up feeling that it was not that promising, even prior to attempting to do a detailed analysis. If it’s unrecoverable, then the community will look good for its hesitation. https://news.ycombinator.com/item?id=15971802
Collapses are rare indeed, but they do happen. Cf. the collapse of the Italian School of Algebraic Geometry [1], the cleanup of which took all the efforts of Grothendieck et al.
Great example. Weil is said to be the one who was deeply immersed in both the geometric tradition of the Italians and the functional perspective of Riemann, so he was in a unique position to see many analogies and deep connections. (He proved small pieces of it, and laid the roadmap for Grothendieck.) These are the kind of “ideas and understanding” that Schulman is talking about.
What collapsed were the proofs, and some if not all of the ideas survived and got painted over with the new language. Sadly, algebraic geometry has acquired a reputation for being obtuse and opaque, to the point that it can seem devoid of geometry. Mathematics needs to be more open about ideas that are not rigorous, instead of leaving them unspoken among the experts themselves.
If builders of a huge castle place some stones incorrectly, does it collapse completely?
No. Such errors may be discovered soon and taken down, or stand for decades, sometimes with lots of stuff built on top of them, and sometimes parts will collapse, but the overall structure of the castle is sound, and, sooner or later, errors will be corrected.
And of course, sometimes, somebody decides to start building an entire new wing to the castle using new rules, as, for example, happened when non-Euclidean geometry was discovered/invented.
On the analogy of castles, I asked a professor whether Godel's Incompleteness theorems affect their work. She told me Mathematics is this great castle, where in one quadrant you have the number theorists, in another the topologists. Every decade or century, the logicians would come running up from the basement with results confronting the foundations of math, but everyone else continued with their routine.
My memory from the logic classes I've taken is that after Godel, mathematicians largely abandoned Hilbert's, Russell's and Whitehead's ambition of founding mathematics on logic, and grew mostly uninterested in foundational issues as a whole.
As such, mathematics doesn't have a solid foundation, and most contemporary mathematicians seem to be ok with that, as long as it works and they can get interesting results.
I agree with the rest of what you wrote, but "doesn't have a solid foundation" seems a bit far. Mathematicians hand-wave that their proofs can be reduced to ZFC set theory, and despite reasonable wariness about that hand-waving, it has so far been borne out by all efforts at deeper inspection, all the way up to full machine-checked formalization.
Although I am starting to get out of my depth, my impression was that ZFC itself is (per Godel's results) inconsistent or incomplete. Here is some research to this effect:
If this is indeed the case (as it seems to my unprofessional first glance to be) then mathematics does not in fact have a solid foundation, despite its appeal to ZFC.
I'd love to hear a professional logician weigh in on this, if there are any on here.
ZFC is incomplete. There are many celebrated results that are independent of ZFC (the continuum hypothesis, large cardinals (many aren't even known to be consistent with ZFC), Suslin's hypothesis, etc.). However, by Godel we know this is true for any reasonable foundation of mathematics (i.e. anything capable of expressing the usual version of arithmetic). So it is not something that particularly troubles either logicians or other mathematicians.
Godel's results also show that we cannot use ZFC to prove that ZFC is consistent. However, it is strongly suspected that ZFC is consistent.
In practice, very few of the results independent of ZFC affect day to day mathematical work unless you're a logician or set theorist (which honestly start to blend into each other).
Logicians are often unsatisfied with ZFC as a foundation for other reasons. These mainly amount to the clunkiness of set theory encodings. They have no notion of types, so you can pose nonsense questions such as "is the number 2 a subset of the addition function?" There's no formal way of dealing with objects "larger" than sets (e.g. the collection of all sets) other than as syntactic formulas. They obscure relationships between different mathematical fields by resulting in completely different encodings for analogous structures.
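To make the encoding point concrete, here is roughly why such nonsense questions are even expressible; this is only a sketch with one standard choice of encodings (others exist):

    % Von Neumann ordinals and functions-as-sets-of-pairs:
    0 := \emptyset, \qquad 1 := \{\emptyset\}, \qquad 2 := \{\emptyset, \{\emptyset\}\}
    % A function is just a set of ordered pairs, so addition on the naturals is itself a set:
    +_{\mathbb{N}} := \{\, ((m,n), k) : m + n = k \,\}
    % Hence "is 2 a subset of +?" is a grammatically well-formed ZFC question;
    % with these particular encodings the answer happens to be no.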
This is why logicians will sometimes talk your ear off about how category theory is so much better as a foundational theory than set theory.
Again though, for the working mathematician, these flaws in ZFC as a foundation rarely matter.
I have a general algebra textbook that uses category theory as a foundation, and although I haven't gotten very far into it, my impression is that either category theory or ZFC can be used to derive the other one. As a result, it doesn't matter a whole lot where you start since you get both eventually anyway. If category theory adds tools that make it easier to prove theorems, that's great and useful, but no one really cares if it was foundational or derived.
This is one main reason why the larger mathematical community does not closely follow these foundational discussions (there are foundations that are strict extensions of ZFC but these extensions are usually conservative).
The usual response is that it would be nice if the formal foundations of mathematics corresponded more closely to the informal intuitions of mathematicians, which is arguably not true of set theoretic foundations and perhaps is more true of category theory. Whether or not you agree about the correspondence or whether even if you agree you find it at all convincing is up to you.
The short answer is "incompleteness is not a big deal" and the slightly longer answer is "if it's inconsistent, the proof of inconsistency is devilishly hard to find." If it does exist, the inconsistent part seems likely to be sufficiently esoteric that it could be "built around."
TBH, the construction of Riemann surfaces from the natural numbers is pretty well understood by the average UK undergraduate (Natural -> Integer -> Rational -> Real -> Complex -> foundations of Riemann surfaces). The expression of the rules of natural numbers in ZFC is fully understood by far fewer, but that's mostly because it's tedious.
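For reference, a compressed sketch of that chain, with one common choice of construction at each step (equivalent alternatives exist at every stage):

    \mathbb{Z} := (\mathbb{N} \times \mathbb{N}) / \sim, \quad (a,b) \sim (c,d) \iff a + d = b + c
    \mathbb{Q} := (\mathbb{Z} \times (\mathbb{Z} \setminus \{0\})) / \sim, \quad (p,q) \sim (r,s) \iff ps = qr
    \mathbb{R} := \text{Cauchy sequences in } \mathbb{Q}, \text{ modulo sequences tending to } 0 \text{ (or Dedekind cuts)}
    \mathbb{C} := \mathbb{R}^2 \text{ with } (a,b)\cdot(c,d) = (ac - bd,\ ad + bc)
    \text{Riemann surfaces} := \text{one-dimensional complex manifolds, locally modelled on open subsets of } \mathbb{C}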
AFAICT mathematical logic has always felt like a curious side show in the greater math community. If you were to ask professional mathematicians to list off all the axioms of ZFC, they'd probably miss a few and shrug. They probably wouldn't know about the more exotic parts of set theory such as large cardinals, nor would they probably know too much about alternate foundations of mathematics. They might not be familiar at all with the specifics of model theory.
The logical foundations of mathematics have always had a much smaller impact on the day to day lives of mathematicians than their foundational nature might suggest.
That's because formal logic is a (relatively) recent attempt to formalize the intuitions behind the math that people were already doing. Doing mathematics doesn't really depend on foundational logic in the same way that, e.g., a web app depends on transistor physics.
As somebody once said, if we ever found a contradiction in the ZFC axioms, we wouldn't throw out math, we'd just throw out ZFC.
> Doing mathematics doesn't really depend on foundational logic in the same way that, e.g., a web app depends on transistor physics.
Web apps don't depend on transistor physics at all, though. An important consequence of Turing universality is that computer science is not a subfield of electrical engineering.
Also, mathematics underwent very dramatic transformations during the late 1800s and early 1900s alongside the early development of formal logic and set theory. To say that logicians were merely "formalizing the intuitions behind the math that people were already doing" strikes me as misguided at best. The mathematics that rose to prominence in that era was very different from what preceded it, often controversially so.
>if we ever found a contradiction in the ZFC axioms, we wouldn't throw out math, we'd just throw out ZFC.
It's like saying: if we ever found a contradiction in the base case of a mathematical induction, we wouldn't throw out mathematical induction, we'd just throw out the base case. The induction step remains sound.
> what fraction of web programmers understand transistor physics?
I'm not an engineer, but I imagine that even most professional electrical/electronic engineers do not understand transistor physics in full detail. In engineering, a transistor is essentially treated as a black box, and its behavior is described and approximated by various small-signal and large-signal models (i.e. treated as a lumped component), with their parameters characterized empirically by vendors through experiments. How exactly things work at the atomic or quantum level is essential for a transistor to work, yet it is largely a subject of study separate from day-to-day electrical engineering.
On the other hand, I imagine there is no shortage of EEs who also studied physics, or EEs with a background in semiconductor physics - they can understand transistor physics really well.
So we can say the relation between EE and transistor physics sounds a lot like that between mathematicians and logicians; it's a good analogy.
Well maybe not quite that extreme. It's more analogous to the fact that most professional programmers do not fully understand the programming language they use (nor honestly do they need to for the most part).
Nonetheless, it's perhaps surprising for a beginning student of pure mathematics, when all they can see is the sky-high demand for rigor, that "professional" pure mathematics is seemingly content to hand-wave its foundations. (For the most part this hand-waving is the post-rigorous stage described by Terence Tao https://terrytao.wordpress.com/career-advice/theres-more-to-... rather than the pre-rigorous hand-waving of the student.)
Yeah, swapping out the currently dominant logic for another one would redo the whole castle, like when a castle built in Gothic architecture is rebuilt into Baroque with some parts left intact. However, the mainstream is so dominant, and "better logics" either lack manpower, are more complicated, or are weaker or impractical, so they never develop unless they quickly solve something extraordinary.
The analogy I’d make is to bugs in software. Most software has bugs, but they’re only triggered in specific circumstances. Stay away from those cases and you’re usually fine. If you happen to hit a new bug, you might lose some data, but it’s not going to render the entire application useless.
So a proof might claim to prove a theorem about a whole class of numbers or other mathematical objects. But if there is a bug in one of its lemmas, it would mean its results are true of only a subset of the mathematical objects claimed. That would be like getting an error on specific inputs.
The proof then remains a valid proof of something although not exactly a proof of everything its authors claim it to be a proof of.
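A toy illustration of that analogy (a hypothetical Python example, not from the thread): a routine whose claim covers all non-negative integers, but which is actually correct only on a subset of them.

    import math

    def isqrt_claimed(n):
        # Claimed: returns floor(sqrt(n)) for every non-negative integer n.
        # Actually correct only while n fits within 64-bit float precision.
        return int(math.sqrt(n))

    print(isqrt_claimed(10**10) == 10**5)         # True: fine on "most" inputs
    print(isqrt_claimed(2**54 - 1) == 2**27 - 1)  # False: fails once float precision runs out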
An erroneous proof is also often still useful, because in most cases it is not clearly wrong: nobody has yet come up with counterexamples that would show the claimed result, and hence the proof, to be incorrect.
I don't think it's necessarily like that. The error could make the theorem wrong in all cases.
I think it's more that the mathematician has a top-down way of reasoning, where they can see things like "I want to get from New York City to Los Angeles, so I have to board the bus, take a flight, and then take the bus from the airport at LA". There are certain parts where you basically know that a proof will be possible, because it seems true, like "I can get to LA's airport with public transit", so usually a specific hiccup, like a bus being delayed, won't prevent you from getting there.
Right, it's possible to get to LA's airport with public transit, but the amount of time it takes cannot be upper bounded using currently known techniques.
Well, this is "hacker news", and I imagine there are quite a few people around who have plenty of first hand experience with programming but not so much mathematical proofs. And there are a lot of examples of "a few stones placed incorrectly" taking down a huge edifice when it comes to software.[1]
So it may be true that math doesn't work like that, but I don't think it's obvious why. Many people have suggested that software would be better if habitually proved correct, but then, why is it not 100% necessary that proofs be perfect? Why should proofs be more resilient than software?
An idea related to this helped me get a grasp on writing proofs when I was first struggling to teach myself mathematics.
The idea was roughly:
A particular formalized proof (or lemma) is one of many ways of expressing/representing a more general concept that grounds some mathematical idea. It seems to mirror the way you could have the general concept of "having gone to the store and purchased bananas" in your head but express it many different ways through natural language.
If humans were doing mathematics by thinking purely in terms of the theorem space of some particular formal system, then 'collapses' should be expected after minor errors: if you take one wrong turn, your error should be compounded in any further progress.
But that does not appear to be how we do mathematics. Instead we're operating in a more general space of concepts, and we do not do math by proceeding linearly in our thoughts from the beginning of a proof to the final conclusion--we assemble the conception piecemeal until we get some feel for the sufficient overall integrity/coherence of a general conception, and then secure it by building up a kind of formal carapace around it. But if part of the proof/carapace is wrong, it's like a malformed section of armor that needs to be replaced: the whole structure isn't going to collapse because of it (not that that couldn't happen).
Thinking about things that way gave me a better view (I think) on how to use and think about formalizations when doing mathematics.
Mathematics doesn't collapse like a house of cards because the more "important" a proof is to the foundations of math, the more frequently it is tested. A counterexample to a proof is an easy way to find errors up the chain.
In the other direction, many things that might be considered "errors" in the foundations could be looked at more as choices. Others mentioned Euclidean vs non-Euclidean geometry. There was a sort of error in the idea that Euclidean geometry was the only geometry, and it was fixed not by tearing the geometry down but by creating new geometries.
I’d venture that the reason mathematics doesn’t risk imploding is that ultimately it can be treated as a fairly empirical body of knowledge, and maps accurately to the world as we observe it (or rather, the implications of pure mathematics, when applied by the sciences, give results that are not at odds with reality as we observe it to be).
I think it wise to be skeptical of any new result coming from a complicated proof until it has been proven multiple ways by multiple people. If multiple approaches are not easily attained, then passing the test of multiple counterexample attempts can at least provide some degree of confidence. If something new is significant, it tends to attract that kind of scrutiny pretty quickly. I think that is the saving grace of the mathematics community.
I think that mathematicians should strive to use formal-language proofs verified by computer. It would help avoid situations where it is unclear whether a proof is correct or not, and it would save people from wasting time checking whether that proof is correct.
I have done a little with http://us.metamath.org/ and I am sure within 10 years every professional mathematician will use formal proofs.
The two prerequisites are 1) making a fluid and intuitive interface for inputting proofs which can also display human readable proofs for any theorem known and 2) creating a database of all known mathematical proofs into which new results can be inserted.
Both tasks are ~50% complete. Metamath, for example, has all of mathematics formalised up to a basic undergraduate level. It has some reasonable editing software, but not close to good enough for mass adoption. There are also plenty of other systems being developed.
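For anyone who hasn't seen one, here is roughly what a machine-checked proof looks like. This is a Lean 4 sketch rather than Metamath itself, but the flavour is similar: every statement has to be accepted by the checker.

    -- Two tiny machine-checked statements (Lean 4 syntax).
    example : 2 + 2 = 4 := rfl
    -- Holds definitionally: Nat addition recurses on its second argument, so n + 0 reduces to n.
    theorem add_zero_right (n : Nat) : n + 0 = n := rfl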
> I am sure within 10 years every professional mathematician will use formal proofs.
I'll take that bet. 1:1 odds, up to $50? Shall we say, every mathematician employed at an ivy league university math department (postdoc level or above) has published at least one paper which employs proof checking for at least some claim? (provided that they have published at all.) So I win if I can find at least one professor or postdoc employed at an ivy league university math department on August 18, 2029, who has published at least one mathematics paper, and who has not published any mathematics paper which contains any formal proof.
That's a much weaker claim than yours, so it ought to be pretty generous to you.
I just wanted to note how precise of a bet this is. I will be stealing the exact definition of conditions and contexts to which the bet applies, to future bets of my own.
Yeah ok when you put it as a bet I think I'll back down ha ha :)
I imagine there will be holdouts who will never switch to formal proofs, though they may simply take on co-authors who are just "formalizers".
I am also probably being overconfident about the time scales. I think it's a process with a tipping point: everyone will use it once everyone else starts using it.
However when that tipping point will occur is probably hard to predict. I think it will be soon, but I'm broke and not that sure.
> I am sure within 10 years every professional mathematician will use formal proofs.
They said the exact same thing 10 years ago - see e.g. https://www.vdash.org and the AMS Notices special issue on formal proof from around the same timeframe. Formal proofs are hard, sometimes tedious and not always very intuitive. They're slowly spreading out from the most "synthetic" subfields of math (the ones where you're basically working with unfamiliar "rules", but not with a huge library of proven results), but progress is really slow - definitely slower than many people would expect!
I heard that the outputs of formal verification tools are often unreadable, as they work through the proof mechanically, step-by-step. It means sometimes the computer can tell if the reasoning is flawed, but it takes a lot of effort to understand the counter-argument from the computer. Is there any effort to solve this problem?
What would it even mean for mathematics to collapse? It's obvious that if a branch is based on rules that don't make sense, at some point that branch will break. But it only breaks back to the root node, which is fine. But why would the whole edifice collapse?
Ted Chiang’s short story “Division by Zero”, about a mathematician who discovers the foundations of mathematics are inconsistent, would appear to be relevant to this discussion.
Pretty funny to see Simpson's name showing up here. I have kept two souvenirs from my academic life: the original manuscript of my thesis and Simpson's very eloquent and overwhelmingly positive review of it.
Regarding the topic, I'd say that mathematics, as a whole, is a set of intuitions and ideas, rather than strict proofs. I've spent a fair amount of time working my way through some of Kontsevich's ideas and attended many of his talks -- he rarely (if ever) gives any proofs; my educated guess is that he proposes some extremely profound ideas and his collaborators grind through the details (not always). And it still works just fine.
I remember Kenji Fukaya saying once, talking about a PDE-heavy theorem: "We are trying to keep the details of the proof under 500 pages and it's not so easy." The main issue here is that people who don't do professional, academic research in mathematics are unaware of the complexity of modern maths. It has evolved immensely during the last 50 years; the theories are just layers and layers of foundational work one has to assimilate before getting any work done. Rigorous verification takes years, and no one in academia is being offered a job for proofreading existing papers.
One should also remember that referees of the papers are not being paid, it's considered "work for the community" and it's hard to blame them for not reading the papers in detail.
Isn’t this like asking why society doesn’t collapse, when the software that drives it is filled with bugs?
My impression was: any result in modern mathematics critically depends on another result, and that result depends on some other result...
This is like how most large software systems depend on various 3rd party libraries.
I guess the answer for both questions is that the things we build, whether they be mathematics or software, work well enough most of the time that you don’t notice the problems unless you’re paying attention.
>senior undergraduate student in mathematics
Yikes. The student is only now getting around to first year ideas that everything they know might be a sham. I mean... come on.
I mean, what does the student think will happen? We'll all realize that pi=4? Planes will fall out of the sky because physics finally caught up with a mistake in a proof?
Maybe true about physics (as physics does strive to provide approximate models of reality), but mathematics is abstract and for most mathematicians any application to reality is coincidental. You might enjoy reading "The unreasonable effectiveness of mathematics" (it is many decades old, but recently a lot of CS essays copy the style and title).
The more boring answer is that correctness is testable. If I present a proof that two expressions are equal, you don't have to trust me. You can try some values and see if they work, look for a contradiction, try to produce your own proof, or run any other test you can think up.
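A small sketch of that kind of spot-checking; the two claims below are just illustrative examples (a textbook identity and Euler's n^2 + n + 41 polynomial), not anything from the thread:

    import random

    def claimed_identity(a, b):
        # Claim: (a + b)^2 == a^2 + 2ab + b^2 for all integers a, b.
        return (a + b) ** 2 == a ** 2 + 2 * a * b + b ** 2

    def claimed_prime_formula(n):
        # Claim (false in general): n^2 + n + 41 is prime for every n >= 0.
        k = n * n + n + 41
        return all(k % d != 0 for d in range(2, int(k ** 0.5) + 1))

    # Random spot checks: no counterexample found, which builds (but never proves) confidence.
    print(all(claimed_identity(random.randint(-10**6, 10**6), random.randint(-10**6, 10**6))
              for _ in range(10_000)))

    # A directed search finds the first counterexample to the prime formula at n = 40.
    print(next(n for n in range(100) if not claimed_prime_formula(n)))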
Think about the Curry-Howard isomorphism, which says programs are proofs and data types are theorems.
A lemma is then like a subroutine.
If there is something wrong in the sub-routine it can often be fixed without requiring any changes to the callers of the sub-routine. It might be the case that the subroutine can never produce a value needed by its callers without changing the types of arguments fed to it, and in such a case it is not enough to fix the subroutine.
But in many cases just the lemma-subroutine needs to be corrected, and the rest of the program, rest of the proof can be used as is.
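A minimal Lean 4 sketch of the lemma-as-subroutine picture (the names are purely illustrative): the "caller" proof invokes the lemma twice, and fixing a broken lemma would not require touching the caller as long as its statement (its "type signature") stays the same.

    -- The "subroutine": a small lemma with its own proof.
    theorem helper_lemma (n : Nat) : n + 0 = n := rfl

    -- The "caller": uses only the lemma's statement, not the details of how it was proved.
    theorem caller (n m : Nat) : (n + 0) + (m + 0) = n + m := by
      rw [helper_lemma n, helper_lemma m]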
There is no proof in science, and we observe that it doesn't suffer from wholesale collapse.
Things being proved in mathematics are unlikely to be wildly wrong even without proof. Fermat's theorem being an example of one that took a very long time to be proved and basically nothing changed when it was. We were already convinced it was true by brute force failure to find a counter example. This lack of proof but search for counter-example is basically the scientific method. Is A equivalent to B? Prove it? Or write a for-loop and test it for values you care about. If you think you've proved it you probably write the for loop anyway (where such a thing is feasible).
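A toy version of that for-loop (purely illustrative, and obviously limited to a tiny search range):

    # Search for counterexamples to x^n + y^n = z^n with n >= 3 in a small range.
    LIMIT = 60

    def fermat_counterexamples(limit=LIMIT, exponents=(3, 4, 5)):
        for n in exponents:
            nth_powers = {x ** n: x for x in range(1, limit)}
            for x in range(1, limit):
                for y in range(x, limit):
                    z_to_n = x ** n + y ** n
                    if z_to_n in nth_powers:
                        yield (x, y, nth_powers[z_to_n], n)

    print(list(fermat_counterexamples()))  # [] -- none found in this range, as expected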
So do we actually /need/ proof in math at all, or is it just good fun and properly satisfying for mathematicians? Perhaps these are two extremes, and case by case we could plot theorems somewhere between the two, some right up hard against one side or the other.
So is P=NP? There is no proof. You probably aren't going out on a massive limb to have a view on the matter without it.
Supposing that the definition of convexity you're referring to is this: for a set C, with x_1 and x_2 elements of C and any scalar b between zero and one inclusive,
b * x_1 + (1 - b) * x_2 is always a member of C,
then I presume you mean that errors (namely, the kind of errors that are the subject of this thread's discussion) in mathematics will always, symbolically, be elements of some solution set of a finite number of linear equalities and inequalities (i.e. the intersection of a finite number of halfspaces and hyperplanes).
It seems very curious to analogize mathematics results to random variables. At the least, there's a lot going into that analogy which might be worth unpacking and justifying to explain why logical inference, with its elaborate and precise chains of truth-preserving deductions, potentially collapsing at a single counterexample, can be considered remotely equivalent to something like a dice roll.
Or does it...? Case in point: the fifth axiom of geometry, attributed to Euclid, that through a point there is exactly one line parallel to a given line, has been shown to be unprovable from the other axioms, and Lobachevsky and Riemann built alternative geometric theories assuming that axiom to be false.
I find it hard to believe that Math is that different from physics or other sciences in its robustness.
Some advantages that math has had over other sciences are low resource consumption (pen and paper) and easier reproducibility (thinking), and it has gotten a long head start of 2-3 thousand years.
Axioms shouldn't be provable (other than from themselves) - if the fifth axiom of geometry had turned out to be provable from something else, then it would have just been renamed a theorem instead of an axiom.
It's more correct to think of axioms as being preconditions.
We say, "Here is what happens if these things are true", and then specify what happens, but that doesn't deprive us from independently checking whether those conditions are true. Just because something has been proven does not stop it from being an axiom. It's contextural.
I have never understood why proofs are always required. Usually when you have an unproven theorem and it works often enough, to me that's good enough for most of what you're doing, for example in applied math. Of course, if you find places where an unproven theorem doesn't work, it becomes interesting to understand why it doesn't work, and it's usually a big discovery, but it doesn't really disprove a theorem, it just helps to refine it.
I remember that in school, teachers often insisted on giving a proof of something, but to me it often made enough sense and was often right, and it was frustrating to have teachers tell you "no". It felt like a burden of proof.
If some claim is not proved but believed to be true it's called a conjecture. If a claim is known to be false but "works often enough" it's called a heuristic.
Well, broadly speaking, pure mathematics is picking some axioms, and then proving rigorous results based on them. If that doesn't sound like a fun game to you, you don't need to play it or watch it.
Not saying it's not fun, but to me proofs don't seem to help me understand mathematics.
Not saying proofs are not important, but it seems that proving a theorem is important at higher levels of mathematics, not in high school or university.