Leslie Lamport Tells Mathematicians How to Write Proofs (2014) (scientificamerican.com)
142 points by smithmayowa on June 30, 2018 | 100 comments



Lamport advocates more rigorous proofs, with a justification for every line and the lines arranged in a hierarchy based on assumption contexts. He is also the author of TLA+, a formal specification language whose toolset includes a proof checker: https://en.wikipedia.org/wiki/TLA%2B

Fully-justified proofs are frequently used to teach geometry and abstract algebra, and are also needed for machine-checked proofs. Outside these contexts, I agree with Lamport that they are useful to catch mistakes, but I wouldn't write them myself because they are incredibly tedious. For communication in papers, narrative proofs can convey ideas and intuition at a much higher bandwidth.


Higher bandwidth but also greater risk of mistakes? If the article is true that 1/3rd of papers have false theorems in them, that seems like a problem severe enough to justify slowing down and being more methodical.


> 1/3rd of papers have false theorems in them

Despite the wording in the article, this is false. A theorem is not false just because there is an error in a proof of it. They only showed that 1/3 of the proofs had an error. It doesn't say whether these were minor errors (an unhandled edge case where the theorem is still true and the gap is easily covered) or major ones (the theorem is actually false).

Quoting the article:

> Some of them were false because the proofs were wrong, and some were false because they relied on wrong proofs.

This is not what 'false theorem' means. This might be pedantic, but isn't that kind of the point here?


Regardless, unproven theorems should not make it into published papers. At the very least, they should be marked as "conjectures".


There is a difference between a theorem being unproven and its published proof containing a mistake.


Seriously, who cares?

When you publish the proof for a theorem, you're generally the first to do so. Aren't you? If there is an error in the only published proof ever, your theorem is unproven. QED.

Edit: to downvoters: please explain my error. I thought I was only telling the obvious here.


Maybe they feel you didn't rigorously prove your case.


They more likely didn't read past the first sentence. I suspect a very thin skin.

Seriously though, I don't know what makes this forum tick. Someone's gotta explain what is socially acceptable around here. I know we're not supposed to complain about votes here, but I genuinely don't understand what happened. (Or rather, what is happening, the votes seem to flow up and down for no discernible reason.)


Whining about downvotes and then trying to goad other users is bad and you shouldn't do it.


Perhaps, but it looked like it worked. Somehow. For now. At the time I write this, the comment you are replying to is still positive (edit: oops, no longer), while it clearly breaks the rules of this forum (stated both officially and by you just now).

And I'm pretty sure that if I didn't do it, my "who cares" comment above would have been further downvoted. I observed this several times on several forums, sometimes votes have a momentum, and breaking the rules like I did can sometimes stop that momentum. (This works the other way too: popular comments often stop having further upvotes after the first reply that disagrees.)

Incidentally, I really think a good proportion of downvoters hit the button before reaching the end of the comment. Not because they're personally offended, but because their troll filter judges quickly, from the very first words (thinking back with a cooler head, I can't blame them). Starting a comment with something that looks offensive out of context is a pretty sure way to get it downvoted to oblivion.

Still, the threshold looks pretty damn low.


> Perhaps, but it looked like it worked.

That's not the reason you shouldn't whine about downvotes or goad other users. Not a 'perhaps' thing.


I wouldn't know about any absolute rule. Including "thou shalt not eat babies", if the circumstances are extreme enough.

I understand why complaining about downvotes is looked down upon (off topic, distracting, feed the troll…), but there are cases where those are both unexpected and unwarranted. I mean, I originally replied to a comment that didn't add to the discussion with a comment that did. What was I supposed to do with it? What am I supposed to learn from it?

Not even you provided any meaningful feedback.


My guess is that it got downvoted because it started on a snarky footing ("Seriously, who cares"), and then made a nitpicky point that isn't very interesting, and moreover is at odds with how mathematics actually works. (If it were right, we'd name theorems after the first person to correct the last error, but we don't.) So, kind of a shallow dismissal, which is the sort of thing we hope people won't post here. I realize you didn't mean it that way.

https://news.ycombinator.com/newsguidelines.html

p.s. Could you please not take HN threads into off-topic meta-hell in the future? There's a reason the site guidelines ask people not to do that.


Honestly, the snarky comment was because I was exasperated by gizmo686's nitpick. He made a point that isn't very interesting, and moreover didn't contradict what I said, though it was phrased in a way that looked like it did.

I hate that kind of manipulation, and it got to me.

> and moreover is at odds with how mathematics actually works

Ah now there's something interesting. How exactly was it at odds with maths? I never said anything about how a theorem was true, or provable. Just proven. If you could tell me how that was at odds with how maths works, I'd most probably learn something.


> What am I supposed to learn from it?

That not everything is about you is a decent start.


Please don't get personally nasty.


Not all published proofs are necessarily proving a new theorem. Sometimes they are new approaches to proving known theorems. One example of that is the Prime Number Theorem, for which there have been multiple proofs over the years exploring different angles.


Translating this to computer science/programming: what would you call an unproven program?


One that isn't machine checked to conform to the specification (with Coq or similar). My standard is even higher for programs than it is for mathematical statements, because they tend to be orders of magnitude more complex.

On the other hand, it's okay for a program to not be proven, because it can be useful even if it has bugs.
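
To make "machine checked to conform to the specification" concrete, here is a minimal sketch in Lean 4, one of the "or similar" assistants; `myMax` and the property checked are invented for illustration, not anyone's real spec:

```lean
-- A toy program plus one machine-checked property it must satisfy.
def myMax (a b : Nat) : Nat := if a ≤ b then b else a

-- A fragment of the "specification": the result is at least the first argument.
theorem myMax_ge_left (a b : Nat) : a ≤ myMax a b := by
  unfold myMax
  split
  · assumption           -- case a ≤ b: the hypothesis is exactly the goal
  · exact Nat.le_refl a  -- case ¬(a ≤ b): the result is a itself
```

A real conformance proof would cover every clause of the spec the same way; the point is that the checker, not the reader, enforces each step.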


> On the other hand, it's okay for a program to not be proven, because it can be useful even if it has bugs.

True, and a very important point. But... if the program has bugs, do they affect the results? Do they affect the results enough to matter? How do you know? In particular, what is your objective proof that the program does what you need it to do?

For many programs, such concerns are extreme overkill. For some, though, you have to prove that it does what you need it to.


That's where you quit proofs, and start experimenting. I found tests to be extremely effective at rooting out most bugs. Especially if you use property based tests.

Programs that do need a proof still have model checkers like TLA+, and proof assistants like Coq.
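
For illustration, here is what a property-based test looks like in Python with the Hypothesis library (the library and its decorators are real; the sorted-list properties are my own toy example):

```python
from collections import Counter
from hypothesis import given, strategies as st

@given(st.lists(st.integers()))
def test_sort_preserves_contents(xs):
    # Sorting must neither add, drop, nor change elements.
    assert Counter(sorted(xs)) == Counter(xs)

@given(st.lists(st.integers()))
def test_sort_orders(xs):
    # Every adjacent pair in the output must be in order.
    ys = sorted(xs)
    assert all(a <= b for a, b in zip(ys, ys[1:]))
```

Hypothesis generates hundreds of random inputs and shrinks any failing case to a minimal counterexample, which is a large part of why these tests are so effective at rooting out edge-case bugs.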


Sure, tests are part of what I meant by "objective evidence".


One that fails to compile with a type error. Probably pseudocode: enough to convey the concept, but eliding some details that a compiler may require.


Fully justified proofs certainly don't belong in the main text of a paper, but they could be included as an appendix along with all the other uninteresting proofs.

In theory there is a greater risk of mistakes, but I don't think that it's enough to justify that much extra work. A small study is not enough evidence.


Just as with email threading, one could reasonably collapse steps that are tedious/trivial without removing them entirely from a digitally-documented proof.

Completeness and brevity are not entirely at odds.


mathematicians writing for peers don't aim for the maximally explicit proof. details are frequently left out as obvious to the knowledgeable reader. leaving out tedium is a matter of professional style, like tightening regular prose so it doesn't club the reader over the head with every detail. (my instructor in Fourier analysis would subtly grimace at my very explicit proofs in office hours, but he knew i was a computer scientist, so he politely said nothing, knowing it would do no good ;-)


After graduating with a CS degree and a repetitive strain injury, I decided to teach myself mathematics properly (so I could do technical/creative things without a computer). I took the project very seriously, working part-time at a grocery store with no other obligations on my time than learning.

I would say the two biggest difficulties were 1) Undoing all the very bad math education I'd received earlier in life, and associated repulsion that came from it 2) Figuring out that when I ran into walls understanding certain things it could always be reduced to gaps in my knowledge that were implicitly assumed to not be there. I often ran into that reading proofs in the early days and wanted nothing more than 'very explicit' proofs as you describe (I used to always write mine that way too! In part because it's what I wished the authors were doing).

I'd like to build something in software that automatically associates expandable annotations to mathematical notation, e.g.: http://images.slideplayer.com/34/10244075/slides/slide_5.jpg —It would be super annoying if all the text were always revealed, but with a little cleverness and imagination I think a good scheme could be developed which would satisfy both beginners and experts. I would want a standard library of annotations which would automatically get associated just by using certain standard notational elements.

Here's another demo I put together to try out the same concept for natural language documents: http://symbolflux.com/lodessay/


I would also add that by and large this isn't done because of ego, but simply brevity. Seminal papers are often already hundreds of pages long, so to add every detail would bloat them to thousands of pages. And in any case, "skipping steps" is exactly how the top mathematicians think about the proofs when creating them.


Yeah, well, paper sucks.

http://worrydream.com/#!/ScientificCommunicationAsSequential...

No mathematician today lacks access to a computer. Just fold the proof to some appropriately coarse level by default, and let the reader expand any part they want to read. It's not as if our computers have any meaningful limit on how big a mathematical proof can be.


I think the limit may be the author, not the tool the author writes with.


Historically, there was always a limit to how big a paper could be to make it into a journal. Papers that aren't meant to make it into a journal are often way bigger.


Currently, explicit formally verifiable proofs provide no value for most mathematicians outside a few specific fields.

This may gradually change as proof assistants and proof checkers improve. They may come to incorporate machine learning and let algorithms search the parts of the proof space that are too tedious for humans. That would finally turn the computer into a bicycle for the mathematician.


Dijkstra also frequently challenged the sloppiness of traditional mathematicians in his EWDs. His proof format is extremely readable, and each proof is a complete, manipulable mathematical object: the steps are tied together by implication, consequence, or equivalence, and the validity of each step is documented.
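
For readers who haven't seen the EWDs, the calculational format looks roughly like this (a trivial example of my own, not from any particular EWD):

```
    x * 0
=     { 0 = 0 + 0 }
    x * (0 + 0)
=     { multiplication distributes over addition }
    x * 0 + x * 0
```

Hence x * 0 = x * 0 + x * 0, and cancellation gives x * 0 = 0. Each line is connected to the next by =, ⇒, or ≡, with the justifying rule named in the braces, so every step can be checked locally.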


Could you provide a link or citation where he gives an exposition of it? I am interested.


I found one more example in this TeX StackExchange question [1]. It's incredibly elegant. But the problem is that it does not seem to be easily reproducible in LaTeX, because writing nested lists is such an unnecessary pain (take a look at the accepted answer). We need a package to popularize this style.

[1] https://tex.stackexchange.com/questions/49416/which-packages...
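
To show the pain, here is a hand-rolled fragment in plain LaTeX (my own sketch; the pf2 package Lamport distributes with the paper automates this numbering and indentation):

```latex
\documentclass{article}
\begin{document}

\noindent\textbf{Theorem.} If $x > 0$ then $x^2 > 0$.
\begin{itemize}
  \item $\langle1\rangle1.$ $x \cdot x > 0$.
  \begin{itemize}
    \item \textsc{Proof:} Multiply both sides of $x > 0$ by the
          positive number $x$.
  \end{itemize}
  \item $\langle1\rangle2.$ Q.E.D.
  \begin{itemize}
    \item \textsc{Proof:} By $\langle1\rangle1$ and $x \cdot x = x^2$.
  \end{itemize}
\end{itemize}

\end{document}
```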


https://www.cs.utexas.edu/users/EWD/transcriptions/EWD10xx/E... is an example. That site has many others.

Predicate Calculus and Program Semantics is a book that is based on the technique.


Here is the description of hierarchical proofs: https://lamport.azurewebsites.net/pubs/proof.pdf


This is a pure gem. It's like turning prose into an abstract syntax tree for a human parser. :) I do think it adds clarity for the examples presented in it. But whether it applies well to more complicated proofs (those spanning hundreds of pages in a math paper) remains to be seen. I think writers of introductory textbooks, at least, could adopt the style to benefit beginning students.


The backlash he experienced might be due to the style in which he introduces his interesting work.

While not quite obvious, he could be seen as adversarial in labelling hierarchically structured proofs "proofs of the 21st century" and the ones other mathematicians use "17th century proofs".

Promoting them as a thorough way to write proofs, with the additional advantage that readers can adjust the level of detail to their own level of knowledge, might have been a way to get people involved. He even asks for feedback [1], unfortunately only to denigrate the addressed people in the next sentence [2].

It reminds me a bit of Ignaz Semmelweis, who managed to do good by reducing the mortality of women in childbed, requiring disinfection before examinations, but failed to spread the idea because he was adversarial toward his colleagues [3].

[1] "I am sure my way of writing proofs can be improved, and I encourage mathematicians to improve it", How to Write a 21st Century Proof, p. 5

[2] "They will not do it by remaining stuck in the 17th century.", How to Write a 21st Century Proof, p. 5

[3] https://en.wikipedia.org/wiki/Ignaz_Semmelweis


I feel you do Leslie Lamport a disservice here. I read the SciAm article and the full "How to Write a 21st Century Proof" and did not get any sense of denigrating people or being adversarial. Sure, he is concerned about clarity and correctness, but that is the exact point of mathematical proofs.

So everyone else, please skim the SciAm article and then take 15 minutes to enjoy reading the paper [1] in Lamport's own words.

[1] https://lamport.azurewebsites.net/pubs/proof.pdf


It's not clarity or correctness but how it is said.

See page 2 of the paper (page 5 of the PDF) for example. That is where he is talking about people being stuck in the 17th century and "how sloppy their proofs are" [1]. I am not arguing that this might not be true! - I argue that it is not helpful to approach introducing and advertising the idea in this way. I'm not a mathematician but I'd understand if someone reacted with defiance to wording like this.

[1] https://lamport.azurewebsites.net/pubs/proof.pdf, p.2 (PDF page 5)


There's a recent example of programmers publishing a paper on how to better prove some mathematics by implementing and discussing the phenomenon in a programming language: What’s the Difference? A Functional Pearl on Subtracting Bijections https://byorgey.wordpress.com/2018/06/23/new-icfp-functional...


Funny how the blog tries to use the x^2 + 10x = 39 example to illustrate that modern notation helps understanding and reduces errors, but then forgets the second solution of the equation (x = -13). In all fairness, though, it's just meant as a translation of al-Khwarizmi's text into modern notation, and only one solution is given in the original.
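
For reference, the modern calculation recovers both roots; al-Khwarizmi's recipe (halve the coefficient of x, square it, add it to both sides, take the root) keeps only the positive one:

```latex
\[
x^2 + 10x = 39
\;\Longrightarrow\;
(x + 5)^2 = 39 + 25 = 64
\;\Longrightarrow\;
x + 5 = \pm 8
\;\Longrightarrow\;
x = 3 \ \text{or}\ x = -13.
\]
```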


> In all fairness, though, it's just meant as a translation of al-Khwarizmi's text into modern notation, and only one solution is given in the original.

Probably because negative solutions were not recognized as valid at the time.


Good software engineers reduce until everything unnecessary is gone. Same idea, nice to hear people talking about it.


More mathematicians need to know about constructive mathematics https://en.m.wikipedia.org/wiki/Constructivism_(mathematics)

Basically, proof by contradiction is not a good idea.


Mathematicians know about constructivism, they just tend to rightfully reject statements like "proof by contradiction is not a good idea" as crackpottery. Mathematics will continue to be done, constructivists and finitists be damned.


There are settings where LEM is inadmissible. A lot of important mathematics comes from importing one theory into another, e.g. the theory of topological groups. Knowing in fine detail which principles a proof relies on lets you know whether a theorem carries through for free or not.

It might fail if it absolutely must rely on LEM, but a weaker doubly negated version will still work. This might be a deep, important fact in some setting or another.

If anyone thinks constructivism is crackpottery, then they're just ignorant and it's only incidental if it hasn't impoverished their toolkit.


I'll admit that calling all constructivists crackpots is unfair, as there are legitimate mathematicians studying homotopy type theory and whatnot that goes way over my head. What I refer to as crackpottery is people who argue that a random proof by contradiction is invalid or who reject things like Cantor's diagonalization argument with no justification besides their personal "intuition" or "philosophy" that math should be constructivist. And I think the latter group vastly outnumbers the former, at least on random internet forums. But I'll apologize if I offend any of those mathematicians with my flippant generalizations.


I'll not take offense, nor will I spout off about metaphysics here, but I can't let it pass without noting that Cantor's diagonal argument is in fact constructively valid --- given a function f from a set S to its power set 2^S, Cantor constructs an element of 2^S which can't be in the image of f. The subtlety here is that there are statements equivalent to Cantor's theorem in classical settings that are not intuitionistically valid, for instance that there's no injection from 2^S -> S.
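
To make the constructive content explicit, here is a sketch in Lean 4, identifying subsets of S with predicates S → Prop (the theorem name and encoding are mine):

```lean
-- Given f : S → (S → Prop), the diagonal set D x := ¬ f x x
-- cannot be in the image of f, with no appeal to excluded middle.
theorem cantor {S : Type} (f : S → (S → Prop)) :
    ¬ ∃ a : S, f a = (fun x => ¬ f x x) :=
  fun ⟨a, h⟩ =>
    -- Specialize the equation to the diagonal point a itself.
    have hd : f a a = ¬ f a a := congrFun h a
    -- Membership of a in D would refute itself...
    have hn : ¬ f a a := fun p => cast hd p p
    -- ...and so would non-membership.
    hn (cast hd.symm hn)
```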


Ah, you're right, thanks for the insight.


Constructive mathematics and non-constructive mathematics are both valuable. Rejecting the latter is crackpottery.


> Constructive mathematics and non-constructive mathematics are both valuable. Rejecting the latter is crackpottery.

As Andrej Bauer likes to point out—see, for example, Section 3.1 on p. 486 of http://www.ams.org/journals/bull/2017-54-03/S0273-0979-2016-... —non-constructive mathematics is a special case of constructive mathematics (in essentially the same way that untyped = unityped programming is a special case of typed programming).


Thank you--I think I'll start saying this in the future instead. #NotAllConstructivists, I suppose.


Finitism actually sounds like a good idea, because mathematicians have a terribly bad habit of playing with infinity when they could be doing real work :)


The problem is that "youtube mathematics" focuses on Hilbert's hotel and aleph-nought, while actual mathematics as taught in universities barely mentions these issues.

Infinity simplifies mathematics _a lot_ when proving things, starting from the basics of calculus (i.e. analysis).


I believe most mathematicians are well aware of constructive mathematics. What makes you think that they aren't?

> Basically, proof by contradiction is not a good idea.

Based on what?


That most math books I read don’t discuss it.


Maybe you should read more widely.

And maybe you should not make generalizations based on your reading list. In particular, your reading list could be selective, thereby making generalizations worthless. Or you could misread your reading list, where they do not explicitly state that they believe in and/or depend on proof by contradiction.


Do you even study mathematics? You sound like a programmer with a passing interest in mathematics, because 99% of math books don't worry about issues like the axiom of choice or constructive math. The only ones that do are logic books.


I quit after a bachelor's in math, so yes, I have studied mathematics, but no, I don't. Still, I'm not just a programmer with a passing interest in math.

Returning to the discussion: adamnemecek said that most mathematicians needed to know about constructive mathematics. When asked why he thought they didn't, he said that most math books he read didn't discuss it (presumably, "it" = "constructivist mathematics"). So I said that adamnemecek couldn't draw that conclusion on that evidence, though I should perhaps have been clearer that, as you said, "99% of math books don't worry about issues like the axiom of choice or constructive math". You're actually agreeing with what I was trying to say - you can't look at an average math book and decide whether the author knows about constructivist math or not, because it's outside the topic of the book. Even if the book does lots of proof by contradiction, that only proves that the author is not writing a constructivist book, not that the author is unaware of constructivism.

I think adamnemecek wants every math book to be explicitly constructivist. That, I think, is a complete misreading on his part of how important constructivist math is.


Ah okay :) I misread you


To paraphrase: "if it were a good idea, my books would have discussed it." Are we done here?


> More mathematicians need to know about constructive mathematics https://en.m.wikipedia.org/wiki/Constructivism_(mathematics)

> Basically, proof by contradiction is not a good idea.

The first statement is probably morally true (though I'd argue that almost every professional mathematician knows about constructivism, so maybe you meant something more like "take constructivism seriously" rather than just "know about constructivism"); and I think that judgements like the second probably aren't easily made scientific, and, if unscientific, are unusable.

That said, it also seems to be almost entirely beside the point of formalisation (except in the sense that it seems that 'naturally occurring' formal logics tend to be constructive), so … why mention it here?


Two questions:

1. Are there any mathematical truths/theorems that are provable in constructive mathematics but not provable in "regular" (non-constructive) mathematics?

2. Are there proofs that are easier in constructive mathematics than in non-constructive mathematics?

If the answer to both questions is "no", on what do you base your opinion of the superiority of constructivism?


No. Constructivism is strictly weaker.


Strictly weaker only in a weak sense: every classical proof can be translated into a correct constructive proof of a similar theorem that is classically equivalent. So is it weaker, or is it more fine-grained? That's a philosophical question.


Can it? Here's a proof that requires the law of the excluded middle. If you don't have that, you don't have a proof of that theorem. What's the correct constructive proof of a similar theorem "that is classically equivalent"?

Or, to say it differently, aren't there things that are true in "classical" (non-constructivist) math that are not true in constructivist math? My impression was that what is provably true in classical math is a strict superset of what is provably true in constructivist math.


If you use LEM to prove A, you can translate that proof into intuitionistic logic so that it proves Not (Not A). This is classically equivalent to A, but not intuitionistically. So the real difference between the two is that intuitionistic logic makes a finer distinction between propositions that classical logic says are the same.
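
The easy half of that translation is a one-liner; for instance, in Lean (a minimal sketch of a standard fact):

```lean
-- From evidence of A you can always build evidence of ¬¬A:
-- given a : A and na : A → False, just apply na to a.
theorem dni (A : Prop) : A → ¬¬A :=
  fun a na => na a
```

The converse, ¬¬A → A, is exactly what intuitionistic logic refuses to grant in general, and that is the entire gap between the two systems.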


I haven’t heard this. What do you mean by “a similar theorem that is classically equivalent”?



Also, let's throw out the law of excluded middle and the axiom of choice!


> let's throw out the law of excluded middle

I don't think that we should throw it out, but neither do I think that we shouldn't do so.


That is the mechanism by which junk drawers and attics are filled.


No, let's replace the law of excluded middle with the axiom of adversarial choice.


How does one prove the undecidability of the Halting Problem without proof by contradiction? How does one prove there is no largest integer without proof by contradiction?


It's a common misconception that proofs by contradiction are never constructive. Take the proof that √2 is irrational. What you are trying to prove is that √2 is not rational, or, constructively speaking, `√2 ∈ ℚ => ⊥`. So a proof is exactly a function that constructs a contradiction from the assumption that √2 is rational. This is both a proof by contradiction and a constructive proof. The same can be done with both of the proofs you mentioned.

Where you really start to run into trouble is double negation. Evidence of ~~P comes as some function `(P -> ⊥) -> ⊥`, so there isn't really any way to extract evidence of P from that. The lack of LEM really stems from this fact: if you try to prove `P v ~P` directly, you get stuck on double negation. However, `~~(P v ~P)` does hold, which satisfies the gut feeling that LEM can't _not_ be true.
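
Both facts check mechanically; a minimal Lean 4 sketch (theorem names are mine, no imports needed) covering the "no largest integer" example from the grandparent question and the double negation of LEM:

```lean
-- "There is no largest natural number": constructive, with an
-- explicit witness rather than an appeal to contradiction.
theorem no_largest (n : Nat) : ∃ m, n < m :=
  ⟨n + 1, Nat.lt_succ_self n⟩

-- LEM itself is not derivable intuitionistically, but its double
-- negation is: assume ¬(P ∨ ¬P) and refute it from the inside.
theorem not_not_em (P : Prop) : ¬¬(P ∨ ¬P) :=
  fun h => h (Or.inr (fun p => h (Or.inl p)))
```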


I wouldn't go as far as calling it a bad idea, but I do consider it a style error when a proof by contradiction is used even though a proof by contraposition would do.


> I wouldn't go as far as calling it a bad idea, but I do consider it a style error when a proof by contradiction is used even though a proof by contraposition would do.

A common style of proof among novice mathematicians, and even occasionally some professionals, is the good old: "Let's prove `P` by contradiction. Assume `not P`. Then [direct proof of `P`]. Therefore, we have a contradiction."

(I think that the fondness for this technique above all others among novices is that, no matter how hard the problem, you always at least know how to start a proof by contradiction.)


While constructive mathematics is an interesting restriction to work within, it has precious little to do with the OP.


For some reason this seems incredibly funny to me. I can imagine mathematicians laughing at this too.


Computer science is more rigorous than mathematics. Proofs written for computers have to actually work on a computer, which is a collection of mathematics that has provably been demonstrated to retain the fundamental properties that allow computation to both define and test the validity of statements.


Math is built on a far more rigorous logical foundation than "this program ran on a physical computer, which proves that the statement is valid". Every valid mathematical theorem is a consequence of, and can be traced back to, a set of definitions and well-defined logical axioms. The parts of computer science that are most rigorous are strict subsets of math, e.g. computability theory, logic programming, formal languages, etc.


Most of those subsets of math are relatively new and grew alongside, or developed into, computation (in fairly quick succession, compared to the history of mathematics). In addition, those fields are still actively being worked on, as there are still problems that cannot be resolved with mathematics alone, and much of that rests on computer science being able to retain the structure of mathematics without reinforcing foundations that are problematic or open problems. A mind that sees those problems in computation is a mind looking for problems, because it is comparing computation to its knowledge of mathematics, when there is actually a union between the two that can be provably demonstrated to be built from a purely calculable / computable foundation.


Show me the foundations for the continuum hypothesis.

What you're arguing for is a view of mathematics that has been dead for a century now. Gödel's incompleteness theorems and Turing's halting problem show that there are cases of 'true' statements in mathematics that can't be reduced to "well-defined logical axioms".


Re: Continuum hypothesis: the way I try to explain this is, take the statement S = "x^2 = -1 has no solutions". In Z, S is true. In C, S is false. What does it mean to say "S is true" without specifying the system? It depends on which underlying axioms you choose. There are extensions of ZFC where you can prove CH is true, and extensions where you can prove CH is false. In both extensions, the proofs are rigorous and valid, and there is plenty of valid mathematics to be done.

But ok, my point is, how can you say the CH is a true statement that can't be reduced to axioms? What does it mean to be "true" besides that something follows from other truths? The choice of axioms matters, sure, but everything that's true within a system follows from the axioms of that system.


There are two ways to see it. But that's also the problem.

> What does it mean to be "true" besides that something follows from other truths?

I agree with you, but I also don't. You know as well as I do that 'true' can mean something besides 'that which follows from other truths'. All the rules you have to rely on (such as implication) are truths you depend on for math to work, but cannot define within math. Implication is a fundamental foundation. But can you define implication without using the concept of implication?

> The choice of axioms matters, sure, but everything that's true within a system follows from the axioms of that system.

It's easy to point out flaws in reasoning. It's so, so much harder to have an airtight reasoning system; that goes for mathematics and computation, both, together, alone, etc.

> What does it mean to be "true" besides that something follows from other truths?

Truth is true, no more, no less. Once you turn it into symbols, it turns into a mess (or a work of art).


My argument is about the rigor of theorems. ZFC may be incomplete, but every existing theorem that builds up the entirety of modern mathematics is (in theory, admittedly) built upon a rigorous chain of truth, traced back to those axioms and definitions.

Most mathematicians who don't study metamathematics or philosophy of math have no reason to ever think about formalism or computability. Gödel's completeness theorem stipulates that there are self-contained consistent systems within the language wherein all of their nice proofs are true and universal, and where plenty of truth still needs to be discovered.


The continuum hypothesis is a well-defined logical axiom, occasionally.


If so, then that's because traditional mathematical proofs are arguments meant to convince humans of the validity of a claim. They are formal only insofar as e.g. law is formal.


People's minds have a limit to how much information they can retain. Pushing new information in comes at the expense of pushing other information out. Computer science isn't limited this way, because every simplified property of mathematics is retained mechanically. If a property is created that is invalid, computation ceases to be computation. Minds, on the other hand, aren't so constrained, but that doesn't mean that what they produce is rigorous (or correct, with respect to the context it was defined in).

I agree with you, if that's not clear. I don't understand your law analogy precisely, but I'm assuming in a general sense you mean 'humans define a rule set; the rule set retains its properties provided that humans continue to enforce it'. To my mind, that is so open to variation that my brain may begin to attack itself. Humans can 'make sense' of anything. Computation needs to do what it does regardless of whether a human is there to provide input and verify that it 'makes sense' to the mind that defined it.

Universe turning into paperclips, sure, but the fact that a set of properties can be used to recreate themselves through their own definitions, that's beautiful, that's computation, rigor. These are things I used to love about math, but computer science just does math better than math does.


> Computer science is more rigorous than mathematics

Not true. Also computer science is very vague in a way.


To you.


That's a rubbish title. (edit: the original title was 'Computer scientist tells mathematicians how to write proofs')

Leslie Lamport is a mathematician by education. And I have a good idea he'd prefer to be called a mathematician if he had to choose between the two labels (mathematician and computer scientist).

That's not to say CS, and programming, doesn't contribute to the process of rigor. In fact, I had a realization that a program is a form of constructive mathematics, and hence a program is more rigorous than a proof on paper in the following sense: a single typo in a math proof would be overlooked by the reader, but a single typo in a program could make it fail.

Still, in order to make connections between programming and mathematical rigor, you have to be trained in mathematics, not just computer science.


Should tag with 2014


The relevant paper is actually from 2011.


He has been talking about hierarchical proofs for longer than that. I saw Lamport talking about "how to write a proof" in March 2006 (press release here, in Italian: https://www.usi.ch/it/comunicati-stampa/2991).


Since the article provides information and context, its date is probably the right one to use.


You're right... my comment was meant as a note for people skimming the comments, sorry.


Done now.



