Ask HN: Can the same individual accomplish more with programming than proofs?
29 points by amichail on April 14, 2022 | 41 comments
Programming is a kind of fuzzy proof where details are worked out using debugging. The resulting program may not be perfect but it can be good enough.

Explicit proofs (e.g., in math and CS) are more precise but longer, harder to check, and harder to debug. And a missed detail or small mistake can render the entire proof wrong and unfixable.

So it seems that the same individual could accomplish more in life with programming than proofs.




I think a critical point here lies in what exactly you're accomplishing when you're programming. A person could spend a lifetime improving Google's ad network, make an incredible amount of money from it, and for all that have not accomplished very much in the sense we actually care about.

One question that comes to mind is "What would happen if everyone did this?". To me, it seems straightforward that we don't want everyone writing proofs to switch over entirely to programming, nor do we want everyone coding to switch over entirely to writing proofs. We would want a certain proportion of people doing proofs and another proportion doing programming. Each person's own experiences and talents would determine which one is more useful, but on balance we might have some proportion p doing proofs and 1-p doing programming.

So, the question of "Can the same individual accomplish more with programming than proofs" turns into two questions: the situation that particular person is in, and the global context of whether society is doing a good job of maintaining p provers and 1-p programmers for the overall benefit of society.

For my money, I think society has made p too small. There are way too many good reasons to go into software engineering as opposed to math and CS (namely $$$$), and as a result the social benefit of proofs is under-explored. I don't think, however, that I would accept very bad odds for that bet, like 1:100 or 1:1000. Proofs (and the societal structure, academia, that has been built to produce them) have their own problems, and as such it's very hard to even determine what p should be.


The Curry-Howard correspondence would like a word. Any time you write a program, you are also writing a proof.

Mechanized proof checkers leverage this idea to enable users to specify formal proofs as programs.
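To make that concrete, here is about the smallest illustration I know how to write (a toy Haskell sketch of my own, not tied to any particular proof checker). Read the arrow type as implication: as a proposition, the type says "if A implies B and B implies C, then A implies C", and the one-line body is simultaneously function composition and a proof of that syllogism.

    -- Toy sketch of Curry-Howard: the type is a proposition, the body is its proof.
    -- The name "syllogism" is my own, purely illustrative.
    syllogism :: (a -> b) -> (b -> c) -> (a -> c)
    syllogism f g = g . f

    main :: IO ()
    main = print (syllogism (+ 1) (* 2) (3 :: Int))  -- running the "proof" as a program prints 8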


The Curry-Howard isomorphism doesn't say that an interesting proof corresponds to a program anyone would care about (other than as an equivalent to the proof), nor that an interesting program corresponds to a proof anyone would care about (other than as an equivalent to the program).

In practice, writing programs and finding proofs feel like quite different activities. At least in my experience of doing both, I almost never try to write a program by thinking "how could I prove that such-and-such a type is nonempty", and I literally never try to prove something by finding a corresponding type and writing a program that computes or represents something of that type.

The Curry-Howard isomorphism is a beautiful theorem, and as you say it inspires some nice methods of automated proving / proof-checking, but I don't think it says anything much about the relationship between the programming and theorem-proving activities that actual humans engage in.


> Any time you write a program, you are also writing a proof.

True, and (mostly) irrelevant. When I write a program, I am also writing a proof, but I don't care about it as a proof. It proves something that I am completely uninterested in proving. But as a program, it's a program that I am interested in running.


> It proves something that I am completely uninterested in proving. But as a program, it's a program that I am interested in running.

Interesting that you contradict yourself in successive sentences

By running the program, you, in fact, demonstrate that you are quite interested in proving whatever it is you wrote


> Interesting that you contradict yourself in successive sentences

No, I didn't. You managed to either completely miss my point, or to completely ignore it.

Just in case you didn't ignore it, let me try again. I could care about a proof for the sake of the proof. If that were the case, I probably wouldn't write a program, I'd just write the proof. (Yes, that could be mechanically turned into a program, but I didn't actually do that, because I cared about the proof rather than the program.)

When I write a program, I usually care about it as a program, not as a proof. When I write a GUI, say, I care about how it enables the users to access the rest of the program's features. You can turn that into a proof, and the compiler may in fact do so, but what it proves is something that is really unrelated to what I'm trying to do. The "proof" won't prove anything about usability or human factors or pleasing color scheme or user workflow or efficient use of screen real estate. It will prove something that, from the programming point of view, is completely uninteresting.

So, as I said, I'm not interested in the proof, I'm interested in the program. And, despite the correspondence, they are not the same thing.


>I'm not interested in the proof, I'm interested in the program

Maybe not in the abstract, but you are, in fact, "interested in the proof": when the program doesn't do what you want it to, you either have to fix it so that it does prove your use case, or you've proven that what you were trying to do was incorrect.


You don't care what the statement it proves is, and just an assurance of the conclusion would often not be valued.


There's a middle-ground here where the input and output types of a function (informally) denote the pre- and post-conditions and a (correct) program is equivalent to a proof that your pre-condition implies the post-condition.

A function List -> Sorted List can both be considered a proof that lists can be sorted as well as a procedure for sorting lists.
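A rough sketch of that reading in plain Haskell (my own encoding; a dependently typed language could state "sorted" precisely in the type, here it is only enforced by construction): since the only way to obtain a SortedList is through sortList, holding one is informal evidence that the post-condition was established.

    import Data.List (sort)

    -- Sketch: sortList is the only sanctioned way to build a SortedList
    -- (in a real module you would hide the constructor).
    newtype SortedList a = SortedList [a] deriving (Show)

    sortList :: Ord a => [a] -> SortedList a
    sortList = SortedList . sort

    main :: IO ()
    main = print (sortList [3, 1, 2 :: Int])  -- SortedList [1,2,3]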


Programs are proofs in the same way that solutions to arithmetic problems are proofs: they typically don't prove anything interesting, so they are not considered proofs, even though in principle you could map every program to some set of mathematical statements and vice versa.

Edit: And for the topic in question: if you don't understand what you are proving, then you aren't working with proofs. I really doubt that programmers actually construct their programs by mapping them to mathematical statements, seeing what they prove, and then mapping them back to programs, so they are writing programs, not proofs.


What if you prove the correctness of a compiler? Do you then transitively get all the achievements of all safety critical software that was then programmed with it?

The value of the proof comes from the things it enables. I guess the Rust memory safety guarantees are based on some proofs?


FYI, this has been done: https://compcert.org/


> Do you then transitively get all the achievements of all safety critical software that was then programmed with it?

It's possible to write incorrect code and compile it with a correct compiler, so I don't think so. It's more like a citation that proven correct programs can use to say "I wrote this program correctly (see my own proof), and I compiled it correctly (see citation), thus I have a correct executable file".

> I guess the Rust memory safety guarantees are based on some proofs?

People have retroactively come up with formal proofs for much of it, but I believe the initial design was just based on intuition not a proof. For unsafe code some of the exact details of the memory model are still being worked out too (and thus definitely aren't proven correct).


You might find some more thinking about this in the field of Computational Science and Engineering (aka CSE). This is related to computer science, but distinct (often a separate department).

https://en.wikipedia.org/wiki/Computational_engineering

Some universities call these "Scientific Computing" as well. E.g., https://gradadm.seas.upenn.edu/masters/scientific-computing/

There is even an industry journal (closely linked to high performance computing). Of course this CSE is said to mean "Computing in Science and Engineering" (just to add confusion):

https://www.computer.org/csdl/magazine/cs/2022/01

Here's a paper that touches on some of the challenges of implementing proofs (and other scientific theory) in software. https://f1000research.com/articles/9-1192


There are about 30 million coders in the world [1]

And about 80 thousand mathematicians [2]

So about 375 individuals programming for every 1 person writing proofs

If we assume this configuration is optimal, then my answer to your question is that if you had a group of 10,000 people who like both programming and proofs, having 9,973 of them program and 27 write proofs would be the best way to “accomplish more in life”.

[1] https://www.future-processing.com/blog/how-many-developers-a...

[2] https://mathoverflow.net/questions/5485/how-many-mathematici...


Depends what you mean by “more.”

You can write programs to search for and do other things with prime numbers. However this can only tell you about specific prime numbers.

With proof you can prove things about all prime numbers or the integers as a whole. For instance Euclid proved that there are an infinite number of prime numbers.
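For contrast, a throwaway sketch (mine, in Haskell): this program will produce as many primes as you care to wait for, but any single run only ever exhibits finitely many of them, while Euclid's argument covers all of them at once.

    -- Classic (inefficient) sieve: enumerate specific primes one by one.
    primes :: [Integer]
    primes = sieve [2 ..]
      where
        sieve []       = []
        sieve (p : xs) = p : sieve [x | x <- xs, x `mod` p /= 0]

    main :: IO ()
    main = print (take 10 primes)  -- [2,3,5,7,11,13,17,19,23,29]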

We remember Euclid 2,300 years later, so he accomplished a lot with his life. Today there is a big market for web pages, microcontrollers for gas pumps and washing machines, missile defense systems, etc. You can keep very busy writing programs for that stuff but it probably won’t be remembered in 2,300 years. (What’s a gas pump?)


What's the practicality of knowing that there's an infinite number of primes?


It's a fundamental fact about the theory of primes which has practical applications such as

https://en.wikipedia.org/wiki/RSA_(cryptosystem)


RSA requires a lot of big primes to exist (so the search space is large), but doesn't require infinitely many. For each key size there's a maximum prime size that matters, and all RSA cryptography that's used in practice would work fine if there were, say, no primes with more than a million digits.


But maybe Euclid could have accomplished more in life with programming?


In some sense yes.

The two also can go together like peanut butter and jelly.

When I was in grad school my thesis contained a derivation of the Gutzwiller trace formula

https://inspirehep.net/files/20b84db59eace6a7f90fc38516f530e...

integrating over phase space instead of position space (that paper does it over position space.)

It was about 40 pages worth of math, and while I wrote the math I also wrote each equation into a computer program that would try different inputs and check that the equality held. This didn't "prove" anything, but it convinced me that I had the math right. When you do a calculation like that you might have 50+ places where there might or might not be a minus sign, and if you make any mistakes you might as well flip a coin to get the sign. I could have done the calculation without writing the program, but I would have either gotten the wrong answer (not so likely in this case, because I knew what the answer was) or gotten the right answer in the wrong way.
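A bare-bones sketch of that workflow (a toy identity in Haskell, not the actual derivation): evaluate both sides of each claimed equality at a grid of sample inputs and flag any point where they disagree. It proves nothing, but a stray minus sign almost never survives it.

    -- Toy stand-in for "check each equation numerically at many inputs".
    lhs, rhs :: Double -> Double -> Double
    lhs a b = (a + b) ^ (2 :: Int)
    rhs a b = a ^ (2 :: Int) + 2 * a * b + b ^ (2 :: Int)

    main :: IO ()
    main = print (and [abs (lhs a b - rhs a b) < 1e-9 | a <- samples, b <- samples])
      where
        samples = [-3, -2.5 .. 3] :: [Double]  -- crude grid of test points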


He could write programs that find more primes, but never one that proves there will always be an infinite supply of primes.

I think other comments already said it. "more" is an ambiguous term. Do whatever you want and feel free to call that "accomplishing more".


Wait, what? Computer science encompasses mathematics. Any proof that can be validated by following a known set of reduction steps until axiomatic statements are reached (i.e. any proof worth the paper it’s written on) can be verified just as well by a computer as by a person.

For your specific case:

  definition infinite_primes (n : nat) : {p | p ≥ n ∧ prime p} :=
  let m := fact (n + 1) in
  have m ≥ 1,     from le_of_lt_succ (succ_lt_succ (fact_pos _)),
  have m + 1 ≥ 2, from succ_le_succ this,
  obtain p `prime p` `p ∣ m + 1`, from sub_prime_and_dvd this,
  have p ≥ 2, from ge_two_of_prime `prime p`,
  have p > 0, from lt_of_succ_lt (lt_of_succ_le `p ≥ 2`),
  have p ≥ n, from by_contradiction
    (suppose ¬ p ≥ n,
      have p < n,     from lt_of_not_ge this,
      have p ≤ n + 1, from le_of_lt (lt.step this),
      have p ∣ m,     from dvd_fact `p > 0` this,
      have p ∣ 1,     from dvd_of_dvd_add_right (!add.comm ▸ `p ∣ m + 1`) this,
      have p ≤ 1,     from le_of_dvd zero_lt_one this,
      absurd (le.trans `2 ≤ p` `p ≤ 1`) dec_trivial),
  subtype.tag p (and.intro this `prime p`)

https://homotopytypetheory.org/2015/12/02/the-proof-assistan...


Writing proofs in proof assistants is still harder than writing regular proofs, despite all the effort that has gone into making it simpler, so I doubt that Euler would have managed to prove much if he had tried to do it via computer programs instead.

Not to mention what it does to readability: humanity would never have gotten anywhere if mathematicians produced proofs looking like that. I'm not sure why programmers say mathematicians write hard-to-read stuff, when programmers trying to do the same thing come up with that.


If Euclid had formulated his proof from axioms rather than intuition it would a) be an actual proof and b) far exceed the length of the above snippet.


The code you provided is only a proof if you can convince the reader that the code actually corresponds to Euclid's theorem; otherwise it could be full of bugs. You could argue that it is less likely to have bugs than the original proof, but I disagree: the original proof has stood for over two millennia, has been checked by countless mathematicians, and is just a few lines. There is no codebase in existence as well checked as that.

There are thousands of lines of CoQ dependencies for your snippet to run, and then many more lines in the CoQ compiler/runtime. The probability that all of that is bug free is much lower than the probability that Euclid's original proof is right. Theorems written by mathematicians are virtually bug free compared to code written by computer scientists. Until you solve that problem, coded proofs don't really add any validity to a proof. The code could still be reviewed like a real proof, but then you are just using circular logic: your code is proven to be correct by expert checkers, exactly like how mathematical proofs are proven to be correct by expert checkers. Maybe the reviews are a bit easier then, but you still have the reviewers as a weak link, so there is no difference in theory: a buggy implementation will still lead to buggy results in both cases.

And no, you don't have to show all the steps for it to be a real proof. Just like you don't have to plant apple seeds to make an apple pie, as long as the knowledge exists out there you can rely on it. Your coq snippet likewise relies on other coq code and isn't a proof on its own; relying on existing knowledge is a feature and not a bug.

Edit: And if you still disagree: the fact that theorem provers haven't disproven any famous mathematical result yet means that the old way actually produced reliable proofs. And as far as we know, the people who wrote the theorem-prover proofs may have missed some things and proved different things, and only stopped once they got the same results as the mathematicians. We don't know; maybe they are right, but I wouldn't be surprised if some of those proofs actually missed some important aspects and proved weaker results. It is enough for a single one of your dependencies to have stated its theorem incorrectly and every proof depending on it is now wrong.


First of all, this is clearly Lean, not Coq (and certainly not “CoQ”). While the distinction is minor, it does cause me to question whether you may have fallen victim to the Dunning–Kruger effect here. Perhaps I’m wrong, but it may be worth some self-evaluation.

Furthermore, I think you’re missing the forest for the trees here. My only real claim is that theorem proving and computation are two sides of the same coin. Either way you start with data, apply a sequence of established transforms to that data, and stop when you’ve reached an accept or reject state. The precise encoding of the input, transforms, and accept/reject states is unimportant.

That said, we do seem to agree that Euclid’s original formulation is not a “true” proof. As you mentioned, many mathematicians have found cause to go back over the same theorem using more formal methods. While it’s true that for this particular proof-by-intuition no faults have been found, that certainly does not generalize to all proofs by intuition, and as such it’d be far more appropriate to compare a complete traditional proof to the complete computer proof. If you were to do this, I suspect you’d find them quite similar, as axiomatic computer proofs tend to use the exact same fundamentals as axiomatic pen-and-paper proofs.

Finally, your claim that theorem provers have never found significant faults in a famous conventionally proved result is shown incorrect by a simple Bing query. See https://www.math.sciences.univ-nantes.fr/~gouezel/articles/m.... While the overall result did hold, formal verification discovered a bona fide hole in the widely accepted, published, peer-reviewed, etc etc etc, paper. Plenty of other examples are available at https://mathoverflow.net/questions/291158/proofs-shown-to-be.... You also seem to be ignoring the fact that many theorems have only been proven via computational analysis.


One of the goals of mathematics is to introduce tools that represent problems in a way that is easy for a human mind to grasp. For example, geometry is a good tool for getting some intuition on the problem of classifying sets of data. Also, a well-written proof is, by design, easier to check by a human than by a computer.

On the other hand, programming languages are designed first to be easily parsed by computers. With supercomputers, exascale computing [0] is now possible. Yet it is still hard to fully use the power of such computers, because what is intuitive for a human mind is not always easy to program. For example, the video proof of the inversion of the sphere [1] is checked more easily by a human than by a computer.

Finally, instead of using only the power of computers, or only the power of human brains, it is also possible to combine both. This requires improving the interaction between computers and human beings. In a video on thinking the unthinkable [2], the author notably remarks that the introduction of symbols in algebra allowed human beings to extend the problems they could think about. With better interfaces, computers could help us to write fuzzy or explicit proofs, but more importantly, they could help us to think.

[0]: https://en.wikipedia.org/wiki/Exascale_computing

[1]: https://youtu.be/OI-To1eUtuU

[2]: http://worrydream.com/MediaForThinkingTheUnthinkable/


¿Por qué no los dos? (Why not both?)

There are reasons to prove things and reasons to program things. What is the hypothetical individual trying to accomplish that they can do more via programming than proofs?

Also, as pointed out by others, as of the 20th century we had automated proof systems which have only improved in the 21st century. So someone wanting to prove things can do a lot more now without relying as much on proving "by hand".


When I write math papers I often use proof assistants such as Coq to leverage my programming background to avoid some pencil and paper proofs altogether and rule out many sources of human error. If you consider yourself better at programming than proving, definitely take a look at proof assistants to leverage the former to enhance the latter.


Smart people should write proofs. Less smart people should do the plumbing.


I'm actually betting on the opposite. Better proof systems will allow programmers to accomplish many multiples of current productivity.


I would argue that, in my experience, more and more programming is not about supporting things that are calculation-heavy and need "correct answers" but about supporting workflows and models of thinking.

I don't see how supporting workflows or setting/managing/evolving models of thinking can have anything to do with proof systems or proving correctness of algorithms.

Since we have CRUD basically worked out, there is so much more software that can be written without being mathematically strict, unlike the DBMS that we build on top of.


> Explicit proofs (e.g., in math and CS) are more precise but longer, harder to check, and harder to debug.

Sorry, I've done a LOT in both (1) proofs in pure and applied math and (2) algorithms and software, and your statement is strongly wrong and backwards -- the math is much easier than the computing.

E.g., my Ph.D. dissertation in applied math had both proofs and the results of some software. There was no problem getting errors out of the proofs.

Proofs are really easy to check. I've done thousands of proofs, and I'm not sure I've ever had a mistake in one.

I've written perhaps a million lines of software, and over 50% of the programs had bugs until I found and fixed them.

One of the real advantages, powers, of math is how easy it is to check proofs.

Commonly it is much easier to have a correct and relatively easy-to-read proof of a math theorem than it is to have a correct and relatively easy-to-read proof of an algorithm, where it is easy to see that the proof and the algorithm are as claimed. Even for a simple, old algorithm such as Quicksort, it is not easy to prove it is correct; even the notation is not easy. In computing, proofs of correctness commonly have to be in the framework of mathematical induction, and there some notation is needed that is typically not seen otherwise for the algorithm.
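For instance, here is Quicksort in a few lines (a Haskell sketch, purely for illustration); a real correctness proof has to show, by induction on the length of the list, that the output is sorted and is a permutation of the input, and even stating that precisely takes notation well beyond the code itself.

    -- The program is short; the proof obligations (sortedness + permutation) are not.
    quicksort :: Ord a => [a] -> [a]
    quicksort [] = []
    quicksort (p : xs) =
      quicksort [x | x <- xs, x < p] ++ [p] ++ quicksort [x | x <- xs, x >= p]

    main :: IO ()
    main = print (quicksort [3, 1, 2 :: Int])  -- [1,2,3]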

Uh, mathematical induction is to show something is true for all the natural numbers, and the technique basically returns to one of the good definitions of the natural numbers.

Beyond just algorithms, in computing proofs of correctness are so difficult that we essentially give up: The reason is, the assumptions are too difficult to list. If you will, a common problem is "too many edge cases".

It is true that there is something to learn about how to write proofs.

Pure math has a standard that in precision puts it far above essentially all other writing: Some pictures or intuitive explanations may be tolerated, but they are not necessary and mostly not wanted. Also, there is no role for "try it and find out", for "doing some examples to see how it goes", or for the kind of manual practice you do when learning to use socket wrenches or a chef's knife. An old joke was how complicated a math description would be for how to put on a suit coat.

E.g., a missed definition sticks out worse than a sore thumb. E.g., it should be clear, maybe even mentioned, in the proof, for each of the assumptions, where it got used.

So, the math writing has to give really solid results just from the words and math notation. Bluntly, that is unique in writing.

For accomplishing good, new things, I'm strongly on the side of math with proofs.

The core of my startup is an example. There are no heuristics there, just some math with proofs. Without the math, there would be only guessing and heuristics on how to manipulate the input data to give the desired outputs, and the chances of any heuristics having stumbled onto the same manipulations as provided by the math are zip, zilch, and zero.

E.g., some of the math is close to some math commonly considered, but my math fixes some essentially universal mistakes, really misunderstandings, that have become standard. That is, the common understanding is that there is a fundamental problem there, but that is not true, and my math, with proofs, shows that. Without the math and proofs, I would not have thought of the data manipulations or, if I did, have confidence in their correctness or power.

Sorry, guys: In my opinion, math is the main tool for the most important progress in applying computing to help civilization.


What is _computing_? What is _math_?

I am genuinely interested in your response.

From what I have gathered, computing is actually ill-defined, almost as hand-wavy as "the consciousness problem."

Mathematics is a set of mental models and/or formal languages for reasoning about quantities, or at least "things." That is one way I look at it; would you agree, or do you have an additional perspective?

Computation is.... ?

I've come to some simple definitions of computation while trying to find an axiomatic root; I'll share them if you're interested.


A computation is basically anything that follows an algorithm, while math is usually a definition plus proofs derived from the definition. I personally think of Constructive Math as being Math as computation. But I am not an expert, so …


What’s the difference between a process, an algorithm, a logic, and a system? With your definition, wouldn’t everything in the observable universe be an algorithm? Not that there isn’t utility in thinking that way, but I feel like the definition should be more specific.

My definition is: computation is the transformation of information into meaning. (Simply)

Or more expanded…

> the transformation of energy into meaning == consciousness

> the transformation of energy into information == computation

> the transformation of energy == physics

Something I’ve been working with in my research and nomenclature.


I do believe that the universe is one massive computation. I highly recommend reading “Constructor Theory of Information”:

https://royalsocietypublishing.org/doi/10.1098/rspa.2014.054...


Not all things are computable though, no?


I'll start with what computing is:

Yes, there are some deep questions and answers about what can or cannot be done with the current, usual concept of computers, or, if something can be done, what is the fastest it can be done.

But for what computers really are:

A guy has a calculator. He stands in front of a wall with cubby holes like behind the desk of an old hotel. Ah, there's a picture at

https://www.amazon.com/Happybuy-Storage-Classroom-Plywood-Cu...

Each cubby hole has an address, e.g., room number of some room in the hotel. The example at Amazon is missing the addresses!

Some of the cubby holes have in them a small piece of paper.

The guy has been instructed to start with the cubby hole with address 1 and do what its piece of paper says to do. When he has done that, unless the paper tells him otherwise, he is to move to the next cubby hole and do what it says.
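A toy version of that picture in code (a sketch with made-up instruction names, nothing more): the cubby holes are a map from addresses to instructions, execution starts at address 1, and unless an instruction says otherwise the guy moves on to the next address.

    import qualified Data.Map as Map

    data Instr = Say String | Jump Int | Halt

    run :: Map.Map Int Instr -> Int -> IO ()
    run holes addr =
      case Map.lookup addr holes of
        Nothing       -> pure ()                             -- empty cubby hole: stop
        Just Halt     -> pure ()
        Just (Jump a) -> run holes a                         -- "the paper tells him otherwise"
        Just (Say s)  -> putStrLn s >> run holes (addr + 1)  -- do it, move to the next hole

    main :: IO ()
    main = run (Map.fromList [(1, Say "hello"), (2, Say "world"), (3, Halt)]) 1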

Another example is a recipe:

Cut a large onion into small pieces.

Place a frozen ground beef steak of about 1/2 pound in a cast iron skillet. Surround it with the onions. Add pepper. Put a cover on the skillet and turn the heat to low.

In about 20 minutes, flip the steak.

In another 10 minutes move the steak to a serving bowl.

Add to the skillet the contents of a 10.5 ounce can of Campbell's Beef Consomme.

Use a spatula to scrape the bottom of the skillet until nothing is stuck.

Move the contents of the skillet to a strainer set over the skillet and press liquid out of the onions.

Put the onions with the beef.

Over high heat, boil down the liquid in the skillet to a light syrup.

Add the syrup to a bowl with 1 T (tablespoon, 15 ml) of each of Worcestershire Sauce, Dijon mustard, minced garlic, and 4 T of heavy cream and, to make a sauce, mix the contents of the bowl.

Pour over the onions and beef.

Warm in a microwave oven.

So, a recipe is some instructions, step by step.

A more complicated recipe might have uses of

if-then-else

do-while

If we want we can also get into computability, the halting problem (a self-referencing paradox), computational complexity, e.g., the question of P versus NP, numerical linear algebra, numerical partial differential equations, ray tracing graphics, cryptography, etc., but still the computing itself can be regarded as just following some recipes. Since I'm trying to get a business going and am not trying for academic tenure, that view of computing is enough for me.

Yes, the steps in a computer recipe might be from the results of some math theorems and proofs. A good example is the Gauss elimination in linear algebra.

For math, one view is to start with the truth table and propositional logic (calculus). The truth tables define AND, OR, NOT, and IF-THEN; that is, they give the result, TRUE or FALSE, for each combination of TRUE or FALSE inputs.
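E.g., here is IF-THEN written out directly as its truth table (a small sketch, just for illustration), listing the result for each combination of TRUE/FALSE inputs:

    implies :: Bool -> Bool -> Bool
    implies True False = False  -- the only row of the table that yields FALSE
    implies _    _     = True

    main :: IO ()
    main = mapM_ print [((p, q), implies p q) | p <- [True, False], q <- [True, False]]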

From that one can argue that math can be reduced to just symbol substitution and uses of the truth tables. The result refutes counterexamples.

Might insert: A math statement such as

x + 1 = 0

can be read: There is a number, call it x, such that .... So, in this way we make the meaning of variables in math clearer. Then we get to use such notation when working with truth tables, etc.

A little deeper and closer to math, we can have sets:

So, a set is a collection of elements. We're supposed already to know what the elements are.

Given a set A and an element x, x is either in set A or it is not. When x is in set A, it is in that set exactly once, i.e., never twice or more. The elements in set A are distinct. They are not in the set in any particular order; e.g., we can't say what is the first element in the set. Suppose

x = y

that is, x and y are just two names for the same element. Then if x is in A, also y is in A. That is, it is possible to have x and y both in A when x and y are not distinct -- this is easy, just have

x = y

From that we can move to a book on axiomatic set theory and see some more axioms we have to assume, e.g., to get rid of self-referencing paradoxes.

From sets, in particular, starting with just the empty set, we can give definitions of everything else in math. The definitions are often tricky and even weird, still in principle it is possible to go through some elaborate constructions, starting with only the empty set, to define everything else. Since the results are so elaborate, even bizarre, we don't think of the math that way; instead, for skeptics or people who worry, we can say "if you just believe in the empty set, we can define everything else". When this is done, everything in math is a set -- the natural numbers, relations, functions, groups, rings, fields, vector spaces, curves, surfaces, etc.
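E.g., the standard von Neumann construction builds the natural numbers from nothing but the empty set (each number is the set of the numbers before it):

    0 = \emptyset, \quad
    1 = \{\emptyset\}, \quad
    2 = \{\emptyset, \{\emptyset\}\}, \quad \ldots, \quad
    n + 1 = n \cup \{n\}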

Then, in fundamental terms, the proofs in math are essentially just using propositional logic on sets.

In common high school courses in plane geometry, proofs are written in a special style based on a table. Those proofs are valid as proofs.

In practice in the rest of math proofs are essentially always written with an especially precise use of some natural language, e.g., English, but it is possible to convert such proofs to a table such as in plane geometry or just back to symbol substitution.

For what I've seen, done, and do in math, that description is enough.


Thank you deeply and kindly for the effort in your response. Appreciate it!



