
Depends what you mean by “more.”

You can write programs to search for prime numbers and do other things with them. However, this can only tell you about specific prime numbers.

With proof you can prove things about all prime numbers, or about the integers as a whole. For instance, Euclid proved that there are infinitely many primes.
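
To make the contrast concrete, here is a toy sketch (Python, my own illustration, not anything of Euclid's): a program can enumerate specific primes, and it can run Euclid's construction on any one finite list, but only the proof covers every prime at once.

  # A program can only ever exhibit specific primes.
  def primes_up_to(n):
      """Trial division; returns the primes <= n."""
      return [p for p in range(2, n + 1)
              if all(p % d for d in range(2, int(p ** 0.5) + 1))]

  # Euclid's construction, run on one particular list: the product of
  # the listed primes, plus one, has a prime factor that cannot be in
  # the list (dividing by any listed prime leaves remainder 1).
  def missing_prime(primes):
      m = 1
      for p in primes:
          m *= p
      m += 1
      d = 2
      while m % d:              # find the smallest prime factor of m
          d += 1
      return d                  # never a member of `primes`

  print(primes_up_to(30))             # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
  print(missing_prime([2, 3, 5, 7]))  # 211 = 2*3*5*7 + 1 (prime here)

The asymmetry is the point: the second function handles one list per run, while Euclid's argument dispatches all possible lists in a few sentences.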

We remember Euclid 2,300 years later, so he accomplished a lot with his life. Today there is a big market for web pages, microcontrollers for gas pumps and washing machines, missile defense systems, etc. You can keep very busy writing programs for that stuff, but it probably won't be remembered in 2,300 years. (What's a gas pump?)




What's the practicality of knowing that there's an infinite number of primes?


It's a fundamental fact about the theory of primes, one with practical applications such as

https://en.wikipedia.org/wiki/RSA_(cryptosystem)


RSA requires a lot of big primes to exist (so the search space is large), but doesn't require infinitely many. For each key size there's a maximum prime size that matters, and all RSA cryptography that's used in practice would work fine if there were, say, no primes with more than a million digits.
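
For what it's worth, here's a toy sketch (Python with sympy; mine, and emphatically not production crypto) of why a fixed prime size suffices: textbook RSA key generation only ever asks for two primes of one particular size.

  # Toy RSA key generation: two big primes of a *fixed* size are
  # enough; nothing here needs arbitrarily large primes.
  from sympy import randprime

  p = randprime(2**1023, 2**1024)   # random 1024-bit primes
  q = randprime(2**1023, 2**1024)
  n = p * q                         # 2048-bit public modulus
  phi = (p - 1) * (q - 1)
  e = 65537                         # common public exponent
  # assumes gcd(e, phi) == 1, true with overwhelming probability
  d = pow(e, -1, phi)               # private exponent (Python 3.8+)

  msg = 42
  assert pow(pow(msg, e, n), d, n) == msg   # encrypt, then decrypt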


But maybe Euclid could have accomplished more in life with programming?


In some sense yes.

The two also can go together like peanut butter and jelly.

When I was in grad school, my thesis contained a derivation of the Gutzwiller trace formula

https://inspirehep.net/files/20b84db59eace6a7f90fc38516f530e...

integrating over phase space instead of position space (that paper does it over position space).

It was about 40 pages' worth of math, and while I wrote out the math, I also encoded each equation in a computer program that would try different inputs and check that the equality held. This didn't "prove" anything, but it convinced me that I had the math right. In a calculation like that there are 50+ places where a minus sign may or may not appear, and if you make any mistakes you might as well flip a coin to get the sign. I could have done the calculation without writing the program, but I would have either gotten the wrong answer (not so likely in this case, because I knew what the answer was) or gotten the right answer in the wrong way.
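
Roughly, the checking step looked like this (a toy stand-in of my own, not the actual thesis code): evaluate both sides of each derived equality at random inputs, where a single flipped sign fails almost surely.

  # Sanity-check a derived identity numerically at random inputs.
  import math, random

  def lhs(a, b):
      return math.cos(a + b)

  def rhs(a, b):
      # flip the minus sign below and the assertion fails on
      # essentially the first random draw
      return math.cos(a) * math.cos(b) - math.sin(a) * math.sin(b)

  for _ in range(1000):
      a, b = random.uniform(-10, 10), random.uniform(-10, 10)
      assert math.isclose(lhs(a, b), rhs(a, b), abs_tol=1e-9)
  print("identity held at 1000 random points (still not a proof)")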


He could write programs that find more primes, but never one that proves there will always be an infinite supply of primes.

I think other comments already said it: "more" is an ambiguous term. Do whatever you want and feel free to call that "accomplishing more".


Wait, what? Computer science encompasses mathematics. Any proof that can be validated by following a known set of reduction steps until axiomatic statements are reached (i.e. any proof worth the paper it’s written on) can be verified just as well by a computer as by a person.

For your specific case:

  definition infinite_primes (n : nat) : {p | p ≥ n ∧ prime p} :=
  let m := fact (n + 1) in
  have m ≥ 1,     from le_of_lt_succ (succ_lt_succ (fact_pos _)),
  have m + 1 ≥ 2, from succ_le_succ this,
  obtain p `prime p` `p ∣ m + 1`, from sub_prime_and_dvd this,
  have p ≥ 2, from ge_two_of_prime `prime p`,
  have p > 0, from lt_of_succ_lt (lt_of_succ_le `p ≥ 2`),
  have p ≥ n, from by_contradiction
    (suppose ¬ p ≥ n,
      have p < n,     from lt_of_not_ge this,
      have p ≤ n + 1, from le_of_lt (lt.step this),
      have p ∣ m,     from dvd_fact `p > 0` this,
      have p ∣ 1,     from dvd_of_dvd_add_right (!add.comm ▸ `p ∣ m + 1`) this,
      have p ≤ 1,     from le_of_dvd zero_lt_one this,
      absurd (le.trans `2 ≤ p` `p ≤ 1`) dec_trivial),
  subtype.tag p (and.intro this `prime p`)

https://homotopytypetheory.org/2015/12/02/the-proof-assistan...


Writing proofs in proof assistants is still harder than writing regular proofs, despite all the effort that has gone into making them simpler, so I doubt that Euclid would have managed to prove much if he had tried to do it via computer programs instead.

Not to mention what it does to readability: humanity would never have gotten anywhere if mathematicians produced proofs that looked like that. I'm not sure why programmers say mathematicians write hard-to-read stuff, when programmers trying to do the same thing come up with that.


If Euclid had formulated his proof from axioms rather than intuition, it would a) be an actual proof and b) far exceed the length of the above snippet.


The code you provided is only a proof if you can convince the reader that the code actually corresponds to Euclid's theorem; otherwise it could be full of bugs. You could argue that it is less likely to have bugs than the original proof, but I disagree: the original proof has stood for over two millennia, has been checked by countless mathematicians, and is just a few lines. There is no codebase in existence as well checked as that.

There are thousands of lines of CoQ dependencies for your snippet to run, and then many more lines in the CoQ compiler/runtime. The probability that all of that is bug-free is much lower than the probability that Euclid's original proof is right. Theorems written by mathematicians are virtually bug-free compared to code written by computer scientists. Until you solve that problem, coded proofs don't really add any validity to a proof. The code could still be reviewed like a real proof, but then you are using circular logic: your code is proven correct by expert checkers, exactly like how mathematical proofs are proven correct by expert checkers. Maybe the reviews are a bit easier then, but you still have the reviewers as a weak link, so there is no difference in theory; a buggy implementation will still lead to buggy results in both cases.

And no, you don't have to show all the steps for it to be a real proof, just like you don't have to plant apple seeds to make an apple pie. As long as the knowledge exists out there you can rely on it, just like your coq snippet relies on other coq code and isn't a proof on its own. Relying on existing knowledge is a feature, not a bug.

Edit: And if you still disagree: the fact that theorem provers haven't disproven any famous mathematical result yet means that the old way actually produced reliable proofs. And as far as we know, the people who wrote the theorem-prover proofs may have missed some things and proved different statements, only stopping once they got the same results as the mathematicians. We don't know; maybe they are right, but I wouldn't be surprised if some of those proofs actually missed some important aspects and proved weaker results. It is enough for a single one of your dependencies to have stated its theorem incorrectly, and every proof depending on it is now wrong.


First of all, this is clearly Lean, not Coq (and certainly not “CoQ”). While the distinction is minor, it does cause me to question whether you may have fallen victim to the Dunning–Kruger effect here. Perhaps I’m wrong, but it may be worth some self-evaluation.

Furthermore, I think you’re missing the forest for the trees here. My only real claim is that theorem proving and computation are two sides of the same coin. Either way you start with data, apply a sequence of established transforms to that data, and stop when you’ve reached an accept or reject state. The precise encoding of the input, transforms, and accept/reject states is unimportant.
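
To illustrate (a trivial example of my own, in Lean, not the thread's snippet): checking the proof term below literally is a computation. The kernel reduces both sides to a normal form and accepts `rfl` exactly when they coincide.

  -- The kernel reduces `2 + 2` and `4` to the same normal form;
  -- `rfl` (reflexivity) is the accept state of that reduction.
  example : 2 + 2 = 4 := rfl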

That said, we do seem to agree that Euclid’s original formulation is not a “true” proof. As you mentioned, many mathematicians have found cause to go back over the same theorem using more formal methods. While it’s true that for this particular proof-by-intuition no faults have been found, that certainly does not generalize to all proofs by intuition, and as such it’d be far more appropriate to compare a complete traditional proof to the complete computer proof. If you were to do this, I suspect you’d find them quite similar, as axiomatic computer proofs tend to use the exact same fundamentals as axiomatic pen-and-paper proofs.

Finally, your claim that theorem provers have never found significant faults in a famous conventionally proved result is shown incorrect by a simple Bing query. See https://www.math.sciences.univ-nantes.fr/~gouezel/articles/m.... While the overall result did hold, formal verification discovered a bona fide hole in the widely accepted, published, peer-reviewed, etc. etc. etc., paper. Plenty of other examples are available at https://mathoverflow.net/questions/291158/proofs-shown-to-be.... You also seem to be ignoring the fact that many theorems have only been proven via computational analysis.



