Hacker News

One can prove, ab initio, that 1+1=2 in less than ten lines in Coq.
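As an illustration (a sketch in Lean rather than Coq, and leaning on the built-in Nat type rather than being fully ab initio), the corresponding statement is a one-liner:

```lean
-- 1 + 1 = 2 holds by definitional unfolding of natural-number addition
theorem one_plus_one : 1 + 1 = 2 := rfl
```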

Algorithms are _meaningless_ without accompanying proofs. There should be --- and is --- a continuum of formality. The 20th Century has been unique in recognizing the fallacies and contradictions of completely informal reasoning; why give up when we're finally ahead?



>> Algorithms are _meaningless_ without accompanying proofs.

Most of what we write as coders (and, in fact, most of what the world runs on) consists of algorithms without proofs that "mostly" work.

In fact, you could call almost all of our strategies in social interactions "heuristic algorithms"!


To whoever voted me down -- why do you think that human-written software has so many bugs? It's because we don't take the time (and don't yet have good strategies) to effectively "prove" that our code/algorithms exactly meet the program specs. To write code, we need to make certain idealizations (e.g. thinking of "doubles" as actual real numbers) which actually have tricky corner cases.
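The doubles-vs-reals idealization breaks down in very small examples; here are two classic corner cases in Python, where floats are IEEE 754 doubles:

```python
# Doubles are not the reals: exact equality and associativity both fail.
print(0.1 + 0.2 == 0.3)  # False: 0.1 + 0.2 is 0.30000000000000004

# Floating-point addition is not associative, unlike real addition.
print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))  # False
```

Any proof of a numerical program that treats doubles as reals silently skips over exactly these cases.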

So many areas of life cannot easily be mathematically analyzed. Hence, our brain resorts to algorithms which work "most" of the time, which is luckily usually sufficient for survival.


It's typically because we don't even write a spec :-)


I'm halfway into "Simple Heuristics That Make Us Smart", and it's very revelatory on the subject. It turns out the "recognition heuristic" alone gets you about halfway; a slightly more complex heuristic covers half of what remains. Also interesting: very simple heuristics tend to be about as good as, or better than, very complicated algorithms, and noticeably more robust.

I found an abstract (http://www-abc.mpib-berlin.mpg.de/users/ptodd/SimpleHeuristi...) but it's a bit dry. The actual book is a lot more readable.


Almost all our strategies ain't worth much, and since proving things takes a lot of effort, it makes no sense to prove most of our strategies.

But if I were writing flight control software, I'd take the time to formally prove things; I'd even write the spec in Z.


An algorithm's meaning: that which is computed when the algorithm runs. Pretty simple. Of course, you need something to run it on. But one could say that proofs are meaningless without humans to run on as well.

Given that algorithms can have undecidable outcomes, it's hard to even tell what a meaningful description of such algorithms in terms of proofs would entail.


Clearly, when developing an algorithm, one should have an intention as to what it will do. The idea behind many algorithmic proofs and, I assume, that of the original poster, is to prove that one's implementation in fact does what was intended -- for example, proving that a sorting algorithm always outputs a properly sorted list, or something to that effect.
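To make the sorting spec concrete, here's a small Python sketch (`meets_sort_spec` is a hypothetical helper, not from any library) that checks the two clauses a proof would establish -- the output is ordered, and it is a permutation of the input. A runtime check like this only samples the spec on particular inputs; a formal proof covers all of them:

```python
from collections import Counter

def meets_sort_spec(sort_fn, xs):
    """Check both clauses of a sorting spec on one concrete input:
    the output is ordered, and it is a permutation of the input."""
    ys = sort_fn(xs)
    ordered = all(ys[i] <= ys[i + 1] for i in range(len(ys) - 1))
    permutation = Counter(ys) == Counter(xs)
    return ordered and permutation

# Built-in sorted satisfies the spec on this input; the identity
# function fails the "ordered" clause.
print(meets_sort_spec(sorted, [3, 1, 2, 1]))        # True
print(meets_sort_spec(lambda xs: xs, [3, 1, 2]))    # False
```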

Modern theorem provers are also pretty awesome in that they can often, using the correspondence between formal proofs and programs (the Curry-Howard isomorphism for those interested), turn a proof that some unspecified function F, say, sorts a list into a Haskell/ML implementation of a function satisfying that proof.


Even if you prove that a program meets a formal spec, there's no way to know that the formal spec does what you 'intend'. And specs can get complex too.


Nonetheless, you have reduced the uncertainty in the system. If you have proven that your algorithms conform to the spec, then at least you know where the system is breaking down when you find bugs.



