One can prove, ab initio, that 1+1=2 in less than ten lines in Coq.
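A minimal sketch of such a proof, building the naturals and addition from scratch instead of importing the standard library (exact line count depends on formatting):

```coq
Inductive nat : Type :=
  | O : nat
  | S : nat -> nat.

Fixpoint plus (n m : nat) : nat :=
  match n with
  | O => m
  | S p => S (plus p m)
  end.

(* 1 + 1 = 2 holds by computation alone. *)
Theorem one_plus_one : plus (S O) (S O) = S (S O).
Proof. reflexivity. Qed.
```

With the standard library, the theorem and proof shrink to a single line: `Theorem t : 1 + 1 = 2. Proof. reflexivity. Qed.`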
Algorithms are _meaningless_ without accompanying proofs. There should be --- and is --- a continuum of formality. The 20th Century has been unique in recognizing the fallacies and contradictions of completely informal reasoning; why give up when we're finally ahead?
To whoever rated me down -- why do you think that human-written software has so many bugs? It's because we don't take the time (and don't yet have good strategies) to effectively "prove" that our code and algorithms exactly meet the program specs. To write code, we have to make certain idealizations (e.g. thinking of "doubles" as actual real numbers) which in fact have tricky corner cases.
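A quick sketch of those corner cases, using standard IEEE-754 double behavior (nothing here is specific to any one language):

```python
# Treating doubles as reals breaks down on exact equality:
print(0.1 + 0.2 == 0.3)   # False: 0.1 + 0.2 is actually 0.30000000000000004

# Even associativity of addition fails:
print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))  # False

# The usual workaround is comparison within a tolerance:
import math
print(math.isclose(0.1 + 0.2, 0.3))  # True
```

Any proof about code doing floating-point arithmetic has to account for this gap between the idealization and the machine.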
So many areas of life cannot easily be analyzed mathematically. Hence, our brain resorts to algorithms that work "most" of the time, which, luckily, is usually sufficient for survival.
I'm halfway into "Simple Heuristics That Make Us Smart", and it's very revealing on the subject. It turns out the "recognition heuristic" alone gets you about halfway; another, only slightly more complex heuristic covers half of what remains. Also interesting: very simple heuristics tend to be about as good as, or better than, very complicated algorithms, and noticeably more robust.
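A sketch of the recognition heuristic as I understand it from the book (the function and data names are mine): when exactly one of two options is recognized, infer that the recognized one scores higher on the criterion.

```python
def recognition_heuristic(a, b, recognized):
    """Return the predicted higher-scoring option, or None when the
    heuristic cannot discriminate (both or neither recognized)."""
    if a in recognized and b not in recognized:
        return a
    if b in recognized and a not in recognized:
        return b
    return None  # fall back to another heuristic or a guess

# Toy example: guessing which city is larger from name recognition alone.
known = {"Berlin", "Munich"}
print(recognition_heuristic("Berlin", "Bielefeld", known))  # Berlin
print(recognition_heuristic("Berlin", "Munich", known))     # None
```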
An algorithm's meaning: that which is computed when the algorithm runs. Pretty simple. Of course, you need something to run it on. But one could say that proofs are meaningless without humans to run them on as well.
Given that algorithms can have undecidable outcomes, it's hard to even say what a meaningful proof-based description of such algorithms would entail.
Clearly, when developing an algorithm, one should have an intention as to what it will do. The idea behind many algorithmic proofs and, I assume, that of the original poster, is to prove that one's implementation in fact does what was intended. For example, proving that a sorting algorithm always outputs a properly sorted list, or something to that effect.
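As a sketch, here is the property one would prove about a sorting function, expressed merely as a runtime check (which is of course weaker than a proof): the output is ordered AND is a permutation of the input. The names are my own.

```python
from collections import Counter

def is_sorted(xs):
    # Every adjacent pair is in order.
    return all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))

def meets_sorting_spec(inp, out):
    # Ordered, and a permutation of the input (same elements, same counts).
    return is_sorted(out) and Counter(out) == Counter(inp)

xs = [3, 1, 2, 1]
print(meets_sorting_spec(xs, sorted(xs)))  # True
print(meets_sorting_spec(xs, []))          # False: ordered, but not a permutation
```

Note that the permutation half matters: a "sort" that returns the empty list satisfies orderedness alone, which is why specs are written as conjunctions like this.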
Modern theorem provers are also pretty awesome in that they can often, using the correspondence between formal proofs and programs (the Curry-Howard isomorphism, for those interested), turn a proof that some unspecified function F sorts a list into a Haskell/ML implementation of a function satisfying that specification.
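For instance, Coq's extraction mechanism erases the proof component of a certified term and emits plain Haskell. A toy sketch (the `double` example is mine, not a real sorting proof):

```coq
Require Import Extraction Arith.
Extraction Language Haskell.

(* A certified function: its result is bundled with a proof, via `sig`,
   that the result really equals n + n. *)
Definition double (n : nat) : {m : nat | m = n + n} :=
  exist _ (n + n) eq_refl.

(* Extraction discards the proof part, leaving an ordinary
   Haskell function from naturals to naturals. *)
Extraction double.
```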
Even if you prove that a program meets a formal spec, there's no way to know that the formal spec does what you 'intend'. And specs can get complex too.
Nonetheless, you have reduced the uncertainty in the system. If you have proven your algorithms conform to the spec, then when you find bugs you at least know where the system is breaking down: in the spec itself.