Formal methods are good and all, but you can still run into trouble when you fail to account for everything.
Some years ago, I had to work on a massively distributed system that scaled up or down with minimal communication between nodes, depending on the amount of work the system had. The developer of that scaling algorithm, a mathematician who gives quite a few software talks and writes books teaching functional programming, had written a proof for the system guaranteeing its upper and lower bounds of performance. I was brought in because the system wasn't behaving anywhere near those parameters; the worst part was that it often wouldn't downscale at all. The mathematician pointed at his proof, and therefore insisted that if there was a problem, it had to be a bug somewhere else.
So after learning how to read his proof, executing it, and doing all the typical ops work to identify what the system believed was going on at any given time, it became clear that the proof had a significant flaw: it assumed that the system's information about the amount of incoming work was perfect, with no latency. Since the real world has latency, and none of it was accounted for, the proof was guaranteeing success in a universe different from ours. Once I modified the model to represent the real information quality, its outcomes were pretty close to what we were seeing in reality, and equally unacceptable.
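To make that concrete, here's a toy simulation of the failure mode; nothing here is from the actual system, and the scaling rule and numbers are invented. The same rule that downscales promptly with perfect information keeps nodes running long after the load has dropped once its reading of the load is stale:

    # Toy sketch, not the real system: an autoscaler that decides node
    # count from a possibly *stale* measurement of load.
    from collections import deque

    def simulate(latency_steps, steps=60):
        load = [100] * 20 + [10] * 40                  # load drops at t=20
        history = deque([load[0]] * (latency_steps + 1),
                        maxlen=latency_steps + 1)
        nodes = 10
        for t in range(steps):
            history.append(load[t])
            observed = history[0]                      # reading is latency_steps old
            target = max(1, observed // 10)            # naive scaling rule
            nodes += (target > nodes) - (target < nodes)  # adjust one node per step
            print(f"t={t:2d} load={load[t]:3d} observed={observed:3d} nodes={nodes:2d}")

    simulate(latency_steps=0)    # perfect information: downscales right away
    simulate(latency_steps=15)   # realistic lag: pays for idle nodes much longer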
So proofs are powerful, and maybe there are zero issues in the proof of this smart contract, but formal models and proofs can still fail you; they'll just fail far less often than your typical system made with unit tests and chewing gum.
Chewing-gum systems tend not to fail catastrophically, though, due to a general lack of trust in the solution. Formal verification often instills a sense of confidence that means few, if any, failsafes are put in place.
The formal method involves using one programming language (a mathematically based, verification-oriented language) to verify a module written in some other programming language.
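For anyone who hasn't seen this style of tool: in a proof assistant like Lean, the code and the claim about it sit side by side, and the file won't compile unless the proof actually goes through. A trivial sketch of the idea (my own toy example, not anything from the article or its toolchain):

    -- Toy Lean 4 example, not the article's code: the theorem is
    -- checked by the compiler, so the property can't silently break.
    def addBalance (balance amount : Nat) : Nat :=
      balance + amount

    -- Verified property: a deposit never decreases the balance.
    theorem addBalance_ge (b a : Nat) : addBalance b a ≥ b := by
      unfold addBalance
      omega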
I’m sure it strengthens the codebase being verified. But there is a reason systems engineering involves both verification and validation.
Admittedly I'm being a bit trollish as I say this, but do you get the same feeling from proofs that the Pythagorean Theorem is correct?
Proving things correct is much harder than finding counter-examples showing they aren't. But the methods do work; they're based on sound formal logic. Proofs can have mistakes, certainly, but a proof is a darn strong signal that their system is correct.
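That asymmetry is easy to demonstrate. A property-based test can falsify a wrong claim in milliseconds, while a proof has to cover every input; this sketch uses the real hypothesis library, though the buggy function is an invented example:

    # Sketch of the proof/counter-example asymmetry, using the
    # hypothesis library. The buggy function is an invented example.
    from hypothesis import given, strategies as st

    def buggy_abs(x: int) -> int:
        return x                      # bug: forgot the negative branch

    @given(st.integers())
    def test_abs_nonnegative(x):
        assert buggy_abs(x) >= 0      # typically shrinks the failure to x=-1

    # Run with pytest: finding this counter-example takes milliseconds;
    # proving a *correct* abs right for all ints is a much bigger job.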
The problem with proving things about software is less in the proof itself and more in the connection between the proof and the real world.
If somebody told me that they had a financial system whose security was based on applying the Pythagorean Theorem to physical triangles, it would raise exactly the same red flags: the theorem itself isn't in question, but it doesn't even try to capture how physical materials might be imperfect or change over time, and those are exactly the sort of inconsistencies a motivated attacker could exploit.
I was going to reply, but exactly this. I'll add that the proof work in the article is really interesting, and I love to see practical work on formal proofs happening as a result of eth in particular. However, I just don't believe 100% proven code on eth is 100% trustworthy.
Another way of phrasing it is that the law of leaky abstractions means that even though the code itself might be 100% correct, lower levels can puncture its assumptions and make that correctness moot, i.e., unsinkable Titanic vibes.