Aside from the obvious point that software can only be as secure as the hardware it runs on, isn't Gödel's incompleteness theorem a problem with scaling from smallish problems to biggish ones?
Granted it has been a long time since I understood it even moderately well, but my main recollection is that a sufficiently powerful system of logic can't be both complete and consistent. That would seem to be an issue with extending formal methods to any computer system of broad general use.
Gödel's incompleteness theorems are more of a theoretical limitation than a practical one.
Roughly speaking, Gödel's incompleteness theorems say that all proof systems for number theory are limited in some way. For example, first-order Peano arithmetic is such a system, and one of its limitations is that it doesn't support transfinite induction. (It doesn't matter if you don't know what that is.) In other words, if you want to translate a mathematical proof into Peano arithmetic, you have to come up with a way to do it without transfinite induction. Sometimes, as in the case of Goodstein's theorem, this is impossible: to prove Goodstein's theorem, you have to choose a stronger proof system to begin with.
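To get a feel for what Goodstein's theorem is about, here is a minimal Python sketch (the function names are my own, purely illustrative) of the sequences it concerns: write n in hereditary base-b notation, replace every b with b+1, subtract 1, and repeat. Goodstein's theorem says the sequence always reaches 0; Peano arithmetic cannot prove that.

    def rewrite(n, base, new_base):
        """Write n in hereditary base-`base` notation (exponents are
        rewritten recursively too), then replace every occurrence of
        `base` by `new_base`."""
        if n == 0:
            return 0
        total, exp = 0, 0
        while n > 0:
            digit = n % base
            if digit:
                total += digit * new_base ** rewrite(exp, base, new_base)
            n //= base
            exp += 1
        return total

    def goodstein(n, max_steps=10):
        """The first few terms of the Goodstein sequence starting at n."""
        seq, base = [n], 2
        while n > 0 and len(seq) <= max_steps:
            n = rewrite(n, base, base + 1) - 1  # bump the base, subtract 1
            base += 1
            seq.append(n)
        return seq

    print(goodstein(3))  # [3, 3, 3, 2, 1, 0] -- reaches 0 almost immediately
    print(goodstein(4))  # [4, 26, 41, 60, 83, ...] -- keeps growing

Starting at 4, the sequence does eventually reach 0, but only after an astronomically large number of steps; that kind of growth is exactly what Peano arithmetic cannot handle.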
So, Gödel's theorems guarantee that no matter how strong your proof system is, there are always number-theoretic statements that are beyond its reach. But for reasons that are not yet completely understood, this doesn't really happen in practice: "naturally occurring" number-theoretic statements almost always turn out to be provable in surprisingly weak systems.
Instead, you run into practical problems: the theorem is provable in the system, but actually writing out the proof is prohibitively tedious. As an analogy, there are Turing-complete programming languages that don't have the concept of functions. In theory, they are capable of all kinds of computations, but in practice you wouldn't want to use them.
And if, instead of mathematics, we concentrate on proofs of correctness, then this is even less of a practical problem. To quote Leslie Lamport, proofs of correctness "are seldom deep, but usually have considerable detail." The proofs may be long and complicated, but as long as they don't use any kind of ridiculously abstract techniques, they are just the kind of proofs where computers can have an advantage over humans.
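To make that concrete, here is a toy sketch in Lean (the definition and names are mine, purely illustrative): the proof contains no deep ideas, just induction and routine arithmetic, and the machine checks every detail.

    -- A deliberately simple definition...
    def double : Nat → Nat
      | 0     => 0
      | n + 1 => double n + 2

    -- ...and its correctness proof: shallow but fully detailed,
    -- and every step is machine-checked.
    theorem double_eq (n : Nat) : double n = 2 * n := by
      induction n with
      | zero => rfl
      | succ k ih =>
        simp only [double, ih]  -- unfold one step, apply the hypothesis
        omega                   -- close the remaining linear arithmetic

Real verification efforts prove much longer statements, but the flavor is the same: bookkeeping at scale, which is exactly what machines are good at.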
Static type systems were explicitly designed to get around inconsistency problems in early formulations of mathematics (Russell's paradox in naive set theory, etc.), and formal verification generally exploits such type systems. In a nutshell, the incompleteness theorems (and related problems) only apply to sufficiently powerful logical systems, namely those that can interpret Robinson arithmetic. Less powerful (or, in some cases, incomparable) axiom systems exist that are both consistent and complete. Usually, the cost is that (1) they are not capable of expressing all programs (though many people argue that they are capable of expressing all useful programs), and (2) some programs that are correct in those type systems may require exponentially larger proofs (types) than a less stringent system would need to accept the same term (again, many people argue that this does not matter in practice).
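Point (1) is easy to see in a total language. In Lean, for example, a plain def must provably terminate; general recursion such as the unbounded search below is only admitted when marked partial, which keeps the definition out of the proof layer. (A minimal sketch; the function is mine, for illustration.)

    -- Whether this halts depends entirely on `p`, so the termination
    -- checker rejects it as a plain `def`. Marking it `partial`
    -- admits it, but `partial` definitions cannot be unfolded in
    -- proofs -- the price paid to keep the logic consistent.
    partial def search (p : Nat → Bool) (n : Nat) : Nat :=
      if p n then n else search p (n + 1)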
I think that the difference is that Gödel's theorem relates to the general case, not the specific case.
For example, the Halting Problem (which is, I admit, what I first thought of when I read this) proves that there is no general algorithm that can decide whether an arbitrary program will halt on a given input.
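The standard diagonal argument is short enough to sketch in Python (the oracle halts is hypothetical; the theorem's content is exactly that it cannot be implemented):

    def halts(prog, arg) -> bool:
        """Hypothetical halting oracle: True iff prog(arg) eventually
        halts. No real implementation of this can exist."""
        raise NotImplementedError

    def paradox(prog):
        if halts(prog, prog):
            while True:  # oracle says prog(prog) halts -> loop forever
                pass
        return           # oracle says prog(prog) loops -> halt at once

    # If `halts` were implementable, paradox(paradox) would halt exactly
    # when halts(paradox, paradox) returns False -- i.e., exactly when
    # it does not halt. Contradiction, so no such oracle exists.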
However, the theorem says nothing about whether you can prove that a specific program will halt on every input. You can easily write programs that are provably guaranteed to halt.
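For instance, Euclid's algorithm comes with a one-line termination argument (a standard example, not tied to any particular verification tool):

    def gcd(a: int, b: int) -> int:
        """Greatest common divisor of two non-negative integers.

        Termination: each iteration replaces (a, b) with (b, a % b),
        and 0 <= a % b < b whenever b > 0, so b strictly decreases
        in the naturals and must reach 0.
        """
        assert a >= 0 and b >= 0
        while b:
            a, b = b, a % b
        return a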
I guess my thought was that if they are moving beyond very limited use cases and into broader ones (e.g. communication, which almost certainly involves very broad allowable inputs), then the existence of provably correct programs for small problems is less relevant.
But as I'm starting to understand, it may be that this is still a very limited case compared to the theoretical general case, even if it is significantly more general than e.g. proving a given program will halt.
Or am I misapplying the theorem here?