> and can't be applied directly to implementations
There are model checkers that work directly on implementations, like NASA's JPF.
> I personally don't confer that much trustworthiness to systems that appeal to model checking tools for their correctness.
Given that the going rate for proofs is roughly 1-1.5 days per line of code, a cost which grows superlinearly with code size, I think we have no choice...
In any event, pretty much all safety-critical software that human lives depend on (even many, many lives) is verified with model-checking in the best case. Use of proofs in verifying safety-critical systems is very rare, as even those domains can rarely afford it.
Also, remember that no one really needs 100% proof, as no other, non-software component in the system guarantees 100% reliability (not that model-checking can give us probability bounds).
100% certainty in proof doesn't mean that you're aiming to get a 100% proven system. It means that you can stop worrying about the proven part forevermore. So it reduces long term running overhead.
If your proven component is so well-isolated from the others and doesn't undergo changes (and neither do its assumptions about its input), it seems like the same would apply to a model-checked component. Besides, the cost of proofs is well over 1Kx if not 10Kx that of model checking, so I don't see how this consideration is important.
Proofs are useful -- because of their total coverage -- when verifying components that will be used in many different projects, like compilers, kernels and libraries, and even then only when they are sufficiently small to be doable within, say, 5-10 years by a dedicated team. A one-time gigantic job will pay for itself by being used (and hopefully paid for) by many customers over many years.
I've got no problem with model checkers, though I'm not sure I agree with your effort ratings. Different tools for different jobs, eh?
I think I also have a different conception of proof than the one you're describing. You don't need to prove your entire program. 10 small proofs strung together can go an enormous distance with low weight.
I don't know. The spec I'm working on now is ~1300 lines of TLA+, and that's just the essential core of the thing -- nowhere near the whole program. By comparison, the Raft TLA+ spec is under a third of that, yet the Verdi proof[1] is ~50K LOC, took a whole team of experts months of work, and didn't prove liveness.
I have a very small sub-algorithm that I might try proving in TLA+ just to see how it goes.
[1]: I couldn't find the original TLA+ proof online, but the author of the Raft protocol said that he struggled even with partial machine-checked proofs.
I don't think this is really surprising. I don't know a ton about distributed systems, but the last time I read up on formalizing them and proving properties it seemed like an incredibly daunting task. There's an exponential blowup problem: the composed state space is the product of every actor's local state space, and you have to track every interleaving of their steps over time. I saw some interesting proofs which used topological invariants to get a handle on that whole complex more delicately... but to actually just dive in and try exhausting it cannot be simple.
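To make the blowup concrete: with N actors that each have S local states, the composed system has up to S^N states, and an explicit-state checker has to visit all of the reachable ones. A minimal sketch in Python (the toy 3-state actors are invented purely for illustration):

```python
# Toy illustration of state-space blowup: N actors, each a trivial
# 3-state cycle (0 -> 1 -> 2 -> 0). The composed system's state is the
# tuple of all local states, so the reachable space is 3**N.

LOCAL_STATES = 3

def step(state, actor):
    """Advance one actor's local state (hypothetical transition rule)."""
    s = list(state)
    s[actor] = (s[actor] + 1) % LOCAL_STATES
    return tuple(s)

def explore(num_actors):
    """Breadth-first exploration of the composed state space."""
    init = (0,) * num_actors
    seen = {init}
    frontier = [init]
    while frontier:
        nxt = []
        for state in frontier:
            for actor in range(num_actors):
                succ = step(state, actor)
                if succ not in seen:
                    seen.add(succ)
                    nxt.append(succ)
        frontier = nxt
    return len(seen)

for n in range(1, 6):
    print(n, explore(n))  # grows as 3**n: 3, 9, 27, 81, 243
```

Even these trivial, non-interacting actors produce exponential growth; real protocols with shared state and messages in flight are far worse, which is why tools like TLC lean on state hashing and symmetry reduction.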
Well, formalizing them isn't too hard, and model-checking them is quite easy (once you have a good formal spec), but proving them is another matter altogether...
People prove specific algorithms (distributed, concurrent or sequential) with TLA+, but once the spec is of a real system, things get much, much harder to prove.
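To show what "quite easy once you have a good formal spec" means in practice: once the transition relation is written down, checking an invariant is a mechanical search for a reachable bad state. Here's a sketch of that idea in Python rather than TLA+ (the deliberately broken test-then-set lock protocol is invented for illustration):

```python
# Toy explicit-state model check of a (deliberately broken) mutual-
# exclusion protocol: each process does "if not locked: locked = True;
# enter critical section". The checker finds the interleaving where
# both processes pass the test before either sets the flag.

from collections import deque

IDLE, WAITING, CRITICAL = "idle", "waiting", "critical"

def successors(state):
    """Yield all next states; state = (per-process PCs, lock flag)."""
    pcs, locked = state
    for i in range(2):
        if pcs[i] == IDLE:                # test the lock
            if not locked:
                yield (pcs[:i] + (WAITING,) + pcs[i+1:], locked)
        elif pcs[i] == WAITING:           # set the lock, enter CS
            yield (pcs[:i] + (CRITICAL,) + pcs[i+1:], True)
        else:                             # leave CS, release the lock
            yield (pcs[:i] + (IDLE,) + pcs[i+1:], False)

def check_mutex():
    """BFS for a state violating mutual exclusion; None if invariant holds."""
    init = ((IDLE, IDLE), False)
    seen, queue = {init}, deque([init])
    while queue:
        state = queue.popleft()
        if state[0] == (CRITICAL, CRITICAL):
            return state                  # invariant violated
        for succ in successors(state):
            if succ not in seen:
                seen.add(succ)
                queue.append(succ)
    return None

print(check_mutex())  # finds the bad state: both processes in the CS
```

The whole verification is "write the transition relation, state the invariant, search" -- no inductive invariants, no proof obligations. That's the asymmetry being discussed: the same protocol proved correct (after fixing it) would require finding and proving an inductive invariant by hand.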