You know, I'm not really into cryptocurrency, but it does seem like it's contributing to a resurgence of interest in formal methods. So - thank you, cryptocurrency community!
I agree; I've been trying to get employers/coworkers to look into TLA+ or Isabelle (for different things) for years now, and it feels like only after every cryptocurrency project started blogging about it did they start to listen.
I find it funny that the reason we want cryptocurrencies to be formally verified is that we want rock-solid financial transactions, so that the distributed ledger is always accurate (which is good), but actual banks don't seem to have these same requirements.
NOTE: Could be wrong on that last point, getting my information about banks third-hand, it's possible that banks are using formal verification. Can someone who works at a bank confirm?
This is because banks rely on property rights, which are enforced by courts, whereas cryptocurrencies operate on a "fiat accompli" basis (i.e. whoever happens to control the coins, owns them). This means, in the cryptocurrency world, mistakes are irreversible. In conventional finance, this is not usually the case.
Sorry for replying to an old thread, but I wanted to respond to this.
I totally understand why blockchain isn't the standard for everything right now; my statement was in regards to formal methods to ensure correctness. For example, "reversing" a transaction could totally be modeled in TLA+ (and probably other systems; I'm just more familiar with TLA+ than with most other systems), which would give you model checking (and possibly even proofs) that you would not otherwise have.
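To make that concrete, here's a minimal sketch of what I mean (the module name, amounts, and status field are all made up for illustration, not any real bank's model). A "reversal" is just another action, and TLC can check an invariant over every possible interleaving:

    ---- MODULE Reversal ----
    EXTENDS Integers
    VARIABLES balance, status

    Init == balance = 100 /\ status = "none"

    \* Debit the account once.
    Pay == /\ status = "none"
           /\ balance >= 40
           /\ balance' = balance - 40
           /\ status' = "paid"

    \* Compensating action: credit the amount back.
    Reverse == /\ status = "paid"
               /\ balance' = balance + 40
               /\ status' = "reversed"

    Next == Pay \/ Reverse

    \* Invariant for TLC: the balance never goes negative.
    NoOverdraft == balance >= 0
    ====

You'd still want a liveness property saying a requested reversal eventually completes, but even this much gives you something a test suite can't: exhaustive checking of the small model.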
Your English is excellent - as a native speaker I wouldn’t have guessed you had a different mother tongue. ‘Fait accompli’ is used in English with the same meaning. I was trying to make a stupid joke referring to fiat currency… HN is a tough audience, and I’m more of a droll engineer type.
Yeah, which always felt like a great reason to formally verify... If a race condition ends up draining my account of all my money, you can bet that I am going to sue for the lost funds and probably a bunch of other things, which I cannot imagine would be very cheap for the bank. In fact, I would think it would be expensive for them even if it wasn't their fault (e.g. maybe there was no money in my account because I just have bad spending habits and a bad memory).
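For what it's worth, that check-then-debit race is pretty much the canonical beginner example in TLA+. A toy spec (hypothetical names, two withdrawals against one account) where TLC will dutifully hand you the interleaving that overdraws:

    ---- MODULE Overdraft ----
    EXTENDS Integers
    VARIABLES acct, pc

    Procs == {1, 2}

    Init == acct = 100 /\ pc = [p \in Procs |-> "check"]

    \* Each withdrawal checks the balance first...
    Check(p) == /\ pc[p] = "check"
                /\ acct >= 100
                /\ pc' = [pc EXCEPT ![p] = "debit"]
                /\ UNCHANGED acct

    \* ...then debits. Nothing forces check-and-debit to be
    \* atomic, so both can pass the check before either debits.
    Debit(p) == /\ pc[p] = "debit"
                /\ acct' = acct - 100
                /\ pc' = [pc EXCEPT ![p] = "done"]

    Next == \E p \in Procs : Check(p) \/ Debit(p)

    \* TLC finds a trace ending with acct = -100.
    NoOverdraft == acct >= 0
    ====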
It seems to me that having every bit of evidence on their end to prove that they are guaranteeing our transactions would be a really good thing... I suppose this might just work itself out if banks actually start relying on some kind of blockchain to secure their transactions.
That is correct - there are all sorts of human and procedural buffers built into bank operations. Correctness of the system - and there is never just one system, there are at minimum dozens, even in the smallest bank - is not a hard requirement.
In fact, if you came from a formal methods practice and then saw the production code at just about any of the various animals called banks, you would immediately seek another job.
>> Is formal verification a required course or curriculum competency for any Computer Science or Software Engineering / Computer Engineering degree programs?
I've worked on safety-critical software, but we did not use formal methods. Lots of testing and checking for coverage (at the asm level). Oodles of static analysis and wading through false positives. Do formal methods usually supplement those, or replace some of them? Can provers or formal methodologies scale to safety-critical codebases of 100s of ksloc? Whether such codebases should exist, or would be better partitioned, is a separate issue.
The current crowning achievement of formal methods is, as I understand it, seL4. It is a formally proven microkernel of about 8500 LOC. There's still a while to go until they can scale to 100kLOCs, unfortunately.
A problem with both seL4 and CompCert is that the code written to express the proofs is huge, much larger than code that actually does stuff. This puts a ceiling on the size of the projects we can verify.
F* is a language that tries to address that by finding proofs with Z3, an SMT solver; Z3 can't prove everything on its own, but it cuts down proof code by orders of magnitude. They have written a verified cryptography stack and a verified TLS stack, and want to build a whole verified HTTPS stack.
F* (through Low*, a verified low-level subset of F*) can extract verified code to C, which is kind of the inverse of the seL4 approach: seL4 begins with C code and enriches it with proofs of correctness; HACL* (a verified crypto library written in F*) begins with proven-correct F* code and extracts C code from it (I gather the actual crypto primitives are compiled directly to asm, because C has some problems with constant-time guarantees). This enables HACL* to provide bindings for other languages that can just call C code, like this Rust binding.
I've never really worked on anything safety-critical, but I know that at least one real-time operating system has been designed and verified with TLA+ [1]. The book is a bit pricey (and used copies don't pop up that often), so I haven't read it, but my understanding is that the developers felt that TLA+ did find bugs.
That said, TLA+ is written separately from the code. There could of course be bugs in the translation of TLA+ -> C...TLA+ is about checking the design itself, not the code.
Not as a monolithic object. You might be able to do it with an almost microkernel approach (whatever you're implementing) with multiple small, well-defined objects and a well-defined protocol between them. That can be very useful. But I doubt you could formally verify their collective behaviour. The space and time complexity of the verification algorithms explodes as the number of interacting objects increases.
Yes, they are normally used in conjunction with testing. But they are also used in conjunction with careful systems engineering, to minimise the amount of code that needs to go through the most rigorous processes and to focus effort on what really matters.
I'm curious which static analysis tool you were using. I used to work in SPARK Ada and false positives were not really a thing due to how it worked. But I've heard of others that were much less restrictive which ended up with lots of false positives, or "maybe" cases that then wasted everyone's time. I can see how systems like that would put people right off using them.
Another public good that cryptocurrency has been contributing to is advances in zero-knowledge proofs, including some really interesting stuff like SNARKs.
One thing to highlight how important this may turn out to be:
> Proof-carrying data (PCD) enables a set of parties to carry out an indefinitely long distributed computation where every step along the way is accompanied by a proof of correctness. It generalizes incrementally verifiable computation
This has the potential to remove the element of trust that you currently must place in the provider of computation resources. With SNARKs you can simply ask them to include a proof that they actually performed the computation; the proof is succinct, i.e. both its size and the time needed to verify it are sublinear in the running time of the computation itself.
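Roughly, the succinctness guarantee looks like this (my paraphrase of the usual definition, not a quote from any particular construction): for a computation that runs in time T on input x, with security parameter \lambda,

    |\pi| = \mathrm{poly}(\lambda, \log T)
    t_{\mathrm{verify}} = \mathrm{poly}(\lambda, |x|, \log T)

so both the proof size and the verification time grow polylogarithmically in T rather than linearly.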
Of course, it will always be more computationally expensive for the provider of the computation resources to both do the work and produce the proof than to just do the work.
It also seems associated with wider adoption of, and some progress in the state of the art of, functional package managers and reproducible builds, which has been nice to see.