> I looked a little into RustBelt; how would it have helped this `Pin` issue? I browsed [1] a bit and it seems like `Pin` would have to have been proven on its own in an ad-hoc way not covered by RustBelt.
To clarify, we do have a work-in-progress formalization of pinning as part of RustBelt [0] (I worked on it over the summer). We did not discover this issue because RustBelt does not currently model traits, and the problem here is very much about the interaction between traits and pinning. Ralf touches on this in the linked discussion:
> Actually trying to prove things about Pin could have helped, but one of our team actually did that over the summer and he did not find this issue. The problem is that our formalization does not actually model traits. So while we do have a formal version of the contract that Deref/DerefMut need to satisfy, we cannot even state things like "for any Pin<T> where T: Deref, this impl satisfies the contract".
CompCert doesn't do this, but CakeML, a verified compiler for a variant of ML, bootstraps itself in the logic:
> A unique feature of the CakeML compiler is that it is bootstrapped “in the logic” – essentially, an application of the compiler function with the compiler’s source implementation as argument is evaluated via logical inference. This bootstrapping method produces a machine code implementation of the compiler and also proves a theorem stating functional correctness of that machine code. Bootstrapping removes the need to trust an unverified code generation process. By contrast, CompCert first relies on Coq’s extraction mechanism to produce OCaml, and then relies on the OCaml compiler to produce machine code.
I might be mistaken, but my understanding is that it does not remove the need for trust entirely: you still need to trust the implementation of the logic. But if you don't trust that, you wouldn't trust the compiler correctness proof anyway, so bootstrapping in the logic adds no new trust assumptions and does give a benefit.
Yes, but a large part of Coq is designed to be untrusted. Roughly, Coq searches for a proof and then checks the proof it found. The search part is larger; it can be incorrect, can have bugs, etc., without affecting soundness. The check part is trusted, and is available as a separate binary (coqchk), which re-checks the proofs found by the search tool (coqc).
For CompCert, the more worrying issue is the correctness of "extraction". This is somewhat technical; you should be able to search for it if you are curious. CakeML solves the extraction problem.
For HOL, Proof Technologies Ltd developed HOL Zero, which is solely dedicated to trustworthiness, without bells and whistles. Bells and whistles are absolutely necessary for proof development; the idea is to use HOL Zero at the end, after the proof is finished. It's like the coqchk idea on steroids. http://proof-technologies.com/holzero/
Summary: CompCert trusts coqchk and extraction. CakeML trusts HOL, hopefully something like HOL Zero can be used in the future.
Note that Gödel's Incompleteness Theorems tell us that formal systems with sufficient power cannot 'prove themselves' in some sense, and we see that rather convincingly:
1. Implementations can be wrong, so we want a formal proof system
2. The proof system can be wrong, so we want to apply the proof system to itself
3. But the proof system may be wrong (either in implementation or specification) in a way such that its self-proof is actually invalid
At some point, we want to assert some property (completeness, correctness) of the system as a whole, but whenever we augment the system to allow this, that augmented system's own properties now come into question.
Logic proof systems need never hit Gödel incompleteness. Many systems weaker than the ones Gödel's theorems apply to are proven consistent within themselves, and are still strong enough to verify code.
Gödel's proofs rely on encoding statements using properties of the integers that these weaker logic systems cannot express.
Presburger Arithmetic, for example, avoids Gödel incompleteness, yet provides a decent system for a lot of work.
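To make the Presburger point concrete: linear integer arithmetic (addition and ordering, but no general multiplication) is decidable, and proof assistants ship decision procedures for exactly this fragment. A sketch in Lean 4, using its `omega` tactic (the examples themselves are illustrative):

```lean
-- `omega` is a decision procedure for linear arithmetic over
-- naturals and integers -- the Presburger-style fragment where
-- every such goal can be settled mechanically, no induction needed.
example (x y : Nat) : x + y = y + x := by omega
example (x : Int) : 2 * x ≠ 2 * x + 1 := by omega
```

Note that the goals stay inside the decidable fragment: multiplication appears only with constant coefficients, never between two variables.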
Not Coq, but HOL4 and Poly/ML. There are verified HOL kernels, but you'd need to do substantial manual validation of the exported theorems to make sure they didn't get subverted during proof export.
I agree, and there are efforts to develop formally verified implementations of Nakamoto consensus (which I'm involved in). We published a paper about our efforts and first results a year ago, and I'm currently extending that work for my Master's thesis. Others are also building on top of it [3].
We're nowhere near having a verified Bitcoin implementation (much less verifying the existing C++ implementation), but we're making steady progress. One could reasonably expect having a verified, deployable implementation in 3-4 years if people continue working on it.
It remains to be seen whether people would switch to that from Core (very unlikely) or Core would adopt it as a component to replace its existing consensus code (interesting possibility, but also unlikely).
First you mention Nakamoto consensus, then you mention "existing consensus code". Are you including Bitcoin script when you say consensus? It seems like a huge task to formally model the whole consensus system, including OP_CODESEPARATOR. How would you replicate something like the BIP-0050 problem?
> Are you including Bitcoin script when you say consensus?
Currently, no. Our model considers "consensus" to be agreement on the ordering of transactions. We don't yet interpret the transactions.
However, agreement on the state follows directly, provided nodes have a function (in the "functional programming" sense), call it `Process`, that returns the current state given the ordered list of transactions. If all nodes have the same function, they will compute the same state.
The BIP-0050 issue (in my understanding) is that different Bitcoin versions had different consensus rules, i.e. their `Process` functions were different. As far as I know, Bitcoin script is deterministic, so it should be possible to define it functionally. We haven't begun looking at this, though, and I don't know of anyone who has -- some people are working on verified implementations of the EVM, however.
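The `Process` idea can be sketched as a deterministic fold over the ordered transaction list. Everything here is illustrative (the state, the transaction shape, and the validity rule are made up, not anyone's actual implementation); the point is only that identical rules plus identical ordered input yield identical state:

```python
from functools import reduce

# Toy state: a map from coin id to its current owner.
# A transaction (coin, new_owner) reassigns an existing coin;
# transactions referencing unknown coins are skipped (a toy validity rule).
def apply_tx(state, tx):
    coin, new_owner = tx
    if coin in state:
        state = dict(state)  # keep it functional: never mutate the input
        state[coin] = new_owner
    return state

def process(txs, genesis):
    """Deterministic fold: same rules + same ordered txs => same state."""
    return reduce(apply_tx, txs, genesis)

genesis = {"c1": "alice", "c2": "bob"}
txs = [("c1", "bob"), ("c3", "carol"), ("c2", "alice")]

# Two nodes running the same `process` on the same ordered list agree.
assert process(txs, genesis) == process(list(txs), dict(genesis))
```

A BIP-0050-style split corresponds to two nodes running *different* `process` functions (say, one whose `apply_tx` enforces an extra limit): they can disagree on the resulting state even with identical input.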
> It seems like a huge task to formally model the whole consensus system, including OP_CODESEPARATOR.
It is a huge task, yes, but it can be done modularly such that reasoning about script interpretation can be almost entirely separated from reasoning about Nakamoto consensus. (This is a bit subtle, since the fork choice rule needs to check if blocks are valid, which relies on script interpretation.)
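The modular separation mentioned above can be sketched by parameterizing the fork choice rule over a block-validity predicate. This is a hypothetical toy, not the model from our paper: chain selection (here, naive longest-valid-chain) is reasoned about separately from whatever `valid_block` does internally, which is where script interpretation would live:

```python
# Fork choice parameterized by validity: reasoning about chain
# selection needs only the *existence* of `valid_block`, not its
# internals (script interpretation, signature checks, ...).
def best_chain(chains, valid_block):
    valid = [c for c in chains if all(valid_block(b) for b in c)]
    return max(valid, key=len, default=[])

# Toy validity rule: a block is a list of script results, all must be truthy.
def toy_valid(block):
    return all(block)

chains = [
    [[1], [1], [0]],  # longer, but contains an invalid block
    [[1], [1]],       # shorter and fully valid
]
assert best_chain(chains, toy_valid) == [[1], [1]]
```

Swapping in a real script interpreter for `toy_valid` changes which chains are valid, but not the chain-selection argument itself.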
I don't really understand OP_CODESEPARATOR. As long as it is deterministic, it shouldn't be an issue.
It looks interesting, but I believe even verifying 100-200 lines of the real Bitcoin consensus code would be more useful. All the Bitcoin Core developers know that the Bitcoin implementation is a mess, but they understand that it's just not going away. As for Bitcoin script, there's a plan to upgrade it to Simplicity (https://blockstream.com/simplicity.pdf), but even then, the old code will stay (there's a consensus that there won't be any hard forks for a long time unless one is really needed).
The last bug was in the caching code (a transaction could slip into a block twice, which could lead to inflation), and it was something that only formal verification of the no-inflation property in the current code would have caught.
My biggest worry is the digital signatures in the Bitcoin blockchain: if there's a bug there, it's over; building trust again is extremely hard.
> I’d really appreciate [...] links to the work of similar initiatives
Take a look at Quorum, a "programming language which is designed to be accessible to individuals with disabilities and is widely used in schools for the blind".
> SAT solvers generally have poor support for constructive logic, the kind of logic used in theorem provers based on dependent types like Coq.
Arguably, this is no longer the case. FStar [1] has dependent types, monadic effects, refinement types and a weakest precondition calculus (i.e. a richer type system than Coq) and uses the Z3 SMT solver to discharge many proof obligations. I've been using FStar recently and it works surprisingly well.
Apparently, v1.0 of the Facebook Graph API could access users' private messages via the 'read_mailbox' API request [1]. This was deprecated when v2.0 launched.
"Version 1.0 of the Graph API launched on April 21, 2010. It was deprecated in April 2014 and closed completely to legacy apps (ie, existing apps that used the API before April 2014) on April 30, 2015."
You used to be able to connect to facebook messenger via XMPP. Combined with this permission, it would have let you retrieve historical messages and add persistence among alternative clients.
If Adium, Pidgin, or another multi-protocol chat application had needed that permission at the time to combine my Facebook and AIM lists into a single application, I would have certainly understood it.
In late 2013 I used (I think) the Graph API to pull my own private chat messages so I could see what messages I sent and received on a particular day, to remind myself when certain events happened. I can't think of many good reasons for this API to exist to third parties, but it was pretty handy for my own data.
I want to like bitmessage, but the last few times I looked it wasn't ready for prime-time. It is (was?) text-only, and I can't communicate without memes.
https://aws.amazon.com/blogs/opensource/lean-into-verified-s...