Cache invalidation, one of the so-called hardest things in software, is simply a consensus problem on the lifetime of data. Naming things is consensus on definitions.
Consensus isn’t just hard, it’s the hardest thing. Possibly even the only hard thing, if you include the social aspects of software development too (which involve human consensus).
Author spends some time on determinism, which got me thinking of CRDTs, which are trying to make state changes commutative. They are in effect trying to get from linearized state changes to causality. If we find a way to solve that problem, we have a more efficient consensus than Raft, one that can withstand small gaps in availability.
I keep waiting for someone to bring a Turing, Gödel, or Shannon into this and point out it’s not computable.
That worry aside, this thought process also brought me to monoids, which are used to share state in some functional languages. I’m curious how much information about concurrent state change is locked up in that space that people trying to solve the general problem don’t have ready access to.
Are you familiar with Joe Hellerstein and Peter Alvaro's work in this space? CALM provides something of a unifying theory of mergeable operations, of which CRDTs are an "object orientated" special case: https://arxiv.org/pdf/1901.01930.pdf
In practice, designers choose coordination-heavy protocols like consensus for a number of reasons. One is because writes don't or can't be merged. Operations as basic as simple assignment (x = 1;) can't be merged, so that's very real. Another is because readers can't tolerate weak consistency, because their business logic needs to make decisions at a particular point in time.
You're right that the thinking behind CRDTs (and CALM) is useful in reasoning through determinism in this context. The determinism problem, though, is easier than the general monotonicity problem, because only determinism is required and not associativity or commutativity.
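To make the mergeable vs. non-mergeable distinction concrete, here's a minimal Go sketch of a grow-only counter (G-Counter), about the simplest CRDT there is (the names are illustrative, not from any particular library). The point is that merge is commutative, associative and idempotent, while a bare assignment like x = 1 has no principled merge at all:

    // GCounter tracks, per node, how many increments that node has performed.
    // Merging two replicas takes the element-wise maximum, which is
    // commutative, associative and idempotent -- so replicas can exchange
    // state in any order, any number of times, and still converge.
    type GCounter map[string]uint64

    func (c GCounter) Inc(node string) { c[node]++ }

    func (c GCounter) Merge(other GCounter) {
        for node, n := range other {
            if n > c[node] {
                c[node] = n
            }
        }
    }

    func (c GCounter) Value() uint64 {
        var total uint64
        for _, n := range c {
            total += n
        }
        return total
    }

    // By contrast, there is no such Merge for plain assignment: if replica A
    // did x = 1 and replica B did x = 2, some coordination (or an arbitrary
    // tie-break like last-writer-wins) has to pick a winner.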
Except that in the Multi-Paxos family of consensus and replication protocols, Raft is probably the least efficient. It takes this to the extreme and forgoes several simple optimizations because of design decisions taken in the name of understandability and simplicity.
I would suggest that VR is at least as understandable as Raft, while also being more efficient, with the lowest latency for leader election in the common case. There's no need for Raft's carefully tuned random timeouts because split votes are not possible with VR. VR also lets you do some really cool things like warm up the next leader, or prioritize synchronous replication under Flexible Paxos to the next leader with asynchronous replication amongst the remaining followers, or decide who you want the next leader to be to optimize for geographical placement, etc.
VR's view change protocol for leader election is also entirely in-memory so it's far more fault tolerant compared to Raft if you have a storage fault model and not only a network fault model. For example, Raft requires strong persistence guarantees for correctness of the leader election phase. If anything goes wrong with your disk (a ghost write or misdirected write) then Raft's implementation as written would be unsafe. Raft also has liveness issues if all nodes have even a single block failure at any point in their log.
If you're going to reach for a consensus algorithm, there are a lot of good reasons to do a survey of the literature first. There's a whole spectrum to choose from.
> VR's view change protocol for leader election is also entirely in-memory so it's far more fault tolerant compared to Raft if you have a storage fault model and not only a network fault model.
Only in the fail-stop model, right? Or does this property extend to other models (like omissions)?
>> VR's view change protocol for leader election is also entirely in-memory so it's far more fault tolerant compared to Raft if you have a storage fault model and not only a network fault model.
> Only in the fail-stop model, right? Or does this property extend to other models (like omissions)?
You're far more knowledgeable about the domain... but I was thinking that with VRR's completely in-memory view change and replication protocol, this would then extend past the fail-stop model to include byzantine disk storage faults (misdirected reads/writes, corruption, latent sector errors) since VRR requires no guarantees from the disk in order for the consensus protocol to be correct, whereas Raft does.
To me this is just another reason that makes VRR such a fantastic protocol.
To be fair, I guess we could say that Raft assumes a fail-stop disk model, but then again Raft is supposed to be a practical implementation and disks are not fail-stop in reality. I'm sure you're also well familiar with WISC's Protocol-Aware Recovery for Consensus-Based Storage: https://www.usenix.org/system/files/conference/fast18/fast18...
Beyond that, I've also been wondering about this from the perspective of taking VRR's in-memory view change for the leader election phase, but then combining this with disk-based persistence for the replication phase, along with CTRL from WISC.
I would love to hear your thoughts on this. Did I understand you correctly regarding what you meant with extending "to other models (like omissions)"?
Would be great to catch up next time you are in the Cape!
We will not. And you are hinting at the reason yourself.
[...] CRDTs, which are trying to make state changes commutative.
But this is slightly wrong: when you are using CRDTs you are not trying to make state changes commutative, you are limiting the allowed state changes to commutative ones. Some state changes are inherently non-commutative, and if your system requires those, then you cannot build it with CRDTs.
I tried implementing Raft, which is supposed to be the most understandable out of all available consensus algorithms, but wasn't able to make it work 100%.
I came close but in the end gave up on resolving some concurrency issues :/
I'm not sure if you are aware of it, but check out MIT 6.824 on OpenCourseWare (pick 2020, not 2021).
In Lab 2 they provide you with a frame for Raft with RPC, etc. already in place, leaving only the protocol for you. The lab is also split into three logical parts: leader election, append entries, persistence. Highly recommend.
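If it helps to see the shape of the leader election part, the RequestVote RPC from Figure 2 of the Raft paper looks roughly like this in Go (a sketch only; the lab's skeleton has its own types and conventions):

    // Arguments and results of the RequestVote RPC, per Figure 2 of the Raft paper.
    type RequestVoteArgs struct {
        Term         int // candidate's term
        CandidateId  int // candidate requesting the vote
        LastLogIndex int // index of candidate's last log entry
        LastLogTerm  int // term of candidate's last log entry
    }

    type RequestVoteReply struct {
        Term        int  // current term, for the candidate to update itself
        VoteGranted bool // true means the candidate received this vote
    }

    // A follower grants the vote only if the candidate's term is current and
    // the candidate's log is at least as up-to-date as its own.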
Chain Replication (and friends) are vastly simpler than Paxos (and friends) in many ways, but do have the same requirement for determinism. That's because chain replicated systems typically need to be confluent (see https://pathelland.substack.com/p/dont-get-stuck-in-the-con-...), which means that all the replicas need to have the same value in them when replication is done. Conceptually simpler, for sure, but many of the same challenges remain.
Heh... knowing for certain that you've gotten rid of all the concurrency issues is hard. Knowing that a specific code base has concurrency issues can be very easy, by virtue of them clearly exhibiting bugs. I haven't tried to implement RAFT myself, but I've certainly had code bases that clearly had concurrency bugs in them, even if I didn't know exactly what they were. :)
The author mentions "Virtual Synchrony" in passing. It turns out it is used in some highly critical systems via the ISIS/VSync framework from Cornell:
The "CORBA Fault Tolerant Objects standard" is based on the virtual synchrony model. Virtual synchrony was also used in developing the New York Stock Exchange fault-tolerance architecture, the French Air Traffic Control System, the US Navy AEGIS system, IBM's Business Process replication architecture for WebSphere and Microsoft's Windows Clustering architecture for Windows Longhorn enterprise servers.
* As developers, we're used to thinking of services in terms of streaming TCP connections and RPCs. You send a request on a connection and get a response back on the same connection. However, distributed consensus algorithms (or at least their authors) like to think and write in terms of messages and message passing and the classic Actor pattern. For example, it's not uncommon for a consensus client to send a message to a leader but then get the ACK back from another server, a subsequently elected leader. That's at odds with the networking protocol we're used to. It's not always easy to shoehorn a consensus protocol onto a system that already has a TCP oriented design. Embrace message passing and multi-path routing.
* We're familiar with Jepsen. The network fault model is front of mind (dropped/delayed/replayed/corrupted messages, partitions, asymmetrical network topologies and performance). We're far less wary of the storage fault model: latent sector errors (EIO), silent bit rot, misdirected writes (writes written by firmware to the wrong sector), corrupt file system metadata (wrong journal file size, disappearing critical files), kernel page cache coherency issues (marking dirty pages clean after an fsync EIO), confusing journal corruption for a torn write after power failure.
* We underestimate the sheer bulk of the code we need to write to implement all the components of a practical consensus protocol correctly (a consensus replica to run the protocol at each node, a write-ahead journal for storage, a message bus for in-process or remote messaging, a state machine for service up-calls). The consensus protocol invariants are tough but limited, but the amount of code required for all these components is brutal and there are so many pitfalls along the way. For example, when you read from your write-ahead journal at startup and you find a checksum mismatch, do you assume this is because of a torn write after power failure, as ZooKeeper and LogCabin do? What if it was actually just bit rot halfway through your log? How would you change your write-ahead journal to disentangle these?
* We tend to think of the correctness of any given consensus function as binary, and fail to appreciate the broad spectrum of safety requirements across the specific components of the consensus algorithm. In other words, we don't always take fully to heart that some consensus messages are more critical than others. For example, we might resend an ACK to the leader if we detect (via op number) that we've already logged the prepare for that op number. However, most implementations I've seen neglect to assert and double-check that we really do have exactly what the leader is asking us to persist before we ACK. It's a simple verification check to compare checksums before skipping the journal write and acking the duplicate prepare, and yet we don't.
* Another example: when we count messages from peers to establish quorum during leader election, we might count these messages without applying all the assertions we can think of to them. For example, are we asserting that all the messages we're counting are actually for the same leader election term? Or did we simply assume that we reset the array of messages being counted during the appropriate state transition sometime back in the past? The former is a much stronger guarantee, because it keeps you from double-counting stale leader election messages from past election phases, especially if these were successive (e.g. multiple rounds of elections because of split votes with no successful outcome). We should rather assume that the array we store these messages in, and that we're counting, could contain anything, and then assert that it contains exactly what we expect (a rough sketch of this check follows at the end of this comment).
* Our intuition around fault tolerance might suggest that local storage faults cannot propagate to destroy global consensus. Yet they do (https://www.youtube.com/watch?v=fDY6Wi0GcPs). We need to be really careful how we repair local faults so that we do so correctly in the context of the global consensus protocol.
* Finally, I think what also really helps is to have a completely deterministic consensus protocol Replica abstraction that you initialize with an abstract Message Bus, Journal and State Machine instance. This Replica instance can send messages to in-process or remote Replica instances, and has on_message() handlers for the various protocol messages that either change state and/or send messages but can never fail (i.e. no error union return type) because that amplifies the dimensionality of the code paths. For timeouts, don't use the system clock because it's not deterministic. Instead, use a Timeout abstraction that you step through by calling tick() on the Replica. With these components in place, you can build an automated random fuzzing test to simulate your distributed network and local storage fault models and test invariants along the way, outputting a deterministic seed to reproduce any random failures easily.
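For what it's worth, here's a rough Go sketch of that Replica shape (all names are illustrative, not any particular implementation's API): the replica sees only an abstract message bus, journal and state machine, handlers never return errors, and time advances only via tick(), so a simulator seeded with a PRNG can replay any failing run from its seed.

    // Message is an opaque protocol message; Kind would be prepare, prepare_ok,
    // start_view_change, etc.
    type Message struct {
        From, To int
        Kind     string
        Body     []byte
    }

    // The replica's entire view of the world sits behind these interfaces,
    // so tests can swap in a simulated network and simulated (faulty) storage.
    type MessageBus interface{ Send(m Message) }
    type Journal interface{ Append(entry []byte) }
    type StateMachine interface{ Apply(entry []byte) }

    type Replica struct {
        id      int
        bus     MessageBus
        journal Journal
        sm      StateMachine
        ticks   uint64 // logical time; no reads of the system clock anywhere
    }

    // OnMessage changes state and/or sends messages but never returns an error:
    // fault injection happens in the simulated bus and journal, not via error
    // paths that would multiply the code paths to test.
    func (r *Replica) OnMessage(m Message) {
        switch m.Kind {
        case "prepare":
            r.journal.Append(m.Body)
            r.bus.Send(Message{From: r.id, To: m.From, Kind: "prepare_ok"})
            // ... remaining protocol messages elided ...
        }
    }

    // Tick advances logical time; timeouts fire on tick counts, so the same
    // seed always produces the same interleaving.
    func (r *Replica) Tick() { r.ticks++ }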
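And, going back to the quorum-counting point above, this is the kind of check I mean (hypothetical names, just to show the shape): treat the vote array as untrusted and assert, per message, that it belongs to the election currently underway, rather than trusting that the array was reset during some earlier state transition.

    type Vote struct {
        View    uint64 // the election term/view this vote was cast in
        Replica int    // who cast it
    }

    // countQuorum treats the votes slice as untrusted: every entry must be for
    // the view we are electing right now, otherwise we crash loudly instead of
    // silently double-counting stale votes from a previous (failed) election.
    func countQuorum(votes []Vote, currentView uint64, quorum int) bool {
        count := 0
        for _, v := range votes {
            if v.View != currentView {
                panic("stale vote counted toward quorum")
            }
            count++
        }
        return count >= quorum
    }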
"Introduced in May 1988 in Brian Oki's PhD thesis, Viewstamped Replication predates the first publication of Paxos by about a year. If you're looking for intrigue you may be disappointed: both Lamport and Liskov claim the inventions were independent."
VR was of course developed independently by Barbara Liskov and Brian Oki and then revisited by Barbara Liskov with James Cowling.
However, it is common practice and perfectly acceptable to refer to VR as a variant of "Multi-Paxos" because that's actually EXACTLY what it is in theory: the protocol maps one-to-one, cf. Dan Ports (he explains it nicely here, see slide 46 if you want his bumper sticker version): https://courses.cs.washington.edu/courses/csep552/16wi/slide...
The more you understand VR and Multi-Paxos the more you will see that this is true.
In fact, you may be surprised/disappointed at this, but Raft is also a variant of Multi-Paxos very similar to VR (cf. Heidi Howard https://groups.google.com/g/raft-dev/c/cBNLTZT2q8o), except with tighter restrictions on leader election that make it less efficient than VR, which, coincidentally, is why we chose VR over Raft.
By the way, that's a fantastic post by Marc Brooker and was part (along with references by Martin Thompson and Heidi Howard) of what made us pay more attention to VR in the first place.
Not surprised or disappointed, really. My take on RAFT is that it's basically Paxos for elections and a more relaxed regime in between.
But Brooker's post (in my OP) actually addresses the VR vs Paxos distinction:
It's easy to believe that these two protocols are, in fact, the same. That doesn't appear to be the case. A new paper by van Renesse et. al., titled Vive La Difference: Paxos vs. Viewstamped Replication vs. Zab, looks at Paxos and VR through the lenses of refinement and abstraction, and finds they are not exactly equivalent due to design decisions in the way they refine a model the paper calls Multi-Consensus. One of the key differences is active (Paxos) vs. passive (VR) replication: "Passive vs. Active Replication: In active replication, at least f + 1 replicas each must execute operations. In passive replication, only the sequencer executes operations, but it has to propagate state updates to the backups."
Which reminds me of ZAB (of ZooKeeper fame), another protocol languishing in relative obscurity.
Anyway, thanks for your input. I have this thing for underdogs and this VR vs Paxos thing is a very minor cause, and I guess you triggered me there. /g
You're right that Raft is Paxos for the leader election phase, in the same way that VR is Paxos for the view change phase, or that Paxos is basically VR's view change but for deciding a value.
But a better way of saying this would be that Raft is Multi-Paxos (https://news.ycombinator.com/item?id=23123701), because Raft is much more than Paxos (in the same way that VR is more than Paxos because it not only decides on a value/leader but is also a protocol for replication). I think this is where our misunderstanding came in.
Yes, I've read the linked "Vive La Difference" paper. As the excerpt makes clear, one of the key differences is active replication (Paxos: with multiple concurrent processes that want to decide on a value) vs passive replication (Multi-Paxos variants: VR, Raft and ZAB, all protocols with a single leader elected for a given term/view during which multiple rounds of replication take place).
The name Multi-Paxos just means that the first round of Paxos for leader election is reused for multiple rounds of replication, driven by a single leader instead of competing processes always using the minimum two-phase Paxos.
"I have this thing for underdogs"
Yes, me too, and that's why I tried to show that VR is no less than Multi-Paxos in my original comment, and that Raft is not newer than Multi-Paxos in my follow-up. It would be great for VR to receive the recognition it deserves.
Well, in that case we mildly disagree. My take is this:
That functionally M-Paxos and Raft are equivalent does not make them equivalent. RAFT uses a protocol that looks awfully like Paxos for election of a single leader - so it has nothing to do with multi-Paxos (as I understand it). But that L.E. phase + the interim regime gives you functional equivalence to Multi-Paxos.
"RAFT uses a protocol that looks awfully like Paxos for election of a single leader"
Agreed.
"so it has nothing to do with multi-Paxos (as I understand it)"
Except that Raft is not only a protocol for single leader election, it's also a replication protocol (see the "AppendEntries" message), and that's why it is Multi-Paxos. Multi-Paxos is just the category or classification for a family of consensus protocols that use the strategy of single leader election and replication with terms/views/epochs (the "interim regime" you refer to) for strict serializability. This is the passive replication leader/follower primary/backup strategy.
Outside of Multi-Paxos, there are also consensus protocols that achieve strict serializability with Paxos but without electing a single stable leader, e.g. Fast Paxos. These exploit low latency between the client and all replicas, but this is not always available, or may suffer from tail latency issues, hence implementations such as Raft use the Multi-Paxos strategy of a stable leader, which may be better connected to the rest of the cluster than the client.
As an implementation, depending on how you zoom, Raft is very different to VR and ZAB, but it's still in the same Multi-Paxos class since it reuses a single stable leader derived from one instance of Paxos for "multiple" rounds or instances of replication, hence "Multi-" "Paxos".
At least this is how I understand it from Heidi Howard and others who seem to share this "view".
> I guess my issue here is calling all these "paxos" when 2 (VR and VS) predate Paxos.
Definitely. All VR needs now is a paper called "VR Made Famous" because it's already been made simple, easy, fast and understandable! And, after all, it was first.
Thanks for the reference to Egalitarian Paxos, that's a great example of active replication in the Paxos (and not Multi-Paxos) sense.
Nine out of ten scientists ardently believe reaching consensus is harder than it looks. The remainder are unpersuaded and firmly believe that reaching consensus is easy.
One of the things about Raft that bugs me is that it appears that the candidate counts the votes itself. And the leader is the only machine that approves transactions. If our favorite protocol is susceptible first to coups and then to despots, then we have a long, long way to go to model humans.
All problems stem from the misconception that consensus is necessary. When you realize that it's not, everything becomes much easier:
- Democracy vs Contractualism
- Bitcoin vs Holochain
- Censorship vs Filters
The same way prediction markets don't expect consensus on the interpreted state of the future, description markets shouldn't expect consensus on the interpreted state of the past.
Understand that objective reality is interpreted through a subjective lens.
Allow disagreements. Allow contradicting information to coexist. Allow people to be wrong.
We don't need to share the same perspective of the world. Embrace the filter bubble.
I don’t know whether you’re just making a statement or you weren’t aware, but this is about consensus in the technical sense, where it is often necessary.
Social consensus is a whole different kettle of fish.
Prediction markets don’t need consensus, because they can be wrong. We have certain correctness expectations for other systems, and for those, reconciliation/fixing-it-after-the-fact can be more pain and effort than it’s worth.
Consider a global-scale e-commerce application. You may have a handful of operations that absolutely need to pull consensus across the entire system (e.g. adding/removing nodes, creating new accounts, authenticating and creating sessions).
Everything else could be left to a per-node basis without compromising the integrity of sensitive transactions.
If you look at this from a physics perspective, I think it gets really simple. You want to minimize your user request latency. More consensus = more participants = more latency.
If the grain of your consensus mechanism is your datastore and everything in your business requires pulling a majority of votes over the cluster, you cannot leverage this philosophy. I think the biggest mistake out there right now is trying to abstract this stuff under the datastore and not allowing the business logic to access it directly. Whether or not something needs to pull consensus should be a business question, not a technical question.
> Whether or not something needs to pull consensus should be a business question, not a technical question
By its nature it’s a technical question with some crossover into the business realm. The problem of consensus is technical, hard, and full of strange hard-to-reason-about occurrences and second/third order effects. Whether some business thing needs those guarantees is still a technical decision.
> The biggest mistake out there right now is trying to abstract this stuff under the datastore and not allowing the business logic to access it directly
I think the issue with this is that there’s not a lot to be gained by allowing it: it’s incredibly complex and very easy to get subtly wrong, and I’ve seen very competent and very average devs get stuck on things that are comparatively far easier. Consensus is really hard.
A similarity might be when NoSQL databases were all the rage and everyone was throwing away relational databases and ACID, only to recreate them at app level from first principles, but worse. Putting it in a db doesn’t solve the problem for all cases of course, but I can understand why it’d solve a lot of them.