Edit - yep, the event producer manages and increments the vector clock.
---
I've not really used Kafka, so I can't comment on that. I did some work for a customer that involved multi-DC microservices with isolated databases (i.e. one DB per DC). We used event sourcing with vector clocks to do manual reconciliation of the databases, including during partitions. Reconciliation involved custom logic depending on the event type, so I'm not sure how a transport mechanism like Kafka would handle that.
Edit: to add on about Kafka, it guarantees a total ordering on messages within a partition, driven internally by monotonically increasing offsets (a simpler mechanism than vector clocks; there's no total order across partitions). This may not be suitable for all applications, but the throughput is high and it allows multiple subscribers to see a consistent ordering.
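To make the reconciliation idea concrete, here's a minimal vector-clock sketch: the producer bumps its own entry before emitting an event, and the comparison tells you whether two events are ordered or concurrent (concurrent is the case that needs the event-type-specific merge logic mentioned above). All names are illustrative, not from the actual system.

```python
def increment(clock, replica):
    """Producer bumps its own entry before emitting an event."""
    clock = dict(clock)  # copy so callers keep their old snapshot
    clock[replica] = clock.get(replica, 0) + 1
    return clock

def compare(a, b):
    """Return 'before', 'after', 'equal', or 'concurrent'."""
    keys = set(a) | set(b)
    less = any(a.get(k, 0) < b.get(k, 0) for k in keys)
    greater = any(a.get(k, 0) > b.get(k, 0) for k in keys)
    if less and greater:
        return "concurrent"  # needs event-type-specific reconciliation
    if less:
        return "before"
    if greater:
        return "after"
    return "equal"

# Two DCs update independently during a partition:
c1 = increment({}, "dc1")  # {'dc1': 1}
c2 = increment({}, "dc2")  # {'dc2': 1}
print(compare(c1, c2))                     # concurrent -> custom merge
print(compare(c1, increment(c1, "dc1")))   # before
```

The key point is that "concurrent" is a first-class outcome here, unlike a single per-partition offset, which forces every pair of events into a total order.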
---
A talk about the approach is here: https://youtu.be/hYVh8PbbeJw
I get to vector clocks at the end.