To be clear, I'm interpreting "APIs" as "a method", so adding a new method is equivalent to adding a new service. If you mean that methods never increase in number, only in flexibility, then yea - I think I follow, this all makes sense. New stuff is then truly new and disjoint from everything else, and there's no migration to worry about.
---
Also, since you're mentioning "replaying the stream of commands", I take it "consistent" here is strictly bound to "... as of the point in time it has read to, from API X"? Then yea, switching APIs / methods is fine - you just delay the readers. That's event sourcing in a nutshell, and the guarantees it gives between any two "services" are a genuine benefit; it's a compelling design.
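To pin down what I mean by "consistent as of the point read": a reader's state is a pure fold over the prefix of the log it has consumed, so consistency is always relative to a stream position. A minimal sketch (event names and amounts are made up for illustration):

```python
# Minimal event-sourcing fold: a reader's state is fully determined by
# the prefix of the log it has consumed, so "consistent" always means
# "consistent as of a position in the stream". Event shapes here are
# hypothetical, just to illustrate the fold.
events = [
    {"seq": 1, "type": "deposit", "amount": 100},
    {"seq": 2, "type": "withdraw", "amount": 30},
    {"seq": 3, "type": "deposit", "amount": 5},
]

def replay(events, up_to_seq):
    """Fold events up to (and including) up_to_seq into a balance."""
    balance = 0
    for e in events:
        if e["seq"] > up_to_seq:
            break
        balance += e["amount"] if e["type"] == "deposit" else -e["amount"]
    return balance

print(replay(events, 2))  # a reader that has read to seq 2 sees 70
print(replay(events, 3))  # a reader caught up to seq 3 sees 75
```

A delayed reader isn't wrong, just behind: replaying to any earlier position gives a state that was correct at that point.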
I was interpreting it more in a system-wide sense, with a large number of services, which is where I don't have a good feel for event sourcing: consumers of C and [others] are not "up to date" with what A has done until they've read all data derived from all sources up to the same minimum A-timestamp. So without something like a vector clock it's generally unsafe to consume from both C and Q until they're equally caught up, because C may be missing things from A that Q has already handled. Building something that stays correct and useful in the face of this seems extremely difficult or constrained, unless you accept unbounded delays (in practice: likely weeks of dev time in some cases).
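One mitigation I've seen described (a sketch under my own framing, not something from your comment - the "watermark" terminology and offsets are mine) is to have each derived stream record the highest A-position it has fully incorporated, and only serve joint reads up to the minimum of those positions:

```python
# Each derived stream (C, Q, ...) records the highest upstream A-offset
# it has fully processed. A joint read across several streams is only
# safe up to the minimum of those watermarks: anything past that point
# may reflect A-events that one stream has applied and another hasn't.
watermarks = {"C": 1041, "Q": 1037}  # highest A-offset each has applied

def safe_read_offset(watermarks, streams):
    """Highest A-offset at which all requested streams agree on A's history."""
    return min(watermarks[s] for s in streams)

print(safe_read_offset(watermarks, ["C", "Q"]))  # → 1037
```

This bounds the staleness a consumer sees to the lag of the slowest derived stream, which is exactly the "you just delay the readers" trade-off - it avoids the inconsistency but not the unbounded-delay problem.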
---
And last but not least: CRDTs solve pretty much all of this without synchronization of any kind, yea. Are they still a pain to design? Or have we developed relatively-repeatable strategies nowadays? I haven't kept up much here, sadly.
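For what it's worth on the "repeatable strategies" question, the classic building-block example is the grow-only counter: each replica increments only its own slot, and merge is an element-wise max, which makes merging commutative, associative, and idempotent. A minimal sketch of that standard construction:

```python
# G-Counter: a grow-only counter CRDT. Each replica increments only its
# own slot; merge takes the element-wise max. Because merge is
# commutative, associative, and idempotent, replicas converge no matter
# what order updates and merges arrive in, with no coordination.
def increment(counter, replica_id, n=1):
    c = dict(counter)
    c[replica_id] = c.get(replica_id, 0) + n
    return c

def merge(a, b):
    return {k: max(a.get(k, 0), b.get(k, 0)) for k in a.keys() | b.keys()}

def value(counter):
    return sum(counter.values())

a = increment({}, "replica-a", 3)
b = increment({}, "replica-b", 2)
print(value(merge(a, b)))          # → 5
print(merge(a, b) == merge(b, a))  # → True (order doesn't matter)
```

Most practical CRDTs (sets, registers, maps) are assembled from a handful of such primitives, which is what makes the design feel more repeatable than it used to.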
---
I'll probably have to reread all of this a couple of times to make sure I'm not totally off base somewhere, sorry! Yours was a rather dense comment to digest, and I'm not sure I'm following correctly. Event sourcing has interested me for quite a while, but I've never really developed a feel for how to build large, multi-developer(-team) systems out of it, and it sounds like you might have an idea.