> It would also help to have compile-time guarantees about which nodes are running.
I haven't had a chance to try it yet, but in theory I like the idea of something like Zenoh Flow to describe the larger data flow graph. https://zenoh.io/blog/2023-02-10-zenoh-flow/
> But I always like to say, why spend 5ns calling a function when you could spend 1ms waiting for a context switch or 50ms for the next event loop to tick over?
I think your context switch timing is off by several orders of magnitude, but regardless, these things aren't one extreme or the other. For sharing data across threads, sending it to an external system, or logging and visualization, I still like pub/sub (and I've seen more than my share of horrible abuse), but it definitely shouldn't be treated as one-size-fits-all.
In the best case, yes, I've exaggerated wildly. But in a latency-sensitive system, waiting for a message to reach the front of the queue really could take that long. Different queue priorities help, but really, why not just do a function call? To me, anything that isn't just a function call should have to be justified. But I guess it's all design decisions.
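The gap is easy to demonstrate, even if the exact numbers vary by machine. Here's a minimal, hypothetical Rust sketch (standard library only; the `handle` function, iteration count, and channel setup are made up for illustration) that times a plain function call against handing the same work to another thread over a channel and waiting for the reply, as a crude stand-in for an intra-process pub/sub hop:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Instant;

// Trivially cheap "work" so the measurement is dominated by dispatch
// overhead rather than the work itself. (Made-up function for illustration.)
#[inline(never)]
fn handle(x: u64) -> u64 {
    x.wrapping_mul(2654435761)
}

fn main() {
    const ITERS: u64 = 100_000;

    // Path 1: plain function call in the same thread.
    let start = Instant::now();
    let mut acc = 0u64;
    for i in 0..ITERS {
        acc = acc.wrapping_add(handle(i));
    }
    let direct = start.elapsed();

    // Path 2: hand each request to a worker thread over a channel and
    // block waiting for the reply.
    let (req_tx, req_rx) = mpsc::channel::<u64>();
    let (resp_tx, resp_rx) = mpsc::channel::<u64>();
    let worker = thread::spawn(move || {
        for x in req_rx {
            resp_tx.send(handle(x)).unwrap();
        }
    });

    let start = Instant::now();
    let mut acc2 = 0u64;
    for i in 0..ITERS {
        req_tx.send(i).unwrap();
        acc2 = acc2.wrapping_add(resp_rx.recv().unwrap());
    }
    let queued = start.elapsed();
    drop(req_tx);
    worker.join().unwrap();

    assert_eq!(acc, acc2);
    println!(
        "direct: {:?}/call, channel round trip: {:?}/call",
        direct / ITERS as u32,
        queued / ITERS as u32
    );
}
```

On a typical desktop the direct call comes out at a few nanoseconds while the blocking channel round trip (which pays for thread wake-ups / context switches) lands in the microseconds. So no, not 1 ms, but the orders-of-magnitude difference between "just call the function" and "go through a queue" is real, which is the actual point of contention here.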