Would it be correct to say that these client libraries provide the functionality (e.g. easy transactions, exactly-once execution, recovery), whereas your cloud offering solves the scaling / performance issues you'd hit trying to do this with a regular Postgres-compatible DB?
I do a lot of consulting on Kafka-related architectures and really like the concept of DBOS.
Customers tend to hit a wall of complexity when they want to actually use their streaming data (as distinct from simply piping it into a DWH). Being able to delegate a lot of that complexity to the lower layers is very appealing.
Would DBOS align with / complement these types of Kafka streaming pipelines or are you addressing a different need?
Yeah exactly! The Kafka use case is a great one--specifically writing consumers that perform real-world processing on events from Kafka.
In fact, one of our first customers used DBOS to build an event processing pipeline from Kafka. They hit the "wall of complexity" you described trying to persist events from Kafka to multiple backend data stores and services. DBOS made it much simpler because they could just write (and serverlessly deploy) durable workflows that ran exactly-once per Kafka message.
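To make "exactly-once per Kafka message" concrete, here's a minimal sketch of the idempotency pattern involved, in plain Python with SQLite standing in for the Postgres bookkeeping a system like DBOS would do under the hood. All names here (`processed`, `handle_message`, etc.) are illustrative, not DBOS's actual API:

```python
import sqlite3

def init_db(conn):
    # Track which (topic, partition, offset) triples have been handled.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS processed "
        "(topic TEXT, partition INTEGER, offset INTEGER, "
        "PRIMARY KEY (topic, partition, offset))"
    )
    conn.execute("CREATE TABLE IF NOT EXISTS results (value TEXT)")

def handle_message(conn, topic, partition, offset, value):
    """Process a Kafka message exactly once, even if redelivered."""
    with conn:  # one transaction: dedupe record + side effect commit together
        try:
            conn.execute(
                "INSERT INTO processed VALUES (?, ?, ?)",
                (topic, partition, offset),
            )
        except sqlite3.IntegrityError:
            return False  # duplicate delivery: already processed, skip
        conn.execute("INSERT INTO results VALUES (?)", (value.upper(),))
        return True

conn = sqlite3.connect(":memory:")
init_db(conn)
handle_message(conn, "orders", 0, 42, "hello")   # processed
handle_message(conn, "orders", 0, 42, "hello")   # redelivery, ignored
print(conn.execute("SELECT COUNT(*) FROM results").fetchone()[0])  # 1
```

The key point is that the dedupe check and the processing side effect commit atomically, so a crash between them can't produce a duplicate or a lost message. Doing this by hand across multiple backend stores is exactly the complexity that a durable-workflow layer takes off your plate.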