Interesting, what was it from that podcast that made you reconsider? Always eager to learn about opportunities for improving the experience of using Debezium.
Oh wow, by "work on" you mean "the core maintainer of". Thank you for replying. :)
The main thing that made me reconsider was the level of effort involved in taking the data from Kafka and landing it in Snowflake, especially around handling Postgres schema changes safely. I also have no experience with Kafka, so I'd be out of my depth pretty quickly for a critical part of the architecture. He also expressed the need for building quality checks into the Kafka-to-Snowflake code, but those details were a bit sparse (if I recall correctly).
Note: all of the above are probably outside the scope of Debezium. :)
Note 2: your article [1] on using CDC to build audit logs w/ a "transactions" table blew my mind. Once I listened to your Data Engineering Podcast interview [2], I knew some implementation of "event sourcing lite w/ a CRUD app" was possible, so I was excited to see you had already laid it out.
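(For anyone else reading along: the capture side of the pipeline discussed above is the part Debezium handles. A minimal Postgres connector config looks roughly like the sketch below; hostnames, credentials, and the table list are placeholders, and the downstream Kafka-to-Snowflake step is a separate concern.)

```json
{
  "name": "inventory-connector",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "tasks.max": "1",
    "plugin.name": "pgoutput",
    "database.hostname": "postgres.example.com",
    "database.port": "5432",
    "database.user": "debezium_user",
    "database.password": "********",
    "database.dbname": "inventory",
    "topic.prefix": "inventory",
    "table.include.list": "public.orders,public.customers"
  }
}
```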
Gotcha, yeah, there are indeed many things to consider when setting up end-to-end pipelines. Thanks for the nice feedback, so happy to hear those resources are useful for folks. As far as event sourcing is concerned, we have another post [1] which might be interesting to you, discussing how "true ES" compares to CDC, the pros and cons of each approach, etc.
Disclaimer: I work on Debezium