Hacker News

Oh wow, by "work on" you mean "am the core maintainer of". Thank you for replying. :)

The main thing that gave me pause was the level of effort involved in taking the data from Kafka and landing it in Snowflake, especially around handling Postgres schema changes safely. I also have no experience with Kafka, so I'd be out of my depth pretty quickly on a critical part of the architecture. He also stressed the need to build quality checks into the Kafka-to-Snowflake code, but those details were a bit sparse (if I recall correctly).
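For what it's worth, a minimal sketch of what such a per-record quality check might look like, applied to Debezium-style change events before they land in the warehouse. The envelope fields (`payload`, `op`, `before`, `after`) follow Debezium's event format; the table schema and function name are purely illustrative:

```python
# Hypothetical quality gate for Debezium-style change events, run in the
# consumer before writing a record to Snowflake. Assumes the standard
# Debezium envelope: payload.op in {"c","u","d","r"}, with "before"/"after"
# row images. REQUIRED_COLUMNS is an assumed example schema.

REQUIRED_COLUMNS = {"id", "email"}  # illustrative: expected columns of a "users" table

def passes_quality_checks(event: dict) -> bool:
    payload = event.get("payload", {})
    op = payload.get("op")
    if op not in {"c", "u", "d", "r"}:  # create / update / delete / snapshot read
        return False
    if op == "d":
        # Deletes carry no "after" image; require a "before" image instead.
        return payload.get("before") is not None
    after = payload.get("after")
    if after is None:
        return False
    # All required columns must be present in the new row image.
    return REQUIRED_COLUMNS <= after.keys()
```

Records that fail the gate could be routed to a dead-letter topic rather than dropped, so bad events stay inspectable.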

Note: all of the above are probably outside the scope of Debezium. :)

Note 2: your article [1] on using CDC to build audit logs w/ a "transactions" table blew my mind. Once I listened to your Data Engineering Podcast interview [2], I knew some implementation of "event sourcing lite w/ a CRUD app" had to be possible, so I was excited to see you had already laid it out.

1) https://debezium.io/blog/2019/10/01/audit-logs-with-change-d...

2) https://www.dataengineeringpodcast.com/debezium-change-data-...
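As I understand the pattern from [1]: the app writes who/why metadata to a transactions table in the same DB transaction as the data change, CDC captures both streams, and a downstream consumer joins them on the transaction id. A rough sketch of that join (all names here are illustrative, not Debezium's API):

```python
# Sketch of the "transactions table" audit-log idea: change events and
# transaction-metadata events arrive as two CDC streams; the consumer
# correlates them by transaction id to produce enriched audit records.
# Event shapes and function names are assumptions for illustration.

tx_metadata = {}  # tx_id -> {"user": ..., "use_case": ...}

def on_tx_metadata(tx_id, meta):
    """Handle a change event from the CDC'd transactions table."""
    tx_metadata[tx_id] = meta

def enrich(change_event):
    """Attach the who/why metadata recorded for this event's transaction."""
    meta = tx_metadata.get(change_event["tx_id"], {})
    return {**change_event, "audit": meta}
```

In practice the join would live in a stream processor with state eviction, since transaction metadata only needs to be retained until the matching change events have been seen.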




Gotcha, yeah, there are indeed many things to consider when setting up end-to-end pipelines. Thanks for the nice feedback; so happy to hear those resources are useful for folks. As far as event sourcing is concerned, we've got another post [1] which might interest you, discussing how "true ES" compares to CDC, the pros and cons of either approach, etc.

[1] https://debezium.io/blog/2020/02/10/event-sourcing-vs-cdc/



