
With a realistic queue, that is, one that isn't 100% reliable, you can only get "at most once" or "at least once" delivery anyway; "exactly once" can't be guaranteed.

So the consumer should be prepared to handle duplicate messages anyway, e.g. by deduplicating within a reasonable window and/or by making the operations idempotent (rough sketch below).
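
A minimal consumer-side sketch of the dedup-window idea. The message "id" field and the handle() callback are assumptions for illustration, not any particular queue's API:

    import time

    DEDUP_WINDOW_SECONDS = 300
    _seen = {}  # message id -> time first processed (in-memory, per consumer)

    def process(message, handle):
        now = time.time()
        # Evict ids older than the window so memory stays bounded.
        for msg_id, seen_at in list(_seen.items()):
            if now - seen_at > DEDUP_WINDOW_SECONDS:
                del _seen[msg_id]
        if message["id"] in _seen:
            return  # duplicate delivery inside the window, drop it
        handle(message)  # the actual work; ideally idempotent as well
        _seen[message["id"]] = now

An in-memory dict only protects a single consumer process; redeliveries to a different consumer or after a restart still get through, which is why idempotent operations matter on top of the window.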




Yes, it depends on the workload. Idempotency is almost always a good idea, but sometimes the operation itself is very expensive in time, resources, and/or money. I've also seen people try to update the message when writing it back (with checkpoint information, etc.) for long-running processes. A slew of issues, including at-least-once delivery, can cause workflow bifurcation. Deduplication via FIFO _can_ help mitigate this, but it has a time window that needs to be accounted for. Once you start managing your own deduplication, I'd say you've moved past trying to go databaseless.
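
To illustrate that last point, a tiny sketch of self-managed deduplication: once you track processed ids with an expiry window yourself, you're effectively running a small database. SQLite, the table name, and the window size here are all made up for the example (SQS FIFO's deduplication interval is similarly 5 minutes):

    import sqlite3, time

    WINDOW = 300  # seconds

    conn = sqlite3.connect("dedup.db")
    conn.execute("CREATE TABLE IF NOT EXISTS processed (id TEXT PRIMARY KEY, at REAL)")
    conn.commit()

    def already_processed(msg_id):
        now = time.time()
        # Expire ids that have aged out of the window.
        conn.execute("DELETE FROM processed WHERE at < ?", (now - WINDOW,))
        try:
            conn.execute("INSERT INTO processed (id, at) VALUES (?, ?)", (msg_id, now))
            conn.commit()
            return False  # first time this id has been seen inside the window
        except sqlite3.IntegrityError:
            conn.rollback()
            return True   # duplicate within the window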



