> Using S3 as an ad-hoc queue is a cheaper solution, which should throw some red flags.
Interesting. Can you expand on this? How do you ensure that only one worker takes a message from S3? Or do you only use this setup when you have only one worker?
You encode messages with a timestamp and origin (e.g. 1558945545-1) and write directly to S3 into a (create-if-not-exists) folder for a specific window (say, a minute). As agents keep writing, you end up with a new folder each minute. Each window then holds an ordered set of messages, with the sort order determined by the naming encoding.
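A minimal sketch of how I read that scheme, assuming boto3 and a hypothetical bucket name; the exact key layout (a per-minute prefix plus a "timestamp-origin" object name) is my interpretation of the encoding described above, not a spec:

```python
import time

import boto3

s3 = boto3.client("s3")
BUCKET = "my-adhoc-queue"  # hypothetical bucket name


def publish(agent_id: int, body: bytes) -> str:
    """Write one message under the current minute's window prefix."""
    now = int(time.time())
    window = now - (now % 60)           # one "folder" per minute window
    key = f"{window}/{now}-{agent_id}"  # e.g. "1558945500/1558945545-1"
    s3.put_object(Bucket=BUCKET, Key=key, Body=body)
    return key


def read_window(window: int) -> list[str]:
    """List a closed window's messages in arrival order."""
    # ListObjectsV2 returns keys in lexicographic order, which matches
    # chronological order here because epoch seconds are fixed-width.
    resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=f"{window}/")
    return [obj["Key"] for obj in resp.get("Contents", [])]
```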
You reminded me of a comment on the Dropbox announcement in 2007, saying you could do it "yourself quite trivially by getting an FTP account, mounting it locally with curlftpfs, and then using SVN or CVS on the mounted filesystem".
Which gets you one of the basic features of SQS, but not the entire rest of the implementation. It's also significantly more work than just setting up an SQS queue.
I guess if your engineering time to implement this, plus all the features on top of it that you might need from SQS, plus future maintenance of the custom solution, costs less than SQS would, and your engineering team has no other outstanding work it should be doing instead, then this is a viable cost optimization strategy.
But that's a whole lot of ifs, and most of the customers I've worked with are far better served just using SQS.
S3 went down twice in five years. Since we're transferring files, you just push everything into the next window. The retry is trivial from the agent and is accounted for in the consumer.
I wasn’t talking about the reliability of S3, but of your own systems.
Say the outage results in a few million messages that need to be retried. Some subset of those few million will never succeed (i.e. they are "poison pills"). At the same time, new messages are arriving.
In your system, how do you maintain QoS for incoming messages, allow the few million retries to resolve, and prevent the poison pills from blocking the queue? How do you implement exponential backoff, which is the standard approach for this?
SQS gives you simple yet powerful primitives, such as the visibility timeout, that address this scenario in a straightforward manner.
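For illustration, a minimal sketch of per-message exponential backoff built on the visibility timeout, assuming boto3; the queue URL and handler are hypothetical, and poison pills would normally be shunted to a dead-letter queue via a redrive policy configured on the queue itself (not shown):

```python
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/jobs"  # hypothetical


def process(body: str) -> None:
    """Application-specific handler (stub)."""


def consume() -> None:
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=10,
        AttributeNames=["ApproximateReceiveCount"],
        WaitTimeSeconds=20,  # long polling
    )
    for msg in resp.get("Messages", []):
        try:
            process(msg["Body"])
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
        except Exception:
            # Exponential backoff: hide the message for 2^n seconds (capped
            # at SQS's 12-hour maximum), where n is the delivery count.
            n = int(msg["Attributes"]["ApproximateReceiveCount"])
            sqs.change_message_visibility(
                QueueUrl=QUEUE_URL,
                ReceiptHandle=msg["ReceiptHandle"],
                VisibilityTimeout=min(2 ** n, 43200),
            )
```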
Interesting; looks like DirectoryQueue uses directories, rather than file locks (man 2 flock), to lock the queue messages. This might actually work, since mkdir returns an error if you attempt to create a directory that already exists. The implementation seems to handle most of the obvious failure cases, or at least tries to.
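For concreteness, a small sketch of the mkdir-as-lock trick; the paths are illustrative, not DirectoryQueue's actual layout:

```python
import os


def try_claim(message_dir: str) -> bool:
    """Claim a message by creating its lock directory; atomic on POSIX."""
    try:
        os.mkdir(os.path.join(message_dir, "lock"))
        return True   # we won the race
    except FileExistsError:
        return False  # another worker holds the message


def release(message_dir: str) -> None:
    """Drop the claim so the message can be retried or deleted."""
    os.rmdir(os.path.join(message_dir, "lock"))
```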
So how does one lock a message in S3? Does S3 have a "create-if-not-exists-or-error" operation? I'm still having difficulty understanding how the proposed system avoids race conditions.
Back in 2000, I worked with a guy who built an entire message queue product on the SMTP protocol, with an implementation that was in turn built on top of lex and yacc.