That's a fair viewpoint, but I think it's more nuanced than that.
The DB queue may be a buffer, but the callbacks aren't triggered until success/failure (and when caching, like I did for the SELECTs, that doesn't happen until then either). This is the same as the twisted/tornado DB connection pool approach to making DB access async, but it gets much better performance out of it by merging client-side logic; it's still the same semantics, and there's no risk of a request succeeding without the DB write.
I have put some sketches into my post to describe when things hit disk and when the client making the request continues. I hope this clarifies all the ACID questions.
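To make the ordering concrete, here's a minimal sketch of that pattern (all names hypothetical, sqlite3 standing in for MySQL, and a thread standing in for whatever worker mechanism is actually in use): callbacks fire only after the batch commits, so nothing reports success before the data is durable.

```python
import queue
import sqlite3
import threading

db_queue = queue.Queue()

def db_worker():
    # One worker owns the DB connection; sqlite3 here stands in for MySQL.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")
    running = True
    while running:
        # Block for the first item, then drain whatever else has queued up
        # so the whole batch goes through a single commit.
        batch = [db_queue.get()]
        while True:
            try:
                batch.append(db_queue.get_nowait())
            except queue.Empty:
                break
        work = [item for item in batch if item is not None]
        running = len(work) == len(batch)  # None is the shutdown sentinel
        for sql, params, _cb in work:
            conn.execute(sql, params)
        conn.commit()  # the data is committed here...
        for _sql, _params, cb in work:
            cb(True)   # ...and only now do callers learn of success

results = []
worker = threading.Thread(target=db_worker)
worker.start()
db_queue.put(("INSERT INTO kv VALUES (?, ?)", ("a", "1"), results.append))
db_queue.put(("INSERT INTO kv VALUES (?, ?)", ("b", "2"), results.append))
db_queue.put(None)  # shut the worker down for the example
worker.join()
```

The point is the ordering of the last few lines of the worker loop: commit first, then notify, so a request can never be told "OK" while its write is still only in memory.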
Ah, you don't respond to the user until the DB interactions for a given user have been processed as a batch. I was under the impression that on a write you were: updating the in memory dataset so that the user could immediately read their data after the write, queuing a write to MySQL, and then responding to the user (200 OK) even though the data had not been committed.
So my concern regarding consistency was that, during the window after responding to the user but before your queue committed the data, a bug or hardware failure could leave some users thinking their requests were successful when none of their work had been saved.
As long as the user doesn't receive a response until data has been committed to your DB this makes sense.
Now that I understand the flow a bit better I agree that it makes sense. This is similar to what folks were trying to do with MySQL-Proxy. I still think that you're asking for trouble by coupling your components together in the same process and would really encourage you to take it to the next level by factoring out caching and proxying/batching of db access into their own services.
The more I think about this, the more it seems like it would be reasonable to build this as a separate twisted process exposed through an API compatible with twisted.enterprise.adbapi... Figuring out reasonable (and general) ways to determine which statements can be safely batched could be quite challenging, though.
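One deliberately conservative heuristic for that last problem (my sketch, not what the post does): only coalesce runs of statements whose parameterized SQL text is identical, since those map directly onto a single executemany call. Names are hypothetical and sqlite3 stands in for the real DB.

```python
import sqlite3

def coalesce(statements):
    """Group consecutive (sql, params) pairs with identical SQL so each
    group can go through one executemany() call. Identical parameterized
    SQL is trivially safe to batch; anything cleverer needs real analysis."""
    groups = []
    for sql, params in statements:
        if groups and groups[-1][0] == sql:
            groups[-1][1].append(params)
        else:
            groups.append((sql, [params]))
    return groups

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE kv (k TEXT, v TEXT)")
pending = [
    ("INSERT INTO kv VALUES (?, ?)", ("a", "1")),
    ("INSERT INTO kv VALUES (?, ?)", ("b", "2")),
    ("UPDATE kv SET v = ? WHERE k = ?", ("3", "a")),
]
for sql, param_list in coalesce(pending):
    conn.executemany(sql, param_list)  # one round-trip per group, not per statement
conn.commit()
```

Here the three pending statements collapse into two groups; the hard, general version of this problem (reordering, cross-table dependencies) is exactly what would make a reusable service challenging.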
I happen to be using Python's new built-in multiprocessing module. It has a suitable queue. I am using the same logic to talk to rabbitmq. I would avoid TCP.
I meant I'd rather use Queue.Queue or multiprocessing.Queue to talk to a worker than TCP. It's just my preference.
It's generally the principle of least power: use a thread (nice for the callback handling, and for 'interactions') or a process (if you worry about the GIL, though that complicates callbacks and interactions), and only jump to a TCP server if you really, really could justify it.
I also meant that the approach of workers that do coalescing is useful for non-DB things too; we're using a worker to buffer rabbitmq, for example.
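The coalescing-worker shape is the same whatever the sink is. A minimal sketch (hypothetical names; a plain list stands in for the rabbitmq publish): messages pile up on a queue and the worker flushes everything waiting in one go, so many small publishes coalesce into a few batched ones.

```python
import queue
import threading

out_queue = queue.Queue()
published = []  # stands in for the broker; a real worker would publish here

def buffer_worker():
    running = True
    while running:
        # Block for one message, then drain everything else waiting.
        batch = [out_queue.get()]
        while True:
            try:
                batch.append(out_queue.get_nowait())
            except queue.Empty:
                break
        if None in batch:  # shutdown sentinel
            running = False
            batch.remove(None)
        if batch:
            published.append(batch)  # one "publish" per drain, not per message

worker = threading.Thread(target=buffer_worker)
worker.start()
for i in range(5):
    out_queue.put(i)
out_queue.put(None)  # shut the worker down for the example
worker.join()
```

How the five messages split into batches depends on timing, but every message arrives exactly once and in order, which is what makes the pattern a drop-in buffer for things like rabbitmq.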