
So silly question - if I understand right, the idea is that you can do other stuff while async I/O is in flight.

When working on a database, don't you want to wait for the transaction to complete before continuing on? How does this affect the durability of transactions? Or do I just have the wrong mental model for this?



I think the OP is about a runtime that runs hundreds of programs concurrently. When one program is waiting on a transaction, other programs can execute.


You don't need io_uring for that - with the usual synchronous file operations, the OS will switch away from a process while it waits for disk, if other processes have work to do. OP's design is for when you have other work to do in the same process.
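A rough illustration of the same-process point, using Python's asyncio as a stand-in for io_uring-style asynchronous I/O (this is just an analogy, not OP's actual runtime):

```python
import asyncio

async def slow_io():
    # stand-in for a disk read that has been submitted asynchronously
    await asyncio.sleep(0.1)
    return "row data"

async def other_work():
    # CPU work the same process can do while the I/O is pending
    return sum(range(1000))

async def main():
    # both coroutines run in ONE process; while slow_io is waiting,
    # other_work executes instead of the process blocking in the kernel
    io_result, work_result = await asyncio.gather(slow_io(), other_work())
    return io_result, work_result
```

With synchronous reads you'd get the same concurrency only across processes (the OS scheduler switches away); here the switch happens inside one process.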


When I said “runtime” and “program” I meant it. If I had meant process I would probably have used that word.


Okay, I see what you mean. To me "program" usually implies process, even in a runtime.


From the paper it looks like this is for read-heavy workloads (testing write performance is "future work"), and I think for network file systems, which add latency.


The complex thing with a transactional DB is that many concurrent transactions are (or should be) executed simultaneously, and that mix includes both the single-query transaction and the one that loads 1 million rows.


The SQLite model is that only one write transaction can run at a time. That's kind of a defining trade-off, because it allows simplified locking.
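You can see the single-writer rule directly with Python's sqlite3 module (a minimal sketch; `timeout=0` just makes the second writer fail immediately instead of retrying):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
# isolation_level=None gives manual transaction control (autocommit)
a = sqlite3.connect(path, timeout=0, isolation_level=None)
b = sqlite3.connect(path, timeout=0, isolation_level=None)
a.execute("CREATE TABLE t (x)")

a.execute("BEGIN IMMEDIATE")           # first writer takes the write lock
a.execute("INSERT INTO t VALUES (1)")
try:
    b.execute("BEGIN IMMEDIATE")       # second writer is refused:
    second_writer_blocked = False
except sqlite3.OperationalError:       # "database is locked"
    second_writer_blocked = True
a.execute("COMMIT")

b.execute("BEGIN IMMEDIATE")           # succeeds once the first commits
b.execute("INSERT INTO t VALUES (2)")
b.execute("COMMIT")
```

Any number of readers can run alongside the one writer (especially in WAL mode); it's only the second concurrent write transaction that has to wait.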


Pekka has already experimented with MVCC, and I expect it to make it into Limbo at some point to enable multiple concurrent writers.
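For anyone unfamiliar with the term: the core MVCC idea is that writers append new versions instead of overwriting, and each reader sees a consistent snapshot. A toy sketch (illustrative only, nothing to do with Limbo's actual design):

```python
class MVCCStore:
    """Toy multi-version store: each key keeps (commit_ts, value) versions;
    a reader sees the newest version committed at or before its snapshot."""

    def __init__(self):
        self.versions = {}   # key -> list of (commit_ts, value)
        self.clock = 0

    def begin(self):
        return self.clock    # snapshot timestamp for a new reader

    def write(self, key, value):
        self.clock += 1      # commit appends a new version, never overwrites
        self.versions.setdefault(key, []).append((self.clock, value))

    def read(self, key, snapshot_ts):
        for ts, value in reversed(self.versions.get(key, [])):
            if ts <= snapshot_ts:
                return value
        return None

db = MVCCStore()
db.write("a", 1)
snap = db.begin()     # reader takes a snapshot
db.write("a", 2)      # a writer commits concurrently
old = db.read("a", snap)       # reader still sees its snapshot: 1
new = db.read("a", db.begin()) # a fresh snapshot sees: 2
```

Writers never block readers, which is how MVCC lifts the single-writer restriction - at the cost of keeping old versions around somewhere.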


MVCC will create multiple persistent files on disk, which is very un-SQLite-like.




