
Try searching their mailing list for "hang" and suchlike (of course they don't openly advertise such an easily triggered flaw).

LevelDB's write rate initially seems amazing, since it's simply writing unsorted keys to an append-only file until the file hits 2MB or so. For bursty loads it feels great.

But the moment writes are sustained for longer than it can merge segments (say while doing a bulk load), per-write latency spikes appear (average op time jumps from <1ms to >30,000ms for a single record), and eventually it gets so far behind that all attempts to progress hang entirely, waiting for the background compactor to free up room in the youngest generation. The effect seems to worsen exponentially with database size. To mitigate this, when LevelDB notices it's falling behind it starts sleeping for 0.1s on every write.

It's especially easy to trigger on slow CPUs with spinning rust drives.
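
If you want to see it for yourself, here's a minimal sketch using the LevelDB C++ API (the path, value size, and write count are arbitrary choices of mine): an unthrottled Put loop like this, run long enough on slow hardware, surfaces exactly those latency spikes.

    // Minimal sketch: sustained, unthrottled Puts against LevelDB, timing each one.
    // The path, value size, and write count are arbitrary; the point is to write
    // faster than the background compactor can merge segments and watch per-op latency.
    #include <chrono>
    #include <cstdio>
    #include <string>
    #include "leveldb/db.h"

    int main() {
      leveldb::Options options;
      options.create_if_missing = true;
      leveldb::DB* db;
      leveldb::Status s = leveldb::DB::Open(options, "/tmp/leveldb-stall-test", &db);
      if (!s.ok()) { std::fprintf(stderr, "%s\n", s.ToString().c_str()); return 1; }

      std::string value(1024, 'x');  // 1 KiB values
      for (long i = 0; i < 10000000; ++i) {
        auto t0 = std::chrono::steady_clock::now();
        db->Put(leveldb::WriteOptions(), "key-" + std::to_string(i), value);
        double ms = std::chrono::duration<double, std::milli>(
                        std::chrono::steady_clock::now() - t0).count();
        if (ms > 100.0)  // report writes stalled by compaction back-pressure
          std::printf("write %ld took %.1f ms\n", i, ms);
      }
      delete db;
      return 0;
    }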

https://groups.google.com/forum/?fromgroups=#!topic/leveldb/...




> But the moment writes are sustained for longer than it can merge segments (say while doing a bulk load), per-write latency spikes appear

Isn't that a problem common to all SSTable-based databases?

Is LevelDB any worse than HBase or Cassandra in this area?


Yes, it seems Cassandra suffers from compaction pauses too, although maybe not as badly as LevelDB.

The only solution I've found is the Castle backend from Acunu [1], but there have been no updates since 2011 [2] and it looks really heavyweight (a kernel module and all that).

[1] http://www.slideshare.net/acunu/cassandra-on-castle

[2] https://bitbucket.org/acunu/fs.hg


Any hint at how http://symas.com/mdb/ behaves?

Been using Tokyo Cabinet for a long time now, and have repeatedly hit similar hangs. Shopping for a future datastore of this sort!


MDB had no issue on the same hardware and workload where I discovered the LevelDB behaviour. It does not defer work (unless running asynchronously, in which case the OS is technically deferring work), so performance is a predictable function of database size, and unaffected by prior load.
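
For comparison, here's a minimal sketch of an equivalent write loop against LMDB's C API (map size, record count, and value size are arbitrary, and error checking is omitted): each mdb_txn_commit finishes its work before returning, which is why per-write cost tracks database size rather than an accumulated backlog.

    // Minimal sketch: synchronous single-key writes with LMDB (liblmdb C API).
    // Map size, record count, and value size are arbitrary; error checking omitted.
    // Each commit does all of its work before returning; there is no background
    // compactor to fall behind.
    #include <string>
    #include <lmdb.h>

    int main() {
      MDB_env* env;
      mdb_env_create(&env);
      mdb_env_set_mapsize(env, 10UL * 1024 * 1024 * 1024);   // 10 GiB map
      mdb_env_open(env, "/tmp/mdb-test", 0, 0664);           // directory must exist; default = durable sync commits

      MDB_txn* txn;
      MDB_dbi dbi;
      mdb_txn_begin(env, nullptr, 0, &txn);                  // open the default DB once
      mdb_dbi_open(txn, nullptr, 0, &dbi);
      mdb_txn_commit(txn);

      std::string val(1024, 'x');
      for (long i = 0; i < 1000000; ++i) {
        std::string key = "key-" + std::to_string(i);
        MDB_val k, v;
        k.mv_size = key.size();  k.mv_data = (void*)key.data();
        v.mv_size = val.size();  v.mv_data = (void*)val.data();
        mdb_txn_begin(env, nullptr, 0, &txn);
        mdb_put(txn, dbi, &k, &v, 0);
        mdb_txn_commit(txn);                                 // work happens here, not in a background thread
      }
      mdb_env_close(env);
      return 0;
    }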

Tokyo Cabinet should behave similarly. Can you tell us a bit more about your setup?


Sure: we use the TC hash datastore over millions of entries (tens of millions, not hundreds of millions or billions), each a compressed protobuf. The cost of each write grows exponentially after some time. We've played with the parameters, bucket size and count, and caching; we've tried SSDs vs. regular HDs, with no big improvement. We've considered writing a sharded version of TC (there are a couple of implementations already, IIRC). Typically, the problem seems to be related to the size of the file on disk and the number of buckets; at some point reads and writes become prohibitive (at least for our usage).

We like the speed of these datastores, as some of our algorithms make millions of calls every few seconds, and we like that the store is not remote.
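
To make the setup concrete, here's roughly how such a TC hash database gets opened and tuned; the bucket count, flags, and payload below are illustrative, not our exact settings. bnum in tchdbtune is the bucket-count parameter we kept playing with.

    // Minimal sketch: a Tokyo Cabinet hash database tuned for tens of millions of
    // records. The bucket count, flags, and payload are illustrative values only.
    #include <cstdio>
    #include <cstring>
    #include <tchdb.h>

    int main() {
      TCHDB* hdb = tchdbnew();
      // bnum should be a small multiple of the expected record count; HDBTLARGE
      // allows files over 2 GiB. Tuning must happen before tchdbopen.
      tchdbtune(hdb, 80000000LL, -1, -1, HDBTLARGE);
      tchdbsetxmsiz(hdb, 512LL * 1024 * 1024);   // extra mmap for the bucket array

      if (!tchdbopen(hdb, "casket.tch", HDBOWRITER | HDBOCREAT)) {
        std::fprintf(stderr, "open error: %s\n", tchdberrmsg(tchdbecode(hdb)));
        return 1;
      }

      const char* key = "some-id";
      const char* val = "compressed-protobuf-bytes";   // placeholder payload
      if (!tchdbput(hdb, key, (int)std::strlen(key), val, (int)std::strlen(val)))
        std::fprintf(stderr, "put error: %s\n", tchdberrmsg(tchdbecode(hdb)));

      tchdbclose(hdb);
      tchdbdel(hdb);
      return 0;
    }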


If it is a bulk load, surely slow writes aren't a serious issue as long as throughput is good? Or are you saying the average write takes 30s?


Thanks! I appreciate the heads up.



