
> First, we use a 50:50 Zipf workload, and plot throughput vs. RocksDB in Fig. 10. As expected, Faster slows down with limited memory because of increased random reads from SSD, but quickly reaches in-memory performance levels once the entire dataset fits in memory.

I don't mean to offend the authors in any way, but can someone from the Facebook RocksDB team reproduce their results on RocksDB? I am curious why RocksDB's throughput remains constant even as memory is increased from 5 GB to 40 GB.
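
For anyone who wants to try reproducing this, here is a minimal sketch of how one might pin RocksDB's memory budget for such a run, assuming the budget is applied through the block cache (the database path and cache sizes below are hypothetical, not taken from the paper):

    // Minimal sketch: open RocksDB with a fixed block-cache budget so the
    // fraction of the dataset served from memory can be varied (e.g. 5 GiB
    // vs. 40 GiB between runs). Path and sizes are hypothetical.
    #include <cassert>
    #include "rocksdb/cache.h"
    #include "rocksdb/db.h"
    #include "rocksdb/options.h"
    #include "rocksdb/table.h"

    int main() {
      rocksdb::BlockBasedTableOptions table_options;
      // Cap the block cache at 5 GiB; rerun with 40ULL << 30 to compare.
      table_options.block_cache = rocksdb::NewLRUCache(5ULL << 30);

      rocksdb::Options options;
      options.create_if_missing = true;
      options.table_factory.reset(
          rocksdb::NewBlockBasedTableFactory(table_options));

      rocksdb::DB* db = nullptr;
      rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/bench_db", &db);
      assert(s.ok());
      // ... run the 50:50 read/update Zipf workload here ...
      delete db;
      return 0;
    }

One caveat when interpreting such a run: unless options.use_direct_reads is enabled, the OS page cache also caches SST file blocks, so the memory effectively available to RocksDB is not bounded by the block cache alone.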



