1.25 Billion Key/Value Pairs in Redis 2.0.0-rc3 on a 32GB Machine (zawodny.com)
24 points by nochiel on July 25, 2010 | 6 comments



Hey, very cool to see such a real-world test! Thanks.

With Redis 2.0, lists, sets, and sorted sets are not as space-saving as hashes, but fortunately in Redis master they are going to be!

Redis master already implements specially encoded lists, so they use very little memory up to a given size. Once that limit is reached, the "ziplist" is converted into a real list that uses much more memory.

Very soon we'll also have space-saving sets (when composed of just integers). The patch is already in Pieter's git branch but needs more testing.

As for sorted sets... they'll likely remain a non-space-saving data structure ;) For hashes, lists, and sets of integers, the trick is that when they are small it's possible to specially encode these aggregate values without performance hits, but sorted sets are often composed of millions of items, as they are used as indexes, so there are no tricks available if we want sorted sets to stay as fast as they are today.
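
To make this concrete, here is a small sketch (not from the article) using redis-py against a server newer than the 2.0-rc3 tested above, since OBJECT ENCODING only appeared in later releases; the exact encoding names and size thresholds vary by version:

  # Sketch: inspect Redis's internal encodings with redis-py.
  # Assumes a local Redis newer than 2.0 (OBJECT ENCODING is not in 2.0)
  # and default thresholds for the small-collection encodings.
  import redis

  r = redis.Redis(host="localhost", port=6379)
  r.delete("l", "s", "z")

  # A short list of small values gets the compact encoding
  # ("ziplist" here; "quicklist"/"listpack" in later versions).
  r.rpush("l", *range(100))
  print("list ->", r.object("encoding", "l"))

  # A set containing only integers can be stored as an "intset".
  r.sadd("s", *range(100))
  print("set  ->", r.object("encoding", "s"))

  # A large sorted set uses the full skiplist + hash table
  # representation ("skiplist").
  r.zadd("z", {str(i): i for i in range(10000)})
  print("zset ->", r.object("encoding", "z"))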


I’m kind of tempted to re-run this test using LISTS, then SETS, then SORTED SETS just to see how they all compare from a storage point of view.

Yes please, that would be much appreciated.


I figured it'd be useful to do... just need to block off a bit of time. Perhaps in the next 24 hours if I'm lucky.
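
For anyone who wants to try a small-scale version of that comparison themselves, something along these lines would do (a sketch, not the script used for the article's 1.25 billion pairs; N here is tiny by comparison):

  # Sketch: load the same members into a list, a set and a sorted set,
  # and compare Redis's used_memory after each. Not the original
  # benchmark from the article.
  import redis

  N = 1_000_000  # far smaller than the 1.25B pairs in the article
  r = redis.Redis()

  def used_memory():
      return r.info("memory")["used_memory"]

  for kind in ("list", "set", "zset"):
      r.flushdb()
      before = used_memory()
      pipe = r.pipeline(transaction=False)
      for i in range(N):
          member = "member:%d" % i
          if kind == "list":
              pipe.rpush("data", member)
          elif kind == "set":
              pipe.sadd("data", member)
          else:
              pipe.zadd("data", {member: i})
          if i % 10_000 == 0:
              pipe.execute()
      pipe.execute()
      print(kind, used_memory() - before, "bytes")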


I've noticed that sorted sets have considerably more overhead than the other types.


If that's true, I wonder why. Maybe it's because they are trying to get O(1) access? Storing it the 'default' way would be O(log n), which isn't bad, but might still be unacceptable.


Ah, I found my answer. Sorted sets are stored in both a skip list and a hash table. So the overhead is indeed high.
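
A toy way to picture that dual bookkeeping (with a plain sorted list standing in for the skip list, so this is only an illustration of the idea, not Redis's actual implementation):

  # Toy illustration: every member is tracked twice, once in a hash
  # table (member -> score, so ZSCORE-style lookups are O(1)) and once
  # in an ordered structure (so range queries by score stay fast).
  # Redis uses a skip list for the ordered part; a sorted Python list
  # via bisect stands in for it here.
  import bisect

  class ToySortedSet:
      def __init__(self):
          self.scores = {}    # hash table part: member -> score
          self.ordered = []   # ordered part: [(score, member), ...]

      def add(self, member, score):
          if member in self.scores:
              self.ordered.remove((self.scores[member], member))
          self.scores[member] = score
          bisect.insort(self.ordered, (score, member))

      def score(self, member):
          # like ZSCORE: answered from the hash table in O(1)
          return self.scores.get(member)

      def range_by_score(self, lo, hi):
          # like ZRANGEBYSCORE: walks the ordered part
          i = bisect.bisect_left(self.ordered, (lo, ""))
          out = []
          for s, m in self.ordered[i:]:
              if s > hi:
                  break
              out.append(m)
          return out

  z = ToySortedSet()
  z.add("b", 1); z.add("c", 2); z.add("a", 3)
  print(z.score("b"))            # 1
  print(z.range_by_score(1, 2))  # ['b', 'c']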



