
That's what we hear from our customers as well. They complain about excessive CPU and memory usage.

The two phases we've seen are:

1. It's flexible and it works! Problem solved!

2. The 21st century called; they want their performance back.

The problem with phase 2 is that you may not be able to solve it by throwing more computing power at it.

Unfortunately, if you really need map-reduce, I don't know what to recommend at the moment. Riak isn't better performance-wise, and our product doesn't support map-reduce (yet).

However, if you don't need map-reduce, I definitely recommend not using Cassandra. There are plenty of non-relational databases out there that are an order of magnitude faster.
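
(For anyone wondering what map-reduce means concretely here: a Riak map-reduce job is just a list of map/reduce phases POSTed to its HTTP API. Below is a minimal sketch that counts the objects in a bucket; the bucket name "logs", the host/port, and the trivial JavaScript functions are placeholders for illustration, not anything from a real deployment:)

    # Minimal sketch of a Riak map-reduce query over the HTTP API.
    # Bucket name, host/port, and the trivial count are placeholders.
    import json
    import urllib.request

    query = {
        "inputs": "logs",  # run over every object in the bucket
        "query": [
            {"map": {"language": "javascript",
                     "source": "function(v) { return [1]; }"}},  # emit 1 per object
            {"reduce": {"language": "javascript",
                        "name": "Riak.reduceSum"}},              # built-in sum
        ],
    }

    req = urllib.request.Request(
        "http://localhost:8098/mapred",
        data=json.dumps(query).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read()))  # something like [12345]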




Be careful to compare apples to apples. Sure, the memory-only crowd (e.g., Redis) will post higher numbers, but Cassandra is the performance leader for scalable, larger-than-memory datasets. See http://www.cubrid.org/blog/dev-platform/nosql-benchmarking/ for example. (And that benchmark tests an old version of Cassandra; we did a lot of optimization on the read path for 1.0: http://www.datastax.com/dev/blog/whats-new-in-cassandra-1-0-...)


I didn't mean to sound like I was scorning other people's work; I'm sure you did a lot of great things for 1.0.

However, I have a gut feeling that we're still far from squeezing all the juice out of today's hardware.



