Neat, but it seems to be missing copyright notices and an explicit license, which means no one can actually use it or redistribute it with their application.
We do horizontally scale redis as a farm. I'll try to get more details on how we do it as I'm not the one responsible.
We thought about parallel reducers, and it does make a lot of sense. The reason they are sequential for now is that we wanted to get a first release out so we could juggle ideas with people. If you care to contribute we'd love it, even if you just create an issue.
Hadoop is a bloated pile of elephant poo. Any and all alternatives are welcome. Disco (http://discoproject.org/) is popular in some parts of the mapreducesphere.
The reason I wrote r³ is because I was a little overwhelmed by how complex disco is to administer and scale.
r³ was designed from the ground up to adhere to HTTP. That means it's pretty easy to scale using our old and well-proven techniques: caching and load-balancing.
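To make that concrete, here's a rough sketch of what an HTTP-only interface buys you, using Python's requests library; the endpoint path and query parameters are made up for illustration, not r³'s documented API:

    # Because a job request is an ordinary GET, the service URL can point at any
    # standard reverse proxy / HTTP cache (nginx, varnish, HAProxy, ...) fronting
    # several identical service nodes. Endpoint and params below are hypothetical.
    import requests

    SERVICE = "http://mapreduce.example.com"  # load balancer in front of N nodes

    resp = requests.get(SERVICE + "/stream",
                        params={"job": "word-count", "input": "corpus"})
    print(resp.status_code, resp.headers.get("Cache-Control"))

    # Repeating the identical request can be answered straight from the HTTP
    # cache without touching redis or the workers at all.
    resp = requests.get(SERVICE + "/stream",
                        params={"job": "word-count", "input": "corpus"})
    print(resp.elapsed)

Nothing r³-specific is needed on the client side, which is the point: caching and load-balancing live in the HTTP layer, not in the framework.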
I'd love to, but it would take about an hour to run through everything.
Here's a short version:
There's a collective ecosystem problem of fragmented applications, not-quite-right command-line utilities, web interfaces that look like they were designed in 1995, noisy log files people actually have to read constantly, and cross-coupled dependencies that make keeping a cluster live for production use a full-time job.
There's the programming problem that nobody actually writes Hadoop MapReduce code by hand because it's impossibly complicated. Everybody uses Hive, Pig, and half a dozen other tools that compile down to pre-templated Java classes (which costs you 5% to 30% of the performance you could get writing it by hand).
It hasn't grown because it's so amazing, performant, and company-saving. It grows because people jumped on a fad wagon and then got stuck with a few hundred TB in HDFS. The lack of a competing project with equal mindshare and battle-testedness doesn't foster any competition. It's the MySQL of distributed processing systems: it works (mostly), but it breaks (in a few dozen known ways), so people keep adding features and building on top of it.
seiji pretty much nails it. Hadoop seems to have come out of a weird culture. It is a distributed system with a single point of failure (the NameNode) because its designers insisted on avoiding Paxos ("distributed systems are too hard, so we'll just make a broken-by-design protocol instead"). Another example: a lot of the database code built on top of Hadoop is designed around one Java HashMap per row, which really limits performance.
There are all sorts of oddities and you can mostly work around them but it is...exhausting, and I spend a lot of time thinking "surely there must be a better way".
Wait, so ZooKeeper (= the distributed consensus thingie that I think implements the Paxos algorithm) is a Hadoop project but isn't actually used in Hadoop MapReduce?
Thanks for your insightful comments! I appreciate that you took the time to back up your opinion by distilling your thoughts into something quickly digestible.
Have you heard of any other projects outside of disco that are more performant than hadoop when used for similar applications?
If you have a lot of data and network IO is a big issue, you'll want to use something like Hadoop (or Disco), because they come with an integrated distributed file system and they preserve data locality.
If you don't have that much data, MR on redis is fine.
I can think of one case where a redis dictionary is used to represent a tree, and reductions are needed over a subtree. Calculations on river networks are like this. You might want to use redis instead of a cPickled dictionary, and you might not want the overhead of a full Hadoop.
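A minimal sketch of what that could look like with redis-py (the key names and the river-network numbers are invented for illustration, and it assumes a local redis server):

    # Each node keeps its own value plus a set of child ids, e.g. a river
    # network where "flow:<node>" is local runoff and "children:<node>" are
    # the upstream reaches. Keys/values here are hypothetical sample data.
    import redis

    r = redis.Redis(host="localhost", port=6379, db=0)

    r.set("flow:outlet", 5)
    r.sadd("children:outlet", "reach-a", "reach-b")
    r.set("flow:reach-a", 3)
    r.set("flow:reach-b", 2)
    r.sadd("children:reach-a", "reach-c")
    r.set("flow:reach-c", 7)

    def reduce_subtree(node):
        """Sum the flow of `node` and everything upstream of it."""
        total = int(r.get("flow:" + node) or 0)
        for child in r.smembers("children:" + node):
            total += reduce_subtree(child.decode())
        return total

    print(reduce_subtree("outlet"))   # 17
    print(reduce_subtree("reach-a"))  # 10

For a tree that size the recursive reduction never leaves redis, which is the appeal over standing up a full Hadoop cluster.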
I actually like it when people focus on their own case and don't pollute their manuals with such things. If you don't know how to do it, that's information you can easily obtain elsewhere, and you probably have some homework to do anyway.
This is pretty interesting; I have a related project (plug: hadoopy.com). The way I went about this (in an experimental branch) is to use Celery running on Redis.
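Not hadoopy's actual code, but a generic sketch of the Celery-on-Redis idea (a word count, assuming Celery with a local redis broker and a worker already running):

    # Save as mr.py and start a worker with: celery -A mr worker
    from collections import Counter
    from celery import Celery, chord

    app = Celery("mr", broker="redis://localhost:6379/0",
                 backend="redis://localhost:6379/1")

    @app.task
    def map_chunk(text):
        # Mapper: word counts for one chunk of input.
        return Counter(text.split())

    @app.task
    def reduce_counts(counters):
        # Reducer: merge all the per-chunk counts into one dict.
        total = Counter()
        for c in counters:
            total.update(c)
        return dict(total)

    if __name__ == "__main__":
        chunks = ["the quick brown fox", "the lazy dog", "the fox again"]
        # chord() runs the mappers in parallel on the workers, then hands the
        # collected results to the single reducer task.
        result = chord(map_chunk.s(c) for c in chunks)(reduce_counts.s())
        print(result.get())

Redis acts as both the broker and the result backend here, so you get the fan-out/fan-in shape of MapReduce without any Hadoop machinery.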
I agree, but one of the next features we'll implement is the ability to write stream processors, mappers, and reducers in any language you want. Stay tuned!