As mentioned by others, this list is old. The cited algorithms are certainly still good to know, but the meaning of massive is different now. Today, massive means:
- too large to fit even on big iron (which few people can afford anyway)
- low value: much of the data is noise or simply too dirty to be useful, so not taking all of it into account all the time costs little.
Nothing outside near-linear or even sublinear algorithms really works in those cases. Singular Value Decomposition is a great example. Until recently, it was mostly about doing fast, accurate SVD for large matrices. There has been a recent surge in approximate algorithms that look at each data point at most once. This is useless for most "hard" engineering tasks, but for analysis of large graph data you can most likely tolerate a few percent of error in your biggest singular values and still get something useful.
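To make that concrete, here is a minimal sketch of a randomized approximate SVD in the style of Halko, Martinsson and Tropp: compress the matrix with a random projection, then do an exact SVD on the small sketch. This is the basic two-pass variant (the truly single-pass versions replace the second multiply with another sketch); function names and parameters here are just illustrative.

    import numpy as np

    def randomized_svd(A, k, oversample=10, seed=0):
        """Approximate rank-k SVD via a random range sketch
        (two-pass variant; single-pass versions sketch both sides)."""
        rng = np.random.default_rng(seed)
        m, n = A.shape
        # Sketch the column space of A with a random Gaussian test matrix.
        Omega = rng.standard_normal((n, k + oversample))
        Y = A @ Omega                       # first pass over A
        Q, _ = np.linalg.qr(Y)              # orthonormal basis for the sketch
        # Project A onto that basis and take an exact SVD of the small matrix.
        B = Q.T @ A                         # second pass over A
        U_small, s, Vt = np.linalg.svd(B, full_matrices=False)
        return Q @ U_small[:, :k], s[:k], Vt[:k, :]

    # Example: a noisy low-rank matrix; the top singular values come back
    # to within a small relative error at a fraction of the cost.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((2000, 50)) @ rng.standard_normal((50, 1000))
    U, s, Vt = randomized_svd(A, k=10)
    print(s[:3], np.linalg.svd(A, compute_uv=False)[:3])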
The fun part is that something as simple as matrix multiplication becomes an interesting and potentially hard problem.
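For instance, one classic trick in this setting is to approximate A·B by sampling a handful of column/row pairs instead of touching every entry, roughly the Drineas-Kannan-Mahoney estimator. A quick sketch (the sampling budget c is an assumption you'd tune):

    import numpy as np

    def sampled_matmul(A, B, c, seed=0):
        """Unbiased estimate of A @ B from c sampled column/row outer
        products, with probabilities proportional to ||A[:, j]|| * ||B[j, :]||."""
        rng = np.random.default_rng(seed)
        p = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
        p = p / p.sum()
        idx = rng.choice(A.shape[1], size=c, p=p)
        # Rescale each sampled outer product so the estimate is unbiased.
        scale = 1.0 / (c * p[idx])
        return (A[:, idx] * scale) @ B[idx, :]

    rng = np.random.default_rng(2)
    A = rng.standard_normal((300, 5000))
    B = rng.standard_normal((5000, 300))
    approx = sampled_matmul(A, B, c=500)
    exact = A @ B
    print(np.linalg.norm(approx - exact) / np.linalg.norm(exact))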
Would anyone know if there's audio/video of these lectures? I keep seeing amazing classes like this and wishing that everyone could enjoy them instead of just the local students.
Slides, notes and papers from Sergei Vassilvitskii's class on a similar topic, COMS 6998-12: Dealing with Massive Data: http://www.cs.columbia.edu/~coms699812/
http://www.stanford.edu/class/cs246/cs246-11-mmds/handouts.h...
The book here: http://infolab.stanford.edu/~ullman/mmds.html