This is awesome. I've asked professors at my university to offer a class like this, but it never happened. It couldn't be better than having Erik Demaine teach it. The video quality is great, and the synced lecture notes are genuinely impressive.
Erik Demaine has some of the nicest algorithms lectures, and I would compare him to Gilbert Strang teaching linear algebra.
EDIT: and, closer to home, cpercival talking about cryptography and security, or tptacek when he's not grumpy and actually writes an informative, detailed comment.
Too bad that you can't easily judge up front which of those data structures and algorithms are really usable in practice. For example, fusion trees may seem like a cool idea, but they're totally impractical, as the authors (Fredman and Willard) themselves admitted in their original paper.
This is definitely true, and it marks a strong detachment of asymptotic analysis and theoretical computation models from real-world computers, which are just too complex to reason about abstractly.
In theoretical computer science, data structures like this are mostly theorems, proving what you can do in a given computation model. This doesn't mean that they have desirable properties in real-world implementations compared to classical structures such as hash tables, binary trees, and tries.
However, they are not completely useless in practice, as they usually bring up new ideas that, if properly engineered, can improve on "classical" algorithms. I've been working in this field for the last couple of years and I've observed that in certain niche applications advanced algorithms can really be game-changers (think computational molecular biology).
For example, I recently used succinct data structures to engineer a compressed trie that is fairly fast and has very good compression. If you have a big set of URLs, you can compress it to around 12% of the original size and still do searches and retrievals in a matter of microseconds. (paper http://siam.omnibooksonline.com/2012ALENEX/data/papers/018.p... , code https://github.com/ot/path_decomposed_tries ) [/shameless plug]
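To give a flavor of what "succinct" means in practice, here is a toy sketch of the rank primitive that most succinct structures (including tries like the one above) are built on; this is purely illustrative, not the path-decomposed trie from the paper, and the tiny block size is chosen for readability rather than realism:

```python
# Sketch of rank1(i): count the 1-bits in bits[0:i]. Real succinct
# structures answer this in O(1) with o(n) extra bits; here we keep a
# prefix count per fixed-size block plus a scan within one block.

BLOCK = 8  # toy block size; real implementations use machine words

class RankBitVector:
    def __init__(self, bits):
        self.bits = bits  # list of 0/1
        self.block_ranks = [0]  # 1-bits before each block boundary
        for i in range(0, len(bits), BLOCK):
            self.block_ranks.append(
                self.block_ranks[-1] + sum(bits[i:i + BLOCK]))

    def rank1(self, i):
        # 1-bits strictly before position i: one table lookup plus a
        # scan of at most BLOCK bits (constant for fixed BLOCK).
        b = i // BLOCK
        return self.block_ranks[b] + sum(self.bits[b * BLOCK:i])

bv = RankBitVector([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])
print(bv.rank1(10))  # → 6
```

The auxiliary table costs far less space than the bit vector itself, which is why you can navigate a trie encoded as a bit sequence without decompressing it.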
I don't really understand the point of fusion trees.
log log N is a constant from the practical perspective.
So, O(log N / log log N) is simply O(log N).
What was the point?
The point is that O(log N / log log N) is not O(log N): they broke the information-theoretic bound of O(log N) in linear space using "ordinary" operations. (Previous such results used exotic machine models that had little to do with how modern CPUs actually operate.)
Even for N=2^16, you could (in theory) get a 4-fold speedup.
Aha... and for 2^32 I'll have 5-fold speedup. 6-fold for 2^64 entries, obviously.
Frankly, those small-linear-factor improvements tend to be overshadowed by other factors (e.g., whether you hit the cache). I mean, they make sense as an optimization once everything is in place and works correctly.
But designing something from the ground up for a small-constant-factor speedup... that reminds me of a physics trick: if you stand far from an infinitely long wall and sweep a laser pointer fast enough, the spot on the wall moves faster than c.
Depending on how many students you have and how many graders, it may be a choice between that and arbitrary "if I can't read your handwriting, you get a 0" requirements.
As a separate note, some people think that classes should teach more than just the narrow subject material, and knowledge of LaTeX is quite useful for someone who plans to actually publish CS research.
Typesetting in LaTeX is pretty common at MIT. For example, all majors have a technical communication requirement, and most courses that satisfy it stipulate that problem sets be handed in in TeX. It's also not that difficult, since TeX is easy to learn, and it's a helpful skill if you plan to publish research.
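For anyone who hasn't tried it, a complete problem-set write-up really is only a few lines of TeX; this is a generic sketch (the problem statement here is made up for illustration):

```latex
\documentclass{article}
\usepackage{amsmath}

\begin{document}

\section*{Problem 1}
Show that $\log N / \log \log N$ grows strictly slower than $\log N$.

\textbf{Solution.} For $N = 2^{2^k}$ we have
\[
  \frac{\log N}{\log \log N} = \frac{2^k}{k},
\]
and $2^k / k = o(2^k) = o(\log N)$ as $k \to \infty$. \qed

\end{document}
```

Once you have one such file, every later problem set is copy, paste, and fill in.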