The point is that O(log N / log log N) is not O(log N): they have beaten the information-theoretic Ω(log N) bound in linear space using "ordinary" operations. (Previous such results relied on exotic machine models that had little to do with how modern CPUs actually operate.)

Even for N=2^16, you could (in theory) get a 4-fold speedup.




Aha... and for 2^32 I'll have a 5-fold speedup. 6-fold for 2^64 entries, obviously. Frankly, those small-linear-factor improvements tend to be overshadowed by other effects (e.g. whether you hit the cache). I mean, they make sense as an optimization once everything is in place and works correctly. But designing something from the ground up for a small-constant-factor speedup... that reminds me of one physics trick. Namely, that by standing next to an infinite wall and turning a laser pointer quickly, you can make the spot on the wall move faster than c.
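For what it's worth, the speedup factors quoted in this thread follow directly from the ratio of the two bounds: an O(log N / log log N) search is faster than an O(log N) one by a factor of log log N. A quick sketch (my own, not from the paper) to check the arithmetic:

```python
import math

def asymptotic_speedup(n_bits):
    """Ratio log N / (log N / log log N) = log log N, for N = 2^n_bits."""
    log_n = n_bits          # log2(N) when N = 2^n_bits
    return math.log2(log_n)  # log2(log2(N))

for bits in (16, 32, 64):
    print(f"N = 2^{bits}: ~{asymptotic_speedup(bits):.0f}-fold speedup")
# N = 2^16 gives log2(16) = 4, 2^32 gives 5, 2^64 gives 6 --
# matching the 4-, 5-, and 6-fold figures above.
```

Constant factors hidden by the O(·) notation can of course swamp these ratios at practical sizes, which is the parent's point.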



