
That's actually not good enough, because modern CPUs have prefetchers: if you just access a 33kB chunk sequentially, it will still almost always be served from L1 (the prefetcher probably uses misses to train, so you'll see a few misses before it catches up), even if the L1 is only 32kB.

To predictably miss caches you need to have a random access pattern.
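To make that concrete, here is a minimal sketch (not from the comment; the names and sizes are illustrative) of the usual way to get a random access pattern the prefetcher can't follow: chase pointers through a random cyclic permutation, so each load address depends on the result of the previous load.

    /* Pointer-chasing through a random single cycle: the next address is only
     * known after the current load completes, so the prefetcher has nothing
     * predictable to train on. Sizes are illustrative. */
    #include <stdio.h>
    #include <stdlib.h>

    #define N (1 << 20)   /* 1M entries (~8 MB), far larger than a 32 kB L1 */

    int main(void) {
        size_t *next = malloc(N * sizeof *next);
        if (!next) return 1;

        /* Sattolo's algorithm: builds one random cycle visiting every slot. */
        for (size_t i = 0; i < N; i++) next[i] = i;
        for (size_t i = N - 1; i > 0; i--) {
            size_t j = (size_t)rand() % i;               /* j < i */
            size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
        }

        /* Chase the chain: once the working set exceeds the cache,
         * nearly every load is a miss. */
        size_t idx = 0, sum = 0;
        for (size_t step = 0; step < N; step++) {
            idx = next[idx];
            sum += idx;
        }
        printf("%zu\n", sum);   /* keep the loop from being optimized away */
        free(next);
        return 0;
    }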




Well, back in the early '90s, HP RISC was the only cache controller we found that did cache-line prefetch. If you kept increasing your stride, you eventually reached one big enough that you were skipping so many cache lines that it stopped prefetching.
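For reference, a hedged sketch of that stride trick (the buffer size, line size, and 16-line stride are illustrative, not from the post); as the next paragraph notes, modern stride prefetchers will catch this, so it mainly shows what used to work:

    /* Large-stride traversal: touch only one cache line out of every 16,
     * so a simple next-line prefetcher gives no benefit. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define BUF_BYTES (64UL * 1024 * 1024)  /* 64 MB, well past any cache level */
    #define LINE      64                    /* typical cache line size */
    #define STRIDE    (16 * LINE)           /* skip 16 lines per access */

    int main(void) {
        char *buf = malloc(BUF_BYTES);
        if (!buf) return 1;
        memset(buf, 1, BUF_BYTES);          /* touch every page once up front */

        unsigned long sum = 0;
        for (size_t off = 0; off < BUF_BYTES; off += STRIDE)
            sum += (unsigned char)buf[off]; /* one load per 16th cache line */

        printf("%lu\n", sum);               /* keep the loads from being optimized out */
        free(buf);
        return 0;
    }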

We switched to going backwards; that fooled them for a while, but now most cache controllers seem to detect forwards, backwards, and uniform strides, which is actually pretty neat.

So yeah, today it would seem that random is what you need.



