Hacker News

It's complicated; memory accesses can block for relatively long periods of time.

Consider that a regular memory access that hits the top-level cache takes around 1 nanosecond.

If the data is not in the top-level cache but in a lower cache level, we're looking at roughly 10 nanoseconds of access latency.

If the data is not in cache at all, we are looking at 50-150 nanoseconds of access latency for a trip to main memory.

If the data is in memory but that memory is attached to another CPU socket (a remote NUMA node), the latency is even higher.

Finally, if the data is accessed via an atomic instruction and many other CPUs are hammering the same memory location, the latency can be as high as 3000 nanoseconds.

It's not very hard to find NVMe-attached storage with latencies in the tens of microseconds, which is not that far off the worse memory access latencies above.



I just want to add to your explanation that even in the absence of hard page faults that go to disk, you can have soft page faults, where the kernel fixes up page table entries, assigns a physical page on first touch, copies a copy-on-write page, etc.

In addition to the cache misses you mention, there are also TLB misses, where the CPU has to walk the page tables to translate a virtual address.

Memory is not actually random access; locality matters a lot. SSD reads, on the other hand, are much closer to truly random access, but much more expensive.



