
Why do you even need a CPU cache?

1 ns write operations suggest fast reads too.




No matter how fast the device itself is, addressing into a large pool will always be slower than addressing into a small one, both because of increased travel distance and because every time you double the size of the pool, you add one additional mux on the path between the request and the memory.
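
A toy illustration of that doubling argument (not a circuit model; the word size and pool sizes here are made-up assumptions): the decode depth of an idealized array grows as log2 of the number of addressable words, so a 32 MiB pool sits behind roughly ten more mux levels than a 32 KiB one.

    #include <stdio.h>

    int main(void) {
        /* L1-ish through L3-ish pool sizes, in KiB (illustrative) */
        const size_t sizes_kib[] = {32, 256, 2048, 32768};
        for (int i = 0; i < 4; i++) {
            size_t words = sizes_kib[i] * 1024 / 8;      /* assume 8-byte words */
            int depth = 0;
            while (words > 1) { words >>= 1; depth++; }  /* integer log2 */
            printf("%6zu KiB -> %2d mux levels deep\n", sizes_kib[i], depth);
        }
        return 0;
    }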

This is why CPUs have multi-level caches, even though the transistors in L1 cache and L2 cache are typically the same -- the difference in access latency is not because L2 is made of slower memory, but because L1 is a very small pool very close to the CPU with the load/store units built into it, and L2 is a bit further away.
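
You can see those pool-size steps directly with a dependent-load (pointer-chasing) microbenchmark. A minimal sketch, assuming POSIX clock_gettime; the working-set sizes and iteration count are arbitrary choices meant to straddle typical L1/L2/L3/DRAM boundaries:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void) {
        /* Working sets from 16 KiB (fits in L1) up to 64 MiB (DRAM-bound). */
        for (size_t size = 16 * 1024; size <= 64 * 1024 * 1024; size *= 4) {
            size_t n = size / sizeof(void *);
            void **buf = malloc(n * sizeof(void *));
            size_t *idx = malloc(n * sizeof(size_t));
            for (size_t i = 0; i < n; i++) idx[i] = i;
            /* Fisher-Yates shuffle: a random cycle defeats the prefetcher. */
            for (size_t i = n - 1; i > 0; i--) {
                size_t j = rand() % (i + 1);
                size_t t = idx[i]; idx[i] = idx[j]; idx[j] = t;
            }
            for (size_t i = 0; i < n; i++)
                buf[idx[i]] = &buf[idx[(i + 1) % n]];

            void **p = &buf[idx[0]];
            const size_t iters = 10 * 1000 * 1000;
            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            for (size_t i = 0; i < iters; i++)
                p = (void **)*p;              /* serialized dependent loads */
            clock_gettime(CLOCK_MONOTONIC, &t1);

            double ns = (t1.tv_sec - t0.tv_sec) * 1e9
                      + (t1.tv_nsec - t0.tv_nsec);
            /* printing p keeps the chase from being optimized away */
            printf("%8zu KiB: %5.2f ns/load (%p)\n",
                   size / 1024, ns / iters, (void *)p);
            free(idx);
            free(buf);
        }
        return 0;
    }

On typical hardware the ns/load figure steps up each time the working set outgrows a cache level.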

However, if main memory latency is suddenly a lot lower, it might change what the most efficient cache hierarchy layout is. The currently ubiquitous large L3 cache might go away. That would of course require very high bandwidth to the memory chips, because L3 does bandwidth amplification too.
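
A back-of-envelope sketch of that amplification point (the demand figure and hit rates are made-up assumptions, just to show the shape of it): with hit rate h in L3, the memory bus only sees the (1 - h) miss fraction of the cores' traffic.

    #include <stdio.h>

    int main(void) {
        double core_demand_gbs = 400.0;  /* assumed aggregate core traffic */
        double hit_rates[] = {0.0, 0.5, 0.9, 0.95};
        for (int i = 0; i < 4; i++)
            printf("L3 hit rate %4.0f%% -> %5.0f GB/s hits memory\n",
                   hit_rates[i] * 100.0,
                   core_demand_gbs * (1.0 - hit_rates[i]));
        return 0;
    }

So removing a 95%-hit-rate L3 would multiply the bandwidth demanded of the memory chips by 20x in this toy model.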


It should be stressed that this speed is entirely theoretical: https://onlinelibrary.wiley.com/doi/epdf/10.1002/aelm.202101...

> In all of the above tests, the program and erase states were set using between 1 and 10 ms voltage pulses, two times longer than the switching times used in our recent report of ULTRARAM on GaAs substrates.[15] In both cases, the devices operate at a remarkably high speed for their large (20 μm) feature size. Assuming ideal capacitive scaling[33] down to state-of-the-art feature sizes, the switching performance would be faster than DRAM, although testing on smaller feature size devices is required to confirm this.
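
Back-of-envelope on that quoted scaling claim, assuming (my assumption of the model, not necessarily the exact one in the paper's ref. [33]) that switching time scales with gate capacitance and capacitance scales with feature area; going from 20 um to a hypothetical 20 nm feature is 10^3 linear, 10^6 in area:

    #include <stdio.h>

    int main(void) {
        double t_pulse_s = 1e-3;                 /* fastest reported pulse, 1 ms */
        double f_large = 20e-6, f_small = 20e-9; /* 20 um -> 20 nm, assumed target */
        double area_ratio = (f_small / f_large) * (f_small / f_large);  /* 1e-6 */
        printf("scaled pulse: %.2f ns\n", t_pulse_s * area_ratio * 1e9);
        /* prints 1.00 ns -- the "faster than DRAM" ballpark, but only IF the
           ideal-scaling assumption holds, which is exactly what the authors
           say still needs to be confirmed on smaller devices */
        return 0;
    }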

> Why do you even need a CPU cache?

Cell read time is entirely different from end-to-end latency and throughput. This stuff is still read out in rows like DRAM and can't just be accessed freely like registers.
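
A toy open-row model of what "reads in rows" costs in practice (timings and row size are illustrative DDR4-ish assumptions, nothing from the article): an access to the already-open row pays only CAS latency, while touching a different row pays precharge + activate first.

    #include <stdio.h>

    #define ROW_BYTES 8192                       /* typical DRAM row size */

    int main(void) {
        double t_cl = 14.0, t_rcd = 14.0, t_rp = 14.0;   /* ns, assumed */
        long open_row = -1;
        long addrs[] = {0, 64, 128, 100000, 100064, 0};
        for (int i = 0; i < 6; i++) {
            long row = addrs[i] / ROW_BYTES;
            double t = (row == open_row)
                     ? t_cl                       /* row hit: column read only */
                     : t_rp + t_rcd + t_cl;       /* row miss: precharge+activate */
            open_row = row;
            printf("addr %7ld: row %3ld, %4.0f ns\n", addrs[i], row, t);
        }
        return 0;
    }

Even with a 1 ns cell, the random-access pattern a register file sees would keep paying the row-open cost.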


You’d probably still need an L1 cache. L2 and L3 might be superfluous, or you could have massive L2/L3 caches made with this rather than traditional SRAM, sitting inside the CPU to avoid the memory bus. Contention for the memory bus could also be a reason to keep SRAM caches even if they end up slower than main memory.

Shifts like this are so impactful that it’s hard to predict exactly what good designs will look like until the industry has had 5-10 years hands-on to shake out what the HW topology will look like (maybe more, since HW dev cycles prevent fast iteration and testing of ideas).



