
The claimed latency seems not far off from some other vendors' L3 caches, which may be a consequence of rethinking where sharing happens, and therefore where the interconnect coherency tax is paid.

The innovation here seems to be adaptive sizing: if some algorithm or metric determines that a remote core is idle, that core can volunteer its cache capacity to serve as L4.
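A minimal sketch of that donation idea, purely for illustration: the names, sizes, and the utilization threshold below are all invented, and the real mechanism lives in hardware, not software.

```python
# Hypothetical sketch of adaptive cache sizing: cores that are idle
# enough "volunteer" their private cache capacity to a shared virtual
# L4 pool. Sizes and thresholds are made up for illustration.

from dataclasses import dataclass

L2_SIZE_MB = 32          # private cache per core (illustrative figure)
IDLE_THRESHOLD = 0.10    # utilization below which a core donates

@dataclass
class Core:
    core_id: int
    utilization: float   # 0.0 (idle) .. 1.0 (fully busy)

def virtual_l4_capacity_mb(cores):
    """Sum the private cache of all cores idle enough to donate."""
    return sum(L2_SIZE_MB for c in cores if c.utilization < IDLE_THRESHOLD)

cores = [Core(0, 0.95), Core(1, 0.02), Core(2, 0.40), Core(3, 0.05)]
print(virtual_l4_capacity_mb(cores))  # cores 1 and 3 donate: 64
```

In hardware the "algorithm/metric" would presumably be tracked continuously and the pool resized transparently, rather than recomputed like this.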

Presumably the interconnect is much richer than in contemporary processors, in typical IBM fashion, and all the control is done at a very low level (hardware state machines and microcode) so it is fast and transparent. It will be interesting to hear how it works in practice, and whether POWER12 gets a similar feature, since it shares a lot of R&D.



