
It guarantees that the referenced memory will be available for subsequent operations. Knowing precisely how much memory is available at any particular instant in time is pretty critical for performance engineering.
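
As a minimal Rust sketch of what that determinism buys you (the Buffer type here is made up purely for illustration): with Rc, the memory comes back at the exact statement where the last reference is dropped, so you know precisely when it is available again.

    use std::rc::Rc;

    // A type that reports when its memory is released.
    struct Buffer(Vec<u8>);

    impl Drop for Buffer {
        fn drop(&mut self) {
            println!("buffer of {} bytes freed here, deterministically", self.0.len());
        }
    }

    fn main() {
        let a = Rc::new(Buffer(vec![0u8; 1024]));
        let b = Rc::clone(&a);   // refcount = 2
        drop(a);                 // refcount = 1, nothing freed yet
        println!("still {} live reference(s)", Rc::strong_count(&b));
        drop(b);                 // refcount = 0, Drop runs right here
        println!("the memory was already back before this line ran");
    }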



Only if one ignores memory fragmentation.


Pretty sure everything they wrote is true even with fragmentation.

But, also, if your point is that defragmentation is a selling point... I mean, the last time I saw someone move from RC to GC because of defragmentation was... never? Probably because you can permanently solve fragmentation by throwing money at it and buying more memory. Whereas people go in the other direction all the time... probably because you can't perpetually throw more money at the problem and make compute time go N times faster.


Only on the cloud and classical desktop PCs.

You never saw anyone move from RC to GC because there are very few high-performance pure RC implementations to make them a first option anyway, and in most cases such a move requires a rewrite in another language.


Fragmentation is always a possibility unless you've handled it specifically, like with a moving garbage collector.
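
A toy sketch of what that looks like, using a made-up first-fit arena allocator (everything here is illustrative, not a real allocator): free half of the memory in alternating blocks and a larger allocation still fails, because no single hole is big enough even though the total free space is.

    // Toy first-fit allocator over a fixed arena, only to make external
    // fragmentation visible; names and sizes are invented for the example.
    struct Arena {
        holes: Vec<(usize, usize)>, // (offset, length) of each free hole
    }

    impl Arena {
        fn new(size: usize) -> Self {
            Arena { holes: vec![(0, size)] }
        }

        // First-fit allocation: take the first hole large enough.
        fn alloc(&mut self, len: usize) -> Option<usize> {
            for i in 0..self.holes.len() {
                let (off, hole) = self.holes[i];
                if hole >= len {
                    if hole == len {
                        self.holes.remove(i);
                    } else {
                        self.holes[i] = (off + len, hole - len);
                    }
                    return Some(off);
                }
            }
            None
        }

        // Return a block to the free list (no coalescing; in this scenario
        // the neighbouring blocks are still live anyway).
        fn dealloc(&mut self, off: usize, len: usize) {
            self.holes.push((off, len));
        }

        fn total_free(&self) -> usize {
            self.holes.iter().map(|&(_, len)| len).sum()
        }
    }

    fn main() {
        let mut arena = Arena::new(1024);
        // Fill the arena with eight 128-byte blocks.
        let blocks: Vec<usize> = (0..8).map(|_| arena.alloc(128).unwrap()).collect();
        // Free every other block: 512 bytes are free again, but split
        // across four separate holes.
        for (i, &off) in blocks.iter().enumerate() {
            if i % 2 == 0 {
                arena.dealloc(off, 128);
            }
        }
        println!("total free: {} bytes", arena.total_free());      // 512
        println!("fit a 256-byte object? {:?}", arena.alloc(256));  // None
    }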


Which reference-counted implementations almost never make use of; and when they do, they slowly evolve into a tracing collector once fully implemented.



