> I’m sure you would! GC is like communism. Always some excuse as to why GC isn’t to blame.

To be fair, there are about 4 completely independent bad decisions that tend to be made together in a given language. GC is just one of them, and not necessarily the worst (possibly the least bad, even).

The decisions, in rough order of importance according to some guy on the Internet:

1. The static-vs-dynamic axis. This is not a binary decision, things like "functions tend to accept interfaces rather than concrete types" and "all methods are virtual unless marked final" still penalize you even if you appear to have static types. C++'s "static duck typing" in templates theoretically counts here, but damages programmer sanity rather than runtime performance. Expressivity of the type system (higher-kinded types, generics) also matters. Thus Java-like languages don't actually do particularly great here.

2. The AOT-vs-JIT axis. Again, this is not a binary decision, nor is it fully independent of other axes - in particular, dynamic languages with optimistic tracing JITs are worse than Java-style JITs. A notable compromise is "cached JIT" aka "AOT at startup" (in particular, this deals with -march=native), though this can fail badly in "rebuild the container every startup" workflows. Theoretically some degree of runtime JIT can help too since PGO is hard, but it's usually lost in the noise. Note that if your language understands what "relocations" are you can win a lot. Java-like languages can lose badly for some workflows (e.g. tools intended to be launched from bash interactively) here, but other workflows can ignore this.

3. The inline-vs-indirect-object axis - that is, are all objects (effectively) forced to be separate allocations, or can they be subobjects (value types)? (See the sketch after the footnote below.) If local variables can avoid allocation, that only counts for a little bit. Java loses very badly here outside of purely numerical code (Project Valhalla has been promising a solution for a decade now, and given their unwieldy proposals it's not clear they actually understand the problem), but C# is tolerable, though still far behind C++ (note the "fat shared" implications with #4). In other words - yes, usually the problem isn't the GC, it's the fact that the language forces you to generate garbage in the first place.

4. The intended-vs-uncontrollable-memory-ownership axis. GC-by-default is an automatic loss here; the bare minimum is to support the widely-intended (unique, shared, weak, borrowed) quartet without much programmer overhead. (Barring the bug below, you can write unique-like logic by hand and implement the others in terms of it; notably, many languages have poor support for weak.) I have a much bigger list [1], and some entries require language support to implement. try-with-resources (= Python-style with) is worth a little here but nowhere near enough to count as a win; try-finally is assumed even in the worst case but worth nothing, being very ugly. Note that many languages are unavoidably buggy if they allow an exception to occur between the construction of an object and its assignment to a variable; the only way to avoid this is to write extensions in native code.

[1] https://gist.github.com/o11c/dee52f11428b3d70914c4ed5652d43f... - a list of intended memory ownership policies. Generalized GC has never found a theoretical use; it only shows up as a workaround.
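
To make #3 concrete, a minimal C# sketch (type names made up for illustration): an array of structs is one contiguous allocation, while an array of class instances is an array of references plus N separate heap objects, every one of which is future GC work.

    // Value type: instances live inline wherever they are stored.
    struct PointStruct { public double X, Y; }

    // Reference type: every instance is a separate heap allocation.
    class PointClass { public double X, Y; }

    class Layout
    {
        static void Main()
        {
            // One allocation total: 1000 points contiguous in the array.
            var inline = new PointStruct[1000];

            // 1001 allocations: the array of references plus 1000 tiny
            // heap objects scattered around -- garbage the GC must trace.
            var indirect = new PointClass[1000];
            for (int i = 0; i < indirect.Length; i++)
                indirect[i] = new PointClass();
        }
    }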



re 1. C#'s dispatch strategy is not Java-like: all methods are non-virtual by default unless specified otherwise. In addition, dispatch-by-generic-constraint for structs is zero-cost, much like Rust generics or C++ templates. As of now, neither OpenJDK nor .NET suffers from virtual and interface calls to the extent that C suffers from manually rolled vtables or C++ from virtual calls, because both OpenJDK/GraalVM and .NET have compilers that are intimately aware of the exact type system they are targeting, which enables advanced devirtualization patterns. Notably, this also works as whole-program optimization for native binaries produced by .NET's Native AOT.
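
A small sketch of the zero-cost claim, with illustrative names: calling through an interface-typed parameter boxes the struct and dispatches indirectly, while a generic method constrained on the interface is specialized per struct type, so the call is direct and typically inlined.

    using System;

    interface IShape { double Area(); }

    struct Square : IShape
    {
        public double Side;
        public double Area() => Side * Side;
    }

    class Dispatch
    {
        // Interface dispatch: the struct is boxed (a heap allocation)
        // and Area() is an indirect call through the interface.
        static double ViaInterface(IShape s) => s.Area();

        // Generic constraint: .NET compiles a specialization per struct T,
        // so Area() is a direct, inlinable call with no boxing.
        static double ViaConstraint<T>(T s) where T : IShape => s.Area();

        static void Main()
        {
            var sq = new Square { Side = 3 };
            Console.WriteLine(ViaInterface(sq));   // boxes
            Console.WriteLine(ViaConstraint(sq));  // no box
        }
    }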

re 4. there is a gap in the programming community's understanding of the constraints that lifetime analysis imposes on the dynamism JIT compilation allows: once the compiler bakes in assertions about when an object or struct is truly no longer referenced, or whether it escapes, you may no longer be able to re-JIT the method, attach a debugger, or introduce some other change that would invalidate those assertions. There is also a lack of understanding of where the cost of GC comes from, how it compares to other memory management techniques, and how it interacts with escape analysis (which in many ways resembles static lifetime analysis for linear and affine types), particularly when the analysis is inter-procedural. I am saying this in response to "GC-by-default is an automatic loss", which sounds like the overly generalized "GC bad" you get used to hearing from an audience that has never looked at it with a profiler.

And lastly - latency-sensitive gamedev, with its demand for predictability, comes with a completely different set of constraints than regular application code, and tends to require comparable techniques regardless of the language of choice, provided the language has capable compiler and GC implementations. It greatly favours GCs with low or schedulable STW pauses, ideally with some or most collection phases running concurrently, which perform best at moderate allocation rates. (Pause-less-like and especially non-moving designs tend to come with either very ARC-like synchronization costs and low throughput, as in Go, or significantly higher heap sizes over the actively used set, as in JVM pauseless GC implementations like Azul's, and maybe ZGC?) In the Unity case, there are quite a few poor-quality libraries, as well as constraints of Unity itself in regards to its rudimentary non-moving GC, which did receive upgrades for incremental per-frame collection but still causes issues in scenarios where it cannot keep up. This is likely why the author of the parent comment is so up in arms about GC.
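
For flavour, the per-frame budgeting pattern looks roughly like this in Unity - a hedged sketch assuming the UnityEngine.Scripting.GarbageCollector API of recent Unity versions, with an arbitrary budget, not a recommendation:

    using UnityEngine;
    using UnityEngine.Scripting;

    // Spend a bounded slice of incremental GC work at a point in the
    // frame we control, instead of letting collection land mid-frame.
    public class GcBudget : MonoBehaviour
    {
        const ulong FrameBudgetNs = 1_000_000; // ~1 ms of GC work per frame

        void LateUpdate()
        {
            // Returns true while more incremental work remains.
            GarbageCollector.CollectIncremental(FrameBudgetNs);
        }
    }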

However, for complex, frequently allocated and deallocated object graphs whose lifetimes are not immediately observable and not constrained to a single thread, a good GC is vastly superior to RC+malloc/free, and can only be matched by manually managing various arenas at a much greater complexity cost - which is still an option in a GC-based language like C# (and is a popular technique in this domain).
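
One concrete form of that option in C# is System.Buffers.ArrayPool<T>, which rents buffers from a shared pool instead of allocating fresh garbage per operation - a sketch, with an arbitrary buffer size:

    using System;
    using System.Buffers;

    class Arenaish
    {
        static void Main()
        {
            // Rent a buffer from the shared pool instead of allocating.
            // Note: the returned array may be larger than requested.
            byte[] buffer = ArrayPool<byte>.Shared.Rent(4096);
            try
            {
                // ... use buffer[0..4096) for one unit of work ...
            }
            finally
            {
                // Hand it back so the next caller reuses it - no garbage.
                ArrayPool<byte>.Shared.Return(buffer);
            }
        }
    }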


> I assume you're talking about Unity, is that correct

That particular project was Unity. Which, as you know, has a notoriously poor GC implementation.

It sure seems like there are a whole lot more bad GC implementations than good. And good ones are seemingly never available in my domains! Which makes their supposed existence irrelevant to my decision tree.

> good GC is vastly superior to RC+malloc/free

Ehhh. To be honest memory management is kind of easy. Memory leaks are easy to track. Double frees are kind of a non-issue. Use after free requires a modicum of care and planning.

> and can be matched by manually managing various arenas at much greater complexity cost, which is still an option in a GC-based language like C# (and is a popular technique in this domain).

Not 100% sure what you mean here.

I really really hate having to fight the GC and go out of my way to pool objects in C#. Sure it works. But it defeats the whole purpose of having a GC and is much much more painful than if it were just C.
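
The kind of ceremony I mean is roughly this - a minimal hand-rolled pool, names illustrative:

    using System.Collections.Generic;

    class Pool<T> where T : class, new()
    {
        readonly Stack<T> free = new Stack<T>();

        public T Rent() => free.Count > 0 ? free.Pop() : new T();

        // The caller must remember to Return(): forget, and you allocate
        // garbage anyway; return twice, and two users share one object.
        public void Return(T item) => free.Push(item);
    }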



