> That's the story that GC sold us. History has proven it wrong. Citation: the fire hose of articles on HN about how GC bit people in the ass, and they now have to go back into their code and write a bunch of duct tape code to work around Garbage Collector Quirk #4018 du jour that results in hitching, insane memory usage, and random OOMs.
I follow HN daily and very rarely do I see articles lamenting GC (I'm only familiar with a small handful of incidents, including some pathological cases with Go's GC on enormous heaps (many terabytes) and some complaints about Java's GC having too many knobs), and certainly not in the general case. Indeed, for the most part people seem quite happy with GC languages, especially Go with its sub-ms GC pauses. In particular, memory usage (and thus OOMs) has nothing to do with GC--it's every bit as easy to use a lot of memory in a language that lacks GC altogether. The claim that GC itself causes exploding memory usage and OOMs is incorrect, full stop.
> I say GC is a failed experiment because it promised that programmers would be able to write code without worrying about memory.
GC promises that programmers don't have to worry about freeing memory correctly, and it delivers on that promise. I'm not a GC purist--there's lots of criticism to be had for GC, but we don't need to point at patently false criticisms.
> The borrow checker is an infantile incarnation of a bigger idea that is finally panning out. Rather than garbage collecting during run-time; garbage collect during compilation using static analysis. Being in its infancy it's not as easy and free to use as we'd like. But it's the path forward.
Maybe. I like the idea, but I'm skeptical that putting constraints on the programmer is going to be an economical solution, at least so long as the economics favor rapid development over performance. Rather than rejecting code that trips the borrow checker, one could imagine a language that transparently converts those references into garbage-collected pointers--but we more or less have this today via escape analysis. Indeed, I think this is the economic sweet spot for memory management, because it gives users a GC by default while letting them minimize allocations on hot paths.
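To make the escape-analysis point concrete, here's a minimal Go sketch (the function names are made up purely for illustration): the compiler decides per value whether it can live on the stack or has to go to the GC-managed heap, and `go build -gcflags=-m` reports those decisions.

```go
package main

import "fmt"

// sumLocal's slice never escapes the function, so escape analysis
// can keep the backing array on the stack: no GC work at all.
func sumLocal() int {
	xs := make([]int, 8) // reported as "does not escape" by -gcflags=-m
	for i := range xs {
		xs[i] = i
	}
	total := 0
	for _, x := range xs {
		total += x
	}
	return total
}

// newCounter returns a pointer, so the int must outlive the call;
// it "escapes to heap" and becomes the garbage collector's problem.
func newCounter() *int {
	n := 0
	return &n
}

func main() {
	fmt.Println(sumLocal(), *newCounter())
}
```

Hot paths can then be tuned by restructuring code so values stop escaping, without giving up the GC everywhere else.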
> Hence my comment: "A little bit of time spent now appeasing the borrow checker will pay off ten fold later when you don't have to deal with exploding memory usage and GC stalls in production."
But the borrow checker is strictly less effective at preventing memory leaks than a (tracing) GC: it will happily accept reference-counted cycles, which then never get freed. More importantly, having to pacify the borrow checker on every single code path when only 1-2% of code paths will ever be problematic is not a good use of your time, especially when some light refactoring later can optimize the paths that turn out to matter. As for GC stalls, these are particularly rare if your GC is tuned for low latency--Go's GC keeps stop-the-world pauses under a millisecond in most cases.
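On the stall point, the pause numbers are easy to check empirically. A rough Go sketch (the allocation loop is arbitrary, just to give the collector something to do) that reads the runtime's own pause statistics:

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// Churn through some allocations so several GC cycles actually run.
	for i := 0; i < 1_000_000; i++ {
		_ = make([]byte, 128)
	}
	runtime.GC() // force one more full cycle before reading stats

	var m runtime.MemStats
	runtime.ReadMemStats(&m)

	// PauseNs is a circular buffer of recent stop-the-world pause times;
	// the most recent pause lives at PauseNs[(NumGC+255)%256].
	last := m.PauseNs[(m.NumGC+255)%256]
	fmt.Printf("GC cycles: %d, last pause: %dµs, total pause: %dµs\n",
		m.NumGC, last/1000, m.PauseTotalNs/1000)
}
```

If the sub-millisecond claim holds on your workload, the individual pauses printed here should come out well under 1000µs.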
> It's not about performance; it's about saving yourself the time of having to come back to your code a month later because your TODO app is using a gig of RAM and randomly hitching.
That sounds like the textbook definition of a performance concern, but again, memory usage is orthogonal to GC, and random hitching isn't a problem for latency-tuned GCs. Even where Rust is faster than its GC'd counterparts, the difference typically comes down to the compiler's ability to emit optimized code--not the memory management system. That said, for realtime applications, nondeterministic GCs aren't appropriate.