> To me it's not about performance. A little bit of time spent now appeasing the borrow checker will pay off ten fold later when you don't have to deal with exploding memory usage and GC stalls in production.
I'm confused by the "it's not about performance. [reasons why it is, in fact, about performance]" phrasing, but in general a lot of applications aren't bottlenecked by memory and a GC works just fine. Even when that's not entirely the case, they often only have one or two critical paths that are bottlenecked on memory, and those paths can be optimized to reduce allocations.
> GC is great for quick hack jobs, scripts, or niches like machine learning, but I believe at this point it's a failed experiment for anything else.
That sounds kind of crazy considering how much of the world runs on GC (certainly much more than the other way around). I feel the need to reiterate that I'm not a GC purist by any means--I've done a fair amount of C and C++ including some embedded real time. But the idea that GC is a failed experiment is utterly unsupported.
That's the story that GC sold us. History has proven it wrong. Citation: the fire hose of articles on HN about how GC bit people in the ass, and they now have to go back into their code and write a bunch of duct tape code to work around Garbage Collector Quirk #4018 du jour that results in hitching, insane memory usage, and random OOMs.
> That sounds kind of crazy considering how much of the world runs on GC
And much of HN runs on comments complaining about the _absurd_ amounts of memory all those non-bottlenecked applications use to do otherwise simple tasks. Or the monthly front page articles about developers and companies working to fix their otherwise straightforward, non-performance-critical production services that are choking themselves because the GC is going wild.
I say GC is a failed experiment because it promised that programmers would be able to write code without worrying about memory. But ever since its popularization 26 years ago with the dawn of Java, coders writing in garbage collected languages have been doing nothing but worrying about memory. The experiment failed. It's time to move on.
The borrow checker is an infantile incarnation of a bigger idea that is finally panning out: rather than garbage collecting at run-time, garbage collect during compilation using static analysis. Being in its infancy, it's not as easy and free to use as we'd like. But it's the path forward. And just like garbage collection before it, in the vast majority of cases, programmers don't care whether it's more or less performant. Garbage collection was vastly less performant than manual management. But it required _so_ much less developer time to build the same applications. My argument is that Rust's borrow checker, as painful as it is, costs more developer time up front, but less developer time overall when you consider the long tail of code upkeep that garbage collected applications demand.
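To make the compile-time idea concrete, here's a minimal Rust sketch (my own example, not from the thread): ownership lets the compiler prove where a heap allocation stops being used and emit the free at that point, with no collector running alongside the program.

```rust
fn main() {
    // heap allocation, owned by `data`
    let data = vec![1, 2, 3];
    let total: i32 = data.iter().sum();
    println!("{total}");
    // no GC: the compiler knows `data` is never used past this point
    // and inserts the deallocation (Drop) at the end of its scope
}
```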
Hence my comment: "A little bit of time spent now appeasing the borrow checker will pay off ten fold later when you don't have to deal with exploding memory usage and GC stalls in production."
It's not about performance; it's about saving yourself the time of having to come back to your code a month later because your TODO app is using a gig of RAM and randomly hitching.
I’ve been working in tech for a decade; I’ve scaled several large products and worked on complex distributed systems with a lot of users and some serious workloads. Memory use has been a major issue perhaps two or three times total.
> It's not about performance; it's about saving yourself the time of having to come back to your code a month later because your TODO app is using a gig of RAM and randomly hitching.
If your GC program is using excessive RAM, that’s because of a memory leak, not the garbage collector. This can happen in C/C++ as well; just malloc/new and forget to free/delete. Last I checked, C and C++ aren’t garbage collected languages.
It's rarer, and you can rule it out entirely by just not using types that let you leak memory. Afaik, circular `Rc` references, `Box::leak` (and friends), `MaybeUninit` and overzealous thread spawning are the only ways of leaking memory in safe Rust.
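For the circular `Rc` case, here's a minimal sketch (types and names are mine, purely illustrative) of safe Rust that leaks: the two strong counts never reach zero, so neither allocation is ever freed.

```rust
use std::cell::RefCell;
use std::rc::Rc;

struct Node {
    next: RefCell<Option<Rc<Node>>>,
}

fn main() {
    let a = Rc::new(Node { next: RefCell::new(None) });
    let b = Rc::new(Node { next: RefCell::new(Some(Rc::clone(&a))) });
    // complete the cycle: a -> b -> a
    *a.next.borrow_mut() = Some(Rc::clone(&b));
    // when `a` and `b` go out of scope, each node's strong count only
    // drops to 1 (the reference held by the other node), so Drop never
    // runs and both allocations leak -- in 100% safe Rust
}
```

The usual fix is to make one direction of the link a `Weak<Node>` so the cycle can't keep itself alive.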
Even if you avoid those things, how does safe Rust make leaks more rare than in a GC language? Presumably leaks in a GC language or safe Rust are almost always going to be stuff like “pushing things into a vector repeatedly even though you only care about the last item in the vector”, and clearly safe Rust doesn’t stop you from doing this any more than a GC.
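As a rough illustration of that kind of logical leak (example names are mine): this is perfectly valid safe Rust, and a GC in any other language would be just as powerless, since the vector stays reachable the whole time.

```rust
fn main() {
    let mut history: Vec<String> = Vec::new();
    for i in 0..1_000_000u32 {
        // we only ever look at the most recent entry, but keep every
        // one of them anyway; neither the borrow checker nor a GC
        // objects, because `history` is still live and reachable
        history.push(format!("event {i}"));
        let _latest = history.last();
    }
    println!("retained {} entries we never needed again", history.len() - 1);
}
```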
Note also that GC languages don’t even have the circular references case to worry about since they don’t have any need for reference counting in general.
> That's the story that GC sold us. History has proven it wrong. Citation: the fire hose of articles on HN about how GC bit people in the ass, and they now have to go back into their code and write a bunch of duct tape code to work around Garbage Collector Quirk #4018 de jure that results in hitching, insane memory usage, and random OOMs.
I follow HN daily and very rarely do I see articles lamenting GC (I'm only familiar with a small handful of incidents, including some pathological cases with Go's GC on enormous heaps (many terabytes) and some complaints about Java's GC having too many knobs), certainly not in the general case. Indeed, for the most part people seem quite happy with GC languages, especially Go's sub-ms GC. In particular, memory usage (and thus OOMs) has nothing to do with GC--it's every bit as easy to use a lot of memory in a language that lacks GC altogether. This is incorrect, full stop.
> I say GC is a failed experiment because it promised that programmers would be able to write code without worrying about memory.
GC promises that programmers don't have to worry about freeing memory correctly, and it delivers on that promise. I'm not a GC purist--there's lots of criticism to be had for GC, but we don't need to point at patently false criticisms.
> The borrow checker is an infantile incarnation of a bigger idea that is finally panning out. Rather than garbage collecting during run-time; garbage collect during compilation using static analysis. Being in its infancy it's not as easy and free to use as we'd like. But it's the path forward.
Maybe. I like the idea, but I'm skeptical that putting constraints on the programmer is going to be an economical solution, at least for so long as the economics favor rapid development over performance. Conceivably rather than rejecting code that aggrieves the borrow checker, we could picture a language that converts those references into garbage collected pointers transparently, but we kind of have this already today via escape analysis--and indeed, I think this is the economic sweet spot for memory management because it lets users have a GC by default but also minimize their allocations for hot paths.
> Hence my comment: "A little bit of time spent now appeasing the borrow checker will pay off ten fold later when you don't have to deal with exploding memory usage and GC stalls in production."
But the borrow checker is strictly less effective at preventing memory leaks than a (tracing) GC (the borrow checker will happily allow circular refcounts). More importantly, having to pacify the borrow checker on every single code path when only 1-2% of code paths are ever going to be problematic is not a good use of your time, especially when you can do some light refactoring to optimize. With respect to GC stalls, these are particularly rare if you have a GC that is tuned for low latency (Go's GC can free all memory in less than a millisecond in most cases).
> It's not about performance; it's about saving yourself the time of having to come back to your code a month later because your TODO app is using a gig of RAM and randomly hitching.
That sounds like the textbook definition of a performance concern, but again, memory usage is orthogonal to GC and random hitching isn't a problem for latency-tuned GCs. Even where Rust is faster than many of its GC counterparts, the difference typically comes down to the ability of the compiler to output optimized code--not the memory management system. That said, for realtime applications, nondeterministic GCs aren't appropriate.