Ehhh I’d have to hard disagree with #1. But we’ll likely just have to agree to disagree.
Maybe I’m just a bad enough programmer I write parallel bugs. But C++ certainly doesn’t make it easy to write correct code in any way.
I personally think it’s pretty darn difficult to ensure correctness in a large program. Especially when multiple programmers are involved. And especially when you are adding features to an existing system.
However I will also admit that I haven’t written a large Rust program so I can’t claim to have run into all of its warts.
There are no silver bullets in life. I work primarily in video games. GC is the bane of my existence and is something that provides seriously negative value.
We definitely agree it’s all a trade off. GC provides some value and some costs. Borrow checkers provide some value and some cost.
> It only means that the data model doesn't agree with Rust's rules for modeling data (which exist to ensure memory safety in the absence of a GC).
I think my initial response was that Rust’s model exists for more than just that single reason. Whether those reasons or trade-offs are useful depends on the program in question.
In my work I never want a GC, but damn would I love a borrow checker.
C++ provides the tools to build robust parallelism that is also optimal, with the giant caveat that implementing it is left to the programmer (and good libraries are not abundant). Rust offers built-in correctness but not optimality, and C++ offers optimality but no built-in correctness. Many massively parallel codes and virtually all massively concurrent codes necessarily lean on latency hiding mechanics for concurrency and safety. Ownership-based safety can’t be determined at compile-time in these cases, but you can prove in many cases that safety can be dynamically resolved by the scheduler at runtime regardless of how many mutable references exist. This has a lot of similarities to agreement-free consistency in HPC, where no part of the system has the correct state of the system but you can prove that the “system” will always converge on a consistent computational result (another nice set of mechanics from the HPC world).
The problem with ownership-based safety for massive parallelism is that the mechanics of agreeing on and determining ownership don’t scale and often can’t be determined at compile-time. Some other safety mechanics don’t have these limitations. C++ doesn’t have them built-in but you can implement them.
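To make that less abstract, here’s a minimal sketch in C++ of what “the scheduler resolves safety at runtime” can mean. Everything here is invented for illustration (it’s not any particular library): each task declares the resources it mutates, and the scheduler only runs tasks side by side when their write sets don’t collide. A real version would also track read/write conflicts and use a proper thread pool; this is just the shape of the idea.

```cpp
#include <algorithm>
#include <functional>
#include <set>
#include <thread>
#include <utility>
#include <vector>

struct Task {
    std::set<int> writes;       // ids of resources this task mutates
    std::function<void()> run;  // the actual work
};

// Greedily pack tasks into batches whose write sets are disjoint, then run
// each batch in parallel. Conflicting tasks are deferred to a later batch,
// so no two threads ever mutate the same resource at the same time.
// (For brevity this only tracks write/write conflicts.)
void run_dynamically_scheduled(std::vector<Task> tasks) {
    while (!tasks.empty()) {
        std::vector<Task> batch, deferred;
        std::set<int> claimed;
        for (auto& t : tasks) {
            bool conflict = std::any_of(
                t.writes.begin(), t.writes.end(),
                [&](int r) { return claimed.count(r) > 0; });
            if (conflict) {
                deferred.push_back(std::move(t));
            } else {
                claimed.insert(t.writes.begin(), t.writes.end());
                batch.push_back(std::move(t));
            }
        }
        std::vector<std::thread> threads;
        for (auto& t : batch) threads.emplace_back(t.run);
        for (auto& th : threads) th.join();
        tasks = std::move(deferred);
    }
}
```

The point being: how many mutable references exist doesn’t matter here; what matters is that the scheduler never lets two of them be live on the same resource at the same instant, and that’s a property you prove about the scheduler rather than about each call site.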
> Maybe I’m just a bad enough programmer I write parallel bugs. But C++ certainly doesn’t make it easy to write correct code in any way. I personally think it’s pretty darn difficult to ensure correctness in a large program.
IMO the key to writing parallel code is to keep the parallelism confined to a small kernel rather than sprawling throughout your codebase. If you try to bolt on parallelism then you’re going to have a bad time. It needs to be part of your architecture. It’s not easy, but it’s easier than writing parallel code that will pass the borrow checker IMHO. But yes, we may have to agree to disagree.
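As a sketch of what I mean by confining it (assuming C++17 and a toolchain where the standard parallel algorithms are actually parallel; compute_magnitudes is just a made-up example): the rest of the codebase calls a plain function, and the only parallel code in the program lives inside it.

```cpp
#include <algorithm>
#include <cmath>
#include <execution>
#include <vector>

// Callers treat this like any other sequential function. The parallelism
// never leaks into the calling code's data structures or control flow.
// Assumes xs.size() == ys.size().
std::vector<float> compute_magnitudes(const std::vector<float>& xs,
                                      const std::vector<float>& ys) {
    std::vector<float> out(xs.size());
    std::transform(std::execution::par_unseq,
                   xs.begin(), xs.end(), ys.begin(), out.begin(),
                   [](float x, float y) { return std::sqrt(x * x + y * y); });
    return out;
}
```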
> We definitely agree it’s all a trade off. GC provides some value and some costs. Borrow checkers provide some value and some cost.
Agreed!
> I work primarily in video games. GC is the bane of existence and is something that provides seriously negative value.
I’m very curious about videogame development. In particular, I get the impression that aversion to GC in videogames comes down largely to experiences with Java back when pauses could be 300ms. I’m very curious if Go’s sub-millisecond GC (and its ability to minimize allocations, etc.) would be amenable to videogame development. Thoughts?
> I think my initial response was that Rust’s model exists for more than just that single reason. Whether those reasons or trade-offs are useful depends on the program in question.
Heartily agree.
> In my work I never want a GC, but damn would I love a borrow checker.
In my line of work, I like the idea of using Rust but realistically the economic sweet spot is something like “Go with sum types” or “Rust-lite”.
> I’m very curious about videogame development. In particular, I get the impression that aversion to GC in videogames comes down largely to experiences with Java back when pauses could be 300ms. I’m very curious if Go’s sub-millisecond GC (and its ability to minimize allocations, etc.) would be amenable to videogame development. Thoughts?
It really just comes down to average/worst case time.
Modern games are expected to run anywhere from 60 to 240 frames per second. 60 is the new baseline. VR runs anywhere from 72 to 120. Gaming monitors regularly hit 144Hz. And esports goes as high as 240 and even 360.
In Unity, a C# GC pass can take tens of milliseconds. This is, uh, obviously very bad. High-tier Unity games spend a LOT of time avoiding all allocs. This is not fun in a GC language. Most indie games just hitch, and it’s pretty obvious. I’m not sure if Unity’s incremental GC has graduated from experimental mode.
If a GC had a worst-case time of less than a millisecond, that’d be fine. That’s actually a pretty big chunk of a 7ms frame, but hey, probably worth it. If it’s usually 250us but once a minute spikes to 3ms, that’ll cause a frame to miss. If once every 5 minutes it’s a 50ms GC, that’s a huge hitch. For a single-player game it’s sloppy. For a competitive multiplayer game it’s catastrophic.
Unreal Engine actually has a custom garbage collector, but it’s only for certain data types and not all memory. That’s a nice compromise. Games in particular are good at knowing if the lifetime of an allocation is short, per-frame, long-term, etc.
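To illustrate the “per-frame” bucket: a lot of engines hand scratch allocations out of a bump allocator that gets reset wholesale at the end of the frame, so there’s no per-object free and nothing for a collector to trace. A rough sketch (FrameArena is a made-up name, not any engine’s API; assumes power-of-two alignments):

```cpp
#include <cstddef>
#include <vector>

class FrameArena {
public:
    explicit FrameArena(std::size_t bytes) : buffer_(bytes), offset_(0) {}

    // Bump-allocate from the preallocated buffer; returns nullptr when full.
    // `align` must be a power of two.
    void* allocate(std::size_t size,
                   std::size_t align = alignof(std::max_align_t)) {
        std::size_t aligned = (offset_ + align - 1) & ~(align - 1);
        if (aligned + size > buffer_.size()) return nullptr;
        offset_ = aligned + size;
        return buffer_.data() + aligned;
    }

    // Called once per frame: everything allocated this frame is "freed" at once.
    void reset() { offset_ = 0; }

private:
    std::vector<std::byte> buffer_;
    std::size_t offset_;
};

// Typical use: arena.reset() at the start of each frame, then hand out
// scratch memory freely during the frame without ever calling free/delete.
```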
> I’m very curious about videogame development. In particular, I get the impression that aversion to GC in videogames comes down largely to experiences with Java back when pauses could be 300ms. I’m very curious if Go’s sub-millisecond GC (and its ability to minimize allocations, etc.) would be amenable to videogame development. Thoughts?
Well, Minecraft is written in Java, and it runs fine from what I’ve heard. In .NET land, there was a short-lived toolkit for C# called XNA; Terraria is (was?) written in it. Both Java and C# are garbage collected.
I haven’t looked at Unity too deeply, but isn’t Unity (and the games made in it) built in C#?
Game programmers mostly want tight control of object layout and lifecycle. GC doesn’t matter much when you use ECS all over your codebase. As long as they can run the GC when they want and can lay out the objects flat without needing pointer indirection, it would be very suitable.
I think C# is popular because it allows the above. When Java has proper value types it might be suitable for writing games.
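For concreteness, the layout being asked for looks roughly like this (sketched in C++ with made-up names, but an array of C# structs gives you the same shape): components are plain value types sitting in contiguous arrays, and the per-frame systems just walk memory linearly with no pointer chasing and no per-object heap allocations.

```cpp
#include <cstddef>
#include <vector>

struct Position { float x, y, z; };  // plain value types, no references
struct Velocity { float x, y, z; };

struct World {
    std::vector<Position> positions;   // one flat array per component,
    std::vector<Velocity> velocities;  // index i = entity i
};

// A per-frame "system": iterates contiguous memory, cache-friendly,
// and allocates nothing, so a GC (if one existed) would have no work to do.
void integrate(World& w, float dt) {
    for (std::size_t i = 0; i < w.positions.size(); ++i) {
        w.positions[i].x += w.velocities[i].x * dt;
        w.positions[i].y += w.velocities[i].y * dt;
        w.positions[i].z += w.velocities[i].z * dt;
    }
}
```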
> As long as they can run the GC when they want and can lay out the objects flat without needing pointer indirection, it would be very suitable.
To be clear, the problem isn’t pointer indirection, but rather lots of objects on the heap, right? Pointers should be fine as long as there aren’t many allocations (e.g., pointers into an arena)?
> I think C# is popular because it allows the above. When Java has proper value types it might be suitable for writing games.
Go also has value types, FWIW, and they are a lot more idiomatic than in C# from what I’ve observed.
Yeah, in theory, but I’ve never had much luck with OCaml, and every time I dig into my problems here it ends with the OCaml fanboys berating me, so I’ve pretty much given up on it.