So, I think you've gravely misunderstood the concepts at work here.
You know NullPointerException? Does that feel like maybe "getting it wrong"? It still happens to Java programmers. It can't happen in (safe) Rust: if you write a program that could try to dereference a null pointer, it won't compile. You'd be getting it wrong.
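A minimal sketch of what I mean (the function and names are invented for illustration): where Java would hand you a possibly-null reference, Rust makes the absence explicit with Option, and you can't touch the value until you've dealt with the None case.

```rust
// Sketch: in safe Rust a reference simply cannot be null. Absence is
// spelled Option, and the compiler forces you to handle None.
fn shout(name: Option<&str>) -> String {
    match name {
        Some(n) => n.to_uppercase(),
        None => String::from("(no name given)"),
    }
    // Writing `name.to_uppercase()` directly would not compile:
    // Option<&str> has no such method, so the "null dereference"
    // is rejected before the program ever runs.
}

fn main() {
    println!("{}", shout(Some("ada")));
    println!("{}", shout(None));
}
```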
Or let's try something a bit more sophisticated. Many Java data structures can be subject to a Data Race in threaded software. So you may be "getting it wrong" in the sense that on John's 16-core monster server the output is incorrect, but on your cheap 10-year-old laptop it works (much more slowly, but) correctly. Both outcomes were valid meanings of your Java program, and Java provides some tools you could use to protect yourself, but it won't even warn you that you were "getting it wrong": the results are just incorrect, too bad.
In (safe) Rust, Data Races can't happen: the compiler will reject your program. Some other types of Race Condition can happen, but no Data Races.
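Roughly what that looks like in practice (a sketch, with made-up names): the unsynchronised version is rejected at compile time, and the version the compiler accepts has to spell out how the sharing is synchronised.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // The racy version does not compile: two threads are not allowed to
    // share an unsynchronised mutable borrow of the same counter.
    //
    //     let mut counter = 0;
    //     let t = thread::spawn(|| counter += 1); // rejected: the closure
    //     counter += 1;                           // borrows `counter`
    //     t.join().unwrap();
    //
    // To get past the compiler you have to say how the sharing is
    // synchronised, e.g. with Arc<Mutex<_>>:
    let counter = Arc::new(Mutex::new(0));
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                *counter.lock().unwrap() += 1;
            })
        })
        .collect();
    for handle in handles {
        handle.join().unwrap();
    }
    // Prints 4 on the 16-core server and the old laptop alike.
    println!("{}", *counter.lock().unwrap());
}
```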
I literally state in my post that I am referring to garbage collection and memory safety. I even state this specifically because I knew that if I didn't, someone would bring up completely irrelevant details for the sake of argument. And yet... here we are.
In Rust, data races are undefined behavior, and so safe Rust mostly prevents them, even though there are still, to this day, subtle questions about exactly where the boundary between safe and unsafe Rust lies. That said, this is a great thing that Rust provides, and a genuinely incredible step forward, but it has very little to do with this topic.
In Java, however, data races are not undefined behavior: they have well-specified semantics and do not result in memory errors the way they do in Rust or C++.
Calling a NullPointerException a memory safety violation is like calling a panic on unwrap, or a panic on an out-of-bounds array access, a memory safety violation (they're not). Both are well-defined operations with specified semantics.
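To make that concrete, a small sketch (nothing from anyone's real code): both failures below are ordinary, catchable panics with specified behaviour, not reads or writes outside the memory the program owns.

```rust
use std::panic;

fn main() {
    let xs = vec![10, 20, 30];
    let i = xs.len() + 4; // deliberately out of range

    // Both of these are well-defined panics, not memory errors: the
    // program unwinds with a specified message instead of touching
    // memory it does not own. (The panic messages go to stderr.)
    let out_of_bounds = panic::catch_unwind(|| xs[i]);
    let npe_analogue = panic::catch_unwind(|| Option::<i32>::None.unwrap());

    assert!(out_of_bounds.is_err());
    assert!(npe_analogue.is_err());
    println!("both failures were caught as ordinary panics");
}
```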
Are they likely bugs? Yes, absolutely, but neither Java nor Rust prevents developers from writing bugs, and the fact that you're confusing program correctness with memory safety only indicates that it's you who gravely misunderstands the concepts being discussed.
NullPointerException isn't a memory safety violation, but it is getting it wrong.
That's what the original comment claimed, that Rust's compiler "tells you when you're getting it wrong".
Since you brought it up - I'd actually say the existence of unwrap() shows the same pattern elsewhere in Rust. Java is one of many C-style languages in which silently discarding the important result of your call is a common mistake. In some cases Java tried to mitigate this with a Checked Exception, but that just adds boilerplate everywhere and doesn't do much to encourage a better way forward. Rust's Result and Option force a programmer to explicitly decide, each time, to discard unhandled cases (Errors and None respectively) if that's what they meant. Yet another case where the Rust compiler will tell you if you're getting it wrong.
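A quick sketch of what that looks like (the file path and function are invented): silently ignoring the Result draws the unused_must_use warning, so throwing the error away has to be written out explicitly.

```rust
use std::fs;
use std::io;

// Sketch: Rust won't let you silently drop a Result the way a C-style
// language lets you ignore a return code.
fn save(path: &str, data: &str) -> io::Result<()> {
    fs::write(path, data)?; // `?` propagates the error to the caller
    Ok(())
}

fn main() {
    // Calling `save(...)` bare and ignoring the Result triggers the
    // `unused_must_use` warning; to throw the error away you have to
    // say so explicitly:
    let _ = save("/tmp/example.txt", "hello");

    // Or you decide, visibly, that failure should abort:
    // save("/tmp/example.txt", "hello").unwrap();
}
```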
The original comment was speaking about getting it wrong with respect to memory safety, not NullPointerExceptions, not array out of bounds accesses, not division by zero, but about memory safety.
This discussion isn't about program correctness as a broad, general concept; Rust and Java both have various strategies to eliminate many classes of errors, and both languages leave the door open to many other classes of errors.
This discussion is about whether Rust uses a compile-time garbage collector in order to ensure memory safety. It does no such thing. Rust has a borrow checker, which ensures that syntactically valid expressions referencing memory have a correspondingly valid semantic interpretation. C++ does not have such a thing: syntactically valid expressions referencing memory may have no valid semantic interpretation at all, which is what is referred to as undefined behavior. That is not what a garbage collector does in any sense of the word. A garbage collector is a system that computes an upper bound on object lifetime and, when an object exceeds that upper bound, reclaims the memory associated with the object. Rust does no such thing at compile time.
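If it helps, a small sketch (made up for illustration) of the difference: the borrow checker rejects, at compile time, a reference that would outlive the value it points at, and the only thing inserted into the running program is a deterministic destructor call at a point decided from the program text, not from any runtime lifetime analysis.

```rust
fn main() {
    // The borrow checker rejects a reference that would outlive the
    // value it points at; no runtime machinery is involved.
    //
    //     let dangling: &String;
    //     {
    //         let s = String::from("short-lived");
    //         dangling = &s;   // error[E0597]: `s` does not live long enough
    //     }
    //     println!("{dangling}");
    //
    // What the compiler *does* insert is a deterministic destructor call
    // at the end of the owning scope, decided from the program text:
    struct Noisy;
    impl Drop for Noisy {
        fn drop(&mut self) {
            println!("dropped at the closing brace, every run");
        }
    }
    {
        let _n = Noisy;
    } // `drop` runs exactly here
    println!("after the inner scope");
}
```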
Rust's system of enforcing memory safety is great, it's a step forward in language design, by all means give it the praise it deserves... just don't refer to it by the name of a concept that already has a well-defined meaning and an active area of research. Compile-time garbage collection is a separate concept from how Rust enforces memory safety, and there's not much utility in reusing that term; all it does is create confusion.
You're clutching at unrelated straws. Rather than comparing to Java, try comparing to OCaml, which is a language that's much closer to "Rust with GC". There's pretty much no safety gain from using Rust over OCaml. But if you use OCaml you don't have to worry about borrow checking.
> There's pretty much no safety gain from using Rust over OCaml.
> But if you use OCaml you don't have to worry about borrow checking.
I've never written any OCaml. When you choose not "to worry about borrow checking", how does OCaml arrange to ensure your program is free from data races in concurrent code anyway? Or do you consider that "pretty much no safety gain"?
OCaml's memory model specifies bounded space-time SC-DRF.
What this comes down to, in simple terms, is that data races have well-specified semantics and their effects are bounded both in terms of what is affected by a data race and when it is affected.
Using C as a starting point: a data race can modify any region of memory, not just the memory involved in the read/write, and the modification can be observed at any time; it might be observed after the write operation of the data race executes, or before the write operation executes (due to instruction reordering).
In Java, data races are well specified using bounded-space SC-DRF. This means that, unlike in C, data races are NOT undefined behavior. A data race is limited to modifying only the specific primitive value that was written to. However, the model does not specify bounded time, so when the modification of that primitive value is observed is not specified by the Java memory model; it could happen before or after the write operation.
OCaml's memory model specifies both bounded space and time SC-DRF. When a data race occurs, it can only modify the primitive value that was written to, and the modification must be observed no sooner than the beginning of the write operation and no later than the end of the write operation.
That was a very long-winded non-answer, but I think I understood it to be essentially "Yes".
I'm definitely not an expert, but to me this memory model sounds like a more circumspect attempt to carve out a set of benign data races which we believe are OK. Now, perhaps it will work this time, but on each previous occasion it has failed, exactly as illustrated by Java.
Indeed the Sivaramakrishnan slides I'm looking at about this are almost eerily reminiscent of the optimism for Java's memory model when I was younger (and more optimistic myself). We'll provide programmers with this new model, which is simpler to reason about, and so the problem will evaporate.
Some experts, some of the time, were able to correctly reason about both the old and new models; too many programmers, too often, got even the new model wrong.
So that leads me to think Rust made the right choice here. Let's have a compiler diagnostic (from the borrow checker) when a programmer tries to introduce a data race, rather than contort ourselves trying to come up with a model in which we can believe any races are benign and can't really have any consequences we haven't reasoned about.
Of course, unsafe Rust might truly benefit from nicer models anyway; they could hardly be worse than the existing C++11 model, but that's a different story.