You get memory safety. That's about it for security. Quality is in the eye of the beholder. Maybe it's quality code? Maybe it's junk, who knows. Rust isn't magic that forces code to be quality code or anything. That said, the code in the Redox system is generally good, so it's probably fine, but that's not because it's written in Rust; it's because of the developers.
Any GC'd language (Python, JS, Ruby, etc.) gives you the same memory safety guarantees that Rust gives you. Of course GC'd languages tend to go a fair bit slower (sometimes very drastically slower) than Rust. So really the memory safety bit of Rust is that the memory safety is enforced at development and compile time, instead of at runtime like in a GC language. So you get the speed of other "systems" languages like C and C++ AND memory safety.
While I generally agree, the latest Android report suggests that Rust developers see fewer reverts and their code reviews go faster. This could mean that better developers tend to adopt Rust, or it could mean that Rust really is a language where quality is easier to attain.
There’s some reason to believe that, given how easy it is to integrate testing into a Rust code base. The trait and type system also tends to produce somewhat higher-quality code: it encourages better isolation, immutability by default, and APIs that can’t misuse a data structure in some unsafe way. And it has a rich ecosystem that makes it easy to integrate third-party dependencies, so you’re not solving the same problem repeatedly like you tend to do in C++.
So there is some reason to believe Rust does actually encourage slightly higher quality code out of developers.
Or there are "reverts and code reviews are faster" because no one wants to actually read through the perl-level line-noise type annotations, and just lgtm.
> Or there are "reverts and code reviews are faster"
This seems like a slight misreading of the comment you're responding to. The claim is not that reverts are "faster", whatever that would mean; the claim is that the revert rate is lower.
Also, how would skimping on reviews lead to a lower revert rate? If anything, I'd imagine the normal assumption would be precisely the opposite - that skimping on reviews should lead to a higher revert rate, which is contrary to what the Android team found.
What type annotations? In Rust almost all the types are inferred outside of function and struct declarations. In terms of type verbosity (in the code) it is roughly on the same level as TypeScript and Python.
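To illustrate the point (a toy sketch of my own, not code from the thread): the only annotations live on the function signature, and everything inside is inferred.

    // Types appear only at the function boundary; locals are inferred.
    fn word_lengths(text: &str) -> Vec<usize> {
        let words = text.split_whitespace();  // iterator type inferred
        let lengths = words.map(|w| w.len()); // closure argument type inferred
        lengths.collect()                     // Vec<usize> inferred from the signature
    }

    fn main() {
        println!("{:?}", word_lengths("roughly the same as TypeScript"));
    }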
> Rust isn't magic that forces code to be quality code or anything.
It is not, but the language and ecosystem are generally very well-designed and encourage you to "do the right thing," as it were. Many of the APIs you'd use in your day-to-day are designed to make it much harder to hold them wrong. On balance, outside of Haskell, it's probably the implementation language that fills me with the most confidence for any given particular piece of software.
Most of the performance penalty for the languages you mentioned is because they're dynamically typed and interpreted. The GC is a much smaller slice of the performance pie.
In native-compiled languages (Nim, D, etc), the penalty for GC can be astoundingly low. With a reference counting GC, you're essentially emulating "perfect" use of C++ unique_ptr. Nim and D are very much performance-competitive with C++ in more data-oriented scenarios that don't have hard real-time constraints, and that's with D having a stop-the-world mark-and-sweep GC.
The issue then becomes compatibility with other binary interfaces, especially C and C++ libraries.
> With a reference counting GC, you're essentially emulating "perfect" use of C++ unique_ptr.
Did you mean shared_ptr? With unique_ptr there's no reference-counting overhead at all. When the reference count is atomic (as it must be in the general case), it can have a significant and measurable impact on performance.
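The same distinction exists in Rust terms, which may make the cost easier to see (my own illustrative sketch, not from the parent comment): `Box` is unique ownership with no count at all, while `Arc` pays an atomic increment/decrement per clone/drop.

    use std::sync::Arc;

    fn main() {
        // Box<T>: unique ownership, like unique_ptr. No count is kept;
        // the buffer is freed when `unique` goes out of scope.
        let unique = Box::new([0u8; 1024]);

        // Arc<T>: like shared_ptr with an atomic count. Every clone is an
        // atomic increment and every drop an atomic decrement, which is the
        // measurable overhead in hot paths.
        let shared = Arc::new([0u8; 1024]);
        let handle = Arc::clone(&shared); // count 1 -> 2 (atomic RMW)
        drop(handle);                     // count 2 -> 1 (atomic RMW)
        drop(shared);                     // count 1 -> 0, buffer freed here

        println!("{}", unique.len());     // still uniquely owned, no count anywhere
    }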
You might be right. Though with the way I design software, I'm rarely passing managed objects via moves to unrelated scopes. So usually the scope that calls the destructor is my original initializing scope. It's a very functional, pyramidal program style.
Definitely true! Probably add Swift to that list as well. Apple has been pushing to use Swift in WebKit in addition to C++.
Actually Nim 2 and Swift both use automatic reference counting, which is very similar to using C++’s shared_ptr or Rust’s Rc/Arc. If I couldn’t use Nim I’d probably go for Swift. Rust mostly gives me a headache. However, Nim is fully open source and independent.
Though Nim 2 does default to the RC + cycle collector memory management mode. You can turn off the cycle collector with mm:arc, or switch to atomic reference counting with mm:atomicArc. Perfect for most system applications or embedded!
IMHO, most large Rust projects will likely use Rc or Arc types or lots of clone calls. So performance-wise it’s not gonna be too different from Nim or Swift or even D, really.
> IMHO, most large Rust projects will likely use Rc or Arc types or lots of clone calls. So performance-wise it’s not gonna be too different from Nim or Swift or even D, really.
I do not think so. My personal experience is that you can go far in Rust without cloning/Rc/Arc while not opting for unsafe. It is good to have borrowing as the default and use Rc/Arc only when (and especially where) needed.
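As a toy illustration of "going far without cloning" (my own sketch, not from the thread): the callee borrows, nothing is cloned, and no reference count exists anywhere.

    // The function takes a shared borrow and hands back a borrowed slice of it.
    fn longest<'a>(lines: &'a [String]) -> Option<&'a str> {
        lines.iter().map(|s| s.as_str()).max_by_key(|s| s.len())
    }

    fn main() {
        let lines = vec!["short".to_string(), "a much longer line".to_string()];
        if let Some(l) = longest(&lines) {
            println!("{l}");
        }
        // `lines` is still owned here; no Rc, Arc or clone was needed.
    }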
Being curious, I ran some basic grepping and wc on the Ion Shell project. About 2.19% of its function declarations use Rc or Arc in the signature. That is pretty low.
Naive grepping for `&` (excluding `&&`), assuming most are borrows, turns up 1135 lines. `clone` occurs in 62 lines, for a ratio of about 5.4%. Including Rc and Arc along with clones, you get about 10.3% versus `&` borrows. That's using lines containing Rc/Arc as a rough surrogate for actual Rc/Arc usage.
For context, doing a grep for `ref object` vs `object` in my company's Nim project and its deps gives a rate of 2.92% ref objects vs value objects. Nim will pass pointers to value objects in many functions. That's actually much lower than I would've guessed.
Overall, 2.19% of Rust functions in Ion use Rc/Arc vs 2.92% of my Nim project's types being ref rather than value objects. So it's not unreasonable to hold that they make similar use of reference counting vs value types.
> You get memory safety. That's about it for security
Not true, you get one of the strongest and most expressive type systems out there.
One example is the mutability guarantees, which are stronger than in any other language. In C++ with const you say "I'm not going to modify this." In Rust with &mut you're saying "Nobody else is going to modify this." This is 100x more powerful, because you can guarantee nobody messes with the values you borrow. That addresses one very common issue in efficient C++ code that is otherwise unfixable.
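A minimal sketch of what that exclusivity buys (my own example, not from the comment): while a `&mut` borrow is live, the compiler rejects every other access to the value.

    fn main() {
        let mut scores = vec![1, 2, 3];

        let first = &scores[0];        // shared borrow: read-only view
        // scores.push(4);             // rejected: can't mutate while `first` is live
        println!("{first}");

        let all = &mut scores;         // exclusive borrow: nobody else may even read
        all.push(4);
        // println!("{}", scores[0]);  // rejected while `all` is still in use
        println!("{:?}", all);
    }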
Sum types (enums with values) enable designing with types in a way otherwise only doable in ML-family languages. Derive macros make them easy to use as well, since you can skip the boilerplate.
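For instance (an illustrative sketch of my own): each variant carries its own data, the derives remove the boilerplate, and `match` forces every case to be handled.

    // A sum type: each variant carries exactly the data that state needs.
    #[derive(Debug, Clone, PartialEq)]
    enum Payment {
        Cash,
        Card { last4: u16 },
        Voucher(String),
    }

    fn describe(p: &Payment) -> String {
        // Leaving a variant unhandled is a compile error, not a runtime surprise.
        match p {
            Payment::Cash => "cash".to_string(),
            Payment::Card { last4 } => format!("card ending {last4}"),
            Payment::Voucher(code) => format!("voucher {code}"),
        }
    }

    fn main() {
        println!("{}", describe(&Payment::Card { last4: 4242 }));
    }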
Integers of different sizes need explicit casting, another common source of bugs.
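E.g. (toy sketch): mixing integer widths is a type error until you say explicitly what you mean.

    fn main() {
        let small: u8 = 200;
        let big: u64 = 100_000;

        // let sum = small + big;         // rejected: mismatched integer types
        let sum = u64::from(small) + big; // widening is explicit and lossless
        let back = (sum % 256) as u8;     // narrowing is explicit too
        println!("{sum} {back}");
    }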
Macros are evaluated as part of the AST. A lot safer than text substitution.
Further, the borrow checker helps enable compile-time checking of concurrency.
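A sketch of the kind of thing that gets caught (my own example, not from the comment): sharing mutable state across threads without synchronization doesn't compile, while the Mutex version does.

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // let mut count = 0;
        // thread::spawn(|| count += 1); // rejected: the closure may outlive `count`,
        //                               // and unsynchronized mutation isn't allowed

        let count = Arc::new(Mutex::new(0));
        let handles: Vec<_> = (0..4)
            .map(|_| {
                let count = Arc::clone(&count);
                thread::spawn(move || *count.lock().unwrap() += 1)
            })
            .collect();

        for h in handles {
            h.join().unwrap();
        }
        println!("{}", *count.lock().unwrap()); // always 4; data races can't compile
    }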
The list goes on, but nobody that has tried Rust properly can say that it only helps prevent memory safety issues. If that's your view, you just showed that you didn't even try.