This is because of how the tutorial author and the crate they use chose to represent the IO. In this case it is a Mutex, wrapping a reference-counted pointer, wrapping an Option<[GPIO type]> to prevent uninitialised hardware from being accessed before it is ready. So the "problem" here is the safe abstractions they use over I/O. You can write another interface to avoid the borrow().borrow().unwrap().unwrap().whatever() chain of calls.
It might look better if you split it across lines:
let mutex = MY_GPIO.borrow(cs).borrow().as_ref().unwrap();
mutex.odr.modify(|_, w| w.odr1().set_bit());
I'm curious whether all the mutexes and reference-counted pointers contribute any runtime cost, and to what extent that is relevant in the embedded world. (I don't know that much about the embedded field, but I've heard it's also pretty performance-sensitive, so I'm curious.)
Some people have written the equivalent C code for this (`*GPIO |= 1;`). Are the existing C embedded programmers typically not using refcounts / mutexes in their code since they know that they are upholding the invariants in their hardware and therefore do not have to employ these runtime checks, or are they just writing this one-liner out of laziness?
1. Superloop, for simpler stuff. No worries about atomic register writes; at most you have a few interrupts. You don't need mutexes here. The overhead will always be low, but it gets very hairy very quickly the more things you try to do concurrently.
2. RTOS-based. Think microkernel, but with just a thread scheduler. You might have a dedicated IO thread that handles messages from your worker thread, which may listen to a GUI thread. In this case, you will need a way to prevent multiple threads from trying to use the same resource, whether that is a pin, a configuration register, an interrupt flag that sits in the same register as an exception flag and must be cleared atomically via a special shadow register, etc...
If you do all hardware touching through 1 thread you don't really need them, but otherwise, you absolutely need mutexes. And there can be a substantial performance cost to using them too much.
As an example, say you had a UART receive thread that fired from an ISR handler each time a byte was received. You could naively grab the UART mutex and read the new data, clear the flag, and deal with any error conditions, then release it again. Even a 100 MHz MCU could have trouble keeping up with a trivial 100 kbaud serial stream due to the overhead of using the mutex too much.
RTOSes do have other means for locking resources, but you would want to understand exactly what you're doing and minimize overhead as much as possible.
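One common way out of the per-byte mutex trap is a sketch like the following, assuming the heapless crate's spsc queue (or something like it); the queue size and function names are made up. The ISR owns the producer half of a lock-free single-producer/single-consumer queue and the worker owns the consumer half, so neither ever takes a lock on the hot path.

use heapless::spsc::{Consumer, Producer, Queue};

// Split once at startup; the ISR owns the producer, the worker owns the consumer.
// (Real firmware would wrap this in a safer one-time init; shown this way for brevity.)
static mut RX_QUEUE: Queue<u8, 256> = Queue::new();

fn init() -> (Producer<'static, u8, 256>, Consumer<'static, u8, 256>) {
    // SAFETY: called exactly once, before the UART interrupt is enabled.
    unsafe { RX_QUEUE.split() }
}

// ISR side: a few cycles per byte, no mutex taken.
fn on_uart_byte(prod: &mut Producer<'static, u8, 256>, byte: u8) {
    let _ = prod.enqueue(byte); // drop the byte if the buffer is full
}

// Worker side: drain whatever arrived since the last wakeup.
fn drain(cons: &mut Consumer<'static, u8, 256>) {
    while let Some(byte) = cons.dequeue() {
        let _ = byte; // process the byte
    }
}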
I am not at all experienced in Rust but it seems that touching the hardware directly could end up being a large pain point.
You absolutely can do that in Rust. Rust has raw pointers, and they can be used just like C. It might look something like:
*GPIO |= 1;
Where GPIO is a pointer that's mapped to a register.
There's almost certainly some Rust code doing just that somewhere down the call stack in your code above. You just don't get a lot of the safety benefits that come with Rust if you do it like that (although you do get some), so a lot of the library ecosystem has invested in higher level wrappers.
The motivation for doing so is good, but IMO it's been taken a little too far and led to things being over-abstracted. I suspect this will even itself out as embedded Rust matures.
A few years ago, just after rustc got AVR support, I decided it'd be fun to port a (very) simplistic roguelike, writing everything from the HAL up. What I ended up doing was representing the MMIO registers and their bits using unit structs, and then defining and implementing some traits to tie it all together. I ended up with usage code looking like this (configuring the i2c bus):
// Set the SDA and SCL pins to input, and enable the internal pullups.
DDRC::clear_bits(DDRC::DDRC4 | DDRC::DDRC5);
PORTC::set_bits(PORTC::PORTC4 | PORTC::PORTC5);
// Initializing the prescaler to 1, and the bit rate for a 400 kHz transmission rate.
TWSR::clear_bits(TWSR::TWPS0 | TWSR::TWPS1);
TWBR::set_raw_value(TWI_BIT_RATE);
// Enable the TWI module, and the ACK.
TWCR::set_value(TWCR::TWEN | TWCR::TWEA);
While my representation doesn't protect against races or aliased access, I did manage to get it to protect against reading write-only bits, writing read-only bits, and using bits with the wrong register, all at compile time. The error messages were a bit unpleasant, but I was quite happy with how it turned out.
I could have implemented the *Assign operators for the register structs, but I'm not a huge fan of those. I feel it's a bit too opaque and error-prone, and preferred the clear function names.
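For the curious, here is a stripped-down sketch of the general idea (not the actual code from that project, without the compile-time read-only/write-only/wrong-register checks described above, and with a made-up register address):

use core::ptr::{read_volatile, write_volatile};

// One trait implemented by every register unit struct.
pub trait Register {
    const ADDR: *mut u8;

    fn set_bits(mask: u8) {
        // SAFETY: ADDR must be a valid MMIO register address for this impl.
        unsafe {
            let v = read_volatile(Self::ADDR);
            write_volatile(Self::ADDR, v | mask);
        }
    }

    fn clear_bits(mask: u8) {
        unsafe {
            let v = read_volatile(Self::ADDR);
            write_volatile(Self::ADDR, v & !mask);
        }
    }

    fn set_raw_value(value: u8) {
        unsafe { write_volatile(Self::ADDR, value) }
    }
}

// One unit struct per register, with its bits as associated constants.
pub struct DDRC;
impl DDRC {
    pub const DDRC4: u8 = 1 << 4;
    pub const DDRC5: u8 = 1 << 5;
}
impl Register for DDRC {
    const ADDR: *mut u8 = 0x2A as *mut u8; // illustrative address only
}

// Usage: DDRC::clear_bits(DDRC::DDRC4 | DDRC::DDRC5);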
One thing that complicates matters is that you need to do volatile reads and writes, not just standard pointer accesses. So using raw pointers, while possible, actually becomes kinda verbose.
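For example, the Rust spelling of C's `*GPIO |= 1;` with proper volatile access looks roughly like this (the GPIO_ODR name and address are made up for illustration):

use core::ptr::{read_volatile, write_volatile};

const GPIO_ODR: *mut u32 = 0x4002_0014 as *mut u32; // hypothetical register address

fn set_pin_1() {
    // SAFETY: only sound if this really is a valid, mapped register.
    unsafe {
        let val = read_volatile(GPIO_ODR);
        write_volatile(GPIO_ODR, val | (1 << 1));
    }
}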
This is the whole point the author was trying to make.
As you mentioned and made clear, the implementations in other languages barely have any checks for memory safety.
In Rust, you can do that, but you have to write more verbose code.
Why not just abstract this on the SDK level and provide a way so that if you call *GPIO |= 1; it will already do that for you? Or as others mentioned, make a macro?
I believe this is the point the author was trying to make. It's not the niceties Rust has that make it complicated; it's the fact that you have to write a whole lot more code for a simple task.
Unless you're overloading |= to death, that assignment isn't checking any of the things that are being checked (either at compile time or at runtime) in the Rust code.
This specific example is from the concurrency chapter so it's already handling multiple threads and that's why all those checks are needed.
This is a random example I found to blink an LED on an RP2040:
> let mut led_pin = pins.gpio25.into_push_pull_output();
>
> loop {
> led_pin.set_high().unwrap();
> delay.delay_ms(500);
> led_pin.set_low().unwrap();
> delay.delay_ms(500);
> }
And this is how to do it using embassy, which is an async framework for embedded Rust:
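(Roughly the following, a sketch from memory of the embassy-rp examples; exact API names and signatures vary between embassy releases, and the panic handler is omitted.)

#![no_std]
#![no_main]

use embassy_executor::Spawner;
use embassy_rp::gpio::{Level, Output};
use embassy_time::Timer;

#[embassy_executor::main]
async fn main(_spawner: Spawner) {
    let p = embassy_rp::init(Default::default());
    let mut led = Output::new(p.PIN_25, Level::Low);

    loop {
        led.set_high();
        Timer::after_millis(500).await;
        led.set_low();
        Timer::after_millis(500).await;
    }
}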
As a newcomer to Rust (specifically async/threads), why are there multiple runtimes, and thus different async/thread code between desktop and embedded? I know the runtime isn't in the standard library, so things just happened, but is there still no clear winner that does it all?
How can we make code portable (e.g. Node.js-like) between embedded and other platforms if they differ so much?
> How can we make code portable (e.g. Node.js-like) between embedded and other platforms if they differ so much?
By embedded here we are talking about running bare metal on single-core microcontrollers with kilobytes of memory and megabytes of storage, which are more interested in driving or reading GPIO/ADC pins than serving a REST API or rendering a webpage.
To answer your question: you can't, because the difference in code comes from huge differences in use case and requirements. Trying to force the same runtime on everything from webservers with dozens of processors and hundreds of gigabytes of RAM to the kind of device that barely draws any power at all is going to make no one happy.
I only have a passing knowledge of embedded programming, but it already seems to me that being able to effectively use this kind of abstraction in this domain is already halfway to a miracle, and probably only possible because the Rust team decided to decouple the async model from a specific runtime.
There is no one set of tradeoffs that works perfectly across situations. Each has different needs, and so each has executors that provide for those needs.
> You and other Rust enthusiasts are missing the point.
Please, there's no need to make this "us vs them" :) I explained what's happening without saying it's the best approach possible.
> In C or any C-based language or even Python, you can just call either a function passing the reference to the GPIO pointer, or just GPIO |= 1.
You can do this in Rust just fine by using raw pointers. The point is, in complex programs (not necessarily the example from the embedded Rust book) it's possible to make memory-safety mistakes, while in Rust it is not unless you explicitly choose to.
This is not about which approach is better, it's a matter of fact. Whether you prefer the one or the other is your personal choice.
That’s not a point being missed. The comment you reply to lays out explicitly why all these components are here, and it’s the same sort of thinking which leads to the results of TFA.
OP picked a sophisticated example that's multithreaded; the example shows off some fancy Rust techniques that deal with this complex case.
It's 'fair' to say that OP's code snippet looks complex. But, it's not 'fair' to treat OP's snippet as if it's how one must write a simple blinky program.
And it's good to explain that the code is complex because it's dealing with a complex case.
I'd somewhat acknowledge the frustration, though. If you're trying to learn something and you take a wrong step or look in the wrong place, you don't have the intuition to figure out "here's what you want to be doing instead".
It does look horrible, never denied that in my post. Also, I am not part of any community despite writing Rust. There's a lot of stuff online that is borderline insanity in terms of developer UX. I personally try not to generalise on things that stand out like that; it's selection bias.
* it's annoying to write; good tooling helps (e.g. using rust-analyzer in an LSP-enabled editor)
* it's a godsend when revisiting code months/years later, and when reading unfamiliar code.
Some languages have verbosity for its own sake, but I don't really see Rust as one of those - most of it actually informs the reader about what's going on, without needing to track down all sorts of info elsewhere to understand what effects any given line will have.
let mutex = MY_GPIO.borrow(cs).borrow().as_ref().unwrap();
mutex.odr.modify(|_, w| w.odr1().set_bit());
Both of those ways are really horrible, and they barely differ. I understand and know why it looks like it does, but having to go through all that mental gymnastics just to understand/write one line is why Rust is just overly verbose for a lot of things.
Is that one line really more verbose than writing multiple lines that pass the same information through functions to provide all the same features?
It's probably not obvious, but each of those calls is doing something helpful in the way of additional checks. As others pointed out, you can set a pointer in a single simple line if you like, but that isn't safe.
The verbosity is providing the features; at least it's one line instead of 20 or 100.
Of course the verbosity is doing something, the same is true for Java as well, it's not like the verbosity is just useless characters doing nothing. Still makes it overly verbose for a lot of us. I still write Rust code daily, but I wish the language thought a bit more about read/write ergonomics.
I'm a professional software developer. In my experience, Rust code is shorter than equivalent C or C++ code, with proper error handling, no memory leaks, no race conditions.
If you rewrite the code above in proper C, you will end up with at least a page of code.
> That's not particularly gnarly rust. It's just chained method calls like in many other languages.
That was my thought as well. As someone who is not Rust-heavy (though I do small projects with it now and then), I get some of what's going on, and I've done a little bit of embedded. I could be misremembering, but I don't remember it feeling this confusing or intimidating, and that was using C, which to newcomers can be both.
I'm really confused by this method chaining; I would argue it's too confusing, and breaking it up with comments might be best for newcomers.
You should try again. I think that code is verbose because of the borrowing and because you're trying to do a one-liner. I use the nrf-hal library with the nrf52840, and the code reads pretty nicely. Here's an example:
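(Something along these lines, a sketch from memory of the nrf52840-hal API; the pin numbers and module paths are assumptions, and the panic handler is omitted.)

#![no_std]
#![no_main]

use cortex_m_rt::entry;
use embedded_hal::digital::v2::{InputPin, OutputPin};
use nrf52840_hal::{gpio::{p0::Parts, Level}, pac::Peripherals};

#[entry]
fn main() -> ! {
    let p = Peripherals::take().unwrap();
    let port0 = Parts::new(p.P0);

    let button = port0.p0_11.into_pullup_input();
    let mut led = port0.p0_13.into_push_pull_output(Level::Low);

    // Busy loop: mirror the button level onto the LED pin.
    loop {
        if button.is_high().unwrap() {
            led.set_high().unwrap();
        } else {
            led.set_low().unwrap();
        }
    }
}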
It's checking to see if the button is high, not the LED. "blinky" is a slightly misleading name: it doesn't cycle the LED automatically, it turns the LED on when you press a button.
It is a busy loop though, and in a real scenario you'd want to use interrupts instead of a loop like this.
I think the embedded book kind of teaches at a very fundamental level, prioritizing safety. It teaches you to use a mutex and critical section when accessing a global variable, which can easily obscure what you are trying to achieve.
Globals should be avoided in Rust, unless you have some kind of synchronization strategy because otherwise modifying them is unsafe.
If that scares you off you must have been spared using the STM32 HAL so far. Have you considered splitting the long line at `MY_GPIO.borrow(cs).borrow().as_ref().unwrap().odr` (assign it to a local variable)?
Huh? Usually (using any of the `embedded_hal` crates) it'll be something like
`let mut gpioa = dp.GPIOA.split();` to get the GPIO A port from the Device Peripherals list `dp`.
`let mut pa5 = gpioa.pa5.into_push_pull_output();` to declare which pin in that GPIO port you're using and to set it as a push/pull output (input pull-up, input pull-down, input floating, and output open-drain are of course also defined), and then
`pa5.set_high();` to set it high (`pa5.set_low()` for low).
There are HAL modules for most common MCUs, and a few uncommon MCUs. Use the HAL.
The official embedded Rust book also includes how to write such a HAL, which seems to be the bit you stumbled on. Ignore it if you're not writing a new HAL crate.
Even better than hiding it away is splitting it in half. The code was written that way so that two tasks running simultaneously could not both try to write to the same pin at the same time. But writing it as a one-liner that you use over and over is the wrong approach, because then you're unlocking the mutex (and other layers) before every write to the pin, and then locking it back down afterwards. If you then sleep before writing the next bit to that pin, some other task could still jump in and write its own bit to the pin before you wake up.
So what you really want to do is unlock the mutex and unwrap all the layers to get just the io pin object, and give ownership of it to the task. Then the task can write to the pin, sleep, and not worry that any other task can write to the pin before it wakes back up. When it comes time to spawn that other task, you can unlock and unwrap another io pin and pass along ownership of it to the new task. Meanwhile the compiler prevents either task from accessing pins not assigned to them, unless they contain the code to unlock the global mutex.
But then you have to talk about how to spawn multiple tasks, and I bet that’s in a different chapter! By the end of the book when all the pieces are in place, there probably is no global variable for the IO pins, no task tries to unlock a global mutex, and instead all tasks take ownership of the pins they need.
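Here is a desktop-Rust sketch of that hand-off, using std threads and a made-up LedPin type standing in for a real HAL pin (so none of this is the actual embedded API):

use std::thread;
use std::time::Duration;

struct LedPin; // neither Clone nor Copy: there is exactly one owner at any time

impl LedPin {
    fn set_high(&mut self) { /* write the register */ }
    fn set_low(&mut self) { /* write the register */ }
}

fn main() {
    let mut led = LedPin;

    // Ownership of the pin moves into the task; no lock is needed afterwards.
    let blinker = thread::spawn(move || loop {
        led.set_high();
        thread::sleep(Duration::from_millis(500));
        led.set_low();
        thread::sleep(Duration::from_millis(500));
    });

    // led.set_high(); // would not compile: `led` was moved into the task

    blinker.join().unwrap(); // keep main alive
}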
This is the way. You explained it better than I could have.
The example being an IO pin toggle is not great, because it's such a simple operation, and typically will be intrinsically atomic on many MCUs.
Now, something like setting up a DMA on one of the 2 available DMA engines you may have, and blocking til completion via interrupt, that is complex enough it has a much better justification for the abstraction salad.
You would definitely hide it behind a function if you need it, but that's not very useful in a guide that wants to teach you what it looks like without the function
I'll be keeping an eye on this then, because the above code is ridiculous; if it's the best (code-wise or performance-wise) way to do it in Rust, I can see the complaints.
It wouldn't be that much simpler if it wanted to do the equivalent thing in C. This is code for "set a pin in a completely thread safe way", whereas a lot of the comparisons are "set a pin".
Which, as others have already pointed out, brings us to the very topic of this post. Rust prevents novices from messing up partly by forcing them to acknowledge ways in which the code might break, and it does that via the type system.
I have to say I'm excited to see what the Swift embedded subset turns out like, if it gets any hardware support for chips I write code for (mostly stm32 so that seems pretty likely since they're so popular). It looks like a nicer developer experience than what I've seen from Rust, but this is coming from somebody who mostly codes C.
The Rust apologists replying to this comment are conflating mere understanding of the above chain with the reality of needing to type this slog every time you need to set a pin. When you’re just making a garage door opener in your home electronics lab and a C API offers setBit(&gpioCtx, bitPosition, 1); as an alternative to the above abomination, we all know where developers will be happier and more productive.
You can almost certainly do the same in Rust in an unsafe block.
You will be happier and more productive unless you get bitten by the lack of locking, reference counting and mandatory initialisation. Which is actually quite likely.
Compiled with -Wall it's fine and infinitely more readable than the Rust puke above, but you probably won't need the lock or the warnings in your typical embedded project. The main issue with Rust appears to be that it invites masturbators who invent problem scenarios in their head and then overengineer every std lib API and crate available for consumption. Setting a bit on a dev board should be simple. It should not be a chained mess full of operator soup.
But this is never done, because the language attracts people who overengineer. The original commenter was following a beginner’s tutorial in Embedded Rust. They didn’t just unluckily stumble upon something esoteric with too many layers of safety for the functionality they were aiming to achieve. This is how people in the Rust Foundation actually use their language, and it’s how they teach people to use it.
POSIX threads are far more battle-proven than any threading API in Rust. All of the underlying multi-threading on the system you made your comment from is using pthreads. It works fine in reality, despite all the pearl clutching.
>Rust makes it so that you can't forget.
And the tradeoff is an unreadable, puke-inducing one-liner. You can do a (readable) scoped lock with a macro in C (GCC defines the `__attribute__((cleanup))` attribute) or RAII-style with destructors in C++.
The context of this exact discussion is the defense of the ugly Rust code above by Rust aficionados who see nothing wrong with it. That code came from the Embedded Rust working group in their learning material for Rust beginners.
I'm not taking anything out of context. That code is hideous. It's terse. It uses controls and defenses that aren't needed for most embedded applications. It turned a beginner away from the language. When I provided much simpler C code as an alternative, Rust aficionados started to cry about unneeded mutexes, unneeded ref counting, the inability of programmers to remember to call free or mutex_unlock, etc. etc.
Your response is an unrelated rant. Here's your claim:
> But [writing it more simply] is never done
You're not saying anything interesting, you're just making grandiose false claims to start a flame war. Approach it from a good-faith perspective, please.
Let's assume your C SDK defines MY_GPIO as a pointer to a volatile memory-mapped I/O register. Let's further assume that your project uses interrupts (or preemptive multitasking).
Your trivial one-liner is an example of terribly seductive simplicity enabled by a broken design. It works until MY_GPIO is concurrently modified, which can happen if the memory-mapped peripheral changes the register content on its own, or if other code can preempt the read/modify/write sequence. To make it worse, unless MY_GPIO is heavily contended, it will work most of the time.
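Spelled out (a sketch; MY_GPIO here is a made-up *mut u32, not any particular SDK's symbol), that one-liner is really a read, a modify, and a write, and the gap between them is where it breaks:

use core::ptr::{read_volatile, write_volatile};

const MY_GPIO: *mut u32 = 0x4800_0014 as *mut u32; // made-up register address

fn set_bit_0_unsound() {
    unsafe {
        let old = read_volatile(MY_GPIO); // 1. read
        // <- an ISR or another task can change the register right here...
        write_volatile(MY_GPIO, old | 1); // 2. modify, 3. write back,
                                          //    silently undoing that change
    }
}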
I'm not familiar with embedded Rust, but it looks like the APIs are designed to make the programmer explicitly lay claim to a hardware peripheral (or fail trying) instead of corrupting the system state. Most of the unwrap() and unsafe code is executed once during setup. Once the hardware resources are divided up and configured, the normal runtime code should be a lot less verbose (from a quick look at the API documentation).
The Rust version includes reference counting, state verification (preventing access before initialisation) and locking, all "for free" by wrapping the original value.
In C you would have to write all that out. And still have a good chance of getting something wrong first try.
Not the OP, but I do not like reading one-line expression bonanzas. I would rather read the implementations of the different subsystems and then the code that calls them.
I mean, I find this simpler
fn x_component() {
    // ..
}

fn y_component() {
    // ..
}

fn assembly() {
    x_component();
    y_component();
    // ..
}
In Python, Rust, C++, Java, and Lisp, people seem to form very long procedures with lambdas and ninja fu to save lines of code. It may look elegant to some people, but I can't read it.
Pretty much. The issue is that the C version is horribly error prone and unsafe. Any time you lose by writing a load of `borrow()`s you will get back 100 times over by not having to debug races on a microcontroller!
As much as the "rust for everything" folks are annoying, wrong programs not compiling is the entire point of having a powerful type system correctly leveraged. This is the kind of task where rust truly shines.
ease of programming != ease of making undetected mistakes
Whether developers actually find it easier to use, all else (like tooling and environments) being equal, is a separate study, I think.
I remember another study which showed that most computer programming languages today are objectively terrible in terms of legibility and usability; the alternative was some idealized language that allowed effective communication to occur.
A safe program is not necessarily one that won't compile, if the lack of functionality incurs some danger on its own.
I suspect that a majority of the peanut gallery is rather lazy, or unwilling, or economically unmotivated to put forth the proper effort (and it is the last category that deserves the harshest punishment, where a wasteful and harmful situation is actively encouraged).
> ease of programming != ease of making undetected mistakes
Disagree. GC'ed languages prevent the majority of the errors that Rust's higher complexity unloads on the programmer, who has to solve them by wrestling with its syntax.
So this is only a plus for Rust if the small performance advantage over, say, Go actually matters for the application, which is rarely the case in most software written.
I'm actually a really big fan of affine types and adts with great pattern matching. The performance is nice too, but I stayed for all the other nice bits that are put together really well (except async, but eventually...).
Rust has been my favorite language by far to refactor in.
Forcing Rust into a domain where a GC'd language is more than enough offers absolutely zero advantages (yes, you'll have no memory safety issues, but neither you would have had you used Python or Java or Go or whatever), and it's the kind of obnoxious overzealousness that I'm denouncing.
If you're using Rust to write web services you're just wasting your time, I agree with that.
But if the alternative would be C or C++, Rust is at least worth consideration. Ironically, those are domains where you'll have to use a lot of unsafe and/or limit yourself to a subset of Rust, but it's still a nice incremental improvement.
> Forcing Rust into a domain where a GC'd language is more than enough offers absolutely zero advantages (yes, you'll have no memory safety issues, but neither you would have had you used Python or Java or Go or whatever), and it's the kind of obnoxious overzealousness that I'm denouncing.
It is refreshing to see that other people share this point of view. I guess we are not disagreeing after all.
> GC'ed languages prevent the majority of the errors that Rust's higher complexity unloads on the programmer, who has to solve them by wrestling with its syntax.
The other value that Rust adds, which you're ignoring, is control over mutable/immutable access. In Rust, to share data between threads of execution, you have to grab a lock or the like. In many other languages, like Go, the lock is advisory, not mandatory, and nothing prevents two threads from writing to the same struct field concurrently.
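A minimal std sketch of what "mandatory" means here: the data lives inside the Mutex, so the only way to reach it at all is through lock().

use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let counter = Arc::new(Mutex::new(0u64));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..1_000 {
                    // No way to touch the u64 without going through the lock.
                    *counter.lock().unwrap() += 1;
                }
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
    println!("{}", *counter.lock().unwrap()); // always 4000
}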
I personally think I'd really enjoy a "GC'ed Rust" for many uses. I wish GC was a Rust feature crates could opt in to, and then the embedded & high performance crowd could avoid that like no_std works today.
Yup, and if you actually meet a use case where Rust has a major performance advantage over Go, typically you're already so deep into the low-level details of optimization that you'll constantly fight the borrow checker or just scatter unsafe all over your code.
This is just not true. That experience is typical if all you know is how to solve your problem the way you would in a GC'ed language; the solution is to solve problems the Rust way, primarily by using handles (and/or arenas) instead of references.
There are many funny comments in here, but I think they are all valid in a tragicomic way. Rust is actually so hard that only a small share of developers will ever master it, and those who master it will certainly be the more intelligent ones.
I feel like Rust makes you pay a big cost upfront, and the benefits are often not worth that cost.
It's easier to program if I don't have to worry about ownership/lifetimes. Garbage collection means not having to worry about lifetimes, and preventing the same value from being mutated from multiple places in the code generally isn't such a significant problem.
With high level programming (in JavaScript), you don't have to pay those costs.
I like to believe that Rust's strict discipline about ownership/lifetimes encourages code that's fast/lightweight, though.
Absolutely. Most programs should be written in a safe language, but that doesn't mean Rust. Most programs should be written in a garbage-collected language like Javascript or Python or Lisp precisely because those languages make developers faster and more efficient.
Rust is only appropriate for programs that are too expensive to run when they are written in a garbage-collected language. That means operating systems, because small inefficiencies there will add costs for billions of people. It means big server-side services, where small inefficiencies cost you millions in compute time or RAM usage or whatever.
Even if you think you're going to hit that scale one day, writing your prototype in a garbage-collected language can net you a huge advantage in time saved. You can rewrite it in Rust later, after you know exactly what features will be needed and exactly what they cost to run, and exactly how much you'll save by rewriting it in Rust.
If programmers learn Rust as their first language, it will feel natural to them. I have used Rust since version 1.0, and I like that the borrow checker helps me find bugs at the design stage, where they are least expensive to fix.
> If programmers learn Rust as their first language
Imagine a person who just started learning about programming. A person who may still be tripped up by the fact that `=` and `==` are completely different things here.
Imagine that person.
And now imagine explaining to that person a language, that even many experienced and skilled software developers say is significantly harder to learn than the average programming language.
Sorry, but when I think about a good teaching language, Rust isn't the first thing that comes to my mind.
And neither is C/C++, in which you have to follow all the borrowing rules anyway, except that the compiler maybe sometimes warns you about violations instead of always erroring out.
Python, Go, or (a bit less so) TypeScript are nice beginner languages.
C++ isn't, but C was my first programming language, and I am thankful for this to this day.
Because C, in and of itself, is an incredibly simple language. It is made complex in libs, usually by abusing macros, or by trying to shoehorn it into OOP, FP, or whatever other paradigm is currently being idolized.
> Python, Go
Go yes, Python no, and I am saying that as someone who does a lot of work in Python and loves both languages.
Because Python isn't simple. It looks simple in trivial code, but hides an enormous amount of complexity and magic not so far beneath the surface. In addition, it completely fails at teaching core concepts of memory, and dances around the topic of types.
But C is anything but simple! It's small, but it's so full of undefined behavior that it requires considerable knowledge and experience to stay within the bounds of 'predictable'. It's better nowadays if you are taught to enable all possible warnings, and even then you'd better use a static analyzer and valgrind for anything more than hello world.
Python's hiding of memory is a feature when teaching beginners programming IMHO unless you want to start teaching close to the metal. I prefer to start from the algorithmic CS-y side.
> it requires considerable knowledge and experience to stay within the bounds of 'predictable'
No it really doesn't. Initialize all variables, always check boundaries and free memory when you're done with it. That about covers 90% of it.
Yes, this becomes more difficult the less trivial the codebase becomes. But beginners don't have non-trivial codebases, and teaching these concepts isn't hard.
> Python's hiding of memory is a feature when teaching beginners programming IMHO unless you want to start teaching close to the metal. I prefer to start from the algorithmic CS-y side.
And I prefer to start from the systems-programming, practical side. And in the practical world, memory isn't some abstract thing that may or may not exist, function calls cause overhead, strict typing is a friend, and networks fail.
So yes, teaching a bit closer to the metal has its advantages.
No, it isn't, by any metric. I personally know people who learned C in a week. I learned all the basics of the language in 2 weeks as a teenager, with nothing but K&R as learning material.
If you disagree, point out by which metrics you consider Rust to be easier to learn than C, and we can have a discussion.
> Rust is hard for older programmers because it's hard to FORGET old habits from other languages, which allowed too much.
No it isn't. First, both newcomers and experienced programmers consider Rust a more complicated language than most of its contemporaries, and if anything, solid experience in multiple programming languages makes it easier to learn the advanced concepts Rust relies on.
Rust can be used like C, because it has pointers, like C. However, Rust has better syntax, more syntactic sugar, automatic memory management, and the borrow checker. So, Rust is simpler than C to learn.
The only difficulty that I see in Rust is that sometimes I know that something is safe, but the compiler has to disagree based on what it is able to infer.
It's been a while since I last did something in Rust, but I remember that the shift to Rust 2018 was a huge step forward in this regard.
Or I would say, programming in general is already hard. Rust makes it even more difficult…
The reason is that actual common memory-safety mitigation strategies in real life involve using some kind of smart pointers (same in Rust and C++), or using indices with bounds checking (same in Rust and C++, it's just that the C++ STL has this off in Release mode by default…). For both languages, trying to avoid these runtime mitigation strategies (which affect performance) will require doing some level of unsafe stuff which the compiler doesn't help you with (the unsafe semantics of the two languages are actually similar, since Rust relies on LLVM, which relies on C semantics). So ultimately, the borrow checker doesn't seem worth it if you actually think about the cognitive overhead and headaches it brings. The only reason Rust has an edge over C++ is that it has sane defaults and a better standard library (ah, if only C++ had a sane way of initializing values…)
One of the things I found interesting about this paper is that they find that there were more contributors in Rust projects. The first headline I saw posted from this paper was that first time contributors to Rust projects are 70× less likely to introduce vulnerabilities than first time contributors to C++ projects, and my immediate gut reaction was that this could be explained if the Rust developers were generally more experienced programmers. But as the abstract points out:
> We also found that the rate of new contributors increased overall after switching to Rust, implying that this decrease in vulnerabilities from new contributors does not result from a smaller pool of more skilled developers, and that Rust can in fact facilitate new contributors.
Which seems to imply the opposite of what you're suggesting, i.e. it implies that Rust makes contributing easier and safer than C++.
I also want to point out that your comment isn't accurate about how Rust's unsafe works:
> [T]rying to avoid these runtime mitigation strategies [...] will require doing some level of unsafe stuff which the compiler doesn't help you with
(Emphasis my own.) This is inaccurate — the borrow checker is not turned off inside of unsafe blocks, but instead you are given a handful of extra APIs to use. Those extra APIs can be used to do things that the borrow checker would not normally allow (e.g. by converting checked pointers into raw pointers, and then inside the unsafe block, dereferencing those raw pointers), but the borrow checker is still active, and still catches all the same errors as before.
> the borrow checker is not turned off inside of unsafe blocks, but instead you are given a handful of extra APIs to use. Those extra APIs can be used to do things that the borrow checker would not normally allow (e.g. by converting checked pointers into raw pointers, and then inside the unsafe block, dereferencing those raw pointers), but the borrow checker is still active, and still catches all the same errors as before.
Yup I know that the borrow checker does also work inside unsafe blocks. But when what you primarily do as a systems programmer is establishing invariants in your systems so that you can exploit (or circumvent) the UB-ness of the underlying system (OS/driver/hardware/etc.), and when the borrow checker doesn't really help you with verifying these invariants... The Rustonomicon states that whenever you open up an unsafe block, it doesn't pollute only the scope, but the whole module (https://doc.rust-lang.org/nomicon/working-with-unsafe.html). So there can be a correctness issue outside the unsafe block because it disobeys the invariants implicitly set up by the developer inside the unsafe block. And you can't do anything about this other than carefully reasoning about all the various ways your implicit invariants can break inside the whole module. So unsafe is ultimately more of a convention (or a promise) that you have designed and verified your invariants correctly, so that you will not produce undefined behavior no matter how you use the module as an outside user. If you want to verify your invariants any further than that, you need to check UB at runtime using the Miri interpreter (which is really slow and still incomplete), or just use Ada SPARK.
It does pollute the whole module, but this is much less of a big deal than it might seem because Rust modules are so lightweight - you don't even need a new file to create a new module, you can just do:
mod name_of_module {
    // code goes here
}
And it's often possible to wrap the tricky unsafe bits in a safe interface (that e.g. uses a mutex to enforce safety). So that anyone who is contributing to higher-level code doesn't need to worry about it. This is much better than C or C++ code where it's trivially easy to introduce memory unsafety or even Undefined Behaviour in even the boring "glue" parts of the codebase.
This leads to a really nice gentle onboarding flow where inexperienced users can start out contributing to the safe parts of the project, and (optionally) move on to gnarly unsafe bits later when they are already familiar with the project's codebase. It also dramatically reduces review workload for maintainers as they can rely on the compiler enforcing invariants outside of unsafe modules.
This works less well for really low-level code like embedded code or kernel code. But it's still a lot better than nothing.
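A small sketch of that pattern (names invented; here the guard is a bounds check rather than a mutex, but the shape is the same): the unsafe line lives in one tiny module, and everything outside only sees a safe API.

mod framebuffer {
    pub struct FrameBuffer {
        buf: Vec<u8>,
        width: usize,
        height: usize,
    }

    impl FrameBuffer {
        pub fn new(width: usize, height: usize) -> Self {
            Self { buf: vec![0; width * height], width, height }
        }

        /// Safe API: bounds are checked here, so callers cannot cause UB.
        pub fn set(&mut self, x: usize, y: usize, value: u8) {
            assert!(x < self.width && y < self.height);
            // SAFETY: the assert above guarantees the index is in bounds.
            unsafe { *self.buf.get_unchecked_mut(y * self.width + x) = value; }
        }
    }
}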
Though my usual experience with writing performance-sensitive code is: if you just write naive, inefficient code on the first try, there's a high probability that you'll need to rearchitect the whole system to get a more performant design; it's not something you can do incrementally. Maybe in these kinds of projects it's not wise to let random contributors handle your code though... (I'm geared more towards graphics and numerical computing, so my experience might differ from others'.)
Maybe there's a reason why game developers have primarily used scripting languages - give out a safe, managed, GC-backed runtime for the majority of developers, and let only a select few who understand the system develop the core C++ engine. Maybe Safe Rust can be used this way (as a "fast" scripting language), to separate between these two worlds... but the problem is even Safe Rust is just really difficult to grok for newcomers, and the hoops they go through to circumvent the borrow checker either fall into using Copy/Clone all over the place (slow) or smart pointers (slow) or array indices with bounds checking (maybe less slow but more cumbersome, and also prone to logical invalidation errors if you're not careful)
> Maybe Safe Rust can be used this way (as a "fast" scripting language), to separate between these two worlds... but the problem is even Safe Rust is just really difficult to grok for newcomers, and the hoops they go through to circumvent the borrow checker either fall into using Copy/Clone all over the place (slow) or smart pointers (slow) or array indices with bounds checking (maybe less slow but more cumbersome, and also prone to logical invalidation errors if you're not careful)
Are smart pointers like Box, Rc and even Arc any slower than any scripting system you'd "hand out" to most developers from your tightly written C++ core engine?
One thing that I see is that 90% of code is simple enough that you don't need to have any ceremony around ownership beyond writing a & in front of a value or type, 5% is harder than that but doable, and 5% requires extensive expertise to avoid allocations, or using Arc. I'd wager that the distribution of code that a GC can optimize during runtime is comparable, if not worse, at higher memory consumption.
I've compared some simple networked applications written in Java and Rust for that purpose, and performance ended up being comparable, but the Java version used 100x the memory, even when using GraalVM.
Rust is new. Low hanging features and bugs are going to be everywhere in new Crates. Contributing to, say, libcurl or openssl sure is more difficult than contributing to yet another rewrite of a mostly mature tool.
For heap-allocated objects, there is a way to do borrow checks at runtime if you want (there's something called constraint references, explained in https://verdagon.dev/blog/raii-next-steps.)
For structs that I do not want to heap-allocate, they're usually POD types and in arrays (which you can bound check), so there's not much to think about borrowing. The more concerning issue I usually have is about initializing the values correctly (which usually Rust doesn't help, when reasoning about performance-sensitive code).
> For structs that I do not want to heap-allocate, they're usually POD types and in arrays (which you can bound check), so there's not much to think about borrowing.
Both situations which also apply to Rust. Whenever the borrow checker complains you can do exactly the same thing.
> The more concerning issue I usually have is about initializing the values correctly (which usually Rust doesn't help, when reasoning about performance-sensitive code).
You can use a type-state machine, where the only way to construct the final value is by calling all of the appropriate methods that change the type parameters on the Self type. When that gets compiled it ends up as either a single memcopy of values, or you can make the Self type hold a MaybeUninit value to make the partial construction with no copy at the end possible. I actually implemented that for fun and as it turns out already existing crates that held every field in an Option and then built the final value from those ended up being faster. C'est la vie.
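For anyone who hasn't seen the pattern, here is a bare-bones sketch of such a type-state builder (this one keeps Options inside for simplicity rather than MaybeUninit; all names are invented):

use std::marker::PhantomData;

struct Missing;
struct Set;

struct Config { host: String, port: u16 }

struct ConfigBuilder<Host, Port> {
    host: Option<String>,
    port: Option<u16>,
    _state: PhantomData<(Host, Port)>,
}

impl ConfigBuilder<Missing, Missing> {
    fn new() -> Self {
        ConfigBuilder { host: None, port: None, _state: PhantomData }
    }
}

impl<P> ConfigBuilder<Missing, P> {
    // Setting the host changes the type parameter: Missing -> Set.
    fn host(self, host: &str) -> ConfigBuilder<Set, P> {
        ConfigBuilder { host: Some(host.to_owned()), port: self.port, _state: PhantomData }
    }
}

impl<H> ConfigBuilder<H, Missing> {
    fn port(self, port: u16) -> ConfigBuilder<H, Set> {
        ConfigBuilder { host: self.host, port: Some(port), _state: PhantomData }
    }
}

// `build` only exists once both fields are Set, so forgetting one is a
// compile error rather than a runtime panic.
impl ConfigBuilder<Set, Set> {
    fn build(self) -> Config {
        Config { host: self.host.unwrap(), port: self.port.unwrap() }
    }
}

fn main() {
    let cfg = ConfigBuilder::new().host("localhost").port(8080).build();
    println!("{}:{}", cfg.host, cfg.port);
}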
You don't need to master all aspects of a language to write code for it. The point is beginners who do not master all of it are not shooting themselves in the foot anywhere near as much as C++ beginners.
> Rust is actually so hard that only a small share of developers will ever master it
Kind of disagree here. Don't really think it's that much harder than C++, just hard in a very "annoying" way. Which, given its purpose, is a good thing. Making the borrow checker happy is mostly an exercise of going up and down method call chains to see where you screwed up. Or verbosely wrapping stuff in Arc<Mutex<T>> to make it a shareable resource.
I don't know if this is true compared to C++. It's not as though it's easy to pick up a C++ codebase. Some languages, yes, but I wouldn't say C++ is one of them.
Rust is harder than C++ for writing a linked list, or other "more complex than hello world" programs. But for large applications, I think it ends up being easier than C++. As far as mastering goes, I think far fewer people master C++, just on account of C++ being a massive language. Not only is it a massive language, but it's a language with a spec, 3 major compilers, and not all of them agree on everything.
A singly linked list is easy to implement in safe Rust. A doubly linked list is a singly linked list with an additional pointer (instead of a reference) back to the parent node.
Raw pointers are unsafe, so use the "unsafe" keyword when dereferencing them.
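For reference, the singly linked case really is short in entirely safe Rust (a minimal sketch):

struct Node<T> {
    value: T,
    next: Option<Box<Node<T>>>,
}

struct List<T> {
    head: Option<Box<Node<T>>>,
}

impl<T> List<T> {
    fn new() -> Self {
        List { head: None }
    }

    fn push_front(&mut self, value: T) {
        let new = Box::new(Node { value, next: self.head.take() });
        self.head = Some(new);
    }

    fn pop_front(&mut self) -> Option<T> {
        self.head.take().map(|node| {
            self.head = node.next;
            node.value
        })
    }
}

fn main() {
    let mut list = List::new();
    list.push_front(1);
    list.push_front(2);
    assert_eq!(list.pop_front(), Some(2));
}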
> and those who master it will certainly be the more intelligent ones.
Rust isn't about being more intelligent; Rust is about being willing to jump through more hoops to get a guarantee that a small subset of errors is not in the executable.
C++ is memory-unsafe and a Google search will not always return the latest and safest features, but raw pointers and C-like code. As a beginner, you often don't realize this.
Rust makes it a little more difficult for you with "unsafe" blocks and explicitly points this out to you.
A good language amplifies talent - it doesn't require it. If, as you allude to, Rust requires above average talent for producing quality code, I'd say that as a language, it can only make a limited contribution. The only meaningful target audience for a language you hope will see mass adoption is the average programmer.
I don't have sufficient experience with Rust to determine whether or not only a small proportion of programmers will master it. However, if that is indeed correct, it also means that it wouldn't be worth the investment for most of the software industry to switch to Rust. Because mass adoption is, by its very nature, about what average programmers can master.
Rust fundamentally doesn't require any more intelligence than is required to write correct C or C++ code. It actually requires less, because you're offloading the mental labor of memory management onto the compiler.
If the assertion is that Rust is unsuitable for the average programmer, then the consequence would be that C and C++ are unsuitable for the average programmer... and yet average programmers have been writing C and C++ for decades.
I'm closer to an idiot than I am to a genius, and yet I write Rust all the time. You'll be fine.
> We also found that the rate of new contributors increased overall after switching to Rust, implying that this decrease in vulnerabilities from new contributors does not result from a smaller pool of more skilled developers
That seems like a dubious line of reasoning. People learning the shiny new thing are going to be itching to use those skills in a (non-local, non-hobby) project, and that seems a much more likely explanation for why contributors increase during the stage where Rust is still shiny and new. An old codebase in an old language doesn't have nearly the same appeal; there, contributions come mainly from people scratching their own itches. It's a false comparison that doesn't rule out the self-selection effect at all.
The official Rust book is the way to go. That being said, you "should" only learn it if this is actually your domain – for example because you're a C++ developer currently or so. You really don't need it in other software engineering domains.
I agree with the official Rust book, but I disagree that one shouldn't learn it.
Rust, even as a learning exercise, makes you think about a lot of concepts you'd normally not notice. It is a fun language, fun to experiment with, and due to its learning curve, it'd be far less stressful to learn the basics ahead of time if you ever needed it.
I used Rust-Python bindings on a small project where we didn't want to use big Python math libraries, for portability, but a few of our calculations ran really slowly on Python primitives. Rewrote a couple of functions in Rust for a 40x speedup, and it's fully switchable to return to the original Python implementations.
Yes, it's worth looking into the language and its concepts. But I think many people feel bad for not knowing or not having tried Rust because the Rust community tends to be very vocal and not quite reserved, and I believe that's a mistake. Not knowing Rust doesn't make you a bad software engineer, as opposed to what some Rustaceans want to make you believe.
> Not knowing Rust doesn't make you a bad software engineer, as opposed to what some Rustaceans want to make you believe.
Nobody is saying this, despite what some people with overactive persecution complexes want to make you believe. As for prescribing what people should do, I will say that curiosity is an essential part of improving in any field, including software development. Learning anything at all broadens your horizons and threatens to make you better. If you don't know Rust, go ahead and learn it (or anything else that interests you); if you already know Rust, go learn something new. It would be quite silly for someone to assume that no advances to the state-of-the-art have happened since they first learned to program.
The sibling comment (last paragraph, about Python bindings) is spot on: modest C++ knowledge puts writing a native component in an otherwise memory-safe environment deeply into "don't try this at home!" territory. Modest Rust knowledge? Sure, you might find yourself having picked a fight that you are unable to win, giving up before finding a solution the borrow checker accepts. But the danger of ending up with a quick success that later turns out to be an infinite supply of mystery segfaults is rather low. The failure mode for being not quite good enough for the problem at hand is far more benign in Rust.
Modest C++ knowledge means knowing that you'd better not try using that knowledge, which is quite a terrible on-ramp.
> That being said, you "should" only learn it if this is actually your domain
You're suggesting that you can actually decide this for other people? Hahahah! :)
That being said, you "should" never deign to decide for someone else what they should do. It's a responsibility violation, because you decide "should" for you, and I decide "should" for me. No other way. Get it? :) hahahah
The idea above comes across as condescending, hope you can remember that next time you are loading a "should" into the chamber! Haha :)
Regarding the utility only existing for C++ devs, I get it if that's the way it looks to you, though that's not how it is.
It's important to keep an open mind regarding the value of learning and what can be helpful and appreciate that there are a variety of approaches that work for different folks! :)
NoBoilerplate [1] is a great Rust-oriented YouTube channel that's less tutorial and more of a tour of the strengths and foibles of the language. The videos are a great springboard, because they are entertaining as much as informative and inject a bit of hype and hope for when you're battling the compiler.
If you avoid async, it's fairly easy to learn. (I never managed to learn C or C++.)
While you're learning, take shortcuts; e.g. use .clone() and/or .to_string() liberally to avoid values going out of scope. When you want to speed up code you can do the opposite.
Egui is a great well-rounded immediate mode GUI library that you can use to get up and running fast.
The Rust implementation of SDL2 is pretty good too. There are plenty of examples online you can borrow from.
Generally the later you detect the bug, the harder it is to fix.
If it doesn't compile, you have to fix it, probably right after you finished typing it. You still have the overall idea of what you were doing fresh in your mind.
If it compiles and then something weird happens a week later, you may spend days trying to reproduce it, then you'll have to pull out valgrind/ASAN, look at their dumps, figure out what the hell it means that something goes wrong in the bowels of the STL or some other library, then backtrack from there to where that interacts with your code, figure out if it's indeed a you problem or a library problem, remember again what this weird code is supposed to be doing, and then finally fix it and do testing.
Rust is just doing its job here, but it's a pain that Rust is extremely bad at letting people write inefficient-but-safe code concisely (the way scripting languages like Python do). It wants you to go all-in from day 1.
It's kinda the point, though. Runtime GC as in Python allows you to handle these problems during... runtime (which means you won't encounter some of them at all while learning - which is good!). Rust explicitly attempts to handle them at compile time, so the surface area of potential problems you are confronted with up front is much bigger.
Rust with GC is somewhat like ocaml with ...; instead of let ... in.
Aren’t C++ newcomers embracing “modern C++”? Touching new and delete is a feature for advanced users. Most newcomers can exist happily solely with STL containers, RAII and smart pointers.
This is like saying your neighbor's siding is off-color while your own is falling off and the framing is rotting.
Both are problems with siding; one is far more serious than the other. I'm not a fan of Java, but C++ people being disingenuous is a pet peeve. "Modern" C++ still sucks, it just sucks less - it's not useful to pretend otherwise.
At our cloud service at Sococo, back in the day, the principal bug-tracing activity was Java null pointers. The second? Leaks. Both are problems with pointer management.
However you slice it, it takes discipline to get memory allocation right. No magic bullet.
For some reason, the comparison is always current Rust vs. 20-year-old C++. Mentioning that "modern" (12-year-old) C++ does not have these problems gets downvoted.
Completely unrelated: there are at least two dedicated anti-C++ communities.
> Mentioning that "modern" (12-year-old) C++ does not have these problems gets downvoted.
Because it's false, that's why. Modern C++ still has many/most of C++'s problems. It solves a bunch of them, but far from all.
And telling newcomers otherwise is a lie that hurts them, because they can end up making even more mistakes than the previous generation, since they were told they didn't have to pay attention thanks to "modern C++".
> Mentioning that "modern" (12-year-old) C++ does not have these problems gets downvoted.
I'm not a C++ user, but I thought that all of these things are still in "modern" C++, and that the new editions only add things, never remove them? Are there tools that can automatically rewrite existing code to use the "modern" features?
And that's a criticism I can take to a degree on behalf of the language. These relics are still in because mistakes are happening in the governing body. However in practice, they don't matter much. Every CPP dev has a little helper running in the background which immediately flags these mistakes and can auto correct them. It's not an issue, just an annoyance.
Can't be sure since you didn't give specific examples… but I guess it's because these domains (graphics, audio processing) require the utmost performance and therefore need to touch the unsafe parts of Rust frequently. Or they need to interface with operating system/driver APIs which are fundamentally unsafe.
Writing inefficient safe code is easy(-ish). Writing efficient safe code is hard regardless of using Rust or not.
This is how to set a GPIO bit, from the official "Embedded Rust" book:
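MY_GPIO.borrow(cs).borrow().as_ref().unwrap().odr.modify(|_, w| w.odr1().set_bit());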
I tried. I really tried.