However, that LinkedList implementation requires unsafe{} to be implemented. All unsafe really means is that the compiler isn't going to hold your hand: the usual memory-ownership footguns are available at your discretion.
unsafe shouldn't be the mythical thing you never touch that people seem to think it is. If you need to escape the compiler's very helpful guidance, you can and should, but test thoroughly!
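For anyone who hasn't used it, here's a minimal sketch of the sort of thing the compiler stops checking inside the block. Creating a raw pointer is safe; dereferencing it is what needs unsafe, and it's on you to know the pointer is valid:

    fn main() {
        let x: i32 = 42;
        let p: *const i32 = &x; // taking a raw pointer is perfectly safe

        // Dereferencing it is not: the compiler can't prove `p` points at a
        // live i32, so we vouch for that ourselves in an unsafe block.
        let y = unsafe { *p };
        assert_eq!(y, 42);
    }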
Going on a tangent, but I honestly think 'unsafe' might suffer from a naming issue. It should've been called 'unchecked' or 'unverifiable' or something that says the code is merely not verified to be safe, not that it is actually unsafe.
Nope, unsafe does exactly what it says on the tin.
C# tackled this problem 15 years ago. I'm sure other languages (Haskell) did it even earlier. When to use unsafe is a judgement call. Each developer and team will have to set their own standards. Some people will abuse it. None of this is new. At first it scares people. They think this is the brave new world, using unsafe feels gross and backwards! Eventually they understand where it is and isn't appropriate.
You might think "so what? Why even bother with a safe-by-default language?"
Because it greatly restricts the problem space. Rather than being forced to examine every line of code for every possible bit of undefined behavior, or every path of control flow for memory errors, you only need to think really hard about the edge cases inside the unsafe blocks. Simply by virtue of there being a relatively small number of short blocks, the problem of safety and correctness becomes easier to understand. Easier to test. Easier to reason about.
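To make the auditing point concrete, a small sketch (the `ends` helper is made up for illustration): the whole function is safe to call, and the only lines that need careful review are the ones inside the block, justified by the assert right above them.

    /// Returns the first and last bytes of a non-empty slice.
    fn ends(s: &[u8]) -> (u8, u8) {
        assert!(!s.is_empty());
        // SAFETY: the assert above guarantees both indices are in bounds.
        unsafe { (*s.get_unchecked(0), *s.get_unchecked(s.len() - 1)) }
    }

    fn main() {
        assert_eq!(ends(b"rust"), (b'r', b't'));
    }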
Unsafe is a tool. It's a dangerous tool so you should always wear your gloves and safety goggles. But when faced with a problem for which it is the best tool you should use it without regret.
> Nope, unsafe does exactly what it says on the tin.
Depends on how you interpret the name—whether it's referring to what it does (makes things no longer automatically safe), or whether it's referring to what the code inside it does.
If you write only safe code inside an unsafe{} block, then nothing unsafe is happening. Fewer compile-time static-analysis checks are happening, but if you manually verify that the code is "safe anyway", the way C/C++ programmers are forced to do, then you can declare that the code is safe, maybe with a comment like:
// this is safe
unsafe{ ... }
That seems bizarre and contradictory, no? But it would seem less weird if it was:
// this is safe
unchecked{ ... }
Of course, there's no reason you should be using an unsafe{} block for only safe code, so unsafe{} is usually a pretty good label for the content of the block.
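And the compiler backs this up: an unsafe block containing only safe code compiles and behaves exactly as it would without the block; rustc even points out the redundancy via the unused_unsafe lint. A small sketch:

    #[allow(unused_unsafe)] // silence the lint that flags the pointless block
    fn main() {
        // Nothing in here requires unsafe; the block changes nothing.
        unsafe {
            let v = vec![1, 2, 3];
            println!("sum = {}", v.iter().sum::<i32>());
        }
    }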
Right, but the hope was that Rust was all the way safe, not just most of the way safe. That's its main niche. That's why someone would choose it over C++. But if 95% of your code is safe and 5% of your code isn't, and the safe 95% uses that unsafe 5% all over the place, the safe code inherently becomes an entry point into unsafe code, kinda-sorta making it unsafe too. So it ends up feeling like all the hard work to keep things safe was a waste.
But when you're debugging, you know where to focus your efforts.
If 95% of your code touches the other 5%, then that 5% is probably pretty important and useful and hopefully fast. Spending some extra time to verify safety in exchange for speed/control is a small price to pay, and will pay dividends from the other 95% of code that doesn't have to be inspected so closely.
No, the objective for Rust was never to be all the way safe.
Rust draws much of its inspiration from C++ and seeks to be a systems language where "there should be no room for a lower-level language between [it] and native machine code".
If you want that, you need unsafe blocks. The intent is to use those blocks to build safe abstractions that can be used for the lion's share of your program.
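The textbook illustration of that is a simplified split_at_mut: the borrow checker can't see that the two halves of the slice don't overlap, so raw pointers are used internally, but callers only ever see a safe signature. A rough sketch:

    fn split_at_mut<T>(slice: &mut [T], mid: usize) -> (&mut [T], &mut [T]) {
        let len = slice.len();
        assert!(mid <= len);
        let ptr = slice.as_mut_ptr();
        // SAFETY: mid <= len, so the two ranges are in bounds and disjoint.
        unsafe {
            (
                std::slice::from_raw_parts_mut(ptr, mid),
                std::slice::from_raw_parts_mut(ptr.add(mid), len - mid),
            )
        }
    }

    fn main() {
        let mut v = [1, 2, 3, 4, 5];
        let (a, b) = split_at_mut(&mut v, 2);
        a[0] = 10;
        b[0] = 30;
        assert_eq!(v, [10, 2, 30, 4, 5]);
    }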
I'm not having a problem with it personally, I'm just saying that trying to educate the developers instead of addressing it on Rust's end seems like a potentially losing battle, as unfortunate as it might be.
I hope we don't settle for unsafe being okay forever. Right now, sometimes it is the right thing to do and there shouldn't be any regret. But in the future, I hope Rust's compilers become better.
There are two things I consider necessary for that. First, that the Rust compilers become smarter in proving the safety of things by themselves. Second, that the Rust compilers become capable of verifying proofs given to them that show the safety of a given piece of code the compilers can't prove as safe on their own.
I think that it is a worthwhile goal to be able to someday formally prove all the unsafe blocks correct in, say, the standard library and popular crates.
However, I honestly feel that the Rust language itself isn't really the right language to be doing these kinds of proofs in. I think that the right language to do the actual formal verification in is likely to be something closer to Coq. Whoever undertakes this effort would probably use a theorem prover to prove the unsafe Rust code correct, as was done for seL4 using Isabelle.
You can think of this setup as offering a sort of layered verification: once the small core of code (the unsafe code in the standard library and popular crates, say) is proven correct, the type system and borrow checker effectively prove the rest to be memory- and thread-safe. In fact, that's what would make this system practical: most programmers wouldn't have to understand anything about the complexities of the theorem prover. They would get the benefit of verified memory- and thread-safety for free just by learning Rust.
I was also thinking primarily of standard libraries and other frequently used code, or, say, critical code in an OS.
We don't need to verify every single occurrence of unsafe, but whenever unsafe is necessary, I feel the lack of some other guarantee holding us back.
Having optional small-scale(!) verification in Rust would be awesome.
Using other theorem provers has the same problems as when one tries to establish tools for checking memory safety in C: it isn't the default, just an addon.
In my eyes, there is a scale of verification-readiness in which Rust can position itself. The least ready would be having it completely separate and done by other tools in other files, the most ready would be having syntax for it in the language, having it in the same files, and checked by the standard compiler.
I think every bit of verification-readiness Rust has by default will have a strong effect that we can't achieve through other means.
Maybe some things wouldn't impact those not interested in verification. Say, alongside 'safe' and 'unsafe', also have a 'verified' environment that contains a proof language as a core part of Rust. That way all ordinary code would still be valid and everybody would be free to use Rust without verification in mind.
Any ideas what could be done to make Rust verification-readier?
Rust needs unsafe to be Rust. It's reasonable to expect (safe) Rust to become more expressive over time so that more things can be written in safe Rust (or written more conveniently) but expecting the escape hatch to go away entirely is misguided. It is as unlikely as C++ or C trying to do away with inline assembly.
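The comparison is apt quite literally: Rust's own inline assembly goes through the same escape hatch, since the compiler can't reason about what the assembly does. A rough x86_64-only sketch:

    use std::arch::asm;

    fn add_one(mut x: u64) -> u64 {
        // The asm! macro can only be invoked inside an unsafe block.
        unsafe {
            asm!("add {0}, 1", inout(reg) x);
        }
        x
    }

    fn main() {
        assert_eq!(add_one(41), 42);
    }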
IMO that's less of a Rust problem than a problem for the next generation of verified languages. Probably total functional languages, or maybe just usable versions of Coq.
The Rust compiler will not solve the halting problem. It is pretty trivial to write programs which are safe if and only if they halt. So 100% safe is simply absurd.
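A toy sketch of that kind of construction (mirrored here as "exhibits undefined behavior only if the loop halts", which is the same reduction): deciding whether the dereference below is ever reached means deciding whether the loop terminates, which a compiler cannot do in general. The Collatz loop is just a stand-in for an arbitrary computation.

    fn main() {
        // A pointer that dangles once the inner scope ends.
        let dangling: *const i32;
        {
            let tmp = 42;
            dangling = &tmp;
        }

        // Stand-in for an arbitrary computation with unknown termination.
        let mut n: u64 = 27;
        while n != 1 {
            n = if n % 2 == 0 { n / 2 } else { 3 * n + 1 };
        }

        // Reached only if the loop halts; dereferencing `dangling` here is
        // undefined behavior, so "is this program memory-safe?" reduces to
        // "does that loop terminate?".
        unsafe {
            println!("{}", *dangling);
        }
    }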
You don't need to solve the halting problem just to verify an existing proof of a semantic property, nor to use smarter heuristics to avoid requiring such a proof. 100% safe is totally reachable, though I bet the syntax would be pretty hairy added to today's Rust.
Hmm, good point. You'd have to extend the formal verification into the C code, at least. If you can do that, it might be easier to just write verified C.
Inline ASM is exactly as verifiable as the underlying CPU, given an adequate model in the verifier. That's probably easier than verifying C, which introduces extra ambiguity in its semantics. But yeah, verifiable CPUs would be nice.
That may be due to a quirk of how we use it in English (and maybe other languages, but I can't speak to that). For some reason, "unsafe" or "not safe" isn't always perceived as a logical negation the way some other concepts usually are, and is instead parsed as the opposite of safe. "Verified" seems to suffer from this less, but is also less strong in what it positively implies.
Is it possible that the link to personal safety of the terminology is what provides the best positive connotation in terminology but also causes this unwanted ambiguity in negated interpretation? If so, that's an annoying catch-22...
The word that I have seen in similar contexts is 'trusted', which I like and would have preferred -- the block has extra privileges and isn't machine verified. Some people tend to give 'trusted' an opposite reading when they first come across it, though.
The problem with that word is it doesn't say who is doing the trusting, which is the crucial point. In fact, "trusted" can be used to describe both safe code and unsafe code. In the safe code, the programmer is trusting the compiler. In unsafe code, the compiler is trusting the programmer. Both code environments are "trusted," but the trust is being given to different parties.
Oh yeah, 'trusted' would definitely give the opposite of the intended meaning! I had to read your comment twice just to get why you were calling it that.
Agreed. Pointers are unsafe. References are safe, and almost powerful enough to make one think they don't need pointers. But if you find yourself needing pointers over references, they're there.
It can't. And there are many things std does that are in unsafe blocks. There's nothing wrong with that if you have a specific use case for it. (And wrap that functionality in a safe function)