Very cool, but why choose an unsafe language when low-level memory-safe languages exist today? We all already know that most security bugs are memory-related.
I get that this is a hobby project, but still: why not deprecate unsafe languages where there are much better alternatives?
Mostly just because C is a lot simpler, and in kernel dev, simplicity is everything. I've used Rust for other projects, but I feel like in kernel dev I'd much rather use a simple and readable language than a safe one.
Rust is supposed to be a "safe" language for low level use, and thus has borrow checker, unsafe, etc. Building a "verifier" on top of Rust seems a bit excessive and unneeded.
> Developers write specifications of what their code should do ... Verus statically checks ... the specifications for all possible executions of the code
Tests do not account for all possible executions of the code, only a subset of them.
Rust is indeed a safe language, in terms of memory safety. Vulnerabilities are still very possible within a rust program, they just need to not rely on memory exploits, and the borrow checker won't catch them. That is why formal verification exists. If you have a really critical, high security application then you should ensure the maximum amount of safety and reliability.
Formal verification enables the developer to write a mathematical proof that the program behaves correctly in all situations, something that the borrow checker cannot do.
Tests and proofs cover very different things. For large code bases, getting all the requirements right is nearly impossible, while tests tend to be obvious: even if we don't have the requirements right (or at all), this one fact is true in this one case. However, the majority of cases are not covered at all, so you can only hope they work.
Yes, but remember that Erlang bug discussed here last week where somebody apparently messed up SSH state transitions in such a way that people could just log in without having the password or anything?
Buffer overflows etc. are absurd things that should not be possible, but preventing them is only the first step towards security.
Buffer overflow vulnerabilities are prevented with bounds checking, but that just means you get a panic at runtime. With formal verification, you can prove that the program never accesses an array with an index that's out of bounds.
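To make that concrete, here's roughly what such a proof can look like with a tool like Kani (which comes up further down in the thread). The helper function here is made up for illustration:

fn get_clamped(data: &[u8], i: usize) -> u8 {
    // clamp the index into range instead of trusting the caller
    let idx = if i < data.len() { i } else { data.len() - 1 };
    data[idx]
}

#[kani::proof]
fn get_clamped_never_out_of_bounds() {
    let data = [1u8, 2, 3, 4];
    let i: usize = kani::any(); // symbolic value: stands in for every usize
    let _ = get_clamped(&data, i); // proved to never index out of bounds
}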
I guess you're asking why that wasn't built into Rust from the start; after all, there are programming languages with formal verification and theorem proving built in, e.g., for imperative languages, the SPARK extension to Ada, as well as ATS; or, for a functional one, Idris. My guess is that Rust never would have become popular if you had needed to write actual formal proofs to guarantee some degree of safety, since satisfying the borrow checker is easier by comparison, and it probably would also have made Rust a lot harder to develop. The borrow checker simply eliminating use-after-free errors and data races in safe code was good enough.
I don't know why you are trolling us. Tests aren't supposed to check every input. The entire point of classic testing is to define static execution traces by hand. That is obviously meant to only cover a finite number of execution traces. Even 100% test coverage doesn't give you proof over all possible execution traces. Tests prove that the software "works", but they do not prove the absence of bugs (which inherently requires you to specify what a "bug" is, hence the need for a specification).
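A toy example of "tests pass, bug remains": -x overflows for i32::MIN, but no hand-written list of cases is guaranteed to include that one value.

fn abs_val(x: i32) -> i32 {
    if x < 0 { -x } else { x }
}

#[test]
fn abs_val_works() {
    assert_eq!(abs_val(-5), 5); // passes
    assert_eq!(abs_val(0), 0);  // passes
    assert_eq!(abs_val(7), 7);  // passes
    // ...yet abs_val(i32::MIN) panics in a debug build, and 100%
    // line coverage of abs_val still wouldn't have caught it.
}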
Not only is static verification more powerful, there is also a massive usability difference. You define your pre and post conditions in each function (also known as specification or design contract) and the tool will automatically check that you do not violate these conditions. It solves a very different problem from classic unit or integration tests.
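A sketch of what that looks like, using Kani's function-contract attributes (the same mechanism as the snippet quoted below; the function is invented, and I may be slightly off on the exact syntax):

#[kani::requires(divisor != 0)] // precondition: callers must uphold this
#[kani::ensures(|result| *result <= dividend)] // postcondition: the tool proves this
fn checked_div(dividend: u32, divisor: u32) -> u32 {
    dividend / divisor
}

#[kani::proof_for_contract(checked_div)]
fn verify_checked_div() {
    // verified for every input pair satisfying the precondition
    checked_div(kani::any(), kani::any());
}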
Obviously not. Suppose the input to my function is a 64-bit integer. My test cannot possibly try every possible 64-bit integer. It would take years for such a test to finish.
This is why tools like formal verification and symbolic analyses can help you establish that for all possible integers X, your function does the right thing (for some definition of “right”). You get this assurance without having to actually enumerate all X.
Indeed. Exhaustively testing a 64-bit input space requires about 600 machine-years for each nanosecond that the function under test takes to run. So, _very_ short-running critical functions are testable on a cluster, but not efficiently testable, and if a function is simple enough to be testable, then it's usually easier to prove that it's correct instead.
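A quick back-of-envelope check of that figure:

fn main() {
    let ns = 2f64.powi(64);                          // ~1.84e19 ns of work
    let seconds = ns / 1e9;                          // ~1.84e10 s
    let years = seconds / (365.25 * 24.0 * 3600.0);  // seconds per Julian year
    println!("{years:.0} machine-years");            // prints 585
}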
#[kani::proof_for_contract(NonNull::new_unchecked)]
pub fn non_null_check_new_unchecked() {
    let raw_ptr = kani::any::<usize>() as *mut i32;
    unsafe {
        let _ = NonNull::new_unchecked(raw_ptr);
    }
}
It looks like a test, but it is actually checking that every possible usize, when converted to a pointer to i32 and passed to NonNull::new_unchecked, will follow the contract of NonNull::new_unchecked, which is defined here.
Which means: if the caller guarantees that the parameter ptr is not null, then result.as_ptr() is the same as the passed ptr.
That's a kind of trivial contract, but Kani tests it for all possible pointers (rather than some cherry-picked pointers like the null pointer and something else), without actually brute-forcing them; instead it recognizes when many inputs test the same thing (while still catching a bug if the code changes to handle some input differently). And this approach scales to non-trivial properties too; a lot of things in the stdlib have non-trivial invariants.
It's not that different from writing a regular test; it's just more powerful. And you can even use this #[requires] and #[ensures] syntax to check properties in regular tests if you use the https://crates.io/crates/contracts crate.
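For example, with the contracts crate the same conditions become runtime assertions in your normal test runs (ret is the crate's name for the return value; the function itself is made up for illustration):

use contracts::*;

#[requires(den != 0)]  // checked when the function is entered
#[ensures(ret <= num)] // checked when the function returns
fn int_div(num: u32, den: u32) -> u32 {
    num / den
}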
Really, if you have ever used the https://proptest-rs.github.io/proptest/intro.html or the https://crates.io/crates/quickcheck crate: software verification is like writing a property test, but rather than testing N examples generated at random, it tests all possible examples at once. And it works even when the space of possible examples is infinite or prohibitively large.
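If you haven't seen one, a property test with proptest looks something like this (toy function; a verifier would establish the same property for every pair instead of N random samples):

use proptest::prelude::*;

fn abs_diff(a: i64, b: i64) -> i64 {
    if a >= b { a - b } else { b - a }
}

proptest! {
    #[test]
    fn abs_diff_is_symmetric(a in any::<i32>(), b in any::<i32>()) {
        // checked for N randomly generated pairs, not all of them
        prop_assert_eq!(abs_diff(a as i64, b as i64), abs_diff(b as i64, a as i64));
    }
}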
If I read it correctly, it also checks raw memory access during compilation. My assumption is that it also checks unsafe blocks, which is important when working at a low level.
Annoying CLI; the things called "casks"; the behavior of updating ALL the packages on your system when installing only one program; and bad handling of broken packages.
Is using a web browser really necessary for viewing Markdown? It seems a really inefficient way to view what's supposed to be a small and minimal language.
Emphasis on "Github flavored", which supports additional extensions to the markdown spec that a TUI can't properly preview. You need some kind of GUI (preferably a browser since it does support some html) in order to properly preview it.
Yes, exactly. For example, Mermaid diagrams (which are supported in this project) can't be rendered in a TUI (well, technically you can if you use something like Kitty's image protocol, but I think that's being pedantic).
As someone said already, Markdown was designed to be compiled to HTML, and browsers are the best HTML renderers we have. While you can technically interpret it semantically and render it without a browser, you would still need to support HTML, since Markdown supports embedded HTML (e.g. the `<details>` tag is a popular one for hiding information). At that point you're probably better off using a browser anyway.
By the way, while browsers can be both CPU and memory hogs, they're not inherently inefficient; it mostly depends on what the website is doing. This project uses minimal JavaScript and no frameworks, and while I didn't measure it, I assume this site should be reasonably efficient in both memory and CPU usage.
Fair enough, but (speaking as a web developer) running a stack locally on Apple Silicon (especially if it has any "interesting" dependencies such as compiled Rust/C libs and whatnot) and expecting the same stack to run identically in the cloud on x64 can still expose differential behavior.
At the very least, I could test mock deploys from the Apple Silicon side to the x64 VM running locally (over loopback) as an extra data point.
I don't actually use it for this use case, but now that I'm thinking about it, I might try, because this seems useful.