

Thank you!

Very cool, but why choose an unsafe language when today there exist low-level memory-safe languages? We all know already that most security bugs are memory-related.

I get that this is a hobby project, but still, why not deprecate unsafe languages when there are much better alternatives?


Mostly just because C is a lot simpler, and in kernel dev, simplicity is everything. I've used Rust for other projects, but I feel like in kernel dev I would much rather use a simple and readable language than a safe language.

Rust is supposed to be a "safe" language for low-level use, and thus has the borrow checker, unsafe, etc. Building a "verifier" on top of Rust seems a bit excessive and unneeded.

> Developers write specifications of what their code should do ... Verus statically checks ... the specifications for all possible executions of the code

This is what tests are for.


Tests do not account for all possible executions of the code, only a subset of them.

Rust is indeed a safe language, in terms of memory safety. Vulnerabilities are still very possible within a Rust program; they just can't rely on memory exploits, and the borrow checker won't catch them. That is why formal verification exists. If you have a really critical, high-security application, then you should ensure the maximum amount of safety and reliability.

Formal verification enables the developer to write a mathematical proof that the program behaves correctly in all situations, something that the borrow checker cannot do.
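
As a rough sketch of what that looks like in practice (the function and its postconditions here are invented for illustration, not taken from the project), a tool like Verus lets you attach conditions to a function that the verifier must discharge for every possible input at compile time:

    use vstd::prelude::*;

    verus! {
    // The `ensures` clauses are proof obligations: the verifier rejects the
    // program unless they hold for every possible pair of inputs.
    fn max_u64(a: u64, b: u64) -> (r: u64)
        ensures
            r >= a,
            r >= b,
            r == a || r == b,
    {
        if a >= b { a } else { b }
    }
    } // verus!

A test could only ever check a handful of (a, b) pairs; here the claim is discharged once, for all of them.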


Tests and proofs cover very different things. For large code bases, getting all the requirements right is nearly impossible, while tests tend to be obvious: even if we don't have the requirements right (or at all), this one fact is true in this one case. However, the majority of cases are not covered at all, so you can only hope they work.

Both have their place.


Yes, but remember that Erlang bug discussed here last week where somebody apparently messed up SSH state transitions in such a way that people could just log in without having the password or anything?

Buffer overflows etc. are absurd things that should not be possible, but preventing them is the first step towards security.


This is something that Rust should prevent, not another layer on top of Rust.

Buffer overflow vulnerabilities are prevented with bounds checking, but that just means you get a panic at runtime. With formal verification, you can prove that the program never accesses an array with an index that's out of bounds.
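
As a hypothetical sketch (the lookup function and harness below are made up, written in the style of the Kani harnesses discussed elsewhere in this thread): the bounds check turns a bad index into a runtime panic, whereas a proof harness shows the panic is unreachable for any input.

    // The index is masked into 0..16, so the bounds check can never fire.
    fn nibble(table: &[u8; 16], x: u8) -> u8 {
        table[(x & 0x0f) as usize]
    }

    #[kani::proof]
    fn nibble_never_out_of_bounds() {
        let table = [0u8; 16];
        let x: u8 = kani::any(); // stands for every possible u8 at once
        let _ = nibble(&table, x); // verified to never panic, for any x
    }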

I guess you're asking why that wasn't built into Rust from the start; after all, there are programming languages with formal verification and theorem proving built in, e.g. for imperative languages, the SPARK extension to Ada as well as ATS, or, for a functional one, Idris. My guess is that Rust never would have become popular if you needed to write actual formal proofs to guarantee some degree of safety, since satisfying the borrow checker is easier in comparison, and it also probably would have made Rust a lot harder to develop. The borrow checker simply eliminating use-after-free errors and data races in safe code was good enough.


Yes, it should make sure that there are no buffer overflows, but this is the next step.

Why is verification excessive but not tests?

A verification of a property is stronger than a mere test of a property.


The test is supposed to be the verification.

Tests verify that the code works on specific inputs. Formal verification checks that it works on every input.

Can't you make the tests check every input? (This is also what they're supposed to do.)

I don't know why you are trolling us. Tests aren't supposed to check every input. The entire point of classic testing is to define static execution traces by hand. That is obviously meant to only cover a finite number of execution traces. Even 100% test coverage doesn't give you proof over all possible execution traces. Tests prove that the software "works", but they do not prove the absence of bugs (which inherently requires you to specify what a "bug" is, hence the need for a specification).

Not only is static verification more powerful, there is also a massive usability difference. You define your pre and post conditions in each function (also known as specification or design contract) and the tool will automatically check that you do not violate these conditions. It solves a very different problem from classic unit or integration tests.


Obviously not. Suppose the input to my function is a 64-bit integer. My test cannot possibly try every possible 64-bit integer. It would take years for such a test to finish.

This is why tools like formal verification and symbolic analyses can help you establish that for all possible integers X, your function does the right thing (for some definition of “right”). You get this assurance without having to actually enumerate all X.
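
As a made-up example of the difference, compare a hand-picked test against a symbolic harness in the style of Kani:

    fn saturating_double(x: u64) -> u64 {
        x.saturating_mul(2)
    }

    #[test]
    fn double_spot_checks() {
        // A test can only pin down a handful of the 2^64 possible inputs.
        for x in [0, 1, 42, u64::MAX / 2, u64::MAX] {
            assert!(saturating_double(x) >= x);
        }
    }

    #[kani::proof]
    fn double_never_shrinks() {
        // kani::any() stands for every possible u64 at once; the property
        // is checked for all of them without enumerating a single value.
        let x: u64 = kani::any();
        assert!(saturating_double(x) >= x);
    }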


Indeed. Exhaustively testing a 64-bit input space requires about 600 machine-years for each nanosecond that the function under test takes to run. So, _very_ short-running critical functions are testable on a cluster, but not efficiently testable, and if a function is simple enough to be testable, then it's usually easier to prove that it's correct instead.

But why? Tests can't catch everything. A single verified predicate is equivalent to a very large, potentially infinite number of tests.

Right now the Rust stdlib is being verified using Kani, a model checker, https://model-checking.github.io/verify-rust-std/

In Kani, a proof looks like this

https://github.com/model-checking/verify-rust-std/blob/00169...

    #[kani::proof_for_contract(NonNull::new_unchecked)]
    pub fn non_null_check_new_unchecked() {
        let raw_ptr = kani::any::<usize>() as *mut i32;
        unsafe {
            let _ = NonNull::new_unchecked(raw_ptr);
        }
    }
It looks like a test, but actually it is testing that every possible usize, when converted to a pointer to i32 and built with NonNull::new_unchecked, will follow the contract of NonNull::new_unchecked, which is defined here

https://github.com/model-checking/verify-rust-std/blob/00169...

    #[requires(!ptr.is_null())]
    #[ensures(|result| result.as_ptr() == ptr)]
Which means: if the caller guarantees that the parameter ptr is not null, then result.as_ptr() is the same as the passed ptr

That's a kind of trivial contract, but Kani tests it for all possible pointers (rather than some cherry-picked pointers like the null pointer and something else), without actually brute-forcing them, instead recognizing when many inputs test the same thing (while still catching a bug if the code changes to handle some input differently). And this approach scales to non-trivial properties too; a lot of things in the stdlib have non-trivial invariants.

You can check out other proofs here https://github.com/search?q=repo%3Amodel-checking%2Fverify-r...

It's not that different from writing a regular test, it's just more powerful. And you can even use this #[requires] and #[ensures] syntax to test properties in regular tests if you use the https://crates.io/crates/contracts crate.

Really if you have ever used the https://proptest-rs.github.io/proptest/intro.html or the https://crates.io/crates/quickcheck crate, software verification is like writing a property test, but rather than testing N examples generated at random, it tests all possible examples at once. And it works when the space of possible examples is infinite or prohibitively large, too.
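
For example, a property test like this (the round-trip property is just an illustration) only samples a few hundred random values per run, while a verifier discharges the same kind of assertion for every value at once:

    use proptest::prelude::*;

    proptest! {
        // proptest generates a few hundred random u64 values each run and
        // checks the property on each one; a verifier would prove it holds
        // for all 2^64 values.
        #[test]
        fn le_bytes_round_trips(x in any::<u64>()) {
            prop_assert_eq!(u64::from_le_bytes(x.to_le_bytes()), x);
        }
    }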


Say I have as input a byte. I create a test that exercises every possible byte value.

A verification would be the equivalent of that. In practice that matters since the input space is often much larger than just one byte.
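
A sketch of that idea (the property is made up): a one-byte input space is small enough to enumerate in an ordinary test, which is exactly what verification generalizes to input spaces you can't enumerate.

    #[test]
    fn holds_for_every_byte() {
        // 256 inputs: small enough to check them all exhaustively.
        for b in 0u8..=u8::MAX {
            assert!((b as u16) * 2 <= 510);
        }
    }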


> Building a "verifier" on top of Rust seems a bit excessive and unneeded.

Well, Rust doesn't yet have taint checking or effects, so there are two things lacking.


If I read it correctly, it also checks raw memory access during compilation. My assumption is that it also checks unsafe blocks, which is important when working at a low level.

Annoying CLI; the things called "casks"; the annoying behavior of updating ALL the packages on your system when installing only one program; and bad handling of broken packages.

This tool [0] might help for pinning casks.

[0] https://github.com/buo/homebrew-cask-upgrade


Ah, I've started this one already. But htop is a formula, not a cask, so this doesn't help.

That's a very strong obfuscation. Takes a lot of work to deobfuscate such a thing. Great writeup.

Is using a web browser really necessary for viewing Markdown? It seems a really inefficient way to view what's supposed to be a small and minimal language.

Emphasis on "GitHub flavored", which supports additional extensions to the Markdown spec that a TUI can't properly preview. You need some kind of GUI (preferably a browser, since it does support some HTML) in order to properly preview it.

Yes, exactly. For example, Mermaid diagrams (which are supported in this project) can't be rendered in a TUI (well, technically you can if you use something like Kitty's image protocol, but I think that's being pedantic).

Using a GUI framework over a web browser is still much more efficient.

As someone said already, Markdown was designed to be compiled to HTML, and browsers are the best HTML renderers we have. While you can technically interpret it semantically and render it without a browser, you would still need to support HTML, since Markdown supports embedded HTML (e.g. the `<details>` tag is a popular one for hiding information). At that point you're probably better off using a browser anyway.

By the way, while browsers can be both CPU and memory hogs, they're not inherently inefficient; it mostly depends on what the website is doing. This project uses minimal JavaScript and no frameworks, and while I didn't measure it, I assume this site should be reasonably efficient in both memory and CPU usage.


Markdown was designed to compile to HTML and web browsers are designed to render HTML. So I’d say a browser is the only reasonable choice.

Building on a virtual x86_64 machine barely achieves the performance of building on the host, if there were an applicable way to do it at all.

Fair enough, but (speaking as a web developer), running a stack locally on Apple Silicon (especially if it has any "interesting" dependencies such as compiled Rust/C libs and whatnot) and expecting the same stack to run identically in the cloud on x64 can still expose differential behavior

At the very least, I could test mock deploys from the Apple Silicon side to the x64 VM running locally (over loopback) as an extra data point

I don't actually use it for this use-case but now that I'm thinking about it, I might try, because this seems useful


From a web standpoint it's possible, but when I tried building AOSP on macOS, it didn't turn out great.

You and me same.


Man, girls are the biggest mystery in the world. I'm gonna wait until the next life to figure them out. Cheers!


Used to but now only for cakes and similar.

It's not really needed, and it's much preferable to get some solid food instead. It's not hard to consume another 20g of protein from solid food.


Sad that I didn't read this comment before reading this article.

