Chrome OS KVM - A component written in Rust (googlesource.com)
239 points by cyber1 on Sept 27, 2017 | hide | past | favorite | 107 comments


There is a somewhat expanded README that has yet to be reviewed and checked in here: https://chromium.googlesource.com/chromiumos/platform/crosvm...


Solo5/ukvm unikernel monitor co-author here: I've been thinking about rewriting ukvm in Rust off and on for some time now, your work provides a proof point that it can be done. I'll be following it with interest.

Aside: I really like what the ChromeOS team has done over the years to advance the state of OS security for consumers, keep up the good work!


Isn't stuff like that exactly counteracting Rust's raison d'être?

> // This is safe; nothing else will use or hold onto the raw sock fd.

> Ok(unsafe { net::UdpSocket::from_raw_fd(sock) })

https://chromium.googlesource.com/chromiumos/platform/crosvm...


That's low-level code implementing a wrapper around the underlying libc socket API; there's no alternative to unsafe blocks there as it wraps a legacy API written in C.

The "this is safe" comment is just a (very verbose) explanation of why the wrapper code is doing the correct thing. Notably, there's another similar comment just above it. In fact, looking through the file, if anything I'm quite impressed at how carefully it's written.


Author here: one of the policies we tried to stick to when writing unsafe code was to document each case. It can be tedious, but it encourages having less unsafe code and makes the author really think about whether this unsafe code really meets the same guarantees as safe Rust according to https://doc.rust-lang.org/nomicon/meet-safe-and-unsafe.html


I've also found this to be a really good approach to unsafe. Declare why something is actually safe and you may realize it isn't. It also helps when writing tests to assert safety - you have a comment explaining exactly the invariants you need to test.
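As a toy illustration of that policy (a hypothetical example, not crosvm code), assuming the convention of a safety comment justifying every `unsafe` block:

```rust
// Illustrative sketch: the check that makes the unsafe call sound is
// stated explicitly, right where the unsafe code appears.
fn ascii_upper(bytes: Vec<u8>) -> Option<String> {
    if !bytes.is_ascii() {
        return None;
    }
    let upper: Vec<u8> = bytes.iter().map(|b| b.to_ascii_uppercase()).collect();
    // This is safe: we verified the input is ASCII above, and uppercasing
    // ASCII bytes cannot produce invalid UTF-8.
    Some(unsafe { String::from_utf8_unchecked(upper) })
}

fn main() {
    assert_eq!(ascii_upper(b"crosvm".to_vec()).as_deref(), Some("CROSVM"));
    assert_eq!(ascii_upper(vec![0xFF]), None);
    println!("{:?}", ascii_upper(b"crosvm".to_vec()));
}
```

Writing the comment forces you to articulate the invariant; if you can't, that's the signal the code isn't actually safe.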


The line above has this:

    // This is safe since we check the return value.
    let sock = unsafe { libc::socket(libc::AF_INET, libc::SOCK_DGRAM, 0) };
    if sock < 0 {
        return Err(Error::CreateSocket(IoError::last_os_error()));
    }
I think in this case it would be better to put the return value check within the unsafe block; this way the unsafety does not "leak out" of the block, so to speak, and it is easier to audit. Of course, in such a trivial case it does not matter much.
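A sketch of the suggested layout, using a stdlib example rather than libc so it stands alone: the validity check lives inside the `unsafe` block, next to the unchecked call it justifies.

```rust
// Hypothetical example (not from crosvm): keep the precondition check and
// the unchecked operation together so the unsafety doesn't leak out.
fn checked_char(code: u32) -> Option<char> {
    let c = unsafe {
        // An auditor sees the check right next to the call that relies on it.
        if code > 0x10FFFF || (0xD800..=0xDFFF).contains(&code) {
            return None;
        }
        std::char::from_u32_unchecked(code)
    };
    Some(c)
}

fn main() {
    assert_eq!(checked_char(0x41), Some('A'));
    assert_eq!(checked_char(0xD800), None);
    println!("{:?}", checked_char(0x1F980));
}
```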


I think it's best to keep unsafe blocks as small as possible. Within an unsafe block there is the potential for undefined behavior, so you want to get out of there ASAP.

Just my opinion.


Personally I disagree, but mostly because I think the `unsafe` model Rust has is worth much less than people give it credit for.

For one, there are the current documentation problems - it's not documented what features/invariants the optimizer and language actually require to be true, and the nitty-gritty details are very fuzzy, so switching between `unsafe` and `safe` is error-prone and you're going to get it wrong. The more you do it, the more likely it is your code will be broken in the future when you find out something you thought was OK isn't actually something the Rust devs like, or isn't something they wanted you doing. If you do more of the `unsafe` work in one big `unsafe` block rather than jumping in and out, you're less likely to have issues in the future, because there are fewer points where you have to ensure all the Rust invariants are met.

But the bigger detail for me is that, even if the above problem is fixed, `unsafe` doesn't really denote the areas we would consider the `unsafe` areas anyway, so "getting out of there ASAP" is not always a helpful mindset and can easily be counter-productive, resulting in you marking things `safe` when they're not actually `safe`. For example, dereferencing a pointer is `unsafe`, but doing pointer arithmetic is `safe`. So you can easily just wrap the dereference in an `unsafe` block and you're technically good to go (you can even wrap it in a pretty interface, like I've seen people do). But all the spots where you do pointer arithmetic can easily introduce bugs into your `unsafe` code, making it hardly any better than C code that could have the same problem (half the point of using Rust is to avoid bugs from unchecked pointer arithmetic!).

My point being, just because your `unsafe` blocks are small doesn't tell you anything about their correctness, and it likely means they rely on outside information to be correct. And if that is the case, then that outside code is effectively just as dangerous as your `unsafe` code. This may be obvious to you, and I apologize if it is, but this is an issue/misconception I see a lot. IMO, you should mark anything `unsafe` if using it within the bounds of `safe` Rust could potentially cause `unsafe` code to fail, even if the code itself is completely `safe` code. Only if you have an interface that meets all the invariants Rust requires should you allow it to be considered `safe`.
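A minimal sketch of that pointer-arithmetic point: the arithmetic is safe code, and only the dereference needs `unsafe`, yet the block's soundness depends entirely on the safe code around it.

```rust
// Illustrative only: a tiny unsafe block whose correctness is decided
// entirely by safe code outside it.
fn read_at(data: &[u8], i: usize) -> u8 {
    // Pointer arithmetic -- *safe* Rust. Nothing here stops us from
    // computing an out-of-bounds pointer (e.g. wrapping_add(1000)).
    let p = data.as_ptr().wrapping_add(i);
    // The unsafe block is as small as it can be, but if the safe
    // arithmetic above went out of bounds, this is undefined behavior.
    unsafe { *p }
}

fn main() {
    let data = [10u8, 20, 30, 40];
    assert_eq!(read_at(&data, 2), 30);
    println!("{}", read_at(&data, 2));
}
```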


Speaking of policies, here's a question I've had for a while now: Google has rather (in)famous style guides for various languages; based on your experience with Google's codebases, have you given any thought to what a hypothetical "Google Rust Style Guide" would look like?

(When I've pondered this question before, "document every instance of `unsafe`" has been one of my policies (as obvious as it may seem), so I think you're on the right track!)


The default is safe ownership semantics. Using unsafe blocks on occasion when it makes sense is an important feature.

The important bit to me is that the default pointer type and the standard library don't start you out on the wrong foot.


Not sure why you're downvoted because it's a perfectly reasonable question.

Rust's safety guarantees are mostly there to prevent you from making a certain class of coding errors. "unsafe" means that you've disabled those protections, but that just means that you have to be extra careful not to make those mistakes, and that people reviewing your code need to spend extra time to make sure that you haven't made those mistakes.


No! Unsafe code is perfectly reasonable in rust, it’s just totally clear where it is and can be reviewed to make certain it’s safe.


I take the exact opposite view: the goal of Rust is to give you tools to manage your unsafe code. Writing Rust code that avoids `unsafe` is counteracting its raison d'être.


Nope, this is exactly what low-level Rust code should look like.


Is Rust an officially sanctioned language at Google?


Author here: Rust is not officially sanctioned at Google, but there are pockets of folks using it here. The trick with using Rust in this component was convincing my coworkers that no other language was right for the job, which I believe to be the case in this instance.

That being said, there was a ton of work getting Rust to play nice within the Chrome OS build environment. The Rust folks have been super helpful in answering my questions though.


> The trick with using Rust in this component was convincing my coworkers that no other language was right for the job, which I believe to be the case in this instance.

I ran into a similar use case in one of my own projects: a vobsub subtitle decoder, which parses complicated binary data, and which I someday want to run as a web service. So obviously, I want to ensure that there are no vulnerabilities in my code.

I wrote the code in Rust, and then I used 'cargo fuzz' to try and find vulnerabilities. After running a billion(!) fuzz iterations, I found 5 bugs (see the 'vobsub' section of the trophy case for a list https://github.com/rust-fuzz/trophy-case).

Happily, not one of those bugs could actually be escalated into an actual exploit. In each case, Rust's various runtime checks successfully caught the problem and turned it into a controlled panic. (In practice, this would restart the web server cleanly.)

So my takeaway from this was that whenever I want a language (1) with no GC, but (2) which I can trust in a security-critical context, Rust is an excellent choice. The fact that I can statically link Linux binaries (like with Go) is a nice plus.
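To illustrate the failure mode described above (a sketch, not the actual vobsub code): an out-of-bounds access caused by malformed input becomes a catchable panic in safe Rust, rather than a wild read.

```rust
use std::panic;

// Hypothetical parser bug: blindly trusts a length byte from untrusted input.
fn parse_len(data: &[u8]) -> u8 {
    let len = data[0] as usize;
    // If `len` is out of bounds, Rust's bounds check panics instead of
    // reading arbitrary memory.
    data[len]
}

fn main() {
    // Well-formed input works normally.
    assert_eq!(parse_len(&[2, 9, 7]), 7);
    // Malicious input: the bug becomes a controlled, recoverable panic.
    let result = panic::catch_unwind(|| parse_len(&[200, 1, 2]));
    assert!(result.is_err());
    println!("caught a controlled panic, no memory unsafety");
}
```

In a web server, the panic would unwind the request handler (or restart the worker) instead of becoming an exploitable memory error.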


> Happily, not one of those bugs could actually be escalated into an actual exploit. In each case, Rust's various runtime checks successfully caught the problem and turned it into a controlled panic.

This has been more or less our experience with fuzzing rust code in firefox too, fwiw. Fuzzing found a lot of panics (and debug assertions / "safe" overflow assertions). In one case it actually found a bug that had been under the radar in the analogous Gecko code for around a decade.


KVM maintainer here. I agree that this makes a lot of sense to write in Rust. Some time ago there was even a demo of a qcow2 format driver for QEMU written in Rust.

I also find your Rust style extremely readable. Great job!


Would you consider releasing parts of this as standalone crates on crates.io? I ask because there are many different things developers would like to do with KVM that might not be the same as the goals of ChromiumOS.

Also, the Wayland stuff looks cool but I'm not sure how you are managing the buffers with just the wl protocol.

There is a Wayland crate for Rust that also has support for generating protocol from the xml descriptions, not sure if you used that there.

The library is here: https://github.com/Smithay/wayland-rs and the part you would want is the `wayland-scanner` crate in that repo.

This seems like a good follow on to the work done in Go and python a while back at Google, though it would be cool to support the virtio p9fs as a root filesystem.

And, I know you can't say anything about this, but I'm happy to see the arm support in there, maybe it's possible that Google supports a fully virtualized Android device with the ability to run first class Linux, ChromeOS, etc. even on locked bootloader devices.

It would even be possible to keep a tiny resident e911-compliant dialer persistent as part of a lock screen.

Anyway, awesome work, I might have to revive kvmd at some point.


>Would you consider releasing parts of this as standalone crates on crates.io? I ask because there are many different things developers would like to do with KVM that might not be the same as the goals of ChromiumOS.

Sadly, not many of the components within crosvm would make good standalone crates on crates.io. The crates in crosvm tend to be laser-focused on solving a specific use case within crosvm. There are more general versions of lots of the functionality we have in crosvm that already exist on crates.io (e.g. eventfd or memory maps) that we skipped to avoid excessive external dependencies.

>Also, the Wayland stuff looks cool but I'm not sure how you are managing the buffers with just the wl protocol.

The virtio wayland I designed is intended to be somewhat agnostic of the underlying wayland protocol for simplicity. It just passes along the protocol bytes to the host's wayland compositor. In order to share FDs with the host (to support e.g. buffers and keymaps), crosvm has a mapping of virtual file descriptor IDs known to the guest kernel to host FDs that get passed along to the wayland compositor.
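The translation table described could be sketched like this (names and types are illustrative, not crosvm's actual implementation): the guest only ever sees small virtual IDs, and the VMM resolves them to host FDs before forwarding messages to the compositor.

```rust
use std::collections::HashMap;

// Hypothetical sketch of a virtual-FD translation table.
struct VfdTable {
    next_id: u32,
    map: HashMap<u32, i32>, // virtual FD id known to the guest -> host RawFd
}

impl VfdTable {
    fn new() -> Self {
        VfdTable { next_id: 1, map: HashMap::new() }
    }

    // Register a host FD and hand the guest an opaque ID for it.
    fn register(&mut self, host_fd: i32) -> u32 {
        let id = self.next_id;
        self.next_id += 1;
        self.map.insert(id, host_fd);
        id
    }

    // Resolve a guest-supplied ID back to the host FD when forwarding.
    fn resolve(&self, id: u32) -> Option<i32> {
        self.map.get(&id).copied()
    }
}

fn main() {
    let mut table = VfdTable::new();
    let id = table.register(42); // 42 stands in for a real compositor FD
    assert_eq!(table.resolve(id), Some(42));
    assert_eq!(table.resolve(999), None); // unknown IDs are rejected
    println!("vfd {} -> host fd {:?}", id, table.resolve(id));
}
```

The indirection means the guest never holds raw host FDs, which keeps the host's descriptor table out of the guest's reach.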

>though it would be cool to support the virtio p9fs as a root filesystem.

We considered it, but for our application, 9pfs was not going to be optimal. That being said, we'd welcome patches that added support for it. :)

>And, I know you can't say anything about this, but I'm happy to see the arm support in there

The ARM support is rather preliminary. crosvm will compile for ARM, but it has yet to successfully boot a VM.


Oh, okay.

I was actually referring to your kvm and kvm-sys crates, though it seems that some of the memory management stuff is closely tied to that. I ask because I tried to build a kvmd daemon using the kvm-rs crate that's out there, but ran into some issues. At the time, bindgen was not quite up to the task of handling the kernel headers itself either.

> The virtio wayland I designed is intended to be somewhat agnostic of the underlying wayland protocol for simplicity. It just passes along the protocol bytes to the host's wayland compositor. In order to share FDs with the host (to support e.g. buffers and keymaps), crosvm has a mapping of virtual file descriptor IDs known to the guest kernel to host FDs that get passed along to the wayland compositor.

Okay, I guess I'm more asking how software running in the VMs is accessing buffers on the GPU which are managed by the Wayland server, as I don't see anything in the virtio folders referring to this. I guess this is just using some kind of shared memory region then? I'll have to look at it more later.

Also, this low-level Wayland really seems like it could be a standard way to access GPUs through a hypervisor, maybe something that could be implemented by (k)qemu or others.

> We considered it, but for our application, 9pfs was not going to be optimal. That being said, we'd welcome patches that added support for it. :)

Yes, and I might try to get to that sometime. I really like the approach starting with vmlinux and not a BIOS implementation, anything to make the OS boot faster in the VM is good.

> The ARM support is rather preliminary. crosvm will compile for ARM, but it has yet to successfully boot a VM.

Understood.


I am super excited to see this! Glad we've already been helpful, but I'd like to reaffirm that if you need anything, we'd like to continue being helpful in the future :)


Thanks! I've really appreciated your Rust guides over the years. It's been so handy that we pre-ordered 5 copies of your Rust Programming Language book a few months back. I can't wait to get my hard copy. :)


Wonderful; we had some delays but it's back to full-steam ahead. I can't wait to get a hard copy too :)


Speaking as one of the folks in charge of languages: No

But we also don't get in the way of teams trying to experiment and see what will work for them. (otherwise, we'd never be able to know what languages to sanction)


I’m on my company’s language group. Do you have any more info on what boundaries are set up for experimental usage of a language? What does the sanctioned/unsanctioned distinction provide if teams can experiment freely?


This would end up a pretty long and detailed discussion and is probably a derail. If you want to discuss it, email me :)


It might be really interesting for those of us on the outside, so if you end up doing it, maybe publish the resulting conversation somewhere if possible?


I don't see your email address on your profile what's the best way to reach you?


dannyb@google


Since Rust is one of the most energy efficient languages and also safe, it sounds like a good fit for Google. I always wondered if they would pick it up.


Can you expand upon what makes Rust more energy efficient than C or C++? What are the language properties that enable Rust code to be more energy optimised?


He/She never said it was more energy efficient than C/C++ so...


It's not even ranked first. Rust comes a close second or third behind C & C++ on most if not all tests listed on https://sites.google.com/view/energy-efficiency-languages/re...


This is more a reflection of where the compiler is at, not the language. As a compiled, strongly-typed language without a GC, there is no theoretical reason why Rust could not be faster than C++. As the use of MIR and HIR (two intermediate languages used within the compiler) increases, more optimizations taking advantage of the memory model could be added.

In the current state, much of that work is left to LLVM which is designed more around the C/C++ memory model and only some of the ownership and aliasing information can be represented in that form.

I am not arguing that Rust is faster now, only that it can be without fundamentally changing the language.


I just clicked that link. Rust is in fact first in a few of those categories. I’m not sure what your point is though, as the comment says that it is “one of” the most energy efficient languages, not the most energy efficient. It appears to generally be in the top three.

A couple of points here. Rust is generally as fast as C and C++, but on top of that it is memory and data race safe. Said another way, there is a safe language alternative which doesn’t have the pitfalls of C/C++, and that’s a great thing!


In the normalized global results, at the bottom, Rust is above C++, and closer to C than C++ is to it.


"one of the most energy efficient" + "safe"


It isn't sanctioned, but it does look like it's gaining popularity. I've been following the xi-editor project for a while (an editor with the backend written in Rust).

https://github.com/google/xi-editor


Offtopic, but I've always been a bit confused by the notion of "officially sanctioned" at Google... famously it's "Java, C++, Python, Go", but I've never heard anyone mention Javascript, which they obviously must use... or TypeScript (via Angular), or Dart (AdWords and Fuchsia), or C (Google does tons of Linux work), or ObjC/Swift (all their iOS apps). And these are only for projects we know about (if there's no Perl code running somewhere at Google, I'll eat my hat). So what does the officially-sanctioned language list determine?


I would say any language on the official Google style guide web site.

https://google.github.io/styleguide/

All the languages you mention are there.


There's Common Lisp in there! When did that happen?


When they bought ITA Software.


Ah right. Forgot ITA.


How does this square with rumors from a few days ago that the Pixelbook will be able to run virtualized Windows (or Linux)? I do not understand the implication of "No actual hardware is emulated."

If the Pixelbook can run Window or Linux in a VM, then its price slides a little closer towards justifiable.


At least what exists today appears to me aimed at running Linux (or other KVM-aware guests, maybe BSD). I have only briefly read the code.

This may be used as part of running Android apps on ChromeOS in more secure sandboxes, but the Wayland integration suggests to me this might be for running traditional Linux desktop applications on ChromeOS.

I think there's a good chance we'll find out something at the rumored upcoming Pixelbook launch.


Is this a new component for Chrome OS? Would this be a replacement for a project like Crouton or is something like this already being leveraged by that project?


Crouton is just a chroot to some linux distro. It doesn't use any sort of virtualization.


Could this be a sanctioned way to run linux on a Chromebook w/o dev mode?


The README contains commands that seem to require access to the developer-mode shell. Perhaps in the future though.


This is huge for chromeOS. Hope we learn more next week from Google on the 4th.


Whether you want Rust to go, or Go to rust.


I see a place for both of them in my toolbox.

Go still makes network code and certain models of concurrency stupidly simple.

Rust is more of a replacement for C/C++ for me.


Go confirmed for me what I suspected: which is that I hate hate hate the futures style of async programming.

I'm on the lookout for channels and green-threads in Rust (so I can basically write borrow-checked Go-style code in Rust).


Channels are there but you have to use an API for them. Green threads were removed a few years ago, though there are implementations of co-routines, etc. as crates.

Rust is getting an unstable form of async and await as macros/syntax extensions, and there are RFCs discussing adding them to the language in some form. This would still be a wrapper for futures, but a more ergonomic way of using them.


I'm not entirely convinced that async annotations can't be completely elided. I think that any call stack that touches a sync/async API could have the decision of which to use bubbled up to a top level function via generics.


The Fuchsia OS microkernel should be rewritten in Rust, too, especially if it's going to take another 5 years before we even see it in a commercial product. If Google wants to make a modern new OS that will help it avoid many of the existing security problems it needs to keep fixing with Android/Chrome OS right now, then it should do it right and avoid collecting a lot of "security debt" down the road because of unsafe code/poor initial security architecture decisions.


They should rewrite Go in Rust! That way we can avoid all these Go vs Rust discussions ;)

The problem with your idea is that a low-level kernel will use a lot of unsafe Rust, which loses a lot of the benefits.


> The problem with your idea is that a low-level kernel will use a lot of unsafe Rust, which loses a lot of the benefits.

I've actually worked on a toy kernel in Rust (using the excellent tutorial at https://os.phil-opp.com/), and it turns out that, yes, you obviously need to use unsafe code to talk to the actual hardware. But in most cases, you can encapsulate the low-level hardware inside a safe API:

https://github.com/emk/toyos-rs/blob/fdc5fb8cc8152a63d1b6c85...

In this example, only I/O port creation is an unsafe API, because you need to specify a memory address to read and write. But once the port is created (pointed at an appropriate address!), it's perfectly safe to use.

So, yes, kernel-space Rust will use "unsafe" far more often than regular Rust code. But you can still make at least 80% of your code safe, and maybe much more. And the remaining "unsafe" APIs act as a useful warning to pay attention to what you're doing. Plus, Rust is a really nice language to write kernel code in, anyway.
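A user-space sketch of that pattern (not the linked toyos code, and using a plain array to stand in for device memory): construction is `unsafe` because only the caller can vouch for the address, while everything after that is safe.

```rust
use std::ptr;

// Hypothetical register-block wrapper. `new` is unsafe: the caller must
// promise `base` really points at `len` live, mapped registers.
struct Mmio {
    base: *mut u32,
    len: usize,
}

impl Mmio {
    unsafe fn new(base: *mut u32, len: usize) -> Mmio {
        Mmio { base, len }
    }

    // Safe: the validity invariant was established at construction, and
    // the bounds check keeps callers inside the register block.
    fn read(&self, reg: usize) -> u32 {
        assert!(reg < self.len);
        unsafe { ptr::read_volatile(self.base.add(reg)) }
    }

    fn write(&mut self, reg: usize, val: u32) {
        assert!(reg < self.len);
        unsafe { ptr::write_volatile(self.base.add(reg), val) }
    }
}

fn main() {
    // A plain array stands in for device memory so the sketch runs anywhere.
    let mut fake_regs = [0u32; 4];
    let mut dev = unsafe { Mmio::new(fake_regs.as_mut_ptr(), fake_regs.len()) };
    dev.write(1, 0xDEAD_BEEF);
    assert_eq!(dev.read(1), 0xDEAD_BEEF);
    println!("{:#x}", dev.read(1));
}
```

The one `unsafe` constructor call is the single place an auditor has to verify; every use site after that is checked by the compiler.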


Nice to have the dangerous bits annotated – in a C/C++/... kernel, everything is "unsafe".

"Given enough eyeballs, all bugs are shallow" - but it helps when the eyeballs are focussed! :-)


It's hard to emphasize the auditability enough. When auditing a C/C++ codebase, in my experience, you rely a lot on intuition. "Well, there's parsing done here, so I'll focus my efforts" or "Historically we've had a lot of CVEs from this part of the code, so I'll start there".

I'm not a professional RE so my experience is limited, but when I went looking for vulns that's how I went about it, and I think that's generally the case.

With rust there's significantly less guesswork. That parser doesn't use unsafe? OK, let's start elsewhere. That seemingly innocent code uses unsafe? Great, check that out.

You can grep for vulnerabilities, basically.


>But in most cases, you can encapsulate the low-level hardware inside a safe API

I'm probably missing something obvious. But isn't that true for most languages?


In C and C++ you have to treat everything as unsafe, because you have neither a GC nor a compiler to help tell you when you have accidentally violated some memory management invariant that some API was depending on. Rust's type system gives you the tools to define those safe APIs and have them checked by the compiler, even if you need to do some unsafe shenanigans under the hood.


Not really. I mean let's say you program in C, how will you enforce some pointer is never null? In Rust you can say &Object and that reference is never null (modulo any unsafe shenanigans).
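A small example of the difference: in safe Rust, absence is part of the type, so there is no null to forget to check.

```rust
// Illustrative types only. A plain `&T` can never be null, and an
// Option<&T> forces the caller to handle absence before use.
fn name_len(name: &str) -> usize {
    // No null check needed: the compiler guarantees `name` is valid.
    name.len()
}

fn find_user(id: u32) -> Option<&'static str> {
    if id == 1 { Some("alice") } else { None }
}

fn main() {
    assert_eq!(name_len("alice"), 5);
    // The compiler refuses to let us use the reference without handling None.
    match find_user(2) {
        Some(n) => println!("found {}", n),
        None => println!("no such user"),
    }
    assert!(find_user(1).is_some());
}
```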


Null is not really the most pertinent example in this context - at least you get a segfault. What is more important is that in C or C++ (even C++17), it is trivially easy to produce buffer overruns, use after frees, dangling pointers, invalidated iterators, data races etc. That is the unsafety that we are talking about here. Opt-in nullability via Option<T> is nice to have though.


> at least you get a segfault.

If there were a list "falsehoods software engineers believe about memory safety" this should be there.

https://cansecwest.com/slides07/Vector-Rewrite-Attack.pdf


> at least you get a segfault

If the platform has an MMU, if you've properly set up your page table mappings, etc. We are talking about writing an OS here!

Not to mention all the UB around null pointers that can cause miscompilation. Well, not technically miscompilation, but stuff like https://news.ycombinator.com/item?id=15324414


Yeah, I went with familiarity/simplicity in that example.

My point was similar C/C++ don't have a safe subset.


Not a safe subset anyone would want to use at least. You could just not use pointers in your code and you'd have memory safety.


Your code, or any of the code it calls; iterator invalidation, for example, wouldn't force you to use pointers directly, but can still cause memory unsafety.


Thankfully, C++ has references.


C++ references aren't exactly safe either.

    std::vector<int> x;
    x.push_back(4);
    int& val = x.back();
    x.push_back(5); // ahh, val may be garbage now.
or

    Foo f;
    Foo& fRef = f;
    Foo g = std::move(f);


References can still be null.


In C++, null references are Very Bad, and they trigger undefined behavior: https://stackoverflow.com/questions/4364536/is-null-referenc...

I don't think I've ever run into a null reference in the real world. I'm sure it happens, especially if people write "&*some_function_that_might_return_null()". But it shouldn't be a normal thing. There are lots of other issues with C++, but this has never been a major one in my experience.


> In C++, null references are Very Bad,

They are!

And while they are not a normal thing, they are a thing and I've run in to them a handful of times in the real world, almost always the result of someone not checking for null before dereferencing a pointer.

Rust does not really have this issue.


Rust also has this issue, unless you fully validate every single pointer coming out of unsafe blocks.


The difference being that all c++ code is 'unsafe' in the rust sense, whereas a typical rust program will have only a small portion of unsafe code (or none), making it easier to fully validate - hence 'doesn't really have this problem'.


I agree, but it depends how seriously unsafe blocks get reviewed.

Because I can assure you, unsafe blocks in enterprise Rust will be reviewed as much as C and C++ code currently are in most Big Corps™.

We do have such problems with native libraries killing Java and .NET processes, with unsafe being the FFI boundary.


Check it when you use it?


Experience tells us that this idea doesn't work so well in practice without a compiler yelling if you don't do it.


The compiler doesn't yell, because it's unnecessary in most cases.

My experience is that the huge amount of C code running on my computer had exactly zero issues like this today. It has been working perfectly fine.

I run GNOME which uses a paradigm of checking all inputs to a function on entry via g_return(_val)_if_fail(assertion_expression). It helps the programmer do the right thing when it comes to using APIs that are disallowing NULL input.

Two days ago, the code on my workstation hit one of those "assertions":

Sep 26 23:40:31 core transmission-gt[1084]: g_file_test: assertion 'filename != NULL' failed

No oops in the kernel recently.

Millions of lines of desktop and kernel code, and one failed assertion in two days for using NULL incorrectly in the API.

So unless your standard is absolute perfection, it works fine as is.


So you could do the same in C/C++. If you use unsafe, it's unsafe, that's it. I can't believe people are arguing over unsafe being used in a safe way.


There's unsafe hidden under an API that you know where you're using, and unsafe in every single line in your software. Just like there's changing state encapsulated in object methods, and changing state by assigning to global variables.

Theoretically it's just all the same, but in practice having reduced interfaces does wonders to reduce the complexity and being able to reason about what brings problems, and what doesn't.


The responsibility of proving safety can belong to the compiler or it could belong to the programmer. The compiler can do a very good job, but there are edge cases where your code can become convoluted and things aren't expressed easily. The programmer can do a good job of expressing things, but is usually not able to prove safety of large blocks.

unsafe blocks are just a compromise where the programmer only has to prove safety of small blocks of code which aren't easily expressed in the type system.
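In miniature, that compromise might look like this (a sketch, not from any particular codebase): the programmer proves a bounds invariant that the type system can't easily express, but only for one small block.

```rust
// The loop bound makes the unchecked access provably in range, so the
// programmer's proof obligation is confined to one line.
fn sum_first_half(v: &[u64]) -> u64 {
    let half = v.len() / 2;
    let mut total = 0;
    for i in 0..half {
        // Safe because `i < half <= v.len()` by construction of the loop.
        total += unsafe { *v.get_unchecked(i) };
    }
    total
}

fn main() {
    assert_eq!(sum_first_half(&[1, 2, 3, 4]), 3); // 1 + 2
    assert_eq!(sum_first_half(&[]), 0);
    println!("{}", sum_first_half(&[1, 2, 3, 4]));
}
```

(In practice the bounds-checked `v[i]` usually optimizes just as well; the point is only where the proof lives, not that `get_unchecked` is recommended here.)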


I would recommend looking at RedoxOS[1] (a kernel+userspace written in Rust) to see what a fairly reasonably featured operating system written in Rust looks like. "Using a lot of unsafe Rust" doesn't actually lose the benefits of Rust -- in fact annotating some code as unsafe is one of the benefits of Rust (you can much more easily tell where certain classes of bugs must originate from).

[1]: https://www.redox-os.org/


Regarding memory management, yes, but you still get a lot of benefit from the strong type system, the ecosystem, testing, etc.


Both House OS in Haskell and SVA-OS handled that by cleanly hiding unsafe details behind an interface that could be used safely:

http://programatica.cs.pdx.edu/House/

https://llvm.org/pubs/2009-08-12-UsenixSecurity-SafeSVAOS.ht...

So, that's not really a limitation. The worst-case scenario is proving those primitives correct with external tools whose preconditions and invariants are just checked by the memory-safe code calling them. The above work shows the worst case might not happen, though.


It would be kind of cool to see a `go gen`-syntax based Go + generics implemented in Rust. Another thing that has been attempted is a stdlib for Rust that uses syscalls into the kernel much like Go's.


> Another thing that has been attempted is a stdlib for Rust that uses syscalls into the kernel much like Go's.

With Rust, it's easier to just replace the C library! In particular, cross-compiling with a static musl-libc has been supported out of the box for a while now. For pure Rust programs (or ones with small amounts of C handled by cargo), just write:

    cargo build --target=x86_64-unknown-linux-musl
This will produce a 100% static binary that relies on nothing except the kernel.

For programs which require external C libraries, it's a bit trickier, because you need a static version of those libraries built against musl-libc. For common libraries like OpenSSL and libpq, I have a Docker container that makes this easier: https://github.com/emk/rust-musl-builder


Well...maybe just the compiler? ;)


I'm normally quite happy to advocate for Rust everywhere, but one of the benefits of the microkernel architecture is that you have a very small trusted computing base for your system. Yes, Rust could help, but there are lots of techniques for building highly reliable C/C++ codebases, and one of the main reasons they don't get used more in large codebases is that they don't scale super well. But in a microkernel it's much easier to apply those methods. See for example the seL4 microkernel, which is small enough to have a well-verified model for a bunch of different hardware.

Uh, this is rambling a bit. My point is that building zircon as a microkernel makes it much easier to write correct code, Rust or not. And that maybe the overhead of a rewrite wouldn't be as beneficial for the microkernel as it would be for other parts of the operating system.


> See for example the sel4 microkernel, which is small enough to have a well-verified model for a bunch of different hardware.

The workflow for seL4 starts with prototyping in Haskell, formalizing everything in Isabelle, and then translating into C. This results in highly unusual C code, and much of it would be better served by a DSL. I'm guessing that C was chosen because it has formally verified semantics and compilers as well as integration with proof assistants.


This is virtually the same reply that every single "rust would help with safety" comment gets every time without fail. And despite those tools, mistakes occur in the most critical of codebases. In fact just early today a patch to LKML was being mocked because some pointer manipulation in C was clearly buggy and uncaught due to the lack of safety and sophistication of the type system.


I don't think this is quite the template reply (and if you check my HN comment history, I think you might see that I'm quite familiar with receiving the template replies!), what I meant to say is that it'd be much higher leverage to rewrite other parts of Fuchsia in Rust than to rewrite Zircon.


Sorry, I didn't mean to put words in your mouth or anything. I am quite curious which parts you think would better benefit from a rewrite in a safer language if you might elaborate. Thanks!


Oh no worries, I might have come off a bit defensive there.

I think one of the clearest examples would be drivers -- they might be properly sandboxed by a good microkernel, but they are notoriously buggy/crashy/incorrect. If I remember correctly one of the Fuchsia team members told me that they were going to support drivers written in Rust, but I could be completely wrong.

Filesystems, the network stack, whatever horrifying systemd equivalent Fuchsia grows, etc. are all examples of things I would advocate for writing in Rust before focusing on the kernel. All of these things are still security/reliability sensitive, all still terrible by Rustacean safety standards in most OSes, and conveniently don't have to be built directly as part of the kernel when you're doing a microkernel.

Just spitballing, of course.


Some filesystems and the network stack from Fuchsia are written in Go.


Cool! Accomplishes many of the same goals that doing it in Rust would, arguably. I would be curious to hear about what sort of latency-management techniques they've used to build an FS in a GC'd environment.


Here is some more information

https://github.com/fuchsia-mirror?language=go

https://groups.google.com/d/msg/golang-dev/2xuYHcP0Fdc/tKb1P...

Regarding your specific latency issue with a GC systems language, here is a Mesa/Cedar example about a network file server.

http://www.hpl.hp.com/techreports/Compaq-DEC/SRC-RR-6.pdf


You're absolutely right! You can imagine what Google could do with its resources when even one programmer can write his own OS, file system, etc. in Rust: https://github.com/redox-os/


What is the connection between Google and Redox?

I didn’t understand you to be honest.


I'm trying to say that when you use a good programming language with a really good compiler that can prevent a lot of errors, you can be very efficient and not lose most of your time in the debugger!


I think he meant that if one guy developed Redox in his spare time, surely it wouldn't be too hard for a Google team to "simply" rewrite Fuchsia in Rust.


The TCP/IP stack and file system driver are written in Go.




