Given that they already have a microkernel written in C++ (zircon is derived from littlekernel), and they're trying to move as much as possible outside the kernel, it makes sense that (for the time being) adding a new kernel language isn't on the table.
Absolutely, I don't think any sane Rust zealot would argue that Rust can compete with Go/Dart productivity. The argument can be made that Rust code has a lower maintenance cost over time, though, and that productivity may not drop as much as the system becomes more complex.
Even with this relative improvement, though, I question if it's enough to overcome the shorter compile and test loop the other languages have, or the mental overhead of managing lifetimes and ownership.
Rust binary size can be pretty painful, depending on what you're doing. Also, it doesn't look like that document is receiving regular updates, so grain of salt and all.
I'm not very happy to see C++ infecting more and more system software when, STILL IN 2020, it doesn't have a stable FFI/ABI.
This is going to bite us ALL in the future because it will saddle other languages with a useless set of constraints long after C++ gets removed from a project.
With the exception of the C++11 change to std::string (which was very, very carefully worked around so that C++11 code can handle C++03-ABI std::string), there have been no changes to the ABI implementation since gcc adopted the Itanium ABI almost two decades ago.
That's not a C++ ABI but a C++-as-compiled-by-gcc ABI. C++ itself does not define an ABI, and different compilers (sometimes even from the same vendor) will use different, incompatible ABIs.
It is the Linux standard C++ ABI, as defined by the Linux Standard Base. An ABI for a low-level language is necessarily (OS, architecture) specific, so you can hardly do better than that. There is no ABI that could be usefully defined at the standard level (and even if one somehow were, it would be mostly ignored[1], as compilers wouldn't break compatibility to implement it).
[1] I could see the committee standardizing some intermediate portable representation requiring installation time or even JITing in the future though.
It is not the Linux standard C++ ABI, it's just the de facto standard ABI because of gcc's former dominance and clang imitating its ABI. And I've had things break in the past, forcing me to recompile stuff, because different compilers (clang, clang+libc++, gcc in different -std=c++ modes) produced not-100%-compatible output.
You can say it's good enough (most of the time), but it isn't really a standard, unless I am mistaken.
The Itanium ABI is not just whatever GCC does; while it is not an ISO standard, it is an intervendor ABI, documented independently of any compiler implementation, and changes are agreed among compiler teams. It is continually updated to track the evolution of C++.
The standard library ABI is not covered by the Itanium ABI (outside of some basic functionality); it is necessarily defined by the platform. For Linux that would be libstdc++.
The LSB references the Itanium ABI and defines libstdc++ as the ABI for the C++ standard library on Linux platforms; again, it is not an ISO standard, but it is as close as you can get on Linux.
And of course the C++ ABI is very complex, and both the ABI document itself and the compilers have bugs from time to time, especially if you live close to the bleeding edge.
Although it might de facto be, at least for AMD64, I wouldn't say it is the official standard ABI of all Unix systems. But it is the standard ABI of Linux-based systems, at least those that claim to conform to the LSB.
There's more to a practical language ABI than stack and vtable layout. If you write idiomatic C++, this means passing objects from the standard library around. If different compilers use different implementations of the standard library that aren't layout-compatible, things break.
The Linux processor-specific ABIs explicitly call out that the Itanium ABI is the C++ ABI for x86 and x86-64; ARM has its own ABI that differs from the Itanium ABI only in the exception handling details.
> specially when the programming languages are a bit more expressive that pseudo-assembly.
Most higher-level languages (Java, C#, Python) handle most of those much better, albeit with a different set of trade-offs. Things like adding a private field to a class won't break binary compatibility in a C# application. C++ is fairly unique in that it tries to be both high level and low level, but the cost is that it pushes the complexity of this split personality onto the developers.
C++ is moderately safe if one doesn't use it as a C compiler; even the standard library does bounds checking on all major compilers, in debug builds or with specific compiler switches.
There is a stable ABI inside some OSes, more so than for any wannabe C++ replacement.
Speaking of which, Rust is a very nice language, but it still lacks much of the productivity tooling that systems developers have come to expect.
If anything, this rationale is great input for the Rust community on how to improve the ecosystem.
FFI in C++ is never gonna happen with the preprocessor and templates being there. By the design of the language it basically can't work: you would need to recompile the world unless you artificially restrict yourself to the equivalent of extern "C". C++20 modules won't make it better either.
I don't think Rust, Dart, or Go are any better.
In practice it seems C++'s ABI is called "protocol buffers".
Basically correct, though in Fuchsia it's called FIDL. It's conceptually protocol buffers, just not the specific project that Google calls "protocol buffers".
Would have assumed the Kernel would be where Rust truly shines, but that's where it's blocked, which is... interesting!