I've been using Rust + WGPU with the Vulkan backend for some chemistry visualization and simulation, and I'm working on using Vulkan compute shaders as well. Good experience overall, but it was tough to piece together the engine from the examples and Rust docs; it would have been untenable without a third-party tutorial.
Maybe one of the direct Vulkan wrappers would be a better bet? E.g. Ash. Seems like the article took a different approach, which is tough to understand without knowledge of Mesa. (I hadn't heard of it.)
I believe that using direct Vulkan wrappers in Rust has the downside that you have to write a lot of unsafe Rust code, and that's part of the reason a WebGPU-style API around Vulkan like WGPU is popular in Rust.
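For a sense of what that looks like, here's a rough sketch of creating an instance with Ash (treat it as illustrative; the exact API differs a bit between Ash versions):

    use ash::vk;

    fn main() {
        // Loading the Vulkan library and creating an instance are both
        // `unsafe`: the compiler can't check Vulkan's validity rules for us.
        let entry = unsafe { ash::Entry::load() }.expect("no Vulkan loader found");
        let app_info = vk::ApplicationInfo {
            api_version: vk::API_VERSION_1_0,
            ..Default::default()
        };
        let create_info = vk::InstanceCreateInfo {
            p_application_info: &app_info,
            ..Default::default()
        };
        let instance = unsafe { entry.create_instance(&create_info, None) }
            .expect("instance creation failed");
        // Destruction is also on us; nothing stops a use-after-destroy here.
        unsafe { instance.destroy_instance(None) };
    }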
> it was tough to piece together the engine from the examples and Rust docs. Would have been untenable without a third-party tutorial.
This is the problem with using a non-C/C++ language for any high-end graphics programming: you step outside the mainline social ecosystem.
Consequently, when you need help, you're hosed. As much as I like Rust, Zig, etc., I would never recommend using them for modern graphics programming unless you have masochistic tendencies.
If you already have some experience with graphics APIs, the Vulkan documentation/specification straight from Khronos is pretty well done and covers everything.
I 100% support this endeavor and I'm fully down to RIIR for everything but wish to address one slight problem I have with the text:
> Recently, we even had one bug which was caused by accidentally truncating a 64-bit unsigned integer to 32 bits deep inside the common Vulkan synchronization code.
> It doesn't have implicit type casting so accidental sign promotion or integer truncation doesn't happen.
It doesn't have implicit type casting, but if you write u64 as u32 in Rust, it silently truncates, in both debug and release builds; unlike arithmetic overflow, an as cast never panics, it just throws the high bits away.
I would prefer that u64 as u32 and similar lossy conversions get deprecated in some future Edition, encouraging people to write try_into and thus take a moment to decide what happens when the value won't fit.
But yeah, meanwhile the only difference in Rust is that you explicitly said you wanted the conversion.
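A minimal sketch of the difference, for concreteness:

    fn main() {
        let big: u64 = u64::from(u32::MAX) + 2;

        // `as` silently truncates in BOTH debug and release builds;
        // unlike arithmetic overflow, a lossy cast never panics.
        let lossy = big as u32;
        assert_eq!(lossy, 1);

        // `try_from` / `try_into` force a decision about the overflow case.
        let checked = u32::try_from(big);
        assert!(checked.is_err());
    }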
EtA: the pointer as usize conversions will presumably get deprecated once Aria's provenance experiment is solidified, so there's precedent for deprecating something like this where the risk outweighs the convenience.
While C and C++ are languages that are permissive to the point that bugs can be introduced incredibly easily, I don't think they need to be replaced in order to eliminate said bugs, and especially not with a language like Rust.
Is it more efficient, resource- and time-wise, to rewrite everything in [language of choice], or to comb through the currently existing code manually and with special tools to find and fix bugs? Is it more efficient in the long run? Will the current people working on Mesa be able to adapt to such a change? (Likely irrelevant, since they'd probably be the ones leading such a change in the first place.) Are the benefits even worth the costs? Is it sustainable? C has been around since, well, forever, and it has a very rich, mature ecosystem built around it. Rust, on the other hand, has been around for a fraction of that time, suffers from core-team (corporate) drama, and, as far as I know, doesn't have a fixed standard the way C does with C89/90, C99, C11, etc. (things might have changed in the past year). I personally can't see the switch as necessary for high code quality and safety.
> Is it more efficient resource and time-wise to rewrite everything in [Language of choice], or to comb through the currently existing code
This is a false dichotomy. Bugs are concentrated in new code, so the biggest value of Rust is in using it going forward, interoperating with existing C and C++ code. Going back and rewriting low-bug-density code should not be a priority. Both of your alternatives only address existing code.
Which is why I wrote "Is it more efficient in the long run?"
Ideally there would be an open probe into the project as a whole to decide whether or not the switch is a good idea, and whether or not such a change should be accomplished with Rust.
This isn’t a simple thing to answer, as it’s specific to any given project. If something is millions of lines of code, will anyone ever fund a full rewrite that could take years? In that case, finding areas to plug in Rust is probably the better strategy; see the Linux kernel, where drivers can now be written in Rust.
On the other hand, if something is only a few thousand lines of code, a rewrite wouldn’t take much time and might be worth it.
There is unsafety at the FFI boundary. The C functions deal with pointers, so the Rust side has to use pointers too. The Rust side will normally thunk them into safe Rust (converting pointers to references by introducing lifetimes, wrapping them in types with destructors, making sure to add or remove Send / Sync impls as necessary, etc.) so that the rest of the Rust code, away from the boundary, can be safe.
So if you have a Rust function that receives a pointer and a length representing an array, it will translate those into a Rust slice by trusting the pointer and length. Of course, if the C caller passed a garbage pointer or a length bigger than the actual C array, then the "safe" Rust code that uses that slice is still undefined behavior.
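A minimal sketch of that thunking (the function name and signature are made up for illustration):

    use std::slice;

    /// # Safety
    /// The caller must guarantee that `ptr` points to `len` initialized
    /// `f32`s that stay valid for the whole call. Rust trusts this blindly.
    #[no_mangle]
    pub unsafe extern "C" fn sum_values(ptr: *const f32, len: usize) -> f32 {
        if ptr.is_null() {
            return 0.0;
        }
        // Thunk the raw parts into a safe slice; everything from here on is
        // ordinary safe Rust, but only as sound as the C caller's promise.
        let values: &[f32] = unsafe { slice::from_raw_parts(ptr, len) };
        values.iter().sum()
    }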
C++ interop has some additional complexity because of templates, methods, inheritance, etc.; the cxx Rust crate tries to handle that. I don't know much about it.
Rust allows for zero-overhead FFI between itself and C. For C++, there are some crates working to make that easier so that C++ objects can be shared, but generally the C ABI is the common interop layer.
As to the security issues persisting: yes, the C security issues will persist. But what’s nice is that the Rust code is safe, and you just need to attest to the compiler that your usage of those C APIs is sound. Essentially, the Rust code is safe, but it’s still your responsibility to prove that the C code is.
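As a small sketch of what that attestation looks like, borrowing abs from the C standard library as the stand-in C API:

    // Declare the C function we want to call; the declaration itself is a
    // promise to the compiler that the signature matches the C side.
    extern "C" {
        fn abs(input: i32) -> i32;
    }

    fn main() {
        // The `unsafe` block is where we attest that this particular call is
        // sound; the rest of the program stays in ordinary safe Rust.
        let magnitude = unsafe { abs(-3) };
        assert_eq!(magnitude, 3);
    }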
> While C and C++ are languages that are permissive to the point that bugs can be introduced incredibly easily I don't think they need to be replaced in order to eliminate said bugs
A ton of data about software bugs and vulnerabilities, as well as history, seems to disagree with you here.
I suppose in the most permissive use of the term "can", sure you "can" eliminate all those bugs in C/C++ code bases, but at what cost? Formal verification? A code review process that has requirements similar to those imposed on people working on spaceship and airliner avionics software? That's impractical for the vast majority of projects.
It just seems more sensible to use a language that eliminates certain classes of bugs entirely. Rust does that, at least for some classes of bugs. I'll grant you that maybe Rust isn't the correct answer here, but C (and even C++) IMO need to be relegated to the past.
I recently got re-involved in a C-based open source project that I used to work on ~15 years ago. In the intervening time, most of my work has been done using JVM languages, and, more recently, Rust. Going back to C has been tedious and error-prone. The language is awkward to use, and its type system is a joke. I would absolutely love to be able to work in Rust instead, or... really, anything else.
> Is it more efficient resource and time-wise to rewrite everything in [Language of choice], or to comb through the currently existing code manually and with special tools to find bugs and fix them?
Neither, and that's not what the article in question is proposing doing.
> Is it more efficient resource and time-wise to rewrite everything in [Language of choice], or to comb through the currently existing code manually and with special tools to find bugs and fix them?
One thing you're missing is that "comb[ing] through the currently existing code manually" doesn't help you in the future. Unless you have an absolute code freeze, new code keeps getting written, and that new code can (depending on the language) affect the correctness of distant, already-reviewed code. So even if you can absolutely ensure that the code as it exists today is perfect (which is already an exceedingly difficult thing to do), you need to keep that effort up continuously, lest a code change cause prior invariants to no longer be met.
> Is it more efficient resource and time-wise to rewrite everything in [Language of choice], or to comb through the currently existing code manually and with special tools to find bugs and fix them?
This is for the NVK project, which is creating an open-source Vulkan driver for NVIDIA GPUs in Mesa. The project has a number of considerations that are not in the blog post:
> The unfortunate reality is that, while the original nouveau drivers were written by some amazing engineers and were state-of-the-art a decade ago, they have fallen behind in the last several years. The few developer hours it gets are mostly spent on basic hardware enablement and trying to get new OpenGL versions, leaving the systemic and architectural issues unaddressed.
If you want Yet Another Example of using Rust to write GPU drivers, you can look at the other Mesa driver in development (for the Apple AGX GPUs): https://asahilinux.org/2022/11/tales-of-the-m1-gpu/ (see "Rust is magical!")
The sibling comments have good points, but the reality is that any project that wants to stay alive and attract new developers in 10 years should probably start introducing Rust about now. Linus, to his credit, picked up on that.