Yes. Kvark pushed WGPU as a cross-platform graphics base for Rust, and that worked out quite well.
It's actually better in an application than in the browser. In an application, you get to use real threads and the computer's full resources, both CPUs and GPUs. In browsers, the Main Thread is special,
you usually can't have threads at different priorities, resources are heavily limited, and the JavaScript callback mindset gets in the way.
Here's a video from my metaverse viewer, which uses WGPU.[1] This seems to be, much to my surprise, the most photo-realistic game-type 3D graphics application yet written in Rust.
The stack for that is Egui (2D menus), Rend3 (a lightweight scene graph and GPU memory management), WGPU (as discussed above), Winit (cross-platform window and event management), and Vulkan. It runs on both Linux and Windows. It should work on macOS, but hasn't been debugged there. Android and WASM have browser-like thread models, so although WGPU supports those targets, this program won't run on them. All of this is in Rust.
It's been a painful two-year experience getting that stack to work. It suffers from the usual open-source problem: everything is stuck at version 0.x, sort of working, and no longer exciting to work on. The APIs at each level are not stable, so the versions of everything have to be carefully matched. When someone changes one level, the other levels have to adapt, which takes time. Here's a more detailed discussion of the problems.[2] The right stuff is there, but it does not Just Work yet. That's why we're not seeing AAA game titles written in Rust; you can't bet a project with a deadline on this stack yet. But as you can see from the video, it does a nice job.
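To make the version-matching problem concrete, here is a sketch of what the dependency section of such a project's Cargo.toml looks like. The "0.x" versions are placeholders, not real compatible sets; the point is that each layer pins a specific pre-1.0 version of the layer below it, so upgrading one crate usually forces a matched upgrade (or a wait) across the whole stack.

```toml
# Hypothetical dependency set for an Egui/Rend3/WGPU/Winit application.
# All versions shown as "0.x" placeholders: with pre-1.0 crates, Cargo's
# semver rules treat every 0.x minor bump as a breaking change, so these
# four must be chosen as a mutually compatible set, not independently.
[dependencies]
egui  = "0.x"   # 2D menus, must match the wgpu version used by its backend
rend3 = "0.x"   # scene graph / GPU memory management, pins a wgpu version
wgpu  = "0.x"   # cross-platform graphics layer
winit = "0.x"   # windowing and events, must match what egui/rend3 expect
```

When any one of these publishes a breaking release, the others can't pick it up until their maintainers adapt, which is the lag described above.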
It's encouraging that WGPU is seen as a long-term solution, because it improves the odds of that work getting completed.
Still, I was chatting with one of the few people to ship a good 3D game (a yacht racing simulator) in Rust, and he admits that he simply used "good old DX11".
Is there some reason you need an entirely new stack of UI libraries just because you decided to use a certain language? (Yes, I know everyone does it, but they should stop.)
Does it actually work with OS features like screen readers and printing that aren't just "pixels go on screen"? Does it break when a user changes an OS setting that only one Microsoft QA greybeard remembers exists?
http://kvark.github.io/web/gpu/native/2020/05/03/point-of-we...