I think the Rust style that uses "constructorless" struct initialization really makes the code too verbose in this case. It's nice and explicit in a way, but having to say e.g.

entrypoint: "main"

as the argument for a shader program seems like it could be turned into a default.
Convention over configuration isn't always the answer, but sometimes at least it can help reduce clutter.
In the future the API will perhaps offer builders as an option.
I'm not sure the builder pattern is better than designated initialization... AFAIK in Rust you can get something similar (filling in missing members with their default values) with ..Default::default(), like:
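A minimal sketch of that, with a made-up StageDescriptor type (the real wgpu-rs descriptors may or may not implement Default):

    // Hypothetical descriptor type, for illustration only.
    #[derive(Debug, Default)]
    struct StageDescriptor {
        entry_point: &'static str,
        array_layer_count: u32,
    }

    fn main() {
        let stage = StageDescriptor {
            entry_point: "main",
            ..Default::default() // remaining fields take their default values
        };
        println!("{:?}", stage);
    }

This only works when the type implements Default, but it reads quite close to designated initialization.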
- The types help: seeing "WGPUSwapChainId" tells me exactly what sort of object is returned (a reference to some object held elsewhere). We don't get this in Rust because of type inference.
- Rust doesn't help in this example because it's just a bunch of API calls, so it's a more complex language for nothing. Obviously this doesn't generalize.
- The event handling is much simpler in the C code; this also doesn't generalize.
> we don't get this in Rust because of type inference
Might be worth mentioning that (AFAIK) type inference is entirely optional in Rust. If you prefer to write explicit types on your decls that's fine. (I suspect there are pathological cases where the type is impractical to write, but you'd have been screwed then anyway.)
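For example (plain Rust, nothing wgpu-specific), these two declarations are equivalent:

    fn main() {
        let inferred = vec![1u8, 2, 3];        // type inferred as Vec<u8>
        let explicit: Vec<u8> = vec![1, 2, 3]; // same type, written out
        assert_eq!(inferred, explicit);
    }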
This won't help you when reading other people's code, obviously.
I think that in code review this is definitely an area where languages with type inference lose clarity; Rust is even worse because of the Deref/Into idioms.
I think the solution is for diff viewers, including those on the web, to support something like IntelliSense, but obviously we are a very long way from that.
I tend to find, personally, that this kind of thing doesn't interfere strongly with reviewing. I think this is because I have an extensive background in dynamically typed languages and ones with strong inference like Rust, so I'm more used to it, whereas people who don't come from said backgrounds prefer more explicitness. YMMV of course!
I don’t think it’s just “strong” inference that’s the problem; I find I have an easier time reading ML because the inference somehow feels more obvious. After thinking about it some more, I think it really is deref coercion that is the gotcha, especially with function chaining.
How practical would it be to produce a 2D UI using a framework built on these kinds of APIs? I'm wondering if there's a future down the road for multi-platform desktop apps with the convenience of Electron-like development and distribution, but where the native clients are able to be much thinner than a whole browser.
Not very. There is a huge difference between graphics rendering and interactive UI. By the time you add proper cross platform text rendering, a proper layout system, image/video/audio (incl. multirate streaming), resolution scaling, accessibility, compositing, animation, etc. you won't have such a thin client anymore. Plus you'll probably want cross-platform consistent networking, storage and security too. Electron is so big because it reinvents every possible wheel in exactly this way to provide maximum consistency.
Plus, you will have to deal with the fact that while you can push large amounts of computation out quickly to the GPU, reading back the results to CPU involves long latencies and scheduling issues. It often results in a loss of the kind of generality people take for granted in pure CPU programming, from little things like not being able to printf() in the middle of a program, to larger things like having to manually hook up every function call in GPU land by submitting a detailed description of your specific desired calling convention in triplicate (i.e. uniforms, varyings and attributes).
The alternative model is platform-specific front-ends backed by a mostly universal back-end library, used successfully by Transmission, VLC, and others. All the conveniences of being tailored to each platform in terms of UX and integration, but with the same basic broad functionality. But few companies will want to pay for 2x the devs, while in the open-source world the various platforms in practice diverge so much they almost feel like different apps entirely. Still, it is much preferable to a 300MB+ hog that can't display more than two screenfuls of chat.
> I'm wondering if there's a future down the road for multi-platform desktop apps with the convenience of Electron-like development and distribution, but where the native clients are able to be much thinner than a whole browser.
I sure hope so! It's no secret that I've been hoping that someone will put together all the pieces we've been developing—WebRender, Pathfinder, gfx-rs, font-kit, skribo, etc.—and create a lightweight UI framework for Rust. None of us have the time to do so, but all the pieces are there…
There's no need to tie the 2D UI framework to the underlying rendering API. If you look at immediate-mode UI systems like Dear ImGui, they provide vertices, indices and a command list for rendering each frame, and it's very easy to render those commands through any existing 3D API.
For instance, here is an example of such a render function; it's just 50 lines of C, implemented on top of slim cross-platform headers with support for Windows, Linux, macOS, iOS, Android and WASM:
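As a rough sketch of the shape such a render function takes, following Dear ImGui's ImDrawData layout; the backend_* calls here are hypothetical stand-ins for the actual 3D-API work:

    /* Sketch only: assumes Dear ImGui's draw-data structures (e.g. via the
       cimgui C bindings); the backend_* functions are hypothetical. */
    void render_imgui(const ImDrawData* dd) {
        for (int n = 0; n < dd->CmdListsCount; n++) {
            const ImDrawList* cl = dd->CmdLists[n];
            /* upload this list's vertex and index data */
            backend_update_buffers(
                cl->VtxBuffer.Data, cl->VtxBuffer.Size,
                cl->IdxBuffer.Data, cl->IdxBuffer.Size);
            int idx_offset = 0;
            for (int c = 0; c < cl->CmdBuffer.Size; c++) {
                const ImDrawCmd* cmd = &cl->CmdBuffer.Data[c];
                backend_apply_scissor(cmd->ClipRect);  /* per-widget clip rect */
                backend_bind_texture(cmd->TextureId);  /* font atlas or user image */
                backend_draw(cmd->ElemCount, idx_offset);
                idx_offset += cmd->ElemCount;
            }
        }
    }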
With such a bloat-free UI system you can create statically linked single executables with embedded font data and shaders in a few hundred KBytes, and people are starting to use this for "proper" desktop tools too (not just as embedded tool UIs for games etc.).
There is an imgui port to wgpu-rs in progress - https://github.com/unconed/imgui-wgpu-rs . I don't know how feasible it would be to compile it to WASM later on, we'll see.
One of the usual problems with this is font rendering; this project seems to use FreeType like most others. FreeType still has patent issues (https://www.freetype.org/patents.html) which limit some of the rendering. To me it never renders as fast or as well as native solutions, though it's been a while since I've looked at it.
Not practical compared to any UI framework people actually use.
It's probably been a decade now since all the software that can be written by normals was written.
Projects like this thrive because as opposed to fulfilling some user’s need, they are very intellectually stimulating to work with and thus make programming fun again.
A program that isn’t joyful to write never gets written.
> The code runs on a variety of platforms [...] eventually the Web (when the browsers gain support for the API, which is also in our scope of work).
Have the big three confirmed intent to support WebGPU?
Also, what is the state of WebGL backend support for gfx-rs? I'm watching [0] eagerly. That would be a great practical step towards gfx-rs in browsers today.
As mentioned in the PR, I have a version of spirv_cross (the Rust wrapper of SPIRV-Cross) working with the wasm32-unknown-unknown target now (https://github.com/grovesNL/spirv_cross/pull/92).
So currently the plan is to finish the spirv_cross changes, expand the unified OpenGL/WebGL bindings (the crate I've been calling "glow"), and rebase the gfx PR with these changes. Then I think we should be ready to merge the initial WebGL support, and we can keep iterating on it (like we've done with the other backends).
There's no such thing as "safe C" :) Any pointer passed in can be garbage, with unintended consequences. Surely, there are some bits of Rust that are required for safe operation over wgpu-native, such as taking the address of a borrowed struct. What this quote is trying to convey is that we don't rely on super-strong types for safety like, for example, Vulkano [1] does. We can expose a Rust API that borrows command buffers and resources where they are used and uses move semantics, and at the same time we can expose a Rust API that implements Copy for all the objects, and it's still going to be fine, because the C layer doesn't rely on many constraints from above.
Even WebGPU makes performance portability hard and keeps the user manually setting barriers etc. We could use a modern yet higher-level API.
Shameless plug: we have a graph-based approach which automates all intermediate resource allocations and scheduling. I'm hoping we can open-source it or something similar, as it nicely separates the definitions of the algorithms from how they end up being scheduled (async compute on AMD and so forth is all automatic). We also have backend implementations for all the major modern APIs.
I believe the plug is misplaced here, since you misunderstood the article contents. WebGPU does not have the user manually setting barriers or allocating memory.
The current version of the WebGPU spec doesn't say much about synchronization primitives (https://github.com/gpuweb/gpuweb, the same repo that can be reached from the original post); the little it reveals is DX12-style fences used in multi-adapter systems.
If they don't have explicit barriers within a device, then the implementation has to do state tracking, bringing the CPU overhead back to the level of OpenGL and making multithreaded command recording almost useless. The original WebGPU (now WebMetal) proposal did have explicit barriers.
We also keep explicit resource creation available: when you create a texture, you just get a texture. No fancy stuff there.
I thought the original article was just about gfx-rs supporting WebGPU as a backend. EDIT: I did misunderstand it. It's wgpu-rs that's a new API; the WebGPU in between had me confused.
EDIT 2: I misunderstood it doubly. It is a WebGPU implementation on top of gfx-rs, so my points about WebGPU stand.
I do agree that information on what WebGPU does and doesn't do isn't clearly available. It's all in the IDL ([1]), where you may notice there is no API for explicit barriers. We are working on more readable documentation/specification.
> If they do not have explicit barriers within a device then it has to do state tracking, bringing the CPU overhead back into the level of OpenGL and making multithreaded command recording almost useless.
I don't believe this to be correct. We have state tracking implemented in wgpu-rs, and it was designed from day one to support multi-threaded command recording. Implementation-wise, every command buffer knows which resources it expects in which states, and what states they end up in. When it's submitted, the WebGPU queue inserts the necessary transitions into those input states, and it keeps tracking the current state of all resources.
Yes, there is some CPU overhead there for sure, but it's not yet clear how significant it is. Keep in mind that WebGPU has to validate the states, which is roughly the same complexity as deriving those states in the first place, hence our decision to do so.
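A toy sketch of the idea described above (not wgpu's actual code; the state enum and the tracking granularity are invented for illustration):

    use std::collections::HashMap;

    // Invented, coarse resource states; a real tracker is finer-grained.
    #[derive(Clone, Copy, Debug, PartialEq)]
    enum State { Undefined, CopyDst, Sampled }

    type ResourceId = u32;

    // Each command buffer records, per resource, the state it expects
    // on entry and the state it leaves the resource in.
    struct CommandBuffer {
        expected: HashMap<ResourceId, State>,
        finished: HashMap<ResourceId, State>,
    }

    // The queue owns the authoritative current states.
    struct Queue {
        current: HashMap<ResourceId, State>,
    }

    impl Queue {
        fn submit(&mut self, cb: &CommandBuffer) {
            // Insert whatever transitions are needed to reach the states
            // the command buffer expects...
            for (&id, &want) in &cb.expected {
                let have = *self.current.get(&id).unwrap_or(&State::Undefined);
                if have != want {
                    println!("insert barrier: resource {id}: {have:?} -> {want:?}");
                }
            }
            // ...then adopt the states the command buffer ends with.
            for (&id, &end) in &cb.finished {
                self.current.insert(id, end);
            }
        }
    }

    fn main() {
        let mut queue = Queue { current: HashMap::new() };
        let cb = CommandBuffer {
            expected: HashMap::from([(1, State::CopyDst)]),
            finished: HashMap::from([(1, State::Sampled)]),
        };
        queue.submit(&cb); // prints: insert barrier: resource 1: Undefined -> CopyDst
    }

Because each command buffer carries its own expected/finished maps, recording can happen on any thread without touching shared state; only submission serializes on the queue.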
> WebGPU also has the usual way of allocating memory.
Not directly. Currently, users can create resources and not care about their lifetimes. This is in contrast to Vulkan/D3D12 memory allocation with transient resources, which is where your frame-graph library helps.
Thanks for pointing out the IDL. That tells a lot more about the API.
It's good to know that the state tracker is a bit more refined than what is required for something like OpenGL.
>Yes, there is some CPU overhead there for sure, but it's not yet clear how significant it is. Keep in mind that WebGPU has to validate the states, which is roughly the same complexity as deriving those states in the first place, hence our decision to do so.
One of the benefits of prebuilding the frame as a graph ahead of time is that one can do the vast majority of the validation ahead of time, making the heavy operation explicit for the user. Also, the validation happens only once, instead of every frame.
Just like when you bind a pipeline you don't have to validate again that all of the shaders etc. are valid, only that it's valid for this render pass: in OpenGL one has to validate the graphics pipeline state after every change to it, whereas in WebGPU it's enough to validate most of it at creation. A similar thing applies to building the state transitions ahead of time.
> Currently, users can create resources and not care about their lifetimes.
For some reason I assumed that if there is a createTexture there ought to be a destroyTexture for it. But indeed there is not. I have a few questions about this.
How does it know I'm not going to need a texture anymore? If it has to keep the data valid for eternity, it will at the very least have to store it in RAM or on disk, because it never knows whether I'm going to submit a command that references it again.
This sounds to me like applications that stream in lots of assets will explode in memory usage. Or else the user has to do manual pooling, which can be even more difficult and bug-prone.
The assumption is to rely on GC: if a resource is not reachable in JS (or in existing bindings), it can't be submitted again, so it can safely be cleaned up. There is also an explicit GPUTexture.destroy() method if you want to free the GPU memory immediately.
WebGPU. That's an awful name, given that WebGL is taken and already at version 2. WebGPU sounds like a virtual GPU for game streaming. Even GLWeb would be nicer.
GPU = graphics + compute; it's more than just "GL" (graphics library). The fact that WebGL reached version 2 is hardly relevant, since WebGL is a different API, likely to be overshadowed within some 5 years.
AMD calls their CPUs with integrated graphics APUs (Accelerated Processing Units), so why not take that idea further? Well, OpenAL (Audio Library) is already in use, so WebAL would confuse more. We are running out of acronyms.
Throwing in some random word creations: Processing#, UnifiedProcessing#, ParallelProcessing#, Accel#.
The important question is: are the graphics and platform vendors on board? Does WebGPU sit directly on top of graphics drivers, or is this just a MoltenVK/ANGLE-style abstraction layer over OpenGL / Vulkan / Metal? Has someone done the hard work of getting Apple on board with using this outside the browser (i.e. as a Metal replacement)?
Apple (arguably) initiated the WebGPU API development effort. Surprisingly, Apple is pushing for a shader language based on Microsoft's HLSL. Some of the others are pushing for a lower-level, non-human-readable shader binary format instead (which is what Vulkan takes in).
> Apple (arguably) initiated the WebGPU API development effort.
That's not correct as far as I'm aware. All of us (Mozilla, Google, Apple) had prototypes in the works for the new API, and WebGPU just happened to be the name of the Apple one, which we borrowed later.
Ultimately, it doesn't matter at this point. If the API is usable and popular, hardware vendors may later decide to pave the way from it directly to the driver. For now, layering on top of the existing native graphics APIs is fine.
I don't care which API wins, but the current stalemate is very disappointing. For all its flaws, OpenGL was the lingua franca for the longest time; now that Apple has deprecated it, I hope it gets replaced by a single credible alternative. The current fragmentation, where every platform is pushing its own graphics API, is wasteful when it comes to developer productivity.
I strongly agree. We tried really, really hard not to invent a new standard. We proposed the Obsidian API based on Vulkan, and we invested in the Vulkan Portability initiative...
But at the end of the day it became clear that Vulkan is simply not fit for many things, including the Web. It can't be the new lingua franca simply because it's too complex for users to pick up.
I think Vulkan is a good example that a one-size-fits-all 3D API (spanning all GPU vendors, platforms and low-/high-power GPUs) only results in a mess.
And OpenGL was a mess too, to be honest, at least after 30 years of development. Vulkan started right off with even bigger API complexity.
It might actually be a good idea to have a handful of low-level, small-ish GPU-vendor- and/or platform-specific 3D APIs; each of those would most likely be less complex and carry fewer design compromises than Vulkan.
Apple isn't known for collaborating on common APIs; they prefer to push NIH solutions everywhere they can. I suppose a common API could win, if developers pressured Apple more to start following standards.
Good, but they didn't do anything to help at the lower level, i.e. they push Metal only and don't support Vulkan on their systems. That requires those who implement WebGPU to duplicate their work.
I've been reminded that Apple's webgpu prototype is distinct from the standardisation process. And yes, implementors get to write separate metal, vulkan, and probably directx backends.
I would be more excited if there were evidence that graphics card vendors and platform vendors were on board, i.e. some commitment that WebGPU is the replacement for Vulkan / Metal / DirectX / OpenGL (with backers of each of these APIs communicating an intent to deprecate it in favor of WebGPU). In that case it would be a very exciting move, because the amount of entropy in the world would decrease greatly. On the other hand, if WebGPU is an abstract API with Vulkan/Metal/DirectX/OpenGL backends, then the net entropy in the world is actually increasing, and now there is one more thing for people to debug / port to.
The group hasn't yet agreed on what shading language is directly digested by the WebGPU implementation. We may provide some helpers in the future to convert WHLSL to SPIRV for wgpu-rs to consume, just like today we provide GLSL to SPIRV helpers, and our examples use GLSL in Vulkan style.
As for the binding model, it's a hot topic. I believe the discussions are converging towards a Vulkan-like binding model at the end of the day.
It's not based on Metal API. There is quite a bit of confusion in here, let me clarify. Originally, Apple called their (WebGL Next?) prototype WebGPU, and it looked like Metal. Then the W3C group was formed with all the browser vendors, and they agreed to give that name to the group's work. That doesn't relate to the actual prototype code, which was implemented in Safari preview and is about to be removed from there.
Today, WebGPU takes some bits from Metal and some from Vulkan; it is not fair to say that it's "based on Apple's Metal API".
This deprecates the old gfx (also called "pre-ll" for pre-low-level), which you can find at [1]. The new gfx (also called "gfx-hal" for hardware abstraction layer) is not deprecated, clearly, since wgpu-rs runs on it :)
C99 Hello Triangle:
https://github.com/gfx-rs/wgpu/blob/master/examples/hello_tr...
Rust Hello Triangle:
https://github.com/gfx-rs/wgpu/blob/master/examples/hello_tr...
Both look very neat and tidy IMHO.
The C99 version could also use nested designated initialization right in the function call, this would make it look very similar to the Rust version, e.g.:
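For example, something along these lines (the WGPU* type and field names here are illustrative, not the exact wgpu-native identifiers):

    /* C99 compound literal with nested designated initializers.
       Names are made up for illustration. */
    WGPURenderPipelineId pipeline = wgpu_device_create_render_pipeline(
        device,
        &(WGPURenderPipelineDescriptor){
            .vertex_stage = {
                .module = vertex_module,
                .entry_point = "main",
            },
            .fragment_stage = {
                .module = fragment_module,
                .entry_point = "main",
            },
        });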
But of course that's a matter of taste :)