In general, if you are writing a game that you expect to ship on multiple platforms, you'll have some cross-platform layer that makes swapping out runtimes for each platform easy, and the game is built on top of that. That layer could be in the same language as the game, in another, or both; LuaJIT has been popular for this, for example. WebAssembly provides another target.
Browsers are similar in that they are like mini sandboxed OSes with "standardised" interfaces. Applying that idea of standardised, sandboxed interfaces to WebAssembly, you end up with WASI, and probably eventually extensions for GPU, audio and other interfaces useful to games.
It's actually a pretty old idea: plenty of old games were written to run on interpreters that were then rewritten for each platform. For example, Another World, which cropped up on the front page recently, takes this approach.
This I/O device thing seems to be a feature of one specific wasm engine that just exposes a simple framebuffer, so it has nothing to do with anything else.
There are no cross-platform APIs for something as simple as "blit a bunch of pixels to the screen" anyway, so WASI is pretty much free to define its own.
I'm sure over time, and if desired, WASI will get a set of higher level platform-wrapper APIs for rendering, audio, input and networking, but the requirements are somewhat different from traditional "native" APIs, for instance, compatibility with web APIs is useful. So we'll probably see the WebGPU API for 3D rendering, instead of Vulkan, D3D12 or Metal.
It looks quite modern to me, especially given what some people are fighting to get into ISO C++, or what other languages offer in their standard libraries.
... don't remind me of that proposal, lol. I think it's finally dead, and admitted it was a complete and utter failure of LEWG to let it get as far as it did [0].
Unfortunately, we now have "diet graphics", which consists of stuffing WebKit into C++ and calling it a day [1], so, looking forward to where that goes! [2]
> There are no cross-platform APIs for something as simple as "blit a bunch of pixels to the screen" anyway, so WASI is pretty much free to define its own. [..] compatibility with web APIs is useful
Personally I think Web compatibility is so useful that wasm off the Web should use Web APIs, instead of the direction WASI is going.
WASI is an effort to create a new OS API layer from scratch, without the baggage of POSIX or the Web or such. That's definitely interesting, and may be a big deal in the long run, but it has risks (it will take a lot longer, in particular) and it has downsides for the Web (which is personally the platform I care most about).
A complex browser API like canvas requires much more effort to "emulate" in WASI than a simple 2D framebuffer access. I think emulating the Linux framebuffer device is the right first step.
PS: I think there's some confusion about WASM running in browsers, vs WASM running on top of WASI. WASI is all about running WASM outside the browser: https://wasi.dev/
The Linux framebuffer device is a legacy thing which doesn't mesh with the way modern graphics chipsets work. GPUs today are practically glorified sprite engines, where a "framebuffer", a "texture" and a "sprite" are essentially one and the same in implementation, and new entities can be created by compositing ("blitting") existing ones with arbitrary 3D transformations, alpha-blending, anti-aliasing etc. You don't want to be limited to a single framebuffer with pixel-level access and everything being drawn/blitted by software; that would be dog-slow.
It's not about performance but about convenience. Having a way to plot single pixels with the CPU, without a lot of boilerplate code to set up a 3D API context and create textures, vertex buffers, shaders, etc., is a good thing in many situations.
That this is not a good way to do more complex rendering tasks seems quite obvious; in the long run this cannot be the only API covering all sorts of rendering. But it happens to be a good sweet spot for getting anything on screen at all, without having to deal with an overly complex rendering API just to render some 2D image data.
If convenience is what you want, that can be provided by a 2D canvas. This could even support a "soft framebuffer", i.e., a pixel-perfect canvas taking up the full screen, or some well-defined window inside it. But that would not be a low-level rendering API, of course.
To reinforce this: is a pixel actually a well-defined, usable unit? As far as I understand, many modern monitors use a definition of "pixel" quite different from the one of a couple of decades ago.
Pretty much the only popular abstraction I’m aware of that most GPUs share (as far as complex graphics operations go) is shader programs. That’s a pretty complex abstraction compared to a framebuffer which is quite frankly good enough for most things.
OpenGL is on the way out, and for rendering 2D image data it requires too much setup code: you need to create a GL context, a swap chain and a texture, and (unless GL 1.x is used) a vertex buffer and shader, plus set up tons of render state.
For 3D rendering, WebGPU would be the better option to support in WASI because it has standalone native implementations, such as Dawn and wgpu.
But for blitting CPU-generated 2D image data to the screen WebGPU is also sort of overkill (of course it would be possible to create a much simplified wrapper library in WASM).
OpenGL will not be on the way out until Khronos defines a high-level alternative to Vulkan, as not everyone enjoys being a GPU driver writer just to do graphics.
As mentioned by flohofwoe in the parent comment, Dawn and wgpu are both native implementations of WebGPU for use outside the browser. In particular wgpu-rs will allow users to target both the browser API and run natively.
I think it's debatable whether WebGPU is less capable than OpenGL 4.6 with AZDO. Even if it is less capable, that doesn't mean WebGPU is not a solution, or that it's only usable by browsers.
Regardless it's the closest portable solution at the moment, and it will continue to add capabilities. The WebGPU CG has already had public meetings with Khronos to talk about running WebGPU on top of Vulkan, and running Vulkan Portability on top of WebGPU. So I think there's a lot of interest in making WebGPU succeed on both web and native targets.
What many of us want is a high level standard that doesn't require being a driver engineer to take advantage of modern GPUs.
If WebGPU ends up being as castrated as WebGL, then we either keep using OpenGL, or change to modern middleware engines (which is what most are doing), with the added benefit that they make the actual 3D API less relevant, it is just a checkbox.
AMD's OpenGL tools don't look too active either; e.g. the last release of CodeXL is from 2018, and it only updated dependencies or removed functionality. For instance, this nugget from the release notes:
---
* Removal of components which have been replaced by new standalone tools
---

...and Radeon GPU Profiler is that standalone tool, which only has D3D12 and Vulkan support.
Look around for AMD's OpenGL activity more recently and there's not much, which isn't surprising, because they've been lagging behind NVIDIA with their GL drivers forever. I bet they're eager to close that chapter.
NVIDIA seems more committed to GL still, but without having all GPU vendors on board to continue supporting OpenGL, Khronos won't be able to do much to keep it alive.
> but without having all GPU vendors on board to continue supporting OpenGL, Khronos won't be able to do much to keep it alive
Please don't make things up. OpenGL 4.6 was released in July 2017. According to Wikipedia, modern AMD and NVIDIA cards both gained driver support for it in April 2018. Intel drivers have support since May 2019.
OpenGL is certainly used by a number of creative apps like Photoshop, but you can't deny that the number of games released with OpenGL is down significantly since say 2010.
I didn't say that there were no OpenGL games being released in 2019/2020 but that the ratio is definitely skewing away from OpenGL. "A lot of games" seems like a stretch compared with what the numbers used to be.
Also, I think it's obvious but I'll say it anyway: it's not up to Khronos, it's up to developers. It doesn't matter if Khronos continues to support OpenGL if developers move to some combination of D3D11/12, Vulkan, and Metal.
WebGL and OpenGL ES aren't OpenGL. Their uptake or lack thereof in other arenas is orthogonal to whether people are moving away from OpenGL for game development.
Considering UE4 and Unity both support OpenGL, yes, a lot if not the majority of games.
WebGL and OpenGL ES are pretty much OpenGL. Formally they are different standards, but they are all based on OpenGL: if you know one, you know the others pretty well, which is what matters for developers, as you agree in your second paragraph.
Once again, that's support not implementation. Developers could use OpenGL in Unreal. They could, but they haven't in general. Vulkan is the default, and is the more common choice.
If it were true that lots of games were picking OpenGL, why is it so hard to make a long list of them? It's easy to make such a list for vulkan.
There is also a talk about this at the X.Org Developers Conference (2019-10-02); for example, Blender seems to work already:
https://youtu.be/rV0P1ChE_5o?t=3970
OpenGL really is a legacy API in 2020. There is a massive mismatch between modern GPU design and the OpenGL API design. The only reason it wasn't left behind a decade ago is that there wasn't a decent replacement.
Today, with Vulkan and soon WebGPU available, there really is very little reason to hang on to OpenGL for anything but legacy.
The only platform that has marked it as legacy doesn't support Vulkan (directly) either so I'm not convinced it's dead yet. The simpler API for the minor performance trade-off is still compelling for lots of uses.
EGL doesn't let you create a window, it only lets you bind an OpenGL context to an existing platform window. And input isn't covered by EGL in the slightest.
People expect their WebAssembly applications to run on more platforms than just Microsoft Windows. There is a reason why web browsers have their own graphics API, for example. DX12 (and its previous versions) is the only first-class API on Windows. Metal is the only first-class API on Mac/iOS. Vulkan/OpenGL are only first class on Linux (and friends) and second class on Mac/Windows. So which API are you going to choose? You'll have to choose the most popular, all of them, or a newly created API that is first class in every compliant web browser. The same thought process happened with I/O devices and WebAssembly itself.
WASI has some unique goals around extending the WebAssembly sandboxing concepts into the API space using capability-based security, forming one of the key building blocks for nanoprocesses. Beyond that, WASI will indeed likely reuse existing APIs and API concepts, rather than always inventing new things from scratch.
The article linked here is an advertisement for a startup.
If people want to use WebAssembly to write non-browser applications, then it needs some non-browser API to replace the user interaction capabilities a browser's API would usually provide.
The goal for WASI is a secure-by-default capabilities model for access to system APIs.
It's also an open collaborative cross-vendor specification with at least four different independent implementations.
MSIL/CIL is similar in some ways, but it's still largely only supported by Microsoft and doesn't sandbox binaries the same way. It's an inspiration for WebAssembly, but not a feasible alternative to it. Similar to Google's PNaCl bytecode.
For user interaction that works for everyone, i.e. covering internationalization and accessibility, one could do much worse than to just use the existing web platform APIs, DOM and all. Why reinvent all that?
Sure, you can build a flimsy deathtrap house on top of a solid foundation in a lot of contexts. But if the foundation is unsound, it doesn't matter if your Ada is formally verified or not.
I don't see anyone pretending any such thing about WebAssembly or WASI. If anything they are drawing on the decades of experience with bytecode formats and security research.
AIUI, wasm does support using multiple linear memory segments at the same time. You don't get that on native processes short of using memory segmentation, which no modern architecture supports.
I'm not sure I understand what you're saying. There are two core principles behind WebAssembly. First, you can run low-level programming languages like C on any architecture; this is possible if the code is available and you are willing to compile the C code before installing the application (see Gentoo). Second, the libraries/APIs that the program depends on must be available.
What you're saying is that every vendor shipped their software with source code and only used cross platform APIs. Is that right?
I don't see how asm.js/wasm's marketing would be much different from that of PNaCl/NaCl. It's just fundamentally better, and that's why people are excited.
So what is so exciting about a bytecode format, created out of the political war between PNaCl and asm.js, without any big news for anyone who actually cares about the bytecode formats invented since 1958, starting with UNCOL and its subsequent attempts?
Compared to other bytecode formats, it has lots of advantages. Browsers support it, LLVM targets it, people are using it.
I don't really see the point of using WebAssembly (or any other bytecode format) outside the browser, though. It just seems like a way to throw away performance for no benefit.
Browsers support it because Mozilla refused to adopt PNaCl, which was also targeted by LLVM. WebAssembly is a political compromise, not any kind of technical achievement, despite how it keeps being celebrated.
Mainframes and midrange computers have used bytecode formats since the early 60's as a hardware-independence mechanism, where you AOT-compile at installation time, or have a hardware interpreter implemented as part of a micro-coded CPU.
It has allowed those kind of computers to advance their hardware to modern times, while keeping the applications unchanged.
The same idea is widely used on embedded platforms, as it gives OEMs flexibility designing hardware, while reducing software development costs even further.
I honestly do not see the point you are arguing for here. I understand you find the hype about WebAssembly wrong, and many of your comparisons are apt.
You say that wasm is a political "bastard" born out of Mozilla's refusal of PNaCl. I disagree, but it is one way of looking at the history.
But then my stance would be that to satisfy the "W" in wasm it needs to be a political chimera; the whole web is predicated on the model of vendors fighting political and technical battles over standards. PNaCl lost that battle, Java lost that battle miserably, and every other portable assembly never even participated.
Not a technical achievement so much as a different point in the design space. Two key differences. First, NaCl was originally designed to run in its own process, for a few different reasons:
- The original 32-bit x86 implementation fundamentally required it because, due to how it used %ss, a crash in the sandboxed code would crash the whole process. [1] Avoiding that issue would have required a different model that had performance overhead (and was more complicated). Ironically, the 64-bit version had no such requirement, but by then NaCl's design was already well-established.
- AFAIK it was originally developed separately from Chrome, so there was less motivation to deeply integrate it with JS.
- It was apparently conceived as a replacement for plugins (originally using NPAPI, before PPAPI was invented), which Chrome had run in a separate process anyway.
But WebAssembly is designed to run in the same process as DOM/JS, giving it two key advantages: it can reuse Web APIs rather than needing a whole separate set of APIs (PPAPI), and it can directly call and be called from JS. Both make life easier for developers; reusing web APIs also makes life easier for implementers.
The second key difference is that NaCl leaks much more information about the native environment than WebAssembly. Even with PNaCl, the native text segment, the native stack, and certain bits of implementation code were all accessible within the sandboxed address space. If you were up to shenanigans, you could even do things like jump into the middle of a native function, completely breaking the portability abstraction. Was that a problem? I don't know. It was conceptually secure; you couldn't break out of the sandbox without finding a vulnerability, just like WebAssembly. But it was a lot less isolated. In contrast, WebAssembly makes none of those things accessible and enforces full control flow integrity. You can't even tell which architecture you're running on, aside from indirect observations by benchmarking instructions or checking for memory ordering violations.
That difference is certainly responsible for some of the performance gap between NaCl and WebAssembly. Is it responsible for all of it? I don't know. I don't even know how big the gap is in the first place; the PDFTron benchmark makes it look major, but I haven't seen any other benchmarks, and I'm not sure if anyone has investigated what exactly is causing the overhead in the PDFTron benchmark, especially since the code isn't open source.
The same as from using Electron, but a step further: less overhead from unneeded browser features, ability to lock the user into a kiosk/fullscreen mode, while most of the code is still reusable.
There are significant differences between standard desktop applications, Electron applications, progressive web applications, isomorphic/universal applications and so on. Usually the business case decides. If you want people to make more standard desktop applications, you should figure out a way to make it work for the common Electron app business case. Or stop throwing baseless shit in their general direction just because they've chosen a technology that does not meet your purity requirements. I'm very sure everyone in the community agrees Electron is not ideal.
> There are significant differences between standard desktop applications, Electron applications
That is certainly true.
But I was replying to:
> The same as from using Electron, but a step further: less overhead from unneeded browser features, ability to lock the user into a kiosk/fullscreen mode, while most of the code is still reusable.
namely _Desktop applications_
Wasmer's claim is:
> Use the tools you know and the languages you love. Compile everything to WebAssembly. Run it on any OS or embed it into other languages.
There's no browser involved here: just compile once, run everywhere.
The "reuse developers" part I suppose could be achieved with Qt Quick, since QML is JavaScript-based, but beyond the language not much else translates, and you can't completely get away from C++ for anything non-trivial, so it's not really an Electron competitor.
Imho, the WebAssembly community should focus on the missing features: multithreading and shared memory. That way we could run arbitrary code, written in arbitrary languages, and just compile existing code to the platform without problems, saving lots of developer time.
Good news! WebAssembly has community group projects set up for both multithreading [0] and multi-memories [1], which would allow a module to both define a memory space and also import a shared one.
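As a sketch of what the multi-memories proposal enables, in WebAssembly text format (the proposal's syntax may still evolve, and the import names here are made up for illustration):

```wat
(module
  ;; Two linear memories in one module: standard wasm allows only one,
  ;; so this requires the multi-memory proposal.
  (memory $shared (import "env" "shared_mem") 1) ;; memory provided by the host
  (memory $own 1)                                ;; module's own private heap, 1 page = 64 KiB
  (export "own_mem" (memory $own)))
```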
There are so many parties interested in wasm, there are a ton of inflight proposals and extensions to get it beyond the mvp stage.
Development of the spec(s) happens in the open on the various GitHub projects (https://github.com/WebAssembly/). Meeting notes, agendas, proposals, etc. are all kept as markdown in the git repositories.
Shared-memory multi-threading is only one piece of the puzzle though. A lot of existing code also depends on synchronous I/O (fopen, ...), or wants to execute "long running loops"; these are also currently not possible when running in a web browser context, at least not without hacks and workarounds.
Ideally, libraries should only use a very small subset of the POSIX/C-runtime APIs (ideally none that need to call into the underlying operating system), and provide ways for the library user to override this functionality, or not depend on it at all (simple example: allow input data to be provided via memory instead of letting the library call into the C runtime I/O functions). Same for threading: instead of spawning threads inside the library, provide an API which takes "chunks of work", and let the library user take care of distributing those across multiple threads.
I agree to a point, but what if you wanted to port a virtual machine such as the JVM to WASM, one that runs a concurrent garbage collector in the background? You can't really divide that process into "chunks of work", or only in a contrived way (starting the threads is not the issue), and you'd need the shared-memory functionality to even make it work (the GC thread needs full access to the data of the other threads).
The madness needs to stop. I find it especially grating to have the security circus on one side and the endless stream of new side channels on the other.
When people write interactive 3D graphics for the web, there's ThreeJS and WebGL.
Where does WebAssembly games using I/O devices fit in all of this?