Hacker News
Standalone WebAssembly games using I/O devices (medium.com/wasmer)
137 points by panic on Jan 19, 2020 | 128 comments



When people write games for e.g. Steam, I assume they either use a 3D engine or something custom written in C++ for OpenGL, DirectX or Vulkan.

When people write interactive 3D graphics for the web, there's ThreeJS and WebGL.

Where do WebAssembly games using I/O devices fit into all of this?


In general if you are writing a game (that you expect to ship on multiple platforms) you’ll have some cross-platform layer that makes swapping out runtimes for each platform easy. Then the game is built on top of that. It could be in the same language, in another one, or both; for example, LuaJIT has been popular. WebAssembly provides another target.

Browsers are similar in that they are like mini sandboxed OSes with “standardised” interfaces. So applying the idea of standardised, sandboxed interfaces to WebAssembly, you end up with WASI, and probably eventually extensions for GPU, audio and other interfaces useful to games.

It’s actually a pretty old idea: plenty of old games were written to run on interpreters that were then rewritten for each platform. For example, Another World, which has cropped up on the front page recently, takes this approach.


This I/O device thing seems to be a feature of one specific wasm engine that just exposes a simple framebuffer, so it has nothing to do with anything else.


You can compile Unity to WebAssembly. There are several other similar engines out there that compile to wasm.


Yes, and while that is super cool and nifty, I am struggling to find the perfect use-case for this.


F2P games that run in browsers...


Fair enough!


I'm a little out of the loop with WASI. Is there a way to play sound with it too? Or is it just video output for now?


WASI doesn't have sound or video; it appears Wasmer exposed a nonstandard API via WASI file streams.


Does the WebAssembly community really need to reinvent the wheel and create yet another set of new platform APIs?


There are no cross-platform APIs for something as simple as "blit a bunch of pixels to the screen" anyway, so WASI is pretty much free to define its own.

I'm sure over time, and if desired, WASI will get a set of higher level platform-wrapper APIs for rendering, audio, input and networking, but the requirements are somewhat different from traditional "native" APIs, for instance, compatibility with web APIs is useful. So we'll probably see the WebGPU API for 3D rendering, instead of Vulkan, D3D12 or Metal.


> There are no cross-platform APIs for something as simple as "blit a bunch of pixels to the screen" anyway

SDL? Has ports for more obscure platforms, too


Yup, my first thought too, and drawing a pixel with SDL is comparatively easy.

    SDL_Window *window;
    SDL_Renderer *renderer;

    SDL_Init(SDL_INIT_VIDEO);
    SDL_CreateWindowAndRenderer(800, 600, 0, &window, &renderer);

    SDL_SetRenderDrawColor(renderer, 255, 255, 255, 255);
    SDL_RenderDrawPoint(renderer, 400, 300); // Renders in the middle of the screen.
    SDL_RenderPresent(renderer);

There are also SDL bindings in a bunch of languages; a few official ones are listed here:

https://www.libsdl.org/languages.php


Agreed. SDL2 is one of the few libraries I've used that I can honestly say is well-designed.


Would the Java APIs be something to copy? I believe they have had cross-platform graphics classes for a long time.


Um, I can think of some folks who got in a bit of legal trouble for (allegedly) ripping off the Java API. I don't think you'd want to do that!


This comment has made me sad about that stupid, stupid verdict all over again.


Java has been immensely successful and carefully modernized for decades, but none of that is true for its graphics APIs.


It looks quite modern to me, especially given what some people are fighting to get into ISO C++, or what other languages offer in their standard libraries.


... don't remind me of that proposal, lol. I think it's finally dead, and it was a complete and utter failure of LEWG to let it get as far as it did [0].

Unfortunately, we now have "diet graphics", which consists of stuffing WebKit into C++ and calling it a day [1], so, looking forward to where that goes! [2]

[0] https://www.reddit.com/r/cpp/comments/89q6wr/sg13_2d_graphic...

[1] http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p110...

[2] Hopefully into the bin.


SDL comes to mind.


> There are no cross-platform APIs for something as simple as "blit a bunch of pixels to the screen" anyway, so WASI is pretty much free to define its own. [..] compatibility with web APIs is useful

Personally I think Web compatibility is so useful that wasm off the Web should use Web APIs, instead of the direction WASI is going.

WASI is an effort to create a new OS API layer from scratch, without the baggage of POSIX or the Web or such. That's definitely interesting, and may be a big deal in the long run, but it has risks (it will take a lot longer, in particular) and it has downsides for the Web (which is personally the platform I care most about).


I don't understand, WASM can't use canvas?

Because if all you do is "blit a bunch of pixels to the screen", you don't care much about performance anyway.


A complex browser API like canvas requires much more effort to "emulate" in WASI than a simple 2D framebuffer access. I think emulating the Linux framebuffer device is the right first step.

PS: I think there's some confusion about WASM running in browsers, vs WASM running on top of WASI. WASI is all about running WASM outside the browser: https://wasi.dev/
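For reference, the classic Linux fbdev interface mentioned above looks roughly like this from C. This is a sketch of the device being emulated, not Wasmer's actual API, and the function names here are made up:

```c
#include <fcntl.h>
#include <linux/fb.h>
#include <stddef.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

/* Byte offset of pixel (x, y) in a linear framebuffer. */
size_t fb_pixel_offset(size_t x, size_t y, size_t line_length,
                       size_t bytes_per_pixel) {
    return y * line_length + x * bytes_per_pixel;
}

/* Open an fbdev-style device, map it, and plot one white pixel in the
 * middle of the screen. Returns 0 on success, -1 on any failure. */
int fb_plot_center_pixel(const char *dev) {
    int fd = open(dev, O_RDWR);
    if (fd < 0)
        return -1;

    struct fb_var_screeninfo vinfo;
    struct fb_fix_screeninfo finfo;
    if (ioctl(fd, FBIOGET_VSCREENINFO, &vinfo) < 0 ||
        ioctl(fd, FBIOGET_FSCREENINFO, &finfo) < 0) {
        close(fd);
        return -1;
    }

    size_t size = (size_t)vinfo.yres * finfo.line_length;
    uint8_t *fb = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED) {
        close(fd);
        return -1;
    }

    size_t off = fb_pixel_offset(vinfo.xres / 2, vinfo.yres / 2,
                                 finfo.line_length, vinfo.bits_per_pixel / 8);
    *(uint32_t *)(fb + off) = 0xFFFFFFFFu; /* assumes a 32 bpp mode */

    munmap(fb, size);
    close(fd);
    return 0;
}
```

The 32 bpp assumption is the common case; real code would check `vinfo.bits_per_pixel` and handle 16-bit formats too.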


The Linux framebuffer device is a legacy thing which doesn't mesh with the way modern graphics chipsets work. GPUs today are practically glorified sprite engines where a "framebuffer", a "texture" and a "sprite" are essentially one and the same in implementation, and new entities can be created by compositing ("blitting") existing ones with arbitrary 3D transformations, alpha-blending, anti-aliasing etc. You don't want to be limited to a single framebuffer with pixel-level access and everything being drawn/blitted by software, that would be dog-slow.


It's not about performance but about convenience. Having a way to plot single pixels with the CPU without a lot of boilerplate code to setup a 3D API context, creating textures, vertex buffers, shaders etc etc is a good thing to have in many situations.

That this is not a good way for doing more complex rendering tasks seems quite obvious (e.g. in the long run this cannot be the only API for covering all sorts of rendering, but it happens to be a good sweet spot for getting anything on screen at all, and not having to deal with an overly complex rendering API just to get some 2D image data rendered).
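The convenience argument is easy to make concrete: with a plain buffer in memory, the whole "API" is a few lines of C (a sketch; the names and the 320x200 size are arbitrary):

```c
#include <stdint.h>

enum { FB_W = 320, FB_H = 200 };

/* Pack 8-bit channels into a 0xAARRGGBB pixel. */
uint32_t rgba(uint8_t r, uint8_t g, uint8_t b, uint8_t a) {
    return ((uint32_t)a << 24) | ((uint32_t)r << 16) |
           ((uint32_t)g << 8)  |  (uint32_t)b;
}

/* The entire "rendering API": bounds-check and index into an array. */
void put_pixel(uint32_t *fb, int x, int y, uint32_t color) {
    if (x >= 0 && x < FB_W && y >= 0 && y < FB_H)
        fb[y * FB_W + x] = color;
}
```

Whatever the buffer eventually gets handed to (a framebuffer device, a canvas, a texture upload) stays a separate, swappable concern.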


If convenience is what you want, that can be provided by a 2D canvas. This could even support a "soft framebuffer", i.e, a pixel-perfect canvas taking up the full screen, or some well-defined window inside it. But that would not be a low-level rendering API, of course.


In almost no situation do you want software, pixel-based access.

Are we really going back 30 years?


To reinforce this: is a pixel actually a well-defined, usable unit? As far as I understand, many modern monitors use a definition of a pixel quite different from the one of a couple of decades ago.


gpu.js does this in a really performant way (in JS terms). I love that project.


Pretty much the only popular abstraction I’m aware of that most GPUs share (as far as complex graphics operations go) is shader programs. That’s a pretty complex abstraction compared to a framebuffer which is quite frankly good enough for most things.


What about OpenGL?


OpenGL is on the way out, and for rendering 2D image data it requires too much setup code: you need to create the GL context, a swap chain and a texture, and (unless GL 1.x is used) a vertex buffer and shader, and set up tons of render state.

For 3D rendering, WebGPU would be the better option to support in WASI because it has standalone native implementations:

https://dawn.googlesource.com/dawn

https://github.com/gfx-rs/wgpu

But for blitting CPU-generated 2D image data to the screen WebGPU is also sort of overkill (of course it would be possible to create a much simplified wrapper library in WASM).


OpenGL will not be on the way out until Khronos defines a high-level alternative to Vulkan, as not everyone enjoys being a GPU driver writer just to do graphics.


That will most likely be the already mentioned WebGPU.


That is only for browsers.


As mentioned by flohofwoe in the parent comment, Dawn and wgpu are both native implementations of WebGPU for use outside the browser. In particular wgpu-rs will allow users to target both the browser API and run natively.

Dawn and wgpu are also collaborating on a common set of native headers (https://github.com/webgpu-native/webgpu-headers) to make it easy to switch between implementations.


That is not a solution; OpenGL 4.6 with AZDO is more capable than WebGPU, just like WebGL 2.0 still lacks many of the OpenGL ES 3.2 capabilities.


I think it's debatable whether WebGPU is less capable than OpenGL 4.6 with AZDO. Even if it is less capable, it doesn't mean WebGPU is not a solution nor only usable by browsers.

Regardless it's the closest portable solution at the moment, and it will continue to add capabilities. The WebGPU CG has already had public meetings with Khronos to talk about running WebGPU on top of Vulkan, and running Vulkan Portability on top of WebGPU. So I think there's a lot of interest in making WebGPU succeed on both web and native targets.


What many of us want is a high level standard that doesn't require being a driver engineer to take advantage of modern GPUs.

If WebGPU ends up being as castrated as WebGL, then we either keep using OpenGL, or change to modern middleware engines (which is what most are doing), with the added benefit that they make the actual 3D API less relevant, it is just a checkbox.


I mean, that is exactly what WebGPU is, so I am not understanding why you are so determined to complain about it.


No, OpenGL is not "on the way out". Please back up your claims.


“Apple deprecates OpenGL across all OSes”: https://www.anandtech.com/show/12894/apple-deprecates-opengl...


Apple is not Khronos.

They haven't supported OpenGL for years anyway, so the fact they deprecate it now is irrelevant.


A cross platform library loses its potency if it doesn't work cross platform.


OpenGL does not stop being cross-platform just because it does not work natively on one platform.

In any case, abstraction layers for macOS already exist.

And, going by your definition, if Apple only allows Metal, then no API will be cross-platform ever anyway.


OpenGL was never supported on game consoles, so...


The GL tools rot is already starting. E.g. the Radeon GPU Profiler supports Vulkan, D3D12, OpenCL, but not OpenGL:

https://gpuopen.com/gaming-product/radeon-gpu-profiler-rgp/


That profiler is meant for low-level debugging as its own description says. That is why it does not support DX11 either.


AMD's OpenGL tools don't look too active either, e.g. the last release of CodeXL is from 2018, and this only updated dependencies or removed functionality, for instance this nugget from the release notes:

---

* Removal of components which have been replaced by new standalone tools:

* FrameAnalysis - use https://github.com/GPUOpen-Tools/Radeon-GPUProfiler

---

...that Radeon-GPUProfiler is that tool which has only D3D12 and Vulkan support.

Look around for AMD's OpenGL activity more recently, there's not much, which isn't surprising because they've been lagging behind NVIDIA with their GL drivers since forever. I bet they're eager to close that chapter.

NVIDIA seems more committed to GL still, but without having all GPU vendors on board to continue supporting OpenGL, Khronos won't be able to do much to keep it alive.


> but without having all GPU vendors on board to continue supporting OpenGL, Khronos won't be able to do much to keep it alive

Please don't make things up. OpenGL 4.6 was released in July 2017. According to Wikipedia, modern AMD and NVIDIA cards both gained driver support for it in April 2018. Intel drivers have support since May 2019.

(https://en.wikipedia.org/wiki/OpenGL#OpenGL_4.6)


AMD has always lacked tools and has never produced much on the software side.

That is not news, and that has nothing to do with the state of OpenGL.

Please, stop spreading misinformation about OpenGL.


OpenGL absolutely is a legacy API today. It has been an awful impedance mismatch to modern GPUs for about a decade now.


It's a high-level API. It went through N generations of graphics hardware for 30 years (40 if you consider IRIS GL).


And it became increasingly less of a good fit with each one. By now, it is ridiculously mismatched to the task it needs to perform.

Just leave it to die.


OpenGL is certainly used by a number of creative apps like Photoshop, but you can't deny that the number of games released with OpenGL is down significantly since say 2010.


A lot of games and apps are being released using OpenGL today.

Then there is WebGL and OpenGL ES, widely used everywhere too.

So, no, it is not going anywhere. In fact, Khronos themselves have said so.


I didn't say that there were no OpenGL games being released in 2019/2020 but that the ratio is definitely skewing away from OpenGL. "A lot of games" seems like a stretch compared with what the numbers used to be.

Also I think it's obvious but I'll say it anyway; it's not up to khronos it's up to developers. It doesn't matter if they continue to support OpenGL if developers move to some combination of dx11/12, Vulkan, and Metal.

WebGL and OpenGL ES aren't OpenGL. Their uptake or lack thereof in other arenas is orthogonal to whether people are moving away from OpenGL for game development.


Considering UE4 and Unity both support OpenGL, yes, a lot if not the majority of games.

WebGL and OpenGL ES are pretty much OpenGL. Formally they are different standards, but they are all based on OpenGL: if you know one, you know the others pretty well, which is what is important for developers, as you agree in the second paragraph.


Once again, that's support, not adoption. Developers could use OpenGL in Unreal. They could, but in general they haven't. Vulkan is the default, and is the more common choice.

If it were true that lots of games were picking OpenGL, why is it so hard to make a long list of them? It's easy to make such a list for Vulkan.


OpenGL is deprecated on macOS and iOS, because Apple has changed to use Metal: https://github.com/godotengine/godot/issues/19368

That's why Godot is moving to Vulkan, that can also run on Metal: https://godotengine.org/article/retrospective-and-future

For Godot game developers, this will not be visible in any way; it's an under-the-hood change to the engine.

For other apps, there is in-progress development to run full OpenGL on top of Vulkan with Zink. There is a blog post about it here: https://www.collabora.com/news-and-blog/blog/2018/10/31/intr...

There is also a talk about it from The X.Org Developer's Conference on 2019-10-02; for example, Blender seems to work already: https://youtu.be/rV0P1ChE_5o?t=3970

All conference recordings are listed at conference website: https://xdc2019.x.org/


Thanks for sharing Zink, this gives me hope for OpenGL's future.


OpenGL really is a legacy API in 2020. There is a massive mismatch between modern GPU design and the OpenGL API design. The only reason it wasn't left behind a decade ago is that there wasn't a decent replacement.

Today, with Vulkan and soon WebGPU available, there really is very little reason to hang on to OpenGL for anything but legacy.


Yeah, because everyone wants to be a driver writer and compiler expert, in addition to graphics programming.


The only platform that has marked it as legacy doesn't support Vulkan (directly) either so I'm not convinced it's dead yet. The simpler API for the minor performance trade-off is still compelling for lots of uses.


WebGPU is a simpler API for doing modern rendering, with less tradeoff.

OpenGL is not simple if you want to actually do useful things with it.


> What about OpenGL?

You cannot create a window or handle input with OpenGL.


Sure you can; EGL is part of the OpenGL set of standards.


EGL doesn't let you create a window, it only lets you bind an OpenGL context to an existing platform window. And input isn't covered by EGL in the slightest.


People expect their WebAssembly applications to run on more platforms than just Microsoft Windows. There is a reason why web browsers have their own graphics API, for example. DX12 (and previous versions) is the only first-class API on Windows. Metal is the only first-class API on Mac/iOS. Vulkan/OpenGL are only first class on Linux (and friends) and second class on Mac/Windows. So which API are you going to choose? You'll have to choose the most popular one, all of them, or a newly created API that is first class in every compliant web browser. The same thought process happened with I/O devices and WebAssembly itself.


That abstraction work was already done with ANGLE when web standards started requiring browsers offer WebGL across the different platforms.


That API is under development, it is WebGPU.


WASI has some unique goals around extending the WebAssembly sandboxing concepts into the API space using capability-based security, forming one of the key building blocks for nanoprocesses. Beyond that, WASI will indeed likely reuse existing APIs and API concepts, rather than always inventing new things from scratch.
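To make the capability model concrete, here is a sketch in C: the code itself is ordinary, but under a WASI runtime its behavior differs from native (`file_size` is a made-up name; `wasmtime run --dir=.` is one real way to grant a directory capability):

```c
#include <stdio.h>

/* Returns the size of a file in bytes, or -1 if it can't be opened.
 * Compiled natively this is ordinary C; compiled to wasm32-wasi the
 * fopen() only succeeds for paths inside directories the host granted,
 * e.g. `wasmtime run --dir=. app.wasm` -- everything else is denied by
 * default, which is the capability model in action. */
long file_size(const char *path) {
    FILE *f = fopen(path, "rb");
    if (!f)
        return -1;
    if (fseek(f, 0, SEEK_END) != 0) {
        fclose(f);
        return -1;
    }
    long size = ftell(f);
    fclose(f);
    return size;
}
```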

The article linked here is an advertisement for a startup.


If people want to use WebAssembly to write non-browser applications, then it needs some non-browser API to replace the user interaction capabilities a browser's API would usually provide.


I'm not sure why you'd choose webassembly over, say, MSIL for this?


The goal for WASI is a secure-by-default capabilities model for access to system APIs.

It's also an open collaborative cross-vendor specification with at least four different independent implementations.

MSIL/CIL is similar in some ways, but it's still largely only supported by Microsoft and doesn't sandbox binaries the same way. It's an inspiration to WebAssembly, but not a feasible alternative to it. Similar to Google's PNaCl bytecode.

Some more detail here:

https://hacks.mozilla.org/2019/03/standardizing-wasi-a-webas...

https://hacks.mozilla.org/2019/11/announcing-the-bytecode-al... "secure-by-default foundations for native development that are portable and scalable."


For user interaction that works for everyone, i.e. covering internationalization and accessibility, one could do much worse than to just use the existing web platform APIs, DOM and all. Why reinvent all that?


What's wrong with using a subset of the browser APIs? Or, alternatively, one of the many existing platform APIs?


Browser APIs are not C ABIs and the platform C ABIs are typically not cross platform and make for a bad target.


POSIX, SDL and the usual suspects are pretty much cross-platform and run everywhere.


Browser APIs have C++ ABIs that can be generated through WebIDL, if I recall correctly.


What is the benefit of replacing the browser though?


There is no browser to replace. Many of us are trying to look into WASM for loadable modules or entire binaries.


I thought I already could do that since the late 80's.


With what? lisp? Java? MSIL?

https://hacks.mozilla.org/2019/11/announcing-the-bytecode-al.... "secure-by-default foundations for native development that are portable and scalable."


You can compile Heartbleed to WASM, secure by default, as long as C or any of its derived languages are not used.


Sure, you can build a flimsy deathtrap house on top of a solid foundation in a lot of contexts. But if the foundation is unsound, it doesn't matter if your Ada is formally verified or not.


At least the Ada folks acknowledge that the language isn't perfect, and don't pretend they were the very first one on its field.


I don't see anyone pretending any such thing about WebAssembly or WASI. If anything they are drawing on the decades of experience with bytecode formats and security research.


Apparently not, otherwise bounds checking inside of the same linear memory segment would actually be supported.

Likewise, they wouldn't "forget" the formats that already had support for languages like C when talking about what is "new" with WebAssembly.


> bounds checking inside of the same linear memory segment

That can be supported in the source language.


Which is meaningless for module consumers and hardly any different from native processes.


AIUI, wasm does support using multiple linear memory segments at the same time. You don't get that on native processes short of using memory segmentation, which no modern architecture supports.


CPUs like SPARC ADI and ARM MTE offer much better memory protection than what is being sold by WASM.

Solaris, iOS and future Android versions take advantage of their existence.


Which is completely irrelevant as by definition they are not portable.

The fact remains that wasm modules have a very well-defined behavior when embedded in other applications, and can be embedded in a portable way.


I'm not sure I understand what you're saying. There are two core principles behind WebAssembly. First of all, you can run low-level programming languages like C on any architecture. This is possible if the code is available and you are willing to compile the C code before installing the application (see Gentoo). Secondly, the libraries/APIs that the program depends on must be available.

What you're saying is that every vendor shipped their software with source code and only used cross platform APIs. Is that right?


> First of all you can run low level programming languages like C on any architecture

Just like with TIMI, MSIL, EM, ADF, TenDRA, PNaCL among others.

> What you're saying is that every vendor shipped their software with source code and only used cross platform APIs. Is that right?

Rather, that this is nothing new: the formats listed above already offered the same capabilities, without the same marketing.


I don’t see how asm.js/wasm’s marketing is much different from PNaCl/NaCl’s. It’s just fundamentally better, and that’s why people are excited.


So much better that PNaCL still outperforms WASM.

https://www.pdftron.com/blog/wasm/wasm-vs-pnacl/

And it doesn't offer any significant security improvements, in spite of its marketing, compiling Heartbleed into WASM is still possible.


The reason people are excited about WASM is neither security nor performance.


So what is so exciting about a bytecode format, created out of the political war between PNaCL and asm.js, without any big news for anyone that actually cares about bytecode formats invented since 1958 with the introduction of UNCOL and subsequent attempts?


Compared to other bytecode formats, it has lots of advantages. Browsers support it, LLVM targets it, people are using it.

I don't really see the point of using WebAssembly (or any other bytecode format) outside the browser, though. It just seems like a way to throw away performance for no benefit.


Browsers support it because Mozilla refused to adopt PNaCL, which was also targeted by LLVM. WebAssembly is a political compromise, not the kind of technical achievement it keeps being celebrated as.

Mainframes and midrange computers have used bytecode formats since the early 60's as a hardware-independence mechanism, where you AOT-compile at installation time, or have a hardware interpreter implemented as part of a micro-coded CPU.

It has allowed those kinds of computers to advance their hardware to modern times, while keeping the applications unchanged.

The same idea is widely used on embedded platforms, as it gives OEMs flexibility designing hardware, while reducing software development costs even further.


I honestly do not see the point you are arguing "for" here. I understand you find the hype about WebAssembly wrong, and many of your comparisons are apt.

You say that wasm is a political "bastard" born out of Mozilla's refusal of PNaCL. I disagree, but it is a way of looking at history.

But then my stance would be that, to satisfy the "w" in wasm, it needs to be a political chimera: the whole web is predicated on the model of vendors fighting political and technical battles over standards. PNaCL lost that battle, Java lost it miserably, and every other portable assembly never even participated.

Are you criticizing wasm or the open web?


Not a technical achievement so much as a different point in the design space. Two key differences. First, NaCl was originally designed to run in its own process, for a few different reasons:

- The original 32-bit x86 implementation fundamentally required it because, due to how it used %ss, a crash in the sandboxed code would crash the whole process. [1] Avoiding that issue would have required a different model that had performance overhead (and was more complicated). Ironically, the 64-bit version had no such requirement, but by then NaCl's design was already well-established.

- AFAIK it was originally developed separately from Chrome, so there was less motivation to deeply integrate it with JS.

- It was apparently conceived as a replacement for plugins (originally using NPAPI, before PPAPI was invented), which Chrome had run in a separate process anyway.

But WebAssembly is designed to run in the same process as DOM/JS, giving it two key advantages: it can reuse Web APIs rather than needing a whole separate set of APIs (PPAPI), and it can directly call and be called from JS. Both make life easier for developers; reusing web APIs also makes life easier for implementers.

The second key difference is that NaCl leaks much more information about the native environment than WebAssembly. Even with PNaCl, the native text segment, the native stack, and certain bits of implementation code were all accessible within the sandboxed address space. If you were up to shenanigans, you could even do things like jump into the middle of a native function, completely breaking the portability abstraction. Was that a problem? I don't know. It was conceptually secure; you couldn't break out of the sandbox without finding a vulnerability, just like WebAssembly. But it was a lot less isolated. In contrast, WebAssembly makes none of those things accessible and enforces full control flow integrity. You can't even tell which architecture you're running on, aside from indirect observations by benchmarking instructions or checking for memory ordering violations.

That difference is certainly responsible for some of the performance gap between NaCl and WebAssembly. Is it responsible for all of it? I don't know. I don't even know how big the gap is in the first place; the PDFTron benchmark makes it look major, but I haven't seen any other benchmarks, and I'm not sure if anyone has investigated what exactly is causing the overhead in the PDFTron benchmark, especially since the code isn't open source.

[1] https://static.googleusercontent.com/media/research.google.c...


Eh, no. Vendors everywhere will package things for your architecture and give you binaries.

That is how it has always been done and is still done.

Yes, you can do it differently with an IL, but that isn't new either (Java, .NET, etc.).


I tried and ran into many (well known, at this point) problems. WASM solves them.


The same as from using Electron, but a step further: less overhead from unneeded browser features, ability to lock the user into a kiosk/fullscreen mode, while most of the code is still reusable.


You mean like the standard desktop applications we had until the browser took over?


There are significant differences between standard desktop applications, Electron applications, progressive web applications, isomorphic/universal applications and so on. Usually the business case decides. If you want people to make more standard desktop applications, you should figure out a way to make it work for the common Electron app business case. Or stop throwing baseless shit in their general direction just because they've chosen a technology that does not meet your purity requirements. I'm very sure everyone in the community agrees Electron is not ideal.


> There are significant differences between standard desktop applications, Electron applications

That is certainly true.

But I was answering to

> The same as from using Electron, but a step further: less overhead from unneeded browser features, ability to lock the user into a kiosk/fullscreen mode, while most of the code is still reusable.

namely _Desktop applications_

Wasmer claim is

> Use the tools you know and the languages you love. Compile everything to WebAssembly. Run it on any OS or embed it into other languages.

There's no browser involved here, just compile once, run everywhere

I have the feeling I already heard it...


> [..], just compile once, run everywhere

>I have the feeling I already heard it...

If you are referring to java then something to note is

> Use the tools you know and the languages you love.

> Compile everything to WebAssembly.

> [..] embed it into other languages.

Which, to my understanding, applies only marginally to Java; especially the embedding part.


> the common Electron app business case

Serious question: what is the common electron app business case?

Being able to run as a desktop app as well as in a browser?


From my POV: yes, and reuse the developers and code you already have for the web version.


Ok, thanks, that makes sense.

The reuse developers part I suppose could be achieved with QtQuick since QML is javascript based, but beyond the language, not that much else translates and you can’t completely get away from C++ for anything non-trivial, so it’s not really an electron competitor.


>Serious question: what is the common electron app business case?

Easy portability (not perfect) across all major platforms


Electron is the least worst technology we have for building expressive 2D GUIs. Carlo and others are just an incremental improvement.

Every other GUI library is orders of magnitude inferior, unless you just want to look like Windows' default utilities.


Imagine if apps would blend in with the host OS! The horror!


Sorry, but the founder wants his specific UI; I can't do anything about it, my job is to build it.


Imho, the WebAssembly community should focus on the missing features: multithreading and shared memory. That way, we could run arbitrary code written in arbitrary languages, and we could just compile existing code for the platform without problems, saving lots of developer time.


Good news! WebAssembly has community group projects set up for both multithreading [0] and multi-memories [1], which would allow a module to both define a memory space and also import a shared one.

There are so many parties interested in wasm that there are a ton of in-flight proposals and extensions to get it beyond the MVP stage.

[0]: https://github.com/WebAssembly/threads

[1]: https://github.com/WebAssembly/multi-memory
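For a sense of what the threads proposal targets: ordinary pthreads code like the sketch below is what toolchains such as Emscripten (with `-pthread`) compile onto shared linear memory plus atomics (`run_workers` is a made-up name):

```c
#include <pthread.h>
#include <stdatomic.h>

static atomic_int counter;

/* Each worker bumps the shared counter 1000 times. */
static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000; i++)
        atomic_fetch_add(&counter, 1);
    return NULL;
}

/* Spawn n workers (capped at 16), join them, return the final count.
 * Natively this is plain pthreads; under the wasm threads proposal the
 * same source maps onto workers sharing one linear memory. */
int run_workers(int n) {
    pthread_t t[16];
    if (n > 16) n = 16;
    atomic_store(&counter, 0);
    for (int i = 0; i < n; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < n; i++)
        pthread_join(t[i], NULL);
    return atomic_load(&counter);
}
```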


Where is this all being discussed and developed?


There is a W3C sponsored Community Group https://www.w3.org/community/webassembly/.

Development of the spec(s) happens in the open on the various GitHub projects (https://github.com/WebAssembly/). Meeting notes, agendas, proposals, etc. are all kept as Markdown in the Git repositories.


Thanks!


Shared-memory multi-threading is only one piece of the puzzle though. A lot of existing code also depends on synchronous I/O (fopen, ...), or wants to execute "long-running loops"; these are also currently not possible when running in a web browser context, at least not without hacks and workarounds.

Ideally, libraries should only use a very small subset of the POSIX / C-runtime APIs (ideally none which need to call into the underlying operating system), and provide ways for the library user to override this functionality, or not depend on those at all (simple example: allow to provide input data via memory instead of letting the library call into the C runtime I/O functions). Same for threading: instead of spawning threads inside the library, provide an API which takes "chunks of work", and let the library user care about doing this across multiple threads.
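A tiny sketch of the style being advocated: the library operates on caller-provided memory and never performs I/O itself (`checksum` is a made-up name):

```c
#include <stddef.h>

/* A library entry point in the style described above: it operates on a
 * caller-provided buffer and never touches the OS, so it ports to any
 * environment -- browser, WASI, or native. The caller decides whether
 * the bytes came from fopen(), fetch(), or an embedded asset. */
long checksum(const unsigned char *data, size_t size) {
    long sum = 0;
    for (size_t i = 0; i < size; i++)
        sum += data[i];
    return sum;
}
```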


I agree to a point, but what if you wanted to port a virtual machine such as the JVM to WASM, which runs a concurrent garbage collector in the background. You can't really divide the process into "chunks of work" in that case, or only in a contrived way (starting the threads is not the issue), and you'd need the shared memory functionality to even make it work (the GC thread would need full access to the data in the other threads).


The madness needs to stop. I find it especially grating to have the security circus on one side and the endless stream of new side channels on the other.


Why use WebAssembly to create standalone programs, instead of some commonly used languages for writing desktop apps, like C++ or Java?



