I've been incredibly impressed at the level of coordination that we've seen from browser vendors on WASM. I know that they've all been working together behind the scenes for decades now on various initiatives, but I can't remember the last time they've all been so publicly supportive of each other as they are here.
It's not WebKit that matters; it's Apple. Apple was also conspicuously absent from the original wasm announcement. Brendan Eich announced (paraphrased), "This wouldn't work without all browser makers buying in, and all of them are today announcing their intention to support wasm, including..." [insert a list of all the companies EXCEPT Apple].
Asked about the missing Apple, he responded that he was sure they were on board but hadn't submitted their written statement by his deadline. I asked why he would assume they were going to publicly commit to it if they hadn't. It would have taken a one-line statement.
He got very snippy and said that they were obviously just a bit busy getting ready for one of their regular public performances (maybe WWDC, I don't remember.) Yeah, sure, that must be it.
Their show came and went, and I must have missed the announcement. More shows came and went. If Apple ever publicly committed to supporting wasm, as all the other companies did on day one, I haven't seen it, but maybe they eventually did. (It would be easy for me to miss, so I'm seriously asking here.) Is anyone aware of any link to that public announcement of support, the one that we would have seen on day one if Apple hadn't been so busy?
If not, then there is another possibility. There are Apple people contributing to the wasm technology, both the development of the spec and the implementation in WebKit (I believe, but correct me if I'm wrong.) That would give Apple the option of supporting wasm if they ever chose to do so. I'm sure they want that option. And no public announcement of support means they are hanging on to their option to exercise their iOS/Safari veto power over this significant advance for open web apps (vs. vendor-controlled native apps).
Sorry, I don't recall being snippy, but I do recall you were pushing for me to "prove a negative", which is a fallacy. I can't prove Apple is committed to supporting WebAssembly and speak for them. But if you follow their bugzilla, you can see that the
dependencies are getting attention. Note also who is assigned to some of the recent ones: JF Bastien, formerly of Google (long-time PNaCl team member).
I may also have been unwilling to put people I know at Apple, with whom I'd spoken about WebAssembly in person at CurryOn in Prague (July 2015), on the spot. But suffice to say they're keen on WebAssembly and everything looks as on track as it can be, for a stealthy prima-donna company like Apple!
Yeah, you were snippy, and caps-lock shouted at me when you mistook me for an Apple fan boy skeptical of wasm, when in fact I was a wasm fan boy skeptical of Apple, but as I noted earlier in another comment (just a few below this one, written before you chimed in here), I'm an Eich fan, too. I was so before you got snippy, and nothing has changed, because you're fighting for what I consider very important: the Extensible Web concept, giving us a much better app platform that has unprecedented reach and openness.
Apple's "courageous" willingness to enforce their own agenda and disappoint those with other priorities does not inspire confidence that Apple is fully on board with this wide-open web app platform, and that "no comment" is just how they express their enthusiasm.
But maybe they really are enthusiastic, and maybe they can hardly wait to announce their support for progressive web apps, too. Since I'm not in the smoky back rooms, I'll just have to wait and see.
But you and others who are working toward making these things happen, not just creating the tech but also working on the politics, should know that you're doing a valuable thing for the world, and lots of us appreciate what you do.
I don't think that counts as caps-lock shouting, as it's not all caps-lock. :-/
But whatever you call it, sorry about that. Re-reading it, I still think you seemed a bit over-aggro re: Apple, hence my mocking (again, sorry) -- but who knows? You could be right and they'll slow-roll or cripple WebAssembly.
I doubt it, based on what I know, but it is all speculation until they ship. Only thing to do is carry on and do what we can to up the competitive pressure.
You're very gracious, but whether I owe you an apology for being "over-aggro" or not, you don't owe me any, which you would have seen if you had seen my expression face to face. Imagine me looking down, smiling, shaking my head while asking, "yeah, but are you SURE you aren't just imagining the answer you want to hear?" and you'll get the right "feel". The caps lock business is just me ribbing you with a grin. The only serious part is that I'm grateful to you for trying to make something happen that I really want plus a bit of worry that it sounds a little too good to be true and not wanting to get my hopes up too much.
But you've answered my real issue ("how serious are they, really?") as far as you possibly can at this point, and I'm more optimistic than before, while still reserving a bit just in case....
And I really hope that more of the "extensible web" enabling technologies will follow sooner rather than later. Good luck and thanks.
Not an oxymoron, note well: the best prima donnas work their PR on their schedule, and suppress all others' attempts to reveal their secrets. Just like ol' Shiny!
> If Apple ever publicly committed to supporting wasm, as all the other companies did on day one, I haven't seen it, but maybe they eventually did.
They haven't made such a statement, but that's not unusual for them because their corporate policy is to "not make forward-looking statements". They do have representation in the WebAssembly W3C community group and do send representatives to the (rare) in-person summits that we've held. (Turns out the proverbial smoky back rooms do exist; it's just that smoking is not allowed.)
This seems to be mainly an abundance of caution rather than obstinacy or objection. It does seem that they have started working on an implementation, but we are careful not to put words into their mouths or read too much into that.
They were kept abreast of the planning of this particular announcement, and of late have ramped up their interest in settling design issues.
I would take Eich at his word that there is support within Apple's browser development team. They have, after all, managed to reach ES6 compatibility faster than anyone else.
Apple's lack of public engagement probably has more to do with management's priorities. Apple's lack of community engagement is a problem: they drag their feet on WebView, block 3rd party browsers, and stymie adoption of open codecs like Opus and WebM.
However, Wasm can be polyfilled, so it can reach critical mass without Apple's explicit cooperation. At some point, not supporting Wasm will mean losing market share (and thus revenue) to Firefox and Chrome.
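As a sketch of what that polyfill path looks like in practice — detect native support, otherwise fall back to an asm.js build (the file names here are hypothetical, just for illustration):

```javascript
// Hedged sketch: feature-detect native wasm support and pick a build.
// "app-wasm.js" / "app-asmjs.js" are made-up file names.
function wasmSupported() {
  return typeof WebAssembly === "object" &&
         typeof WebAssembly.instantiate === "function";
}

// In a real page you'd inject a <script> tag or dynamic import here.
const script = wasmSupported() ? "app-wasm.js" : "app-asmjs.js";
console.log("would load:", script);
```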
Apple has a general policy of not commenting on features in any future release: they will rarely even say that something currently in WebKit nightlies will be in the next release, so it's merely in keeping with their typical form of not making any comment about future products/releases.
As such, I'd read absolutely nothing into them refusing to commit to supporting it in a future release: that's just their normal behaviour, because per Apple policy they cannot. The actual signs (whether they're implementing a given feature, typically done publicly; whether they're sending detailed feedback on the spec; and what they're willing to say about implementing it off the record in private) are all there for WASM. I would be entirely unsurprised if Eich had off-the-record confirmation that it will almost certainly ship in Safari 11.
I'm an Eich fan, so what I DO trust is that he's doing everything he can to make this happen. I do trust that whatever he says is an attempt to help the web platform become the best it can be, meaning he's trying to help me.
I'll remain a bit skeptical about the literal correctness of those words until I hear what Apple has to say, but I'm encouraged by some of the other comments I see here.
I'm going to give Apple one potential benefit-of-the-doubt point here, despite the fact that I think there's also good reason to suspect they're not above succumbing to the incentives they have to hobble the web vs native.
If they're primarily about tablet and mobile devices now (lots of indicators that's the case), they're going to be concerned about battery life, performance, and other impacts on user experience. If history repeats itself, they're probably concerned about these things at a level of fussiness that other vendors often don't share. They might well not be on board with wasm for related reasons, or they might take longer to conclude it'll be OK, or work out whatever efforts make them feel good about it.
Or they might just lurve their walled garden a lot.
wasm may well be much more efficient and less resource intensive than JS. Seems likely enough, given the analogue of ASM on most machines.
But, then again, most people don't write ASM. They target it. So in practice, the performance of apps taking advantage of wasm will have a lot to do with compilers and the culture/choices developers bring to writing the software targeted at the browser-as-runtime.
They're concerned about not getting their 30% monopoly share of app store purchases if people use the web instead of native, not to mention the lock-in effect of native apps.
Apple is the new Microsoft (but worse than MS ever was).
Just like with the very late and crippled intro of WebGL support in iOS Safari, they are afraid that WASM apps, and especially games written for WASM (e.g. Rust has a WASM target now), will compete with the native apps/games, taking a chunk of App Store revenue with them.
I agree that Webkit is conspicuously absent here, but at least the announcement does mention Apple: "designed by collaborators from Google, Mozilla, Microsoft, Apple, and the W3C WebAssembly Community Group".
> compatible and stable implementations of WebAssembly behind a flag on trunk in V8 and SpiderMonkey, in development builds of Chakra, and in progress in JavaScriptCore
Getting to 100% on ES6 was pleasant, but still a surprise given their track record not just with JS support but with some CSS features too. They're often stragglers, even more so now that Microsoft's browser is getting better.
If you want to build a mobile browser that works on both iOS and Android, WebKit is the only game in town. That's because Apple doesn't allow other web browser engines to be used. If Gecko and Blink (and the upcoming Servo) were given a fairer playing field, perhaps the mobile browser market would be less tied into WebKit.
Apple's "app revenue" is slightly in the black but really is designed to break even with the costs of running the App Store (trivially verifiable from profit reports and other extensive analyses, including comments from Tim Cook on earnings calls). Apple makes money on hardware, and to a much, much lesser extent on media: apps exist to make their hardware enticing, nothing more. So maybe "if I were Apple I would be very worried about web pages undermining the platform lock-in achieved through my leading native app platform", but poking at revenues just repeats a falsehood (which, as was pointed out in the article here yesterday, is how false information becomes ingrained).
> is designed to break even with costs to running the App Store
They've paid out 50 billion USD to app developers since the beginning of the app stores (1).
Since they take a 30% cut, are you saying it has cost them around 21 billion USD to run the App Store?
So they've made $3bn per year from the App Store (averaged). There are certainly development costs, personnel, bandwidth, etc.
Apple suggested last week they expect around a 40% profit margin, and they sold 45m phones last quarter. Assume an ASP of $500; that's $200/phone profit. Multiply it out and you get $9bn in the quarter (not their strongest; they expect winter to go better).
But that's a quarter. So take a quarter of 3bn and you get $750 million, which is only 8.3% of the phone profit.
They make some money on the App Store, but it's not like it's 30% of their business.
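Spelling out the back-of-the-envelope math above (all inputs are the thread's rough figures, not official numbers):

```javascript
// All inputs are the rough figures quoted in this thread, not official data.
const paidToDevs = 50e9;                          // $50B paid out to developers (their 70%)
const appleCutLifetime = paidToDevs / 0.7 * 0.3;  // Apple's 30% cut: ~$21.4B, lifetime
const appStorePerYear = 3e9;                      // ~$3B/yr averaged over the store's life
const phoneProfitQuarter = 45e6 * 500 * 0.4;      // 45M phones * $500 ASP * 40% margin = $9B
const appStoreQuarter = appStorePerYear / 4;      // $750M per quarter

// App Store gross as a share of one quarter's phone profit:
console.log((appStoreQuarter / phoneProfitQuarter * 100).toFixed(1) + "%"); // "8.3%"
```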
Hello from the Chrome/V8 side of this announcement!
We'd like to say that we're incredibly excited to keep moving WebAssembly forward, and that's in large part due to the amazingly collaborative model that all of the vendors have put together through interpersonal relationships and shared vision.
Please scrutinize and comment on the design, bang on the tools, and give us feedback! Maybe try writing a codegen or tinkering with the existing tools. Try porting an app or a game. If all goes well, this is what the vendors have agreed to ship at the beginning of next year, so we want to be absolutely sure that it's something solid that others can build on top of.
This is also not the end of the evolution for WebAssembly, since there is a pipeline of features planned that go beyond the MVP (minimum viable product), well into next year and after. The web and the working group are the places to experiment with and perfect those details going forward.
It's something we've discussed on various occasions, usually after in-person meetings. So far it has not been the highest priority additional feature, since one can always FFI to JS in the mean time, so that gives us some breathing room to see how usage patterns develop before committing to a potentially large new API surface. A step along that path is just the ability to refer to JS and other objects directly through opaque typed references, and that is more likely to happen sooner rather than later.
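In the meantime, the "FFI to JS" direction mentioned above already works by passing JS functions into a module through the imports object. A hedged sketch — the module below is hand-assembled for brevity (a real toolchain emits these bytes), and the names `env`, `log`, and `run` are made up for the example:

```javascript
// Hand-assembled wasm module: imports env.log(i32), exports run() which
// calls log(42). Byte layout follows the MVP binary format.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,             // "\0asm" magic + version 1
  0x01, 0x08, 0x02, 0x60, 0x01, 0x7f, 0x00, 0x60, 0x00, 0x00, // types: (i32)->(), ()->()
  0x02, 0x0b, 0x01, 0x03, 0x65, 0x6e, 0x76,                   // import module "env"
  0x03, 0x6c, 0x6f, 0x67, 0x00, 0x00,                         //   field "log", func, type 0
  0x03, 0x02, 0x01, 0x01,                                     // one local function, type 1
  0x07, 0x07, 0x01, 0x03, 0x72, 0x75, 0x6e, 0x00, 0x01,       // export "run" (func index 1)
  0x0a, 0x08, 0x01, 0x06, 0x00, 0x41, 0x2a, 0x10, 0x00, 0x0b  // body: i32.const 42, call 0, end
]);

let received = null;
const imports = { env: { log: (v) => { received = v; } } };  // plain JS closure as the FFI target
const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes), imports);
instance.exports.run();
console.log(received); // 42
```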
Oh no, please. There are some truly awful logos in there. There is nothing wrong with not being good at creating a logo so please just let someone who is good at it do it.
The HTML5-style shields are the most practical options. They fit with other web technologies, and they don't try to obscure the name of the project like the logos in the first comment (Pointing arrows and "assembled" people? Really?).
Also, the "negative space" logo is incredibly slick, and I'd love to see it used on some project even though the HTML5-style shields are more practical for WASM.
Fogaccio's binary-inspired WA ligature looks really professional, too.
Yeah the Fogaccio one is one of the better ones. It also has the domain-specific benefit of working great as ASCII art. That's not usually a requirement of logos but it's a nice benefit here.
There's a couple that are nice. A good logo doesn't just come from nowhere though. There's a lot of work that goes into all the really good logos you see.
LOL, for how smart we like to think we are, we are incredibly stupid people. Who doesn't know about bikeshedding by now? Yet we still do it.
Logos should never be created by the community. The community should always appoint a logo dictator and either accept their selection or overthrow them and appoint a new dictator.
That's the thing about bikeshedding - it's easy to learn what it is. It's hard to actually stop contributing to it yourself, and often even harder to lead a team out of doing it.
- Why did they choose a stack-based VM, rather than a register-based one?
- I see the docs mention a Float128 type; is this a real possibility? What is their opinion on having Float128?
- There doesn't seem to be any support for an ADC instruction ("add with carry"), which would be very useful for implementing multi-precision numeric types. Are there plans to support ADC and the like or not? How would one implement, say, a BigInt with WebAssembly?
- Maybe I misunderstood, but when adding two integers results in an overflow, does it trigger a "trap"? A lot of the time (e.g. modular arithmetic) one wants fast "wrap around" (i.e. modulo 2^INT_SIZE) in integer types. Is this behaviour (of C) going to stay in WebAssembly?
Signed integers in C don't wrap around. That's undefined behavior. Unsigned integers behave modulo 2^n, so if you want wrap around in C you should be using unsigned integers. Any C compiler to WebAssembly would have to implement unsigned behavior regardless of the semantics of WebAssembly's native unsigned operations.
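For what it's worth, wasm's plain i32.add is the wrapping kind: it doesn't trap on overflow, it wraps modulo 2^32, which is the same two's-complement behaviour JS's `| 0` coercion gives you. A quick sketch:

```javascript
// wasm's i32.add wraps modulo 2^32 rather than trapping; JS's |0
// coercion models the same two's-complement int32 behaviour.
const max = 0x7fffffff;         // INT32_MAX
const wrapped = (max + 1) | 0;  // wraps around to INT32_MIN, no trap
console.log(wrapped);           // -2147483648
```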
For information on the semantics of WebAssembly arithmetic, see
1. You will not see obfuscated (a.k.a. uglified) JavaScript anymore. Everything will be compiled into bytecode.
2. But don't worry, you will see a bunch of webDisAssembly tools.
3. You will not see blog posts "How bad is that JavaScript" anymore.
4. But do expect to see "How bad is WebAssembly" though.
5. "Vanilla JavaScript" camp may have "Vanilla WebAssembly" banners on some tents.
6. You will not see any more of those ugly attempts to write a Ray Tracer in JS.
7. Each framework will finally have its own programming language. The most advanced of them will change language on a monthly basis, like Angular35 using Rust build 3234.23.400. Moving asymptotically close to the Web Holy Grail.
8. The browser API will be decomposed to a bare-bones state,
8.1 you will see immediate-mode drawing, a la the WM_PAINT style. Finally.
8.2 you will see CSS extendable by custom layout managers and properties.
8.3 the CSS modular system will reach its eternal ideal. Instead of them writing specs for us, each of our sites will have its own CSS modules.
In the end, each team will write their own browser - FF, GC, IE will just provide the WebAssembly loading means. Everything else will be loadable.
So in the end you get the good old Java Applet idea, with the User Agent exposing pure AWT-like primitives and the browser acting as the ClassLoader.
As someone with a better grasp on this, care to comment on a wasm-based future and the idea of a browser essentially turning into a network-driven hypervisor?
I realize the initial wasm spec doesn't seem to be aiming at that level of complexity, but I'm curious if/why repeated extensions of it won't move us in that direction.
The original vision of the browser was a cross-platform vehicle for delivering applications over the network. It's not just an idea, it was always the explicit goal and WASM is the last piece of the puzzle.
Java was supposed to be key to that goal, but it didn't match the capabilities of the network and processing speed of the time. As painful as HTML, CSS, and JavaScript are to use, they actually worked.
Compile-to-JS, NaCl, and asm.js culminated in a realistic blueprint for how to build a low-level network-oriented runtime. Combined with increases in network speed (the average webpage is now the size of the original DOOM), WASM is finally doable.
On the plus side, holy shit we can do (almost) anything! WASM will also mean outside money pumped into improving the browser runtime, such as Ethereum's migration to WASM.
Some of the parent comment is tongue-in-cheek, but you can already see one downside: fragmentation. Any good API gets screwed up in committee, so we end up with the most bare-bones and verbose implementation possible (WebComponents, IndexedDB, etc). Since they are so painful to use we end up importing 3rd party libraries which entails incompatibility. The content itself also becomes more opaque, as we can't just parse HTML to figure out the contents of the page.
"network-driven hypervisor" is what we have already with modern browsers. WebSockets, WebWorkers, WebRTC ... all that.
Personally I see WASM as a potential way to speed up technology evolution and acceptance.
Like, from the CSS 1.0 era, authors were struggling with flexible layouts on HTML pages. In 2008, work on the flexbox module started in the CSS working group at W3C. And in 2015 we finally got something. Seven years, and very far from ideal at the end, to be honest. If CSS had an open architecture, with something like WASM in place plus an ability to hook that almost-native code into rendering-tree creation and the composition process... we would have significantly better results, significantly faster.
That's just an example. Maybe that's too optimistic, of course. In any case, if the web browser that we all rely upon were an open platform/technology, it would be better for all of us. Can WASM be the way there? I wish to hope so.
If in the same HTML document, then "yes" - conceptually it will work faster if, instead of a ray tracer in JS, you call a native function that does it.
As far as I understand, the main goal of WASM is to have bytecode that is 1:1 mappable to current CPU architectures. JS source is quite hard to JIT due to the typeless nature of JS (to name just one of the problems).
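To make the "typeless JS is hard to JIT" point concrete: asm.js attacked it with coercion annotations, and wasm encodes those same types directly in its instructions. A rough illustration (the function names are mine):

```javascript
// Plain JS: the engine must be ready for x and y to be numbers,
// strings, objects... so the JIT needs guards and deopt paths.
function addDynamic(x, y) { return x + y; }

// asm.js-style: |0 coercions pin everything to int32, which is the
// kind of type information wasm carries natively in its bytecode.
function addInt32(x, y) {
  x = x | 0;           // coerce to int32
  y = y | 0;
  return (x + y) | 0;  // int32 add, wraps modulo 2^32
}

console.log(addDynamic(1, "2")); // "12" - string concatenation sneaks in
console.log(addInt32(1, "2"));   // 3    - "2" is coerced to the int32 2 first
```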
This http://sciter.com/htmlcss-ui-in-medicine/ is an example of such a hybrid application that uses HTML/CSS/script with native code (C++) responsible for low level data processing and filtering. In some cases native code there is generating image fragments for rendering. That application uses http://sciter.com engine where it is possible to mix native code with HTML/CSS/script.
Best-case scenario: in 5-10 years you can use any language you want for building apps on the front-end, just like you currently can for the back end.
Existing, widely-used C libraries can be used in front-end applications if desired, and libraries in varying languages can all seamlessly interact with both each other and the DOM.
Not sure what the integration with JS looks like in Emscripten. Scala.js is fabulous, with typed facades for the DOM, easy interaction with dynamic JS types, etc.
1. WebAssembly isn't an assembly format in the traditional sense, it's a compressed AST.
2. IIRC the plan is to develop a human-readable form of WASM that can be viewed directly in the browser, so basic debugging can be done right there.
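For the curious, the binary format is simple enough that a toy module fits in a few dozen bytes. This sketch hand-assembles a module exporting an `add` function and runs it through the JS API; normally a toolchain like Emscripten emits these bytes for you:

```javascript
// A minimal, hand-assembled wasm module exporting add(a, b) -> a + b.
// Byte layout follows the MVP binary format (sections: type, function,
// export, code); in practice a compiler produces this, not a human.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // "\0asm" magic + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // one function, of type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export it as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, one body, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b                    // get_local 0, get_local 1, i32.add, end
]);

const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes));
console.log(instance.exports.add(2, 3)); // 5
```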
I really wish Adobe would write a Flash runtime in WebAssembly. There are tens of thousands of great Flash games that are unplayable in modern browsers... Sure, you could rewrite them in js, but no one is going to.
Three years ago, I had an idea to write my own Flash Player in Javascript, but I was too lazy to start working on it.
Today, we have PDF.js, emulators of gaming consoles, or even emulators of a whole x86 computer, which can run Windows OS.
I believe that an "SWF interpreter" in Javascript can run pretty fast. The specification is freely available online.
The only problem I see is that, in Flash, you can create TCP/UDP sockets, which is not possible in a web environment.
There are some other things you can't do from the browser in addition to sockets. Full screen, camera, and microphone. But most games don't use that stuff.
This is not true; all of the things you mentioned are possible in modern browsers and are being standardized. However, some of them require permission from the user.
As long as it does the antialiasing correctly, then yeah! But I've seen .swf->.js stuff which has the telltale garish seams between adjacent vector objects, showing it's just deferring to the broken AA use of the immediate-mode canvas renderer.
Does anybody know how much of the Flash plugin AA runs on CPU vs GPU? Obviously the older versions were CPU-only, though I'm not sure how much of actual Flash object rendering has moved to the GPU in recent years, especially as a ton of it is bezier based. If it is still fully CPU-based, with GPU just doing blits & blends, then wasm should allow fast enough use of a proper .swf AA renderer without going through canvas's forgetful AA.
Most of new games built with Flash are using Stage3D which runs on the GPU. These games however don't really rely much on the vector capabilities and animation timeline, which still uses the CPU. Adobe could probably make a vector renderer run on the GPU though. Some people have made vector animation renderers as a proof of concept.
Right, the polished Stage3D stuff would just go through to WebGL. It's the vector stuff I'm concerned about. There's a lot of great, fun little Newgrounds-style animations & games made with that level of graphics.
There's still a pretty big gap between parsing and actually rendering. I don't think that spec contains anywhere near enough information to play animations the same way that Flash Player does.
Adobe should do it because lots of people still use Adobe Air and Animate CC (previously called Flash Professional) to build games. If they could make the vector animation capabilities fast with WebAssembly it would bring back a set of extremely fast and flexible workflows for building games. There presently seem to be a lack of viable workflows for building games with vector animations, which is where Flash's strengths have always been.
I have made several Flash games in the past, and I really wanted them to run in a web environment (i.e. without a dependency on any plugin).
That is why I made IvanK.js (http://lib.ivank.net). It is a library for doing graphics (including rewritable text fields) and mouse and keyboard events in a "Flash way". I was able to rewrite my Flash games to JS in a couple of hours; it was just a "mechanical" rewriting of ActionScript 3 to Javascript. If you want to port your Flash projects to the web, it may be extremely useful to you :)
I haven't really looked into the implementation, but won't this be yet another possible way of getting out of the V8 sandbox and executing code, and thus exploitable? I recall exploits using typed arrays for code execution before, because they can contain raw assembly in a very low-level style (e.g. like a plain char array in C++)...
iOS's security concerns about JITs aren't general-purpose. They only apply in the weird security model that Apple has set up for themselves.
The fundamental security challenge of iOS is that there exist things called "private APIs" - Apple-authored code that lives in the address space of a developer's application that the developer is not permitted to call directly. It exists so that "public APIs," a different set of Apple-authored code that is technically indistinguishable from private APIs (mapped in the same way, with the same memory protection, etc.), can use the private APIs in their implementation, and so that developers can call public APIs in turn.
In order to enforce this, Apple does static analysis and manual inspection of binaries uploaded to the App Store, and signs every page of executable code that can execute on an iOS device. (It's not very good static analysis. I've had an app rejected because it used a third-party framework that happened to share a symbol name with a private API, and I've had an app accepted and shipped that, at runtime, disassembled the code of a public API to find the offset of a private API, and did questionable things to the private API's static local variables.)
The ability of the app to generate code at runtime would completely defeat Apple's ability to do static analysis or manual review worth anything. It could, for instance, download a function from the developer's website and then execute it. For this reason, Apple has technical means to prevent you from running code that it did not sign. As a side effect, this prevents you from implementing a JIT.
The attacks described in that Black Hat presentation you link are about gaining "arbitrary code execution" with the permissions of the app, no more. In any other OS design, this isn't an exploit, this is how apps work. If you download a Windows .exe and run it, the .exe gets to supply arbitrary native code. If you download an Android .apk and run it, the .apk gets to supply arbitrary native code. If you go to a website, the website gets to supply arbitrary JavaScript (though not native code). Nobody is reviewing that JavaScript to figure out what it does; the JavaScript simply executes on a platform that restricts the abilities of all possible JS. That's how application distribution works.
Apple has chosen this very different model on iOS where apps are reviewed in advance to limit what code can run; under that model, being able to run arbitrary code is an exploit. But that doesn't mean that JITs, which necessarily require being able to run arbitrary code, are a security hole under any other model.
This is incorrect. I've talked to Apple security people, and interned there a few years ago. They're not that dumb; they know that it's trivial to get around the private API check, and private APIs are never intentionally used to enforce a security boundary (that's what XPC is for). Apple doesn't want you calling private APIs for a variety of reasons, but those reasons aren't connected either with security or with the JIT prohibition.
It's perfectly permissible under App Store rules to use an interpreter which is functionally equivalent to a JIT in its ability to obfuscate code, or invoke or manipulate private APIs (e.g. can read/write arbitrary addresses, run dlsym, whatever). You aren't allowed to download bytecode for that interpreter from the internet, but again, that's a policy restriction, not a technical one.
Yeah, I think by calling this a "security model" I was simultaneously too generous and too insulting. :) There is definitely a team of smart people at Apple that design the OS / kernel to be secure against apps that are allowed to run arbitrary code; this is where stuff like kernel sandboxing comes from. There is also a team at Apple that tries to prevent apps from running arbitrary code, and does an okay job of it. (I assume this team is also populated by smart people, and they know that they're not actually implementing a foolproof security restriction, they're implementing a policy restriction.) The ban on JITs, as I understand it, comes from the second team. There's no actual threat to iOS device security from allowing apps to run custom, arbitrary native code, right?
> There's no actual threat to iOS device security from allowing apps to run custom, arbitrary native code, right?
An app might have a buffer overflow bug. Allowing all or some of the writable memory to be executable makes it easier for an attacker to exploit such a bug. (An attacker can still exploit the bug without arbitrary native code execution, but it's harder.)
The apps are still protected from one another, so this restriction mainly protects app developers from their own mistakes.
Yes, I know this is part of the motivation, though I'm not aware how much other factors were involved. Don't want to sound like I know more than I do...
Its usefulness can be debated, since ROP and other techniques are powerful enough for a motivated attacker to go a long way without actually executing code, but it's not fundamentally invalid as an exploit mitigation technique. In fact, PaX implements something similar:
Of course, Apple evidently thinks the speed benefits of JIT are important enough for Safari to be worth letting the browser process not just remap RW to RX but actually map RWX pages - despite the browser process probably being the biggest target of all for exploits. I personally think iOS should allow apps to opt in to doing the same, but that's just my opinion.
Wait, you get RWX pages? Is that new-ish? Last I was paying attention to this (around iOS 7 and 8), the kernel support for Safari's JIT gave it a single RW page at a random address, that you could flip to X (not even RW) when done. What's the use of concurrently modifying executable pages?
A middle ground is to allow memory to be either writable or executable at any given time, but not both (W^X). At least Firefox's JIT already implements this.
Yeah, iOS's JIT does the same thing, I believe (it JITs a page and then turns it execute-only). But it's not completely foolproof since a sufficiently lucky buffer overflow can, in theory, overwrite the JIT page while it is being JITed, and then wait for it to be executed.
W^X, which is a really good+necessary idea, allows ROP. Still I think it's a necessary part of Defense in Depth for JITs which then necessarily leads to some bloat.
Ah yes, within the browser space the various companies are just putting WebAssembly in/alongside the existing JavaScript engines. WAVM was just an example of a whole new VM that isn't for a browser but does run WebAssembly.
Yeah; I probably should've been more explicit in my original comment: there are definitely new VMs being written for WASM, but none are currently expected to go into browsers, hence concerns about sandbox escapes from browsers aren't affected by them.
I thought WebAssembly compiled to JavaScript, but it doesn't look like that's so. It looks like a binary format. I think that's great if so. Finally we can expand beyond JavaScript in front-end development.
Yes, you're right. This could allow avoiding JS in the browser altogether one day.
As a hardcore JavaScript developer I welcome this tech for a couple of reasons. For one thing, it's undeniably a significant step forward for the Web Platform as a whole. For another, now people can struggle with the peculiarities of said Web Platform in their own languages. The legitimate grievances associated with JS over the years were due to the browser APIs, first of all the DOM, which wasn't designed for JS to begin with and later just accumulated problems for the sake of backwards compatibility.
I don't think that will be feasible in the near future, because in order to run another language in the browser (say, Python) through wasm, you will need to bundle the entire runtime with your app, which is an unacceptable amount of overhead for most apps.
I don't think anyone is suggesting that people do this. The idea is to save time by compiling JavaScript ahead of time and shipping the results of the compilation, which I think I understand to be a binary version of an AST.
IIRC one of the bigger reasons behind wasm is to cut out the parsing time for JS (which is currently a pretty big bottleneck to startup time and memory, especially on mobile devices)
How would you draw the line on which languages/runtimes deserve to get bundled/cached? You could make the same argument for JavaScript libraries—shouldn't web browsers just bundle jQuery?—but there's ultimately no good way to do that.
I feel like this shouldn't be the end of the world. If everyone uses the same CDN for a given language's runtime (with subresource-integrity to ensure security), even a 1MB download should be... tolerable. Not great, but tolerable. Load time is also an issue, but WebAssembly is designed to greatly improve initial load time compared to asm.js.
That has been tried and failed with JavaScript libraries in the past.
Even if you can get everyone to agree on one canonical CDN (you can't), you also need to get everyone to agree on one version of the file at the CDN.
Plus cache sizes on most platforms are laughably small (a heavy page can completely blow out your cache on mobile devices).
IIRC there was a "study" a while back that found that by using the most common CDN at the time to host jQuery, you only got something like a 5% cache hit rate. This was because of multiple CDNs, multiple versions, multiple ways to reference the version (1.2.3 vs 'latest'), and HTTP vs HTTPS.
In my past experience, we had more trouble with people blocking our CDN via corporate networks or something.
One of the problems with JS is the unbelievable amount of churn. It's not uncommon to see libraries with version half-lives measured in days or weeks.
The firewall issue is always going to be a problem on corporate networks. I don't think we're ever realistically going to get away from having to self-serve dependencies if you want to ensure that things are going to work, particularly if it's software that is deployed in on-premise, internal servers.
But this isn't a js churn problem. Even if updates only happen every 6 weeks, that means that unless the majority of devs update their links every 6 weeks the distribution of links out there will be spread out enough to be nearly worthless.
And while updates every 6 weeks might seem crazy to you, unless we just don't version the links (which seems like a terrible idea), even simple bugfixes blow up the whole system.
Plus CDN-hosted scripts come with a bunch of other downsides: no HTTP/2 push, tree shaking and bundling don't work, you can't compile with your own settings, and more.
CDNs aren't the solution here, something like using service worker and/or an "install" process for web apps will solve all of these and more.
I wonder why browsers don't use subresource integrity to share cached items across origins. Maybe they do? If hash collisions are a problem, they could limit it to certain hash algorithms.
IIRC there has been a little bit of work in this area, but there are a lot of downsides.
First of all, it's dangerous because you could "probe" the user's cache cross-domain by offering files with a specific checksum and seeing if they take it or not. Letting evil-example.com figure out if you have visited pornhub recently by offering up their javascript file with the subresource hash attached and seeing if you download it is a massive privacy violation and can cause issues much worse than just knowing you were on pornhub.
Second, cache sizes are already too small, and while a content-addressable cache might help with that a bit, it won't really change all that much. You'll still have 100 versions of jQuery out in the wild at any time; you'll still have 10,000 different versions of React bundled with other things. You'll still have the version with a UTF-8 BOM and one without, or one with \n and one with \r\n.
Finally, (and this one is just my opinion) it's a solution that encourages worse behavior. It's going to be easier to include that 300kb of jquery when you think that your users will have it cached already. And that just "loosens the belt" around an area where we should be cutting back. Now users that are arriving for the first time will get a significantly worse experience, and that starts to go against a fundamental strength of the web, that you can get the same experience on any device, anywhere, any time, whether it's your computer, your cousin's desktop, your friend's phone, or your damn car. Making devices that don't have that in their cache download a massive "basically binary" blob before they can use it is against what the web is, and content-based caching really encourages the behavior of including entire libraries (so they'll be cached) instead of compiling down only the code you need.
While most of this can be mitigated, this just isn't something that's sorely needed. There is an MDN document floating around somewhere that they were talking about something like this, but I can't seem to find it right now.
I'm currently working on a large-ish code base designed primarily to run via Emscripten. Current optimized .js size is 3MB. While the WASM size is noticeably smaller (<2MB), startup time is actually worse due to browsers compiling everything ahead of time.
> startup time is actually worse due to browsers compiling everything ahead of time.
On the upside, with WASM browsers can potentially save a snapshot of the compiled code to speed up re-runs. I recall the idea was promoted back when Dart was expecting to get its own VM, and one of the things it would be capable of was this kind of snapshotting.
I would really like to know specifics on your criticism of DOM. I've seen a lot of people complain about DOM over the years, but I've never heard anything specific.
Where should I begin? I've been building for the web for 10 years and the horrors I've seen... specifics... Well, before I go criticizing, I want to say that the web is in a far better place today than it was 10 or even 5 years ago, and nowadays I'm an ardent VanillaJS user who doesn't use jQuery or React/Angular (I do use a minimalistic in-house MVC with Incremental DOM though).
Back to the question:
Traversal: Historically we had 4 methods to get an element: by its id, name, tag, and class; yet there was no general method to get an element by the value of its attributes until the Selectors API brought querySelector. IMO it was one of the most important things that pushed the community towards jQuery back in the day.
Manipulation: Up until recently one couldn't remove an element without referencing its parent; that is, you had to use removeChild, or replaceChild, or innerHTML, or other methods of the parent. Now they've added .remove(), which is only supported in the evergreen browsers. Replacing or inserting nodes operates on the same principles and is a hell of its own that makes people opt for re-generating nodes instead of shuffling the existing ones.
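To make the contrast concrete, here's a minimal mock of the two removal styles. MockNode is made up for illustration; the real Node/ChildNode APIs behave analogously:

```javascript
// Mock illustrating parent.removeChild(node) vs. the newer node.remove().
class MockNode {
  constructor(name) {
    this.name = name;
    this.parentNode = null;
    this.childNodes = [];
  }
  appendChild(child) {
    child.parentNode = this;
    this.childNodes.push(child);
    return child;
  }
  // Classic API: removal is a method of the *parent*.
  removeChild(child) {
    this.childNodes = this.childNodes.filter(c => c !== child);
    child.parentNode = null;
    return child;
  }
  // Modern ChildNode.remove(): just sugar over the classic call.
  remove() {
    if (this.parentNode) this.parentNode.removeChild(this);
  }
}

const list = new MockNode('ul');
const item = list.appendChild(new MockNode('li'));
item.remove();                       // no reference to the parent needed
console.log(list.childNodes.length); // 0
```

The point is how thin the sugar is: .remove() is one line over removeChild, yet it took until the evergreen browsers to get it.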
Generation: For years now we have been relying on the non-standard innerHTML for element generation, because the only way the DOM allows you to do it is to create each element with createElement and then set attributes one by one. It's a ridiculously low-level API considering our use cases. Hence, we got the whole templates/JSX movement today.
Modularity: Ain't there. In order to truly "componentize" our apps we have to wait for something like Shadow DOM, because there is no way to guarantee that one part of the page won't affect another in today's DOM.
All in all, the DOM wasn't meant for what we are trying to use it for today, and for a long time it has been playing catch-up, with browsers implementing ad-hoc solutions to new problems.
A lot of what you're saying is about the history of DOM. You even admit that some of your complaints don't hold anymore, so I don't understand what is the point of holding on to them. And I don't think it follows that just because SGML wasn't designed for SPAs in 1986 that the browser infrastructure we have today is not good for applications. Von Neumann architecture computers weren't designed for playing MP3s, but somehow they manage to do it pretty well.
Because I look at all the GUI toolkits out there (and in the last twenty years I've used a LOT of different ones), and none of them are idealistically "good". I don't see anything particularly special about other toolkits that makes the DOM look particularly bad. In fact, I tend to think of the DOM as being pretty good in comparison, if only for the fact that it works on everything.
"createElement and then set attributes one by one" is no different than any other toolkit. They pretty much all have a simple, static editor syntax for doing bulk element creation and attribute setting, and then a verbose API for dynamic element creation. It's low level because your use case is not the only use case it needs to support. I've seen--and built--a bunch of systems that have tried to simplify it and it always loses something in translation.
So maybe your complaint isn't with the DOM. Maybe you just don't like user interface programming in general.
> A lot of what you're saying is about the history of DOM.
That's because my whole point was to elaborate on the statement I made in the first comment:
>The legitimate grievances associated with JS over the years were due to the browser APIs and first of all DOM,
People who were complaining about web dev and JS over the years were mostly doing it because of the browser APIs and DOM, hence my recollection of recent history. And of course I do point out what was fixed, and I said from the start that it's far better now than it was before. I'm rooting for VanillaJS after all.
I cannot comment on other GUI toolkits since I don't work with them. You may be right that they too are low level, and I wholeheartedly hope you're right about the DOM being better, because I hope the DOM with the Web Platform replaces them all one day. That doesn't exempt the DOM from criticism though.
Technically the DOM still doesn't have an API for bulk element creation, since innerHTML/insertAdjacentHTML comes as a separate API which is still a working draft. But that's just details.
There is another problem somewhat related to the DOM being low level. Although it has to do with the browser implementations rather than the standard, it's still a problem for the end user, and the user is going to blame it on the web/JS being bad in general. That is performance. As it was mentioned here, the DOM is separated from the JS engine in browsers. Calls to the DOM from JS are a lot more expensive than calls inside the engine. This gave rise to the whole Virtual DOM movement we see taking over web dev. It's not just the verbosity of the low-level DOM that pushes people towards React and such, but the fact that at the end of the day their approach of mimicking the DOM in the engine and limiting the DOM calls turns out to be more performant than the low-level manipulations we do directly on the DOM.
> Calls to DOM from JS are a lot more expensive than calls inside the engine.
Sort of. Calls to DOM from JS are a few dozen machine instructions for the call itself in modern browsers; a little more if lots of arguments are involved. The slowness is what the DOM implementation actually has to _do_ as a result of the call. If you write a loop in which you repeatedly modify styles and then ask for geometry information, then the only options an implementation has are to provide stale geometry information (the virtual DOM approach!) or end up with that loop being a lot slower than asking for all the geometry information up front and then doing your modifications.
We can have a useful discussion about whether it should be possible to ask for stale geometry information, but that has nothing to do with calls from JS into the DOM per se; a DOM implemented in JS but exposing the same API as the current DOM would have _exactly_ the same problem in that regard.
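To illustrate the interleaving cost without a browser, here's a self-contained sketch using a fake element that counts forced layout recalculations. The class and its counters are made up; real browsers behave analogously with style writes and offsetWidth reads:

```javascript
// Fake element that lazily recomputes "layout" and counts forced reflows.
class FakeElement {
  constructor() { this.width = 100; this.dirty = false; this.reflows = 0; }
  setStyleWidth(w) { this.width = w; this.dirty = true; }   // DOM write
  get offsetWidth() {                                        // DOM read
    if (this.dirty) { this.reflows++; this.dirty = false; }  // forced reflow
    return this.width;
  }
}

const el = new FakeElement();

// Interleaved write/read: every read forces a fresh layout.
for (let i = 0; i < 5; i++) {
  el.setStyleWidth(100 + i);
  el.offsetWidth;
}
console.log(el.reflows); // 5

// Batched: do all the writes first, then read once.
el.reflows = 0;
for (let i = 0; i < 5; i++) el.setStyleWidth(200 + i);
el.offsetWidth;
console.log(el.reflows); // 1
```

Same number of writes and reads per loop body either way; only the ordering changes the cost.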
Yes, indeed, DOM is doing a lot of heavy lifting and the cost of the call itself might be small in comparison. And you're raising a valid point with geometry changes which are indeed a terrible thing to do in a loop due to recalculations/reflows they cause.
What got me first thinking about the cost of DOM calls were benchmarks we did back when we were building polyfills for IE6-IE8 to support new HTML5 APIs. One example in particular: mimicking data attributes. Our lib would hold a mapping between DOM elements and attached data attributes (as simple objects), and it would be significantly faster than using native element.dataset. While dataset isn't as simple as a plain JS object, it's not much more complicated either, and to my knowledge changing it doesn't cause reflows or repaints, so I expected it to be slower but not by much; hence, I concluded that the additional speed came from avoiding the DOM call itself. Since then I've been cautious about the DOM and tried to cache whatever came from it and was supposed to be used repeatedly.
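The approach amounts to a side table keyed by the element, so reads and writes stay plain JS property accesses. A modern sketch of the idea (WeakMap didn't exist in the IE6-IE8 era, so this is illustrative, not the actual lib; names are made up):

```javascript
// Side table: per-element data lives in plain JS objects, keyed by the
// element itself, so no DOM call happens on get/set.
const dataStore = new WeakMap();

function getData(el) {
  let data = dataStore.get(el);
  if (!data) { data = {}; dataStore.set(el, data); }
  return data;
}

// Works with any object standing in for a DOM element:
const element = {};            // stand-in for a DOM element
getData(element).userId = 42;  // plain property write, no DOM boundary crossed
console.log(getData(element).userId); // 42
```

A WeakMap also lets the data die with the element, which a plain object keyed by id wouldn't.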
Ah, element.dataset is an interesting case. It's a _lot_ slower than normal DOM calls, because it needs to map a different underlying data structure: the data in the dataset needs to be reflected in the element attributes. And the set of names it exposes is not fixed.
So you have to implement it as a Proxy, so it can capture arbitrary property assignments, including for properties it doesn't have yet, and do the corresponding setAttribute calls. Unfortunately, once you're a Proxy your gets end up somewhat slow too. Partly this is because JITs haven't optimized proxies that much, and partly it's because they're rather hard to optimize in the best of circumstances.
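A rough sketch of what such a handler has to do (the mock element and helper names are made up; the real DOMStringMap also handles name validation, deletion, enumeration, and more):

```javascript
// Proxy that reflects property access into data-* attributes, roughly
// what dataset has to do under the hood.
function makeDataset(el) {
  const toAttr = prop =>
    'data-' + String(prop).replace(/[A-Z]/g, c => '-' + c.toLowerCase());
  return new Proxy({}, {
    get(_, prop) {
      const value = el.getAttribute(toAttr(prop));
      return value === null ? undefined : value;
    },
    set(_, prop, value) {
      el.setAttribute(toAttr(prop), String(value));
      return true;
    },
  });
}

// Minimal mock element backed by an attribute map:
const attrs = new Map();
const mockEl = {
  getAttribute: n => (attrs.has(n) ? attrs.get(n) : null),
  setAttribute: (n, v) => { attrs.set(n, v); },
};

const dataset = makeDataset(mockEl);
dataset.userId = 7;
console.log(mockEl.getAttribute('data-user-id')); // "7"
console.log(dataset.userId);                      // "7"
```

Every get and set goes through the trap plus a name conversion plus an attribute lookup, which is why it can't be as fast as a plain object.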
I expect that the actual implementations of dataset are not as fast as they could be if they used scripted and inlinable proxy handlers _and_ the JITs had implementations for those. But there's still a lot more work involved in dataset than a plain object, especially if you have a small number of property names in practice so the plain object doesn't have to convert to dictionary mode or anything like that.... Even if dataset were implemented on top of a pure-JS DOM implementation (which exist), it would be a lot slower than just a simple object, unfortunately.
That explains it, along with the fact that getAttribute('data-*')/setAttribute() seem to be faster in microbenchmarks[1] than getting/setting properties on element.dataset. Thanks for clearing this up, I'd have never guessed that about dataset!
Yeah, a get on a dataset has to do strictly more work than getAttribute: it has to convert the string it has to a "data-whatever" string and then call getAttribute.
But note that in microbenchmarks you are likely to get some confusing effects. For example, take a microbenchmark whose loop body just chains `document` → `getElementById` → `getAttribute` and discards the result; in Firefox:
1) getAttribute is known to be side-effect free when called with a string argument, its return value is not used, the call can be dead-code eliminated.
2) getElementById is known to be side-effect free when called with a string argument, its return value is not used after step 1, the call can be dead-code eliminated.
3) The get of the "document" property is known to be side-effect free, its return value is not used after step 2, the get can be dead-code eliminated.
So in the end the microbenchmark is measuring how fast the browser can increment a loop counter, and that only because we haven't bothered to try dead-code eliminating that. That's why you get numbers in the billions of operations per second range (comparable to the CPU clock speed; always a dead giveaway that your thing got optimized out). ;)
Note that Firefox will also perform loop-hoisting on all of the above if possible, so even if the return value were assigned somewhere that would not matter: the whole thing would just get hoisted out of the loop.
The setAttribute and "dataset.set = stuff" benchmarks don't have these problems, because those operations are clearly not side-effect free. A sufficiently advanced JIT might be able to determine that earlier iteration assignments are dominated by later ones and eliminate them, but now we're talking quite hard work on the part of the browser.
Yes, after watching so many of Vyacheslav Egorov's (from V8) talks on microbenchmarking, I'm convinced that JITs are doing some serious witchcraft unmeasurable by microbenchmarks; yet googling relevant tests on jsperf is a tough habit to shake off. As this example shows.
While we are on the subject, can I ask if there are any performance advantages of using data attributes over custom attributes for storing data in an element? In other words, can adding non-standard attributes to an element cause any deoptimizations? I know that JS engines use hidden classes/object shapes to optimize JS objects, and I assumed something similar might be the case for DOM elements, in which case adding a non-standard attribute would mean deoptimization.
That is incorrect. React is faster than destroying the document and rebuilding it from scratch on every data change. It is slower than making your own stateful edits.
True, but the culprit is still DOM calls being expensive. Virtual DOM minimizes those calls through various optimizations, including not destroying elements unnecessarily as you said, but it's not limited to it.
Consider the following scenario. You have two handlers on an event that both change the DOM and may end up cancelling each other out. With the "raw" DOM you'll end up calling into the DOM at least twice (and changing it twice if no debouncing is used), whereas a virtual DOM both times writes to, well, its "virtual" DOM (which is a lot cheaper), and by the time it gets to its next cycle of updating the "real" DOM it may not need to change anything, or can do it in one pass. These optimizations are hard to implement without resorting to a virtual DOM of one sort or another.
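A toy sketch of that batching effect (all names made up; real virtual DOM implementations diff whole trees, but the cancellation behavior is the same):

```javascript
// Queue "virtual" writes and flush them once; writes that cancel out
// never reach the expensive real-DOM boundary.
class WriteBatcher {
  constructor(applyToRealDom) {
    this.pending = new Map();
    this.applyToRealDom = applyToRealDom;
  }
  set(key, value) {
    if (value === null) this.pending.delete(key); // cancelled out
    else this.pending.set(key, value);            // cheap: plain JS map
  }
  flush() {
    let domCalls = 0;
    for (const [key, value] of this.pending) {
      this.applyToRealDom(key, value);
      domCalls++;
    }
    this.pending.clear();
    return domCalls;
  }
}

const batcher = new WriteBatcher((k, v) => { /* real DOM write goes here */ });
batcher.set('className', 'highlight'); // first handler
batcher.set('className', null);        // second handler cancels it
console.log(batcher.flush());          // 0 -- no real DOM call was needed
```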
> What's the problem with node.parentElement.removeChild(node)?
It's node.parentNode.removeChild(node). And the fact that this is even a mistake that can be made, and is made by people all the time, is part of the problem!
I think having it be entirely separated from JS was a mistake. A lot of the web is Apps not documents and the separation is often much more of a hindrance than a help.
Obviously it can be nice for organization, but that should really fall more on the individual to decide than the spec.
Not everyone is developing SPAs or working with the same constraints that Facebook has (optimized incremental changes to UI). This new trend of making everything in JS, for JS, by JS developers is crippling the web's ability to display dynamic information as a document, and has forced the creation of standards (AMP) to preserve the real legacy of the web.
The DOM is _abstracted_ by React not a subset of React. If you can't see the difference between subset and abstract you are either new on the web development scene or just another die-hard fan of JS development.
You can write any and all DOM in React; that is the definition of a subset. Your ad hominem is pathetic, and I hope the moderators take action on my report.
I think you're thinking of asm.js, which is a subset of JavaScript that can compile to native code directly. WebAssembly is more or less a binary representation of asm.js.
Edit: Looks like WebAssembly is more low level than asm.js, and is actually closer to real assembly, whereas asm.js is more akin to C.
asm.js is a strict subset of JS, designed to be valid JS for preexisting browser engines while at the same time containing what's effectively a lot of primitive-type hinting for an optimized engine.
For a simple example...
i = i|0;
This is valid JS, but a JS engine designed to use asm.js optimizations can look at the bitwise OR with 0 before the script even runs and realize that i must be an integer in that block of code, so it can use more-efficient integer operations instead of float ones.
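Run as plain JS, `|0` just truncates to a 32-bit integer, so the code stays correct even in engines that ignore the hint:

```javascript
// asm.js-style coercion: `x|0` both truncates to int32 at runtime and
// signals "this is an integer" to an asm.js-aware engine.
function add(x, y) {
  x = x | 0;            // engine may now assume x is an int32
  y = y | 0;
  return (x + y) | 0;   // result is kept in int32 range too
}

console.log(add(2, 3));   // 5
console.log(add(2.7, 5)); // 7 -- 2.7 is truncated to 2 before the add
```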
WebAssembly is also akin to C, but it has a more efficient encoding and is not constrained by the set of operations that can be expressed in JavaScript.
I sense some wishful thinking here. WA will supplement JS, not replace it. You'll have a canvas and that's it, though with the JS interface I can see how some people will try to abuse it.
You can get rid of JS's shortcomings for years now with compilation. That's not really the goal here. What I see here is low level access for raw power. Most of the JS apps don't really need it.
You are mixing it up with asm.js. asm.js is essentially a 32-bit VM that is very well suited as a cross-compile target for single-threaded C/C++ code, but to ship something more high-level you would have to bundle it with a respective VM implementation.
AFAIK WASM literally aims to be a "universal web bytecode", and they hope to eventually add multi-threading, GC and DOM access. Of course the exact implementation details might differ from e.g. your local Java runtime vis a vis GC and threading timings and such.
But for example in a distant future each browser's JavaScript runtime could be just an interpreter that internally emits WASM. And there should not be any reason why you could not compile most Java / C# code to WASM once threads and GC are added.
No. GC access is a feature that they may possibly add at some undisclosed point in the future (just like it was with asm.js via Object Types, IIRC), but it's not an explicit design consideration.
Nowhere does it say that it's intended to be a universal bytecode.
At some point in the "future [unicorn icon]", sure. But the design today is around porting C and C++ apps to the web. The feature you're talking about is meant for integrating with the browser. It could allow emulating a variety of VMs, but that's not a promise or stated goal.
Even if it was just C/C++, that would get you all the interpreted languages where the runtime is C/C++, which is a fair chunk. In any case, if it's a good compilation target for C and C++, it will be a suitable, if not always ideal, target for compilers for many other languages.
Someone writing an interpreter or (more likely) a compiler needs to decide whether to output WebAssembly or JavaScript. Since code size matters, JavaScript will make more sense for high-level languages. You can reuse JavaScript strings, objects, the garbage collector, and so on. WebAssembly only makes sense for language like C/C++ that have a minimal runtime that can be shipped in each web page.
Users don't care which one you use, so it won't make sense to switch to WebAssembly until there's a significant performance benefit and the code size issues are solved.
> Since code size matters, JavaScript will make more sense for high-level languages. You can reuse JavaScript strings, objects, the garbage collector, and so on.
Which can be useful, but it can also be bad if your language's model doesn't map perfectly to JS's (this is already an issue with JS-hosted versions of a number of languages, where it introduces incompatibilities with non-JS-hosted implementations).
Unless you go 100% no-js you are still going to need to interact with it, and even if you do completely remove JS in your stack, the DOM APIs are still geared toward JS, and work in ways that make sense to JS.
Eventually it should be, but trying to design a way to integrate multiple runtimes into a single webpage is going to take some work (e.g. imagine a page that imports code originally written in Python and Javascript and Ruby and Go, all with their own garbage collectors (and Go's custom concurrency model!)). The reason that C and C++ and Rust will work from the start is that they don't require invasive runtimes and so allow the browser vendors to punt on this problem until later.
You just need an AOT compiler that targets wasm directly. AOT compilers already exist for Java and C# for various target architectures; this is just another one.
I wonder if ad networks will try to fight back against adblockers using WebAssembly. In theory it should be easier, because you can avoid the DOM, but who knows what the future will bring.
I feel like one important piece that's missing is a standardised debugging API. The introduction of WebAssembly will probably mean that the percentage of people compiling a (completely different) language to something running in the browser will rise. We need solid tooling for that.
IIRC Firefox already implements the Chrome Debugging Protocol (https://developer.chrome.com/devtools/docs/debugger-protocol), which e.g. would mean that it should be possible to use Firefox's debug tools to debug web pages in Chrome (and vice-versa). Would be nice to have some clarification on this though.
I am not following this very closely and was not aware of this (I developed something in Chrome/Safari a few months ago and this bothered me; I needed a custom WebStorm extension). I think the push for wasm would be the perfect occasion to sort this out and provide a real, standardised API.
We're very interested in exploring this space. It's possible that this capability is exposed through an extension of sourcemaps [0]. Definitely curious to hear more ideas and feedback in this space.
It summarizes the discussion so far between various stakeholders. I've since moved on to other things, and I don't know where the effort currently stands.
There's solid tooling and plenty of prior art: tooling and standards already exist for adding debugging info to binaries[1]. I wonder if this could be layered into the WASM binary format in some way.
WebAssembly doesn't permit unstructured branching. That is, the bytecode doesn't support "goto". All conditionals and loops use structured mechanisms. Reconstituting branches and loops is the hardest part of decompiling, which means WebAssembly decompilers will have the hardest part already solved for them. It doesn't matter what binary or text formats the WebAssembly team chooses, they'll all be equally easy to parse, and even generate, in this regard.
The downside is that translating goto constructs in languages like C can be tricky and sometimes computationally very expensive. Even supporting multi-level "break" can be tricky. And constructs like "computed goto", which are often used in C, Fortran, etc to improve the readability (easier to read state machines) or performance (faster state machines for bytecode interpreters) cannot be supported by WebAssembly directly, which means instead of always being faster they'll always be slower.
But the downside is mostly irrelevant for people concerned by decompiling WebAssembly to readable source code.
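For the curious, the standard workaround (the "relooper" pattern) turns arbitrary gotos into a loop plus a switch on a state variable. A hand-written sketch in JS, with made-up state labels:

```javascript
// A goto-based state machine lowered to structured control flow:
// one loop, one switch, one explicit "state" variable.
function collatzSteps(n) {
  let state = 0, steps = 0;
  for (;;) {
    switch (state) {
      case 0:                      // "check" label
        if (n === 1) { state = 3; break; }
        state = (n % 2 === 0) ? 1 : 2;
        break;
      case 1:                      // "even" label
        n = n / 2; steps++; state = 0; break;
      case 2:                      // "odd" label
        n = 3 * n + 1; steps++; state = 0; break;
      case 3:                      // "exit" label
        return steps;
    }
  }
}

console.log(collatzSteps(6)); // 8
```

This is always expressible, but the compiler has to discover the structure, and an indirect jump becomes a switch dispatch on every iteration, which is where interpreters lose their computed-goto speedup.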
Huh? There are many thousands of applications and millions of lines of javascript code which will not immediately be rewritten as C++ apps. Javascript is not going anywhere anytime soon.
The initial MVP of WASM won't have privileged support for any custom runtime services, so in order to run Go code in the browser (or any other GC'd language) you'd have to compile the entire runtime with it and ship it all over the wire to the client. Eventually they do hope to lift this restriction.
Not sure what you mean? Libraries like libgc can't be directly ported over to Emscripten since they depend on platform specific behavior to inspect the execution stack and registers.
In asm.js/WASM the execution stack is not inspectable from code.
Rather than using the execution stack provided by the environment (by using the built-in calling conventions directly), you could manage your stack on the heap using the linear memory facilities. Function calls are a bit more expensive, since now you need a macro to convert calling conventions. But it's definitely not impossible, and it's not even really all that hard (just tedious).
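A toy sketch of the idea, with the "linear memory" played by a typed array (all names are made up):

```javascript
// Explicit stack living in "linear memory": because the runtime manages
// it itself, it can walk the stack (e.g. to find GC roots), which the
// host-provided wasm call stack doesn't allow.
const memory = new Int32Array(1024); // pretend linear memory
let sp = 0;                          // our own stack pointer

function push(v) { memory[sp++] = v; }
function pop()   { return memory[--sp]; }

// Each "call" spills its argument to the explicit stack around the real
// call, standing in for a custom calling convention.
function factorial(n) {
  push(n);                                        // visible to a stack walker
  const result = n <= 1 ? 1 : n * factorial(n - 1);
  pop();
  return result;
}

console.log(factorial(5)); // 120
console.log(sp);           // 0 -- stack fully unwound after the call
```

The overhead is exactly the extra pushes and pops per call, which is the "macro to convert calling conventions" cost mentioned above.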
The idea is that webasm won't have versions; it will have features that you query, which is the way JavaScript, and in fact actual assembly language, works today.
WABT (https://github.com/WebAssembly/wabt/) and Binaryen (https://github.com/WebAssembly/binaryen) as well as the spec itself (https://github.com/WebAssembly/spec/tree/master/interpreter) all have interpreters for the WebAssembly language. What they don't have is the same import and export binding mechanisms that the JavaScript engines have. JS engines can bind wasm imports and exports with JS functions. The interpreters that exist today are great for testing implementations or compilers but don't have interesting capabilities to interact with an embedding environment. There have been a few experiments with non-JS embeddings (for example, implementations built on LLVM which could easily define bindings to native functions) but nothing really usable or productized yet.
There are nascent plans from the Rust project to retrofit Spidermonkey's WASM compiler (called Cretonne, and which is apparently already written in Rust) to serve as a general-purpose compiler backend: https://internals.rust-lang.org/t/possible-alternative-compi...
I believe I've responded to the right post. :) Currently, WASM (and Javascript, when JIT'ed) are lowered to Cretonne IR, which is then optimized and translated to native code by the Cretonne engine, which is then executed by Spidermonkey. What the link above is proposing is to extract Cretonne into a standalone component capable of producing object files which can be linked into other programs as expected. The Rust project would intend to use such a backend by writing a pass to translate Rust to Cretonne IR. However, the important observation is that WASM and Javascript are already being translated to Cretonne IR, which means that if the Rust developers do the work of creating a standalone backend out of Cretonne, then WASM and Javascript should be able to leverage it to produce code that's independent of the browser (though for JS you'd probably need to implement some additional external runtime support).
So Rust is considering making a general backend which compiles to WASM first and then compiling that to native code, which could be faster and means that they would be creating a general WASM->native compiler in the process? That sounds pretty useful and clever if I'm getting it right.
You're close (it's tricky to follow all these things feeding into all these other things). What they have right now is a prototype WASM -> IR -> pseudo-native code compiler (producing code only fit for being directly executed by a JS VM). What they're proposing is to flesh out the native code backend so that its output can be fed into the usual platform-native compiler toolchains. Then, they'd modify the Rust compiler to be able to produce the IR that this project (Cretonne) operates upon (which may be similar to, but is not precisely, WASM). So Rust would not be producing WASM directly, it would be producing code that is conceptually one level "below" WASM, but still one level above machine code. And yes, in the process this work would also enable the lower levels of a platform-native WASM toolchain.
A semi-formal reference spec done declaratively in OCaml, using s-expressions for the AST? When did The Right Thing become popular in browser development!? I'm impressed. Way better than trying to get compatible, correct implementations out of a pile of English and text art, as some other groups do.
The jury is still out on whether this interpreter gets to be called a reference spec, or even a spec. It may be a reference interpreter that is offered as a companion to the official English text. It all depends on what others feel comfortable with.
But I'd rather that this OCaml program was _the_ spec. I found it enjoyable to read the other day when I was trying to figure out WebAssembly verification semantics. Much better than reading an English description of a verification algorithm.
I don't know how useful that would be, it's still javascript (just a lower level version of it) after all. It's coupled pretty tightly with the browser.
I've also linked this in a sibling comment to yours, but this post from the Rust project developers talks about a few of the potential upsides to a general-purpose compiler backend derived from the WASM initiative: https://internals.rust-lang.org/t/possible-alternative-compi...
TL;DR: compilers written for the JIT use case are designed to minimize latency (compilation times) while still producing above-average code, so having such an alternative backend would be a boon for regular users of compilers with heavyweight backends (e.g. LLVM, which produces excellent code but at the cost of very large compilation times, even in unoptimized builds). Additionally, a compiler built to process WASM is more security-aware than your typical compiler, and thus won't include optimizations that attempt to exploit undefined behavior (thus losing out on potential optimizing opportunities, but hopefully producing code that's a bit more bulletproof).
WASM isn't related to JavaScript in any way. Maybe you were thinking of asm.js?
You cannot access the browser APIs from WASM, so there is no tight coupling with the browser environment. I think non-browser use could definitely make sense.
According to the docs, the high-level goals of WASM are to:

- allow synchronous calls to and from JavaScript;
- enforce the same-origin and permissions security policies;
- access browser functionality through the same Web APIs that are accessible to JavaScript.
WASM code can call external functions defined per module, and the external code can be JS code that interacts with browser APIs. So indirect use of browser APIs is possible.
Not exactly. You have to export those APIs into the WebAssembly module. It's certainly possible to have a WASM implementation that doesn't provide web APIs, but is still WASM.
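To make that indirection concrete, here's a minimal sketch runnable in Node or any wasm-enabled browser. The module name `env`, the function names `log` and `run`, and the constant 42 are all made up for illustration: a hand-encoded WASM module imports a single JS function and calls it, which is the only way it can reach anything outside the sandbox, browser APIs included.

```javascript
// Hand-encoded binary for (roughly) this module:
//   (module
//     (import "env" "log" (func $log (param i32)))
//     (func (export "run") (i32.const 42) (call $log)))
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,             // magic "\0asm" + version 1
  0x01, 0x08, 0x02, 0x60, 0x01, 0x7f, 0x00, 0x60, 0x00, 0x00, // type section: (i32)->() and ()->()
  0x02, 0x0b, 0x01, 0x03, 0x65, 0x6e, 0x76, 0x03, 0x6c, 0x6f, 0x67, 0x00, 0x00, // import "env" "log"
  0x03, 0x02, 0x01, 0x01,                                     // function section: one func of type 1
  0x07, 0x07, 0x01, 0x03, 0x72, 0x75, 0x6e, 0x00, 0x01,       // export "run" -> func index 1
  0x0a, 0x08, 0x01, 0x06, 0x00, 0x41, 0x2a, 0x10, 0x00, 0x0b, // code: i32.const 42; call 0; end
]);

const received = [];
const module = new WebAssembly.Module(bytes);
const instance = new WebAssembly.Instance(module, {
  env: { log: (x) => received.push(x) }, // the only JS the module can touch
});
instance.exports.run();
console.log(received);
```

Anything not explicitly passed in through the import object simply doesn't exist from the module's point of view, which is why a non-browser embedder can provide a completely different set of imports.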
As others said, it has nothing to do with JS. I'd like to add how it's useful: it's a low-level VM spec (lower than, say, the JVM or CLR), so it will hopefully be easy to compile various languages to it (hopefully, because they still need to fit a GC mechanism into it to support higher-level languages), but it's still safe/sandboxed and portable.
The main benefit would be shipping a single WebAssembly module to target client and server (e.g. the same benefit of sharing JS between the browser and Node).
I haven't seen any official solution for AOT from the Node.js foundation. Having official WASM support drives/drags the community in a standard direction.
So, at the very least there's support for C, C++, and Rust at the moment, though of course since the standards aren't yet finalized and the toolchains are evolving how well it works at any given moment may vary. I'm sure there's work going on for other languages to compile down to WASM, just not sure of the status of any other languages at the moment.
Yes; for instance, Swift should be able to compile to WebAssembly. If Swift can be compiled with the clang/LLVM toolchain, it should be able to target WebAssembly.
Sorry, I wasn't clear, I mean run the entire thing in the browser, to make it interactive. Since Chrome apparently supports the text format as a way to view the binary, I was hoping that it would support a two way process.
However, from searching around, it looks like I'll have to emit bytecode directly, at least for the near future. Which isn't the end of the world I guess!
Yeah, I think it would have been nice to have text format input support in browsers, but the decision was against that.
But you do have other options than emitting binary wasm yourself. You can use a compiled version of wabt or binaryen; binaryen.js, for example, can emit the binary for you.
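Even without a toolchain, emitting bytes by hand really isn't the end of the world: the header is tiny and documented, and `WebAssembly.validate` gives immediate feedback. A small sketch (runnable in Node or any wasm-enabled browser):

```javascript
// The smallest valid WASM binary: the magic number "\0asm" plus version 1.
const empty = new Uint8Array([0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00]);
console.log(WebAssembly.validate(empty)); // a well-formed (if empty) module

// Corrupting the version field makes validation fail, so mistakes
// in hand-emitted bytecode are caught up front rather than at run time.
const bad = Uint8Array.from(empty);
bad[4] = 0x02;
console.log(WebAssembly.validate(bad));
```

Validating before instantiating is a cheap sanity check when you're generating the binary format yourself.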
Maybe someone that's more familiar with wasm can enlighten me, but isn't this still extremely limited in the capabilities it exposes? With so many limitations in place, what are some example use-cases that this helps with? Am I being too negative and unrealistic? I don't know what kind of expectations I should have. I'm not writing this to bait responses.
It seems to be that until you get lower level network capabilities and some form of file system access, you're just too limited.
I do think JS is very limited for non-networked applications. If most of what you're doing is hitting APIs and displaying that data, it's great.
But if you're building an app that doesn't depend on the network, you find yourself way more limited in most cases. The solution for many people is to go for Electron, but that takes you completely out of the web sandbox.
I understand that it's incredibly challenging to get multiple vendors to agree on anything, and I'm sure we'll get to a good place eventually, but I guess I'm just feeling a bit frustrated with the state of things.
What does this initial release of wasm enable? I've heard that game developers are expected to be the initial target audience. Do we provide adequate APIs for storing game assets? As an example, I think the latest Doom game is 50GB.
I guess I'm a bit concerned that all of this incredible work will get done and released, but nobody will be able to make use of it? Is this a legitimate concern or am I being irrational? I guess I'd have a bit more peace of mind if there were some examples of concrete use-cases that are expected to be solved by this work. Maybe these use-case examples already exist and I haven't seen em? I'll admit I haven't looked around much.
This is a concern I've had with regular web APIs as well. Sometimes I'll read about a new spec that's being developed, but maybe due to my poor understanding or lack of knowledge, I'll end up confused as to the purpose of those APIs.
We have been watching people redo computer software in JS perhaps 20 years behind the original (the original Doom was ported in 2011, if I remember correctly; please bikeshed the exact number of years). Let's hope WebAssembly will seriously cut this figure down.
So it compiles C++ to WebAssembly using the Emscripten compiler. What does JS have to do with WebAssembly? Is the idea the same for JS, i.e., compiling JS code to WebAssembly?
Java applets needed to interact with the OS in various ways and required browsers to have sub views controlled by plugins. They were a lot more complex and risk prone. Especially since Sun kept adding features.
WASM is much simpler. It's just a standard bytecode format that the JS engine can handle. It's much easier to secure.
From a security perspective, how is this any different from what you can already do with asm.js? (Remember, SpiderMonkey has a separate JIT mode for it.)
There is this weird belief that being a text format automatically means open, reusable, interoperable and being a binary format automatically means closed, proprietary, opaque.
As counterexamples, here are several binary formats that are very open: ELF, DWARF, PNG, tar, gzip, Ogg and all its subformats, FLAC, JPEG2000.
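To illustrate how approachable these formats are, here's a toy sketch (the `identify` helper is made up for illustration) that recognizes a few of them purely from their publicly documented magic numbers:

```javascript
// Leading bytes ("magic numbers") of some openly specified binary formats.
const MAGIC = {
  png:  [0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a], // "\x89PNG\r\n\x1a\n"
  gzip: [0x1f, 0x8b],
  wasm: [0x00, 0x61, 0x73, 0x6d],                          // "\0asm"
};

// Identify a byte buffer by checking each known magic prefix.
function identify(bytes) {
  for (const [name, magic] of Object.entries(MAGIC)) {
    if (magic.every((b, i) => bytes[i] === b)) return name;
  }
  return 'unknown';
}

console.log(identify(new Uint8Array([0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00])));
```

The point is that "binary" just means the spec describes bytes instead of characters; anyone can write a parser from the public documentation.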
On the other hand, you have something like OOXML, which while technically open/standardized, is so incredibly complicated that it is extremely difficult to implement well. Just because something is text-based does not make it simple.
I think Wasm will be more advanced than PNG. And if there are code execution exploits in PNG, I can only imagine how many exploits there will be in Wasm.
I was saying this because in many binary formats, lots of information is lost in the source-to-binary conversion (.c to native binary, for example)... and with current connection and processor speeds, I don't think that binary will help that much or be worth the trade-off. But I get your point, and I know that HTML, JS, etc. can be highly obfuscated; still, I did extract some features from Gmail before, and it was not nearly as bad as working with assembly extracted from a binary...
JS minification already removes all the "extra" info you would get over a binary format (variable names are stripped, dead code is removed, loops are optimized, etc.).
Webasm will be almost identical to minification when it comes to decompiling. And minification is already extremely common.
And you can be a skeptic, but one of the biggest reasons for webasm was to reduce parsing overhead, which is currently the bottleneck (in terms of startup time) for most JavaScript-heavy applications. Plus, needing to keep the original source around bloats memory. IIRC some of their test applications would take 30+ seconds JUST TO PARSE on mobile devices. That's a HUGE amount of overhead that this will cut out.
Can you please link to the test that took 30+ seconds to parse? It just seems that the way computers are evolving that we will never see that in practice even if we avoid the binary web...
It's in the FAQ for WebAssembly[0]. First section. I may have misspoken there, as I don't know if they actually have real "tests" that show that, or if it's just that some known applications take that long.
FWIW, the "demo" from that site takes about 40 seconds to load on my mobile device when using the asm.js version.
Minification does things like rename variables to single characters, remove dead code, optimize code in a number of ways, inline constants, shorten if statements, unroll loops, hoist variables, rewrite comparisons, and more.
It is compilation in just about every sense of the word (especially when combined with a compiler like Babel or TypeScript).
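A toy before/after sketch (the function names and numbers are invented for illustration) of how little survives minification, even though the output is still technically text:

```javascript
// What a developer writes:
function totalWithTax(prices, taxRate) {
  let subtotal = 0;
  for (const price of prices) {
    subtotal += price;
  }
  return subtotal * (1 + taxRate);
}

// Roughly what a minifier ships: names, comments, and structure are gone,
// so there is little more to "read" here than in a disassembled binary.
const t = (p, r) => p.reduce((a, b) => a + b, 0) * (1 + r);

console.log(totalWithTax([10, 20], 0.5)); // 45
console.log(t([10, 20], 0.5));            // 45
```

Both versions compute the same result; the difference is purely how much intent is recoverable by a human reading the shipped code.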
Why would that be a problem? The open web needs standards that are well defined and used by everyone, and WebAssembly seems to bring just that with the simultaneous announcements by several browser vendors. PNG is binary too, and it's arguably one of the assets of the open web.
Games are the first type of application to use this, but won't be the last. If you can run AAA games in a browser, you could run Photoshop, Maya, programming IDEs, and other professional tools. It's not just another framework.
The web will always chase after native unless you truly remove the web portions. WebGL might be the best example of getting close, but for the meat and potatoes of UI, there's very little comparison when looking at the DOM.
https://blogs.windows.com/msedgedev/2016/10/31/webassembly-b...
https://hacks.mozilla.org/2016/10/webassembly-browser-previe...
http://v8project.blogspot.com/2016/10/webassembly-browser-pr...
[edited to make links clickable]