The ECMAScript stage 3 proposals related to decorators and class private and static fields need a lot more time to bake - maybe a few years. The whole # or -> thing seems fundamentally wrong. The standards process needs to slow down until better ideas emerge from the JS community and the Babel ecosystem.
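For context, the `#` syntax in question is the class-fields proposal's private fields. A minimal sketch of what it looks like (names here are illustrative, not from any spec example):

```javascript
// Sketch of the proposal's private-field syntax: names prefixed with `#`
// are only accessible from inside the class body, unlike normal properties.
class Counter {
  static #instances = 0; // private static field
  #count = 0;            // private instance field
  constructor() {
    Counter.#instances += 1;
  }
  increment() {
    this.#count += 1;
    return this.#count;
  }
  static get instances() {
    return Counter.#instances;
  }
}
```

From outside the class, `counter.#count` is a syntax error rather than a normal property lookup, which is part of what makes the `#` sigil feel alien to some.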
I do like the idea presented of having a transpiler track and a JS engine track. Keep the ES spec for the JS engines simpler and hoist all the syntactical sugar onto the transpilers.
> I do like the idea presented of having a transpiler track and a JS engine track. Keep the ES spec for the JS engines simpler and hoist all the syntactical sugar onto the transpilers.
Alternatively, if you focus on getting web assembly up to speed, you can compile any flavor of JavaScript you want directly to web assembly. You wouldn't need transpilers at all except for backward compatibility (which would eventually fade).
Everyone keeps saying this, but it doesn't make sense to compile JavaScript to WebAssembly. Even if WASM had enough built-in APIs to allow you to implement JS without also implementing its runtime, you would lose all of the JIT performance optimizations. There is literally no benefit, aside from being able to implement non-backwards-compatible features (which doesn't feel like a strong reason to me, given the cost).
I believe the .NET world does its JITing based on a bytecode that is, at a conceptual level, a lot like web assembly. They're not the same, of course, but the point is you can JIT on that sort of thing OK. I won't promise you won't lose some performance, but there are other ways in which that might still be a long-term win, as you may be able to gain by exploiting the language you start with. (In the long run, and to be honest even today, if you really deeply care about performance, you don't want a dynamic language. They're slow. We've invested crazy amounts of effort into optimizing them, made tons of progress vs. where we started... and they're still slow. At this point the smart money has to be on that not changing.)
They're nowhere near the same. WASM is based on linear memory and has no concept of objects or methods. CIL bytecode as used by .NET targets a full runtime which has objects, methods, and GC. Even with the GC proposal as currently spec'd, WASM would not be able to handle .NET's object model.
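The "linear memory" point is visible even from the JS side: a wasm module's memory is just a flat, resizable buffer of bytes, exposed through the standard `WebAssembly.Memory` API. A quick sketch:

```javascript
// WebAssembly memory is one flat array of bytes, not a heap of objects.
// Any object model (fields, methods, GC) has to be laid out by hand on
// top of these bytes, which is exactly what makes .NET's model a bad fit.
const memory = new WebAssembly.Memory({ initial: 1 }); // one 64 KiB page
const i32 = new Int32Array(memory.buffer);
i32[0] = 42; // a "store" at byte offset 0: just bytes, no identity, no methods
```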
The only similarity, really, is that they're both binary formats.
Blazor is a .NET interpreter written in C++ that's compiled to WebAssembly. You can think of it as being comparable to Duktape or another JavaScript interpreter being compiled to WebAssembly and running some JavaScript code. It won't JIT anything.
> you would lose all of the JIT performance optimizations
I don't follow. Why would losing optimizations from one language prevent optimizations from being done in an IL? Are you saying that Web Assembly can't be as fast as JavaScript? If so, why?
> There is literally no benefit, aside from being able to implement non-backwards-compatible features
You ignored the benefit I outlined above. Why would you want a single scripting language to be the only one to ever be used in a web browser? Heck, web assembly can even make the ECMAScript iteration loop tighter if you really wanted to.
I'm having trouble seeing the downsides to this approach.
The downside is that it's extraordinarily difficult to create a single runtime which has a satisfying object and GC model for all parties. The GC proposal which adds such a thing on top of WebAssembly is nowhere near complete and is currently being punted on, with the intermediate solution being host-bindings. And keep in mind that this GC proposal doesn't handle JavaScript's dynamic objects which can change shape at runtime. It only handles ones with known shapes.
It will be a long time before it's possible for JavaScript to "just be compiled" to WebAssembly while keeping the same performance characteristics, if it ever happens.
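To illustrate the "objects which can change shape at runtime" point, here is the kind of mutation engines have to cope with (a trivial sketch):

```javascript
// A plain JS object's shape is not fixed: properties can be added and
// deleted after creation. Engines track evolving "hidden classes" for
// this; a GC model built around fixed, known shapes can't express it.
const point = { x: 1 };
point.y = 2;    // shape changes: {x} -> {x, y}
delete point.x; // shape changes again: {x, y} -> {y}
```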
Years of development have gone into making JavaScript in the browser really, really fast. If you re-implement a JS engine on top of WASM, you lose all of that work—unless you do something crazy like compile V8 into WASM, but then the client is going to have to download an enormous payload just to run your app.
Even if WASM had APIs for GC, interaction with the DOM and browser APIs, interop with JS, etc, you would still have to compile the JS ahead of time into a static binary, and so the browser would not be able to apply its JIT magic to your code.
I'm all for the dream of running any arbitrary language in the browser, but it's simply not realistic for any language that has a large runtime, which is most dynamic languages.
I haven't seen any work from any browser implementers working towards host-bindings or GC. I just skimmed through the I/O talk, as well: AutoCAD was running in a 2D canvas through emscripten in a WebWorker, while the rest of the code was a custom-built React/TypeScript app. Hardly the desktop AutoCAD everyone is thinking of...
... and I didn't spot any mention of "WebAssembly 2.0", whatever that might mean...
Browsers are already very optimized for JS. Wasm is not currently a good target for managed languages (you would need to implement your own GC on top of wasm memory), and AOT compilation in general is not good for languages as dynamic as JS.
JS took over not just because it ran everywhere but also because of the network effects of everyone learning it because it ran everywhere. wasm has no such benefits, as an intermediary format. Nobody's ever going to get a job as a "wasm expert" building anything other than a wasm build pipeline. wasm may afford us more freedom, but that very freedom is in opposition to its broad adoption. It'll certainly be interesting to watch it play out over the next few years.
Oh, and:
> Alternatively, if you focus on getting web assembly up to speed, you can compile any flavor of JavaScript you want directly to web assembly
You can already compile any flavor of JavaScript to... JavaScript. It even maps better than wasm. wasm really doesn't help there. Today's ecosystem _is_ that ecosystem. Except there's some hope that some variants become future core language, unlike if you were using wasm, where you wouldn't even have such a thought because you'd be using a compiler forever. But... If you're ok with that, why wouldn't you be ok with transpiling forever?
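To make the "compile JavaScript to JavaScript" point concrete, here's the kind of rewrite a transpiler does, sketched by hand. The output shown is illustrative, not actual Babel output, and `user`/`name` are made-up example values:

```javascript
// Input, using newer syntax:
//   const name = user?.profile?.name ?? "anon";
// A transpiler emits roughly this ES5-compatible equivalent:
var user = { profile: { name: "Ada" } };
var name =
  user == null || user.profile == null || user.profile.name == null
    ? "anon"
    : user.profile.name;
```

Since the target is still JavaScript, the engine's existing JIT applies to the output unchanged, which is the "maps better than wasm" argument.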
That would be my preference too. It feels like there are so many possibilities for performance gains and better tooling stuck behind WASM's lack of DOM integration and other native browser libs.
Does this help the JS engines? Instead of _knowing_ the developer's intent and optimizing for it, the engine has to infer it from whatever the transpiler output and optimize for that. The latter seems harder than the former.
The yearly ES release cadence seems to be adding sugar for its own sake. This stuff hasn't been battle tested in the field long enough. Look at how many times class field and decorator proposals have changed over the years.
JSX by comparison seems much more stable than the class and decorator proposals, although arguably its features are much better optimized by a transpiler rather than a native JS engine.
> This stuff hasn't been battle tested in the field long enough.
There is a circular dependency: TC39 wants new language changes to be prototyped and proven in compiled-to-JS languages but the transpiler developers don't want to implement language changes that are not already on the standards track.
ReasonML has JSX. It just translates to function calls which do whatever you want. It’ll probably never happen but it’d be cool to see native JSX in JS.
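For reference, "translates to function calls" means JSX is pure sugar: under React's classic transform, a tag becomes a `createElement` call. A sketch with a stand-in factory (this is not the real React API, just something with the same call shape):

```javascript
// JSX such as:   <div className="app">hello</div>
// compiles to:   createElement("div", { className: "app" }, "hello")
// Stand-in factory mimicking the shape of that call's result:
const createElement = (type, props, ...children) => ({ type, props, children });
const el = createElement("div", { className: "app" }, "hello");
```

Because the target function is pluggable, the same syntax can mean "build a virtual DOM node" in React or something else entirely in ReasonML, which is why it needs no engine support.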
They aren't automatically generated. These are the notes taken at the meeting. Several note-takers worked over the three days to record it.