I am not a Rust fan or user, but I must heartily applaud this sign that the times are changing -- we will finally free ourselves from being forced into a single choice (JavaScript) for delivering frontend code to the browser.
Looking forward to all the interesting things done by leveraging WebAssembly to the max!
> being forced to have only one choice (Javascript)
When it comes to JS, you already have a gazillion prettier and more enthralling languages to choose from that transpile neatly to JS. Are {{TypeScript/Flow/ReasonML/PureScript/Elm/CoffeeScript/Haxe/Haste/GHCJS/Fay/Fable/WebSharper/Scala.js/GorillaScript/Roy/Idris/etc}} really not enough choice? While wasm is still busy being hatched and birthed, JS has already long become "the web's choice compilation target".
That's not to say that projects compiling to wasm aren't worthwhile, of course :D but I observed that the precursor "asm.js" paradigm didn't get widely adopted even though browsers already optimize-for/precompile it. Unless the whole of "EcmaScript + Browser APIs" (dom/xhr/string/array/etc.) is readily available from within wasm by default without bridging (i.e. "the modern web client runtime" exposed fully within wasm -- in a prim-op/pointer fashion, I guess), I remain mildly skeptical that it'll take off as "a full JS replacement" rather than remain a pluggable niche component for specialized high-perf computations and such.
> enthralling languages to choose from that transpile neatly to JS
I wish people would stop using "transpiling". The verb "to transpile" arose to convey that a language such as CoffeeScript compiles to JavaScript without information loss -- i.e. they are at the same abstraction level, CoffeeScript being just syntactic sugar.
But when it comes to other languages, like Haskell, Idris, PureScript, Scala, Clojure, those languages are not at the same abstraction level and you lose information by compiling to JS.
And we have had a perfectly adequate term for naming tools that transform one programming language into another – to compile.
Transpile comes from transcompile (nicely defined as source-to-source compiling), which refers to a subset of compiling. It's a perfectly valid term from the '80s. (I've answered this so many times, I don't care to look through my history for the sources. Feel free to.)
You're fine to get upset when people say transpiling isn't compiling.
But getting upset that people use a more specific term than you do verges on the overly pedantic, and isn't going to win you any friends on the other side of the fence.
The first transcompiler produced optimised AIX assembly code from C.
By your measure, that wouldn't be considered source code. The original meaning was somewhere along the lines of "not assembled bytecode".
Why the new term then?
Compiling was commonly thought of as including what we now refer to as the assembling step. They wanted to point out they weren't doing that. Editable code, non-hex code, non-binary code, etc. was thought of as source code.
This stuff isn't worth getting upset over. Defining terms to have reasonable conversations has always been difficult.
Here, if you see compiler or transpiler, you understand a transformative process is happening, and so does everyone else.
Accusing someone of misusing a term that can mean different things to people from different backgrounds, or being "wrong", is just going to isolate you from being able to speak reasonably to them.
Our language evolves. A word from nearly thirty years ago is coming into common usage, which means that its definition will likely change. It might even become more specific. Most programmers using transcompilers today don't have thirty years of experience behind them, but they will shape the usage. You can find a middle ground with them... or convince them your experience isn't worth listening to.
How so? Interested in your elaboration. I would have surmised it'd be very smooth and fun to target, given all the freedoms it gives you (free-form records, dynamically-sized arrays, no types). All the things that are "foot guns" when hand-writing it, should make codegen-ing it a lot easier. Because when a code-gen "shoots yourself in the foot", one can just fix the transpiler once instead of one's individual JS code-base(s) time-and-again
> free-form records, dynamically-sized arrays, no types
AFAIK, free form records and lack of types do not an easier compilation target make. It's pretty easy to compile an untyped language to a typed language.
> How so?
- There are a ton of implicit casts that are almost certainly not what you want. So, for example, if you're going to use addition in the compiled code, it's a 10-step process [1, section 12.8.3.1], and you should be careful not to trigger the 9 steps that do things other than addition.
- There is, last I heard, no way to determine the size of the stack.
- No integers.
- Say you have a demo page for some language, and someone using the page writes an infinite loop. Don't want the page to crash? Welcome to advanced compilation techniques like CPS transformations and trampolining.
Overall, JS has a ~850 page spec, and any part of the language you target for which you don't fully understand the spec is a potential bug. Instead, you want your target language to be dumb, tiny, and explicit.
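A compiler emitting `+` has to rule out all of those non-addition paths. A few quick illustrations of the coercions hiding behind that one operator (each follows from the ToPrimitive/ToString/ToNumber steps in the spec's addition algorithm):

```javascript
// None of these is numeric addition:
console.log(1 + "2");     // "12" - the number is converted to a string
console.log([] + {});     // "[object Object]" - both sides go through ToPrimitive
console.log(true + true); // 2 - booleans are converted to numbers
console.log(null + 1);    // 1 - null converts to 0
```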
Very easy to avoid if you're compiling a statically typed language to JS.
> No integers
Not in the JS specification (outside typed arrays), but every JS JIT effectively lets you declare a variable as a 32-bit signed integer, a trick that made its way into the asm.js specification. They are declared like this:
var a = value|0;
Where the |0 applies ToInt32 -- a no-op when the value is already a 32-bit integer, so JITs elide it and treat it as a type hint.
If what worries you is precision and not performance, doubles allow 53 bit integers (not counting sign bit) with full precision.
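To make both points concrete -- |0 really does truncate (and wrap) anything that isn't already an int32, and doubles hold integers exactly only up to 2^53 - 1:

```javascript
// "|0" applies ToInt32: identity for int32 values, truncation otherwise.
console.log(3.7 | 0);        // 3 - fractional part dropped
console.log((2 ** 31) | 0);  // -2147483648 - wraps into signed int32 range

// Doubles represent integers exactly up to 2^53 - 1:
console.log(Number.MAX_SAFE_INTEGER === 2 ** 53 - 1); // true
console.log(2 ** 53 === 2 ** 53 + 1);                 // true - precision lost
```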
> Say you have a demo page for some language, and someone using the page writes an infinite loop. Don't want the page to crash? Welcome to advanced compilation techniques like CPS transformations and trampolining.
Or just use a web worker, which you can terminate if you haven't heard back in a while (pun intended).
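A minimal sketch of that watchdog pattern, assuming a browser context (`Worker` isn't available in plain Node); the worker script URL and message shape here are made up for illustration:

```javascript
// Run possibly-looping user code in a worker; terminate it on timeout
// so the page itself never hangs. "workerUrl" points at a hypothetical
// script that evaluates the posted code and posts back its result.
function runWithTimeout(workerUrl, code, timeoutMs, onResult, onTimeout) {
  const worker = new Worker(workerUrl);
  const timer = setTimeout(() => {
    worker.terminate(); // infinite loop? kill it, page survives
    onTimeout();
  }, timeoutMs);
  worker.onmessage = (e) => {
    clearTimeout(timer);
    onResult(e.data);
  };
  worker.postMessage(code);
}
```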
> and any part of the language you target for which you don't fully understand the spec is a potential bug
It heavily depends on what type of language you do. If your compiler tracks the types and doesn't mix them, edge cases are much, much easier to avoid.
Source: I did make a compiler for my own language that targeted JS.
>Not in the JS specification (outside typed arrays), but every JS JIT effectively lets you declare a variable as a 32-bit signed integer, a trick that made its way into the asm.js specification. They are declared like this:
The "|0" trick you mention is not for javascript; it is for asm.js. To be able to declare such a "true integer" variable, your code would need to be in asm.js, not javascript.
Javascript has no integers, only floating point numbers. This is a very strong limitation.
When JS is a compiler target, that trick will generally only be used if the source language excludes all those edge cases, i.e. the value is guaranteed to be an integer.
> It's pretty easy to compile an untyped language to a typed language
Did you mean the other way around? Certainly it's possible to compile an untyped language to a typed language, but it's nontrivial especially in the presence of duck typing.
I think they mean by using a single generic type for everything, like you'd see in an interpreter for an untyped language. It's pretty easy, but it's also very slow...
Not the parent, but I've been reading the WebAssembly spec and that sounds like a perfect description. It's honestly probably one of the more ideal compiler targets
For one thing, JS has no unstructured control-flow. So your compilation process involves breaking down ifs, loops and so on into branches, then… trying to messily reconstruct them.
> For one thing, JS has no unstructured control-flow. So your compilation process involves breaking down ifs, loops and so on into branches, then… trying to messily reconstruct them.
An omission which, incidentally, is also intentionally present in WebAssembly. There are supposedly good reasons for it, but I still find it really disappointing.
Well, ouch, a transpiler writer has to code up a bit of boilerplate, most of it just once early on in the project's life-time.. "too bad"! IMHO reconstructing stuff may be a somewhat tricky challenge, but there's no intrinsic need for it to be "messy" regardless of the target language? I must be missing something here still.. =)
I'm doing transpilation to Go right now, so often I think "much of this would have been much simpler to get done if I transpiled instead to an anything-goes scripting language". (Reason of course being I want to emit idiomatic human-written-like code, and working with mostly-incomplete type information coming in, still reconstruct types rather than pass-and-return-typeless-boxed-values-around messily.) A lot of this is pretty "messy" right now, but I place all of the blame for that on me (guess I prefer rodeo-ing into it rather than "sitting down and writing a formal paper on it first"), neither the target or source language.
> there's no intrinsic need for it to be "messy" regardless of the target language
The generated code with “re”constructed control flow is going to be a mess and you can't really help it. Worst-case it'll be a `while(1) { switch(i) {` type of thing.
Emscripten has a specially-designed algorithm called relooper for reconstructing control flow (there’s a paper on it!). Switch statements are its last resort.
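The switch-loop fallback mentioned above looks roughly like this: a hypothetical three-block control-flow graph (here, summing 1..n) flattened into a state machine, with `pc` standing in for the branch target:

```javascript
// Worst-case control-flow reconstruction: every basic block becomes a
// case, and every branch becomes an assignment to the "program counter".
function sumToN(n) {
  let pc = 0, i = 0, acc = 0;
  while (1) {
    switch (pc) {
      case 0: // entry block
        i = 1; acc = 0;
        pc = 1; break;
      case 1: // loop header: conditional branch on i <= n
        pc = i <= n ? 2 : 3; break;
      case 2: // loop body
        acc += i; i += 1;
        pc = 1; break;
      case 3: // exit block
        return acc;
    }
  }
}
console.log(sumToN(10)); // 55
```

Semantically fine, but clearly a mess next to the `for` loop a human (or the relooper, on a good day) would have written.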
Or ... you don't break down ifs, loops and so on into branches to begin with. Scala.js has an optimizing compiler that doesn't do that (and I've heard a bunch of compiler people who were really impressed at the level of optimizations we can do without breaking down control flow into branches).
>more enthralling languages to choose from that transpile neatly to JS
Many of those languages you listed (for example, TypeScript) run slowly compared to plain Javascript. Transpiling to js is not an easy task and I don't think one can claim it can be done in a "neat" way; see for example the paper that Emscripten authors wrote regarding that topic.
Why should I accept a performance penalty if I write in TypeScript, being so similar to JS? TypeScript is js with typing support. In the outside world (outside of the js ecosystem), if you have a dynamically typed language and you add type annotations to it, performance gets boosted dramatically, 3x to even 10x speed (practical example: Lisp). Not in the case of Typescript. It goes slower than js. Why? Because transpiling to js isn't so nice.
Those langs, compiled to Webassembly, should work much faster, and many will outperform Javascript substantially. Even in simple stuff: Javascript only has floating point numbers. There is a lot of computationally intensive code that will massively outperform the JS equivalent, if implemented using Webassembly using plain integers.
> Why should I accept a performance penalty if I write in TypeScript, being so similar to JS? TypeScript is js with typing support
Indeed you shouldn't. TS set out to be "JS with types, that get checked then ditched". Has it morphed into something more? The above shouldn't be slow to generate. And as for code speed, not much code-gen should be happening in the first place once type info is erased from source as "TS turns into JS". I might not be fully up to speed on TS' latest developments however.
>better opportunities for wasm than trying to oust JavaScript; that's not really a stated goal of the wasm project.
Yes, that's what the wasm guys say, however, what I think is that it will effectively displace Javascript. Not only that, the labor market implications for the current JS full stack developers and front-end developers will be interesting.
It's definitely meant to make JavaScript more capable. You can already do some things with WebAudio, for example, but with WASM you can use complex DSP. You can also ship valuable audio code locked down in binary format -- useful when someone getting hold of your algorithm could do damage to your business, but (as in the case of a WebAudio DAW) the code can't be run on the server.
For ... reasons ... I want to build a Chrome extension that implements USB-IP support as described in the Linux kernel https://www.kernel.org/doc/Documentation/usb/usbip_protocol.... , with the Chrome USB API on one end and a websocket tunnel on the other, leading to websockify running on an EC2 instance. (Or, apparently, WebUSB is a thing these days and I don't need an extension?) I could do protocol parsing in JS, but Rust seems like a much more well-suited language for this.
The frontend itself, as in the UI, I'll write in JS, but there are parts that need to be running client-side that aren't really "frontend".
Someone I know built a thing which (for more complicated reasons) ends up speaking HTTPS tunneled over a websocket connection, with a need to terminate TLS within the local JavaScript context. There are a few pure-JS TLS libraries, but it'd be nicer to use an actual TLS library.
Someone else I know literally built a Kerberos client in JS. Again, having the UI in JS seems good but the protocol bits probably shouldn't be in JS. (Also, it is possible that I keep weird company.)
As a maybe more sympathetic example, consider noVNC or Guacamole, which are JavaScript clients for VNC. It would be a lot simpler to just hand an existing VNC implementation a <canvas> and let it do what it's been doing on desktop for years (drawing to a region of memory) than to reimplement VNC in JS.
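The glue for that would be small. A browser-only sketch (function is defined but not invoked here; `ctx` is a 2D canvas context, and the buffer layout is the usual RGBA one):

```javascript
// Blit a raw RGBA framebuffer (e.g. filled in by a VNC decoder compiled
// to wasm) onto a <canvas>. "buf" is a Uint8ClampedArray of
// width * height * 4 bytes in RGBA order.
function blitFramebuffer(ctx, buf, width, height) {
  const image = new ImageData(buf, width, height);
  ctx.putImageData(image, 0, 0);
}
```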
Because a lot of us really hate JavaScript. I see JavaScript as being somewhat like racism. It used to be really bad and it's gotten a little better. And those who are happy with the status quo (JS devs) keep saying stuff along the lines of, "it's better now...I thought we were past this." They're continually told, "no, from our perspective it's a fundamentally-broken language and we won't be happy until we can write anything front-end without a single line of JavaScript." And yet because they're happy with the way things are, they have problems really hearing that message and understanding just how unacceptable the current situation is for others.
Btw, WebAssembly has nothing to do with choice, but with performance, its primary purpose being to let C/C++ run in the browser. And it was preceded by asm.js.
This is actually very relevant. Going with WebAssembly means you're going low level — e.g. a language author will no longer have the JavaScript runtime to piggyback on, no garbage collector for example. Which might be good news for a language like Haskell, because people now have the opportunity to run the actual GHC runtime and garbage collector in the browser. But you have to port it all and the binary size downloaded by users will have to include everything.
So language implementations like Scala.js or PureScript aren't rushing to WebAssembly yet. In the future anything is possible of course, but currently it's not a useful compilation target for garbage collected languages.
Therefore I don't have good news — if diversity is what you're looking for, then know that WebAssembly was built for C/C++ and Rust and languages not fitting this narrow profile are out.
First, Rust is a far superior language to JavaScript, so even if it were a 1-for-1 swap, we'd still be coming out ahead. But you've forgotten a huge example of a language that fits into your "narrow profile." Swift is also an LLVM language and people are already writing front-end code in Swift for Apple platforms. Being able to share front-end code across web and iOS/macOS platforms will be a huge win. There hasn't been much movement towards compiling Swift to WebAssembly yet, but there's nothing that should make that much more difficult than Rust.
But the list of LLVM-capable languages is much longer than that. So even if it does force an initial download of a GC, that's still more choice than we have now. And you seem to still be assuming that the JavaScript runtime is a good thing. Lots of us feel otherwise. It's a huge source of bugs and vulnerabilities and the sooner we can ship browsers that no longer rely on it, the better it will be for everyone. WebAssembly will eventually get the equivalent of shared libraries so GCs can be downloaded once from a CDN and cached for future use.
WebAssembly is the first web platform development that promises a future that doesn't rely on JavaScript in any capacity. It's obviously not there yet and initial forays into it will be primarily about performance until crucial web APIs like DOM manipulation are enabled for the platform. But the future of WebAssembly really is about choice and not just performance. Don't confuse the current state with the end state.
For the same reason people use JavaScript in the back-end (Node.JS) — Code Reuse
If you are a Rust developer and need to create some code for the frontend of a web service, this solution lets you keep your confidence in the language you are already used to using (Rust in this example) instead of having to worry about the ambiguities of the JavaScript language.
I would expect WebAssembly to be used to implement things like video codecs that aren't provided by the host browser, not for typical frontend code. I'm happy to be proven wrong by ambitious WebAssemblers, though!
But don’t forget you still have CSS and HTML in your frontend stack. JavaScript libraries would need to be rewritten in Rust so we can manipulate the DOM (which needs support from the browser APIs).
I think that’s probably right, but sharing things like form/API validation, so that you can fairly reliably say there are unlikely to be backend validation errors (other than DB ones), would be nice... and it doesn’t require high throughput at all.
Not exactly rust, but there's benefits to running the same language both on the front and back end. You could reuse template systems and validation logic for example.
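As a hypothetical illustration of that reuse, a validator like this could run unchanged in the browser (for instant feedback) and under Node (as the authoritative check), so the rules can never drift apart -- the field names and rules here are invented for the example:

```javascript
// One validation function shared by frontend and backend.
function validateSignup(form) {
  const errors = [];
  if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(form.email || "")) {
    errors.push("email: invalid address");
  }
  if ((form.password || "").length < 8) {
    errors.push("password: at least 8 characters");
  }
  return errors; // empty array means the form is valid
}
console.log(validateSignup({ email: "a@b.co", password: "secret123" })); // []
```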
1. Java applet plugins were not built into the browser, and applets were not an internationally agreed standard between browser vendors. You needed to install a big plugin to make them work.
2. Given the computing capacity of the average PC in those days, startup was slow and applets consumed lots of memory.
3. Java applets could only be written in Java.
In comparison:
1. WebAssembly is a standard, and its support is built into the browser.
2. WebAssembly is very high performance -- higher than what users have come to accept as good (= JavaScript under Google V8).
Note the GP's use of "delivering" -- people have been using non-JS languages for a while, and they have mature tooling & good communities. Elm, ClojureScript, etc. And the semi-JavaScripts like TypeScript and CoffeeScript.
The last comparison I saw gave it about a 25% gain over JS. And it currently loses at DOM manipulation; if you do a lot of that, it will be slower. It can of course still be worth it, but I think people sometimes overestimate the performance gains of wasm.