Hacker News
What makes WebAssembly fast? (hacks.mozilla.org)
338 points by nfriedly on Feb 28, 2017 | 230 comments



My excitement for WASM has nothing to do with speed or efficiency of the runtime. It's all about finally having a universal compile target for the web. We're finally going to be able to develop web apps in a proper language of our own choosing, without needing hacks like TypeScript.

Hopefully, this will lead to real dedicated IDE tooling and standardized libraries for the web along the lines of what's available in iOS and Android native development.


That's what I find great too, though for some reason they put too much emphasis on JS-wasm compatibility and less on language support (e.g. DOM or GC are considered "unicorn" features). I'm pretty sure 90% of web developers won't mix JS with wasm once they can compile their preferred language to wasm. You will find either JS apps or wasm apps, plus perhaps a very small percentage of JS apps with wasm modules (e.g. for video decoding).


They are trying to convert current web devs. JS is the language of the browser and essentially the web, so most developers working in webdev primarily work w/ js.


Actually, many of us use ES6 with a compiler to turn it into some dialect of JS that runs in browsers. We will hardly notice if our code starts getting compiled to wasm instead.


JS will not compile to WASM.


Eh, there isn't any reason that it couldn't at some point in the future. That won't be happening any time soon, but I'd love to see that day, personally.


Lack of GC in WASM would make that difficult right now.

But yes, that's the future I'd like to see. Browsers only have a WASM engine, and provide a JS->WASM compiler client-side for backwards compatibility.


GC is just something you implement on the language runtime.


Then you're shipping the entire (or at least large pieces of) language runtime to the client since presumably, you'd need to compile that to WASM as well.


Given that runtimes like Lua are comparable in size to a single image, and/or could be cached in web browsers, I don't think that will be much of a problem. And the more that WebAssembly adds, the smaller that will get.

Obviously, if we're trying to run a full JVM or CLR in WebAssembly, that will cause problems, but C# and F# both have compile-to-JS targets that would likely just get faster.


Something that never really worked was cached versions of jQuery (and similar) from a CDN. One reason is that browsers didn't give them any particular precedence. The other was that jQuery was moving quite fast compared to the devs using it, so as an end user I nearly never benefited from the caching. I wonder if this might work better for language runtimes. Maybe browsers could be provided with some manifest saying "this site needs Python 3+" or "this site needs Python 3-3.5" so that end users end up with fewer runtimes cached.


You can probably fit the runtime in a tenth of the size of jQuery.

You'll need most of a runtime anyway, since it provides much more than GC.


The JS devs don't really need wasm. JS works just fine in the browser. I find the use cases mentioned (JS app + high-performance wasm modules) limited to a very small niche. WASM is supposed to be a target language. Are they trying to convert current web devs into C++/Rust devs? It doesn't make any sense to me. If wasm was designed just to help JS do more, then it is doomed to fail.


>JS works just fine in browser.

Sure it does. And the myriad of languages that pop up to support WASM will work infinitely better. ES6 was a huge step forward, but it's still a long way from a real language like C#.

I suspect Javascript development will always be a thing, but that front end development will fracture into multiple different languages and runtimes as full WASM support comes online.


> a real language like C#

While I can agree that C# has tons of language features that make it more powerful than JavaScript, I take issue with the idea that those features are the defining characteristic of a "real" language. I think there are many reasons to choose C# over JS, but there are also a non-trivial number of reasons to make the opposite choice.

https://blog.codinghorror.com/the-principle-of-least-power/


True. I agree with it all.

However, by then, TypeScript might be so darn good that people choose to keep going with it!

It really does have a better type system than C# in a lot of ways.


Could you please elaborate on how TypeScript's type system is better than C#?


I disagree with OP, but two interesting corollaries of TS type semantics are its strong type inference and structural typing:

const x = [{x: 3, y: "hi"}, {x: 9, y: "bye"}];

Automatically gets you x: { x: number; y: string }[]. And if you declare the same type somewhere else, as long as they're "structurally equivalent" (i.e. same x and y types) you can actually use them interchangeably. You can't do that e.g. with C#, where one class will never be exchangeable with an unrelated class, no matter how similar the definitions.

This lets you do some funny stuff like "subset type detection": https://gist.github.com/hraban/66c1778cdd31868034b12db93fcce...

All in all, it's more of an oddity than an actual strength, if you ask me. It's necessary to emulate JS semantics, but I wouldn't consider it an advantage in a new language.
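The interchangeability point above is easy to demonstrate. A small sketch (the `Point` type and `describe` function are made up for illustration):

```typescript
// Structural typing: two independently declared types are interchangeable
// as long as their shapes match; no nominal relationship is needed.
interface Point { x: number; y: string; }

// Declared separately, never mentions Point:
const p = { x: 3, y: "hi" };

function describe(pt: Point): string {
  return `${pt.x}:${pt.y}`;
}

// Compiles fine: p is structurally a Point.
console.log(describe(p)); // "3:hi"
```

In C# the equivalent would fail to compile unless `p` were explicitly declared as a `Point`.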


> You can't do that e.g. with c#, where one class will never be exchangeable with an unrelated class , no matter how similar the definitions.

Agreed. Duck typing is a killer feature. I can't say how many times I've rolled a new class for no reason other than data structuring, which would have been far preferable as something just defined inline.


Mostly, for me, it comes down to union types, intersection types, the ability to disallow implicit nulls (meaning null must be declared as a valid value at the type level, and therefore cannot creep in unexpectedly), string literal types, discriminated unions (most of the good stuff from algebraic data types), and index types.

All demonstrated here: https://www.typescriptlang.org/docs/handbook/advanced-types....

The result is a very nice, expressive way to state your types and get maximal benefit from the type checker.
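For instance, a discriminated union plus string literal types gives exhaustive checking in a few lines (the shape names here are made up):

```typescript
// Discriminated union: the literal-typed `kind` field tells the checker
// which variant it is looking at inside each branch.
interface Circle { kind: "circle"; radius: number; }
interface Square { kind: "square"; side: number; }
type Shape = Circle | Square;

function area(s: Shape): number {
  switch (s.kind) {
    case "circle": return Math.PI * s.radius * s.radius; // s narrowed to Circle
    case "square": return s.side * s.side;               // s narrowed to Square
  }
}

console.log(area({ kind: "square", side: 4 })); // 16
```

Adding a third variant to `Shape` makes the compiler flag `area` as no longer returning on all paths, which is most of what you want from algebraic data types.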


It could be another target for TypeScript though.


TypeScript is designed to be compiled to JS. WASM is designed for statically compiled languages. Even if WASM gets GC and DOM access, TypeScript will be better off running on the natively JIT-optimized JS path.


Really?

Anders Hejlsberg is in charge of TypeScript and an apt compiler architect with a proven track record (Turbo Pascal, Delphi, .NET/C#).


Sure, Anders is great, but this is not so easy. Basically you would have to re-implement all the features and optimizations of current JS engines. In WASM you are restricted to the given (statically typed) opcodes, while a JS JIT has more knowledge about program semantics and can thus apply optimizations more easily. In addition, the JIT can also make use of hand-written assembly if needed for best performance.

I think JRuby does a similar thing with Ruby and Java bytecode. TruffleRuby shows that it is possible to be faster; they do not compile to bytecode and use their own compiler.


Sure, but all else being equal, a statically typed language will always be faster than a dynamically typed one, because you can do way more optimization at compile-time rather than run-time. So you're right in that JS is pretty darn fast, but it probably wouldn't take that much effort to surpass JS performance with a custom TypeScript -> WASM compiler.


Sure, but AFAIK TypeScript is not generally fully statically typed. You could just rename a .js file to .ts and call it TypeScript. Please correct me if I am wrong.

There are some dynamically typed languages like Python or Dart that allow (or are in the process of allowing) type annotations. But all of these languages, as far as I know, claim that type annotations do not help performance. I guess you could define a fully statically typed subset of TypeScript that allows you to compile it to fast WebAssembly bytecode. But that would probably also mean changing some semantics: see int vs. double for numbers in JavaScript.


The only reason type annotations in those languages don't increase performance is that they compile down to JS, which is dynamically typed, so all static type information is lost. But you're right, you would either have to annotate everything with types, or re-implement the JS runtime in WASM in order to gain any benefit from it. The most feasible option would be to target a fully-typed subset of TS, which may or may not be practical.


I am not sure what you mean? Both Python (PyPy) and Dart have a JIT (yes, you can also compile Dart to JS), so they don't lose type information in those VMs. See what the PyPy-Devs have to say about type annotations and performance (they have a faq entry for that!): http://doc.pypy.org/en/latest/faq.html#would-type-annotation... And here is what the Dart devs have to say: https://www.dartlang.org/faq#q-but-dont-you-need-sound-typin...

Your claim was "it probably wouldn't take that much effort to surpass JS performance with a custom TypeScript -> WASM compiler"

I guess you basically have two options:

1) Implement a JS-Engine on top of WASM. Which is a lot of work. But what could you do to get better performance than current JS-Engines? Every optimization you can implement on top of WASM, you can also implement for native JS-Engines. Quite the contrary it is even harder, since you are an abstraction layer higher and can only make use of WASM opcodes.

2) Define a fully statically typed subset of TypeScript and compile to WASM. You could sure do that, but don't forget about e.g. the ubiquitous number type in JS/TS. You may have to use double (almost) everywhere if you want to match the semantics of JS. How many non-trivial TypeScript programs could such a subset run successfully? I assume not a lot.
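That int vs. double point is concrete: every JS/TS number is an IEEE-754 double, so a wasm backend can't lower arithmetic to i32 without changing results. For instance:

```typescript
// All JS/TS numbers are 64-bit floats; there is no integer division.
const a = 5, b = 2;
console.log(a / b);             // 2.5 under double semantics, not 2
console.log((a / b) | 0);       // 2: the asm.js-style coercion to int32
console.log(0.1 + 0.2 === 0.3); // false: classic double rounding
```

A compiler that naively mapped `number` to i32 would silently turn `5 / 2` into 2.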

I don't see how you could write a TypeScript to WASM compiler without much effort. Care to explain?


F# has an even better type system :)


JS provided that already without WebAssembly, no?


I don't understand the downvotes; JS is one of the most-targeted languages: Elm, GHCJS (Haskell), PureScript, ClojureScript, and supersets like TypeScript, which come with their own toolchains.


Also Javascript itself is frequently compiled down to an older version of Javascript.


Asm.js was exactly what you're talking about.

You can think of webassembly more like a common standard akin to the JVM specification. Anyone can implement the spec.

It's a formal definition of what people are doing with JS.


You can write wasm. It is the official shortcut (not an acronym). Source: http://webassembly.org/



Leading to real IDEs, real languages, real libraries.

The downside is that some proprietary solutions will crop up.


There is no harm in that


Is it possible to run an efficient garbage-collected runtime system on top of WASM? And how about a multithreaded one?


> In the last article, I explained that programming with WebAssembly or JavaScript is not an either/or choice. We don’t expect that too many developers will be writing full WebAssembly code bases.

I see this statement all the time, but it doesn't make sense. Look at any programming language out there: they all have a growing number of community members asking for and showing interest in targeting WebAssembly. It's not just C/C++: Go, Rust, Ruby, Python, Crystal, Nim, D, and many more. Now you might get the reaction "meh, why would anyone write web apps with Rust?", but that's an irrelevant question. Companies are going to see this as an opportunity to save resources and become more efficient, especially since wasm has so much better perf than JITed JS, and going isomorphic becomes a reality (back-end and front-end written in Ruby, for example, derived from the exact same codebase and sharing code).

Now I'm not saying "WebAssembly will take over JS!", what I'm saying is that it perhaps, possibly, maybe will. It will depend on these languages and how they add support for it, what abstractions and integrations they provide with their current ecosystems. And of course, how WebAssembly will improve over the coming years.


I agree completely, but I would take it a step further:

If wasm doesn't overtake JS, something else that offers native bindings to other languages will eventually. There are huge benefits to be had for teams that want to be able to code their full stack in a language that isn't JS.

We never asked for JavaScript (well the vast majority of us), but we've been stuck with it for the past two decades for all things web. Now that a doorway to replacing it with a general-purpose solution has been cracked, I expect the industry to kick it wide open as soon as possible. Not because JS is bad per se (it has certainly gotten much better), but because a lot of developers would simply prefer to use something else.


> something else that offers native bindings to other languages will eventually...

No it won't.

This is our one chance to kill javascript; if we don't do it now, it'll be entrenched forever, and best we'll ever get is 'compiles-to-js' languages like clojurescript and typescript.

Let's be realistic; who has the man power, community good will and business savvy to push an entirely new language across all platforms, mobile and desktop?

Microsoft? Come on. Apple? Google? Google tried with dart and failed.

Who else, seriously, is going to step up?

We can day dream about the magical 'no js' world, but it's never going to happen if web assembly doesn't work.

Currently we're seeing the reverse: JS stretched off the browser, onto the server, into mobiles, onto IoT devices.

You've got to lay out some pretty damn fine arguments about why that trend is going to suddenly reverse.

Web assembly is pretty much our last best chance to flip javascript off and have something better...but it might already be too late for that to work. :/


> This is our one chance to kill javascript; if we don't do it now, it'll be entrenched forever, and best we'll ever get is 'compiles-to-js' languages like clojurescript and typescript.

Do people really hate JavaScript that much? I've grown fond of it in recent years, especially after ES6.


I really do hate JavaScript that much. It was somewhat acceptable when it was confined to the browser for simple scripting. But the more complicated the application, and the more it moves out of the browser, the more its flaws get magnified.


Yes. People really hate JavaScript that much. ES6 brought some sanity to the language, but I continue to be firmly in the camp of continuing to hate and despise its existence.


Weird. I remember hating it ages ago. So much so that I was developing a Python->JavaScript transpiler (before all the cool kids were writing transpilers!) and blogging about how bad it was [1].

Granted, that was back in 2008 and most of my complaints have been addressed now almost a decade later (except for destructors, it still doesn't have those, but I also don't need them anymore).

These days I move between JavaScript, Python, Go, and C seamlessly and I can honestly say that I don't hate any of them as much as clients, managers, and 3rd party code.

[1] http://davywybiral.blogspot.com/2008/01/javascript-bad.html


People have definitely had and continue to have successful careers in, and many actually enjoy, JS.

I would never debate that. I think language choice for many people is a personal decision. There are some that I will hold my nose and use; not even complain too much. There are others that the minute there is an alternative to, I would use that instead. JS is in the latter camp for me.

If it helps, my current favorite language is Rust... they may be syntactically similar, but I just can't stand the semantics, or lack thereof, of JS.


This could be said for any language.

Yes. People really hate C that much. C++ brought some sanity to the language, but I continue to be firmly in the camp of continuing to hate and despise its existence.


It's a bit different. For systems-level programming you have ways to get around C/C++.

For browser programming, or full stack in a single language, you have less choice, and none if you reject transpiling.


Right, with JS there's almost no choice. C/C++ I actually really like, but I fear the great harm that I can inflict upon myself and others with them.

Anyway, the nouns are important.


These people are probably overrepresented here, else things like Node.js and Electron would never have existed, let alone be successful.


Electron is awesome. And with WebAssembly, I think Electron will be even better. Electron's success is because it offers a single dev experience across all major platforms: Windows, macOS, Linux, and the web.

It's a great container for reducing the work to deliver code across platforms.

JavaScript is a necessary evil right now; it's the lowest common denominator for delivery. I would probably choose TypeScript if I were starting a new app to target it right now, though I want to play with Rust and WebAssembly soon.

Its success is not because of JavaScript; it's because it unifies the cross-platform dev experience.


I don't hate JS, but I would rather write my web codebase in something else, something that has a standard library, for instance. ES6 is also a transpile-to-JS language on the browser. The main problem is that we are stuck with ancient JS in the browser, and WASM could fix that. Wouldn't you rather compile ES6 to WASM instead of JS?


Browsers that support WASM will support ES6 natively. Compiling ES6 to WASM would not be a good idea, since you would have to send a garbage collector and a full dynamic-language runtime down the pipe instead of just using the one in the browser.

Wouldn't you rather just run ES6 right in the browser instead of compiling it at all?


If all major browsers support 100% of the spec and every implementation is bug to bug compatible with the others, yes, I would rather run ES6 directly in the browser. The problem is fragmentation and not supporting the whole spec -- some functions working in Firefox and Chrome and not implemented in IE/Edge or Safari.


Has any runtime anywhere had 100% compatibility with anything ever?


The lack of choice has to be hated. Some hate JS just for that.


Not everyone can use ES6 or transpilers.

For example, on the type of projects I work on, all tooling is certified by the customer's IT, and anything extra requires a change request, which might get approved, or not.


I don't hate it, I just don't find it as productive as the server side language I use.


Absolutely


I don't quite agree with the all-or-nothing assessment, but your passion is exactly why this will happen.

Too many of the silent developer masses (probably mostly back-end engineers) have and continue to feel this way about being stuck with JS. The genie's out of the bottle, it's not going back in.


Is TypeScript as annoying a language to code in as Javascript? Or is it comparable to other modern languages like Swift and Go? If it is, I'd say that pain of dealing with Javascript (for developer productivity and happiness) is already taken care of.


Typescript is a type safe language that "compiles" into javascript. The Microsoft implementation compiles on save, so you work in your file.ts and a program running somewhere fires when the file is saved and converts it to file.js which you would distribute.

I haven't used it yet, but the demo from Anders looked good. It's an attempt to fix the "loosey-goosey" nature of JavaScript. It would be great if browsers supported it as a built-in language.

https://channel9.msdn.com/posts/Anders-Hejlsberg-Introducing...

If you watch the video, you'll notice the argument notation is name: type rather than type name. I think Anders is reminiscing about his Delphi days.


> If you watch the video, you'll notice the argument notation is name: type rather than type name. I think Anders is reminiscing about his Delphi days.

ActionScript and the proposed ECMAScript 4 also use the name:type notation, so there is definitely some prior art in the web sphere


That's not really what he asked, though. If, in the end, a developer can work 98% in TS, do they really care if it runs as JS or something else, as long as it solves their problems with the language? I have heard from people that love TS, but surprisingly many also prefer pure ES6 JS.


The only concern I would have is that I believe you have to debug the JavaScript rather than the TypeScript.


You can make the sourcemaps refer to the TypeScript code. I set breakpoints in TypeScript files and step through it all the time.
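Enabling that is just a compiler flag; a minimal `tsconfig.json` sketch (the surrounding project layout is assumed):

```json
{
  "compilerOptions": {
    // emit .js.map files so devtools map breakpoints back to the .ts sources
    "sourceMap": true,
    "target": "es5"
  }
}
```

With that in place, Chrome and Firefox devtools show the original TypeScript files under Sources.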


You should think of typescript as javascript+ more than a different language. It looks more like what javascript would look like in 2-3 version if the governing body thought static types were important to add.


That doesn't answer my question of whether TypeScript solves the concern of JS being frustrating to work in.


TypeScript is a joy to program in. I was on a project where we had the opportunity to turn on all the strict checks (no use of any, no implicit nulls, etc.), and it was super enjoyable and efficient to program in.

You have algebraic types (well, 99%), union types, and concise ways of expressing tree types.

I would imagine it is more pleasant than Swift or Go, though I haven't used either of those very much.


But WebAssembly will allow for TypeScript or your language of choice: C/C++ and Rust first, and as the OP suggests, others too.


The question was whether the concern of JS being frustrating is already solved by TypeScript, in which case WebAssembly is less important than it would be otherwise.

(Which isn't to say that there aren't other questions like efficiency and already-existing codebases written in other languages.)


Yup, I think pretty much the same.

JavaScript is freaking everywhere.

Your DMS? Chances are it understands JavaScript.

Your scanning software? Understands JavaScript.

Your security gateway? JavaScript.

Your API management software? Also JavaScript.

And a ton more. JavaScript is here to stay.


Wild guess? Oracle with Java.


Java failed to be what JavaScript became for a number of reasons, but I think the primary was the fact that it never integrated with HTML as cleanly as JS.

It could have been JS, but it was strangled to death by Sun (on the client).


Java also has the problem of long startup times while the JIT is doing its thing. Javascript has always just started executing right away. Waiting a minute or two for a Java loading bar wasn't fun.


JavaScript hasn't always worked that way. My guess is that if Java had become the tool for this, then its startup times would have been fixed. Pure speculation, of course.


If Sun had not killed Hotjava and continued to develop and market it, Java instead of JS might have been the frontend web programming language today. There were millions of Java applets written in a very short time and a Java browser would have been a first class citizen for hosting those apps.


Java already compiles to JS via Kotlin.


And gwt


:troll: well played.


Not the intention, but I see that interpretation. It's not hard to imagine Oracle making a NetBeans plugin for free and an expensive enterprise version that's faster. The selling point is leveraging existing back-end skills.


> If wasm doesn't overtake JS, something else that offers native bindings to other languages will eventually

When I read this I realised that WASM might just pave the way for something more fundamental: browser runtime stdlib.

One of the big issues, even highlighted in this very thread, is that people are wary of forcing 10MB+ blob downloads on their users. What would the effect be if browser provided a proper stdlib?


When was the last time you wrote something that was 10MB after being compiled to native code while optimizing for space?

Linux + BusyBox is less than 2MB (I fit it on floppies not too long ago) if you don't include drivers for everything. I've created a 10MB binary set that implements an entire email server, with everything statically linked, and in Haskell (which leads to inherently big binaries).


Yes, I still remember the days when we could have a core Linux experience on one 1.44MB floppy disk. This keeps reminding me how bloated most modern software is.


no, WASM will not replace JS unless somehow, overnight, the idea of downloading 30mb of compiled runtime libraries to read an inaccessible-to-screenreaders news article becomes appealing.

but i wouldn't hold my breath for 1990's style java applet loading throbbers to come back into fashion. there's a reason that stuff got outcompeted by the supposedly "inferior" javascript.

what wasm competes with is flash games, insecure java applets, and npapi and nativeclient in general.


In all fairness, JavaScript should never have been required to display a news article. JavaScript makes up for a lot of the shortcomings of HTML.


JS isn't required to display a news article, and never has been. You can make a fast, beautiful and modern news site using only HTML and CSS, without a byte of JS.

Take a look at the source for any molasses-slow news site. The only site-specific JS will be jQuery, and a few basic event handlers to do things like show and hide elements, which could have been done with CSS.

99.9% of the JS will be advertising, analytics, and tracking. People understand that these sites are tracking them, but I think most people have no idea just how much tracking is going on. Major news sites load hundreds of tracking libraries per page. That is the only reason they are slow.


And adaptive design, auto-complete, client-side validation of forms, etc. But all of that should be handled by a better CSS/HTML.


Well, 5G is not that far off. Remember when SPAs were considered too "fat"? Now most web apps are SPAs with a few MBs to download.


SPAs are still considered too fat. How do you miss the nearly daily "web bloat crisis" stories on HN?


You're probably accessing web sites sitting at home or at the office, where you have a decent 10+ Mbps channel. All this changes when you travel and use either slow public WiFi or phone tethering. Yes, there's 4G, but in most places in most countries 4G coverage is very poor: it's only in the most populated areas.

I travel between countries, and of course I hate staying in huge cities because they're expensive and noisy. But when you move to smaller towns, there are usually limited options for connecting to the Internet.

Last year in India I spent 2.5 hours incessantly trying to open my bank's website to make an urgent payment. All these huge React/Redux bundles were loading for ages; they aren't cached, and there's no support for resuming a download after it dies because of some timeout, or because you were looking at the spinner for 30 minutes and decided to click the reload button.

And even when these huge SPAs load, they're totally broken on slow connections. People have become lazy and don't handle timeouts or responses coming in a weird order because of the slow channel. Even if you manage to open an SPA on a slow connection, it's a pain to use, almost impossible, with XMLHttpRequests transferring megabytes of data after every click or typed letter. :(


Well, I'm talking about "state of the art" apps. You can always fall back to server-side rendering/pure HTML if the connection is too slow and you can't afford the bootstrap/init download. Depending on your use case, an SPA may actually be an improvement, as the static resources can be cached. WASM is supposed to compete with native apps. Why don't you complain about MBs to download in native apps?


>>10+ Mbps channel

That's peasant-level net :P. 100 or bust (literally what I have at home, and better at work). Seriously though, indeed the web is overbloated, and I don't think there is any coming back.


It got outcompeted by Flash, JavaScript had zero to do with it.

Had Apple not forbidden Flash on iOS, there would be no JavaScript superiority to talk about.


I invite you to try your favourite flash website in Puffin browser to see how "great" an experience flash on an iphone is.


Surely not worse than what JavaScript is.

Thankfully, on Android and WP devices I was able to take the battery out when the web site developer got too creative for the device's CPU/GPU.


You still have to ferry data in and out of JavaScript to interact with the page. In the long term, maybe language-specific bindings to the DOM APIs will become popular, but otherwise, in most cases you're probably going to expend enough effort ferrying things in and out of JS land to negate the benefits of not dealing with a different language. I'd be surprised if many do it soon for anything more than a small high-performance core, or for a large application with a small DOM surface area (a game that just renders to a canvas).
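Today that ferrying looks like this: JS (or TS) instantiates the module and calls its exports, and only numbers cross the boundary. Here's a minimal module, hand-assembled into its binary form, exporting add(i32, i32) -> i32:

```typescript
// A tiny wasm binary written out by hand: magic + version, then type,
// function, export, and code sections defining `add(a, b) = a + b` on i32s.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // "\0asm", version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type 0: (i32,i32)->i32
  0x03, 0x02, 0x01, 0x00,                               // function 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add" = func 0
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, 1 body
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0/1, i32.add, end
]);

const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes));
const add = instance.exports.add as (a: number, b: number) => number;

// Only numbers cross the JS<->wasm boundary; strings and objects have to be
// encoded into wasm linear memory by hand (or by toolchain glue code).
console.log(add(2, 3)); // 5
```

Everything richer than a number (a string, a DOM node) currently needs glue code on both sides, which is exactly the overhead the parent describes.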


Yes of course, but only during the early stages. The post-MVP roadmap for WebAssembly specifies DOM & Web API integration (http://webassembly.org/docs/gc/), so that specific case will too disappear with time.

If you read point 3 on this page (http://webassembly.org/docs/high-level-goals/) you should realize that the need for JavaScript won't be required at that point.

edit: I'd like to clarify that I'm not saying wasm will boom in usage overnight, but as time goes on, the rough edges clear up, and features like these are added, it might become the better choice.


Direct DOM access including manipulation is planned for WebAssembly.

They wanted to progress on the spec and push it out without that, but it's on the roadmap. (Along with GC integration).


Can someone explain what the option is for accessing the DOM initially? Is there some wrapper JS that will be needed to call into the wasm?



Omg, hello. How did I miss that!



JavaScript is strong competition both on the front end (source ecosystem) and the back end (compiler target).

For people who don't want to use JavaScript (they are using a different source ecosystem), compile-to-WebAssembly toolchains will be in direct competition with compile-to-JavaScript toolchains. It's not at all clear which ones will win. It may be different for each language.

For C++ and Rust I'd expect WebAssembly will perform well, since it fits these languages' runtime model better. But for garbage-collected languages, compiling to JavaScript may result in smaller and faster code, since you can reuse JavaScript's runtime instead of downloading it. There may not be any reason for Elm or Dart (for example) to replace a compiler that works well with one that targets WebAssembly. Or if you like Go, I'd at least try out GopherJS before thinking you need a whole new toolchain.

It's not just performance, either. Generating somewhat-readable JavaScript that you can debug using Chrome Devtools when something goes wrong is a pretty nice advantage.


Isn't debugging just a short iteration away from being like native debugging? You get a source map, you see C++ in the devtools with C++ variables and a C++ call stack, and an option to "view webassembly".

I have to assume the browser teams are working on this


That might work well enough for C++, but I'm not sure how well it would work for other languages. For example, apparently gdb doesn't work all that well for debugging Go programs?

Each language has its own peculiarities for how objects are represented on the heap.


To be fair, a generational mark-sweep GC doesn't take much space beyond the compile-time modifications to the code itself. A reasonably performant GC algorithm suitable for most front-end work would probably only add 10 to 20 kilobytes of code to an executable. That can be downloaded and cached in the blink of an eye.


The problem is that you need to get all the GC root nodes on the stack. This is platform specific and must be implemented by the browser. The only cross-platform way is to create an additional shadow stack that contains only the root nodes, but this means you pay an additional cost for the GC on every function invocation.


That would only be required for data on the GC heap. Yes, you would have to implement an FFI for communicating with the browser, especially for any callbacks into the browser that take allocated memory, but it is far better to build a hygienic FFI if you want to use a GC language than re-architect the browser with GC support.

It can be done. There may be pain, but the pain can be managed.

EDIT: by that I mean that one would need to maintain two heaps. One for data that interacts with the browser and that is wrapped by the FFI as external resources, and one for the internal system itself, which can be managed via GC. Only the WebAssembly code would have to manage GC roots, which can be done as per any GC language.


Garbage collection is coming to WebAssembly.


Sure, eventually, and I assume some day there will be decent debuggers. But if your toolchain outputs JavaScript, you already have these things today.


Honestly I don't see how a low-level paradigm takes over a field when every other trend is towards ever more abstraction. Sure, wasm has its rightful place, like DX12/Vulkan, but it's a tiny fraction of apps/projects. Most likely we'll see js-to-wasm preprocessors of sorts to invoke optimized calls when it matters, but most devs won't touch low level (much like most game devs don't touch engines/middleware).

There's the 1% edgy companies built on node+rust/D and what-have-you, and then there's the 80% wordpress crowd who longs for a human-readable programming language a la Star Trek. I think the latter is a future that will come unimpededly over this century, as far as the majority of the industry is concerned.

More pressingly, people are waiting for async and other cool multi-threaded features in JS, and ES6 really is a second coming of sorts in that regard. Long gone are the days of horrendous paradigms once we adopt the 2016 spec, really.

Economically, because that's the true judge ultimately, we have to consider the existing JS ecosystem: actual prods, skillsets, experience, actual needs of businesses who fund the construction of the web currently solved by JS, and so on and so forth. It's a giant thing, that JS. If we had a better alternative today, we'd still be writing JS in many projects in 2020 and maintaining some of it late into that decade. Let's face it, JS is here to stay, there's no turning back now --see how polarizing Java has always been for another historical case, and see how it's still top 3: can't suddenly undo all that cash poured into production, can't instantly retrain hordes of programmers.

I'll say one thing about wasm though: it's a formidable effort that paves the way towards a truly 'next-gen' web paradigm, most notably suited to be actuated (viewed) within a VR/AR environment; it's also a prime candidate to run on the smallest devices at the heart of an "ambient"/IoT ecosystem. I'm considering a superset of low-level networking protocols that go beyond the web, obviously. And I haven't researched anything about it yet but I suspect the low-level hooks of wasm could be hacked to do some fancy ML stuff on-the-spot (thinking low-cost online algos and reinforcement learning here, if we are to make our 'ambient' IoT ecosystem able to 'learn' about us as users and map the physical world semantically).

Ha, fascinating times, really. Let's just not lose sight of how the real world operates, though, if only to improve on that. In due time when ideas become feasible under the right circumstances. Low-level programming is very much not how we've trained the bulk of tech people since the smartphone era / mobile paradigm essentially. It'll take time to shift, if we ever did. I don't think we will, in the grander scheme of things.


On one hand, I'm excited about performance improvements. On the other, I lament the fact that this will kill one of the best parts of the web: the fact that the source is sent to the end user instead of a binary. It now makes the code and how it works opaque, thus killing the spirit of innovation and learning.


Agreed. But I've felt that way about minified JS for a long time too. I'm not sure there's a way around it.


This is the same situation we're already in with minification. The debugger can reformat or decompile it, and if you have source maps you'll see the original source, even if the original source wasn't even javascript.


I agree with the others under this comment that websites already hide their source through obfuscation. What I am slightly more concerned about is how this will go with the law. I think it is much harder to get into trouble when messing with obfuscated JS than when decompiling binary code.

Yes, you can get into trouble when you decompile code: http://security.stackexchange.com/a/30375/93013


The specifics of how this will work are not totally clear to me, but web assembly will have a text format that will be human readable and editable according to:

[0] http://webassembly.org/

[1] http://webassembly.org/docs/faq/#will-webassembly-support-vi...


It will have a text format (or more than one), but yeah, probably it will be less readable:

https://github.com/WebAssembly/design/blob/master/TextFormat...


The edge over asm.js is a subset of this. Obviously, asm.js neither has JIT reoptimisation overhead, nor garbage collection to worry about.

However, a weird, highly-annotated strict subset of JS is not the ideal representation of what is basically portable assembly language. WebAssembly's big strength over asm.js is it has smaller executables and they can be rapidly decoded and verified in binary IR form, rather than having to shove megabytes of ungzipped bracket-fest through a JS parser.


> WebAssembly's big strength over asm.js is it has smaller executables

It's not about the "smallness" as measured in the number of bytes; the minimized asm.js code (that is, short variable names, no comments or whitespace) with the "bracket-fest" can actually be quite compact.

It is about the form which does save some lexing, parsing, searching and allocation steps in the run-time. Which matters when the code is measured in megabytes and the goal is to run it as soon as possible and save as much battery as possible on the mobile devices. From the FAQ:

https://github.com/WebAssembly/design/blob/master/FAQ.md#why...

"The kind of binary format being considered for WebAssembly can be natively decoded much faster than JavaScript can be parsed (experiments show more than 20× faster). On mobile, large compiled codes can easily take 20–40 seconds just to parse, so native decoding […] is critical to providing a good cold-load user experience."


A big part of the advantage is more consistent adoption by browsers. All the major browsers have experimental WebAssembly support already, and Firefox and Chrome are already shipping it (although it's off by default). Firefox/SpiderMonkey and Edge/Chakra have AOT compilation for asm.js, but notably Chrome/V8 doesn't (although they did optimize its performance significantly). Asm.js also still hasn't become a formal spec, while WebAssembly is already very close.


> A big part of the advantage is more consistent adoption by browsers. […] Firefox/SpiderMonkey and Edge/Chakra have AOT compilation for asm.js, but notably Chrome/V8 doesn't

Not quite. SpiderMonkey has AOT compilation, whereas Chakra and V8 throw it at the JIT, though Chakra's compiler is specially optimised for asm.js AIUI.

Thing is, specific support for asm.js is unnecessary, a sufficiently good JIT is good enough. V8 hasn't implemented asm.js AOT because it doesn't need to. I assume the same would be true of WebAssembly.

> Asm.js also still hasn't become a formal spec

Though it is a spec.


V8 hasn't implemented asm.js AOT because the authors claim not to need to; but if you compare the performance of a Unity3D WebGL export in Chrome vs. Firefox, the gap is very wide.


Well, each has its tradeoffs. JIT has low startup time, whereas Firefox will spend quite a while compiling before anything actually happens.


For the content I'm describing, FF has vastly superior startup/cold execution performance, and mildly better long term.


> Obviously, asm.js neither has JIT reoptimisation overhead

Why does it never need to re-optimise the JITed code? I know WebAssembly and asm.js are more static than JS, but even very static languages like C benefit from speculative optimisations which may need to be reversed. For example, asm.js and WebAssembly have branches, don't they? Does the JIT always compile both branches even if one has never been taken in practice?

And what is the reoptimisation overhead anyway? If deoptimisation is caused by a bad speculation on the same thread, it's zero overhead on the fast path until it's used isn't it?


> For example asm.js and WebAssembly have branches don't they? Does the JIT always compile both branches even if one has never been taken in practice?

WebAssembly is treated just like other "real binaries" produced for a "real" OS. Whatever survived the static optimizations while producing the binaries is converted, at the end, to pure machine code; it isn't "traced" at run time on the user's machine.

> even very static languages like C benefit from speculative optimisations which may need to be reversed.

I'm not aware of "speculative optimisations which may need to be reversed" in C, and I'd be very interested to read what you mean when you write that, possibly with some links and references. Do you mean at run time, on the user's machine, or something else?


I think that is left open to the implementation. _If_ an implementer thinks there is a benefit in doing so, he's free to do so.

In fact, that's similar to how a CPU runs "real binaries". Modern CPUs use _some_ runtime information to make code run faster. Examples include branch predictors and the recognition of stride lengths to move data into the cache before the instructions being executed need it.

That's only small bits, but it _is_ runtime information. I do not rule out that WebAssembly developers, similarly, will find that there are ways to use runtime information to speed up WebAssembly code.


I think you're confusing the tracing done during the interpretation of JavaScript with asm.js&WebAsm. asm.js&WebAsm are designed to avoid deciding anything at run time as much as possible, apart from verifying and generating the machine code. Those "let's see what the code is doing at run time" techniques were already implemented and used for "plain" JavaScript, but they had too much overhead, compared to what asm.js&WebAsm does (or avoids doing), for the kind of uses where asm.js&WebAsm is desired.

For the "plain" JavaScript, there are the run-time decisions.

> Modern CPUs use _some_ runtime information to make code run faster. Examples include branch predictors and the recognition of stride lengths to move data into the cache before the instructions being executed need it.

Sure. But that run-time information is internal to the CPU. And the CPU will use it for the native binary code that is the final result of asm.js or WebAsm, just like any other. But that native binary code is "static," it's explicitly not "small traced chunks" the way "plain" JavaScript is handled.

> I do not rule out that WebAssembly developers, similarly, will find that there are ways to use runtime information that speed up WebAssembly code.

Think about it: if they found something like that, exactly the same technique could be used to speed up any native code, including the Linux kernel and anything native you can imagine.

If you manage to develop a software method that actually improves the execution speed of native code at run time, you'd be famous and (if you know how to market it) rich.


> I do not rule out that WebAssembly developers, similarly, will find that there are ways to use runtime information that speed up WebAssembly code.

> Think about it: if they found something like that, exactly the same technique could be used to speed up any native code, including the Linux kernel and anything native you can imagine.

That's exactly what profile-guided optimization does. It's not a new invention, it's working technology. And it's far from inconceivable that PGO could be applied to the intermediate code that web assembly effectively is.

Specific example: a common tradeoff made in compilation is between space and speed, and there are some cases - like padding to align a jump target - that can win big speed improvements in inner loops, but are pessimal when used liberally (because code size inflates and doesn't fit in cache). Realigning jump targets is something that can be done to compiled code, in particular compiled code before it has been linked, when all the relocations and fixups are still available. Having information about hot code can make a big difference here.

(BTW: optimizing linkers are surprisingly involved here, particularly for targets that have a bunch of addressing modes, like x86. Some ways of writing in fixups can result in smaller code (e.g. 1 byte offsets rather than 2 byte offsets), but this effect cascades: making code smaller can make what used to be a 2 byte offset possible to fit in 1 byte. Don't underestimate the amount of optimization that your linker does with native code (if you have a smart linker; my experience is from the Delphi compiler source).)


> it's far from inconceivable that PGO could be applied to the intermediate code that web assembly effectively is.

A developer could do some kind of PGO before he produces the final binary, I can imagine that. But then it's still just a static binary.

And I personally can't imagine PGO being done in the user's browser and not being slower than the alternative of not doing it, just like I've never heard of some OS which does PGO on the native binaries when user runs them. Maybe you know of something like that? The PGO I know is always a slow process, done only before shipping the binary to the user, it's a kind of "post-processing" step of compiling, not of the normal execution at the user's computer.


Java does PGO on every server running Java in the world.

Even some Swing apps benefit once they're started up.

That said, I'm not sure the full JVM is coming to browsers anytime soon.


The target model of the asm.js (and therefore WebAsm) intentionally doesn't assume the "VM" features that Java VM has. It's much, much lower level. No classes. Even no strings.

Java VM receives the classes, methods, strings, has GC and all that, but it has to JIT to reach that lower level to be efficient and has to do a kind of establishing what's actually used, similarly to what tracing JIT engines for classic JavaScript do.

I believe that kind of run-time measurements and then code generation and optimization is what some Java people call PGO, and what asm.js at the moment intentionally (by design) avoids.

In short, Java's PGO on the user side is not what PGO for static languages like C is (by the developer). And asm.js is even lower than C. It's really closer to... asm.

It's exactly because asm.js avoided these decisions that Firefox with asm.js support was faster than Chrome back when the latter treated all js code the same.


>Why does it never need to re-optimise the JITed code? I know WebAssembly and asm.js are more static than JS, but even very static languages like C benefit from speculative optimisations which may need to be reversed.

Because the returns are marginal?


I really hope WebAssembly takes off and becomes widely implemented in all the major browsers. The web is such a fantastic application platform (despite its frequent misuse...), and removing the JavaScript performance tax will be huge.


JavaScript is not just a performance tax, it's also a language that is poorly suited to a great number of tasks which we would like to apply web browsers to today.

Not that it is all around terrible, but definitely much preferred if we can choose the best language for the job instead of being strictly stuck with JavaScript.


While JavaScript has its flaws, it's been refined over many years and finally there are pretty decent ways of building UIs, using React, Vue, etc. Wouldn't we have to start all over again if people begin to write UI code in languages not built or configured for interacting with the DOM?


The W3C spec is now finalised and locked in, and Chrome and Firefox will each ship WebAssembly fully in their next releases.


It looks very promising, most major players have experimental support for web assembly already.


Is WebAssembly's binary format finalized? I wasn't able to find anything about it, but about two weeks ago binaryen pushed a new release[1] whose notes said "update wasm version to 0x01, in prep for release, and since browsers are ready to accept it".

From what I had heard the plan was not to do that until the final standard was settled on, but I wasn't able to find any corresponding announcement.

[1]: https://github.com/WebAssembly/binaryen/releases/tag/version...



What's the point if you can't interact with the DOM? Almost all examples of webassembly just render to a canvas.

Does this mean instead of a normal "native app" I'm going to start getting C applications compiled to wasm and distributed in Electron? What does this possibly gain the end user?


You're unnecessarily constraining your thinking to the present-day Web. WASM fundamentally expands what's feasible in that domain.

For instance, access to the DOM doesn't really matter for game engines like Unity or Unreal, or for lower level libraries like OpenCV, libsass, or libarchive which you might want to use in your web application.

No one is arguing that WebAssembly will completely replace JavaScript, especially not right out of the gate, but it will be used to optimize hot code paths within JS apps, as well as allowing robust, efficient, native libraries to be used directly on the Web. This is more about the pie getting larger than it is about WebAssembly hypothetically crowding out JavaScript.


> For instance, access to the DOM doesn't really matter for game engines like Unity or Unreal

My point was who wants these to be in the browser? I mean they're cool as demos, but I don't see the use. Is it just people using web browsers as the content delivery platform instead of e.g. Steam?


> who wants these to be in the browser

Anybody that resents the user having ultimate control over the User Agent. A game in e.g. the Unity engine isn't the important case. Instead, there are a lot of people that would love to replace their (easily adblockable) web page with a small opaque binary that contains freetype, a custom layout/UI library, and maybe "drm"-like obfuscation. You don't need the DOM if you intend to render everything yourself.

At best, we will see a new wave of "flash intro"[1] style "custom user experience" replacing perfectly usable HTML pages. At worst, this could be the catalyst that replaces what remains of the open web with a locked down "cable tv"-like mess.

[1] http://www.zombo.com/


That's an interesting idea, but I'm not sure if it's got any legs for "mostly-content" sites given the inherent unsearchability of that approach.


Forget the DOM, it's a steaming pile. In the future we'll be thinking in either rasters, vectors, or something higher level like ratios and components.


>At least for now, WebAssembly does not support garbage collection at all. Memory is managed manually (as it is in languages like C and C++). While this can make programming more difficult for the developer, it does also make performance more consistent.

I support anything that improves performance and efficiency. But the best of both worlds is always great. I'm wondering if it would be possible to implement reference counting (and maybe automatic reference counting) similar to Objective-C, and if so, would that simply be a matter of the particular language and WebAssembly transpiler you're using supporting it? And are there disadvantages to reference counting that make it a bad idea? I enjoyed using it back in early iPhone programming.


GC integration is on the roadmap for WebAssembly, as I understand it. At the moment languages can bring their own GC implementations written in WebAssembly, but eventually they should be able to hook into and take advantage of the browser's native GC facilities. I believe this is considered a blocker for exposing a standard DOM API directly to WebAssembly code.


> And are there disadvantages to reference counting that make it a bad idea?

I thought it had a performance cost.


Not much of one. The amount of time it takes to allocate the memory and free it is the dominant factor.


Maybe in the non-atomic case. Atomic refcounts force synchronization between CPUs when different threads access the same data, even if they only read it; that can have a quite significant cost.


They also tend to prohibit some smart gc optimisations like moving your shit while you're not looking, and the cache behaviour can be bad because either,

- the data isn't stored next to the count so you get two pointer indirection per access, or

- the data is stored next to the count, and the object gets pulled into cache when it gets collected.

Piggybacking on the thread because I haven't had much concrete experience with smart pointers: how does the "circular chain" problem seem to manifest? Is it

- "goes wrong quickly," usually picked up and fixed without too much trouble,

- "like any old memory leak" -- maybe a problem if processes run a long time, hard to track down, or

- devs are usually smart enough to see them coming, knowing to keep the "has a pointer to" relation a partial order (either by type or some other natural hierarchy.)

?


Since languages like Rust/C++/Swift compile reference counting into machine code already, there's no real reason why they shouldn't be able to compile it to WebAssembly.


I recall seeing a test compile of some C++ code to WebAssembly instead of JS and on Chrome it was an order of magnitude slower than the JS version. Has the performance increased lately?


Javascript is an awfully designed language (it's getting better, but the basics are awful: no ints, crazy comparisons...). I hope that it will die soon and enable everybody to write web apps in a language of their choosing.

This may even end the craze around JS frontend tooling.


I can totally see Qt and .NET apps running on top of WebAssembly. Imagine MS Word running in your browser without them having to rewrite it in JS!


Yay! Nothing improved for me as a consumer, but total control of the app by Microsoft (it runs on their website and I don't even own my own copy) and slower speed (because of the extra VM and sandbox).

Can't wait!


> Nothing improved for me as a consumer

Nothing to install, no updates, no license key, not tied to a machine.... ?


>Nothing to install

Modern app stores like the Mac App Store make installing a one-click affair. It will take me more clicks to log in to their Word website...

Also, "nothing to install" means "nothing to own".

>no updates

There will still be updates, only I won't be controlling them, and I won't be able to stick with an older version.

Besides, it has been ages since updates for apps have been totally painless (e.g. with something like the Mac App Store, or a framework like Sparkle).

>no license key

Yes, but a password. And presumably a subscription, and my access to this "WebAssembly-based Office" lost when I can't fork out the money for a few months.

(Also no piracy option for the developing world anymore).

>not tied to a machine....

I can have that with regular software not running in a browser. I can use Reason, e.g. (a native application), on any machine I care to install it on, with their over-the-internet verification.


None of these things are true about MS Office (native, not online 365) right now, which was the specific subject of this sub-thread.


You seem to have totally missed what the "specific subject of this sub-thread is".

A quick reminder from just the parent comment above to which we're answering: "Imagine MS Word is running in your browser without them having to rewrite it in JS!".

That is, we're not talking about the native MS Office "right now".


> Nothing improved for me as a consumer

Improved with respect to what, then, if not the current "right now" Office?


WebAssembly won't give you unlimited power. You will still have the browser sandboxing and security/privacy policies in place. Access to the local file system, for instance, is constrained in the browser for good reasons. WebAssembly apps will be able to do what Web APIs let any page do.


Is this universally true – i.e. baked in to the wasm spec – or is it just true in a browser environment? Consider things like NW.js[1] or Electron[2] – might we see cross platform apps being developed where all or parts of it are written in <insert language here> and compiled to wasm, then packaged up with a runtime and delivered as a "native" application for desktop, mobile, or whatever?

[1]: https://nwjs.io

[2]: https://electron.atom.io


No, the wasm spec doesn't mention web standards at all. It can be implemented without a JavaScript VM nearby, and at the moment isn't even capable of interacting with the DOM/JavaScript APIs directly when run in web browsers.


What would be the benefit of that? Might as well just compile to native and distribute that without all the overhead, no?


I probably should have specified but I was referring to GUI applications specifically. For things without a GUI, or that require deeper system integration than you get with these runtimes you're absolutely right – there's little point. But for a very large class of GUI applications it makes a lot of sense, working with technologies that not only have stood the test of time and are good enough for plenty of use cases, but also mean you can with very little effort package your application for multiple operating systems. Some of these applications will need stronger performance than can be provided by the JS engine, so hence the question re: wasm.

Does that make more sense?


Not a problem, we can just wait for the inevitable security vulnerabilities.


Receiving megabytes of untrusted blobs of compiled code from websites? Perhaps even DRM encrypted? What could possibly go wrong.

At least js could be unminified and many standard libraries are somewhat trusted.


If it's "not a problem", it should be easy to write a Web page that opens calc.exe. Can you do that? Can anyone on HN do that?


I think the parent was joking.

Though there are people on HN who can do that, given some development time, like me.


"the team working on React could replace their reconciler code (aka the virtual DOM) with a WebAssembly version."

Is anyone working on such a thing?


It's both amusing and absurd that what was practically intended as Java's little helper, JavaScript, has grown up to be this thing that might actually replace Java entirely.

How long until there's a really good JVM written in JavaScript of some form and embedded Java apps end up running in JavaScript for performance and security reasons?

It'll be even more ridiculous and hilarious if the "j" in "jruby" ends up meaning "JavaScript".


> It's both amusing and absurd that what was practically intended as Java's little helper, JavaScript, has grown up to be this thing that might actually replace Java entirely.

I thought JS was named to piggyback on Java's popularity, not because it was actually intended to complement Java.


"That’s right. It was all within six months from May till December (1995) that it was Mocha and then LiveScript. And then in early December, Netscape and Sun did a license agreement and it became JavaScript. And the idea was to make it a complementary scripting language to go with Java, with the compiled language."

http://www.infoworld.com/article/2653798/application-develop...


> How long until there's a really good JVM written in JavaScript of some form and embedded Java apps end up running in JavaScript for performance and security reasons?

That's an amusing idea. But it's extremely unlikely for the same reason that you are unlikely to see C++ rewritten in JavaScript.

It's possible, however, that over time, more and more languages might target JavaScript, rather than the JVM or the native hardware.

For instance, I program in Scala for a living. The main platform for Scala is the JVM, but there's also Scala.js, which targets JavaScript. And in fact, we now write most of our "JavaScript" in Scala, rather than in JavaScript.

I suppose that someday there might be a version of Java that targets JavaScript, and then your dream might come true. Oh wait... it already happened! It's called GWT and it's been around for a decade.


Java and JavaScript are related the way Car and Carpet are related. There was no historical connection between Java and JavaScript beyond renting the "Java" brand from Sun to make the language more credible.


Well, it's possible that if it hadn't been for Java, JavaScript might have had a decent, parens-based syntax.

I still wonder what might have been if Eich had been allowed to write the Scheme system he had been hired to do. While I'm not a fan of Scheme by any means, it is approximately 281,757,423,024,353 times better than JavaScript.

And Scheme's a great little language for implementing other languages. Imagine, we could have had transpiling years ahead of time. We might be using S-expressions instead of JSON. Heck, we might even have moved to an S-expression syntax for HTML & CSS by now.

Imagine, one syntax for everything (thanks to https://www.w3schools.com/js/tryit.asp?filename=tryjs_intro_... & https://www.w3schools.com/css/css_howto.asp for the examples):

    (html (head (title "Example")
            (style (body (background-color linen))
                   (h1 (color maroon)
                       (margin-left (px 40)))))
          (body
           (h1 "What can JavaScript do?")
           (p (@ (id "demo")) "JavaScript can " (em "change") " HTML content.")
           (button (@ (type "button")
                      (onclick (set! (inner-html (get-element-by-id document "demo"))
                                     "Hello JavaScript!")))
                   "Click me!")))
The world could have been so much better.


I'm a HUGE fan of Scheme. And of Lisp, in general. But having seen in the "real world" how so many people are so adverse to Lisp's beautiful syntax, I'm pretty confident that JavaScript would never have caught on if it looked like Lisp rather than like C.


They wouldn't have had a choice, though, just as folks don't have a choice with JavaScript.

I imagine that folks would have cottoned on to the advantage pretty quickly. And those that didn't … I guess they could have always become telephone cleaners or something:-)


Browsers worked just fine before the advent of Javascript. Nobody had to use it, but they decided to use it because it was good enough. If some other language was deemed not good enough, there would have been more aversion to using it, and it would have never taken off.

Not to mention that there were other ways to execute code in the browser in those days. We have 'no choice' but Javascript today simply because it's the option that people eventually gravitated towards, forcing the others out of the market.


They decided to use it because it let you add things like drop-down menus (and falling snowflakes!) to your website, which made it look better than your competitor's website. Syntax was just the hoop to jump through to achieve that, and a very minor one at that. I doubt it would have made any difference.


You honestly believe that Javascript would have been equally successful if it had shared similar syntax to, say, Brainfuck? Especially given the alternatives that were available at the time? If syntax doesn't matter, I guess so, but I'm not sure I'm convinced.


I was talking specifically about the Scheme-like syntax that JS originally had. That syntax was a minor obstacle. BF-like syntax would be a far bigger one. Although what would probably have happened then is the same thing we saw with JS anyway - someone would have made a transpiler from something more sane - except it would have happened much faster.


There's a pretty big difference between Brainfuck and Scheme. Of course syntax matters, but parent wasn't asserting that it doesn't matter at all.


There is a big difference, but since we're talking about any language "deemed not good enough", a language similar to Brainfuck definitely applies, and perhaps is the shining example.

The original suggestion was that people would use Javascript, or any other language in its place, simply by virtue of it being the only option. I argue that the wrong language would hamper adoption, and eventually some other language – like, say, VBScript which came only 6 months later – would have taken over.

Javascript persisted, and eventually completely dominated, because it was good enough. A language not good enough would have not had the same success, even given the otherwise same environment to grow up in.


Well, we could all be using VBScript. If Netscape had failed to establish a de facto standard, Microsoft was waiting in the wings.


You underestimate the average programmer's aversion to Lisp syntax, despite it being the clearly superior syntax.

If JavaScript had had Lisp syntax, then everyone would still be using Flash. Or some other plugin that provided a language with a more popular syntax.


> You underestimate the average programmer's aversion to Lisp syntax, despite it being the clearly superior syntax.

IME it's a learned thing from only knowing languages that at least superficially resemble ALGOL.

JS is a first language for a lot of people - because their goal was to make a website do something. Scheme would not have been any different.


> IME it's a learned thing from only knowing languages that at least superficially resemble ALGOL.

IME many computer programmers learned Lisp in college and for some reason hated it–I think in part due to the syntax and in part because they had a hard time with recursion and other "mind-bending" concepts. They often still harbor an irrational prejudice to this day.

Me, on the other hand, I loved Lisp as soon as I learned it. (Perhaps I was primed by having learned APL in high school.) I changed my major from Astrophysics to Computer Science because I felt that SICP was even cooler than black holes and neutron stars.


> They wouldn't have had a choice, though, just as folks don't have a choice with JavaScript.

If people disliked the syntax enough to avoid the language (and the browser that uses it), I imagine that other browsers would've started supporting other languages, and something else would've dominated.


Browsers actually did support other languages back then. Remember VBScript? IIRC there was also an attempt to draft Tcl into that role.

In IE world, things were actually even crazier, because the whole scripting story was extensible (via Active Scripting). So you could use Perl, for example...


Pure Scheme would not be sufficient. You need at least DOM-handling libraries. And if those libraries were poorly designed, then people wouldn't cotton onto the advantage quickly. Look at DSSSL.


> You need at least DOM-handling libraries.

Did you see the '(set! (inner-html (get-element-by-id document "demo")) "Hello JavaScript!")' bit? That's DOM-handling in Scheme. Not a really great API, but it's not terrible either.


Because of macros lisps are more resistant to bad APIs than other languages, in my experience.


That is a very good point.


Just about every modern Lisp has an X-Expression library, and having used Racket's in production, it's a hell of a lot better than the DOM browsers have today.


They could have used "Scheme" with angled brackets instead of parens, which would have made it look much more like an extension of HTML.


Yes and no. In practice JavaScript was sometimes the thing that detected if Java was present and kicked off a Java applet. It also handled roll-overs before CSS was a thing, doing mundane stuff by comparison to what a full Java app could do. It was intended to be complementary.


It would be interesting to recompile the old applet plugin as WebAssembly - it should be both secure (well, as secure as WebAssembly ever will be) and fast.


I seriously can't wait for a Flash in JavaScript so that all those old Flash games don't get shuttled off to the dustbin of history, never to be enjoyed again.

I do not want to ever have to update Flash again. Ever.


That's a dead Mozilla project called Shumway.


> How long until there's a really good JVM written in JavaScript of some form and embedded Java apps end up running in JavaScript for performance and security reasons?

Much of the JVM is written in C++ (From the docs: "There are nearly 1500 C/C++ header and source files, comprising almost 250,000 lines of code.")


Until JavaScript offers a robust concurrency abstraction, I don't see it completely replacing anything that is used for that purpose.


Based on the comment you're replying to I infer that you think Java offers a robust concurrency abstraction.

I disagree. In moving to the single-threaded async style of JS concurrency, the hardest part for me has been to rid myself of the ugly habit I acquired from Java's threading model, i.e. asking myself at each line of code, what happens if my thread gets preempted here?


Async/await and promises are quite good concurrency abstractions.
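
For instance, two independent tasks can be started together and joined with a single await, with no preemption to reason about in between. A minimal Node-runnable sketch (`delay` is a made-up helper, not a standard API):

```javascript
// Resolve with `value` after `ms` milliseconds.
const delay = (ms, value) =>
  new Promise(resolve => setTimeout(() => resolve(value), ms));

async function main() {
  // Both timers run concurrently; Promise.all joins their results.
  const [a, b] = await Promise.all([delay(50, 1), delay(10, 2)]);
  return a + b;
}

main().then(sum => console.log(sum)); // 3
```

Because code between two awaits runs to completion without interruption, there is no "what if my thread is preempted here?" question to ask.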


> has grown up to be this thing that might actually replace Java entirely.

Java is just a language that runs on the JVM. There have been many, many attempts to replace it, with brand-new languages (Scala), existing languages (JRuby), and even JavaScript itself (Nashorn, RingoJS, etc.), but no language has come close to dethroning it. It's unlikely that JavaScript will replace it just because the JVM is rewritten in JavaScript.


> It's unlikely that JavaScript will replace it just because the JVM is rewritten in JavaScript.

That's not what the OP meant. They're talking about the runtime environment, not the source language.


When compiled from C/C++, does WASM do bounds checking of pointers and arrays? What kind of memory safety does it offer?


WASM offers a virtual machine to the user, in which there are global variable slots, a stack for local variables, and one or more "linear memories" (chunks of RAM for heap allocations). The first two are completely bounds-checked - you tell the virtual machine to operate on a given global or local variable by its index, and your access cannot affect anything outside that variable.

The linear memories, on the other hand, are up to you - you can execute operations on any address within a linear memory, and it's on you if you mess up and overwrite or read data that you shouldn't have. What you do get is a) protection against heap overflows affecting values outside the heap, and b) protection against execution of heap data. In the future, they're also planning to support multiple linear memory segments, which would allow you to isolate memory used for potentially-buggy dynamically-allocated buffer code from memory used by less-fragile or more-sensitive code. WebAssembly.org's descriptions indicate that linear memories would be used both for C/C++ heap allocations and for any local/global variables accessed with operators (like the & reference operator) that make it hard to make these distinctions statically.

More information on the security properties of the system: http://webassembly.org/docs/security/

General information on linear memory semantics: http://webassembly.org/docs/semantics/#linear-memory
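
The edge of a linear memory is enforced by the engine, and a violation surfaces in JS as a `WebAssembly.RuntimeError`. A small Node-runnable sketch: the module bytes below are hand-assembled (and assumed correct) for a module with one 64 KiB page of memory that exports a single `load(addr)` function doing an `i32.load`:

```javascript
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00, // "\0asm", version 1
  0x01, 0x06, 0x01, 0x60, 0x01, 0x7f, 0x01, 0x7f, // type section: (i32) -> i32
  0x03, 0x02, 0x01, 0x00,                         // function section: func 0 has type 0
  0x05, 0x03, 0x01, 0x00, 0x01,                   // memory section: min 1 page (64 KiB)
  0x07, 0x08, 0x01, 0x04, 0x6c, 0x6f, 0x61, 0x64, 0x00, 0x00, // export "load" = func 0
  0x0a, 0x09, 0x01, 0x07, 0x00,                   // code section, one body, no locals
  0x20, 0x00, 0x28, 0x02, 0x00, 0x0b,             // local.get 0; i32.load; end
]);
const { exports } = new WebAssembly.Instance(new WebAssembly.Module(bytes));

console.log(exports.load(0)); // 0 - fresh linear memory is zero-initialized
try {
  exports.load(65536);        // first address past the single 64 KiB page
} catch (e) {
  console.log(e instanceof WebAssembly.RuntimeError); // true - the engine traps
}
```

Note the trap happens at the memory boundary, not at any array boundary inside it - which is exactly the distinction described above.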


No, it's like a virtual process but with a simpler memory layout. If you try to read or write memory outside of the allowed range, it will trap (like a segmentation fault on Linux), but it doesn't check any individual array for out-of-bounds access, because it doesn't actually know where the arrays are in memory.


> At least for now, WebAssembly does not support garbage collection at all. Memory is managed manually (as it is in languages like C and C++). While this can make programming more difficult for the developer, it does also make performance more consistent.

Ouch, back to the 80s, early 90s. I think I'll stick with JavaScript at least until WebAssembly gets garbage collection. I might be wrong but I don't see many people writing SPAs in C++ for the extra speed (let's say, the JS layer acting like an X Server for the DOM, driven by a C++/WebAssembly application). Games, yes.

Furthermore JS has view source. WebAssembly has a text format [1] but it's really assembly. Hopefully there will be source maps [2].

[1] http://webassembly.org/docs/text-format/

[2] http://webassembly.org/docs/tooling/


You don't have to go back to the 80's to develop closer to the metal. Neither Rust nor Swift have garbage collection, but both are thoroughly modern, productive languages. I've also been told that C++14 is reasonably pleasant, but I have no personal experience with which to judge.


Swift is garbage-collected – it might not have a tracing GC, but ARC is essentially compile-time GC – certainly, from the perspective of a developer, ARC is extraordinarily similar to a tracing GC (with some small caveats).


Reference-counted GC is trivially implemented on top of WebAssembly memory model.
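
A toy sketch of the idea (plain JS, with an `Int32Array` standing in for wasm linear memory; all names are made up): each allocated block reserves its first slot for a refcount, and retain/release are plain loads and stores into that memory.

```javascript
// A flat "heap", as a wasm linear memory would appear to compiled code.
const heap = new Int32Array(1024);
let next = 0;

// Allocate `words` words plus one header word holding the refcount.
function alloc(words) {
  const ptr = next;
  next += words + 1;
  heap[ptr] = 1; // a new object starts with one reference
  return ptr;
}

function retain(ptr) { heap[ptr]++; }

function release(ptr) {
  if (--heap[ptr] === 0) {
    // Block is dead: a real runtime would push it onto a free list here.
  }
}
```

A compiler targeting wasm would emit the equivalent loads/stores inline, which is why refcounting (as in Swift's ARC) needs nothing from the wasm spec itself - unlike a tracing GC, which wants to see the roots.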


"compile-time GC" sounds like it would apply to C/C++'s automatic storage duration too.


The lack of GC is a benefit here. Remember, WASM as an MVP is meant to bring C/C++ codebases on the web.

There is plenty to be excited about here. This will enable faster [media apps, games, etc](http://webassembly.org/docs/use-cases/) on the web. Wasm is knocking at the door and I can't wait to see what it's bringing with it.

Edit: HN really should support a few more formatting options.


It's not a benefit, it's simply a temporary compromise to get it out faster.


There are a lot of developers -- a huge amount, actually -- who believe memory management is too important to leave to an automated GC algorithm. That, and the memory requirements of a particular application might require something different than a GC provides.


If your algorithm is such that everything can be placed on the stack then GC gets you nothing.


A lot of GC languages don't give you access to the stack, though, all objects live in the heap.


Isn't WASM a compile target?


It doesn't prevent a language from adding its own GC.


At least in Rust's case, RAII/lifetimes provide mostly the same development experience without GC overhead.


What makes a compiled executable faster than interpreted code seems kind of self-evident.


Many developers have no practical exposure to compiled languages, and are prone to believe that Node.js is generally "faster" than Java and comparable to C++. We all had to learn this stuff somewhere. :)


I see what you mean, but it's actually possible for JITed code to be faster than compiled code, as JIT can make run-time optimisations that aren't visible at compile-time.


True, but not really useful if it only applies to 0.01% of cases.


I realize this is about discussing technical merit, but I'd like to share a less naive view of what WebAss does to the web (eg. DRM, making ad-blockers impossible, etc.) over in the [other thread] (https://news.ycombinator.com/item?id=13755370).

Right now, the discussion revolves around finding fecalia-/coprophilia-inspired names for the WebAss ecosystem. Can't wait until America Awakens to contribute :)



