Maybe you don't need Rust and WASM to speed up your JS (mrale.ph)
600 points by youngtaff on Feb 19, 2018 | 181 comments



I think this is an interesting exploration because I really enjoyed the description of profiling and improving performance, but I came away feeling exactly the opposite of the title. I think it's really cool that JS optimization can provide so many wins, but this article makes the process seem fairly fickle, and I would not expect most developers to complete this journey unless they're familiar with VM internals. Using wasm+rust/c++ is interesting in part because you can get so much more predictable performance out of the box. When the performance comes from language features instead of VM-specific optimizations, it should be easier to maintain across revisions and hopefully less subject to regression from VM changes.


[Thank you for reading the post! I am glad you enjoyed it]

The optimizations in the post mostly fall into three large groups:

1) algorithmic improvements; 2) workarounds for implementation-independent, but potentially language-dependent, issues; 3) workarounds for V8-specific issues.

You need to think about algorithms no matter which language you write in, so we don't need to talk much about the first group. In the post it is represented by the sorting improvements (sorting subsequences rather than the whole array) and by the discussion of caching benefits (or the lack thereof).

The second group is represented by the monomorphisation trick: the fact that performance suffers due to polymorphism is not really a V8-specific issue, and it is not even a JS-specific issue. You can apply this approach across implementations and even languages. Some languages apply it in some form for you under the hood.
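
To make that concrete, here is a minimal sketch of the monomorphisation idea (a closure-based clone, not the post's actual string-template code): each comparator gets its own copy of the hot routine, so the call site inside each copy only ever sees one function.

    // Hypothetical hot routine, cloned per comparator: the compare(...)
    // call site inside each clone stays monomorphic and easy to inline.
    function makeInsertionSort<T>(compare: (a: T, b: T) => number) {
      return function sort(items: T[]): T[] {
        for (let i = 1; i < items.length; i++) {
          const current = items[i];
          let j = i - 1;
          // This call site only ever sees one compare function per clone.
          while (j >= 0 && compare(items[j], current) > 0) {
            items[j + 1] = items[j];
            j--;
          }
          items[j + 1] = current;
        }
        return items;
      };
    }

    const sortByLine   = makeInsertionSort<{ line: number }>((a, b) => a.line - b.line);
    const sortByColumn = makeInsertionSort<{ column: number }>((a, b) => a.column - b.column);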

The last group is represented by argument adaptation stuff.

Finally, the optimization I did to the mappings representation (using a typed array instead of objects) spans all three groups. It's about understanding the limitations and costs of a GCed system as a whole.

Now... Why did I choose the title? That's because I think group #3 represents issues that should be, and mostly will be, fixed over time, while groups 1 and 2 represent universal knowledge that spans implementations and languages.

Obviously it is up to each developer and each team to choose between spending N rigorous hours profiling, reading, and thinking about their JavaScript code, or spending N hours rewriting their stuff in language X. What I want is:

a) for everybody to be fully aware that the choice even exists; b) for language designers and implementors to work together on making this choice less and less obvious - which means working on language features and tools, and reducing the need for group #3 optimizations.


> monomorphisation trick

Here's a crazy thing I recently learned: apparently monomorphism isn't just about "objects with identical keys"; at least in Chrome, the order in which you declare those keys matters too. According to this presentation from 2015[0], adjusting the following lines in the Octane/Splaytree benchmark so that node.left and node.right are always assigned in the same order resulted in 15% better performance:

    var node = new SplayTree.Node(key, value);
    if (key > this.root_.key) {
      node.left = this.root_;
      node.right = this.root_.right;
      ...
    } else {
      node.right = this.root_;
      node.left = this.root_.left;
      ...
    }
Now, I assume that this out-of-order thing was actually done on purpose, to benchmark how the JIT handles code like this. Further evidence for that is that the SplayTree constructor[1] does not assign left and right keys either:

    SplayTree.Node = function(key, value) {
      this.key = key;
      this.value = value;
    };
Still, I wouldn't be surprised if it was common for real-life code to accidentally have objects that should have the same hidden class end up with different ones because of this.

[0] http://mp.binaervarianz.de/fse2015_slides.pdf

[1] https://github.com/chromium/octane/blob/master/splay.js#L390
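
One common way to avoid the kind of accidental hidden-class divergence mentioned above (a sketch, not the benchmark's actual fix) is to declare every property up front, in one fixed order, so all instances start from the same shape:

    // Hypothetical node type: left/right always exist and are always
    // initialized in the same order, so later assignments don't create
    // differently-ordered shapes.
    class Node {
      left: Node | null = null;
      right: Node | null = null;
      constructor(public readonly key: number, public value: unknown) {}
    }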


Yes, this is because the JS spec requires that object keys are iterated in insertion order (with a bizarre exception for arrays).


Ah, I guess that explains it. Although that doesn't require the engine's under-the-hood memory layout to respect that order, does it? They could just as easily choose to keep the memory layout in the same order.

Although I suppose you're already required to have two separate hidden classes to distinguish these two kinds of objects anyway.

> with a bizarre exception for arrays

Wow, you weren't joking with how bizarre this gets:

    var a = {};
    var b = {};
    a.a = 0;
    a.b = 1;
    a[0] = 0;
    a[1] = 1;
    b.b = 1;
    b.a = 0;
    b[1] = 1;
    b[0] = 0;
    Object.keys(a); // Array [ "0", "1", "a", "b" ]
    Object.keys(b); // Array [ "0", "1", "b", "a" ]
PS: Thanks for making a great shell :)


Prior to Chrome's release, browsers preserved insertion order for numeric keys as well (but this was not in the ES3 spec).

Chrome/V8's team found they could get substantial performance improvements by diverging from the de facto standard, without too much of a cost to web compatibility.


TIL. For those like me for whom this is new: this was apparently a change in the ES 2015 / ES 6 version of the standard.

See: https://stackoverflow.com/questions/5525795/does-javascript-...

The order is only sometimes guaranteed, of course, because JavaScript. (But critically, for this discussion, it is important that it is sometimes guaranteed, because it forces that information to be stored by the VM.)


I feel so justified in adding this guard in my compression pre-processor[0] code, even though I never had a bug without it:

    // the keys of an object, sorted so the output order is deterministic
    export function keysOf(obj) {
      let keys = Object.keys(obj);
      keys.sort();
      return oneOf(keys);
    }
Context: the rest of the code takes an object representing a schema, and creates two functions. One that can turn any object fitting that schema into an array, with positions indicating which key they originally belonged to, and another one that can reverse the process.

Could this also explain why I have no consistent order of properties when viewing state with the Redux dev-tools? Instead of Object.assign I use my own simplified merge code[1]. Maybe if I also make that use a sorted set of keys, the devtools will become more consistent in their presentation (and it might result in more consistent hidden classes too).

[0] https://github.com/linnarsson-lab/loom-viewer/blob/master/cl...

[1] https://github.com/linnarsson-lab/loom-viewer/blob/master/cl...
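
For what it's worth, a sketch of the sorted-keys idea (not the linked merge code) might look like this:

    // Build the merged object by assigning keys in sorted order, so objects
    // with the same key set always end up with the same insertion order
    // (and likely the same hidden class).
    function mergeSorted(a: Record<string, unknown>, b: Record<string, unknown>) {
      const merged: Record<string, unknown> = {};
      const keys = [...new Set([...Object.keys(a), ...Object.keys(b)])].sort();
      for (const key of keys) {
        merged[key] = key in b ? b[key] : a[key];
      }
      return merged;
    }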


Wait what? Since when? That wasn’t the case at all 5 or 6 years ago (I got bitten in the arse by Rhino not implementing it that way)


Most browsers already did, and some (not all) of the ES2015 stuff requires property ordering. Probably easier to just keep everything ordered since it could be needed by those methods.


> Some languages apply it in some form for you under the hood.

I suspect some respondents will say, or believe, that this falls under the category of "you have to know the VM/runtime intimately to get good performance".

I don't think that's true. If you know the general problems with polymorphic call sites, you can (in JS and many other languages) check to see if the runtime is applying optimizations of this kind by being explicit. If it helps, you can get a free optimization by setting explicit arity/dispatch just because you happened to know about the issues surrounding polymorphic dispatch. That's a case of fundamental knowledge speeding up/improving your progress; not a case of "you need to know the VM guts like the back of your hand to make code fast".


Ultimately, much of the enthusiasm over WASM, like ASM.JS, is rooted in the idea that it's extremely difficult to improve JS engine performance. ASM.JS used certain conventions to essentially create a "language within JS", while WASM is a new language altogether. The goal was the same in both cases: construct a language that is easier to optimize and make the end users conform.

The key takeaway of your commentary on asm.js five years ago (http://mrale.ph/blog/2013/03/28/why-asmjs-bothers-me.html) is the same as the key takeaway from this post: we haven't reached the end of JS engine performance improvements, and if we apply the same rigor to JS development that we apply to C++ or C or Rust or some other language the results are definitely surprising!


I work on SpiderMonkey JS performance (and got mentioned in the article for fixing a perf bug!) and this is exactly the point I wanted to make. If you're not a VM engineer, it's much harder to write fast JS. Wasm also gives you much more predictable performance across engines, without requiring (often engine-specific) 'hacks' like function cloning.


From what I can tell, doing basic things like a property get on an object in WASM/host-bindings will require a spill to function, with no chance of specialization:

https://github.com/WebAssembly/host-bindings/issues/11#issue...

If you think the assembly in this article is bad, WASM/host-bindings appear to be even worse.


That comment (which I made) explains why probably the best we can expect for wasm accessing normal JS objects is the same level of speed as a modern JS engine's "tier 1"/"baseline" JIT. But the Rust code discussed in the OP wasn't accessing JS objects, it was accessing Rust memory which lives in wasm's "linear memory", so I don't see how my comment really disagrees with what jandem or dikaiosune are saying.


Having micro-optimized and maintained JS code over years and dozens of browser versions, I cannot echo this enough. Tuning JS in the face of both the fickleness of VM quirks and of changing data/context is a losing proposition for all but the most speed- and core-functionality-critical code.

The point of rust+wasm benchmarks is that one can write reasonable, maintainable, functionality-focused code (not to mention all the rust-specific benefits) and get good performance out of the box.


Writing an O(N^2) algorithm that processes lots of data in wasm is not going to speed it up more than writing an O(log N) algorithm in pure JS; the knowledge of how to identify problems like that will serve you better in many more cases than the selection of a "fast" runtime.


Not totally true, some algos while large O(n) are incredibly cache friendly.

Radix sort for instance comes to mind.
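
For illustration, a minimal LSD radix sort sketch for non-negative 32-bit integers (not tied to any particular library); its counting and scattering passes over flat typed arrays are part of what makes it cache friendly:

    function radixSort(values: number[]): number[] {
      let input = Uint32Array.from(values);
      let output = new Uint32Array(input.length);
      // Four passes, one byte of the key per pass, least significant first.
      for (let shift = 0; shift < 32; shift += 8) {
        const counts = new Uint32Array(257);
        for (const v of input) counts[((v >>> shift) & 0xff) + 1]++;
        for (let i = 0; i < 256; i++) counts[i + 1] += counts[i]; // prefix sums -> start offsets
        for (const v of input) output[counts[(v >>> shift) & 0xff]++] = v;
        [input, output] = [output, input];
      }
      return Array.from(input);
    }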


I'm afraid it's easier to get started as a Javascript developer than as a Rust developer. What if it's easier to learn JS optimization quirks than to become an average Rust developer?


It's a cost/benefit type of situation. Learning to optimize JavaScript might cost less, but the benefit is fairly narrow, and may lessen over time (or you may need to pay more in attention and time to keep up with JavaScript VM changes). Rust (or C++) to WASM costs more (if you don't already know the language), but the benefit should stay static and useful over time, and there are many other benefits as well (such as having a fast low-level language to fall back on for other needs).

This isn't unique to this situation either. If you're writing Python and your choice is to either deep dive into various hacks to maximize the performance of your pure Python or learn Rust/C/C++ and call out using an FFI mechanism, and you have the time to spare, I think it's almost always better to learn the new language. There are many benefits to learning new languages beyond just the different performance characteristics, so if you can afford to take that path, I think it's usually a good choice to do so.


I agree with this, but I think most people don't, until it's demonstrated that Rust kicks ass. Javascript is all the rage right now. People might think "I'll get a Javascript job now, and I will learn to optimize later. Javascript is highly optimized anyway." Then it's just inertia. They already know Javascript. Besides, Javascript may have a reputation for being messy, but not for being a hard language to pick up. Yes, it does have an unusual OO style, and supports functional programming, but you don't have to start with high-level topics just to write any code with it.

As far as I know Rust has just two major success stories right now: Firefox 57 (Servo) and ripgrep (even included in Microsoft's Visual Studio Code).


> As far as I know Rust has just two major success stories right now

Here's a bunch more. Dunno what your definition of "major" is though: https://www.rust-lang.org/en-US/friends.html


Sure, and that's why I was careful to couch the argument as just some other language that includes C and C++ as options (and truthfully includes others, but few have quite the same compatibility story). At this point, if you don't know C++ or Rust, which one will give you the most benefit from learning it is a very open question. I think Rust has advantages over C++ in correctness and safety, but C++ currently offers better paths for understanding and utilizing existing code or getting a job using it (whether that was an initial goal or not). Five years down the line the differences may not be so cut and dried though. C++ will almost definitely still have those advantages, but the gap may have closed somewhat. Or maybe not? It's all up in the air.


Currently C++ still wins over Rust for UI and mobile development, especially in what concerns the out-of-the-box experience.

C++ had the advantage of being immediately adopted by the OS vendors for GUI development, although nowadays, with the exception of Windows, its role has changed into just addressing the GPU.

So maybe one day we will get something like shaders, CoreGraphics, DirectX, SurfaceFlinger in Rust, but it will still need a couple of years.

WebRender is already a step in this direction.


>but C++ currently offers better paths for understanding and utilizing existing code or getting a job using it

I wouldn't say this is entirely wrong, but proper cross-platform package and test management with Cargo is a reason alone that utilizing existing code is waaay easier in Rust. C++ has more existing code though, so they might hit your niche needs better.


Here something like Conan might finally win the hearts of C++ devs, but it is still pretty much in its early days.

In what concerns Windows development, NuGet and vcpkg are already a big improvement.


NuGet is pretty good, I've only used it with C#/F# though.


Does Dropbox writing a filesystem in Rust count as a major success story?


It doesn't count as a major success story I knew about when I was typing that comment.


It seems less reliable to learn JS optimization quirks, since you're depending on an (infrequently changing?) implementation detail. Also, writing those things sounds much messier and less maintainable. All heuristic points I'm making.


And yet, which is better: finding someone who has the skill and knowledge needed to optimize this (e.g. bringing in a consultant), or rewriting the thing - hundreds if not thousands of lines of code and working hours being replaced by even more? And not just rewriting it, but (IMO) replacing it with more complicated technology (additional steps required to make it usable, additional skills needed to work on the project).

I think people think too lightly of rewrites. Actually, I'd even argue that in some cases, rewrites are done because they're easier than actually fixing the problem. Yes, Rust and other languages will give you better performance out of the box, but at what cost?


Like any situation in software, there are likely to be tradeoffs. I certainly wouldn't advocate for launching large rewrites except under extreme duress. That said, I'm not sure I agree with how you're characterizing the tradeoffs for using wasm here. For one thing, the library in question sounds like it is a few thousand lines, not some gargantuan project. For another, part of what's awesome about wasm is that it can be somewhat incrementally adopted inside an existing project. I would also point out that for many teams or projects, maintaining a consultant's code is not going to be as cost effective in the long run, especially if that code has had a lot of language- or VM-specific optimization work done to it. Part of my point here is not that the addition of new tech (rust, wasm) is complicated while optimizing JS is simple, but that they both carry a complexity cost and should be assessed in context as possible solutions to a performance problem.

I agree that rewrites are often taken too lightly, but if they address the original problems I think it would be more accurate to say that they are often needlessly expensive ways to solve problems that can also be solved in other, cheaper ways.

I'd also point out that in the case of many open source projects, finding an optimization consultant is not even remotely an option. For many of those projects, if performance is suffering, someone needs to step up and figure something out. Then the question becomes which approach can be applied by some contributor who's actually willing to do it. If you don't have someone who understands polymorphism in VM runtimes, I think in many cases you'd be well served by sprinkling some wasm on the problem. Of course this doesn't apply in all cases.


> cool optimizations ... but ... fickle ... internals ...

Yes. I wrote about this in some detail 2 years ago, Jitterdämmerung:

http://blog.metaobject.com/2015/10/jitterdammerung.html


The problem with these tricks is that they're dependent on V8 internals, which means that there's no guarantee that the code will be fast on Firefox (or Edge, or even Safari).

So now if Google changes the back end, websites will slow down, which means that Google has to ossify implementation details, furthering this sort of black magic into CS lore forever.

This, honestly, is what I hate with modern dev. A few days back there was a discussion on how programming is hard nowadays.

Really, programming is more accessible than ever. What is hard is the "black magic" that's becoming more and more prevalent, most of which is implementation defined, that everyone is expected to know (and really, most don't know anything; they just repeat rumors they read online, often years out of date).


The article does mention SpiderMonkey, though not Safari or Edge, and the point does stand that profiling four different JS JITs to guarantee that you trigger their optimizations is always going to be more work than profiling one C++/Rust program generated by a single compiler toolchain (though it remains to be seen how much individual browsers' implementations of WASM will diverge in runtime optimization potential).


They will diverge about as much as JS engines do; that will hardly be a surprise.


I doubt it. If further optimizing pre-optimized assembly were that easy (or that valuable), then we'd also expect to see more tools for postprocessing compiled binaries. (Just because WASM is a bytecode format doesn't mean it's comparable to Java bytecode and Java's JITs; javac isn't an optimizing compiler.) I'd be happy if anyone actually working on a WASM interpreter could chime in regarding expected optimization potential.


It's like SQL servers all over again, where an infinite plethora of rumours and received wisdom and magical thinking guides performance optimization because it's such a goddamned black box that will optimize your query however it sees fit.


Which is why profilers exist.

Writing code based on gut feeling and urban myths was never a good idea.


Even x86 machine code will work differently on different processors, because it's not executed directly but translated into another machine code, and this translation is different on different processors (and AFAIK completely proprietary). There are layers above layers. I won't be surprised if someone implements a JVM on top of WASM to run Java applications in the browser. And the JVM has a JavaScript engine, so...


Same applies to other languages with multiple implementations.

You might have some high performance C code only tested in gcc on GNU/Linux, and then get some nasty surprises when testing on HP-UX with aC, for example.

Back in the day, DDJ and The C/C++ Users Journal used to run articles comparing the quality of all the most well-known compilers.


In this case, it looks like he's optimizing for Firefox and Chrome--he mentions benchmark results from both and also running into a SpiderMonkey optimization that was causing problems.

That does leave out Safari, Edge, Opera, and probably some other obscure browsers, and doesn't help with your point about future engine changes potentially breaking the optimizations.


> Is it better to quick-sort one array with 100K elements or quick-sort 3333 30-element subarrays?

> A bit of mathematics can guide us (100000log100000 is 3 times larger than 3333×30log30)

You have to be careful doing this kind of analysis. Big-O is explicitly about what happens for large values of N. It is traditional to leave off smaller factors, because they don't matter at the limit of N going to infinity. There is implicitly a constant coefficient on each component, and that might matter more at small N.

So e.g. I've seen cases that were O(N^2 + N), which of course you'd traditionally write as O(N^2), but where the O(N) term mattered more at small values of N because of constant factors. Whether you cared more about small or large values of N would guide whether you'd actually want to go after the O(N^2) term or not. If you just blindly went for the larger term, you could waste a lot of time and not actually accomplish anything.
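
A toy illustration of that point, with made-up constants:

    // Hypothetical total cost f(n) = a*n^2 + b*n: a large constant on the
    // "smaller" term can dominate at small n, even though n^2 wins as n grows.
    const a = 1, b = 1000;
    const quadraticPart = (n: number) => a * n * n;
    const linearPart = (n: number) => b * n;

    console.log(quadraticPart(100), linearPart(100));       // 10000 vs 100000: the linear term dominates
    console.log(quadraticPart(100000), linearPart(100000)); // 1e10 vs 1e8: the quadratic term dominates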


The problem with this kind of deep dive optimization is the cost of maintaining it in a long-lived project as the underlying javascript engines keep changing. What was optimal for one version of V8 can actually be detrimental in another version, or in another browser.

It's precisely the unpredictability of JIT-driven optimizations that makes WASM so appealing. You can do all your optimizing once at build time and get consistent performance every time it runs.

It's not that plain Javascript can't be as fast -- it's that plain Javascript has high variance, and maintaining engine-internals-aware optimization in a big team with a long-lived app is impractical.


It seems to me there is no reason we shouldn't be able to create an "optimizing babel" that could be doing performance optimizations based on the target JS engine and version, as a build step. I don't think we need to go to a completely different source language and a compilation to WASM in order to get permission to create such an optimization tool.

Such a tool would give you the benefits you're praising about the WASM compilation workflow: Separately maintained, engine-specific optimizations that can be applied at build-time and don't mess up the maintainability of your source code.


But what if I want to target all the engines, including future ones? The compiler could compile separate versions for each engine, I guess, and you could choose which one to load at runtime based on the UserAgent string ... but even then everyone would have to recompile their websites every time a new browser version comes out.

The advantage of WebAssembly is supposed to be (I think) that it's simpler and will give more consistency between browsers, so browser behaviour will be less surprising, and you can thus get away with compiling a single version.

And if you take this approach of compiling JavaScript to a simple and consistent subset of JavaScript that can be optimized similarly in all engines, you'd end up more or less targeting asm.js, the predecessor to WebAssembly. :)


JS engines already do insane levels of optimization, and they do it while watching the code execute so they understand the code better than any preprocessing tool can hope to.

What could a tool like you're describing do that the engines don't do themselves?


I assume that JITs don't do very expensive optimizations because they have to do a trade off between execution speed and compilation time. JITs are also fairly blind on the first execution of a piece of code. Static optimizations are not made obsolete by the existence of JITs.


Expensive optimizations like what? Can you give a before/after example of something such a tool might do?

(Note: I'm glossing over the case where one is using bleeding-edge syntax that a JS engine doesn't yet know how to optimize. In that case preprocessing out the new syntax is of course very useful, but I don't think this is the kind of optimization the GP comment was talking about.)


JIT engines usually don't do static analysis. I'm not sure if that is because the cost for it is that much higher, but a hint towards why could be that the engine simply does not know which parts of the (potentially huge amount of) code that was loaded are actually going to be needed during execution, so analysing all of it is likely to bring more harm than gain.

As an example for something that static analysis could have caught, take the example from the article about the "Argument Adaptation"[0]. Here the author uses profiling to learn that by matching the exact argument count for calling a function, instead of relying on the JS engine to "fix that", the performance can be improved by 14% for this particular piece of code. Static analysis could have easily caught and fixed that, essentially performing a small code refactoring automatically just like the author here did manually.

[0] http://mrale.ph/blog/2018/02/03/maybe-you-dont-need-rust-to-...
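
For reference, a minimal sketch of the kind of arity mismatch being described (not the article's actual code): the callee declares more parameters than the call site passes, so under the behaviour described in the article the engine has to adapt the arguments on each call; making the call site pass exactly the declared number of arguments avoids that work.

    // Before: declared arity 4, called with 3 arguments - the missing
    // parameter has to be filled in by the engine on every call.
    function compareBefore(a: number, b: number, c: number, d?: number): number {
      return a - b || c - (d ?? 0);
    }
    compareBefore(1, 2, 3);

    // After: the call site passes exactly as many arguments as are declared.
    function compareAfter(a: number, b: number, c: number, d: number): number {
      return a - b || c - d;
    }
    compareAfter(1, 2, 3, 0);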


I replied in main to your other comment, but regarding the "Argument Adaptation" issue it's maybe worth noting that I'm 90% sure V8 would have optimized this automatically if not for the subsequent issue (with monomorphism). I'm dubious that the former issue could be statically fixed as easily as you suggest, but either way I think it should be considered a symptom of the latter issue.


Well, for one, the compiler could rearrange key assignment of similarly shaped objects, so that they actually are similarly shaped objects to the JIT .... but that seems like really dangerous territory


Something like that would probably speed up more code than it breaks, but it would be a breaking change, so it's probably not what could strictly be considered an optimization.


The author has proven exactly this point: By going through a number of engine-specific / implementation-specific code transformations they have achieved a significant performance boost for hot code, which, for whatever reason, the JS engines themselves failed to attain with the optimization repertoire they already have.

Also, remember that JS engines are not all-powerful and all-knowing in their optimization techniques; it's still just a limited number of individually imperfect humans working on them, just like the rest of us. So naturally there are going to be opportunities for other humans to step in, help complete the picture, and increase the overall value and effectiveness.

Maybe in this case there is room specifically for a JS-syntax-level tool that also has more freedom in terms of execution time and related concerns, because it can execute at build time, potentially pull in or bundle much more information about optimizations with it (imagine using a potentially expensive ML model to search for probable performance wins, or actually running micro- or macro-benchmarks during the build step), be maintained outside of the implementation of a JS engine and thus have the additional benefits of a potentially broader contributor base and a faster and more focused release cycle, etc. Or this may not be a good idea after all. I cannot tell. All I know is that if we actually do see that there are and will continue to be optimization gaps in the JS engines themselves, then there is a way to fill them, and likely without having to switch to basically entirely different (frontend) technology stacks.


> By going through a number of engine-specific / implementation-specific code transformations they have achieved a significant performance boost for hot code, which, for whatever reason, the JS engines themselves failed to attain with the optimization repertoire they already have.

I think you're mischaracterizing what happened a little. Most of the author's improvements weren't engine- (or even JS-) specific, they were algorithmic improvements. But for the first two that were engine-specific, it's not like he applied a rote transformation that always speeds up scripts when you apply it. Rather, the author (himself a former V8 engineer) saw from the profile results that certain kinds of engine optimizations weren't being done, and rewrote the code in such a way that he knew those specific optimizations would take place as intended. Sure, a deep ML preprocessor might do the same - but only after trying 80K other things that had no effect, and on code that wasn't even hot, no?

More to the point though, it strikes me that you say JS engines aren't all-powerful, but in the same breath you seem to assume that just because V8 didn't optimize the code in question that it can't. It seems very likely to me that any case you can find where a preprocessor improves performance is a case where there's a fixable engine optimization bug. Sure, in principle one could build a preprocessor for such cases, but it seems more useful to just report the engine bugs.


You're basically describing asm.js -- a subset of javascript that is known to be easy for engines to turn directly into native code and execute, that you can use as a compilation target.

The difference between asm.js and WASM is mostly just that WASM is more compact and easier to parse, while asm.js is a more gradually compatible upgrade story.


Wasm will turn into an optimizing JIT. Or it will run slow or deliver massive binaries if it is the equivalent of statically linked code. I don't really see a way around that.

Once it tries to start inlining on the client side, that will open the floodgates to other optimizations.


It could easily allow a different binary to be downloaded per client (e.g. sse3 capable, sse4 capable, avx capable, avx512 capable).

The compiler would then spit out eighteen different versions and the right one would be downloaded by the user.


Does this suggest that all you've got to do is compile the JS to WASM to have it be just as appealing?


Whether or not that's what's being suggested, it's untrue. The benefits in this case come from using a much lower-level and more performant language. JavaScript compiled to WASM will always be at a disadvantage compared to an embedded JavaScript VM natively compiled, given similar optimization time, since the VM can bypass some security safeguards of WASM that it can ensure aren't needed.


This is a great article and goes to show that statements like "X language is fast" are a little blurry when you put the language in the hands of a skilled developer, who can go beyond the standard idioms and surface level understanding of a language to use it like it's another language.

At the end of the article, the author wisely chooses to move some objects out of the control of the GC:

We are allocating hundreds of thousands Mapping objects, which puts considerable pressure on GC - in reality we don’t really need those objects to be objects... First of all, I changed Mapping from a normal object into a wrapper that points into a gigantic typed array...
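A rough sketch of the kind of layout being described (field names and counts hypothetical, not the library's actual ones): all mappings packed into one typed array, with the "wrapper" reduced to an index into it.

    // All mappings live in one flat Int32Array, FIELDS entries per mapping,
    // so the GC sees one large allocation instead of hundreds of thousands
    // of small objects.
    const FIELDS = 5; // e.g. generatedLine, generatedColumn, sourceIndex, originalLine, originalColumn
    const mappingData = new Int32Array(100000 * FIELDS);

    // Accessors take an index instead of holding their own fields.
    function generatedLineOf(index: number): number {
      return mappingData[index * FIELDS + 0];
    }
    function generatedColumnOf(index: number): number {
      return mappingData[index * FIELDS + 1];
    }
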

This suggests to me that we are no longer programming the way one usually does in a dynamically typed, garbage collected language -- and thus it might still be the right decision to move to something like Rust (or Swift or Go) where there is considerably more control over allocation.

The author is able to achieve a speed-up of ~4x, which is close to the 5.89x achieved by the Rust implementation. There are benefits to having all of one's code in the same language; but there are also benefits to switching languages to obtain better ergonomics and safety properties.


> The author is able to achieve a speed-up of ~4x which is close to the 5.89x achieved by the Rust implementation.

This is not a fair comparison because the author made algorithmic improvements that would also improve the wasm version if applied to the Rust code. Source: https://twitter.com/mraleph/status/965616993310265344


FWIW, a quick look at the Rust code actually reveals that that implementation is also algorithmically different from what I was optimizing; e.g. it only sorts the generatedMappings array.

In reality it means that the performance of my code should not be that far from what WASM is showing, because sorting originalMappings (which I do eagerly and the WASM version does lazily) is one third to one half of the overall runtime.

I will try to measure and update the post tomorrow or Wednesday.


If Mark Twain were alive today, the saying would be "Lies, Damned Lies, Statistics, and Benchmarks"


I should mention in the post that the source-map version that relies on WASM actually requires you to manually manage the lifetime of the SourceMapConsumer object[1], which my change does not require because the memory is encapsulated within SourceMapConsumer itself.

I do agree that the way I manage mappings is hardly ergonomic. As mentioned in the post I would prefer to use Typed Objects to access the packed array of mappings, but alas that proposal is stalled.

[1] https://github.com/mozilla/source-map#sourcemapconsumerproto...


Yeah, this is also where value types are really awesome and can get you the same results (assuming no strings). Sadly, other than C#, most of the mainstream GC'd languages don't include them as a feature yet.


There was even a firm proposal for user-defined value types in JS, but asm.js (and now wasm) took all the air out of the room - though I'm not sure it would have ever gotten implemented in all the browsers anyway, since only perf-sensitive developers care about it.

EDIT: I noticed it's actually mentioned in a footnote in the article - the Typed Objects API was really nice, I had a chance to use it for a prototype.


I wonder if the performance gain is worth the effort. Surely you can always squeeze more performance from JS like from any other language but what's the point if you spend hours to match the performance you get for "free" from other languages?

As far as WASM is concerned, I'm more excited about the possibility to run any programming language on the web than about the raw performance gains. So far it is still year(s) away from this goal (i.e. the lack of web APIs/DOM access makes it useless for web dev).


> hours to match the performance you get for "free" from other languages?

In a sense, you pay those hours that you saved up by using an easier, more intuitive language like JS. Or you can choose not to and still have pretty good performance and blazing fast dev cycles.


Can you elaborate on how JS is more intuitive than, say, Rust? I can give a simple counter-example: make a JS object, assign another object to one of its keys, copy it three times into a list (without a deep copy), and modify the nested object. Suddenly you've modified all three. Not intuitive at all, and easily missed if you're a less experienced developer. Rust would stop you in your tracks and force you to explicitly say that you want that behaviour.
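
A minimal sketch of the behaviour described above (names hypothetical):

    const inner = { count: 0 };
    const outer = { inner };              // another object assigned to a key
    const list = [outer, outer, outer];   // "copied" three times, but all three are the same reference
    list[0].inner.count = 42;             // modify the object inside
    console.log(list[1].inner.count, list[2].inner.count); // 42 42 - all the "copies" changed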


JS has more than its share of problems, but I wouldn't consider that to be one of them. It's the same reference semantics as Java and Python, and people are used to it.


People are used to it but saying that is kind of a cop out. It's very easy to write code that reads well logically but forgets about reference semantics, especially if you're a junior developer.


I believe there are other languages easier and more intuitive than JS. The lack of DOM and GC support in WASM is the only reason JS is still ruling the web.

As far as performance is concerned, I believe profiling the internals of the VM (as the author does) is not a good fix in the long term.


I agree regarding JS not being the easiest or most intuitive language, but I don't see JS going away any time soon and I don't think that's a bad thing. The last 5 years or so have seen JS transform from one of the most frustrating languages to one of the better ones. I think Typescript is where the future of web dev lies. Typescript for your main logic and WASM for optimising the parts that need it.


Yeah, JS is better now, but I bet we could do even better starting with a clean slate. Web dev shouldn't be tied to a single programming language. How would you like using PHP and its derivatives (e.g. Hack) for everything web APIs/back-end? I could tell you that it has made great progress since PHP 4.


Oh I'm sure we can do better. Would it be worth throwing away decades of libraries, many of which have ironed out many subtle logic bugs over time? That's a different argument. Not going to say it's not worth it (there is a lot of mess that could be removed if we started from scratch).

It's the same argument any junior engineer makes when they start at a company with a legacy codebase. They always want to start from scratch but don't realise the cost or the fact that while they'll avoid 100 mistakes of the past, they'll also make 100 new ones. Why not just make JS better? It's already happening year by year.


That and the fact that it's the only language blessed to have its interpreter in the browser.

If we had been able to challenge it by allowing another language in there, I would like to think we wouldn't be facing a lot of the issues we do now.


I would never categorize JS as easy or intuitive. Maybe in comparison to C++ or Assembly, but it's maddening coming to it from modern statically-typed languages that are better designed and have better tooling.


Huh? I think the best part of WASM is getting away from JS and its associated tooling.


It seems to me that WASM is more suitable for non-browser work like Node.


No offense meant, but that couldn't be more wrong. If you're writing for the server, you can already use whichever language best suits the problem at hand. WASM will eventually allow the same choice in the browser.

The extent to which WASM could be used in the Node ecosystem would essentially be an indictment of how badly the Node community has bungled FFI. (node-ffi[0] should absolutely be added to Node core, not be independently maintained and always trailing compatibility with newer Node releases; right now you have to choose between security and compatibility.)

[0]: https://github.com/node-ffi/node-ffi


No offense taken! I am pretty new to the JS world.

"WASM will eventually allow the same choice in the browser."

When will that actually be? That's the big question.


Since November, WASM has been enabled by default in the major browsers:

https://caniuse.com/#feat=wasm


I guess if you don't include IE11 as one of the "major browsers".

I wish that was the case. Yes, I know it is only receiving security updates -- but it will supposedly get those until Windows 10 is no longer supported (if I'm reading things right).

Unfortunately, that doesn't stop a bunch of people from using it. It still has a considerable amount more marketshare than Edge does (if netmarketshare.com is to be believed).


40% of desktop users use IE11 for our browser based app. 1% use an older version of IE, and 1% use Edge. Our clients are businesses, mostly in Australia and New Zealand.


Yeah, I know many businesses still have to use IE11 because some web app they use requires IE (often on an intranet, but sometimes still even in the internet). Sad state of affairs.


I'm excited about compiling to WASM not for performance but for correctness. Typescript is better than nothing but it really can't compete with the safety and ease you get from a language with a really good type system.


One thing I've been confused about with transpiled languages is how correctness transfers. How do types or lifetimes etc. transfer to WASM, for instance? Or does it just depend on correctness being verified in the pre-compilation/transpilation stages?


Correctness is typically checked long before code generation happens. This does mean the code generation backend must be trusted, whether the target be x86 or wasm. Just as you have to trust CPUs to not expose processes' memory to each other (oops).


Typed assembly is a thing - it's mainly to ensure the compiler itself is correct, but it does give stronger guarantees. I think Frank Pfenning at CMU has written a language that's dependently typed all the way down - so its IR and assembly are both dependently typed.


edit: I misread the question, my comment was geared toward languages transpiled to JS.

It's compile-time safety. Virtually none of the type knowledge exists in the runtime. Still, it's far better than nothing, with the self-documenting nature of typed code being one of the biggest wins IMO.


>how does correctness transfer?

The same way it transfers when the language is compiled to x86 instructions! By adding a pre-written and pre-compiled runtime.


No, correctness isn't usually ensured with a runtime in languages with stronger type systems; it's usually done with static analysis on an intermediate representation at compile time.


WASM gives you a linear memory block and performs only bounds checking. It provides very few higher-order abstractions.

For example, the way you allocate memory in WASM is to link in a malloc() implementation. This malloc lives in the same linear memory block, and so its internal data structures are vulnerable to being smashed, just like classical heap smashing.

One improvement is that the call stack is maintained externally, so it is not possible to stack-smash a return address. Still, heap function pointers are vulnerable to a return-to-libc-style attack.

Round and round we go.


Unlike typical assembly, WASM enforces strong guarantees around memory safety (which I'll argue is precisely what you want for something with a web browser's attack surface). This does imply a performance hit (conventional wisdom is that WASM will, at best, be half as fast as "native" code), but also insulates the user from the usual memory-related pitfalls of C. Code transpiled from Rust is forced to pay this price as well, despite Rust providing substantially stronger memory safety guarantees than C; fortunately if you're using Rust you'll probably be the sort of person who agrees that this is a good thing, because Rust still lets one drop into unsafe blocks and potential memory unsafety in untrusted code delivered over the network is pretty much a nightmare scenario.


Same way it does when compiling to C or assembly: the correctness is checked on the source, and you assume the codegen and the platform you run on behave correctly (and fix/work around bugs which inevitably arise).

So the correctness transfers in the sense that the code the compiler outputs should always be correct.


What would you consider to be a "good type system"? I find typescript to have one of the most sane and still strict type systems of all languages that doesn't fall into the extreme functional spectrum or those that care about memory ownership.


Any language that doesn't have first-class support for ADTs is a waste of my time. And I say that as someone who works with TypeScript all day long and tries to push it in my team, even though it sometimes requires verbose code and boilerplate. But I take that over plain JS any day. It took long enough for my coworkers to recognise the benefit of a type system. Can't push them too hard too far too quickly.


When you say ADTs, do you mean algebraic data types or abstract data types? My impression has been that TypeScript is quite good at algebraic data types (i.e. tagged unions). You need to use normal conditionals like if/switch or your own function instead of syntactic support for pattern matching, but IMO the static analysis makes it work out pretty well.
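
For example, a minimal discriminated-union sketch (types and names hypothetical), with the usual `never` trick standing in for exhaustive pattern matching:

    type Shape =
      | { kind: 'circle'; radius: number }
      | { kind: 'rect'; width: number; height: number };

    function area(s: Shape): number {
      switch (s.kind) {
        case 'circle': return Math.PI * s.radius ** 2;
        case 'rect':   return s.width * s.height;
        default: {
          // Exhaustiveness check: this only compiles if every case is handled.
          const _exhaustive: never = s;
          return _exhaustive;
        }
      }
    }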


It is good at neither. Union types and sum types are veeery different from each other.


To be clear, I was talking about tagged unions; TypeScript supports both tagged and untagged unions[1]. My understanding is that "tagged unions" and "sum types" are the same thing, and Wikipedia[2] seems to agree.

[1] https://www.typescriptlang.org/docs/handbook/advanced-types....

[2] https://en.wikipedia.org/wiki/Tagged_union


> My understanding is that "tagged unions" and "sum types" are the same thing

Well, your understanding is wrong. Sum types satisfy a specific universal property, which union types don't.


What specific universal property?


The universal property for a sum type is that having an X-valued function from the sum is equivalent to having X-valued functions from each summand, for any type X. In other words:

(0) There exists a procedure “foo” that recovers the functions from the summands, given a function from the sum.

(1) There exists a procedure “bar” that recovers the function on the sum, given the functions from the summands.

(2) “bar . foo” is the identity on the space of functions from the sum.

(3) “foo . bar“ is the identity on the space of tuples of functions from the summands.

It is easy to see that, if your definition of “sum” is “a JavaScript tagged union”, and your definition of “function” is “any JavaScript function”, such functions “foo” and “bar” are not constructible.

Tagged unions only work under very stringent conditions:

(0) Tags never overlap. Have fun enforcing this.

(1) Tags are only used to discriminate between cases. Again, have fun enforcing this.

Moreover, any talk about universal properties being satisfied is utterly pointless when object identities permeate the semantics of the language.


Interesting, thanks for the explanation. See the link at the bottom for what I think foo and bar would be in TypeScript. I'm curious if you see specific weaknesses in it. Seems like maybe this is all just a clash between formalism and how languages work in practice, similar to how it's nice to just pretend that computers work on real numbers instead of floats. In this case, maybe you'd need to pretend that JS works on values rather than having object identity.

> Tags never overlap.

Agreed that TypeScript doesn't enforce this property of the type definition, and it would be nice that it did, although my interpretation of the claim was "any sum type is expressible as a tagged union". I certainly don't claim that TypeScript has a syntax for describing tagged unions and ensuring that they're well-formed; it's just that they're expressible via other features.

> Tags are only used to discriminate between cases.

It's true that tags are string values in my case, although possibly a fancier version could use opaque values. But given that TypeScript enforces that the type is either 'A' or 'B', couldn't any other language derive that string from the case and use it?

Do you view (say) SML datatypes as being valid sum types? I'm having a hard time understanding when SML types would be strictly more powerful/expressive than TypeScript here.

https://www.typescriptlang.org/play/index.html#src=type%20A%...


> Seems like maybe this is all just a clash between formalism and how languages work in practice,

I think you missed the word “broken”, before “languages”. Sums in Standard ML are really sums.

> similar to how it's nice to just pretend that computers work on real numbers instead of floats.

Computers can totally work on real numbers. (Okay, not all of them. But way many more than floats allow.) For example, you can implement an abstract type whose internal representation is Cauchy streams of rationals, and only provide operations that send equivalent streams to equivalent ones. Of course, this would be horribly slow, and few people actually need exact real arithmetic anyway.

> In this case, maybe you'd need to pretend that JS works on values rather than having object identity.

JavaScript does work on values. The values are the object identities.

> See the link at the bottom for what I think foo and bar would be in TypeScript. I'm curious if you see specific weaknesses in it.

You anticipated it yourself:

https://www.typescriptlang.org/play/index.html#src=type%20A%...

> Do you view (say) SML datatypes as being valid sum types?

Yes. On the other hand, Haskell's aren't, albeit for different reasons than the ones given in this thread.

> I'm having a hard time understanding when SML types would be strictly more powerful/expressive than TypeScript here.

In Standard ML, type abstraction actually works. You can hide the implementation of an abstract type, making it impossible for others to destroy the invariants that you worked so hard to establish. Or at least should have.

See here for why union and intersection types make type abstraction difficult: https://news.ycombinator.com/item?id=16399722


It's not: the TypeScript type system is unsound (accepts code that violates it), and there are no runtime checks, so it doesn't actually guarantee anything: a TypeScript variable may in fact contain any value regardless of its declared type.

I'd say the best type systems are found in Rust (naturally models zero-cost abstractions but doesn't have dependent types) and Idris and Coq (have dependent types but don't naturally model zero-cost abstractions).


> a TypeScript variable may in fact contain any value regardless of its declared type.

That's true in pretty much all languages. Even in Haskell you can use `unsafeCoerce`. In Rust you have 'unsafe' blocks. And any language which has a FFI you can implement the type-safety-violating functions in the foreign language (often C).


Yeah but in Rust and Haskell it's isolated to clearly marked code (except for some minor bugs in Rust that will be fixed - see https://github.com/rust-lang/rust/issues?utf8=%E2%9C%93&q=is... ).

Instead TypeScript has a bunch of serious unfixed design flaws that make the problem pervasive, plus they refuse to fix them (see https://github.com/Microsoft/TypeScript/issues/9825 ).


TypeScript intentionally strikes a balance between strict soundness and developer productivity, and I've personally been pretty happy with that balance. There's more to language design than soundness, and I've been happy with TypeScript's willingness to get out of my way for little snippets of unsafe code, especially when interfacing with external libraries. Within my own code, I've never had the unsoundness actually cause problems.


Honestly I only use TypeScript for its lovely autocomplete. I'm pretty certain that JavaScript autocomplete could never be this good.


Hear, Hear. At the same time, Typescript's type system is getting so complicated, and the syntax so fugly because of it, that I'm tempted to go back to straight ES6.


I love Rust but its type system isn't necessarily good at modelling zero-cost abstractions. After all, one of its nicest abstractions is the iterator interface, and that is by no means "naturally" zero-cost - you really have to trust the compiler to optimize the code back into a regular loop, and even though it's pretty good at that, it isn't perfect.


Spoiler: no type systems guarantee anything. At least JS doesn't have the audacity to make false promises about bug prevention.


> I find typescript to have one of the most sane and still strict type systems of all languages that doesn't fall into the extreme functional spectrum

MLs (OCaml, F#, Reason) are hardly "extreme functional spectrum".


TypeScript, while interesting, isn't so great if you're coming from an expression based language (i.e. where everything is a value).

That you have to return values out of statements is such a flow-breaking PITA, one that leads to ugly, verbose code. switch is particularly painful in this regard; it would be wonderful to write `let foo = switch (x) { case ... }`, but no, it's a statement, not a value.

Also, structural typing has its strengths, but equality is not one of them. In order to generate a nominal type you have to hack in a private class member (what they call "branding"). This makes newtype/value class wrappers needlessly verbose.
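For reference, a sketch of the branding workaround being described (names hypothetical): a private class member makes the wrappers nominally distinct even though they are structurally identical.

    // Each class declares its own private brand, so TypeScript treats the
    // two wrappers as incompatible despite identical structure.
    class UserId {
      private readonly __brand: void = undefined;
      constructor(public readonly value: string) {}
    }

    class OrderId {
      private readonly __brand: void = undefined;
      constructor(public readonly value: string) {}
    }

    declare function loadUser(id: UserId): void;
    loadUser(new UserId('u-1'));      // ok
    // loadUser(new OrderId('o-1'));  // error: the types are not interchangeable
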

TS feels somehow like a Java/Javascript hybrid with some more advanced type system features (e.g. union types) mixed in. Saying that, editor support (VS Code) and JS interop are pretty amazing.


As someone who loves JS, your post sort of made me more comfortable with WASM as a competitor. JS is currently under siege by competing factions of OOP and FP that are constantly at odds with one another. Perhaps if these two groups had the WASM bone thrown to them, they'd leave TC39 the hell alone and let actual JS devs develop the language in its own right.


At an absolute minimum I'd want support for non-structural types and sound variance of collections; typescript also lacks real support for sum types. Honestly these days I probably wouldn't bother with a language without higher-kinded types - once you think in them it becomes very tedious to be repeating the same code over and over because your language can't express the types the code actually is.

(Really I'd like polykindedness as well. I've only needed it once in over a decade of programming but it was such an intensely frustrating experience to need it and not have it)


What about BuckleScript?


My impression with Scala.js is that it's nice (and maybe the best option going at the moment) but you're still kind of a second-class citizen, which I'm hoping WASM will change. I imagine the experience with BuckleScript would be similar. (Whereas Typescript feels a lot more first-class, but at the cost of having to still be semantically very close to Javascript).


I am looking into TypeScript right now and I think it's just a hard problem to integrate a typed language into a dynamic environment like JavaScript. The transitions between JavaScript and the typed language will always be tricky and error prone.


Typescript does a better job than most that I've seen.


PureScript doesn't do a bad job either. And even Haskell (GHCJS) has good FFI into JavaScript (I'd argue even better than PureScript!). How bad can you make interop between a higher-level language and JS? What are the bad examples?


Agreed. Anders Hejlsberg has repeatedly shown excellent taste in programming languages. See C#, Delphi and Turbo Pascal.


I thought all the benchmarks indicated that WASM is still slower than JS. That being said, the only performance improvement offered by WASM is that it doesn't stop for garbage collection, which gives more consistent execution.

WASM isn't about performance. It is about writing applications in any language and importing those applications into an island in a web page.


> I thought all the benchmarks indicated that WASM is still slower than JS.

No, wasm is almost always faster than JS, and that has been the case for a while.

Maybe you're thinking about some very specific aspect of performance or benchmark? (For example, wasm->JS calls might be slower than JS->JS calls in some cases.)


> WASM isn't about performance. It is about writing applications in any language and importing those applications into an island in a web page.

You don't really need wasm for that, do you? Anything compiles to JS nowadays and you'd actually get easier access to the DOM and GC and support for source maps (are those working for wasm yet?)


Flash has always been better for media and games than JavaScript. WASM fills that niche.


So much work to achieve what should be default behavior :-/

How did we end up wasting so much time on trivialities?


When JS came out, it was poorly designed, but nobody cared because we used it only to make snowflakes appear on the web page.

Then MS gave us AJAX and 37signals made it popular, until apps like gmail made it so mainstream it was impossible to go back to old static pages.

But it was too late. The shitty language we had was the only one available everywhere to do dynamic web pages now.

IE would not move, and Firefox and Opera were the underdogs, spending their resources on more important things. So nobody tried to implement a better existing language.

When Google faced the challenge of creating Chrome, they had to be compatible. So instead of implementing a better language, they also used JS, and injected millions of dollars into making V8 so it would have decent performance.

After that, JS was usable, and so we moved on.


> apps like gmail made it so mainstream it was impossible to go back to old static pages

Google maps. When Google maps first came out as a beta/preview and I first loaded it up, it was magic. Not magic as in the "this is wonderful" sense (it was) but in the "any sufficiently advanced technology is indistinguishable from magic" sense. I truly didn't understand how what they were doing was possible for the first few minutes. I'm pretty sure the feeling was the same with all the other programmers and sysadmins at the ISP I worked at.

Gmail might have clued a lot of people in as to how you could make webpages more dynamic, but Maps truly showed us how you could make a web application every bit as fluid and usable as a local one (if not more so) in the right circumstances.


> So instead of implementing a better language, they also used JS

They did implement a better language, Dart. But it was too late, like you said. Maybe we'll have a chance of using it for mobile apps with Flutter.


Dart came way, way later. And it was a new language, while they could have just used an existing one. They could have implemented Lua, Ruby, Python, anything with a good track record. With the millions spent on V8 and those geniuses working on it, can you imagine how fast one of those languages would have become? The tooling it would have had?


Actually, Google tried to make Python faster, with the Unladen Swallow project.

They failed miserably.

And several other projects failed miserably at the same thing as well.

Either way, speed was not the main reason they created Dart.

Dart was designed for writing large web apps.

Dart's top design constraint was good interop with JavaScript, meaning transpilation of Dart to decent JavaScript and consumption of existing JavaScript libraries from Dart.

I repeat: top design goal.

You can't take a language (be it Python, Ruby or Lua) that was not designed with that constraint in mind and magically make it work.

There are transpilers from those languages to JavaScript but they are toys.

You just can't reconcile Python semantics with JavaScript semantics in a reasonable way.

So they did the next best thing: they designed a better Java/C# while keeping JavaScript interop as a priority.

Now Dart is morphing because the top design goal is to make it the best language for cross-platform mobile (i.e. iOS and Android) apps, which requires different trade-offs.


PyPy is proof that you can speed up Python considerably. And it's written in Python. Imagine what could be achieved if one wrote something like that in, say, Rust?

My guess is that they didn't invest anywhere near the resources in Unladen Swallow (which was probably a side project) that they invested in the Chrome VM (which was a core project).

Actually, I spend quite a lot of time on the Python mailing list, and there you can see they regularly find things to improve, perf-wise. They just have terribly low resources.

I've rarely seen a project as popular as Python, used by so many huge, rich corporations, with so few resources. It's heartbreaking.


> You just can't reconcile Python semantics with JavaScript semantics in a reasonable way.

That's actually an interesting claim, so I would really appreciate it if you could provide a (possibly informal) proof that justifies it.


This is a well written article, and I absolutely agree with the idea that profiling and analysis is more important than language choice in order to eke out performance wins.

That being said, some of these optimization techniques completely took me by surprise. Defining the sort function as a cache lookup that converts the sorting template to a string and then builds an anonymous function out of that string, which is finally used as the exported function, seems to me like an extremely roundabout way to get the comparator inlined. And the argument-count adapter having such high overhead on V8 seems like something that should generate a warning for the developer.
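Roughly, the trick looks something like this (my own sketch, with made-up field names, not the library's actual code): each distinct template string gets its own compiled comparator, so every sort call site stays monomorphic and the comparison logic is effectively inlined.

    // Hypothetical sketch of the "compile a comparator from a string" trick.
    const comparatorCache = new Map();

    function getComparator(src) {
      let cmp = comparatorCache.get(src);
      if (cmp === undefined) {
        // One compiled function per template string.
        cmp = new Function('a', 'b', 'return ' + src + ';');
        comparatorCache.set(src, cmp);
      }
      return cmp;
    }

    // Example usage (field names assumed for illustration):
    const mappings = [
      { generatedLine: 2, generatedColumn: 5 },
      { generatedLine: 1, generatedColumn: 9 },
    ];
    mappings.sort(getComparator(
      'a.generatedLine - b.generatedLine || a.generatedColumn - b.generatedColumn'
    ));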

The cache analysis and algorithmic improvements seemed fairly straightforward, but when you're at the point of having to manually implement memory buffers to alleviate GC pressure, you're diving below the level of abstraction that the language itself provides you. At that point, I think the argument to switch to a language designed to operate at that level of abstraction holds some sway.
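To make that concrete, the kind of thing involved looks roughly like this (my own sketch, with a made-up field layout, not the article's code): instead of allocating a small object per record, you write fields into a preallocated typed array and grow it by hand.

    // Trade per-record object allocation for a reusable typed-array buffer.
    const FIELDS_PER_MAPPING = 4; // e.g. line, column, sourceIndex, nameIndex

    let buffer = new Int32Array(1024 * FIELDS_PER_MAPPING);
    let count = 0;

    function addMapping(line, column, sourceIndex, nameIndex) {
      const offset = count * FIELDS_PER_MAPPING;
      if (offset + FIELDS_PER_MAPPING > buffer.length) {
        // Grow manually: the old buffer becomes garbage once, not per record.
        const grown = new Int32Array(buffer.length * 2);
        grown.set(buffer);
        buffer = grown;
      }
      buffer[offset] = line;
      buffer[offset + 1] = column;
      buffer[offset + 2] = sourceIndex;
      buffer[offset + 3] = nameIndex;
      count++;
    }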


What a well-designed and well-researched article!


We should be more wary of premature optimizations, like in the article where caching in the original code made it slower! Always measure! Write naive code and measure; the JavaScript engines are very good at optimization, especially V8, and the others are catching up.
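Even a crude harness is enough to catch this sort of thing (made-up workload below, just to show the shape; run it in the browser console, where performance.now() is available):

    // Crude micro-benchmark sketch: always compare variants on realistic input.
    function time(label, fn) {
      const start = performance.now();
      fn();
      console.log(label + ': ' + (performance.now() - start).toFixed(1) + ' ms');
    }

    const input = Array.from({ length: 1e6 }, (_, i) => i % 997);

    time('plain loop', () => {
      let sum = 0;
      for (let i = 0; i < input.length; i++) sum += input[i];
      return sum;
    });

    time('reduce', () => input.reduce((a, b) => a + b, 0));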

However, when I do optimize JavaScript code I often get 10-100x performance, usually by writing better algorithms, i.e. no "black magic". So the original code in the article is not that bad, considering he "only" got 4x performance.

Moving to another programming language / WASM for less than 2x performance is not worth it - unless you hate JavaScript.


> like in the article where caching in the original code made it slower!

Caching could originally have been faster, but became slower thanks to the continual improvement of the JavaScript VMs.


Tangent: I see that the author focused on improving sorting algorithms, and also at some point switched to a Uint8Array (although not in the sorting part).

I recently discovered that a JavaScript implementation of radix sort can be up to four times faster than the built-in sorting algorithms for TypedArrays[0][1][2]. Imagine how much faster a good WASM implementation could be!
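For reference, an LSD radix sort over a Uint32Array can be sketched in a handful of lines (my own rough version, not the implementation benchmarked in the links below):

    // LSD radix sort for Uint32Array: four passes, one byte per pass.
    function radixSortUint32(input) {
      let src = input;
      let dst = new Uint32Array(input.length);
      const count = new Uint32Array(256);
      for (let shift = 0; shift < 32; shift += 8) {
        count.fill(0);
        for (let i = 0; i < src.length; i++) {
          count[(src[i] >>> shift) & 0xff]++;
        }
        // Turn bucket counts into starting offsets.
        let offset = 0;
        for (let i = 0; i < 256; i++) {
          const c = count[i];
          count[i] = offset;
          offset += c;
        }
        for (let i = 0; i < src.length; i++) {
          dst[count[(src[i] >>> shift) & 0xff]++] = src[i];
        }
        const tmp = src; src = dst; dst = tmp; // ping-pong buffers
      }
      return src; // after four passes this is the original buffer, now sorted
    }

    // e.g. radixSortUint32(Uint32Array.of(90, 5, 1 << 20, 7))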

It also makes me wonder why browsers don't make use of the guaranteed memory layout of typed arrays to use faster native algorithms. Sure, Typed Arrays have to support the sorting API, which is comparison based and works with closures. But why not detect when someone calls sort() without any further arguments, and make that very common case use faster native code?

Because for me, this performance difference matters: I am animating plots where I need to sort more than 100k elements each frame, and whether sorting eats up 5ms or 20ms is the difference between choppy and smooth animations.

[0] https://run.perf.zone/view/radix-sort-uint32-1000-items-type...

[1] https://run.perf.zone/view/radix-sort-uint32-1000000-items-t...

[2] https://run.perf.zone/view/Radix-sort-Uint8Array-loop-vs-fil...


On my OnePlus 5T the standard sort() was twice as fast as the radix sort for the UInt8 array, but it was only 30% slower for the Float64 and regular arrays, whereas sort() was hundreds of times slower - strange...


This is a broad, naive question, but the number of responses and upvotes on this post suggest to me that many people actually need to speed up their JS. I've never once come across this problem in web app development. The bottleneck is always DOM rendering like layout changes, networking, handling large WebGL vertex buffers for video games, etc. In which use cases is JS performance significant?


In this case, the sourcemaps library isn't normally running as part of user application code. It's primarily used by the browser's DevTools, and server-side build tools like Webpack and Gulp.

The faster sourcemaps can be parsed and manipulated, the faster the DevTools and build tools will execute.


I rely heavily on a decent JS performance baseline for http://8bitworkshop.com/, and I also rely on asm.js / WASM. But I need different things from them.

For JS, I need consistency and stability, because I'm dynamically generating code. Usually I'm pretty satisfied, but sometimes after recompiling code I get a huge performance hit for no reason.

For WASM, I know I have stability, but I need faster load times. On my Chromebook, for example, it takes 10-20 seconds just to load the WebAssembly. If this problem is solved, I might move everything performance-sensitive over to WASM eventually.


Awesome. I think it makes a lot of sense to explore new algorithmic approaches before choosing to reimplement in a new language - and thankfully these are not mutually exclusive.

To those saying "these are implementation-defined optimizations" etc.: you do the exact same thing in Rust. I know some Rust code is 'fast' and some is 'slow', and I have to understand Rust and, to some extent, the state of LLVM + Rust. This is simply part of writing fast code, no matter the platform or language.

Nice writeup!


I'm continually surprised by the Rust -> WASM pressure. I see projects like Redox and Servo and ripgrep and Tokio and, well, native or systems-level things as its true calling.

I don't want to write a webapp in Rust. It'll always be second-class (though maybe that won't be a problem if the WASM APIs get good enough...).


Think of it this way: wasm is pretty similar in many ways to an embedded platform. Rust wants to be good at embedded, so making Rust good at wasm fits. A lot of the work is identical.


I think that is a very post-hoc rationalization to justify the silliness that is/has been eating the JS world (emscripten and asm.js show that this sort of thing has been going on for a while), but it is an entirely valid reason to indulge it for the interest and experience.

+1


I don't think it's a reason that wasm exists, but I do think it's one of the (multiple) reasons for us to invest in making Rust -> wasm an excellent experience.


Maybe linters or VMs should offer tooling to warn about argument adaptation and monomorphisation issues. For the rest, it takes a lot of JS know-how that you get for free with more performance-minded languages. In the end the optimized JS is 4 times faster, but WASM still seems about 6 times faster.
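For example, as far as I understand the article, the argument-adaptation issue is just a mismatch between a call's argument count and the declared arity (a V8 implementation detail at the time of writing, which may change); a quick sketch of my own:

    // Declared arity is 3, so a two-argument call goes through V8's
    // arguments adaptor; passing undefined explicitly matches the arity.
    function sum3(a, b, c) {
      return a + b + (c || 0);
    }

    sum3(1, 2);             // fewer args than declared: adaptation on every call
    sum3(1, 2, undefined);  // same result, call matches the declared arity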


It's only necessary in edge cases of super-hot code; avoiding argument adaptation "because performance" is a bad idea.

It's also a V8-specific weakness; it's not hard to imagine a small fix getting rid of the performance penalty.


Given that this contrasts a JavaScript implementation with a Rust implementation compiled to WASM: my 5-minute investigation shows that Rust doesn't have optional arguments at all.

So in Rust you would be forced to write it the way you consider to be "bad idea".

Many people think that default function arguments are a bad idea and languages like Rust or C or Go or Java don't even have them.


You get creative: passing a map, passing an Option. It's very trivial to do.


Next steps: leverage web workers and GLSL?


Irrelevant and maybe a bit small-minded, but the font of the article is too squished to be pleasantly readable.


Not irrelevant. Question for you, is this more legible:

https://twitter.com/mraleph/status/965686742614462466

I can update CSS if that helps.

UPD. Updated CSS to use non monospace fonts for the body.


This is better by far. I'd be interested if there's a better combination of code + paragraph font, but otherwise the only other minor adjustment I'd make is increasing the padding / margins between text and code.

Thanks for taking criticism in a healthy way.


For writing high-performance JavaScript, I'm really hoping that AssemblyScript takes off[1]. The project is still in its early stages, but theoretically it can play the same role as Cython: you write JS/TS code, find the slow parts, and get perf improvements by adding stricter types to your code and changing it to not use dynamic JS features. You can stay pretty much in JS rather than having to switch to a completely new language to get predictably fast performance.

[1] https://github.com/AssemblyScript/assemblyscript


For me the purpose of wasm would not be just performance, but to let me avoid javascript entirely.

Javascript is already very fast, but compiling to javascript feels awkward.


I don't think you can avoid JavaScript even with WASM - and WASM in its current form is, for some languages, a worse compilation target than JavaScript.


Well sure it doesn't support the DOM yet, but I think the design goal of WASM is to do just that.

It should have been trivial, since the beginning of computing, to distribute programs online efficiently.

The web is already platform independent, but it also needs to be fast and language independent.


I can't see any numbers comparing the Rust implementation and the optimised JavaScript. Did I miss them in the article?


A comment upthread did the math:

> The author is able to achieve a speed-up of ~4x which is close to the 5.89x achieved by the Rust implementation.


The math is not that obvious because the Rust implementation does not match 1:1 what the baseline JS version was doing (e.g. it sorts only generatedMappings and not originalMappings).

I will do measurements later to compare and update the post.


Sounds good!

I enjoyed the article :)


With the caveat that it's apples-to-oranges (algorithmic changes to the JavaScript version (~4x) are not in the Rust version (~5.9x)).


It's not just about the speedup; it's also about writing backend and front-end in one language. But the speedup is real too.


So...

I'm a (junior/so-so) react dev. I like the language, I probably could get better at it but I've gotten to the point where I like the sound of my own music.

That said, my question is the following. There seem to be a lot of resources on the web of the type "Hey Rust and WASM is a thing! You can make webpages with it". Ok, fine. However, I don't see a lot of the things that are in libraries like Vue and React that offer me SPAs, fast development time and component (OR!) functional pattern design (I won't mention the ability to add npm packages, because yeah, Rust/WASM doesn't have an ecosystem yet, so that might be punching a bit below the belt). Is there anyone out there making a React-like or "eco-system builder"-esque platform that would give me a lot of the benefits I'm seeing with React but with increased performance?

Also, I've done some low-level programming before and think that Rust is very cool for that (yay memory safety! yay error messaging (no seriously, yay)!). However, and I may be betraying my ignorance here, if I want to animate a div to fly across the page, am I going to have to write lots of low level code to do that? If that's the case, and development time suffers to eke out that extra bit of performance, I can't see this as having much utility outside of niche fields like game development.

I don't mean to be overly critical, mind, as I'm still (and probably will always be) a bit of a n00b. I'd love it if somebody would point me at some resources that I could burn a weekend on, if I thought the juice was worth the squeeze. I'm just not sure I know enough to know if this is something that I could be productive in (some day).


> "Hey Rust and WASM is a thing! You can make webpages with it". Ok, fine.

It is very early days. More to come...

> However, I don't see a lot of the things that are in libraries like Vue and React

Yes. There's not too many of these yet; there are some though. For example, https://github.com/DenisKolodin/yew

There's also the inverse: can we re-write parts of libraries like Vue or React in something that compiles to wasm, so that you get better performance as the user? I know of at least one major SPA framework that's doing this. Don't want to spill the beans too much, even though it is technically public knowledge.

That said, that's how I think wasm will impact the lives of most JS programmers: as technology that underpins the libraries they use to make them better. Unless you want to, or unless you want to write a high-performance library, I wouldn't expect wasm to really change the way most JS programmers operate. It's about augmenting JS, not replacing it.

> if I want to animate a div to fly across the page, am I going to have to write lots of low level code to do that?

That depends entirely on the library!

> I'd love it if somebody would point me at some resources that I could burn a weekend on

They're sorta scattered all over the place right now. Such is life for early adopters. More will come as stuff matures. Don't under-estimate how much this will change as the tooling gets built out, for example.


> Can we re-write parts of libraries like Vue or React in something that compiles to wasm, so that you get better performance as the user?

For React, we would love to do this although it's not clear what parts of React would benefit from being moved to wasm right now.


Yeah, that's what I've been hearing through the grapevine. We'll see! I'd love to see it.


Call us up if you have any advice!


I’m confused as to why people think this isn’t generally applicable to JS here.

Very few of these changes are VM-specific or likely to change (or any more so than WASM implementations).

- Choose your algorithm carefully.
- Make sure the data you're applying the algorithm to is a good fit.
- Pay attention to arity & GC pressure.

None of these are hard to do in JS & most of the VM debugging was to help identify problems in existing unoptimized code.

The rest are lessons you can take into ANY JS data processing.


Source maps are a debug tool. Why does the performance matter?

(And if you're shipping so much JavaScript to your site users that you need to "minify", maybe you're doing it wrong.)


Tools like Sentry parse them, and need consistent fast performance in this area. Other debugging tools use them.

Your statement in brackets seems very "those damn kids"-ish. You need to "minify" your JS files even if you're shipping a small amount of JS, because loading performance matters and JS minifies very well.


Browsers also use them to display "expanded" code in the console and debugger.


> Source maps are a debug tool. Why does the performance matter?

If the source map decoder is slow, then the debugger feels slow.


> if you're shipping so much JavaScript to your site users that you need to "minify", maybe you're doing it wrong.

100% agree.


Make that 101% agree.


102%



