Fast, Bump-Allocated Virtual Doms with Rust and Wasm (hacks.mozilla.org)
241 points by BoumTAC on March 14, 2019 | 59 comments



How exciting. One of the great things about React is that it's just as much a pattern as a library, and with Rust macros and v-dom it should be rather easy to build something similar to JSX in Rust (with the proper Rust-isms of course). Can't wait for Rust to rule the web.


I'm watching https://github.com/bodil/typed-html personally; not just JSX, but statically typed!
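
For a taste, usage looks roughly like this; a sketch from memory of the project's README, so imports and attribute details may not match the current release exactly. Invalid nesting or attributes become compile errors rather than runtime surprises:

    use typed_html::dom::DOMTree;
    use typed_html::html;

    fn main() {
        // Text nodes are string literals; element nesting and attributes are
        // checked by the type system at compile time.
        let doc: DOMTree<String> = html!(
            <ul class="items">
                <li>"Statically typed"</li>
                <li>"JSX-like syntax"</li>
            </ul>
        );
        println!("{}", doc.to_string());
    }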


https://github.com/chinedufn/percy/ already has a virtual DOM implementation and a procedural macro for JSX-like HTML.


Also see scalatags! (this is in some ways a response to your child comment).


>Virtual DOM libraries provide a declarative interface to the Web’s imperative DOM.

I'm not sure what this means. DOM is an object model for HTML. It is mutable, but HTML itself is definitely declarative.

Which brings up an interesting question. Why are DOM-diffs something that is done by userland libraries when it can and probably should be done by the browser itself?


HTML is declarative, but it's also a string. It is faster to manipulate the DOM to make the changes you want than to generate an HTML string and ask the browser to parse it.

A virtual DOM allows you to use declarative _code_ instead of a string to get the same benefits as declarative HTML, but avoids the overhead of HTML by manipulating the DOM directly under the hood.
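
To make the contrast concrete, here is a minimal sketch in Rust with web-sys (to match the article's stack; it assumes the Window, Document, Element and Node features are enabled, and the container/name values are just placeholders):

    use wasm_bindgen::prelude::*;
    use web_sys::{window, Element};

    // String route: the browser has to re-parse the markup on every update.
    fn update_via_html_string(container: &Element, name: &str) {
        container.set_inner_html(&format!("<p>Hello, {}</p>", name));
    }

    // Direct route: create and attach nodes imperatively, with no parsing.
    // A virtual DOM library issues calls like these under the hood, but only
    // for the parts of the tree that actually changed.
    fn update_via_dom_calls(container: &Element, name: &str) -> Result<(), JsValue> {
        let document = window().unwrap().document().unwrap();
        let p = document.create_element("p")?;
        p.set_text_content(Some(&format!("Hello, {}", name)));
        container.append_child(&p)?;
        Ok(())
    }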


As I understand it - please correct me if I'm wrong!

> HTML itself is definitely declarative

HTML is the markup language, not the programming language (that's JavaScript). The API to actually change the DOM is imperative. See https://developer.mozilla.org/en-US/docs/Web/API/Node (I don't quite grasp why they felt the need to specify that)

> Why are DOM-diffs something that is done by userland libraries?

I guess if the browser provided primitives to build trees and diff them that would be cool. But I presume using WASM *is*, in effect, getting native-browser code to do the diffing.


Because virtual DOM is not the only way to achieve efficient DOM updates, and even within the concept of virtual DOM there could be many different ways to achieve efficient diffing (e.g. React vs Snabbdom). There is nothing special about a particular virtual DOM spec to deserve a place in web standards.


Web standards could specify an API (which is very similar across all virtual DOM implementations) and let browsers choose their own implementations and optimisations.

Since the browser has full access to the entire object model, memory layouts, DOM optimisations, and so on, it's a shame that we have to write code that should be the prerogative of the browser.


There is enough differentiation in virtual DOM designs[1] that I don't think implementing a lowest common denominator that is flexible enough to support competing strategies would be useful. Being a browser API it would need to be a rather timeless, low level design. Essentially, we already have that – the imperative DOM API.

With WASM the performance gap between libraries and native browser code will be reduced even further. Focusing on that has the benefit of lifting all boats, not just a particular DOM building paradigm that has been popular recently.

[1] Just some examples that come to mind:

- logic that decides when to update an element vs when to create a new one (including but not limited to having the concepts of components, thunks, deciding to look at class names, ability to move nodes, etc.)

- design of lifecycle methods and hooks, as well as any performance-oriented optimizations such as asynchronous / delayed rendering, short circuiting logic, etc.

- handling of props vs attributes vs reflected attributes


> Essentially, we already have that – the imperative DOM API. With WASM the performance gap between libraries and native browser code will be reduced even further.

Why waste time implementing and “reducing it even further” and not just implement it in the browser?

> Just some examples that come to mind

First bullet point is relevant for internals mostly, little bearing on actual API.

Second bullet point is nearly identical in all virtual doms and similar lifecycle hooks exist in WebComponents (which squandered the opportunity to introduce a declarative API).

Third bullet point is valid, but the main problem isn’t the difference between library APIs. The main problem is that there is no browser API, so everyone has to reinvent the wheel. A simple {attrs: {}, props: {}} would render the differences moot (or you would have very thin wrappers on top for your favorite syntax)


> Why waste time implementing and “reducing it even further” and not just implement it in the browser?

Because the browser should not offer inflexible implementations of complex, highly opinionated and currently overhyped paradigms, because those standardized APIs will stay with us for much longer than they will be useful, wasting everyone's time.

We're only talking about virtual dom here because it's a popular concept with good library implementations. We don't need to reimplement it in all major browsers because we already have it working well.

Even aside from that, it is easier to build one library than implement the same spec in all major browsers. Moreover, library designs compete with each other, and can be improved faster than APIs baked into a browser which will have to be maintained in a backwards compatible manner for more than a decade.

> First bullet point is relevant for internals mostly, little bearing on actual API.

The concepts of Components, State, Context, Thunks, Fibers, Plugins, etc. are very important differentiators between various virtual DOM APIs. The presence or absence of those concepts, let alone their specific design, strongly affects the API surface and what users can do with it and how. Don't mistake React's API for some kind of standard.

Once the hype inevitably moves on from virtual DOM to whatever the next declarative UI paradigm will be (e.g. FRP with precision DOM updates) this whole standardization and reimplementation exercise will be rendered a giant waste of time.


> because those standardized APIs will stay with us for much longer than they will be useful, wasting everyone's time.

Oh wow. You just described the existing imperative DOM APIs, didn't you? An inflexible implementation wasting everyone's time. When was the last time you used the actual DOM APIs? The story of web development has been: "JFC, these things are impossible to work with, let's waste time creating actually useful abstractions on top of them".

> it's a popular concept with good library implementations. We don't need to reimplement it in all major browsers because we already have it working well.

You know what one of the goals of jQuery was? To become a disappearing library. As browser APIs got better and better, the need for jQuery would diminish and it would disappear, becoming just a browser API.

Why would browsers implement querySelector and querySelectorAll? We already had it in jQuery and it was working well.

Why would browsers implement fetch? We already had jQuery.ajax, axios, superfetch and dozens of others, and they were working well.

Why would browsers implement <insert any improvement>? We already have <insert dozens of libraries> and they are working well.

> it is easier to build one library than implement the same spec in all major browsers.

It doesn't mean we should freeze the spec in the same state it was in 1998.

> The concepts of Components, State, Context, Thunks, Fibers, Plugins, etc. are very important differentiators between various virtual DOM APIs.

None of those refer to actual APIs. They are internal implementation details, or additions/APIs layered on top of a virtual DOM.

We are talking about one thing specifically: we need a browser-native declarative DOM API with browser-native DOM-diffing that wouldn't require us to implement it in userland.

The rest (thunks, state management, plugins, whatever) can be provided by actual libraries on top of an actual built-in, high-performance virtual DOM API.

Because the browser knows infinitely more about what's happening to the DOM than userland libraries and has access to infinitely more optimisations. All userland code needs to do is to tell the browser: this and that changed.

Funnily enough, browser implementors are now spending considerable amounts of time implementing CSS Containment [1] (emphasis mine):

--- quote ---

Browser engines can use that information to implement optimizations and avoid doing extra work when they know which subtrees are independent of the rest of the page.

Imagine that you have a big HTML page which generates a complex DOM tree, but you know that some parts of that page are totally independent of the rest of the page and the content in those parts is modified at some point.

Browser engines usually try to avoid doing more work than needed and use some heuristics to avoid spending more time than required. However there are lots of corner cases and complex situations in which the browser needs to actually recompute the whole webpage.

--- end quote ---

Wow. Browsers (and browser implementors) actually want the developers to tell them what exactly changes on the page so that they don't do extra work. And wow, you can actually implement the same spec in all major browsers (eventually).

So why not virtual DOM?

> this whole standardization and reimplementation exercise will be rendered a giant waste of time.

So what you're saying is essentially this: new things are hype, DOM APIs should never get updated because who cares about the needs of developers.

[1] https://blogs.igalia.com/mrego/2019/01/11/an-introduction-to...


> Oh wow. You just described the existing imperative DOM APIs, haven't you? Inflexible implementation wasting everyone's time.

Existing DOM APIs are very simple, low level and unopinionated. A pleasure to build libraries on. That's what durable platform APIs should look like.

It is very easy to build a virtual DOM and, importantly, other DOM management paradigms on top of those low-level APIs.

The same cannot be said about virtual DOM: it's a very opinionated, very rigid paradigm. I know because I built FRP UI libraries based on Snabbdom and on native DOM APIs. The latter is much simpler to deal with and more performant. Virtual DOM only works well if that's exactly what you want. It has no place among browser APIs, at least not in any recognizable shape or form.

Regarding performance, the whole point of the virtual dom is that the diffing engine does not know which elements changed or didn't change. It gets a new virtual subtree and has to diff it with the previous one. The browser would be doing all the same diffing work, just closer to the metal. But we will soon be able to do the same with just WASM.
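
For anyone unfamiliar, the core of that diffing step looks roughly like this toy sketch (made-up types, not any particular library's representation):

    // A toy virtual node: either text or an element with children.
    enum VNode {
        Text(String),
        Element { tag: String, children: Vec<VNode> },
    }

    // What a patch might look like once diffing is done.
    #[derive(Debug)]
    enum Patch {
        Replace(Vec<usize>),         // path to a node that must be rebuilt
        SetText(Vec<usize>, String), // path plus the new text
    }

    // Walk the old and new trees together, recording where they diverge.
    fn diff(old: &VNode, new: &VNode, path: &mut Vec<usize>, patches: &mut Vec<Patch>) {
        match (old, new) {
            (VNode::Text(a), VNode::Text(b)) => {
                if a != b {
                    patches.push(Patch::SetText(path.clone(), b.clone()));
                }
            }
            (
                VNode::Element { tag: t1, children: c1 },
                VNode::Element { tag: t2, children: c2 },
            ) if t1 == t2 && c1.len() == c2.len() => {
                for (i, (o, n)) in c1.iter().zip(c2).enumerate() {
                    path.push(i);
                    diff(o, n, path, patches);
                    path.pop();
                }
            }
            _ => patches.push(Patch::Replace(path.clone())),
        }
    }

    fn main() {
        let old = VNode::Element { tag: "p".into(), children: vec![VNode::Text("Hi".into())] };
        let new = VNode::Element { tag: "p".into(), children: vec![VNode::Text("Hello".into())] };
        let mut patches = Vec::new();
        diff(&old, &new, &mut Vec::new(), &mut patches);
        println!("{:?}", patches); // prints: [SetText([0], "Hello")]
    }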


> I know because I built FRP UI libraries based on Snabbdom and on native DOM APIs. The latter is much simpler to deal with and more performant.

I built something on native APIs and on userland APIs. Native APIs are more performant.

Really? That surprises you?

> The browser would be doing all the same diffing work, just closer to the metal.

Exactly my point


What a VDOM gets you is a simpler interface than the browser DOM, with an often faster comparison than the browser actually provides. There have been significant real browser DOM improvements since React came out, but I'm pretty sure an optimized VDOM in WASM could be faster because of the overhead of interacting with the real/full browser DOM. There are also side effects wrt the full/real DOM in practice.

I agree though, nothing that requires a place in web standards at all.


Well... faster than what exactly? You have to do the virtual DOM pattern (generating new DOM state and then diffing it with the old state) with virtual elements. You can't compare a proper virtual DOM to using real DOM elements instead of virtual ones in a virtual DOM pattern; it wouldn't make any sense.

But there are other non-virtual-DOM ways to manage DOM state efficiently and in a maintainable manner. For example, my own library uses Observables to drive precise DOM updates and works with trees holding real (not virtual) DOM elements, so it doesn't need to do any diffing at all: https://github.com/raquo/Laminar

I don't think it's a given which of these techniques would be faster, it depends heavily on the particular use case and the implementation of diffing (for virtual DOM) and Observables (for my pattern). If both are well optimized I'd expect virtual DOM to lose in a lot of cases.
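
To illustrate, the observable approach boils down to something like this toy sketch (made-up types, not Laminar's actual API, which is Scala): each piece of state notifies exactly the DOM writes that depend on it, so there is nothing to diff.

    // A toy "observable": subscribers are DOM-writing closures, so a state
    // change maps directly to the one write it needs, with no tree diffing.
    struct Var<T> {
        value: T,
        subscribers: Vec<Box<dyn FnMut(&T)>>,
    }

    impl<T> Var<T> {
        fn new(value: T) -> Self {
            Var { value, subscribers: Vec::new() }
        }

        // Register a callback and run it once with the current value.
        fn subscribe(&mut self, mut f: impl FnMut(&T) + 'static) {
            f(&self.value);
            self.subscribers.push(Box::new(f));
        }

        // Setting a new value notifies every subscriber exactly once.
        fn set(&mut self, value: T) {
            self.value = value;
            for f in &mut self.subscribers {
                f(&self.value);
            }
        }
    }

    fn main() {
        let mut count = Var::new(0u32);
        // In a real UI this closure would call text_node.set_text_content(...).
        count.subscribe(|n| println!("render: clicked {} times", n));
        count.set(1);
        count.set(2);
    }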


Like I said, it could; it probably depends on actual use... DOM navigation for reads or updates can be optimized, but depending on how it is done it may not work as well. React itself is moving towards diffing against the browser's real DOM, IIRC. Browsers have gotten a lot better than in the past. That said, actually comparing each node for updates against large trees may be more costly than updating and diffing against a partial abstraction.


What I'm saying is outside of the virtual DOM paradigm you might not need to diff any elements at all, real or virtual, and so you wouldn't care about the performance of DOM reads, as you're not doing them.

Then it becomes a matter of DOM write performance, but that is the same for everyone assuming the native DOM API commands issued by the libraries are the same, which is a more or less reasonable assumption for well optimized libraries even if they use different paradigms to calculate what those commands should be.


Templates are faster than vdom and are a browser feature now.


Lots of good answers, but the one thing I noticed nobody has mentioned (unless I missed it) is that "DOM" stands for "Document Object Model". The HTML is the "Document". The DOM is an object oriented programmatic interface to the document. The DOM was used not just for HTML, but also for SGML and XML (if you use libraries like Sax).

So the statement you have quoted is slightly confused (although a very common confusion). People think of the DOM as being essentially an AST of the document tree. The "Virtual DOM" is seen as a kind of scratch pad for that tree: update the "virtual DOM" and the AST is automatically brought into sync. At that point the "DOM" (as Document Object Model) loses any meaning, because it is no longer that object model (with its completely insane API) that we all love to hate.

Why are these AST diffs being done in userland libraries when they could be done by the browser itself? Because you already have a DOM. Why would you possibly want another programmatic interface? ;-) Though tongue in cheek, this is really, historically, the answer. The API you've got is the API you've got.

Providing an ability to get the document as an AST and to update it by furnishing a new AST would be wonderful. But you've got to get it by the standards committee.

Edit: I should point out that browsers used to support XSLT which allows you to transform the document declaratively based on the AST. Not sure if any browser still supports it though...


Because once the declarative HTML has been transformed into DOM objects, they are not just dumb data but OO-style rich objects. They have identity and internal state, which may or may not be reflected by their markup (not even by innerHTML or outerHTML). For example, when you attach an event handler, that object gains that behavior. If that object has some other change, and a DOM-diff library stupidly deletes the original DOM node and adds a new one, that could be lost. The same goes for typed text in <input> fields.

Simply put, the OO nature of the DOM complicates things. If DOM were just a simple recursive tree data structure of which HTML is a serialization, there would be no such issues.


Virtual DOM is needed to enable a programming model where, in each UI iteration, you return the entire UI tree anew. Actually creating the DOM tree anew each time would be terribly wasteful, and you might lose lots of state (e.g. half-entered text or selected text or something), so instead you do DOM-diffing. For some reason, people consider that model to be more hip than the normal one where you take care of updating yourself. Probably because code that creates the entire UI tree anew is easier to write, but tbh it's also more wasteful. The classic ease-of-use vs computation tradeoff.
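
In other words, the model is "UI = f(state)": every update re-runs a pure view function and the library diffs the result against the previous tree. A toy sketch (made-up types):

    #[derive(PartialEq, Debug)]
    enum Ui {
        Text(String),
        Node(&'static str, Vec<Ui>),
    }

    struct State { count: u32 }

    // Every update rebuilds the whole description of the UI from state...
    fn view(state: &State) -> Ui {
        Ui::Node("div", vec![Ui::Text(format!("Clicked {} times", state.count))])
    }

    fn main() {
        let old = view(&State { count: 1 });
        let new = view(&State { count: 2 });
        // ...and the library diffs old vs new so only the changed text node is
        // written to the real DOM, which is what preserves things like focus
        // and half-entered input.
        assert!(old != new);
    }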


>For some reason, people consider that model to be more hip than the normal one where you take care of updating yourself.

A huge benefit of the virtual DOM model as used in React is that you don't need separate code paths for the initial render of a widget and for updating part of a widget. I can't count the number of times I've seen pre-React widgets that were coded to assume most attributes would never change, had bugs in updating some uncommonly-updated attributes, or gave up on having separate initial-render/update code paths by throwing away the widget's entire DOM and re-rendering on most attribute changes. With the React way, there's one code path for initial render and updating, so it doesn't take any special effort to make every attribute efficiently update-able. A whole category of bugs disappeared in codebases I've worked on that adopted React.


It's not because it's "easier to write". It's because code that generates the entire tree every time can be side-effect-free, while stateful UI is, well, stateful.

We can debate if that's good or not, but it's not about "ease of use". Neither is it necessarily more wasteful. It's a different paradigm.


My guess is that the desired DOM is entirely re-created from some state, instead of mutating it in place. The declarativity then is that "given this state, the DOM should be this".


If something gets standardized, then it becomes hard to iterate on the details. You can't make incompatible changes to browser standards, or else you break compatibility with old web pages.

React is currently working on figuring out how to adapt their API to allow asynchronous updating of the DOM. If the virtual DOM was standardized already, then the React team would have been very limited in how they could implement asynchronous rendering.


1. This is called "Shadow DOM", and it is only supported in the latest browsers. Wasm is, AFAIK, slightly more widely compatible.

2. Even if calls from wasm to the browser are as fast as JS, they will still incur overhead above "internal" function calls.


Shadow DOM has nothing to do with this.


Yes, I was thinking web developers, rather than using Rust and Wasm directly, would first get the benefits when the libraries they use start moving the heavy-duty parts to it. Can't wait to see if someone uses this for building a React-like framework.


The Rust Wasm working group agrees, and that's why they've been pursuing a strategy of building libraries, rather than full front-end frameworks. There are some people who are doing that, and now that the "build a library in wasm" story is going pretty well, there are some plans to move into that space too (https://rustwasm.github.io/2019/03/12/lets-build-gloo-togeth...). But almost all of the previous work has been for libraries.


I am hopeful for the other half to land as well - here's to hoping that something like Piet and Druid results in a completely Rust-based application delivery platform that ships in the browser.


Could you elaborate on what Piet and Druid are, perhaps with a few links? I assume you don't mean the esolang[0].

[0] https://esolangs.org/wiki/Piet



Thanks! Looks very interesting, going to watch the video about Druid linked in the README.MD later[0].

And no worries, it's very easy to forget not everyone is introduced!

Anyway, to actually engage with your point: I can see the appeal of an all-Rust framework: good typing system + high performance (both in speed and low memory use, ideally) sounds fantastic!

So out of curiosity: how do you think hot swapping would be handled? I'm asking as someone who has not dived into Rust at all and has only observed it with interest from a distance.

For starters, I understand Rust has fairly slow compilation times, no? Or is that only true for optimized code and do we have fast debug options?

Similarly, hot swapping requires maintaining state. With JS that's not too difficult because you don't worry about memory layout: as long as the high-level structure and names are the same the code works fine (it just kills the JIT optimizations).

With WASM, that goes out the window: add or remove a field to your struct and all offsets change. Rust may guarantee memory safety, but that is not the same as guaranteeing that a snapshot of the memory state of one Rust program works with a different one.

I guess some kind of "export to/import from JavaScript based representation" glue code that runs every hot reload could work, but that sounds like it could really freeze the browser when hot-reloading the big frameworks.

[0] https://www.youtube.com/watch?v=4YTfxresvS8 would be funny if I forgot to include links myself at this point :p


Had to look up hot swapping; to be honest, I don't think the experience will be as good as it would be for dynamic languages like Dart/JavaScript.


I believe this is also true for development tooling, not just actual code executing in the user's browser. There's no real reason why tools like webpack, eslint, and so on need to be written in JS for most use cases.

Members of the Rust community in particular are actively working in this space, building ES parsers, transpilers, and module bundlers. There's no all-in-one solution but even now with a little effort you could replace webpack with Rust-made tools, so long as you don't mind losing hot reloading or code splitting.


See also (for one example) Notion (https://www.notionjs.com): Node version management, in Rust, for fully-reproducible environments. (I contribute a bit!)


Would be interesting to see if React could simply integrate it.


Firstly, congrats on shipping a virtual DOM lib in WASM. Hopefully, frameworks intent on using a V-DOM will greatly benefit from this.

Having said that, is a V-DOM required in 2019 if DOM updates are optimally batched, like in FastDom (https://github.com/wilsonpage/fastdom)? Decades of optimizing browser internals would surely account for not thrashing the DOM, if it is updated optimally. So, is it required?


They are only batched if they are all updates or all reads, but not if there's an interleaving of both.

Even if the browser optimizes it with an optimum update strategy, it doesn't matter if a developer who doesn't know better forces thrashing to occur. Virtual DOM libs help structure code in such a way that idiomatic code falls into the pit of success.
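
For anyone who hasn't hit this, the failure mode is interleaved reads and writes that force repeated layout recalculation. A rough sketch in Rust with web-sys (Element feature assumed); FastDom solves the same problem in JS by queueing reads and writes into separate phases:

    use web_sys::Element;

    // Interleaved read/write: each width read after a style write can force the
    // browser to recalculate layout again ("layout thrashing").
    fn thrashing(items: &[Element]) {
        for el in items {
            let w = el.client_width(); // read (may force a synchronous layout)
            let _ = el.set_attribute("style", &format!("width:{}px", w + 10)); // write
        }
    }

    // Batched: all reads first, then all writes, so layout settles once.
    fn batched(items: &[Element]) {
        let widths: Vec<i32> = items.iter().map(|el| el.client_width()).collect();
        for (el, w) in items.iter().zip(widths) {
            let _ = el.set_attribute("style", &format!("width:{}px", w + 10));
        }
    }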


It's never been required (and is usually substantially slower), but a virtual DOM may be worth the overhead because it avoids the need to organize those updates. Most cases aren't performance-sensitive to the point where that's the deciding factor in a decision.


It appears lit-html is using a method that keeps updates declarative. https://lit-html.polymer-project.org/


The optimization VDOMs make is orthogonal to the optimization that batching offers. VDOMs prevent you from having to update more of your tree than is necessary. Batching is a separate optimization, and could easily be applied to VDOM-originating mutations.


To build a client-rendered site? No, vdom is not required. To build a tool with multiple render targets? Maybe, or at least something similar.

In particular, the trend of rendering a page via a node.js server, and delivering the rendering logic to the client so that subsequent renders do not require a full round-trip is pretty alluring if you're building something more complex than a blog or run-of-the-mill ecommerce site.

https://catberry.org/ is one of the older examples I can think of that do the above sans vdom, though now that I look at it again, it seems to have changed quite a bit in the years since I first found it.


The benchmark has some old versions of Angular (2) and the legacy AngularJS (1.x) - how do the benchmarks look with a more recent version (v7)?


The post notes that they had issues with recent versions.


It seems memory fragmentation can occur, rather easily, if you hold onto a few of them.

> The disadvantage of bump allocation is that there is no general way to deallocate individual objects and reclaim their memory regions while other objects are still in use.


The use case in the article does not hold onto a few of them, though: it holds on to exactly two at any time.


Why virtual DOM is not a part of browser APIs is anyone’s guess at this point.


Because it's an inefficient hack and native APIs could improve upon the way in which it operates to such a degree that it would no longer be called the virtual DOM?


It is! They call it the DOM, for short.


Uh no, actually a virtual DOM would be a great addition to the browser. What is there now is an imperative API; the big win of a virtual DOM is that the specification of the UI is also the specification of all possible updates to the UI, in an efficient manner. It would be nice for the browser to include an API method that takes a VDOM data structure, diffs it, and applies the changes in native land instead of implementing this in userland.


I mean virtual DOM as exemplified by virtual-dom, hyperscript, React.


The benchmark leaves me wondering if this was worth it.


Could this be used in something like Cloudflare Workers to enable server-side rendering?


Bump allocation sounds a lot like heap allocation. Is a "bump" a small heap?


Bump allocation is a strategy for building an allocator; where that allocator's memory lives is irrelevant to the algorithm itself. Most of the time that is the heap, but you could write an allocator that takes a bunch of stack memory and hands it out too.


As the article describes, it's a small (or large...) heap with some specific behavior and restrictions: space is allocated in order, making allocation extremely fast; in exchange, there's no way to free individual objects; you must instead free them all at once.
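
A minimal sketch of the idea (not the article's implementation; real arenas like bumpalo handle alignment edge cases, chunk growth and drop semantics properly):

    // A toy bump allocator over a fixed byte buffer.
    struct Bump {
        buf: Vec<u8>,
        next: usize, // index of the next free byte
    }

    impl Bump {
        fn with_capacity(cap: usize) -> Self {
            Bump { buf: vec![0; cap], next: 0 }
        }

        // Allocation is just a pointer "bump": round up to `align` (a nonzero
        // power of two), check that there is room, and advance the cursor.
        fn alloc(&mut self, size: usize, align: usize) -> Option<&mut [u8]> {
            let start = (self.next + align - 1) & !(align - 1);
            let end = start.checked_add(size)?;
            if end > self.buf.len() {
                return None; // out of space; a real allocator would grow a new chunk
            }
            self.next = end;
            Some(&mut self.buf[start..end])
        }

        // Individual objects can't be freed; the whole arena is reset at once
        // (hence holding on to exactly two of these, as mentioned above).
        fn reset(&mut self) {
            self.next = 0;
        }
    }

    fn main() {
        let mut bump = Bump::with_capacity(1024);
        let a = bump.alloc(16, 8).unwrap().len();
        let b = bump.alloc(32, 8).unwrap().len();
        println!("allocated {} and {} bytes", a, b);
        bump.reset(); // frees everything from this frame in O(1)
    }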



