
Web standards could specify an API (which is very similar across all virtual DOM implementations) and let browsers choose their own implementations and optimisations.

Since the browser has full access to the entire object model, memory layouts, DOM optimisations, and so on, it’s a shame that we have to write code in userland that should be the prerogative of the browser.




There is enough differentiation in virtual DOM designs[1] that I don't think implementing a lowest common denominator flexible enough to support competing strategies would be useful. Being a browser API, it would need a rather timeless, low-level design. Essentially, we already have that – the imperative DOM API.

With WASM the performance gap between libraries and native browser code will be reduced even further. Focusing on that has the benefit of lifting all boats, not just a particular DOM building paradigm that has been popular recently.

[1] Just some examples that come to mind:

- logic that decides when to update an element vs when to create a new one (including but not limited to having the concepts of components, thunks, deciding to look at class names, ability to move nodes, etc.)

- design of lifecycle methods and hooks, as well as any performance-oriented optimizations such as asynchronous / delayed rendering, short circuiting logic, etc.

- handling of props vs attributes vs reflected attributes
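
To make the last point concrete, here is the props vs attributes vs reflected attributes distinction in plain DOM code, with no library involved:

    const input = document.createElement('input');

    input.setAttribute('value', 'hello'); // attribute: the default value
    input.value = 'world';                // property: the current, live value

    input.getAttribute('value'); // "hello" - the attribute did not change
    input.value;                 // "world"

    // Reflected attributes stay in sync in both directions, e.g. id:
    input.id = 'name';
    input.getAttribute('id');    // "name"

Every virtual DOM library has to pick its own policy for mapping a declarative description onto these three behaviours.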


> Essentially, we already have that – the imperative DOM API. With WASM the performance gap between libraries and native browser code will be reduced even further.

Why waste time implementing libraries and “reducing it even further” instead of just implementing it in the browser?

> Just some examples that come to mind

The first bullet point is mostly relevant to internals; it has little bearing on the actual API.

The second bullet point is nearly identical across all virtual DOMs, and similar lifecycle hooks exist in Web Components (which squandered the opportunity to introduce a declarative API).
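
For reference, these are the standard custom element lifecycle callbacks:

    class MyWidget extends HTMLElement {
      static get observedAttributes() { return ['label']; }
      connectedCallback()    { /* element was inserted into the document */ }
      disconnectedCallback() { /* element was removed from the document */ }
      attributeChangedCallback(name, oldValue, newValue) { /* an observed attribute changed */ }
    }
    customElements.define('my-widget', MyWidget);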

The third bullet point is valid, but the main problem isn’t the difference between library APIs. The main problem is that there is no browser API, so everyone has to reinvent the wheel. A simple {attrs: {}, props: {}} would render the differences moot (or you would have very thin wrappers on top for your favorite syntax).
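
Something like this hypothetical vnode shape, invented here purely for illustration:

    const vnode = {
      tag: 'input',
      attrs: { type: 'text' },    // applied with setAttribute
      props: { value: 'hello' },  // assigned as element properties
      children: []
    };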


> Why waste time implementing and “reducing it even further” and not just implement it in the browser?

Because the browser should not offer inflexible implementations of complex, highly opinionated and currently overhyped paradigms, because those standardized APIs will stay with us for much longer than they will be useful, wasting everyone's time.

We're only talking about virtual DOM here because it's a popular concept with good library implementations. We don't need to reimplement it in all major browsers because we already have it working well.

Even aside from that, it is easier to build one library than implement the same spec in all major browsers. Moreover, library designs compete with each other, and can be improved faster than APIs baked into a browser which will have to be maintained in a backwards compatible manner for more than a decade.

> First bullet point is relevant for internals mostly, little bearing on actual API.

The concepts of Components, State, Context, Thunks, Fibers, Plugins, etc. are very important differentiators between various virtual DOM APIs. The presence or absence of those concepts, let alone their specific design, strongly affects the API surface and what users can do with it and how. Don't mistake React's API for some kind of standard.

Once the hype inevitably moves on from virtual DOM to whatever the next declarative UI paradigm will be (e.g. FRP with precision DOM updates), this whole standardization and reimplementation exercise will be rendered a giant waste of time.


> because those standardized APIs will stay with us for much longer than they will be useful, wasting everyone's time.

Oh wow. You just described the existing imperative DOM APIs, didn't you? Inflexible implementations wasting everyone's time. When was the last time you used the actual DOM APIs? The story of web development has been: "JFC, these things are impossible to work with, let's waste time creating actually useful abstractions on top of them".

> it's a popular concept with good library implementations. We don't need to reimplement it in all major browsers because we already have it working well.

You know what one of the goals of jQuery was? To become a disappearing library. As browser APIs got better and better, the need for jQuery would diminish and it would disappear, becoming just a browser API.

Why would browsers implement querySelector and querySelectorAll? We already had it in jQuery and it was working well.

Why would browsers implement fetch? We already had jQuery.ajax, axios, superfetch and dozens of others, and they were working well.

Why would browsers implement <insert any improvement>? We already have <insert dozens of libraries> and they are working well.
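
The pattern is always the same; compare the library idioms with the native APIs that absorbed them:

    // Selection: jQuery's selector engine became a native API
    $('.item');
    document.querySelectorAll('.item');

    // AJAX: $.ajax and friends became fetch
    $.ajax({ url: '/api', dataType: 'json', success: console.log });
    fetch('/api').then(res => res.json()).then(console.log);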

> it is easier to build one library than implement the same spec in all major browsers.

It doesn't mean we should freeze the spec in the same state it was in 1998.

> The concepts of Components, State, Context, Thunks, Fibers, Plugins, etc. are very important differentiators between various virtual DOM APIs.

None of those refer to actual APIs. They are internal implementation details, or additional layers and APIs built on top of the virtual DOM.

We are talking about one thing specifically: we need a browser-native declarative DOM API with browser-native DOM diffing that wouldn't require us to implement it in userland.

The rest (thunks, state management, plugins, whatever) can be provided by actual libraries on top of a built-in, high-performance virtual DOM API.

Because the browser knows infinitely more about what's happening to the DOM than userland libraries and has access to infinitely more optimisations. All userland code needs to do is to tell the browser: this and that changed.
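
Purely hypothetical, but the entire userland surface could be as small as this (patchChildren is an invented name for illustration, not a real or proposed API):

    // Describe the desired state; the engine diffs and mutates internally.
    const container = document.querySelector('ul'); // some existing element
    container.patchChildren([
      { tag: 'li', props: { textContent: 'first' } },
      { tag: 'li', props: { textContent: 'second' } }
    ]);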

Funnily enough, browser implementors are now spending considerable amounts of time implementing CSS Containment [1] (emphasis mine):

--- quote ---

Browser engines can use that information to implement optimizations and avoid doing extra work when they know which subtrees are independent of the rest of the page.

Imagine that you have a big HTML page which generates a complex DOM tree, but you know that some parts of that page are totally independent of the rest of the page and the content in those parts is modified at some point.

Browser engines usually try to avoid doing more work than needed and use some heuristics to avoid spending more time than required. However there are lots of corner cases and complex situations in which the browser needs to actually recompute the whole webpage.

--- end quote ---

Wow. Browsers (and browser implementors) actually want the developers to tell them what exactly changes on the page so that they don't do extra work. And wow, you can actually implement the same spec in all major browsers (eventually).
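
And opting in is trivial. A sketch, assuming a .widget element whose subtree we know to be independent of the rest of the page:

    const widget = document.querySelector('.widget');
    // Layout and paint inside the subtree can no longer affect the
    // outside, so the engine can scope invalidation to the widget.
    widget.style.contain = 'layout paint';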

So why not virtual DOM?

> this whole standardization and reimplementation exercise will be rendered a giant waste of time.

So what you're saying is essentially this: new things are hype, DOM APIs should never get updated because who cares about the needs of developers.

[1] https://blogs.igalia.com/mrego/2019/01/11/an-introduction-to...


> Oh wow. You just described the existing imperative DOM APIs, haven't you? Inflexible implementation wasting everyone's time.

Existing DOM APIs are very simple, low-level and unopinionated. A pleasure to build libraries on. That's what durable platform APIs should look like.

It is very easy to build a virtual DOM, and importantly other DOM management paradigms, on top of those low-level APIs.
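
As a reminder, this is roughly the entire imperative vocabulary everything else is built from (assuming a <ul> already in the document):

    // Create, read, mutate, remove: that's the whole model.
    const list = document.querySelector('ul');
    const li = document.createElement('li');
    li.textContent = 'hello';
    list.appendChild(li);
    li.remove();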

The same cannot be said about the virtual DOM: it's a very opinionated, very rigid paradigm. I know because I built FRP UI libraries based on Snabbdom and on native DOM APIs. The latter is much simpler to deal with and more performant. Virtual DOM only works well if that's exactly what you want. It has no place among browser APIs, at least not in any recognizable shape or form.

Regarding performance, the whole point of the virtual DOM is that the diffing engine does not know which elements changed or didn't change. It gets a new virtual subtree and has to diff it against the previous one. The browser would be doing all the same diffing work, just closer to the metal. But we will soon be able to do the same with just WASM.
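
A toy sketch of that diffing step, with plain objects standing in for vnodes (not any particular library's algorithm):

    // A vnode is either a string (text) or a { tag, children } object.
    function render(vnode) {
      if (typeof vnode === 'string') return document.createTextNode(vnode);
      const el = document.createElement(vnode.tag);
      (vnode.children || []).forEach(c => el.appendChild(render(c)));
      return el;
    }

    function patch(parent, el, oldV, newV) {
      if (typeof oldV === 'string' || typeof newV === 'string') {
        if (oldV !== newV) parent.replaceChild(render(newV), el);
      } else if (oldV.tag !== newV.tag) {
        parent.replaceChild(render(newV), el); // different tag: rebuild subtree
      } else {
        // Same tag: recurse. Added/removed children are omitted for brevity;
        // the point is that both trees must be walked to discover changes.
        const n = Math.min((oldV.children || []).length, (newV.children || []).length);
        for (let i = 0; i < n; i++) {
          patch(el, el.childNodes[i], oldV.children[i], newV.children[i]);
        }
      }
    }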


> I know because I built FRP UI libraries based on Snabbdom and on native DOM APIs. The latter is much simpler to deal with and more performant.

I built something on native APIs and on userland APIs. Native APIs are more performant.

Really? That surprises you?

> The browser would be doing all the same diffing work, just closer to the metal.

Exactly my point




