Virtual DOM is pure overhead (2018) (svelte.dev)
316 points by Maksadbek on Feb 1, 2023 | 326 comments



I'm quoted in this blog post so I figured I'd respond. I'm a former member of the React team but I haven't worked on it in a long time.

Largely I agree with everything in this article on a factual basis but I disagree on the framing of the trade-offs. Two points in particular:

1. Before open sourcing React we extensively measured performance on real world applications like mobile search and desktop ads management flows. While there are pathological cases where you can notice the overhead of the virtual DOM diff it's pretty rare that it meaningfully affects the user experience in practice, especially when compared to the overhead of downloading static resources like JS and rendering antipatterns like layout thrash. And in the situations where it is noticeable there are escape hatches to fix it (as mentioned in the article).

2. I disagree with the last sentence strongly. I would argue that Svelte makes huge sacrifices in expressive power, tooling and predictability in order to gain performance that is often not noticeable or is easily achieved with React's memoization features. React was always about enabling front-end engineers to take advantage of software engineering best practices so they could level up velocity and quality. I think Svelte's use of a constrained custom DSL is a big step backwards in that respect so while I appreciate the engineering behind it, it's not a technology I am interested in using.
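For context, the memoization idea referred to here can be sketched in plain JS. This is illustrative only: the `memoComponent` helper and the string-returning "component" are my own stand-ins, not React's actual implementation, but the principle is the same as `React.memo` — skip recomputation when props are shallow-equal to the previous props.

```javascript
// Sketch of the idea behind React.memo: skip recomputing a component's
// output when its props are shallow-equal to the previous props.
// (Illustrative only; React's real implementation is more involved.)
function shallowEqual(a, b) {
  const aKeys = Object.keys(a);
  const bKeys = Object.keys(b);
  if (aKeys.length !== bKeys.length) return false;
  return aKeys.every((k) => a[k] === b[k]);
}

function memoComponent(render) {
  let lastProps = null;
  let lastOutput = null;
  let renders = 0;
  const wrapped = (props) => {
    if (lastProps && shallowEqual(props, lastProps)) return lastOutput;
    lastProps = props;
    lastOutput = render(props);
    renders++;
    return lastOutput;
  };
  wrapped.renderCount = () => renders;
  return wrapped;
}

const Greeting = memoComponent(({ name }) => `<p>Hello, ${name}!</p>`);
Greeting({ name: "Ada" }); // renders
Greeting({ name: "Ada" }); // shallow-equal props: cached, no re-render
Greeting({ name: "Bob" }); // renders again
console.log(Greeting.renderCount()); // 2
```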

Even though I disagree on these points I think it is a well-written article and is a pretty fair criticism of React outside of those two points, and I could see reasonable people disagreeing on these trade-offs.


Regarding your second point, this is exactly why I like Vue and Svelte so much over React (which I am also quite familiar with). Coming from a design background and having learned HTML/CSS first, I find the Svelte and Vue approach to SFCs and templating far easier to understand, write, and visually parse than React, where everything is always JS first. I can't think of a situation where Svelte/Vue limited me from doing something you can do in React, and Svelte/Vue make getting into a codebase far more accessible for people with a wider range of skillsets.

The secondary effects of React's popularity have been a significant loss of emphasis on HTML and CSS skills in favor of doing everything in JS, which often results in div soup that is less performant and less capable. HTML and CSS not being first-class citizens, as they are in other frameworks, means developers simply aren't encouraged to learn them nearly as well as they used to.

Front-end frameworks should not be just for front-end developers. They should be understandable and accessible to people who are not JS first and who come from backgrounds that are not purely engineering. In my experience it is much easier to teach someone familiar with design and HTML/CSS how to explore a Vue or Svelte codebase than a React one, and the JS devs I have worked with who fully learn these libraries have not complained about any limitations compared to React.


> HTML and CSS not being first-class citizens as they are in other frameworks

In what ways are HTML and CSS less than first-class citizens in React? In my React codebases we've always written CSS (or SCSS) stylesheets, and components bottom out in JSX (which is almost exactly an HTML template with inserts).

In my experience the biggest barrier to (and most legitimate complaint about) React vs other frameworks is state management. It's always had special rules around that (even for experienced JS devs), and hooks turned that up to 11 (not without good reason, but still). While Vue at least has always had super simple state management: you modify normal JS values and the UI updates. Are you sure that hasn't been part of the divide you've observed?

(for the record it is also possible to retrofit React with state management more like Vue's (MobX, etc), but it'll be a compromised experience in some other small ways, and it may create some pain around some other library integrations that ship as hooks)
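For readers unfamiliar with that model, the "modify normal JS values and the UI updates" experience can be sketched with a Proxy, which is roughly the mechanism Vue 3 and MobX use. The `reactive` helper and callback wiring here are hypothetical simplifications, not either library's API:

```javascript
// Minimal sketch of Vue 3 / MobX-style reactivity with a Proxy:
// assigning to a plain-looking property triggers an update callback.
// (Hypothetical helper; real libraries track dependencies per property
// and batch/schedule re-renders instead of calling back synchronously.)
function reactive(target, onChange) {
  return new Proxy(target, {
    set(obj, key, value) {
      obj[key] = value;
      onChange(key, value); // notify subscribers, e.g. schedule a re-render
      return true;
    },
  });
}

let renderCount = 0;
const state = reactive({ count: 0 }, () => renderCount++);

state.count = 1; // looks like a normal assignment, but triggers an update
state.count = 2;
console.log(renderCount); // 2
```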


There is no built-in support for writing your styles alongside your JSX the way Vue and Svelte have. Both provide automatic component style scoping, escape hatches, deep selectors, etc. Vue passes classes given to a component instance onto the top-level element of that component. All of this makes styling with plain CSS and SCSS 1000x easier than it is in React.

Then there is the template side of things where it is much easier to read a basic conditional or loop in Vue/Svelte for someone who doesn't know JS.

How much have you used Vue or Svelte? It is a significantly better developer experience for building HTML and styling it because it doesn't force everything to be written in JS. In React JS is the only first-class language. There is no built-in support for SFCs, style scoping, etc. Thus the awfulness of CSS-in-JS was born.


There is definitely no built-in support for those things, but I don't think it's fair to call them second class citizens in the React world. AngularJS bundled an entire http client back in the day, but that doesn't make it more "first class" in the AngularJS ecosystem than in Vue or React!

My experience with Vue has been pretty mixed here, though. It's nice having everything in one file, sure, but it's also a pain having to create a new file whenever you want a new component. In JSX-based frameworks, I tend to start with a component in a file, then split it up into multiple components in one file, then finally into different components in different files as I'm going along - the same as I would do for normal Javascript functions. That's difficult to do in Vue, and I often find components tend to grow much larger and more unwieldy because it's easier to just add more to a single component than to split it into multiple components.

I also like the way CSS is built in, but I'm unconvinced that magic style scoping is all that helpful. I found myself fighting against it, and trying to remember the correct `::v-deep` incantation more often than I'd like. For me, the ideal abstraction here is CSS modules, where the class names are scoped, but the actual declarations behave like normal CSS. I never managed to get CSS modules to work with Vue, but we were stuck on Vue 2 for a long time, and that might have changed with more recent versions.

But I think that's the thing: there isn't really a clear "best in class" here. Different tools work for different people and in different contexts. You talk about the awfulness of CSS-in-JS, but the last time I used it, I found it struck a really good balance: fairly conventional CSS (albeit not in CSS format) that is still collocated with JS components. There are definitely downsides, but there are downsides in most of these solutions.


> For me, the ideal abstraction here is CSS modules, where the class names are scoped, but the actual declarations behave like normal CSS.

How is this different from Vue or Svelte scoped style blocks? Those are just normal CSS with scoped class names.

> Different tools work for different people and in different contexts. You talk about the awfulness of CSS-in-JS, but the last time I used it, I found it struck a really good balance: fairly conventional CSS (albeit not in CSS format) that is still collocated with JS components.

People think too much about how these things work for themselves and their own skillsets, not whether it prevents other great people from contributing as easily. I know HTML/CSS developers who are way better than the average React dev at building great clean and accessible UIs, but they aren't as strong in JS and end up struggling with unnecessarily complicated JS solutions to things that don't need them.

Besides colocation, what other benefits did CSS-in-JS provide you, other than the familiarity of JS logic? This again feels like a workaround for a limitation of React, not actually the best solution.


CSS modules work quite differently from scoped CSS. In scoped CSS, every element in a component is given an extra data attribute unique to that component. Then when you write your CSS, each declaration is given an implicit extra selector that scopes it so that it only affects elements in that component. As an example, the selector `.header > nav:hover` would be implicitly transformed into `.header > nav[data-535728]:hover`, and every element in that component would be generated with the `data-535728` attribute.

In contrast, CSS modules is far simpler: every class used in a CSS file is replaced with a random identifier (normally something deterministic), and any JS file that imports that CSS file will receive an object mapping the original class names to the new identifiers. Essentially, the only change from normal CSS is that class names are no longer global - everything else remains the same.
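Roughly, that transform can be sketched like this. The regex and identifier scheme are illustrative only; real tools (css-loader, etc.) parse the CSS properly and typically hash the file path plus class name:

```javascript
// Sketch of what a bundler does for CSS modules: every local class name
// in e.g. "button.module.css" is rewritten to a unique identifier, and
// the importing JS file receives a mapping object. (Identifiers and the
// naive regex are illustrative; real tools parse CSS and hash names.)
function compileCssModule(css, moduleId) {
  const mapping = {};
  const rewritten = css.replace(/\.([a-zA-Z_][\w-]*)/g, (_, name) => {
    mapping[name] = mapping[name] || `${name}_${moduleId}`;
    return `.${mapping[name]}`;
  });
  return { rewritten, mapping };
}

const { rewritten, mapping } = compileCssModule(
  ".primary { color: red; } .primary:hover { color: darkred; }",
  "ab12cd"
);
console.log(mapping.primary); // "primary_ab12cd"
console.log(rewritten);       // selectors rewritten, declarations untouched
// In a component: <button className={styles.primary}> uses the mapped name.
```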

In my experience, I tend to run into far fewer surprising situations with CSS modules because it really is just CSS. If I target a class and all of its children, then I get what I expect, regardless of how the components are laid out in practice. In contrast, with scoped CSS, I tend to find that the scoping rules get in the way - my CSS is now tied to my components, as opposed to being able to be used and reused in multiple places. It's not a big issue (and I've used it plenty, and most of the time it's fine), but I always feel like it's an extra layer on top of CSS that I don't actually need.

Whereas, ironically, CSS-in-JS tends to feel closer to true CSS (and CSS modules) because there aren't any surprises. It's just CSS declarations (albeit often written in an unusual syntax), except that class names are locally scoped. In that sense, I can use my existing CSS knowledge and craft selectors however I want.
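The core of that CSS-in-JS model can be sketched in a few lines. This `css` helper is hypothetical, not any particular library's API; real libraries like styled-components or Emotion also hash contents, deduplicate, and inject the collected rules into a `<style>` tag:

```javascript
// Minimal sketch of the CSS-in-JS idea: a css() helper takes plain CSS
// declarations, generates a locally scoped class name, and collects the
// rule for later injection into the page. (Hypothetical helper; real
// libraries hash contents, dedupe, and support nesting and props.)
let counter = 0;
const sheet = [];

function css(declarations) {
  const className = `c${counter++}`; // locally scoped, unique per call
  sheet.push(`.${className} { ${declarations} }`);
  return className;
}

const button = css("color: white; background: navy;");
// <button className={button}> ... the declarations are just CSS,
// only the class name is scoped.
console.log(button);   // "c0"
console.log(sheet[0]); // ".c0 { color: white; background: navy; }"
```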

I understand the value of SFCs for people who don't feel as comfortable with the scripting side of things, but I do wonder if that's a bit of a false economy. At the end of the day, Vue is a tool for writing Javascript components. If you've got a team that doesn't want to write Javascript, then something like Alpine.js is probably a much better option - minimise the JS side of things completely and just concentrate on the HTML and CSS sides. But component-based web development is going to involve combining Javascript, CSS, and the DOM, and that involves understanding all three technologies, and not just specialising in one of them.

E: as a complete aside, I love the user name! I've just finished rereading the Watch books and I'm deciding which thread to go down next.


Vue/Svelte-style scoped CSS and CSS modules aren't really that different in practice. When working with a larger team it is harder to enforce care with class-name uniqueness, so the auto-scoping is beneficial, but in a personal project I honestly don't mind turning off scoping entirely and just being strict about BEM class names. I LIKE the cascade part of CSS. So when working with a team, Vue/Svelte-style scoping gives me the best of both worlds: global classes when I need them, scoped classes when I want to keep things component-specific.

The issue isn't that the companies/people I work with don't want to write JS, it's that they ONLY want to write JS. They don't hire the people who know HTML/CSS and accessibility better than their React devs, so they don't even know what they are missing; they just think it is normal for implementations to be clunky, not match designs, and have poor performance. This is pretty much the standard at most early-stage SaaS startups, except the ones started by people who actually care about this stuff and understand it technically. At every company I have worked with (usually under 100 people), I, as a product designer, have written more CSS than any front-end developer.

The point is to make codebases that are accessible to people who come from all sorts of backgrounds, not just JS developers. I have worked with designers who knew HTML/CSS well and they were so happy to be able to write simple template logic and plain old class names in Vue without having to figure out the correct JS syntax for modifying an array of style objects or writing a conditional and not leaving an accidental 0. Passing classes and props into child components and not having to explicitly tell it what to do with each one makes building JS components a LOT easier for people who aren't JS experts.

Discworld is a true joy, I miss Terry Pratchett! I read the books in publishing order, but I have had the urge to go back and read them as sets of related themes.


The main divide I've observed is whether you are encouraged to use JavaScript's built-in control flow and code reuse mechanisms or a less expressive DSL.

Hooks are pretty sweet, but I think they should have launched with higher-order hooks that used the same lifecycle as original React. That way you wouldn't need to think too hard about things like object identity (which I think is the state management issue you're getting at; it wasn't a huge issue in the pre-hook days).


What makes the svelte template language less expressive than JSX in your view?

Props and expressions will look similar in both (minus the useXX ceremonies), and the rest is all loops and if conditions. Writing simple inline conditions in JSX is painful, so it already starts with a negative score.


The template language doesn't compete with JSX. It competes with JSX + JavaScript, which is far more expressive.


Can you be more specific? What is easier to do with JSX+JS that is harder to do in Vue/Svelte context? Or another take on the question, will it be easier to read and understand for someone new to the codebase or JS development?


Svelte's template language can evaluate JavaScript just fine. It doesn't need to compete with JSX + JavaScript on its own; Svelte + JavaScript is what competes with JSX + JavaScript.


> templating makes it far easier to understand

I think this is the key takeaway.

React is great and offers a lot of tunability in terms of performance but to me it often feels a lot like writing low level code.

When I use something like Vue, Petite-Vue, Svelte, Angular (flawed but underrated) - it feels like using something high level (like an ORM) that is designed to take something difficult and map it to something human readable.

The relationship between my changes in code/state translates ergonomically across to the template, and that leads to very maintainable applications.

In reality though, despite the endless stream of blogs telling you how to write React applications, React isn't prescriptive on how you write it. You can create "view models" with proxy objects to implement primitive data-binding in components and use Context to inject dependencies.

In fact I once wrote a compiler that compiled html template syntax into `React.createElement()`, and along with a supplied runtime I was able to use React as a backend for a reactive template renderer.

> Svelte and Vue approach to SFCs

I have personally found SFCs to be super annoying, given they are not optional and require custom tooling.

A single file is nice sometimes but I don't really mind if a component is split into 3 files (js, css, html) as long as they are associated with each other somehow. I just want to ensure that my view renderer requires the least amount of tooling possible so my project will stand the test of time and I can be self reliant for updates.

Can I use my own test runner? Can I pick my version of TypeScript? Can I setup my own eslint? Can I set up my own css preprocessor? Can I use custom compiler configuration (like using SWC)?

To me, the fact that React doesn't require a compiler (aside from translating JSX) is its greatest asset, and Vue/Svelte and Angular's greatest shortfall.

It's just a shame that React is so fiddly to work with.


I love having HTML/CSS/JS in one place, it means things never get detached from their immediately relevant context. Also thanks to nice extensions for VSCode (which wouldn't work across separate files) I can option-click any class name and get a mini-panel to edit the styles inline without hunting for the style section or a separate style file. I can also see when styles I have added are not being used in that component.

When I demo side projects and my own work to my coworkers they are always surprised "I didn't know you could do that!" and the thing is they can't because they are all stuck using React.


> they can't because they are all stuck using React

You can get most—if not all—of that behavior in React with JSS and TypeScript. All of your component code (HTML, CSS, and JS) can be colocated in one file, you can jump to the CSS class definition, TS ensures the class name is legit, etc.

Svelte and Vue provide a more batteries-included experience, but React is very flexible.


But it's a far worse developer experience, particularly for designers and HTML/CSS/A11y folks who aren't as JS focused. It's not like you really are getting that much benefit from writing CSS in JS besides familiarity with a language you prefer. It's harder to read and parse at a glance (even for the devs who wrote it, I see this in practice all the time when pairing and attempting to fix layout and style issues with my devs), harder to inspect from the browser, encourages practices that undermine many of the core benefits of CSS, etc.

My whole point is that we should not be finding worse workarounds just so we can stick with the familiarity of React. Even outside of HTML and styling it is easier to do stuff in other frameworks than it is in React. React is not providing some magical level of power other frameworks don't provide.

I don't hate React, I'm just sick of it being the de facto standard. We are creating a React monoculture in web dev, and it absolutely has secondary impacts on how we write markup and styles, and on how approachable and accessible our codebases are. So many major decisions are being made just so a team can stick with React.

Why not pick a different framework and get better performance, JS logic that is easier to write, markup and styles that are easier to write and read, and code that designers and others can engage with without having to be JS-fluent?


It's great to argue for Vue/Svelte, but just to clarify - the entire point about HTML/CSS isn't really true, there's no difference in support between React or any other framework there.

And I disagree on HTML/CSS skill loss being a thing, or it being proximately caused by React. And no, it doesn't have any div-soup causative effect either.

I get that templates are simpler for non-coders. Unfortunately, in all but the simplest cases the cost-benefit favors the React model, as you are just "using the platform" to build your views, i.e. JS and not some arbitrary DSL.


How can you say there is no difference? What do you think Vue and Svelte SFCs are? Where does React have built-in component style scoping? Where does React have automatic passing of classes and properties to components? Can I write SCSS in a React JS file and have it just work? Nope.

The platform is more than just JS, and React has plenty of stuff going on that is "magic" enough to be no different for me than the abstraction of a DSL. How is the cost-benefit better for React when I keep seeing developers who actually try other frameworks have their minds blown at how much easier it is to do the same things?

The CSS skill loss is very real, particularly with the number of straight-to-React bootcamps that barely touch on it and the insistence on using CSS-in-JS which makes it way harder for people to explore a production website and learn the fun way. I'm not some HTML/CSS purist, and React was a big step forward for the industry, but now its time is past and we have way better alternatives.


No one believes the gospel of syntax, simplicity, and inline styles more than me [0]. I made that before Svelte was a thing and the Vue creator actually asked if he could borrow multiple ideas from that as he was making it.

But idk what React codebases you've been in, the bog-standard, overwhelmingly popular bootstraps and guides since way before Svelte ever existed had some pretty nice style solutions and at the minimum importing CSS.

React has some of the most gorgeous style solutions around (I should know having built the best one) and has since forever ago. Svelte is less popular than many React starter kits with I'd argue much nicer DX.

CSS needed drastic changes too, and I think if you looked, average total CSS knowledge is higher than ever; you're just seeing frontend become popular and broad, with different sub-specialties at this point.

[0] https://youtube.com/watch?v=HHTYHm6qLFY&t=19


> React has some of the most gorgeous style solutions around

Can you give an example?

> But idk what React codebases you've been in, the bog-standard, overwhelmingly popular bootstraps and guides since way before Svelte ever existed had some pretty nice style solutions and at the minimum importing CSS.

The vast majority use Styled Components or another flavor of CSS-in-JS, which continues the problem of making it harder for HTML/CSS experts to contribute to a JS-only codebase. Tamagui is exactly the issue I'm talking about. It's great for JS/React-first devs, but it doesn't actually do much that a good HTML/CSS developer couldn't do without all this added tooling, and with code that is much easier to read for people unfamiliar with the framework.

> I think if you looked on average total CSS knowledge is higher than ever,

Definitely not my experience. There are some great CSS devs out there, but it is always secondary to the skillsets that are actually highly paid (JS, React) and they mostly end up in large companies by happenstance of those companies hiring more people. Smaller tech companies make the consistent mistake of not hiring front-of-the-front-end devs and end up with poor UI and experiences because of it.


The vast majority use CSS modules or Tailwind.

Tamagui is explicitly React Native and Web focused, so yes, you don't learn CSS as directly, but that's the point: there's no CSS on native. It's not really relevant, since Svelte doesn't do native (a big downside, and it shows why CSS extremism is maybe not helpful).


I think your conclusion in point 2 is misguided, but I can see where you’re coming from. It’s true that virtual DOM has a lot of expressive power, and Svelte’s templating features are more limited in how you can approach certain problems. I find Svelte’s slots especially challenging for more advanced use cases, and wish they were more composable.

But in my experience, React’s openness to expressive experimentation is not a net positive for maintainability, productivity, or quality. There are lots of footguns and bad ideas embraced by the ecosystem. HoCs were a terrible idea, and were replaced with an even worse idea, hooks. Compared to that, Svelte’s reactive statements and stores are bliss.

Arguably, it’s because React has experimented and made those mistakes that Svelte has been able to focus on a constrained DSL that mostly supports the features that are actually a good idea.


I think there are two things that are interesting about your point of view.

First, I think a mistake the react team made was not exerting more control over the ecosystem. I think a lot of questionable content marketing pieces ended up as “best practices” which we are still unwinding as a community.

Second, I totally buy the idea that there are multiple personas and that different tools speak to different personas. The sustained popularity of constrained DSLs among subsets of the frontend community speaks to this.


But are you against constrained templating DSLs on principle, or just the specifics of how Svelte does it? I think the reason bad ideas take hold is because people are looking for guidance; constraints, if you will. In React that is offered through libraries, frameworks and best practices, but not all of those are good. Svelte has a lot more control of its ecosystem because the constraints are built in to a compiler. And as I said, I think it falls short in certain areas, but all in all I think those restrictions are a net win.


I am against constrained DSLs in general unless it buys you something. In the general case, with a DSL you sacrifice expressivity in order to gain security guarantees or improved performance. In this case you are sacrificing expressivity to gain performance but I don't think the performance gains are meaningful enough to be worth sacrificing all of that expressivity. So to answer your question, I am generally opposed to constrained templating DSLs for web frontend applications overall.


I’d say DSLs in general are really annoying. They’re almost never as well designed as a mainstream language, their tooling is usually worse, and good luck if you ever need to do something that isn’t well supported by them.

These problems are surmountable, but man, I’ve used and created some crappy DSLs in my time. If you’re wondering, “Should I create a DSL for this?”, default to “No” unless the evidence is overwhelmingly in favor of it.


I couldn't agree more. Every time I have to do a for-each loop in some templating language instead of just using a normal programming language, I want to shoot it into the sun. Templated YAML is my current pain point. Why not build data structures in a normal programming language and then write the result out as YAML? It feels like a lesson we learned after templated XML 20 years ago that we keep forgetting.
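The alternative being argued for looks something like this sketch. The service names and registry URL are made up, and `JSON.stringify` stands in for a YAML emitter since Node has none built in (JSON is itself valid YAML 1.2, and a YAML library could be dropped in the same way):

```javascript
// Instead of templating YAML with text substitution, build the data
// structure in a real language and serialize it at the end.
// (Hypothetical services and registry; JSON.stringify as a stand-in
// for a YAML emitter, since Node has no built-in one.)
const services = ["api", "worker", "scheduler"];

const config = {
  services: Object.fromEntries(
    services.map((name) => [
      name,
      {
        image: `registry.example.com/${name}:latest`,
        replicas: name === "api" ? 3 : 1,
      },
    ])
  ),
};

// A real loop with real scoping rules, not a {{range}} block in a template.
console.log(JSON.stringify(config, null, 2));
```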


> performance that is often not noticeable or is easily achieved with React's memoization features. React was always about enabling front-end engineers to take advantage of software engineering best practices so they could level up velocity and quality

I've been working daily with React for almost 6 years now. Another 10+ years of web development before that. And I have yet to see these benefits be realized.

It's not easy, at all, to "optimize performance using memoization" in React, best practices are mostly tied to JSX/React and its tooling, not general software engineering, and they change yearly based on the latest version and current third-party library trends. Velocity and quality are both hurt. Your best bet is to use an all-in-one framework like next.js, which delegates most of the pain to the authors - but not all of it.

The big improvements that React brought to the mainstream were componentization, and popularizing declarative rendering. I think we have better options today.

Svelte does not limit expressiveness in my experience, in fact it makes it way easier to do what you actually want to do, without contorting yourself to the framework. It's liberating. This is a viewpoint I hear frequently though, so it definitely depends on your personal path as a developer and what you've been exposed to.


At least in my team the "constrained custom DSL" has been a net productivity win. The only people struggling for a bit were the ones used to React, since they tried to apply React solutions to every problem. It appears you need to unlearn a lot of React when using Svelte, so I would argue that it's actually React that wants you to develop in a specific way. The same does not appear to be true for devs coming from other technologies. Other experiences may of course vary.


#1 might be true today, but it was definitely not when react was new. I made a fairly simple react site that worked great on my workstation, and then I tried using it on my Android phone and it was unusably slow.

Mobile phones have gotten faster, and so have mobile JS engines, so I wouldn't be surprised if the overhead is negligible these days.


“A constrained DSL” is precisely what is wrong with so many React alternatives. Anyone who had to deal with writing Angular 1 directives and fight with the digest loop understands how very real the pain is.

Also, there are two types of performance: how fast code runs and how fast a team can maintain and extend code. Does one optimize for running code or for building code? React won because it optimized for the latter. And thank god, because who wants to mess with digest loops and magical DSLs?


Isn't SolidJS supposed to be like React (in that it has JSX, hooks etc) but without the VDOM? So I guess that would sidestep your point #2, but curious to hear your thoughts on it.

Personally I don't use Svelte, Vue, Solid etc simply due to the (lack of) library support compared to React. For example, I wanted to do something in 3D the other day and reached for react-three-fiber, there simply isn't something comparable in the non-React world.


SolidJS also sacrifices expressivity for performance (ie you need to do contortions like this[1] to build dynamic lists), so I prefer React to it. However, I like its approach more than Svelte because there’s less of a DSL to learn.

[1] https://www.solidjs.com/tutorial/flow_for


> For example, I wanted to do something in 3D the other day and reached for react-three-fiber, there simply isn't something comparable in the non-React world.

Respectfully, that is because you dont _need_ anything other than ThreeJS in the other frameworks.

I do find it interesting that it says it performs faster due to React's scheduler.


> Respectfully, that is because you dont _need_ anything other than ThreeJS in the other frameworks.

Not really, ThreeJS is imperative, react-three-fiber is declarative. I use the latter for the same reason I use React over jQuery, I don't have to mess around with appending nodes, I can lay out my view declaratively and have the framework fill in the rest.


You don't _need_ react-three-fiber to embed ThreeJS in a React component either.

The point of react-three-fiber is that it allows you to use React's declarative rendering model to render your scene.


> reached for react-three-fiber, there simply isn't something comparable in the non-React world.

This came out a year ago now.

https://svelthree.dev/ https://github.com/vatro/svelthree

Svelte, Vue, etc. all have plenty of awesome libraries - you just need to be aware of them.


As a seasoned engineer (compiler background) but a beginner in frontend, I'm curious about the lack of expressive power that Svelte incurs.

Could you (or someone) characterize things that are hard to do in Svelte?


Once data is transformed into a view in Svelte, you can't do anything with it in JS. For example, a component can't iterate over the children in a slot [1].

This is limiting when designing generic component APIs.

As an extreme example: In React, I could trivially define a tabbed interface by mixing strings, JSX, and components, without imposing any DOM-structure. The component that reads this definition could use it to build a tabbed-view on mobile, and a master-detail-view on desktop. ...or it could build a table of contents. When defining the tab names, I can mostly use strings, but fall back to JSX if necessary. Importantly, the API is completely independent from the implementation.

    const tabs = [
        { name: "Tab 1", icon: <img ... />, content: MainTab },
        { name: <>Tab with <b>bold</b> text</>, icon: <MyIcon ... />, content: SecondTab },
    ]

    return <MyLayout tabs={tabs} />
[1]: https://github.com/sveltejs/svelte/issues/5381


As a complete aside, happy birthday bud!


Yes and no.

Having implemented virtual DOM natively in Sciter [1], here are my findings:

In conventional browsers the fastest DOM population method is element.innerHTML = ...

The reason is that element.innerHTML works transactionally:

Lock updates -> parse and populate DOM -> verify DOM integrity -> unlock updates and update rendering tree.

While any "manual" DOM population using Web DOM API methods like appendChild() must perform such a transaction for every call: the steps above are repeated for each appendChild(), since each call must leave the DOM in a correct state.

And virtual DOM reconciliation implementations in browsers can use only public APIs like appendChild().

So, indeed, vDOM is not as performant as it could be.

But that also applies to Svelte style of updates: it also uses public APIs for updating DOM.

A solution could be a native implementation of an Element.patch(vDOM) method (as I did in Sciter) that works on par with Element.innerHTML - transactionally, but without the "parse HTML" phase. Yes, there is still the overhead of the diff operation, but with proper use of key attributes it is an O(N) operation in most cases.

[1] https://sciter.com
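A hedged sketch of why keys make the diff roughly O(N) (illustration only, not Sciter's actual implementation): index the old children by key in one pass, then walk the new list reusing matches, so no nested comparisons are needed.

```javascript
// Keyed child diff sketch: one pass over the old list to build the
// index, one pass over the new list to compute operations - O(N).
function diffChildren(oldKids, newKids) {
  const byKey = new Map(oldKids.map((k) => [k.key, k]));
  const ops = [];
  for (const next of newKids) {
    const prev = byKey.get(next.key);
    if (!prev) ops.push({ type: "create", key: next.key });
    else if (prev.text !== next.text) ops.push({ type: "update", key: next.key });
    byKey.delete(next.key); // matched: no longer a removal candidate
  }
  for (const key of byKey.keys()) ops.push({ type: "remove", key });
  return ops;
}

const ops = diffChildren(
  [{ key: "a", text: "1" }, { key: "b", text: "2" }],
  [{ key: "b", text: "2!" }, { key: "c", text: "3" }]
);
console.log(ops);
// [{type:"update",key:"b"}, {type:"create",key:"c"}, {type:"remove",key:"a"}]
```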


innerHTML doesn't preserve event handlers. So you're either reassigning event handlers over and over or relying on delegate handlers everywhere.
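To illustrate the delegate-handler alternative, here is a minimal sketch (plain objects stand in for DOM nodes; `parent` plays the role of `parentNode`): one listener at the root walks up from the event target, so handlers survive innerHTML replacement of the subtree.

```javascript
// Event delegation sketch: a single root listener checks the target's
// ancestry instead of attaching a handler to every element.
function delegate(root, matches, handler) {
  return (event) => {
    for (let node = event.target; node && node !== root; node = node.parent) {
      if (matches(node)) return handler(event, node);
    }
  };
}

const root = { parent: null };
const list = { parent: root };
const item = { parent: list, cls: "item" };

const clicks = [];
const onClick = delegate(root, (n) => n.cls === "item", (_e, n) => clicks.push(n.cls));

onClick({ target: item }); // walks up, matches the .item node
onClick({ target: list }); // nothing matches on the way up: ignored
console.log(clicks); // ["item"]
```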

And while your statement makes intuitive sense regarding performance, actual measurements show clearly that idiomatic Svelte (and other modern frameworks) routinely beat VDOM-based efforts handily in their idiomatic cases and often even when folks jump through the performance optimization hoops needed for VDOM.

VDOM is pure overhead. Better than manually aligning writes before reads a la 2010, but noticeably worse than the current crop of compiled offerings.


It depends on how you actually use the virtual DOM. React's "reconcile the whole world" approach can be excessive, yes.

But, for example, in Sciter, vDOM works in [web] component cases that are similar to Svelte's:

   class Beers extends Element {
     bottles;
     
     render() {
       return <span .bottles>{this.bottles}</span>
     }

     set value(v) {
       this.componentUpdate({bottles:v})
     }
   }    
When you do

   document.$(".bottles").value = 12;
it will update only what is needed. Pretty much Svelte style but with the convenience of vDOM.


To make it even closer to Svelte, Sciter has a native signal() implementation, so

   let bottles = signal(0);

   function Beers() {
       return <span .bottles multiple={bottles.value > 1}>{bottles.value} bottles of beer</span>
   }

   document.body.append(<Beers/>);
That can be updated by simply changing the signal:

   bottles.value = 42; // Party time!
Note: this does not require any preprocessors or precompilation.


Let's compare lines of code, because more lines invariably lead to more bugs.

Contents of Beers.svelte:

    <script>
      export let bottles = 99;
    </script>
    
    {#if bottles > 0}
      <span class="bottles"
            on:click={() => --bottles}>
        {bottles} bottles of beer on the wall
      </span>
    {:else}
      <span class="bottles">
        No more bottles of beer on the wall
      </span>
    {/if}
    
Then to use it:

    <script>
      import Beers from './Beers.svelte';
    </script>

    <Beers />
No knowledge of Reactor's existence is needed, let alone the library's "signal" function. No functions needed at all. No bespoke syntax for the "bottles" CSS class. No vDOM API calls. No extra "value" accessor property. It's >90% plain old HTML, CSS, and JS, with literally the bare minimum of syntax to handle data binding.

Yes, it requires a compiler, but I would honestly be astounded if you even noticed the compiler build time in dev mode. AND the deployed code is smaller. AND it's simpler for the dev to understand and maintain. AND it's likely faster at runtime.

The argument that Svelte adds mental overhead is manifest nonsense. If you like the vDOM, have at it. Follow your bliss. Some folks like hitting and kicking trees. Some folks prefer their coffee too hot to drink.

I for one want a web framework that makes web development as simple, straightforward, and powerful as possible. HTML, CSS, and the smallest amount of JS and HTML annotation imaginable.


In Sciter you do not need any preprocessor at all, not even JS in your HTML:

    <style>
      div.beer {
        prototype: Beer url(/components/beer.js);
      }
    </style>
    <body>
       <div class="beer" />
    </body>
After that, the div.beer element will be an instanceof Beer. In this case class Beer is used as a [Web-alike] component.


1. I think you accidentally a <style> tag

2. Your example would have atrocious load time implications for any non-trivial web page. Iterating the DOM through querySelectorAll to replace items at load time? Yikes!

So apparently with Sciter you can either have minimal code or acceptable performance. Got it. Would rather have my cake and eat it too.


My sample is correct.

CSS prototype property is a Sciter specific extension.

When the engine computes styles it does [if needed] this (pseudocode):

   Object.setPrototypeOf(element, thatClass);
   element.componentDidMount();
on applied elements.

There is no "performance sacrifice" in case of handling prototype properties.

Prototypes are switchable if needed:

  div.beer { prototype: Beer url(...); }
  div.beer:hover { prototype: BeerHovered url(...); }


Oh geez. It's Internet Explorer CSS Behaviors all over again.

"Those who do not learn from the past…"


Close but not exactly, there is no esoteric HTC stuff for example.

   element.selector {
     prototype: ClassName url(in-module.js);
     color: blue;
     ...
   }
That above is significantly better than almost-dead-at-birth WebComponents:

   let customElementRegistry = window.customElements;
   customElementRegistry.define('my-custom-element', MyCustomElement);
One simple CSS property instead of 20+ additional entities https://developer.mozilla.org/en-US/docs/Web/Web_Components

Used quite a lot actually - on half of the machines where Sciter is installed (~500..600 million machines :)


Web Components is a marketing coup. It’s a great name. People wish it existed and did the thing it says. So they ignore that customElement and shadow DOM are two terrible APIs that are best ignored by 99% of developers…

Meanwhile, shit that would actually help framework authors, like a native morphDOM, doesn't happen.


This seems pretty interesting.


From what I know, React does not register event handlers on individual nodes, but rather on the root component. Then it uses virtual events from its pool in your callbacks.


Side note: synthetic events are no longer pooled since React 17. There's no longer a noticeable performance gain with modern browsers nowadays.


Wow, I didn't know this! All that Synthetic stuff makes sense in retrospect...

I tried a quick google and didn't find any articles discussing it directly. Do you have any links to offer?

Thanks in advance!


Cannot find a page about it in the docs, but here they discuss how it was changed from document root to React root in React 17: https://reactjs.org/blog/2020/08/10/react-v17-rc.html#change...


Element.insertAdjacentHTML() appears to fix the event handlers issue (and similar issues with element state), since unlike innerHTML it does not replace the element it's being used on.


Again, in the case of Sciter, event handlers are not a problem at all, as it supports declarative event handlers:

    class FooBar extends Element {

      render() {
        return <form.foobar>
          <button.foo>Foo</button>
          <button.bar>Bar</button>
        </form>
      }
      
      // event handlers:
      ["on click at button.foo"](evt,button) {
        console.log("click on button foo") 
      }
      ["on click at button.bar"](evt,button) {
        console.log("click on button bar") 
      }
   
    } 

    document.body.append(<FooBar />);
This approach defines event handlers on the class rather than on individual elements.

And there is a slight difference between Sciter's components and React's: a Sciter component is a real DOM element:

    const foobar = document.$("form.foobar");

    foobar instanceof FooBar; // true
That allows using JSX/vDOM components both as Web Components and as React-style components - the best of both worlds.


I swear, folks and their JSX have me convinced they have Stockholm Syndrome. HTML+CSS in JS was always a pragmatic choice back in 2015, never the most elegant or most maintainable one.

It's like the folks who refused to use anything but the DOM APIs when JQuery was sitting right there. Or who keep on using onclick handlers on their div tags instead of using perfectly good HTML tags like:

    <a>
    <button>
    <input type="submit">
    <details>
JSX was never the best of any web development world. It was at best the least worst option at the time. We have better ones now.


> We have better ones now.

Would you mind elaborating on what the better options are? The way I see it, there are a few possible alternatives:

1) Keep the same runtime DOM representation but use normal JS (something like `div({className: 'beer'})`). I know some people disagree, but I strongly believe that this is strictly worse than JSX because it's more verbose and far less readable.

2) Use string templates parsed at runtime. You lose most of the structure you get with JSX—static syntax checking, type safety, autocomplete, etc. Composition becomes a matter of string concatenation, which is possibly the worst way to do it. On top of that, you have to learn templating primitives specific to your templating library instead of being able to simply use what you already know: JavaScript.

3) Use templates parsed at compile-time. This removes most of the drawbacks of #2, but you still have to learn a new templating language and all of the idioms that come with it. On top of that, you're entirely dependent on IDE integration for syntax highlighting and autocomplete. (I realize that JSX has the same problem with custom syntax, but the tooling around it is ubiquitous by now.)

You could make a strong case for #3 being a good way to do templating, but there is no "best" or "most maintainable" option; there are only tradeoffs. JSX happens to have a really good set of tradeoffs going for it, and no one has (yet) created anything that's strictly better.
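For what it's worth, option 1 is easy to sketch in a few lines of plain JS (a hyperscript-style `h` function; all names here are made up), which shows both its appeal (no compiler, ordinary JS) and its verbosity next to JSX:

```javascript
// Option 1 sketch: a plain function builds the same tree JSX would,
// just more verbosely. No transpilation step is required.
const h = (tag, props = {}, ...children) => ({ tag, props, children });

const view = (bottles) =>
  h("div", { className: "beer" },
    h("span", {}, `${bottles} bottles of beer`));

const tree = view(99);
console.log(tree.tag);                     // "div"
console.log(tree.children[0].children[0]); // "99 bottles of beer"
```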


I would argue #3 is obviously better, especially if it's done as Svelte has done it. It's hard to look at a Svelte component and see much more than a <style> tag, a <script> tag and some lightly annotated HTML for data binding, event capture, and control flow etc. Compared to JSX, it's a breath of fresh air.

Are you dependent on IDE integration for syntax highlighting? Yes, of course. Same with HTML, CSS, and JS. And if Svelte were not already six years old, I'd be more concerned. But the simple truth is that every major IDE I'm aware of for front end development supports Svelte already.

    • VSCode
    • Jetbrains Webstorm
    • Neovim
    • Sublime Text
I would be shocked to the core if Emacs didn't already have something mature as well. Compilers are better than humans at managing rote boilerplate, of which React has no end. I can only see how output improves by removing that recurring cognitive load. It's Assembly vs C all over again, where folks have a hard time accepting that the easier path also leads to demonstrably better results. If I can do in 10 lines what previously required 50, the code I contribute is far less likely to have bugs or suffer from performance problems.


> It's Assembly vs C all over again where folks have a hard time accepting that the easier path also leads to demonstrably better results. If I can do in 10 lines what previously required 50, that code I contribute is far less likely to have as many bugs or suffer from performance problems.

It's entirely unclear to me how compiled templates result in drastically less code. If we're comparing Svelte and React as frameworks, then sure, but your original comment specifically talked about JSX syntax being inferior to the alternatives. Templates require a custom DSL for control flow, iteration, etc, whereas with JSX you can use standard JavaScript. That also means that you can take third-party libraries that work on regular JS data structures, like objects and arrays, and apply them to JSX elements with zero fuss. With a DSL, you have to find a domain-specific version of the code you've already written in your head, and in some cases, it may not even be possible to create the same abstractions. This has its advantages, of course, but I strongly disagree with the notion that it's simply better.

For the record, I really like Svelte as a framework, but I can't honestly say that their decision to use templates has anything to do with that.


innerHTML can set inline event handlers so you don't have to assign them separately. And if you re-create a DOM fragment with innerHTML, you can reattach children that didn't change, and their handlers are preserved.


Is this still the case with the newer transactional methods like Element.append(), Element.before(), and DocumentFragment?

When I manipulate the DOM I try to create the entire structure in a fragment and the use .append(...) only once.


This:

    element.append(...arrayOfElements);
is many times faster than

    for (const el of arrayOfElements)
      element.appendChild(el);
so yes, it helps to improve the situation.

But think about updates like this:

   element.patch(<p multiple={n > 1}>There {n > 1? "are": "is"} {n} bottle{n > 1? "s": ""} of beer on the wall</p>);
Here you need to update (or not) both the attribute and the text nodes. You need some transactional mutation mechanism.


Oddly enough, this doesn't seem to be accurate: check out https://jsbench.me/02l63eic9j/1.

I also would have sworn up and down that using a DocumentFragment would be loads faster than both, but it doesn't seem to be the case. I wonder why that is.



> this doesn't seem to be accurate

It is pretty accurate here, case #3 is significantly (almost two times) slower than case #1.


Not on my browser (Safari 16.1). Here case #3 is the fastest, over 7% faster than case #1.


Seems like Safari has a quite naive element.append(...list) implementation and/or the "destructuring to argv" operation is slow there.

I suspect that element.append(...list) is just a

   for(auto arg : args) 
     element.append(arg);
so there is no transaction there at all - a slower version of case #3.

On Windows I am testing in Edge, Chrome and FF. Edge and Chrome show close numbers (#1 fastest). FF shows #3 is faster - the same problem as Safari, I think.


I'd be interested in learning the answer here as well. I've read that DocumentFragments are faster, but some microbenchmarking on Chrome/Mac makes me think the improvements are negligible. Rerunning the benchmarks on Stack Overflow (https://stackoverflow.com/questions/14203196/does-using-a-do...) (both individually and swapping the order of the fragment vs non-fragment tests) nets me ~60ms when rendering 100000 ul elements in each case.

My naive take on this is that browsers have overall gotten a lot more consistent with the layout-paint-composite loop, and it's not worthwhile to swap out all your appendChild calls for fragments. On the other hand, making sure all your layout reads (.clientWidth) are batched before the layout writes (appendChild) is much more important (fastdom).

edit: something like DocumentFragment/append(...children) would help guard against the layout thrashing addressed by fastdom
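The batching idea behind fastdom can be sketched in a few lines (a hypothetical `measure`/`mutate` API, loosely modeled on fastdom's; a real scheduler would flush once per requestAnimationFrame tick): queue reads and writes separately and flush all reads before all writes, so they never interleave and force extra synchronous layouts.

```javascript
// Read/write batching sketch: reads (e.g. () => el.clientWidth) and
// writes (e.g. () => el.appendChild(child)) go into separate queues.
const reads = [];
const writes = [];

const measure = (fn) => reads.push(fn);
const mutate = (fn) => writes.push(fn);

// Flush: all queued reads first, then all queued writes.
function flushFrame() {
  while (reads.length) reads.shift()();
  while (writes.length) writes.shift()();
}

const order = [];
mutate(() => order.push("write A"));
measure(() => order.push("read B"));
mutate(() => order.push("write C"));
flushFrame();
console.log(order); // ["read B", "write A", "write C"]
```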


Fragments are still real DOM nodes, and those are heavy.

Unless you're getting computed styles in between additions/deletions, etc., I don't think there would be much of a difference.


If you use dom fragments, in recent browsers, it's up to par.


> And virtual DOM reconciliation implementations in browsers can use only public APIs like appendChild().

Why can't they also use innerHTML?

Meaning, they could define a cost function where they deduce that it's cheaper to use innerHTML on a potentially larger than necessary scope if the alternative is >some_threshold for modification API calls.


They could, but they would have to either render the whole scope again or somehow apply the change to a copy of the scopes HTML and then set that. Neither seems ideal, but that may be a reasonable, if complex, optimization.


Probably because we are doing the diff per node, so we'd have to aggregate diffs somehow and rip out children that don't change to reattach them into the changed part recreated using innerHTML.


We could create a component system that doesn't diff per DOM node but per component. It would render components to strings and place them with innerHTML into slots (DOM elements) exposed by their parents. On first render it could just splice the strings together.


Thank you for your work on Sciter.

Wanting to use it on a new project soon. Love it over some complicated and bloated Electron solution.


Just have suspendLayout/resumeLayout or beginUpdate/endUpdate methods like WinForms - I'm surprised this doesn't exist after all these years.


Not so easy unfortunately.

1. BeginUpdate stops a control from repainting itself, and that is what the browser is doing already - no painting happens while JS is executing. So a primitive "postpone painting" does not really help.

2. element.update(callback) or DOM.mutate(root, callback) should be a single method - no one wants EndUpdate() calls to be skipped because of thrown errors and the like.


A script will eventually return to the event loop, where endUpdate() could be called automatically. You don't even need beginUpdate(), because it could be hidden behind update methods.

Every time I read about the DOM, I get frustrated by how many frontend issues exist purely because of bad platform-level patterns. We're long past the need to reflect updates instantly on every single call to the engine. And that wasn't even necessary before.


> no one wants EndUpdate() calls to be skipped because of errors thrown and the like.

but surely with enough effort and care this should be a non-issue? And maybe auto-rollback on error?


Layout is suspended automatically, unless you query the DOM for something, in which case you can get lots of thrashing. For example, you don't want to add some DOM elements and then get their height/width, as that will force a layout. And don't do that in a loop! Last I looked, adding/removing DOM elements only schedules the layout and repaint. Things have gotten more multithreaded since I looked at browser code for this, but I doubt they would make a performance regression here.


The key observation about HTML templates is that usually large portions of them don't change with new data. There is static content, and even with lots of dynamic bindings they're tied together in a static structure.

So the vdom approach of processing all the static parts of a template during a diff is just extremely wasteful, especially for fairly common cases like conditionally rendering a node before some static content.

Ideally you already know what changed, and can just update the parts of the template that depend on it. In JS that typically requires a compiler, complexity, and custom semantics (like Solid). But you can get very, very close to that ideal with plain JS syntax and semantics by capturing the static strings and the dynamic values separately then only comparing the dynamic values on updates.

This is what we do with lit-html and why I think tagged template literals are nearly perfect for HTML templates.

With tagged template literals, an expression like:

    html`<h1>Hello ${name}!</h1>`
is passed to the `html` tag function as an array of strings `['<h1>Hello ', '!</h1>']` and an array of values: `[name]`, and the strings array is the same every time you evaluate the expression, so you can compare it against the previous template and only update the values in the DOM if it's the same.

It's a really efficient way to render and update DOM with a pretty simple conceptual model that requires no compiler at all - it's all runtime. I think it's straightforward and powerful enough to be standardized at some point too.
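A minimal sketch of why this works: the engine passes the same (cached) strings array on every evaluation of a given template expression, so a simple identity comparison tells you "same template, only values may differ".

```javascript
// Tag function sketch: capture the strings/values split and observe
// that the strings array is shared across evaluations of one call site.
const seen = new Set();
function html(strings, ...values) {
  seen.add(strings);
  return { strings, values };
}

const greet = (name) => html`<h1>Hello ${name}!</h1>`;

const a = greet("World");
const b = greet("lit-html");

console.log(a.strings === b.strings); // true: one call site, one cached array
console.log(a.values, b.values);      // [ 'World' ] [ 'lit-html' ]
console.log(seen.size);               // 1
```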


> So the vdom approach of processing all the static parts of a template during a diff is just extremely wasteful, especially for fairly common cases like conditionally rendering a node before some static content.

Vue 3 already renders the whole static content to a string in this case, and this is one of the selling points of Vue 3.

It happily serializes a big chunk of static template into a string and creates a fragment from it at runtime, so the client doesn't need to create static elements one by one. Besides that, it also marks that static content as "just don't diff it, it won't change", so the runtime won't even try to diff it.

https://shorturl.at/aijOQ

Switch to the JS panel and you will see that it already serializes the whole v-node thing into HTML at build time.


> The key observation about HTML templates is that usually large portions of them don't change with new data.

Isn't this a core idea underneath the https://fresh.deno.dev/ "islands" and I believe the https://astro.build/ framework when they confronted issues around hydration/SSR?

https://www.patterns.dev/posts/islands-architecture/

Clearly there's some overhead via the vDOM and simply using React-like templates when building large blocks of HTML. But if the bulk can be rendered server-side that overhead isn't an issue. So you can address this by simply reducing the data binding to the bare minimum of HTML that actually need to be interactive.

That way you can use the same templating and component systems app-wide but the default is still static-first.

That said - the Cons section notes: "The architecture is not suitable for highly interactive pages like social media apps which would probably require thousands of islands." But at that scale there's often far more performance concerns than vDOM vs compiler vs [some better optimized templating system], where the benefits aren't as straightforward (as linked below https://twitter.com/dan_abramov/status/1135423065570127872).


Also Marko, which was on here yesterday: https://markojs.com/docs/why-is-marko-fast/#compile-time-opt...

> Marko will recognize that the template fragment produces the same output every time and it will thus create the virtual DOM node once ... Rendering a static sub-tree has virtually zero cost. In addition, Marko will skip diffing/patching static sub-trees.

> Marko will also optimize static attributes on dynamic elements. [Static] attributes [are] only created once and [are] used for every render. In addition, no diffing/patching will happen for static attributes.


vdom is overhead server-side too. When rendering HTML on the server you really want to stream longer pre-allocated strings as much as possible. The serialization overhead of converting many small objects to individual HTML tags shows up in profiles. And when you want low latency and the ability to handle high loads, it matters.


Can we please go back to templates compiled directly to php files that just get executed?


PHP has many other problems. React was literally developed by arguably the largest PHP shop.

In terms of performance specifically, you do a ton of unnecessary, redundant work, because you recreate the whole world and throw it away again with every request. PHP does its best to be a fast language and mitigate this issue, but it can’t really solve it.


That's because there's no PHP in the front-end. ;)


Why php and not JavaScript?


Yeah, I can take JavaScript. After all, it's the 21st century.

So from now on we only use template engines that compile templates directly to simple .js files. Agreed?


I'm sure there's a different concept in there somewhere, but what you described sounds exactly like vdom.


This model doesn't represent DOM nodes, it represents entire templates. And there's no per-node diff - you only compare bound values. So the structure of templates is entirely static.


Let's call it vdata. Data diffing instead of DOM diffing.
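A hedged sketch of that "vdata" idea (names made up): each dynamic part remembers its last committed value and touches its spot in the DOM only when that value changes; the static structure is never inspected at all.

```javascript
// Data-diffing sketch: a "part" commits a new value to its location
// only when the value actually changed since the last render.
function createPart(commit) {
  let prev;
  let dirty = true; // first render always commits
  return (value) => {
    if (dirty || !Object.is(prev, value)) {
      commit(value);
      prev = value;
      dirty = false;
    }
  };
}

const domWrites = [];
// Stand-in for something like (v) => textNode.data = v
const textPart = createPart((v) => domWrites.push(v));

textPart("99 bottles"); // committed
textPart("99 bottles"); // unchanged: skipped, no DOM touch
textPart("98 bottles"); // committed
console.log(domWrites); // ["99 bottles", "98 bottles"]
```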


I mean in the native Windows app world you just have your “ViewModel” raise an event with the name of the property that changed, and the UI layer just goes and assigns the new value to the appropriate element(s). Much simpler and easier to debug than vdom approaches.


This is also a core design principle of Angular - the compiler extracts the static template structure and generates code to update dynamic bindings within it.


Yes, but it turns out that you don't actually need a compiler for that. You can do it in a runtime that's very small because the standard JS syntax already separates the static structure for you.


Good to see Lit called out here, I love the lit-html approach to using tagged template literals to define views.


Imagine being someone in the semiconductor industry reading this. You're at the absolute pinnacle of high-tech and are approaching the limits of material reality to realize a 20% faster CPU. It's a true super human accomplishment.

Software developers: well yes, 99% of the cycles I use are completely needless, but it's still plenty fast enough!

Which we justify with the idea that a framework like React is abstract, hence expressive and productive.

Excuse me? Abstract? React is absurdly low level.

25 years ago I was coding in Borland products. You visually drag your UI together. Link events with a click. Drag data sources and fields into the UI to do two-way binding. Tooling chain? What tooling chain. There's a build and a run button. No weird text config files or CLI crap. And every setup is the same.

25 years later we're worrying about painting frames. We're pissing away impressive hardware performance whilst simultaneously not actually achieving abstraction. That's a double fail.


"What Andy giveth, Bill taketh away"

https://en.wikipedia.org/wiki/Andy_and_Bill%27s_law


Good one, did not know that law but it's on point.


The state of things is ridiculous. But I think that this is how people operate, in general: they just fill the available space, use all the available resources.


Did the borland apps you developed 25 years ago handle all sorts of devices and screens including screen readers?


Does any web framework do that? No, you have to do all of that from scratch.


React (and other web frameworks) have a unique advantage: they run everywhere.

Any device with a modern web browser can run a React application. Sure, Electron and the alternatives are resource hungry, but they allow developers to create true cross-platform applications.

Sure, there are other ways to create a cross-platform app, but none of those approaches allow you to tap into the massive number of web developers that exist.


Why did you say ”true cross-platform” and not just ”cross-platform”?

I would define Flutter as ”true cross-platform”.

Update: React + ReactNative too.


I'm not super familiar with the space nowadays, but a few years ago cross-platform apps were full of compromises. Mobile apps wouldn't look native (tiny things like being a pixel or two off) or they wouldn't support Linux. Maybe it's better with Flutter -- I've heard good things about it.

Additionally, Electron apps not only work as a standalone app, but you can often access a near identical version of the app in your browser. For example, you can either download Discord or Slack as a desktop app, or open them in your browser.

I'm not saying this comes for free; these apps are very heavy, and Flutter/React Native would probably produce more efficient apps.


My life, and blood pressure, would be greatly improved if every article about JS frameworks ended with a paragraph that says something like "... but JS frameworks are pretty fast really, so you'll only see a problem if you're changing lots of things on the page at once. And if you're updating lots of DOM nodes in a single action then maybe you need to think hard about your underlying HTML structure and UX instead of worrying about what JS is doing. That's where your problem lies after all."


In more cases than not, I've noticed that the choice of a single-page app is itself pure overhead.

SPA technology brings some key advantages but also a whole new realm of cost and complexity. It's my experience that SPA popularity has convinced many folks to use it when they really don't have a practical reason to justify it.


Svelte is so insanely lightweight, I think it is a great counter argument to a lot of the SPA hate.

And honestly, most of the weight in modern websites comes from analytics and tracking tools. I've made insanely performant SPAs that work well on low budget hardware. My personal phone is 5 years old, if code I write doesn't perform flawlessly on it I'm not going to ship it to customers! Heck my personal dev laptop is a 4+ year old $999 Costco special.

Well made React sites can run just fine on devices like that, and Svelte runs even better.

Also SPAs scale better, I remember the days of website backends being slow, with page loads taking forever, massive amounts of HTML being generated server side using slow scripting languages.

Sending a JSON blob of some fields to the browser and having the browser update some nodes makes a lot more sense than the old school way of reloading the entire bloody page to change a couple fields.


By choosing an SPA, you must choose dedicated static site hosting, which is separate from your web application. You may already have this, but you may not. In most cases you must choose a framework for routing, and another for state management. You also commit to duplicating validation and security trimming logic on both the client side and the server side. More often than not you will find yourself including hundreds of NPM packages as dependencies, which you must continually update and maintain.

Unit testing on the front end also becomes a common requirement, which brings in the need for things like Jest and Enzyme. This complexity inevitably trickles into your build and deploy processes. Perhaps in larger teams this is a burden you can absorb. In smaller teams, however, you start to see division of responsibilities: one person only knows front end but not back end, and vice versa. Common knowledge of the application as a whole can become fragmented.

Perhaps the day comes where you want to take a partial of a user interface posted in a peripheral application and place it inside your web page. Because you have a virtual DOM, this is now a security risk, so you must build a component which duplicates the user interface that already exists. If the user interface needs to be shared among many applications, you must build a common code base to host your components. You start shouldering the burden of maintaining component libraries instead of just HTML and CSS. Again, this is all very general and hypothetical, but it feels worth pointing out some of the common implications that simply choosing an SPA can have in the longer run.

Plus this is not an all or nothing sort of choice. For decades we have used Ajax to perform partial updates on a web page. Consider alternatives like HTMX as a comparison.


> By choosing an SPA. You must choose a dedicated static site hosting which is separate from your web application.

No you don’t.

> In most cases you must choose a framework for routing. Also a framework for state management.

I don’t understand this argument. React gives the developer this freedom by design. If you want a framework that has all of these decisions made for you, they exist.

> You also dedicate to duplicating validation and security trimming logic both on the client side and the server side.

I’ve been validating on the frontend for 15 years, long before I worked on an SPA. It has never been necessary but it provides a better experience. If you don’t like this, you can still let the server do all the validation. There is nothing about an SPA that enforces client-side validation. And you’re wasting your time if you’re doing security filtering on the frontend.

> Also the requirement for unit testing on the front end is common. Which brings in the need for things like jest and enzyme.

“Grr this paradigm allows me to test my code, I hate it!”. Seriously, we’re now able to write unit tests which were previously impossible. How is this a bad thing? Also Enzyme hasn’t worked since React 17, I now use RTL which asserts user behaviour - super nice.

> Because you have a virtual DOM this is now a security risk.

What?

> You must build a component which duplicates the user interface which already exists.

How is this any different to a non-SPA? Regardless of technology you can’t just arbitrarily lift interfaces from unrelated applications and inject them into your application without a bit of work.

> If the user interface needs to be shared among many applications you must build a commons code base to host your components.

Again, how is this any different from a non-SPA? Your UI isn’t going to magically share itself between applications just because you don’t have an SPA.

I’ve worked on all types of applications and I don’t think SPAs should be the defacto approach, but I really feel you’re clutching at straws with all of your arguments.


Recently we went through an exercise where we built a simple to-do app using React and rewrote it using HTMX.

The functionality was identical between the two apps, but the amount of tooling code and duplicative logic was massively higher in the React version because of the SPA approach and all the fundamental things it demands.

Now if you really need an SPA for your requirements because you have an intrinsically complex front end and you've mastered the hoops to jump through, good for you! There's nothing wrong with that. But there is something seriously wrong with building the same user interfaces we've needed for decades while the time, code, and complexity drastically increase for no justifiable reason.


> Recently we went through an exercise where we built a simple to-do app using React and rewrote it using HTMX.

React is boilerplate madness.

Do the same in Svelte.

I did a form heavy app in Svelte, literally took 1/5th the time it would have taken in React.

SPA fundamentally means that instead of refreshing the page, just the data needed to update what is on screen is sent down to the user.

Ideally, "send data about products on next page of search results" is less than "send all HTML needed to render the next page of search results."

Also the backend ends up simpler, instead of trying to template strings together, the code can just worry about fetching and returning needed data.

I am legit confused why people think generating HTML in some other language (Python, Ruby, etc) is a good idea.

Keep HTML in the browser (easier to develop and debug!) and keep backend business logic someplace else.


When you have a team with predominantly back-end expertise, using a templating language they are familiar with plays to their strengths. MVC applications were written this way for over a decade. Perhaps it is a subjective thing, because I don't see any logical difficulty in a web page that exchanges partials instead of JSON. I was programming that way for over 10 years.

Svelte really sounds compelling from what you're telling me. I'll check it out. But unless it is a drastic simplification, it brings with it the fundamentals of effectively writing a thick client in JavaScript or TypeScript and all the things that come with it. React and Angular have left a very bad taste in my mouth. The time and code cost for building basic user interfaces should go down, not up. We should be spending less time talking about how to do something and more time talking about what to do.


> But unless it is a drastic simplification

95% of what you write in Svelte is just HTML. You then databind whatever you need using an obscenely lightweight syntax.

Svelte also has an optional SSR framework called SvelteKit that auto-creates REST endpoints for you and auto-binds stuff to them, but all that is optional and not needed.

My issue with backend HTML templates is that essentially you always have to know HTML + CSS anyway because browsers suck and they still have differences, so I always end up spending a ton of time fixing CSS and HTML issues. Having to fix HTML issues by way of the backend that then generates HTML feels like an unneeded abstraction.

Instead I can just... write HTML and CSS.


I won't touch upon most of the points (because those are highly situational), but I'll offer my opinion on the following:

>> By choosing an SPA. You must choose a dedicated static site hosting which is separate from your web application.

> No you don’t.

It is true that you typically don't have to do that: you can just package the built assets of your SPA into whatever serves the rest of it, provided that you don't mind a monolithic deployment. I've seen many applications like that in Java (Spring Boot), PHP (Laravel), Ruby (on Rails), and so on; it generally works okay.

However, I'll also say that it's pretty great to have two separate folders in your project, "back-end" and "front-end" (or separate repos altogether), and to be able to open them in different IDEs, as well as to deploy them separately, especially if you want to create a new front end (say, migrating from OLD_TECH to NEW_TECH while maintaining the old one in parallel) or something like that. Otherwise you have to mess around with excluded folders for your IDEs, or, if you open everything in a single one, everything kind of mushes together, which can be annoying. Your build also cannot be parallelized as nicely, unless you separate the back end and front end compile steps from the package step, where you shove everything together.

Personally, having a container (or a different type of package) with Nginx/Apache/Caddy/... with all of the static assets and a separate one with the back end API feels like a really nice approach to me. In addition, your back end framework doesn't need a whole bunch of configuration to successfully serve the static assets and your back end access logs won't be as full of boring stuff. If you want to throw your SPA assets into an S3 bucket, or use one of the CDNs you can find online, that's fine, too. Whatever feels comfortable and manageable.

Now, whether SPA is a good fit for what you're trying to build in the first place, that's another story. Personally, I like the simplicity of server side rendering, but also like the self-contained nature and tooling/architectures behind SPAs (though personally I'm more of a Vue + Pinia person, than a React/Angular user, though they're passable too).


What backend tech do you like to pair HTMX with?


That's the thing. It doesn't really matter. It's sort of like asking what backend tech you pair with jQuery


I wish there was a "works for Pentium III" label that would help indicate that the app's usability hits necessary minimums on a 1 GHz Pentium III computer. IMO that would be a good optimization floor for avoiding the hidden monstrosity of Electron apps and that type of stuff.

If your McCrud app can't be responsive on a baseline 1 GHz PIII with 1 GB of RAM, then there needs to be some sort of shame pushback. Moore's law is effectively coming to a close; there will need to be more optimization in the future.


Why Pentium III? That's nearly 25 years old. You couldn't run Windows 10 on such a processor, let alone a modern browser, and a $200 mobile phone would beat it in benchmarks. Surely you can have a higher floor than that.


The Pentium III was around half a gigahertz, and we were starting to get into multi-hundreds of gigabytes.

... that sounds small compared to today's specs, but IMO this is when PCs had plenty of horsepower to run "real" operating systems (32-bit preemptive multitasking) and "real" browsers; 3D gaming was into its fifth year or so, etc.

So this wouldn't be a badge where you say "wow, we fit it into this impossibly limited device". The dirty secret of the PC business is that this hardware spec is more than enough for practically all productivity and browsing (and video, with hardware acceleration). Now, high-polygon, high-res, heavily antialiased games... but those have actual hardware horsepower needs you can quantify.

The amount of wasted resources from the year 2000 to now is stupefying. Intel and AMD love it! DRAM makers love it! But as an industry we have squandered the last two decades (and the last two decades of CPU improvement), right as gigahertz scaling disappeared, Moore's law is probably going to collapse under its economic weight, Amdahl's law says parallelism won't save us forever.

So if I look at some software and wonder why this relatively straightforward app is chugging along on a PC that is effectively 10-50x faster than a Pentium III 500 MHz (8x-10x in clock speed, then massive improvements to cache, branch prediction, multiple ALUs, speculative execution)... something is wrong.


Google Maps ran like a dream on Pentium M systems back in 2005. Gmail was also smooth as butter.

Pentium M was a derivative of the Pentium III design.

Ignoring high-resolution image assets, there is no reason a website shouldn't degrade gracefully and be able to run on any machine faster than 300 MHz.


"Works on a KaiOS feature phone" support would be a relevant metric today, with similar goals to the ones you mention. They explicitly state in their docs that React will be too heavy for your app.


Is there some VM that allows limiting the CPU to a performance similar to a P4? (A PIII is way too bad for my tastes.)

I imagine Linux would have a bad time running in it.


Pretty sure PIIIs beat P4s at a lot of benchmarks. :-D Thus why AMD is around today.


Well, I was using AMD at the time.

AFAIK, the P4 fared badly on jump-happy code, but this was not common enough to be a problem in the real world when compared to a PIII. It was also a power hog that could barely outrun a snail if you didn't have proper thermal management, but that also doesn't mean the processor is slow.


I should have specified perf/watt. Pentium Ms came and cleaned up compared to P4s; there was a fair bit of time there when, excluding massive power-hungry desktop monsters, a beefy laptop with a Pentium M could easily beat an average desktop with a P4.

And IIRC pipeline stalls on the P4 hurt, badly.

Oh and RAMBUS, I had forgotten about RAMBUS. That also hindered the platform.


It's actually the opposite. MPAs are pure overhead. In theory SPAs are faster because they only require a minimum of 1 user blocking network request, while MPAs need at least 1 for each page. Everything else is up to the implementation. So if you are doing heavy performance optimizations, SPAs will always end up faster. However that's not the full picture, and in practice there is a lot of nuance, but SPAs definitely have a higher performance ceiling.


Not for large DOMs. And for websites which don't require support for low internet bandwidth, this is optimizing for the wrong problem.


Network latency is your no. 1 bottleneck on every modern device; everything else is a distant second. Also, you can optimize everything else, but you can't make MPAs navigate without a network roundtrip.


MPAs are still often faster in practice, so SPAs must have another type of bottleneck.


This is very true. It's also why we have svelte-kit, remix, astrojs, and other frameworks that take a transitional app approach. They are server-rendered where it makes sense and client-rendered where necessary. Before developers had to choose between a server-rendered website and full on single-page application, now there are great options that blend the two depending on needs.


Totally agree. Definitely a reason I avoided any SPA on findthca.com. I've never had a user complain that it needed one.


Inferno.js uses VDOM https://github.com/infernojs/inferno and is faster than Svelte according to these benchmarks https://krausest.github.io/js-framework-benchmark/2023/table.... Sooo, VDOM can improve performance?


That's an interesting comparison because:

- Svelte is actually strangely slow. I mean, there's *one* interesting optimization that having a custom compiler/transform allows you to do for free, which is deep cloning nodes in one go rather than creating them one by one each time, and they ain't doing it. Also, I don't have proof of this anymore, but I had tried running my relatively naive framework without the deep cloning trick, and without any custom transform or compiler at all, on that benchmark, and it was _still_ significantly faster than Svelte. Like, Svelte is not that fast when you look at it closely, despite what the perception of the average developer might be, or what the marketing might say.

- Inferno is fast for real in that benchmark, and it isn't using signals, which is very interesting. I don't know how Inferno works in depth, but looking at the Inferno implementation for that benchmark [0] I see some shenanigans. Like what's that "$HasTextChildren" attribute? Why is my event handler created like that? Like I'm doubtful that the result in the benchmark will actually translate exactly to the real world.

- It's interesting also: if the VDOM is pure overhead why is Svelte creating an object for each instance of a component, kinda like React is doing? You don't strictly need to do that, as proof of that Solid doesn't do that (in production builds), because that's pure overhead for real there.

[0]: https://github.com/krausest/js-framework-benchmark/blob/6388...


Inferno was one of the first frameworks to embrace compiling JSX as an opportunity for advanced performance. The `$HasTextChildren` flag is a special attribute its JSX compiler (it's a Babel plugin) uses to optimize the tree at the point where that flag is found. It can do advanced optimization knowing that the children of that component are purely text VNodes. There are other flags available too that optimize different aspects[0]

This does translate into the real world, if developers use the flags. I know their Babel plugin uses some heuristics to auto-apply some of these things, but it's extremely conservative.

The flags themselves are available in the real world though and can be used to achieve high performance.

It's really a shame Inferno never caught on the same way as other frameworks. It's extremely fast and intuitive, and had a nice take on functional components (just add the lifecycle methods as props, instead of introducing what is now React Hooks), though I think Inferno is held back by not having a hooks API for some level of mindshare and compat there.

Even SolidJS hasn't quite crept the performance Inferno has managed to achieve.

EDIT: If memory serves, the creator of Inferno works (worked?) at Meta (Facebook) as well. For whatever reason, it never garnered mindshare at FB either, despite arguably being a better solution than React in many real world scenarios and coming around at roughly the same time. I have always wondered what the story was there.

[0]: https://www.infernojs.org/docs/guides/optimizations


I think React won partly because one of the most important tools for Facebook's revenue, the "Power Editor", was built on react (before react even existed, I suppose)

As one of the first Facebook PMDs (later FMPs) part of my job back then (around 2010-2012) was to keep up to date with changes in the ads API, but our main contacts were two guys in Ireland and themselves not always kept up to date with every development out of Palo Alto – I realised that the Power Editor was a client-side app, so I would reverse engineer it to find new features that were being run as internal experiments and stay up to date.

I realised that they had broken up the app into classes that kept their own state, using a framework that they called Bolt/Javelin – which would later become React – so I ended up writing what was probably one of the first browser extensions to debug "React" :)

Their ads team grew and grew and suddenly the two blokes in Ireland became hundreds and thousands. I can't imagine a better POC for a technology than the power editor was, because of how much of an impact it had for Facebook's ads business exponential growth.


I’d love to hear more about the early days like this. It’s amazing to me just how special a time this was in web development.


> Even SolidJS hasn't quite crept the performance Inferno has managed to achieve.

I see Solid to the left of Inferno in that benchmark, though they are very close indeed. Solid's code looks weird in its own ways I guess, but it looks less hacky/hand-optimized to me.

Inferno seems to use less memory, which is interesting. Solid isn't fully memory-optimized though; it could potentially beat Inferno with more memory optimizations.


Yep, Solid is among the fastest but requires more cognitive overhead.

Svelte requires very little over and above HTML and JS while still being closer to Solid in performance than React, Vue, or Angular.

And the latest iterations of React and its ecosystem have both high cognitive overhead AND lackluster speed. At least Angular is opinionated. React is just a YOLO ball of yarn for large codebases.


Can you expand a bit more on cognitive overhead in Solid? What are the examples?


JSX. This was never zero cognitive overhead as compared to plain HTML. Folks have simply had 10 years of practice with their Stockholm Syndrome.

With Svelte, you see a script tag with 99% plain JS, some HTML with some basic control and binding syntax, and a style tag with 100% plain CSS/SCSS.

No createSignal(…) with [foo, setFoo]. No props objects. No onCleanup(…) handlers. No createEffect(…) to track reactivity. No render(…) function just to show some HTML. No string template literals to use the framework. No worrying about when to use createMemo(…) or not. Nothing more than a $ prefix to use a store.

Solid (and React et al) is to Svelte as vanilla DOM is to JQuery.


These benchmarks say SolidJS is faster than Inferno. Maybe that's a recent thing?

https://krausest.github.io/js-framework-benchmark/2023/table...


Solid.js is even faster than Inferno, and it doesn't really use a VDOM strategy; it uses a strategy much more like Svelte's. IMO Svelte is just poorly implemented from a benchmark perspective.

In reality, most of these benchmarks are not meaningful when talking about real app performance. What's meaningful is how you do global state updates in your app. If you use a React app with hook-based context providers that unnecessarily update hundreds of components on simple changes, your perf is going to suck. If you use a React app and don't use React.memo anywhere, perf is going to suck. If you use React very carefully, are fully aware of when the vDOM is going to run, use small components that only update when their data actually changes, and ideally avoid running the vDOM 60-120 times a second for animations, performance is going to be good.

I like Solid.js because it does all this for you by nature of just using the framework. Svelte does some of this for you, so for real-world apps performance is likely to be better than React, but it doesn't do it as well as Solid, by nature of its state management strategy, not by nature of its DOM update strategy.

The less you update, the faster your app will be. Then the DOM diffing strategy doesn't matter.
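As a rough illustration of that point, here is a toy signal in plain JavaScript (not Solid's or Svelte's actual implementation): subscribers run only when the one value they read actually changes, so there is nothing tree-wide to diff at all.

```javascript
// Toy fine-grained reactivity sketch: a readable/writable pair where
// writes that don't change the value do no work, and writes that do
// change it notify only the direct subscribers of that one value.
function createSignal(initial) {
  let value = initial;
  const subscribers = new Set();
  const read = () => value;
  read.subscribe = (fn) => subscribers.add(fn);
  const write = (next) => {
    if (Object.is(next, value)) return; // unchanged: no subscriber runs
    value = next;
    subscribers.forEach((fn) => fn(value));
  };
  return [read, write];
}
```

In a real framework the "subscriber" would be the DOM-updating code for exactly the text node or attribute that reads the signal.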


damn, just when i thought the "1 new js framework a day" race had calmed down, i'm reading your comment and realize it hasn't one bit :)))


You _can_ get vdom to be fast if you hoist static subtrees, memoize, and skip the diff entirely for some operations. Inferno is known for all kinds of these tricks, but you need compilers for that and in the end vdom is just getting in the way.


> Sooo, VDOM can improve performance?

This article doesn't really argue against that. They say the VDOM is a "means to an end" and is "generally good enough".

The thrust of the article seems to be that a virtual DOM isn't a guarantee of performance. Rather it's just one solution that can be pretty fast. Svelte happens to take a different approach which is also pretty fast.
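For the curious, the core mechanism being debated can be sketched in a few lines of plain JavaScript; the node shape (`{ tag, text, children }`) and patch format here are invented for illustration and are not any framework's real representation.

```javascript
// Toy virtual DOM diff: compare two plain object trees and emit a list
// of patch operations, instead of touching a real DOM. The per-node
// comparison work on every update is the "overhead" the article refers to.
function diff(oldNode, newNode, path = "root") {
  if (!oldNode) return [{ op: "create", path, node: newNode }];
  if (!newNode) return [{ op: "remove", path }];
  if (oldNode.tag !== newNode.tag) return [{ op: "replace", path, node: newNode }];
  const patches = [];
  if (oldNode.text !== newNode.text) {
    patches.push({ op: "setText", path, text: newNode.text });
  }
  const oldKids = oldNode.children || [];
  const newKids = newNode.children || [];
  for (let i = 0; i < Math.max(oldKids.length, newKids.length); i++) {
    patches.push(...diff(oldKids[i], newKids[i], path + "." + i));
  }
  return patches;
}
```

A compile-time framework like Svelte instead emits code that updates the one changed text node directly, skipping the tree walk entirely.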


Svelte is great. React is great. X, Y and Z are also great. And you know what they all share as well? Speed. They are all fast - definitely fast enough for 99% of all use cases, if not more. The benchmarks they all provide are just benchmarks; I treat them like I treat car range reports from the car makers. I personally use React because I know it well, and it allows me a super speedy development cycle once all the base components are done. I'm sure another person will say "I use Svelte because A, B and C", etc.


> They are all fast

I would say they all can be fast. But try browsing the web on a low end Android device and tell me all sites are fast. To my mind the differentiator is how easy a framework makes it to shoot yourself in the foot. And React makes it very easy to re-render a huge swathe of your app when you've only changed one tiny element. React also needs to hydrate every element even when it isn't ever going to change, which usually involves parsing some JSON payload for props on page load.

None of this is world-ending stuff. But it is very easy to keep putting wonderful, carefully crafted components together and not realise the entirety of what you've made is getting slower and slower over time.


> React also needs to hydrate every element even when it isn't ever going to change

That is no longer true with Server Components https://beta.nextjs.org/docs/rendering/server-and-client-com...


...which are a whole damn thing.

Don't get me wrong I'm glad the React team is tackling the problem but it's telling that the answer requires an entire server side solution when other JS frameworks are able to solve this in the client or at build time. And just looking at those docs screams "patching over a fundamental issue" to me.


You lose all client-side performance benefits if the components are only on the server. Ideally, you would improve the performance of hydration or only hydrate when necessary.


There are problems that have to do with the systems built on top of React if not React itself.

It is not unusual at all to find some "simple" UI update causes the render() method to be called 20 times.

You might blame the application developer for this but other than "keep all the state at the top level of the application and pass it down in props", React doesn't provide a systematic answer for handling state in apps if data is flowing up, down and sideways. There are a number of half-baked libraries such as Redux, MobX that maybe help some of the time but frequently make very simple application logic very complicated to write. When I started writing high-complexity SPAs circa 2005 or so I realized right away that you had to be very systematic about what happened when an AJAX call returned and where the data goes, something the industry still hasn't entirely learned.

It is possible to make React applications work right, but I think people work harder at it than they have to, and there is a lot of reciting shibboleths that people don't understand (hooks for one thing), and the cost of it is the render function getting called over and over and over again. But maybe it is not a bug but a feature for the advertising-supported web, where every rerender and layout shift creates a chance you'll accidentally click on an ad when you are trying to click on something else. Most studies seem to show that psychologically normal people of normal intelligence in possession of their faculties never choose to click on ads; maybe Google's whole business is driven by accidental clicks caused by layout shifts, and doing something about those layout shifts would put them out of business.


> React doesn't provide a systematic answer for handling state in apps if data is flowing up, down and sideways.

The built-in React way of doing that is with context.


A classic example of: "you had one problem, now you have ten problems".

That's fine if you aren't writing any unit tests or trying to fix bugs with the debugger. If contexts are in use, you might have some 'simple' system with 10 components that shows 150 components in the React component viewer, most of which are worthless context blocks that are just there to waste your attention, and probably the CPU and memory of your computer. Does Micron pay Facebook a commission for all the RAM this sells? Maybe people who are trying to keep their code obfuscated think it is a big win.

I am glad that the React team has painted themselves into a corner and they can't seem to successfully land new malfeatures like context, hooks, etc. It seems like they are re-arranging the deck chairs on the Titanic repeatedly to prepare for the threaded rendering changes that they (hopefully) won't be able to deliver, so at least the React development experience is not going to degrade quickly.


>React doesn't provide a systematic answer for handling state in apps if data is flowing up, down and sideways

So let's first back up and recognize that this earlier statement was flat out wrong. React does provide a systematic answer for this.

Second, not only does it have a systematic answer, but it memoizes quite well because React will not re-render children if the `children` prop is identical to the previous render, even if you don't use `memo()`. This means it is quite cheap to have context providers update, even if you nest 2 or 3 of them.

The big issue with React in my experience is just that developers are lazy af and will stubbornly refuse to read even the tersest of docs even if they are encountering a new paradigm, like declarative and reactive UI. The result is a giant spaghetti mess of their own creation, which they then blame the framework for.

You can make React fast and you can keep it clean, all you have to do is topologically sort your data by the frequency of how quickly it changes. That's it. That's the trick.
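That bail-out can be sketched framework-free. This mirrors the idea behind `memo()` and the stable-`children` optimization described above, not React's actual implementation: skip the expensive work whenever the inputs are referentially identical to last time.

```javascript
// Toy sketch of referential-equality memoization: wrap a render function
// so that a call with the exact same props object returns the cached
// output without re-rendering. `memoRenderer` is an invented name.
function memoRenderer(render) {
  let lastProps, lastOutput;
  let calls = 0;
  const component = (props) => {
    if (props === lastProps) return lastOutput; // same reference: bail out
    calls++;
    lastProps = props;
    lastOutput = render(props);
    return lastOutput;
  };
  component.renderCount = () => calls; // for inspection only
  return component;
}
```

The "topological sort by change frequency" advice amounts to arranging state so that fast-changing values live low in the tree, letting this bail-out skip the stable majority of it.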


My issue with React is that it's truly hard. It markets itself as easy but it's not. I have 20 years of programming experience, I've dealt with UI a lot, I've used WinAPI and Java Swing, and I know JS and HTML pretty well. I'm fine with reactive programming or async stuff. Yet I often struggle with React. I'm not a full-time web developer, I admit; I'm more of a full-stack developer. But when I need to write novel React code, I struggle a lot.

For example, recently I wanted to use promises in a React app. I mean: promises are as native to JS as it gets. Surely React should have first-class support for promises.

Nope.

So I started to write a custom hook: usePromise. Like useEffect, but for promises.

Well, it would not be hard. But apparently React likes to call useEffect twice in dev mode. So I need reliable cancellation. How do we cancel stuff on the web? With AbortController, right. Has React heard about AbortController? Nope. So I need to integrate AbortSignal within my usePromise hook. I read the famous Dan Abramov article, I read other articles, I spent days tinkering with this thing, I wrote several implementations.

All of those implementations are faulty.

Here's my latest one: https://pastebin.com/WBctCBpc. Technically it works. But it contains an impure reducer function. It's not broken in the current React version, but who knows how React dev mode will torture my reducer in the next version.

I have to admit that I enjoyed toying with this stuff. But it's definitely counter-productive to business value.

Now I know that this is all solved and I should just use react-query or something similar. Well, I have my reasons to avoid it. But my point still holds: React is hard, and React is not well integrated with JS and the web. Probably React will get better in the future; I've heard about the Suspense stuff, which might be just what I need, but it's not there yet.


I feel more and more like React wants to be separate from JS and the web. Perhaps so that it can better fit React Native, I don't know. But it wants to be its own entire world and it's an exhausting thing to pick up at times.


I'm sorry but I don't share your experience. I find React very easy, and short of a period of creating the baseline components and skeleton, everything else flows very fast in terms of development time. By the way, I think react in strict mode does run components twice in dev, so not running in strict mode will prevent that, and you can use a regular Promise in your useEffect.


Strict mode is not something that should be avoided. In future versions React will do the stuff it does with strict mode today. Of course you can use a regular async function in useEffect, but you'll quickly notice that it's called twice in strict mode. And you'll want to abort the running fetch. Then you'll notice that responses can arrive out of order and your state can get updated with an outdated response which happened to arrive last. It's easy to use async code in useEffect. It's not easy to use it correctly.


Whether it should or should not be avoided is a preference. That is why it's not forced. I don't want or need it. And if I do, and it's caveated with double-useEffect - so be it. I have a feeling there is a lot of overkill in your approach but of course I lack context so apologies if I'm wrong.


The fact that “strict mode” means useEffect gets called twice feels like a great example of the ways in which React is not simple.

It’s not quite directly using a promise but I was surprised I can’t use an async function in useEffect. It’s pretty common to perform async operations there, after all.


useEffect IS a great (the best?) place to put async code. I do it all the time. The reason for strict mode rendering twice is to spot strictness related issues. Honestly I never even thought of using it so I've never experienced this.


What is usePromise supposed to do?


It's supposed to run the provided promise and return its status. If deps change or the component is unmounted while the promise is pending, it should inform the currently running promise using AbortSignal. And it should handle edge cases (e.g. the promise is changed and a second promise is started, but the first promise ignored the abort signal and resolved to a value; this value should be ignored).

Basically it should remove any boilerplate from the user of this API and handle edge cases.


Thanks. But what do you use it for? What promises do you want your components to be involved with and in what way?


For example, an HTTP request. Anything, actually. Some rough code:

    function Item({id}) {
      const r = usePromise(async (signal) => {
          const resp = await fetch(`/item/${id}`, {signal});
          return await resp.json();
      }, [id]);
      if (r.status === "pending") {
        return <div>Loading</div>;
      }
      if (r.status === "rejected") {
        // String() because rendering an Error object directly would throw
        return <div>Error: {String(r.reason)}</div>;
      }
      return <div>{JSON.stringify(r.value)}</div>;
    }
It's really like useEffect but provides better support for cancellation and properly tracks the promise. Rewriting this snippet with useEffect correctly would require quite a lot of code (although rewriting it with useEffect incorrectly is possible with not a lot of code, but you don't want to write incorrect code), which would have to be repeated everywhere.

Again, this task is better solved by react-query or its alternatives. What I'm writing is not strictly a web site but rather a web interface on an embedded device, and the web server is not a remote server but something that runs on the same device, so for now I decided not to use those libraries, which are made for slightly different use cases.
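For what it's worth, the "latest call wins" edge case described above can be sketched framework-free in plain JavaScript (`makeLatest` is an invented name, not part of any library): each new run aborts the previous controller, and a value arriving after its own run was superseded is marked stale, even when the task ignored the abort signal.

```javascript
// Toy sketch of out-of-order response protection with AbortController.
// Starting a new run aborts the previous one; a run whose signal was
// aborted before it finished reports "stale" instead of its value.
function makeLatest() {
  let current = null;
  return async function run(task) {
    if (current) current.abort(); // cancel the previous run
    const controller = (current = new AbortController());
    const value = await task(controller.signal);
    return controller.signal.aborted
      ? { status: "stale" }            // superseded: drop the value
      : { status: "resolved", value }; // still the latest run
  };
}
```

A real hook would additionally wire this into mount/unmount and the deps array, but the ordering guarantee is the tricky part.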


I think I'd go about it using redux-thunk, because I feel like the render function is not a great place for complex async state changes and checking the internal status of a promise is a bit low-level, but you've built a nice, easy-to-use thing. If you published it, some people might find it to be exactly what they need. Plus they might help you debug some corner cases.


FYI, we recommend that most folks not write promise management, data fetching, or loading-status tracking code directly.

If you're using just React, use React Query or something like `react-async`.

If you're using Redux, use the "RTK Query" data fetching and caching API in our official Redux Toolkit package:

- https://redux.js.org/usage/side-effects-approaches


Thanks, I'll think about it.


> So let's first back up and recognize that this earlier statement was flat out wrong. React does provide a systematic answer for this.

Context was never a systematic answer. Even today the docs say:

  Apply it sparingly because it makes component reuse more difficult.

  If you only want to avoid passing some props through many levels, component composition is often a simpler solution than context.
https://reactjs.org/docs/context.html#before-you-use-context

And IIRC older docs would be even more harsh at recommending not to use context.


No, the troubles building an SPA have a lot to do with the complexity of your app.

If you are building the average mobile app it is often really clear when you are writing code what needs to be updated in the UI when a piece comes in.

If you are building something more like Adobe Photoshop or Eclipse the user has the ability to open up property sheets and create other UI elements that could be affected by data structures anywhere in the application. In that case you need some systematic answer such as components registering to get notifications when something happens but you can run into some pretty bad problems such as having to hang on to references which keep the garbage collector from working as expected. My first SPA was a knowledge graph editor in GWT that I managed to get correct (though it probably leaked memory a little) and since then I haven't known whether to laugh or cry about the state of SPA frameworks.

As for the manuals I think the React manuals are some of the worst in the business. I have no problems finding answers in the Java manual or the Python manual or the Postgres manual or many others but the React manual baffles me.


> differentiator is how easy a framework makes it to shoot yourself in the foot

I agree. In fact this goes beyond frontend frameworks. One should apply the same approach to all methodologies and practices: OKRs, TDD, Agile, etc.

When a framework/methodology is being sold to you, people talk about all the wonderful properties it has. But what you should really care about is how easy it is to misuse and what happens when it does get misused. Because, trust me, it will get misused.

One of the most important things about particular technology is whether it lands you in a Pit of Success: https://blog.codinghorror.com/falling-into-the-pit-of-succes...


Not much in 2023 is fast on an old Android device from 2013, unless it's something from 2013...


> But try browsing the web on a low end Android device and tell me all sites are fast.

Holy smokes, that's more like a straw kaiju than a strawman. Obviously slow sites exist. That has almost nothing to do with the inherent overhead of recently created JS frameworks.


> Svelte is great. React is great. X, Y and Z are also great. And you know what they all share as well? Speed. They are all fast.

we don't live in the same universe. even with powerful computers, browsing any friggin modern website is an exercise in pain and frustration, everything, literally every interaction is slow when you compare to the average desktop app


Agree, and I think this is the point of the article too.

If we are going to say things like "React is fast" then it needs a further clarification - fast compared to what?

Are we comparing it to C, jQuery, Angular, pure JavaScript, or a Commodore 64? Because it doesn't make sense to say it is fast if there isn't something to compare it against.

(In reality, I suspect it is only really "fast" if you compare it with something slow).


If a developer coded a slow app in react they will code a slow app in svelte. We need to stop blaming the language.


Saying they're fast is a relative statement. I primarily use an MNT Reform. On 4x ARM Cortex-A53 cores, most modern web apps are slow (the new reddit interface, the new gmail, virtually every airline booking UI, my music player of choice, etc.). I hate the web.


Reddit is just written s**tty. It is the worst case you can use to recommend React to people. Their desktop version is even worse. It eats a whole core of an R7 2700X for a second just to update vote counters on the page.

https://www.reddit.com/r/bugs/comments/rj0u77/reddit_redesig...

Although I also think it is partially React's fault. React doesn't really have proper guidelines about how not to write a page like this.


Do you believe that these sites would be fast if built with other non-SPA or non-“modern” libraries?

It’s like thinking that a faster car or a bicycle could get you across a city with bad traffic-light logistics any quicker. All of Svelte, React, Vue, jQuery, and the raw DOM are equally visibly fast until you attach those 10 megabytes of /metrics-n-spyware/**/*.js.


> They are all fast

When you say "fast" - I assume you mean runtime speed, not time to market/developer speed? Because FTA (quoting Pete Hunt in regard to React itself):

> Just like you can drop into assembler with C and beat the C compiler, you can drop into raw DOM operations and DOM API calls and beat React if you wanted to

Even the React people acknowledge that working with plain old JavaScript is going to perform better at run time than React (or Svelte, or X, Y, Z); they're just offering to speed up the development cycle, always with a tradeoff in runtime performance.


React is fast in the sense that it eats tons of computation power without lagging the UI. In my opinion, this is a dead end. They should really try to reduce the actual computation power required to render the UI. The growth rate of CPU speed has stalled; there isn't much room left.

And what about users with low-end Android machines? Not lagging the UI isn't helpful there, because it still needs seconds to render the whole thing.


I agree. Getting kind of sick of these posts that are thinly veiled political campaigns against the other framework. Great libraries tend to speak for themselves in terms of adoption. You shouldn't have to convince people not to use other options.


It’s not always bashing other frameworks gratuitously; an important aspect of human progress is recognizing what works well, what works less well, and what seemed like a good idea at the time but either became obsolete or was a bad idea to begin with.

JSX and VDOM were at one time necessary (or at least helpful), but Web Components and Tagged Template Literals can do everything React does, only better and with less overhead (in both the developer’s mind as well as the computer’s runtime). I say that as someone who learned and taught bootcamps with React, and has yet to dive too deeply into lit-html and LitElement.


100%

There's a vanishingly small number of applications where it's really going to make a difference. Use what your work uses, or, if it's a personal project, what you like. The more I use React and co. the more I feel like it's all the same thing.


> There's a vanishingly small number of applications where it's really going to make a difference

Developers (and especially deadline-conscious managers) keep saying this, but their web sites keep slowing down my computer to a crawl. As a consumer, I really wish that development teams paid at least some attention to performance.


The truth is, performance doesn't bring in the dollars, features do, and marketing, and sales. The incentives are just not there, and these issues what we're having, show that.


I haven't seen any project written in React that feels fast. But the most important thing for me, which is broken in all the JS frontends (because it is too difficult to catch all the edge cases), is navigation: the browser's back and forward buttons almost never work as expected, bookmarking links is broken because state lives in JS, etc.


This can definitely be fixed (it involves making sure the relevant operations in the app manipulate `window.history` and either indicating location state in the browser via the hash portion of the URL or building the server to work hand-in-glove with subpaths), but it requires more work than the default navigation one gets with a multi-page app.

I've seen good frameworks for managing this but I agree that developers tend to forget it.


I always find these a little funny. "[Commonly held thought with some convenient tweaks] is WRONG...so use our stuff!"

Even open source projects are guilty of this type of grifting, everyone wants to win, even without money in the game.


I take “pure overhead” to mean a cost with literally no benefit. To me that just makes it sound like Svelte is being pushed by idiots, because clearly there’s a substantial benefit (regardless of whether or not VDOM is an optimal strategy).

From the end of this article: “Virtual DOM is valuable because it allows you to build apps without thinking about state transitions, with performance that is generally good enough. That means less buggy code, and more time spent on creative tasks instead of tedious ones.“

Okay great, it’s not pure overhead.


We live in the era of click-bait titles and these manifest even by, what you would hope, are more reputable establishments. The author obviously agrees with your sentiment (given the quote you cited), but it's too easy and just the norm to be hyperbolic in your blog title, articles or essays.


No need to be pedantic. He's obviously saying that there are better strategies than vDOM with lower overhead.

This is proven by Solid.js, which is faster than every VDOM framework, has an API that is functionally the same as React's, and doesn't use a VDOM.


I believe Solid has also kinda shown that.. Well, I like its low-touch approach compared to svelte's need for heavy compiling and a new language essentially.

With Solid you can use JSX, and last I checked it doesn't even need anything beyond the standard JSX transformation to get pretty good results. It's the better direction IMHO, even if the Solid APIs feel like they need another iteration or so.


Sadly, it seems like nobody is considering the best optimization: make DOM operations fast. I think if you could batch DOM operations together you could avoid a lot of wasted relayout and duplicate calculations.


Agreed, I'd love to see HTMLElement.beginTransaction() or something similar.


Document fragments are like transactions for the DOM. Alternatively you could just learn which Dom operations force a layout shift and batch those.


> Document fragments are like transactions for the DOM.

About the same way innerHTML is, which is completely unhelpful: during reconciliation you need to copy, update, and reset the subtree which contains all the update points, which is almost certainly a lot more than you need.

You also likely need to reconcile document state (e.g. focus) by hand.


You are correct, but if you think about it, you're talking about parsing and tokenizing before the operation can even occur. That's really heavy. I think it could be better than reading .innerHTML


I was thinking of how to improve DOM updates. One of ideas is to add Element.update() method:

   Element.update(function(updateCtx) {
      updateCtx.setInnerText(this, "new text");
      updateCtx.setAttribute(this, "title", "new title");
      ...      
   });
This has two benefits: 1) transactional updates, 2) for contenteditable scenarios it can group DOM mutations into an atomic undo-able action.

But I've discarded that in lieu of Element.patch(vDOM):

   Element.patch(<div title="new title">new text</div>);
as the latter is more human-friendly, I would say.


My feeling is that the browser already does this, in that it treats all DOM API calls within a single 16ms frame (requestAnimationFrame?) as a single transaction.

The trouble for browsers, is if certain DOM apis have a dependency on the layout of another element. My naive and unvalidated understanding:

    // Good: These DOM calls in a single frame will trigger layout-paint-composite (1 loop)
    - e.style.backgroundColor = "red";
    - e.style.width = "20px";
    - e.style.transform = "translateX(10px)";

    // Bad: These DOM calls in a single frame will trigger layout-?-layout-paint-composite (2 loops)
    - ...
    - e.style.height = otherElement.offsetWidth + 200 + "px"
    - ...
The reason being that without knowing the width of "otherElement", there's no way for the js runtime to execute the "e.style.height" line and execution needs to be paused while layout occurs.

If you're looking for a transactional syntax (similar to what you've proposed) that also addresses this though, fastdom looks like a good option:

    fastdom.mutate(() => { element.style.width = "20px" });

I'm not a browser expert though, so if I'm misunderstanding something, I would love to know.
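For illustration, here's a toy sketch of the batching idea behind fastdom (the `measure`/`mutate` names mirror fastdom's API, but this is not its implementation): queue reads and writes separately and flush all reads before all writes, so layout is forced at most once per flush:

```javascript
// Toy fastdom-style scheduler: batch "reads" (measurements) before
// "writes" (mutations). In a browser you'd schedule the flush with
// requestAnimationFrame; a microtask stands in so the sketch runs anywhere.
function makeScheduler(scheduleFlush = (cb) => Promise.resolve().then(cb)) {
  const reads = [];
  const writes = [];
  let scheduled = false;

  function flush() {
    scheduled = false;
    while (reads.length) reads.shift()();   // all measurements first
    while (writes.length) writes.shift()(); // then all mutations together
  }

  function schedule() {
    if (!scheduled) {
      scheduled = true;
      scheduleFlush(flush);
    }
  }

  return {
    measure(fn) { reads.push(fn); schedule(); },
    mutate(fn) { writes.push(fn); schedule(); },
  };
}
```

Interleaved measure/mutate calls queued in the same frame still execute read-phase first, which is exactly what avoids the layout-?-layout double loop described above.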


I actually think the former example is more clear. It's a bit verbose but every part is simple. The second example is very "magic", it takes a lot of thought to understand


> Sadly, it seems like nobody is considering the best optimization: make DOM operations fast.

On the contrary, there is evidence that quite a few people are considering that.


Well, slight amendment: Lots of people want to make the current api fast, but it seems there's little movement on a new api.


You can't parallelize DOM updates. Everything has to happen on the main thread. This is not gonna change for the web as it is today.


I don't think that's what OP is saying. They're saying that (e.g.) calculations are made on each appendChild() call when it would be more efficient (when you know you're going to be inserting a ton) to suspend all calculation, insert 1000 nodes, then resume calculations. Something akin to setNeedsLayout() on iOS:

https://developer.apple.com/documentation/uikit/uiview/16226...


A good parallel might be database development: the difference between taking a cursor and looping through to make changes vs. a set-based operation that specifies all the needed changes at once.


Yes, that's exactly my point, thanks for expressing it better than I could


My point is that if you change the DOM API itself, that's the win. Right now it's just individual property updates, so the browser can't know when to delay a computation. So it's definitely not part of the web today, but it seems worth considering.


Reading this as a native developer is a bit like reading about alchemy or astrology - two fields with their own vast suite of terminology and internal logic that doesn't fully correspond to anything real ...

... only to find out that this stuff is actually real and is how a big chunk of the visible web actually works.


> native developer

You probably haven't done any native UI as native UI uses exactly the same idiom.

    CWnd* parent = ...
    parent->appendChild( new EditBox() );
native UI also uses the DOM concept; it's just that instead of child elements it uses the terms child windows, [Gtk]widgets, or [ns]Views.


> You probably haven't done any native UI

Hah. https://github.com/Ardour/ardour/tree/master/gtk2_ardour ... c'est moi

Anyway, that's not really the point I was making. Native UI can be thought of and used as a DOM model, but that's not inherent to the process unless you're literally writing traditional database+presentation+edit applications.

I was more poking light-hearted fun at the explosion in terminology and concepts exposed to someone doing web-based "frontend" development, and how little most of this has to do with HTML, CSS and the general classical model of "a browser".


Come back when you know about HWND, boy.


I can't trade TSX for any text templates. Being able to write tags and having them syntax-checked with types support is indispensable. I wish that those frameworks embrace TSX rather than trying to drag users to the dark past.


IMO, TSX really is beautiful, I can’t see how it can be beat. I’m not a big fan of how React handles state and effects, but TSX itself is joy to work with.


I'd rather have type-checked functions that don't have to futz with XML syntax and its unexpected incompatibilities with HTML while looking superficially similar. With a lot of compile-to-JS languages that don't have the C-like syntax, the results are highly legible while being more condensed, and they don't require a separate parse/compile step.


Svelte supports TS.


Including types for component properties? Typescript and react work so well together since you can type properties to any type and have them type checked.


Yes.


I've heard the term "virtual dom" for years. This article made me want to understand. It gives this example for explaining "what is a virtual dom?"

  function HelloMessage(props) {
    return (
      <div className="greeting">
        Hello {props.name}
      </div>
    );
  }
And that returns "an object representing how the page should now look"

Aren't we developers here? How about naming the object type? I assume it's a DocumentFragment. Is that correct?

Then it talks in broad (i.e. useless) terms about using this object.

So my next question is: what's exactly is wrong with using a DocumentFragment to just replace that part of the DOM? For example:

   let frag = renderMyModel();

   if (destElt.firstElementChild) {
      destElt.replaceChild (frag, destElt.firstElementChild);
   } else {
      destElt.appendChild (frag);
   }
I do this with a massive DOM tree and rendering is like instant.


I don't think that's a great explanation of the concept of VDOM. For example, SolidJS looks a lot like the example given, and it also arguably returns an object that represents how the page looks, but it didn't use a virtual DOM under the hood.

I think it's easier to think of virtual DOM implementations as doing two things: (1) describing the desired state of the DOM in some sort of structure, and then (2) diffing that structure against the actual DOM in order to make the changes. The key part is that the virtual DOM implementation does the diffing itself in order to make the changes itself.

This is similar to your example, in that you have generated the desired tree and are rendering it in the right place. However, it is different, because the browser will not do a fine-grained diff of all the elements, it's just going to replace them with the new elements that you've given it. This works fairly well if you're trying to replace a large chunk of elements with a new set of elements, but it has problems if you just want to make smaller modifications.

For example, consider some JSX for an arbitrary framework that looks something like this:

    function MyInput({ isGreen }) {
        return <input class={isGreen && "text-green"} />
    }
When `isGreen` changes, we want to update the current value of the DOM with the new version of MyInput, which should just be the same thing with the class changed. The naive (although often simplest and most practical) solution would be to just replace the contents of the element with the return value of MyInput - i.e. replace the input with a new copy, just with the class name changed. The problem comes when a person is typing into the input - if we replace it completely, the user will lose everything they've already written.

The VDOM solution is, as I said, to (1) generate, but then (2) to diff. So if the current state is an input with a "text-green" class, and the desired state is an input without this class, it would work recursively: First, is there an element? Second, does the element have the right tag name? Third, for each attribute on the two elements, do they have the same values? Etc. And every time there's a change, it will make the smallest change possible to ensure the current state becomes the same as the desired state. Here, that means removing the class name, but leaving the rest of the input as it is (i.e. with any input from the user left alone).

The third option here is what frameworks like Svelte and SolidJS do: they skip the part of the process where you generate a new desired DOM, and jump straight to the diffing. That is to say, when you create a component, inside that component will be event listeners for each part of the component that needs to change (and only those parts). These listen to changes in the props and state and then change only those parts of the DOM that need to change. That way, you don't need to go through the entire DOM, iterating through all those nodes that stayed the same just to find the class that needs to be updated. Instead, you just directly update the class.
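To make step (2) concrete, here's a toy diff over plain `{ tag, attrs, children }` objects (illustrative only; real VDOM implementations like React's reconciler also handle keys, components, and text nodes):

```javascript
// Toy VDOM diff: compare a desired vnode tree against the current one and
// emit a minimal list of patch operations rather than touching a real DOM.
function diff(oldNode, newNode, path = []) {
  if (!oldNode) return [{ op: "create", path, node: newNode }];
  if (!newNode) return [{ op: "remove", path }];
  if (oldNode.tag !== newNode.tag) return [{ op: "replace", path, node: newNode }];
  const patches = [];
  // Same tag: emit only the attributes that actually changed...
  const keys = new Set([
    ...Object.keys(oldNode.attrs || {}),
    ...Object.keys(newNode.attrs || {}),
  ]);
  for (const key of keys) {
    const before = (oldNode.attrs || {})[key];
    const after = (newNode.attrs || {})[key];
    if (before !== after) patches.push({ op: "setAttr", path, key, value: after });
  }
  // ...then recurse into children by position (real diffs also match by key).
  const oldKids = oldNode.children || [];
  const newKids = newNode.children || [];
  for (let i = 0; i < Math.max(oldKids.length, newKids.length); i++) {
    patches.push(...diff(oldKids[i], newKids[i], path.concat(i)));
  }
  return patches;
}
```

Run on the MyInput example, the only patch is removing the class; the input element itself (and whatever the user typed into it) is left alone.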


For some reason I thought the virtual DOM was a native feature of the browser, in the form of the DocumentFragment class, and had nothing to do with diffing. But you are saying that it's just a concept, and is implemented by frameworks such as React, and involves the two steps of generating and diffing.

I'll still ask why. Is it just something that React needs to do?


Using some sort of VDOM is still pretty standard in most frameworks, it's definitely not just React doing this. The motivation is generally an attempt to model an application as a function `(state) => DOM`. This is probably similar to the `renderMyModel` function in your example: take state in, and return what the document should look like given this state.

The problem with this is that if you rerender the entire DOM every time the user interacts with the page at all (and therefore changes the state), then you will run into issues. The main one of these is that the DOM itself has state, such as event listeners, or the contents of input fields. We don't want to throw this state away as well; instead we want to synchronise it with the state that _should_ be. I mean, in your example, you wouldn't run the `replaceElement` function every time the user presses a key or moves their mouse, right?

VDOM is basically the solution. It can run every time a mouse is moved, because instead of just replacing everything, it replaces only the things that have changed. Assuming the input hasn't been swapped out for a different element, then it can stay. It might gain or lose a class, or the value attribute might get updated, but the element itself stays where it is.


>rerendering the entire DOM

No, just rerendering the viewport that was affected by the model. I have no need to re-render headers, footers, menus, etc.

>DOM itself has state

No. The viewport has no state. It's just a transformation on the model. And I don't place any event listeners in my viewport DOM.

> every time the user presses a key or moves their mouse

Of course not - and I wouldn't expect an application to be updating the model on key presses.


Calling the VDOM pure overhead is a strong statement when there are patterns that are more difficult to express in Svelte because of how it manages views.

Once a view is created, it can't be processed by JS. It can't be stored in an array or an object. You can't count or individually wrap children. This makes it harder to create flexible APIs [1].

The question is: are we willing to give up the expressivity of React for extra performance?

I am leaning towards "no", because I believe React's performance issues mainly come from its memoization-based reactivity model rather than the VDOM. When applying `useMemo` in the right places I can create perfectly performant apps. However, this requires profiling and is often unintuitive.

[1]: for example https://news.ycombinator.com/item?id=33990947
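As a rough illustration of the memoization being discussed, here's a toy version of what a single `useMemo` slot does (a sketch of the concept only; React actually stores this state on its fiber and compares deps with `Object.is`):

```javascript
// Toy dependency-array memoization in the spirit of one useMemo slot:
// recompute only when some dependency changed (compared with Object.is).
function makeMemo() {
  let last = null; // { deps, value } from the previous "render"
  return function memo(compute, deps) {
    const unchanged =
      last !== null &&
      deps.length === last.deps.length &&
      deps.every((dep, i) => Object.is(dep, last.deps[i]));
    if (unchanged) return last.value; // skip the expensive computation
    last = { deps, value: compute() };
    return last.value;
  };
}
```

The unintuitive part in practice is choosing the deps: miss one and you get stale values, include an unstable object reference and you memoize nothing.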


MobX and chill.

Seriously though, modelling the FE business logic entirely outside the view library and just wiring it up to observables where necessary is extremely refreshing, maintainable, and FAST.
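The shape of that wiring can be sketched with a tiny observable value (this roughly follows Svelte's store contract, where `subscribe` returns an unsubscriber; MobX's real machinery with proxies and automatic dependency tracking is far more involved):

```javascript
// Minimal observable value: business logic calls set(), views subscribe()
// and re-render only the piece of UI bound to this value.
function writable(value) {
  const subscribers = new Set();
  return {
    subscribe(fn) {
      subscribers.add(fn);
      fn(value); // notify new subscribers with the current value right away
      return () => subscribers.delete(fn);
    },
    set(next) {
      value = next;
      subscribers.forEach((fn) => fn(value));
    },
  };
}
```

The model layer stays plain JS that can be unit-tested without any view library at all; the view is just one more subscriber.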


Related:

Virtual DOM is pure overhead (2018) - https://news.ycombinator.com/item?id=27675371 - June 2021 (289 comments)

Virtual DOM is pure overhead (2018) - https://news.ycombinator.com/item?id=19950253 - May 2019 (332 comments)


Rules of thumb is:

- Make it work

- Make it fast

Reality is, most of developers just want to get shit done and go home. The bosses of course never want to pay you for "make it fast".

The point of React is, of course, low-overhead JS classes/functions to decompose a large UI. That gets the job done.


Switching from vuejs to svelte these days, indeed svelte is much easier to write and understand.


I'd argue this is essentially just an optimized (and therefore potentially more buggy) virtual dom.

Svelte is being smart and skipping comparisons in the places it knows the result is static. That's nifty. But it also means you have to depend on svelte getting it right every time, in all scenarios.

Long term - I think this is probably the right approach, but it feels very similar to the -O3 C++ optimization flag: there was a fairly long period where enabling that flag was considered risky. Each extra transformation carries opportunities for bugs.

It also means extra work at code generation time - it's building a vdom engine specific to your template (again - this is nifty!). Probably not a huge deal, since js build tooling is seeing a LOT of focus on speed, but it's there.


Tell us you've never tried Svelte without actually saying you've never tried Svelte.

You would have likely not said this if you had ever looked at Svelte-compiled JS. The amount of mutation is surprisingly small and easy to follow. Especially when coming from a world with JSX.


So what part of this statement do you disagree with?

> "it's building a vdom engine specific to your template"

Because that's... exactly what it's doing. It's doing it at compile time, and so yes - the amount of mutation in the output is small, which is not surprising at all because most templates are fairly static.

We can quibble over exactly what a VDOM is, but I don't really know that tracking only a subset of the DOM the app might change (aka: svelte) vs the entire DOM (aka: react) really matters. In both cases you need to map changes to DOM updates, Svelte is just being smarter about it.


I'm not sure we can quibble over the definition of a vDOM. There is the DOM as found in browsers and then there's a layer on top of that, proxying access to the DOM, which we call a vDOM.

Svelte does not have this construct. All actions are performed on the DOM without an intermediary proxy object.

Tracking 100% matters, because you must iterate through the vDOM to determine the diffs that must be applied to the underlying DOM. On every change to the vDOM. This is not free. It is why the following abstraction leaks exist in React:

    • shouldComponentUpdate
    • React.PureComponent
    • useMemo
    • useCallback
This is making the developer worry about things the computer should be able to suss out for itself. You expressed concern that Svelte's compiler approach is new and therefore subject to bugs. Aside from Svelte being over 6 years old now and having gone through 3 major versions already, my assertion is that any potential bugs present in Svelte's compiler at this point pale in comparison to the number of potential (and actual) bugs or performance gaffes present in the equivalent React app due to excess lines of code, complexity, and abstraction leaks such as those listed above. The cognitive overhead is real and has a measurable effect on developer output. Bugs are proportional to the number of lines of code regardless of the language being used.

I assert that we developers, people, human beings, are far more fallible in this regard, enough to render concerns about the Svelte compiler's accuracy relatively moot. This is analogous to the switchover from Assembly to C. Early C compilers certainly had their issues, but even in the earliest days, the increase in productivity and reliability as a whole far outweighed the benefit of manually generating all instructions at that lower level. Even though today there are still optimizations to be made by analyzing hotspots and replacing generated code with assembly in those few spots, the compiler-first approach still wins out because 90%+ of the time it does as well as or better than the hand-crafted Assembly... err... React code, in a tenth the developer time, simply because compilers are now and have always been better than human beings at managing rote development tasks. Rote as in flagging variables for reactivity, (un)subscription to stores, accessing data properties, handling data binding, marking sections as immutable, etc.


Well now we can quibble! :D

Svelte IS storing a representation of the expected layout of the DOM (at least the limited set it cares about). That's precisely what "reactivity" is. It's aggregating the values we've stated are important to our template (either by hand ($) or automatically (=)) and monitoring changes to those values to trigger a set of updates.

The diffing happens right there. It's also why you end up with some unusual constraints on how you do assignments - if you're not careful the autodetection of "this bit was important and needs to be diffed and trigger code on changes" fails (they talk about this quite clearly, and very early: https://svelte.dev/tutorial/updating-arrays-and-objects)

I also find it bit disingenuous to call out things like "shouldComponentUpdate". That's very, very nearly the same as reactive declarations. You're having to indicate to svelte that "yes - this declaration should update". Again - it's more precise than react allows you to be, but conceptually I find it just a slightly different take on the same function.

It's also why you can easily get into the same sort of cyclical dependencies that you can with react (although I think the messaging from the svelte compiler here is pretty on point, compared to react)

Now - I generally find it a bit easier to suss out why the loop is happening in svelte than react (where it's not always easy or intuitive to understand that a hidden reference value may have been changed and is triggering quite a bit of downstream updates) but it's essentially the same problem space, just inverted:

React makes you mark places you don't want to care about changes (useMemo, useCallback)

Svelte makes you mark places you do want to care about changes ($, =)

You're basically arguing that opt-in is better than opt-out, and I think that you may have a fairly compelling argument, but let's not paper over the fact that both frameworks are tackling the problem in similar ways, and both end up with their own DSLs to indicate intent with regards to changing values, and when to run code.


Note that a virtual DOM is pure overhead if you already have a real DOM to work with.

That's kind of the ignored superpower of a framework like React, which makes the virtual DOM the authority: there might be a DOM, but there also might not be. Whether the virtual DOM reflects to a real DOM, or native UI, or Qt, or an ASCII terminal interface, it doesn't care.

This is also the part that most web devs have the hardest time with: React (and some other frameworks) are not web frameworks. They really are just UI frameworks that happen to (also) work in a browser. Even if they were originally born out of a web need way back when.


Why did I have to scroll this far down for the real answer?

This is the true power of the VDOM, to abstract the view from platforms.

Why be a web developer when you can be a cross-platform app developer?

That's why I don't use Svelte and other web SPA frameworks...


The point is, if you want to make your web app as performant as possible, a cross-platform solution has limitations.


Absolutely, but conversely: the performance hit of the virtual DOM _should_ (and that's doing a lot of heavy lifting) be irrelevant in a well designed UI, because aside from initial load, most parts of the UI don't change for the majority of the lifetime of the app. Only tiny portions of the UI update at a time. The parts that do update can most definitely update fast enough on even moderately modern hardware (both desktop and mobile).

That does, however, require having properly designed your UI, with a knowledge of where the power of your framework of choice is. And that's where a lot of apps fail. Even something as simple yet critical as using vdom keys tends to (for various reasons) never register for many folks, leading to terrible performance.


That's premature optimization. React Native Web has been very performant for all my real world use cases.

I'd rather a small perf hit than have another code base for web. You get web for basically free when making a RN app.

That is for APPS. If you want your WEBPAGE to perform better, don't use a SPA; use Astro or something similar.

Might as well do everything in Assembly or C because it's faster right? Same argument. Development speed matters too.


Really they should just add DOM morphing into the DOM APIs. React has proved there is some value to this approach, whether it is speed or mental model, there are enough positives to just add it to the platform.


The way I remember it, the message was that it was too hard for most devs to be careful enough in their Backbone.View.render code to consistently write fast DOM manipulation that scaled. That may not be what's in the cited talk, but that's my recollection from the various bits of early marketing by Facebook Engineering. So the sell was alleviating that burden.


Js frameworks are also pure overhead. I have never seen any benchmark where they have ever been better than vanillaJs. /s


There's no real benchmark for developer experience. Performance only matters up to a point. And it depends what you're building. If you're building pages where people just read content - sure - server render + vanilla js it. If you're building something more interactive where you think you'll have greater than 50k lines of JS prepare for pain.

Last note - picking the right tool for the job is a quote we repeat often in the industry, realistically though that sort of optimization actually makes it harder to reason about writing code especially in a large org. The great thing about these frameworks is that it's a single way to build and you get server and client rendering. I remember back in the day folks would ask "how do you build a page here" and it would kill me to say "it depends I have 10 questions for you"


Yep, that's my point. The abstractions are a means to an end, and are worth their overhead in many cases. So calling the virtual DOM pure overhead for the sake of performance is as absurd as calling JS frameworks pure overhead.


Love Svelte but, well, raw DOM manipulation isn't fast either. I have a recursive Svelte tree component which, as I collapse a node's contents, destroys all of its children. Which is fine with small trees, but once they get big there's a noticeable lag between collapsing/uncollapsing trees. Once I deploy the site it gets faster, but still, destroying child components is slow.

I think I'll try next just toggling visibility instead and skipping the removal of children. Sure this is a bit of edge-case but there's no definite silver-bullet here. Don't know about Svelte internals if this is something they could do eg hiding the components before destroying them. But well it'll still jank (but not as visibly) as everything is done in the same UI thread.

Virtual list of course would probably be the optimal solution.


For me, the whole DOM is useless overhead. Why don't we just have some nice transactional browser API? Client-side frameworks don't really need to generate HTML that is immediately parsed. It's a total waste.


I was thinking about this recently while teaching a student how to make a page that captured a camera's MJPEG feed.

... it's real hard to get past the simplicity of "Just change the URL property to a new value" that the current DOM API provides. Losing that would suck.


I absolutely love articles like this. Written by people with more critical thinking skills than technology fetishism.

Great to see.


Can we please talk about how much reactive programming sucks for UIs? I miss all the people who switched from angular to react for a reason in these discussions... I'd bet none of those are going to move to svelte.


I switched from Angular to React 8 years ago, and I've no interest in Svelte. If I recall correctly the primary problem with Angular was $watch, race conditions, etc, meanwhile with React your state is your state, purely functional and idempotent, which is a feature of "reactive programming". Reactive programming is fantastic.


Yes, please do talk about how reactive programming sucks. Because I've come to believe it's absolutely mandatory for keeping an advanced UI performant, mostly bug-free, and limited in technical debt. Absolutely mandatory. And yet I agree: it sucks. It sucks donkey balls. Why does this absolutely critical technology suck so much?


Is there any discussion in the webdev community about whether using a typesetting engine from the 80s is even a good idea for modern performant UI apps? Or is it just taken for granted and never questioned?


I don't think that perspective is fair to modern browser engines. The additions of flexbox and grid layouts took them beyond just typesetting, I think (not to mention all of the work on JS engines, providing APIs for a whole bunch more device functionality, etc.)

So it's true that there's still a "typesetting engine from the 80s" in there, but there's also a powerful layer of app functionality built-in as well. It's reasonable to question whether all of this belongs together, but all of this evolution _has_ allowed for whole new classes of applications to be delivered securely and quickly to all sorts of devices in the browser.


As a thought experiment, imagine that we would have started with Lotus 1-2-3/Excel type spreadsheets instead of formatted text. I believe that we would develop tons of layers of abstraction on top of this too, in order to make cells and functions to emulate UI framework. It would probably work as well, and had some weird quirks related to the cell-based nature of this super-powerful engine. Yet, the question remains - would it be a good foundation for modern UI apps?

To me, most of the issues with web apps stem from the fact that foundation is just not built for that task. Like these weird selection issues, where you move cursor 1 pixel more to the bottom and it selects the whole visible area instead of text - that doesn't even make sense. Or jumping page and blocks all over the page while it loads fonts and stylesheets. Sure, we build more hacks on top of existing hacks to mitigate this, but that's just duct taping with complexity, not the engineering.


Right. It's been discussed before, but the web browser is one of the greatest examples of "write once, run anywhere" in terms of the markup that is HTML. If the evolution had happened differently, maybe we'd be all be using Java applets or Flash applications instead. I believe HTML/browser is superior, but I wonder where other platforms would have evolved to.


Flash authoring tools are still unmatched


Oh, it's discussed all the time. But we're extremely stuck with it; jettisoning the DOM touches such a huge range of concerns (from rendering through security) that it's basically a non-starter of a conversation.


In the first example showing the diff of JSX and React.createElement:

> You can do the same thing without JSX...

Well, the end result is that there is no JSX. It's gone. It's syntactic sugar.


Rethinking Reactivity by Rich Harris (2019)

https://youtu.be/AdNJ3fydeao


https://twitter.com/dan_abramov/status/1135424423668920326

Above thread summarizes the issue pretty well I think. Optimizing for DOM updates is nice, but you also want to optimize for bundle size and page load time, and at a certain app size the compiler output is always going to be bigger than just using a virtual DOM.


It is true that Svelte and React bundles will grow at different speeds as the app grows. Redundancy gets compressed into the React library, but it just stays there in Svelte's output.

That said, since it is redundancy, I wonder if Svelte bundles are more gzippable (or at least, could be made so).


As far as I understand, in React, there is no redundancy to compress. You have one algorithm for diffing the virtual DOM and you're done. On a spectrum of biggest bundle size to smallest possible bundle size, the React model (more specifically Preact) would be as far to the right as you can go. Whereas any solution which does more specific compilation on individual operations to minimize DOM update work is going to have special code for each case.


If that's true, then why is a custom compiler needed for your components to work?


Isn’t svelte essentially just a cached vdom or do I have the wrong mental model


If I see html in a javascript file, I boycott the codebase


Svelte is HTML in a .svelte file, React is HTML in a .jsx or .tsx file. I'm going to assume you're boycotting neither or both.


And having a custom JS compiler isn't pure overhead?


Are you calling a C compiler overhead as well? Ahead of time compilation is not runtime overhead, and for users of the web, runtime and network latency are all that matters with regard to the perception of speed.


Mental overhead matters much more than performance overhead to most applications. The whole plot of JS is that it’s easy enough to work with that it overpowers any performance limitations. Svelte invents a completely new execution model for JS which adds overhead.


I honestly don't believe you've actually used Svelte even in an experimental context to make such a comment. Go check out the tutorial on the Svelte site and get back to us.

It's literally >90% just HTML, CSS, and JS. The last <10% is split between stores, if- and each-blocks, and data binding syntax. If you don't know HTML, CSS, and JS as a web developer, I don't know what to tell you. If you do, the notion that Svelte has a substantial mental overhead compared to any other web framework—especially React—is utterly ludicrous.


And that’s similarly how I feel about people who think Svelte is significantly less complex than React. At least React is just Javascript. Svelte tries to override assignment to trigger side effects. That’s crazy.
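
The idea can be sketched in plain JS (a loose illustration, not Svelte's actual compiled output): the compiler rewrites each assignment to a tracked variable so it also notifies subscribers.

```javascript
// Hand-written approximation of what a reactivity compiler emits for a
// component whose source contains a bare `count += 1;` assignment.
function makeComponent() {
  let count = 0;
  const subscribers = [];

  function invalidate() {
    // notify everything that depends on `count`
    for (const fn of subscribers) fn(count);
  }

  // source: count += 1;
  // compiled to, roughly: assignment plus an invalidation call
  function increment() {
    count += 1;
    invalidate();
  }

  return {
    increment,
    onCount: fn => subscribers.push(fn),
  };
}

const c = makeComponent();
const seen = [];
c.onCount(v => seen.push(v));
c.increment();
c.increment();
console.log(seen); // [1, 2]
```

Whether you find that rewriting of plain assignments elegant or crazy is, I suspect, exactly the dividing line in this thread.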


Aren't Hooks also a significant deviation from the JS execution model? That they needed to write whole new documentation to clarify the model and its application seems to speak to some deviation


No, hooks live strictly within the JS execution model. If they didn’t, they’d need a special compiler or some kind of special low level instruction.


Gotcha


Anything beyond HTML and CSS is a mistake.


An article that talks about performance and "fast" and "slow" but does not quote a single benchmark?

Sorry this is nonsense.


(2018)


Honestly I feel like anyone that hasn’t written Svelte by now isn’t an active front end developer.


State of JS disagrees



All abstractions are pure overhead. Let's code everything by hand by flipping bits in memory using a tiny magnetic needle.


This is not correct. Zero cost abstractions are not overhead. Some abstractions are zero cost abstractions. Thus, not all abstractions are pure overhead.

More on zero cost abstractions here: https://stackoverflow.com/a/69178445/315168


The Stroustrup quote in that thread is better than what you linked to.

The concept was popularized by C++ templates. The idea is that the templated code would be just as good as if you hand wrote it without generics. There's no extra function pointers or virtual calls, extra indirection by pointing to some user data, etc.; the tree node or whatever and data struct are declared as a single entity, there's no runtime callbacks, etc.


> Thus, all abstractions are not pure overhead

Sorry for the pedantry, but I believe this would still be wrong as phrased. Maybe "not all abstractions are pure overhead" would work?


I thought the second half would make it clear that I'm being sarcastic but clearly I'm wrong. Pedants gonna pedant I guess.


This is over-simplified: the overhead of an abstraction can be cancelled out by work or optimizations which you wouldn't have done without the abstraction. As a simple example, most Python programs are faster than the C code the same developer would have written in the same time because they have a rich library of optimized code and getting it working quickly means that they had more time to focus on algorithm-level improvements. That of course doesn't mean that a C programmer can't beat it in performance but once you're past very simple examples those abstractions are harder to beat than they might seem — I've seen multiple cases where someone thought they could do that and wasted days (or in one case, months) only to eke out less than a 10% improvement.

In this case, React is slow and memory hungry because when you make a change you go through this process:

1. Update some value

2. … triggering updates to the virtual DOM

3. … requiring it to calculate the difference between the previous virtual DOM and the new one

4. … and finally apply the changes to the real DOM

That abstraction requires substantial extra state to be stored and managed. If you have a different abstraction which directly manages the DOM, you can avoid steps 2 and 3. The big question is a) does your code do enough manipulation for this to be noticeable? (React is less slow than it used to be but I've seen 4 order of magnitude deltas in optimized React code so it's not uncommon to see it chug with large pages, especially on older hardware like a lot of the public uses) and b) does that other abstraction have drawbacks for your developers which cost you more than the performance savings?
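
The steps above can be illustrated with a toy vnode differ (just the shape of steps 2 and 3, nothing like React's actual implementation): the entire tree is re-rendered and re-diffed, even though only one text node ultimately needs a real-DOM write.

```javascript
// Minimal virtual-node builder and differ (illustrative only).
const h = (tag, props, ...children) => ({ tag, props, children });

function diff(oldNode, newNode, path = "root") {
  if (oldNode === newNode) return [];
  if (!oldNode || !newNode || oldNode.tag !== newNode.tag)
    return [{ op: "replace", path }];
  const patches = [];
  if (JSON.stringify(oldNode.props) !== JSON.stringify(newNode.props))
    patches.push({ op: "setProps", path });
  const len = Math.max(oldNode.children.length, newNode.children.length);
  for (let i = 0; i < len; i++) {
    const a = oldNode.children[i], b = newNode.children[i];
    if (typeof a === "string" || typeof b === "string") {
      if (a !== b) patches.push({ op: "setText", path: `${path}.${i}` });
    } else {
      patches.push(...diff(a, b, `${path}.${i}`));
    }
  }
  return patches;
}

// Step 1-2: a value changes, so the whole view is re-rendered virtually.
const view = count =>
  h("div", {}, h("h1", {}, "Counter"), h("p", {}, String(count)));

// Step 3: diff the previous virtual tree against the new one.
console.log(diff(view(1), view(2)));
// → [{ op: "setText", path: "root.1.0" }]
// The <h1> subtree was re-created and re-compared (the overhead),
// but step 4 touches only the one changed text node in the real DOM.
```

The extra state is visible here too: both the old and new trees must be kept alive long enough to compare them.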


Tough crowd today... My comment was a reference to xkcd: https://xkcd.com/378/

Maybe I should have made it more obvious by referencing the butterflies instead.


Heh, yes. This runs into something like Poe’s law because you can find people who sound exactly like that but are serious. I especially remember someone’s reaction when their hand-tuned assembly started under-performing compared to GCC once we recompiled for (IIRC) Pentium 4 chips.


So is the JS runtime. So why don't we just write apps in raw WASM?


Serious answer? SEO and accessibility. HTML lets search engines crawl pages and screen readers read pages (which can often be a legal requirement).

If we're rethinking the web stack I'd advocate for htmx with wasm-based web components for more complicated stuff like if you needed to polyfil in some new image format, or run a terminal emulator, or do webrtc calls with your own fancy custom noise reduction algorithm.

Yes I realize I'm essentially advocating for jquery with java applets, but it could really work this time! (I think a lot of the issues originally were political)

Make htmx like attributes part of the HTML spec, keep working on web components. Still don't know why web components haven't taken off.


> Serious answer? SEO and accessibility.

How is WASM less accessible than Javascript? Are crawlers parsing minified and obfuscated Javascript sources and deriving meaning from them in a way they couldn't from WASM code?


No, as far as I know they render the page as part of the crawling. I'm not a web developer but I don't see why that couldn't be done with WASM.

Don't know about screen readers. I'd be surprised if they weren't using the live DOM, though.


I presumed that you were talking about replacing the HTML with a wasm-rendered app, if you're just talking about replacing javascript with compiled javascript blobs, well we already do that general type of transpilation using tools like babel.


The original comment wasn't mine, but there was no mention of HTML anywhere. Where did that come from?


99% of the time WASM is being used right now it isn't being used to manipulate the DOM, and last time I checked the actual DOM bindings for WASM didn't really work.

There was no mention of HTML and right now no one is using WASM for html, so I assumed GP meant normal WASM type stuff, not theoretical WASM-dom bindings that I haven't seen anyone use yet.


> 99% of the time WASM is being used right now it isn't being used to manipulate the DOM

Almost certainly. I would expect that 50% of WASM being used right now is outside of the browser entirely. Cloudflare Workers, for example. Same goes for Javascript.

Regardless, I expect the real serious answer is that writing raw WASM isn't particularly ergonomic. You could do it if you had to, but it is very much designed to be a compiler target. The OP was no doubt alluding to Javascript providing better developer ergonomics much like virtual DOM solutions have done over 'raw' DOM manipulation.

Reading the article he would have realized that Svelte offers much the same without a virtual DOM, and that the headline refers to that, but when was the last time anyone on HN read the article?


> The OP was no doubt alluding to Javascript providing better developer ergonomics much like virtual DOM solutions have done over 'raw' DOM manipulation.

Were they? Alternative interpretations abound. "Raw WASM" implies that they were sarcastically saying the virtual DOM makes things a whole lot easier, and if all you care about is performance you might as well hand write a bunch of assembly.

Another interpretation would be that the whole DOM is inefficient, so you might as well transpile some kind of native toolkit to a WASM engine.

There are a lot of different interpretations for that pithy sentence. Does raw mean written by hand or does it mean not using DOM bindings?


Short answer: yes.

Crawlers are based on consuming text.

HTML is text. Sites that optimize for SEO also use JavaScript to provide SEO context. The specific standard is called JSON+LD; pretty much any site that you use where SEO matters has JSON+LD, RDF-a, or Microdata embedded in the HTML.

You can see these structures if you use the Schema.org validator: https://validator.schema.org/

Try plugging in a URL like Reddit.com and see for yourself. On e-commerce websites, it's a *must have*. For example, try this Amazon page: https://www.amazon.com/dp/B09V3GZD32.

TL;DR: crawlers are parsing RDF-a and Microdata in the HTML or JSON+LD embedded in `<script/>` tags.

You can learn more about it here: https://developers.google.com/search/docs/appearance/structu...
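
A minimal, hypothetical example of the kind of blob this refers to (field names follow Schema.org's Product/Offer types; the product itself is made up). The crawler simply parses the text content of the `<script type="application/ld+json">` tag as JSON:

```javascript
// What a page embeds inside <script type="application/ld+json">…</script>:
const ldJsonText = JSON.stringify({
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Widget",
  "offers": {
    "@type": "Offer",
    "price": "19.99",
    "priceCurrency": "USD",
  },
});

// What a crawler does with the tag's text content — no JS execution needed:
const parsed = JSON.parse(ldJsonText);
console.log(parsed["@type"]);             // "Product"
console.log(parsed.offers.priceCurrency); // "USD"
```

Note that although it lives in a `<script>` tag, nothing here runs as JavaScript; it's inert structured data.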


Here's an excerpt of some Javascript found on the Amazon link:

    window.ue_ihb = (window.ue_ihb || window.ueinit || 0) + 1;
        if (window.ue_ihb === 1) {

            var ue_csm = window,
                ue_hob = +new Date();
            (function(d) {
                var e = d.ue = d.ue || {},
                    f = Date.now || function() {
                        return +new Date
                    };
                e.d = function(b) {
                    return f() - (b ? 0 : d.ue_t0)
                };
                e.stub = function(b, a) {

Feel free to visit it to find the entire script. It is much too large to post here. What is a crawler learning from that program that would be lost if the equivalent code was bundled as WASM instead? Why couldn't its WASM parser pull out the same information? The JS/WASM runtime in the browser has to produce the same result regardless of which encoding is chosen, so everything will be encoded in there somehow.


> Why couldn't its WASM parser pull out the same information

There's currently no standard. If there's a will, there's a way.

JSON+LD is the standard for JavaScript based metadata.


I don't get it. JSON+LD is not JavaScript. It's not even spelled the same? If you mean that your JavaScript is able to read JSON+LD, so too could your WASM in this hypothetical world we're talking about.


JSON is literally JavaScript Object Notation, my friend.


Which, humorously, isn't compatible with Javascript object notation. { foo: "bar" } is a valid Javascript object, but not valid JSON.

Regardless, I don't get what you are trying to say. Pretty much every language still in existence is able to work with JSON (even SQL!). JSON is not Javascript. It's not clear why moving code from the Javascript runtime to the WASM runtime would magically make JSON+LD inoperable or whatever it is you are trying to say.
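
Concretely (the quoted literal is a small illustration of the incompatibility):

```javascript
// Valid JavaScript object literal — unquoted key is fine:
const obj = { foo: "bar" };
console.log(JSON.stringify(obj)); // {"foo":"bar"} — serializing quotes the key

// The same text is NOT valid JSON — JSON.parse rejects unquoted keys:
let parsedOk = true;
try {
  JSON.parse('{ foo: "bar" }');
} catch {
  parsedOk = false;
}
console.log(parsedOk); // false
```

So "JavaScript Object Notation" is a historical name for a language-independent grammar, not a dependency on the JS runtime.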


WASM doesn't access anything without javascript. (in a browser)


The translation layer of calling a web assembly function from javascript is a bit heavy, at the moment. And you have concerns around modeling a flat memory space with pointers, so it can be greedy on memory use. That said, I wouldn't be surprised if this is where the industry moves 5-10 years


I guess the sarcasm was too thick.

My point was that yes, VDOM has overhead. But we accept it as a tradeoff for app development in the name of DevX.


I suspect that in 15 years that or something similar will be a compile target for typescript or similar.


React-DOM will probably just compile to WASM.


Do you have benchmarks showing that it is worthwhile?

Anyway, another idea is to ditch the entire DOM and render on the canvas.


That would mean we'd need to re-invent all form controls and other GUI elements. And it's super hard. Desktop environments have been trying to do it for decades, and Electron just came and ate their lunch.


Poorly written react code isn't performant and removing the Virtual DOM will not fix your problem. It's a hill I'm willing to die on. Many engineers seem to struggle with unnecessary re-renders, to the point where I see long tasks in the performance tab. Clicking a button shouldn't lock the UI thread for 2 seconds.


I don't think that many junior/early mid React developers know that the whole tree gets re-rendered.

Those that do, I don't think they fully understand when and where to `useMemo` and `useCallback` to optimize. It tends to get overused, or used in a way that doesn't actually memoize the parts of the component that don't change.

Then adding in state management only makes it more complicated in some cases depending on the state paradigm.

It's a mystery that React is as prevalent as it is given how hard it is to actually do well. I think Solid.js and Vue have a much cleaner paradigm as far as re-renders goes (with React being explicit opt-out and Solid and Vue being opt-in).
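
The idea behind `React.memo`/`useMemo` can be shown framework-free (a sketch, not React's code): skip re-running a render function when its props are shallow-equal to last time.

```javascript
// Memoize a render function on shallow prop equality (illustrative).
function memoize(renderFn) {
  let lastProps, lastResult, calls = 0;
  const shallowEqual = (a, b) =>
    Object.keys(a).length === Object.keys(b).length &&
    Object.keys(a).every(k => a[k] === b[k]);
  const memoized = props => {
    if (lastProps && shallowEqual(props, lastProps)) return lastResult; // skip
    calls++;
    lastProps = props;
    lastResult = renderFn(props);
    return lastResult;
  };
  memoized.renderCount = () => calls;
  return memoized;
}

const Row = memoize(({ label }) => `<li>${label}</li>`);
Row({ label: "a" });
Row({ label: "a" }); // shallow-equal props: cached result, no re-render
Row({ label: "b" }); // changed props: re-rendered
console.log(Row.renderCount()); // 2
```

The classic misuse described above falls out of the `===` check: passing a freshly-created object or inline function as a prop on every render means `a[k] === b[k]` never holds, so the memoization silently does nothing.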


You're right. There is some cognitive bias; when I was a junior, I didn't lean on useMemo in the way I see today's juniors using it. The mis-applications reveal that some engineers don't totally get the paradigm, and that is a weakness. Most seem to get, at a high level, that "fn(data) = view", but then get caught up in the weeds of unnecessary memo'ing to paper over performance issues.


> removing the Virtual DOM will not fix your problem

Having seat belts will not fix car crash deaths.

But it does make them less likely to occur, doesn't it?


I don't think that if you take someone who isn't too skilled in React and give them a different framework, you'll end up with much of a different result. I don't write much Vue, so I don't know what performance issues look like there, but I do know it isn't discussed much since Vue has a much smaller market share. If going VDOM-less really is a silver bullet, I'd be curious to hear more.


I like the looks of Svelte, but this argument is a bit strong. The supposed benefit of virtual DOM being:

X application-level changes -> differ detects only Y < X necessary real DOM changes -> Y final DOM operations

is faster than

X application-level changes -> no virtual DOM diffing -> X DOM operations

this depends a lot on how fast the diffing is and how fast the DOM is, but unless DOM operations are instant now (and with CSS, layout reflow, etc. I'm not sure how they could be) there must remain some situations where the VDOM has a perf advantage.


The browser renders the DOM, so everything pays the cost of updating it. In modern browsers that's really, really, really fast compared to IE6 (the baseline when React was designed), so there are basically two things which make one tool slower than another:

1. Are you updating nodes unnecessarily, especially in ways which force the browser to do more work (see next point)? In general, a tool which does only does the necessary updates is going to win.

2. Are you forcing the browser to do work only to throw it away? The common cause of this in the past was sloppy event handling code where there was a mix of operations updating the DOM interleaved with calls which forced the browser to recalculate the layout (e.g. change the size of an element by changing its contents or formatting, call something which forces the browser to calculate its size, then repeating that cycle so the browser had to repeat the layout calculations it had just made – I remember things like layout code which is now obsolete thanks to CSS flexbox/grids having pathological states where that could happen dozens of times in response to a single update).

That leaves plenty of room for differences from either of the scenarios you listed: for example, a library which doesn't use a virtual DOM at all can avoid all of the overhead related to managing one and diffing it but it has to keep track of its DOM elements to avoid needing to update all of them on any changes. This can be much faster and easy to write for simpler apps but has coordination challenges if your app gets large and especially if it has multiple teams working in the same codebase. The promise of React is that while it's never the fastest it'll be a reasonable balance for not being too slow while scaling up to larger teams.
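
Point 2 can be simulated with a toy model of lazy layout (a crude approximation of browser behavior: a write dirties layout, and the next read forces a recalculation). Interleaving writes with layout reads forces one layout pass per iteration; batching all writes before reading costs a single pass.

```javascript
// Fake "DOM" that counts forced layout recalculations (toy model only).
function makeFakeDom() {
  let dirty = false;
  let layouts = 0;
  return {
    write() { dirty = true; },                     // e.g. set style/innerHTML
    read()  { if (dirty) { layouts++; dirty = false; } }, // e.g. offsetHeight
    get layoutCount() { return layouts; },
  };
}

// Thrashing: write, read, write, read, …
const thrash = makeFakeDom();
for (let i = 0; i < 5; i++) { thrash.write(); thrash.read(); }
console.log(thrash.layoutCount); // 5 forced layout passes

// Batched: all writes first, then a single read.
const batched = makeFakeDom();
for (let i = 0; i < 5; i++) batched.write();
batched.read();
console.log(batched.layoutCount); // 1 layout pass
```

In a real browser the per-pass cost scales with the page, which is how the pathological dozens-of-layouts-per-update cases described above became visible to users.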


3. Does the solution require more thought and library expertise to get the same task done?

Compilers were a huge boon over hand-built machine code and assembly. In specific hot spots, someone can eke out better performance sometimes, but compilers emit pretty good performance all the time with much lower effort from the programmer. Early compilers were just okay. Modern compilers can regularly kick 99% of human skills to the curb with aggressive pipelining, speculation, and vector operations.

React is the assembly language in this analogy.

    let count = 1;
is demonstrably better than

    let [ count, setCount ] = useState(0);
Not having to keep useMemo() in mind all the time is demonstrably better when performance can be maintained without having to worry about it.

Less code = fewer bugs

Less code with equal or better performance is golden.


None of the good frameworks really do any more DOM operations than any other. It's all about how you find out which set of DOM operations need to be done. Svelte does this with a compiler, Lit does it with tagged template literals, and a bunch do it with vdom.

Calculating the vdom diff is pure overhead in that if you have better syntax (or a compiler) you can just skip it.
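
The tagged-template trick mentioned above can be sketched (a rough illustration, not Lit's implementation): the JS engine hands the tag function the *same* frozen strings array for every invocation of a given call site, so the static parts of the template never need to be compared again; only the interpolated values can change.

```javascript
// Trivial tag function: just capture what the engine passes in.
function html(strings, ...values) {
  return { strings, values };
}

const view = count => html`<p>Count: ${count}</p>`;

const a = view(1);
const b = view(2);

console.log(a.strings === b.strings); // true: same cached array per call site
console.log(a.values[0], b.values[0]); // 1 2 — only the dynamic slots differ
```

So a renderer built on this knows statically which slots can change, with no tree diff needed to find them.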


They don't, because since React set the bar, any new framework that didn't solve the problem the VDOM solved in some way was dead on arrival.

If Svelte's way is better at minimizing and batching DOM updates, they should probably argue and show that, not misrepresent what the VDOM does (while blaming strawmanning on others, no less).


> any new framework that didn't solve the problem VDOM did in some way, was dead on arrival.

Most frameworks before and after React were solving that problem one way or another.

The reason React won was that V = f(S) turned out to be the most user-friendly solution. The VDOM is exactly what Rich is saying: a means to efficiently implement V = f(S).

Vue/MobX are another, better iteration on the same idea. Svelte, arguably, is even better iteration.



