I think this article - and many of the comments on this thread - are forgetting the context of how DOM manipulation was typically done when the virtual DOM approach was introduced.
Here's the gist of how folks would often update an element. You'd subscribe to events on the root element of your component, and if your component was of any complexity at all, the first thing you'd probably do is ask jQuery to go find any child elements that need updating - inspecting the DOM in various ways to determine the component's current state.
If your component needed to affect components higher up, or siblings of the current instance, then your application was often doing a search of the DOM to find the nodes. And yes, if you architect things well you could avoid a lot of this - but let's face it, front end developers weren't typically renowned for their application architecture skills.
In short - the DOM was often used to store state. And this just isn't a very efficient approach.
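To make the pattern concrete, here is a hedged sketch of the style being described ($, the selectors and the class names are invented stand-ins for jQuery-era code): the handler first interrogates the DOM to find out what state it is in, then writes the new state back into the DOM as classes.

```javascript
// The DOM is the source of truth: read state out of it, write state back in.
function onToggleClick($panel) {
  var isOpen = $panel.hasClass('open');   // "What state am I in?" - ask the DOM
  if (isOpen) {
    $panel.removeClass('open');           // new state "stored" as a class
  } else {
    $panel.addClass('open');
  }
  return !isOpen;
}
```

Every handler in the app that cared about this panel would repeat the same DOM interrogation, which is exactly the inefficiency being pointed at.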
This is what I understood the claim that the VDOM is faster than the real DOM to mean - and the article pretty much elides this detail.
As far as I'm aware, React and its VDOM approach deserve the credit for changing the culture of how we thought about state management on the frontend. That newer frameworks have been able to build upon this core insight - in ways that are even more efficient than the VDOM approach - is great, but they should pay homage to the original insight and change in perspective that React made possible.
I feel this article and many of the comments here so far fail to do that - and worse, seem to be trying to present React's claim that the VDOM is faster than the DOM as some kind of toddler mistake.
Every once in a while I'm reminded that I'm mostly disconnected from the way "most" people build things. Thanks for this insight. It finally explains why I hear people talking down about "jQuery developers", if that was something that people actually did.
But wow. I've been building javascript-heavy web stuff since the mid 90's and it had never occurred to me to do that. You have your object model, and each thing has a reference back to its DOM node and some methods to update itself if necessary. All jQuery did was make it less typing to initially grab the DOM node (or create it), and give you some shorthand for setting classes on them.
It also explains why people liked React, which has always seemed completely overcomplicated to me, but which probably simplified things a lot if you didn't ever have a proper place to keep your data model.
I can't imagine I was the only one who had things figured out back then, though. The idea you're talking about sounds pretty terrible.
Bear in mind that most people using jQuery weren't writing JavaScript applications. They were writing backend-driven applications with jQuery enhancements, so there was no real concept of frontend 'state' that was separate from the DOM itself. If your frontend code needed to work with 'state' like form values or element attributes you had to read them, and because there could be multiple separate bits of code working with the same form or element you had to write values back to the DOM so the next bit of code had the correct 'state'.
The thing that changed to make frontend development improve dramatically was hash based routing with ajax, and later the introduction of the history API. That gave frontend code a need to retain state between 'pages', so there was a need to find a better way to store it than DOM attributes.
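A minimal sketch of the hash-based routing being described (the function names are invented): because only the URL fragment changes, the page never reloads, so JS state survives "navigation" between views.

```javascript
// "#users/42" -> "users/42"; empty hash falls back to a default route.
function parseRoute(hash) {
  return hash.replace(/^#/, '') || 'home';
}

// Returns a handler you would wire to the window's 'hashchange' event;
// the app's in-memory state is untouched, only the rendered view changes.
function makeRouter(render) {
  return function onHashChange(newHash) {
    render(parseRoute(newHash));
  };
}
```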
I get what you're saying, but anecdotally I can say I've never worked on a codebase like that. The pattern I came across most frequently would use JS values to store state, scoped as tightly to the relevant event handler as possible, e.g. a counter kept in a closure rather than read back out of a DOM attribute.
Interaction with the DOM was treated as I/O, akin to file or stdio access: for reading, get it into a sensible internal variable as soon as possible; for writing, dump it out as the final step. Using the DOM to hold state seems, to me, akin to holding state in an external file (reading and writing as needed), rather than a variable.
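A sketch of that pattern (names invented): the state is a plain JS value in a closure, and the DOM is only touched at the end, as an I/O step.

```javascript
// State lives in a closure variable, not in a DOM attribute.
function makeCounter(render) {
  var count = 0;              // the single source of truth
  return function onClick() {
    count += 1;               // update the JS value first...
    render(count);            // ...then dump it to the DOM as the final step
  };
}
```

Here `render` would be whatever writes the number into the page; nothing ever reads state back out of the markup.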
> The thing that changed to make frontend development improve dramatically was hash based routing with ajax...
I think that what's changed is simply that people realized that it's way less messy to use the backend only as a data source (with ajax calls), and leave everything else to the frontend. The cognitive overhead of having the server producing html with some implicit state, then updating that state interactively, and then losing everything again by posting the whole page to the server, was simply unbearable.
When I started building web applications in 2004 I had some experience in writing desktop apps: I simply created a js library to create and destroy UI elements, and wrote "desktop" apps running in the browser.
While I agree in theory, in practice I find that the frontend still has all sorts of warts that don't quite make it a great solution (yet).
I mean, it's better than having to maintain both server- and client-side logic and state (and having to sync all that), and definitely better than the days where we also had to manage DOM diffing manually.
But I still get headaches from the NPM/node ecosystem, the build steps, having to decide whether logic goes on the client, server, or both, and to some extent javascript itself. And you can never fully let go of the server-side of things.
I'm very intrigued by the alternative idea of moving (almost) everything to the backend, and maintaining the dynamic bits by sending all events to the server and sending back diffs based on that which update the page. Elixir/Phoenix's LiveView made some great progress in this area, and from what I hear other ecosystems are experimenting with the same thing.
It's not a panacea: if you need offline functionality you're gonna have to deal with js and everything that comes with it. But in many projects that's not a deal-breaker, and in practice the approach actually leads to smaller payloads sent between client and server, and significantly simpler codebases.
I don't know JSF so I can't really argue this point, but I'd really love to hear what you think about the argument Jose put forward and where you might disagree!
I've recently started working on a project where they store data / state in the ids of DOM elements. Sometimes it's even something like dash-separated strings that then need to be parsed.
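For illustration, a sketch of the kind of thing being described (the id format here is invented): state packed into an element id as a dash-separated string that every handler then has to parse back apart.

```javascript
// e.g. an element with id="order-42-pending" is really a record in disguise.
function parseStateFromId(id) {
  var parts = id.split('-');
  return { kind: parts[0], key: parts[1], status: parts[2] };
}
```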
I come from having done pretty much no Web development and this seems like a hateful way to do development.
It is, and it's basically the selling point of modern front end practices like APIs, state managed only in JS, SPAs, etc. Maybe this tech is overused in some cases, but it's vastly better than the "JavaScript Sauce" type development you describe.
And this is why I've been developing all my modern web applications as essentially an S3 bucket of flat HTML with vanilla javascript and jquery sprinkled in sitting behind cloudfront, connected to a fast API built of cloud functions / lambdas written in crystal/rust/etc. I use a custom routing system (I have S3 set up to respond with a 200 at the index in the event of a 404, so I have unlimited control over pathing from within my js logic) and I never let node touch anything at all. And I'm super happy about it. Never has it been easier to get things done. I don't have to fight with any system because there is no system to get in my way.
This gives me:
1. 2-4 second deploys
2. full control over assets pipeline (I call html-minifier, etc., manually in my build script)
3. literally serverless -- S3 or lambda would have to go down for there to be an issue (ignoring db)
4. caching at the edge for everything because of cloudfront
5. zero headaches because I don't have to do battle with node or react or anyone's stupid packages
6. (surprisingly) compatibility with googlebot! It turns out that the googlebot will index js-created content if it is immediate (loaded from a js file that is generated by a lambda and included in the document head tag as an external script, for example)
7. full control over routing, so I don't have to follow some opinionated system's rules and can actually implement the requirements the project manager asks me to implement without making technical excuses.
This does not give me:
1. A magical database that has perfect automatic horizontal scaling. Right now there is no magic bullet for that yet. Some come close but eschew the transactional part of ACID, making themselves basically useless for many applications.
And the parent post exactly matches my usage of jQuery :D
Pricing aside (it's almost unreasonably expensive if your app requires frequent db writes), firestore is indeed "A magical database that has perfect automatic horizontal scaling". But as you have your happy setup on aws it probably makes little sense to switch.
Yeah, there are a few in that category - also Google Cloud Spanner and the stuff by CitusData. All of them work, but are prohibitively expensive to get started with. I've harangued them a number of times about how people want to be able to scale from $1/month to $10,000/month without migrating any data (that's the whole point of an auto-scaling horizontal service imo), but so far no changes from them. Why would you design potentially infinite auto-scaling, and then lock it up behind a $90/month minimum fee? They could have been making money all along on those $20/month or $40/month or $5/month users, who vastly outnumber those who _need_ autoscaling but want the peace of mind that auto-scaling provides.
Hmm I wonder if spanner has more minimum hardware costs or something. Like if they have to provision you at least one standalone atomic clock to get started.
But like that's sort of my point. They could have data default to going into a tiny VPS slice that would be free-tier territory, and then automatically move it to whatever infrastructure spanner requires when the time comes. That could all be seamless. Why keep the seams in?
$90/month is really not that much, it's like an m5.large on AWS which is pretty much the minimum for a medium/high traffic website. Well, I guess the scene might have changed since they introduced t2 unlimited, but that would only take it down to $40/month. Support costs are still there for a $5/month user.
About your cloud functions / lambdas: do they return HTML content or "pure" data (as JSON for example)?
If your cloud functions return "pure" data, then the client-side JS is doing the rendering: do you manually create the DOM nodes or do you use some templating engine?
If you’ve ever written a video game, I think it’s quite obvious as well. Video games have a main loop, take input, compute the next state, and then merely render the state to the screen. There is no point in manipulating the "UI". The data flow is very clear.
Of course, your game can run entirely "in memory" without a render function, which is basically what the game server does.
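The loop being described can be sketched in a few lines (all names here are illustrative): gather input, compute the next state, then render - and the render step is the only place that touches the "UI".

```javascript
// Pure state transition: no UI manipulation here at all.
function step(state, input) {
  return { x: state.x + input.dx, y: state.y + input.dy };
}

// One iteration of the main loop: input -> next state -> render.
function gameLoop(state, readInput, render) {
  state = step(state, readInput());
  render(state);            // state -> pixels, strictly one way
  return state;             // the next frame starts from here
}
```

A headless game server is just this loop with `render` replaced by a no-op.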
So I guess many people figured it out but didn't transfer the knowledge from one area to the other.
Thanks for the insight, because I also always wondered what’s considered so new about React, although I have to admit that I still wrote my fair share of jQuery Spaghetti code.
Before anybody runs away and thinks there's a fundamental insight here, I have to say that all that's really changing is who owns the retained model: application or UI library.
I mean retained by contrast with immediate, as in the early DirectX jargon.
By retained model, I mean the source of truth as to the current state of the UI. Games normally use an immediate mode API, but they still render the UI from a model; it's just that they own the model, whereas with UI toolkits, you generally manipulate a model the UI toolkit maintains and the UI toolkit renders from that model.
Retained mode UIs are much easier to start with. You can get something that looks good up and running very quickly, because you don't need to think through the best representation for your UI - skilled designers and engineers have already done the work, and developed composition and drawing routines so that it all hangs together.
The trouble starts in more sophisticated applications, where the internal model has grown more complex, and UI code is increasingly an exercise in duplicating changes from one model to the other, via reactive techniques like data binding, or more imperatively from events. Having components denormalize state is a recipe for desynchronization bugs.
The problem doesn't totally go away if you move wholesale to immediate mode where the app owns the model, though. There are UI concerns that don't really belong in most application models; things like focus, selection, non-local UI idioms like radio buttons, caret position in text, etc. Doing these things right involves a lot of subtle design. Most app developers are better off handing these concerns to experts who focus on them.
How are these other state concerns you mention (e.g., focus) handled in immediate mode GUIs? Do the components retain those states, or do you keep a separate model for those states that needs to interact with the application model? If so, how is that interaction wired up?
I would argue that the state of the global model includes some information about the state of a widget (e.g. selected="true") and some state is unique to the widget itself (hover).
The trick here is how you reconcile those two states when both have a copy of the data. For instance, the <input type="text" /> holds its own state and you hold the value in the global state. When a user types a character into the input, the input holds a copy of the letter in its "value", and the global state is updated with the same value, which then triggers the input to compare its state with the global state for equality before attempting to render the value from the global state. Tricky stuff.
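A toy sketch of that reconciliation (this is an illustration, not actual React internals): the widget holds a local copy, the app holds the source of truth, and the widget only touches the DOM when the two disagree.

```javascript
// widget.value is the input's local copy; globalState.value is the app's
// source of truth. renderCount stands in for an actual DOM write.
function syncInput(widget, globalState) {
  if (widget.value !== globalState.value) {
    widget.value = globalState.value;   // the global state wins
    widget.renderCount += 1;            // only re-render on real change
  }
  return widget;
}
```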
If I understand correctly, the distinction between immediate mode GUI and retained is that in the former, there aren’t stateful widgets so there is no copy to reconcile.
That's correct. State is handled by the user, which is great for some things (dynamic quantities of elements - no caching layers, just feed it a for loop) and awful for others (maintaining pre-committed state for e.g. a configuration panel with checkboxes, text fields, etc.).
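The "just feed it a for loop" style can be sketched like this (`ui.button` is an invented stand-in for an imgui-style call that both draws a button and reports whether it was clicked this frame):

```javascript
// Immediate mode: no widget objects are retained; the UI is re-declared
// every frame from app-owned data, and all state lives in the app model.
function drawFrame(ui, items) {
  for (var i = 0; i < items.length; i++) {   // dynamic quantities: just loop
    if (ui.button(items[i].label)) {
      items[i].clicks += 1;                  // mutate the app's own model
    }
  }
}
```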
Ultimately the ground truth in both instances is that you have potentially many data models that the UI has to aggregate, and the source model, layout, display properties and hitboxes of elements are usually but not totally related and can change due to many kinds of events. Any formal structure you might come up with is bound to run into exceptions. As a result I tend to have this policy:
* I don't trust the framework
* But I leverage the framework to produce early results, and both immediate and retained modes offer ways of doing that
* I expect long-term maintenance to involve a customized framework design regardless
> Video games have a main loop, take input, compute the next state, and then merely render the state to the screen. There is no point in manipulating the "UI". The data flow is very clear.
I always thought this is basically the MVC pattern. And it's obviously the only sane way to do things.
Edit: I don't mind the downvotes, but am I wrong? The MVC pattern simply seems to mandate, at its core, the separation between the model, its graphical representation, and the input from the user. Input -> model -> view. So a game loop where the user input is gathered, the model is updated by calculating the next state, and the view is displayed seems to me an instance of MVC.
Didn't downvote, but the original mvc was a bit subtler than that - originally dubbed user-model-view-controller - the idea was that the user has a mental model of the domain, and the computer has a concrete data model, and the user can act on a view via controllers to translate changes in their mental model to the underlying data model.
(One example of this, while not very graphical, might be a withdrawal from a bank account: the program stores a transaction - the user sees a "change" in the amount available in the account. In the user's mind there's 300 dollars in the account before withdrawing 100 dollars, and 200 after - but this view is a beneficial lie: the "account" is merely a sum of transactions.)
In short, mvc wasn't really about a tight mapping between widgets and the internal state of the data model - it was about translating between the user's idea of the domain model and the implementation.
But certainly implementing mvc in an object/message oriented language like smalltalk leads to a data flow of input event > controller translation > model update > view update.
Actually that's one way to do things, the two ways being retained mode and immediate mode.
And each has their own use cases and challenges.
Traditionally, however, most well known GUIs (most of the apps people use on the desktop) were done with retained mode libraries (GTK, Qt, Cocoa, and so on).
>So a game loop where the user input is gathered, the model is updated by calculating the next state, and the view is displayed seems to me an instance of MVC.
There's nothing preventing the view being a retained mode UI widget tree -- which is how MVC was commonly implemented iirc.
To me the core problem wasn't the data model; it's that the declarative approach to binding events and reactive updates that knockout, vue, react, etc. provide significantly reduces the amount of repetitive, boilerplate code you need to write. All that searching for nodes, adding handlers and callbacks, then updating the DOM when the data model changes is now handled by frameworks. As you've said, this is not really a jQuery vs React issue at all; jQuery just provides some nice cross-browser shortcuts. The main problem IMHO in the jQuery era was in keeping the binding/callback logic separated from the html templates that define the dom structure itself. In big apps it made it difficult to follow which code binds to what part of the UI, and there was no way to prevent someone from binding to the same node you're working on from some completely different part of the app. Now with the declarative approach it's all in the same place and it's immediately clear what handlers you have in place on any html element, making refactoring much less stressful.
When I got into front-end coming from Design, jQuery just got huge. Due to lack of senior front-end-devs in the company, my JS was this exact pile of jQuery with state in data-attributes. The things weren't really complex (Modals, Tabs, Form-Validation etc.), so it was never a problem.
Nowadays I'd still do simpler components this way. For anything heavier I'll grab React from the beginning, because of its enforcement of modularity and state-management.
I'm interested in the way of architecting things you described, since I used to try similar ways but always ended up with state-in-dom. Do you have examples or literature on this?
I started out much the same way, and took that approach for quite a while.
These days I avoid putting state in my markup even for small stuff. I remember some good articles on the topic, but sadly can't find them.
But at its most basic, I just make sure that any jQuery/dynamic component/widget, from the start, is essentially a render function that takes a bunch of data and updates the markup with this data (triggered via some event handler or setInterval(), or something like that). In many cases this doesn't have to be complex or efficient. Sometimes just assigning .innerHTML or whatnot is fine. The crucial bit is that I always start with a js data structure and treat it as the single source of truth.
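A minimal version of that render-function approach (the names are illustrative, and `el` is any DOM node with an `innerHTML` property): the JS array is the single source of truth, and the widget just rebuilds its markup from it.

```javascript
// Naive but fine for small widgets: regenerate the markup from the data.
function renderTodos(el, items) {
  el.innerHTML = '<ul>' +
    items.map(function (t) { return '<li>' + t + '</li>'; }).join('') +
    '</ul>';
}
```

To change what's on screen you change the array and call `renderTodos` again; nothing ever reads state back out of the markup.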
In most cases I just use Preact from the beginning, but I've worked on projects where that wasn't possible and I had to use jQuery or plain javascript. Whatever I used, I usually ended up regretting state in my markup.
I mean, I use to store data in my model and have the controller doing the rendering like anyone else that has some kind of experience on how ugly the opposite would become, but to outright demonize state within html seems quite misguided as well
there are things that are massively annoying to do without sneaking a data-uuid and data-type in here and there, like drag & drop or specific customer requests. If anyone has ever written a cms, imagine the customer coming and telling you "I want this product image in black and white": if you had data attributes around, it's a two-line css change; if you didn't, because you're trapped in an either-or mindset, you also have to add the image-bw class in the rendering code with "customer ==" and "product ==" checks
That's just a symptom of Javascript being the entry-level language. You can be sort of productive without ever understanding how anything works.
Where these frameworks really come into their own is when you want to create reusable components and share them outside a team. jQuery did a good job with their plugins back in the day but Angular 2 (and React, and soon native Web Components) do that far better.
I've heard the tale of "soon native Web Components" for years and there's still none in sight.
As far as I remember there were some attempts from Chrome and Mozilla, but I'm not sure I ever saw a real cross-platform spec shipped, so I stopped tracking the news around it.
v1 shipped with Chrome 53, Safari 10 and Firefox 63. (And there's a polyfill.)
It's not "soon", it's very much "been in production for a while".
A base class like LitElement https://lit-element.polymer-project.org is all you need to achieve a sort of "react-like" development style (components with unidirectional data flow) WITHOUT build tools and WITHOUT DOM diffing! :)
A few years ago I did a 'react + redux in jquery' on a smaller project. The app's state was in a single object (= super easy debug, undo or save), any user action generated a state delta (which properties of the state need changing), and a central dispatcher updated the UI according to the state changes. Quick and painless.
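A sketch of that setup (all names invented): one state object, actions produce deltas, and a central dispatcher applies them and tells the UI which keys changed.

```javascript
// Single state object = trivially easy to debug, snapshot, or undo.
var state = { count: 0, user: null };
var listeners = [];   // each UI updater gets (changedKeys, state)

// Apply a delta to the state and notify every subscribed UI updater.
function dispatch(delta) {
  var changed = Object.keys(delta);
  changed.forEach(function (k) { state[k] = delta[k]; });
  listeners.forEach(function (fn) { fn(changed, state); });
}
```

A UI piece subscribes by pushing a function onto `listeners` and checks `changedKeys` to decide whether it needs to re-render.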
> Thanks for this insight. It finally explains why I hear people talking down about "jQuery developers", if that was something that people actually did.
Yes. It seems that most of the frameworks people are so enthusiastic about are invented just to prevent bad developers from writing their awful code. Which they end up doing anyway.
Anecdote: years ago, when he decided to move to Angular.js, my boss justified it with "following best practices". Then he proceeded to rewrite all the application components as controllers querying the context to decide how to behave, and talking to everything else through global events. Took me weeks to rewrite everything in a sane way. (Years later it was decided everything was too slow, and it was rewritten in good vanilla js, with a huge performance boost.)
Complexity also kept me away from React. The main reason it got popular is that it's by Facebook and they know how to manipulate people into using certain products over others (that's their entire business). There have always been simpler and cleaner alternatives.
If React was not by Facebook, it would not have gotten popular at all except as a 'cool hack/experiment' - Nobody would have seriously tried to incorporate it into a production application.
As soon as you scratch the surface, you understand that it's one of those tools that tries to take away complexity by adding complexity on top.
The story has gotten more complicated though, because now React has so many components and there is all this convenient tooling and boilerplate around it - but I would still never choose it over VueJS.
Anything that comes out of Facebook is just pure manipulation.
Do you really believe all of us React users are patsies and you're conveniently one of the smart guys who hasn't fallen for it?
Surely a more reasonable perspective is that React actually solved a real problem, and the fact that there's a big company backing it is actually a benefit because 1) there's a lot of money and experience poured into the project, 2) it's likely to be supported even if the lead dev gets hit by a bus, 3) it's an easier sell to management over <insert alternative by random dev>?
I'm no fan of Facebook the company and product, and I avoid client-side code as much as possible (yay Phoenix LiveView), but I think React is quite neat and made my client-side work easier. I'd also argue that it paved the way for an important shift in how front-end frameworks do their thing, and we're all better off for it.
Sorry but what complexity are you talking about? React (particularly in the early days) has always had a comparatively small API surface.
You have components with a render method, and in that render method you return other components which you can pass data to via "props" - that's basically react in a nutshell.
I've noticed React is often conflated with the wider ecosystem it is a part of (Webpack, Redux, Babel) - perhaps this is the complexity you are referring to, but to be clear React can be used without any of these things.
And sure React being developed by Facebook couldn't have hurt in terms of it gaining popularity - but the real reason it took off is that in contrast to what you said it removed a huge amount of complexity by alleviating the burden of developers having to manipulate the DOM directly in ad hoc ways. Instead of having to manually add classes, add elements, remove elements, append children, you could just say given this piece of state, give me this.
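The "given this piece of state, give me this" idea can be illustrated with a toy (this is not real React, just the shape of it): the component is a pure function from props to a description of the UI, and the library does the DOM work.

```javascript
// A "component": props in, plain-object description of the UI out.
function Greeting(props) {
  return { tag: 'h1', children: ['Hello, ' + props.name] };
}

// A stand-in for the library's render step, here rendering to a string
// instead of to the DOM.
function renderToString(node) {
  if (typeof node === 'string') return node;
  return '<' + node.tag + '>' +
    node.children.map(renderToString).join('') +
    '</' + node.tag + '>';
}
```

No manual class-adding or child-appending anywhere: you describe the result, not the mutations.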
> Anything that comes out of Facebook is just pure manipulation.
This just shows that your issue lies with facebook the company, which is biasing you against react - the technology.
Sure you could use react without JSX compilation, State libraries etc. But you'd just end up writing more code to do less. If your app isn't complex enough to merit the use of webpack, redux, babel etc etc. it probably isn't complex enough to even worry about DOM performance.
>> I've noticed React is often conflated with the wider ecosystem it is a part of (Webpack, Redux, Babel) - perhaps this is the complexity you are referring to, but to be clear React can be used without any of these things.
I disagree with this. Everyone claims that you can use React without JSX but no one does it. Aside from the ugliness, the main reason no one does this is that you would be missing the most useful aspect of React, which is compatibility and consistency with the rest of the React ecosystem. So you cannot separate React from its ecosystem. All this complexity has become tightly intertwined.
If you use VueJS without a bundler (which is actually feasible), you will be surprised how much simpler and elegant the whole development experience is. Once HTTP servers start supporting static push of scripts, we will not even need bundling in production.
>> the main reason why no one does this is because you would be missing the most useful aspect of React which is compatibility and consistency with the rest of the React ecosystem. So you cannot separate React from its ecosystem. All this complexity has become tightly intertwined.
This is just plain wrong. Firstly, using JSX or not has zero impact on compatibility with other libraries in the ecosystem, because they are not coded against JSX; they are coded against what JSX compiles down to. The main reason people use JSX is simply that people like it.
Also, you say the complexity has become intertwined, but I don't hear any examples of how that is the case.
This is awfully dismissive without sound reasoning, I think. It doesn't take long for anyone working with the old ways outlined above (for example, me five years ago) to see why React was in many ways a step forward, defining what we should expect of any new front-end libraries these days. Also, everything starts out as a cool hack/experiment at some point.
> it's one of those tools that tries to take away complexity by adding complexity on top
It's one of those tools that purposely abstracts unproductive, inefficient complexity that we were not aware of before into a higher-level one, forcing us to be more mindful of how we do things. A matter of tradeoffs, I would say, and there's gotta be a limit to how 'simple' one thing can be. Also, you're conflating React the library with the ecosystem around it.
On a sidenote, all the tooling and boilerplate React came hand-in-hand with was markedly a net positive for me. It was in picking up React, during my career, that I learned why tooling is important, picked up tools like Webpack and grunt, learned myself how to do CI/CD and other stuff.
Convenience is to be taken with a grain of salt, though. In the node.js world, "is-odd" can be a package, and it can have 926,000 weekly downloads and 22 dependents.
"Zero is not even or odd."
"Zero could be even."
"Zero is not odd."
"Zero has to be an even."
"Zero is not an even number."
"Zero is always going to be an even number."
"Zero is not always going to be an even number."
"Zero is even."
"Zero is special."
Just so you know (not just for this comment)--you are my favorite HN user. Everything you post is either funny, quirky, or extremely interesting (mostly all of the above).
And, to add to this (I'm not being snarky), he could be the father/grandfather of many HNers. I recall one of his comments about how he was doing stuff on computers in 1965.
is-odd doesn't have any tests for handling zero. What would they be anyway?
What is-odd does is throw an exception if you pass anything that isn't a safe integer or a string representation of a safe integer. Otherwise it just returns n % 2 === 1 (after converting the string to an int if necessary).
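A condensed sketch of that behavior (this is an approximation, not the actual is-odd source): reject anything that isn't a safe integer or a string that parses to one, then just take n % 2.

```javascript
function isOdd(n) {
  if (typeof n === 'string') n = Number(n);   // "5" -> 5
  if (!Number.isSafeInteger(n)) {
    throw new TypeError('expected a safe integer');
  }
  return Math.abs(n % 2) === 1;   // abs so negative numbers behave sensibly
}
```

Note that under this definition zero is simply not odd; nothing special about it.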
You don't want to start a religious war between the people who believe zero is special, and the ones who believe it should throw an error if you pass a string, and the people who believe it should attempt to convert the string to an integer, and the people who believe you should either round or truncate when the parameter is a floating point number, and the people who disagree about which direction to round, and the people who can't agree whether you should truncate towards zero or negative infinity.
Let the fight begin! It goes well with some popcorn.
I can understand how primary-school teachers don't grasp modern algebra and believe that Math is formed by fundamental, sacred definitions. But programmers ought to know better.
> In short - the DOM was often used to store state. And this just isn't a very efficient approach.
By some people, sure, but separating state and business logic from presentation and rendering logic was a well-known idea many, many years before React was around.
I think the basic premise of the article here is correct. The important development with React that hadn’t previously been widely seen in front-end, JS-based web development wasn’t the virtual DOM, it was the declarative description of the rendered content — building it in absolute terms from the current state, not in relative terms from the current and previous state. The virtual DOM is a means to that end: it makes that approach fast enough that it can be used with acceptable performance for a lot of realistic applications.
This doesn’t change the fact that the declare-and-diff strategy is extremely expensive compared to actively observing only necessary changes in the underlying state and making only necessary local updates in the (real) DOM. In a typical web app, if there is such a thing, that might not matter very much. In more demanding cases, say when you’ve got tables with thousands of cells or you’re drawing a complicated diagram with SVG, it’s still all too easy to run into performance lag with any library that uses this strategy. Then you start using escape hatches like shouldComponentUpdate or using lifecycle methods to manipulate the (real) DOM directly rather than rendering through React, at which point you’re not really benefitting from React at all for that particular component (though of course you might still be benefitting from it for the other 90% of your UI code and incorporating the rest into the same overall design using those escape hatches might make sense in that situation).
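The declare-and-diff strategy referred to above can be illustrated with a toy diff (a deliberately minimal sketch, nothing like React's actual reconciler): compare two plain-object "virtual" trees and emit only the minimal set of text updates to apply to the real DOM.

```javascript
// Walk prev and next trees in parallel, collecting patches only where the
// text actually changed. path identifies the node by child indices.
function diffText(prev, next, path, patches) {
  patches = patches || [];
  if (prev.text !== next.text) {
    patches.push({ path: path, text: next.text });
  }
  (next.children || []).forEach(function (c, i) {
    diffText((prev.children || [])[i] || {}, c, path.concat(i), patches);
  });
  return patches;
}
```

Even this toy makes the cost visible: every update walks the whole tree, which is exactly why large tables or SVG diagrams can lag and push you toward escape hatches like shouldComponentUpdate.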
>By some people, sure, but separating state and business logic from presentation and rendering logic was a well-known idea many, many years before React was around.
My claim is that this isn't a good representation of the FE culture as a whole - even if there were islands of enlightenment out there. Hell, just about all of the enterprise codebases I get contracted to work on STILL do state management the old way.
So no - I don't agree the basic premise of the article is correct.
>The important development with React that hadn’t previously been widely seen in front-end, JS-based web development wasn’t the virtual DOM, it was the declarative description of the rendered content — building it in absolute terms from the current state, not in relative terms from the current and previous state.
You are ignoring the fact that the VDOM was the imperative implementation they used under the hood to make the declarative API work. It's the VDOM that allowed them to take state management out of the DOM - and that's why they claimed that the VDOM was faster than the DOM.
Sure - if other folks are finding other implementation approaches that keep state out of the DOM in even better ways - that's great. That doesn't mean it wasn't reasonable at the time to identify the VDOM with that central innovation, since there weren't many other people doing it in other ways.
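For what it's worth, the core mechanism is simple enough to sketch in a few lines. This is a toy diff over made-up `{tag, children}` objects, not React's actual reconciler, but it shows how a declarative API can be backed by an imperative engine that only touches what changed:

```javascript
// Compare two virtual trees and emit a list of patches; only the
// patched paths would ever touch the real DOM.
function diff(oldNode, newNode, path = []) {
  if (oldNode === undefined) return [{ type: "create", path, node: newNode }];
  if (newNode === undefined) return [{ type: "remove", path }];
  if (typeof oldNode === "string" || typeof newNode === "string") {
    return oldNode === newNode ? [] : [{ type: "replace", path, node: newNode }];
  }
  if (oldNode.tag !== newNode.tag) return [{ type: "replace", path, node: newNode }];
  const patches = [];
  const len = Math.max(oldNode.children.length, newNode.children.length);
  for (let i = 0; i < len; i++) {
    patches.push(...diff(oldNode.children[i], newNode.children[i], [...path, i]));
  }
  return patches;
}

const prev = { tag: "ul", children: [{ tag: "li", children: ["a"] }] };
const next = {
  tag: "ul",
  children: [{ tag: "li", children: ["a"] }, { tag: "li", children: ["b"] }],
};
const patches = diff(prev, next);
// One "create" patch for the new <li>; the unchanged one is untouched.
```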
>This doesn’t change the fact that the declare-and-diff strategy is extremely expensive compared to actively observing only necessary changes in the underlying state and making only necessary local updates in the (real) DOM.
I'm not arguing this case one way or another... I don't claim to know. The OP is certainly welcome to keep making that case. I think that's a good contribution to make.
I'm saying that OP could do this without crapping all over React's contribution - which, to be honest, smells like a status move in order to gain more market share. Wanting more market share is fine - but it should be enough to just compare the performance between the two to make that case. Don't straw-man a "meme" to attack a competitor's status.
> My claim is that this isn't a good representation of the FE culture as a whole - even if there were islands of enlightenment out there.
I can’t really dispute that since I don’t have any useful statistical evidence. All I can say is that in my experience there were mostly two types of people working on front-ends to run in browsers in the early days: Web designers who learned programming, and programmers who learned the basic Web technologies and/or plugins. (Today, with the industry being somewhat more mature, I’d say there’s a third group, who come straight into full-on front-end programming of JS-based web UIs at the same time as they’re learning HTML and CSS.)
Crucially, this means you have some web developers who have broader programming experience and/or formal training, and others whose knowledge mostly comes from online tutorials and QA sites and from the knowledge and culture passed on by their peers. The former type probably have some understanding of software architecture for more substantial applications, and might well have had a reasonably systematic data model, separated business rules from the rendering(s) of the underlying data, and so on. The latter type often aren’t aware of these concepts and tend to write much more hacky code — such as keeping application state in the DOM — because they don’t know any better.
Adopting a comprehensive framework can mitigate that lack of knowledge to some extent, which I suspect is a big part of why the heavyweight frameworks caught on in front-end work just as they did in desktop UI work earlier, but ultimately there’s still no substitute for knowing what you’re doing. Today the same type of people who used to keep their application state in an ad-hoc mix of scattered variables and the DOM are trying to build their entire application architecture with React components and either scattering the state and business logic throughout those components or, in some cases, adopting a heavyweight data management architecture like Redux instead.
There is a fourth group of people who used to build desktop apps and applied the same approach to building webapps (no server-generated HTML, treat the DOM as a retained-mode widget set, and keep a pure JS data model). Two examples of this are JupyterLab and VSCode.
> I'm saying that OP could do this without crapping all over React's contribution
What are some specific quotes from the article that you feel are "crapping all over React's contribution"? Genuinely curious, because I don't take the tone of the article that way at all.
> but separating state and business logic from presentation and rendering logic
I think the parent was referring to something different. When you work directly with the DOM your _view_ logic is stateful. In the old days (jQuery, Bootstrap, Knockout, etc.) you were spending a lot of time simply keeping your data and your view in sync -- and god forbid you were trying to re-use some of that view logic in multiple places.
> In the old days (jQuery, Bootstrap, Knockout, etc) you were spending a lot of time simply keeping your data and your view in sync -- and god forbid you were trying to re-use some of that view logic in multiple places.
Sure, and we had the era of one-way or two-way data binding libraries as a response to doing that manually, which at least provided a quick, simple solution to that problem in the relatively common case where you were presenting a set of mostly independent data points.
However, we also had designs that were based on ideas like MVC (the original one, not the server-side framework style that hijacked the term later) where you had part of your code storing the real state, event handlers triggering updates to that state in response to user actions, and rendering code that was triggered in turn to redraw the relevant parts of the UI or update the contents of any affected form fields. This sort of architecture was using essentially the same set of software architecture ideas that we’d been using in desktop or more traditional client-server software for a long time and applying them in the context of JS (or Flash, Java, etc.) running in the browser.
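A bare-bones sketch of that architecture, with all names invented for illustration - state lives in a model, the "view" subscribes and re-renders from it, and event handlers only mutate the model:

```javascript
// Minimal observable model: the single source of truth for state.
function createModel(initial) {
  let state = initial;
  const listeners = [];
  return {
    get: () => state,
    set(patch) {
      state = { ...state, ...patch };
      listeners.forEach(fn => fn(state)); // notify views
    },
    subscribe: fn => listeners.push(fn),
  };
}

const model = createModel({ name: "Ada" });
const rendered = [];
// The "view": re-renders from state whenever the model changes.
model.subscribe(state => rendered.push(`<h1>Hello, ${state.name}</h1>`));
// A "controller" action: mutates state only, never the view directly.
model.set({ name: "Grace" });
```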
> it was the declarative description of the rendered content
this is absolutely the 'revolution' that React brought front and centre - declarative UIs for the web. This is why people who enjoy working with React enjoy it so much, whether they know it or not.
What I think this article does very well is rebut the myth that the DOM is slow. The DOM is not slow -- on the contrary, it is very fast. What is slow is browser reflow, page refreshes, display calculations, etc. In web app development of yesteryear, browser reflow was typically triggered by poorly conceived manual DOM manipulations -- which gave birth to the myth that the DOM itself is slow.
Implementing a virtual DOM and VDOM diffing is just one way to manipulate the DOM more efficiently and intelligently. At my work, we've chosen a different path without the overhead and leaky abstraction of a virtual DOM.
We built our own component-based SPA framework and recently open sourced it ( https://github.com/ElliotNB/nimbly ). Each component must have a definition of what state mutations should trigger what portions of the component DOM (via CSS selectors) to refresh. There's no extra overhead for a VDOM and VDOM diffing at all. The only overhead is accrued ahead of time by the developers who must write a definition of how their component should update in response to state changes. When state does change, the framework bundles up the queued DOM changes between all components on the page, identifies/eliminates any redundant changes and refreshes the DOM in one go.
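The pattern might look something like this sketch (hypothetical code, not Nimbly's actual API - see the repo for the real thing): components map state keys to selectors, and a scheduler de-duplicates and batches the refreshes:

```javascript
// A scheduler that collects queued selector refreshes, drops
// duplicates, and applies them to the DOM in one pass.
function createScheduler(applyFn) {
  const queue = new Set();
  return {
    notify(component, changedKey) {
      const selectors = component.updates[changedKey] || [];
      selectors.forEach(sel => queue.add(sel)); // Set de-dupes repeats
    },
    flush() {
      const batch = [...queue];
      queue.clear();
      batch.forEach(applyFn); // one pass over the real DOM
      return batch;
    },
  };
}

// A component declares which state keys refresh which DOM regions:
const todoList = {
  updates: { items: [".todo-list"], filter: [".todo-list", ".filter-bar"] },
};

const applied = [];
const scheduler = createScheduler(sel => applied.push(sel));
scheduler.notify(todoList, "items");
scheduler.notify(todoList, "filter"); // ".todo-list" is already queued
scheduler.flush();
```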
This is how I remember things as well. The Virtual DOM was a huge improvement compared to other contemporary frameworks because it consolidated multiple changes into a single DOM operation.
For one thing, all the frameworks at the time were doing 2-way binding. Which meant that the smallest change could end up triggering a bunch of computeds and observables, to the point where any change would trigger a bunch of re-renders.
React bundled all of those into a single re-render. Further, and I may be mistaken about this, React helped dispel 2-way binding and showed that it was a performance and reasoning disaster. If that was the case, I'd suppose eliminating 2-way binding also likely played a large role in the performance improvements, which may have been incorrectly attributed to the VDOM.
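A toy illustration of how naive two-way binding can feed back on itself (a made-up observable, not any real framework's code) - two bound values that update each other only settle because of the equality guard:

```javascript
// Minimal observable value: setting it notifies all subscribers.
function observable(value) {
  const subs = [];
  return {
    get: () => value,
    set(v) {
      if (v === value) return; // without this guard, the loop never stops
      value = v;
      subs.forEach(fn => fn(v));
    },
    subscribe: fn => subs.push(fn),
  };
}

// Two-way bind celsius <-> fahrenheit: each update fans out to the other.
const celsius = observable(0);
const fahrenheit = observable(32);
celsius.subscribe(c => fahrenheit.set(c * 9 / 5 + 32));
fahrenheit.subscribe(f => celsius.set((f - 32) * 5 / 9));

celsius.set(100); // triggers fahrenheit, which triggers celsius, which settles
```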
To me it was not only that - I didn't do games or highly interactive UIs where DOM updates would really matter - but it freed me from the burden of ad-hoc updating with jQuery, as everybody was used to. Unless you had some framework or had invented ways to centrally store state, chances are your code was all over the place.
I tried 2-way binding in ExtJS and it seemed to work well at first; then I started adding more and more binds to the component, and at a certain point performance just tanked.
And it was after I'd started with React already, so I just didn't bother to investigate. The issue actually was some kind of loop being formed and things aborting due to hitting limits. Which I guess means I used it somewhat incorrectly, but I never had that issue with React.
Well written. From my memory I remember trying to isolate cascades of re-renders in Backbone.js caused by multiple isolated event handlers. Also, state used to cause problems before state management libraries existed. And actually Messenger (the built-in one) was the first app that was written in React, because it was plagued by several bugs, like not showing/resetting the unread message count correctly...
Not really a JS developer, but from my recollection, everything you said is essentially correct. I was on the periphery of the Ember community when React dropped. I remember a blog post or something where the Ember developers basically acknowledged that React's virtual DOM approach was significantly more performant than what Ember was doing and, to the Ember community's credit, resolved to re-architect their view rendering layer to shrink the performance gap.
I love how many people there are in this thread that somehow avoided the jQuery hell a lot of us battled against.
I was involved in that, mostly from when my paycheque involved throwing together websites in Drupal 6. Holding that architecture together was trouble enough without having to also care about the frontend, we were only scripting it. The idea of the HTML being considered an 'app' was utterly alien to me and my colleagues at that time.
I fondly remember the short period of time where we had post after post attempting to explain what a closure is, because for most of the authors back then the concept of a function pulling outside variables into its scope was utterly alien to us. Even now this practical meme persists[0].
Ten years later and I find closures more intuitive than half the stuff we've concocted in OOP land.
More than that, jQuery was a means to an end, a way to shove low-effort animation and UI into an app to make it look snazzy and 'Web 2.0'-like (glass effect banners, drop shadows and all). If it wasn't jQuery it was script.aculo.us.
Then we got Backbone and Coffeescript at around the same time, by which time I was a Ruby dev. Backbone contributed to a fundamental shift in how we build a frontend, and we had Knockout, Sencha, ExtJS, etc. following along. And then the concept of 'comet' (keeping an HTTP connection alive for long polling) and MeteorJS.
The impact of React and its concept of the VDOM has been phenomenal. It may be overhead as the Svelte authors say, but the experience of working with React, and any similar library in the ecosystem, is a boon to anyone who wants to do serious work in the browser. Without being hyperbolic, this feels like the legacy of Smalltalk: programming in a dynamic environment, only you're not actually aware that you are.
There has to be a fantastic retrospective on the progression of JS since that initial ten-day genesis.
Not just jQuery... we had some crazy times with Dojo, MooTools, and so on. Closures and "this" being screwy (IMO) led to junior developers spending an insane amount of effort shoving things into DOM elements because it was a free global store that you could inspect and work with.
The VDOM is cool, but React (& co's) appeal is the streamlined development tooling, approaches, and ecosystem where there's a mostly agreed upon way to do things.
>I fondly remember the short period of time where we had post after post attempting to explain what a closure is, because for most of the authors back then the concept of a function pulling outside variables into its scope was utterly alien to us. Even now this practical meme persists[0].
My favorite was actually when we needed a groundswell movement to make people realize that $() wasn't a variable, but a function call, and for complex selectors (before querySelectorAll, when Sizzle was a far more complex beast) you really wanted to cache it. Or, getting people to stop attaching event handlers to every row in a 10k row table, and to just match the click to the row once.
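For anyone who missed that era, the delegation half looks roughly like this sketch (illustrative names, with tiny object stand-ins for DOM nodes so it reads without a browser) - one handler on the table instead of 10k handlers on the rows:

```javascript
// Bind one listener on the container; on each click, walk up from the
// event target until we hit the matching row (or the container itself).
function delegate(selector, handler) {
  return function onClick(event) {
    let node = event.target;
    while (node && node !== event.currentTarget) {
      if (node.matches && node.matches(selector)) return handler(node);
      node = node.parentNode;
    }
  };
}

// Tiny stand-ins for DOM nodes, just to show the walk:
const table = {};
const row = { matches: sel => sel === "tr", parentNode: table };
const cell = { matches: () => false, parentNode: row };

const hits = [];
const onRowClick = delegate("tr", node => hits.push(node));
onRowClick({ target: cell, currentTarget: table }); // resolves to the row
```

The same reasoning applies to caching: `$(...)` is a function call that re-runs the selector every time, so holding onto its result is the analogous fix on the query side.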
It really does feel like people have forgotten just how rough the gap was back then.
For real though. I feel comfortable building something in VanillaJS... e.g., building up a fragment before shoving it into the DOM, architecting how updates are applied, etc. This is all stuff that was hard learned, and became easier the more I stepped outside JS later on. I would in no way want a junior trying to do this without learning the better practices found in React - _not_ because they need the crutch, but because I learned some of those lifecycle methods and such from Cocoa/UIKit, and you need a frame of reference to internalize it all.
> This is what I understood the claim that VDOMs are faster than the real DOM meant - and the article is pretty much eliding this detail.
I disagree with this. I think the major key insight and innovation with React, which this article fully acknowledges, is that it is much easier to think about declarative UI as solely a function of the current state without having to think about the transitions to arrive at that state, and, importantly, the virtual DOM lets you do that performantly.
In other words, to take the example from the article, it would be great if we could have an "onEveryStateChange() { document.body.innerHTML = renderMyApp(); }" function, but doing that would be much too slow because it would recreate the full, real DOM. Using the virtual DOM lets you write essentially the same code, but in a performant manner, and I think the article is clear on this fact.
I'm not familiar with Svelte, but the article has piqued my interest because it makes it sound like Svelte lets you write declarative UI without needing to do the full virtual DOM diffing.
lit-html also lets you write declarative UI without diffing (and use actual HTML tag syntax without any build tools): https://lit-html.polymer-project.org
Svelte is a compiler, so I think it's figuring out the possible state changes as part of compilation. Contrast with React where any component can return any elements at any time.
> yes if you architect things well then you could avoid a lot of these - but let's face it, front end developers weren't typically renown for their application architecture skills. … This is what I understood the claim that VDOMs are faster than the real DOM meant - and the article is pretty much eliding this detail.
I agree that a large part of the problem is the lack of proper architecture and general poor quality of practice but that’s also a problem for the distinction which you’re attempting to draw. I think the core React team likely meant what you meant but the community’s love of both fads and crapping on whatever isn’t new and shiny meant that nuance was deeply buried under the “it’s go-faster magic from Facebook!!!” marketing train.
I remember having absolutely surreal experiences where it was like “why are you saying it’s faster? Here’s a benchmark showing it’s 5 orders of magnitude slower.” “It uses a virtual DOM” “I know, but don’t you have a benchmark where it’s actually faster?” “You just don’t get it”.
I do think React helped bring some improvements around architecture but I think an under-appreciated part of that was that since it required a full compiler toolchain, 100% of projects could use the latest JavaScript features (notably ES6 classes and arrow functions), data structures, modules rather than rewriting everything, etc. which noticeably reduced the number of complex things people had to get right, tune, and reason about.
I still do some old-school jQuery manipulations, and a lot of what kills you on the performance front is the repeated manipulation of the same set of elements in the same frame. Often, you go in and change an element, only to have another piece of JavaScript change it again. Each of those modifications then requires a complete relayout of the page; that's when it gets expensive.
You could say that we should just optimize our jQuery, and you'd be right. We just don't have a structured way of figuring out everything that touches a "component". (What is a component anyway, when it's all ad hoc?)
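One common fix for that repeated-relayout problem is to batch all reads before all writes, so layout is invalidated at most once per pass. A sketch of the idea (a hypothetical helper, similar in spirit to libraries like fastdom):

```javascript
// Queue DOM reads and writes separately; on flush, run every read
// (measurements) first, then every write (mutations), so the browser
// only has to relayout once instead of after each interleaved change.
function createFrameBatcher() {
  const reads = [], writes = [];
  return {
    read: fn => reads.push(fn),
    write: fn => writes.push(fn),
    flush() {
      const results = reads.map(fn => fn()); // measure first...
      writes.forEach(fn => fn());            // ...then mutate once
      reads.length = writes.length = 0;
      return results;
    },
  };
}

const batcher = createFrameBatcher();
const order = [];
batcher.write(() => order.push("write1"));
batcher.read(() => { order.push("read1"); return 42; }); // e.g. offsetHeight
batcher.write(() => order.push("write2"));
const measured = batcher.flush(); // reads run before writes
```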
Absolutely, and it ignores that React has never been designed purely to render into the browser DOM as defined by the W3C. The browser's DOM is simply one target, though certainly the most popular one. I use React every day for building other things: XML targets like PDFs, Word docs, and SVGs, raw strings, canvases, native, etc. Far from a toddler's mistake, the team was well aware of this from the start, and the shift from everything in the 'react' package to splitting out 'react-dom' is clear evidence of this.
Components are a very powerful concept that goes way beyond the common scenario of building a web page and updating it with some data.
Why doesn't the credit for "changing the culture of how we thought about state management on the frontend" go to AngularJS? At least Angular is what changed it for me, and it is the oldest of them.
> was typically done when the virtual DOM approach was introduced.
Don't care and I am guessing you didn't read the article. Ignorance of the DOM does not redefine what it is. This is just as true for jQuery stupidity as it is for React virtual DOM nonsense. Fortunately, the DOM is defined in a standard specification so there is a document of truth that you can go read.
> In short - the DOM was often used to store state. And this just isn't a very efficient approach.
Again, don't care. Other people's misuse and stupidity is their problem. That stupidity does not alter the technology specification.
First, I think anyone using React solely because of the virtual DOM implementation is largely missing the point. IMHO, the real win of React is the functional and composable way components can be designed and implemented.
Second, no disrespect to Svelte, but I think there's a huge trade-off between the React approach and the Svelte approach that developers should be aware of. React is a pretty unopinionated library, all things considered. The only compilation step necessary is JSX to Javascript. JSX maps pretty directly to React's API. This means compilation is pretty simple. So much so that you can do it by hand really easily if you really wanted to. Svelte, on the other hand, is pretty compilation-heavy. There's a lot of what I'd consider to be non-trivial transformation going on between the code you pass to the Svelte compiler and what comes out of it and runs in the browser. Personally, I'm less comfortable with that compared to React's runtime library approach. But if you are comfortable with that trade-off, that's perfectly fine. It is worth being aware of it, though.
It always bugs me when I'm using a framework with a custom HTML templating language (Angular, Vue or possibly Svelte); it's never clear what the differences between them are.
It's almost a new language but similar every time, with different pitfalls -- an ad-hoc, informally-specified, bug-ridden, sometimes slow implementation of half of HTML and half of JavaScript.
For example, framework Foo does not have the concept of 'else' at all in its HTML templates. Another framework Bar has an 'else' like <div bar:else="expr" />, but the scope of the else is totally different from another framework Baz, or from JavaScript itself.
JSX, on the other hand, is straightforward -- when you open a curly bracket, it's just JavaScript expressions -- map, conditionals, lexical closures, everything works out of the box.
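That's easy to see once you remember what JSX desugars to - nested function calls over plain data, so any JS expression works inside the braces. A sketch with a made-up `h` function standing in for `React.createElement`:

```javascript
// <ul>{items.map(i => <li>{i}</li>)}</ul> is just nested calls:
const h = (tag, props, ...children) => ({ tag, props, children });

const items = ["a", "b"];
// map works because it's ordinary JavaScript, not a template directive:
const tree = h("ul", null, ...items.map(i => h("li", null, i)));
// conditionals are plain JS too - no special 'else' syntax to learn:
const maybe = items.length ? h("p", null, `${items.length} items`) : null;
```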
Anything with a DSL is evil. That's why I never liked Vue and don't understand its huge popularity - you get none of the functional benefits of React, you might get a minor speed increase, and you DO get to write code the old style with custom DSL and no clear components.
Don't forget to mention the poor (and complicated) editor support for "these custom HTML templating languages". JSX is very well supported in most editors.
> It always bugs me when I'm using a framework with custom HTML templating language (Angular, Vue or possibly Svelte)
This is the most ridiculous thing I hear when people compare frameworks.
I don't know about angular anymore, but with vue you can use jsx if you wanted to. It's in the official docs, so it's not some random third party support either.
Also, my dude, there are like half a dozen rules when it comes to Vue templates. JSX has lots of small rules about component names and things like class vs className as well.
As for putting JavaScript expressions in your templates... to each his own, I guess, because IMO it's a pretty bad anti-pattern to put an extensive amount of procedural code in the template code. Again with Vue, using things like computed properties in single file components (SFCs) makes the code very easy to read and maintain.
Even without knowing vue, all of those examples are very straightforward in what they're doing. Just because the click event can take multiple options doesn't mean that it's inconsistent.
And yea, it's more complicated than "everything in the brackets is javascript and you already know javascript so it's all super simple". If this is where the bottleneck is for you then, fine.
All I can say for myself is that I find vue's templating easy enough that it's a non issue.
The fact that it is restrictive because it's a DSL is a plus for me, because it avoids some really ugly code that I've seen in some React projects where the programmer puts a tonne of JS code into the templates, which as I mentioned before I find an anti-pattern.
As for the magic... yea vue is more magicky, which is why I like it. It's a framework, it's supposed to magic away the stupid boilerplate code. In some ways this is going to be relative because there are people out there that think frameworks like react are too magic and require too much tool-specific knowledge when vanilla js can get the job done. And people that make this argument are technically right in the same way you're technically right that vue is more complicated than react.
> Just because the click event can take multiple options doesn't mean that it's inconsistent.
That's what inconsistency means. It has multiple v-* attributes and each has different rules on what it accepts.
> ugly code that I've seen in some react projects where the programmer puts a tonne of js code into the templates which as I mentioned before I find an anti-pattern.
JSX isn't templates ;)
> It's a framework, it's supposed to magic away the stupid boilerplate code.
I don't mind magicking away the boilerplate code. I do mind when it's once again so inconsistent in how it magics away that code. For example, in my second link multiple nested properties become properties of `this`, and then:
// `this.isFolder` magically hoisted into `this` from
// `object.computed.isFolder`
//
// this.open can be set directly. Magic.
// Even though `this.open` magically hoisted
// into `this` from `object.data` which is a function that
// returns an object whose keys and values are hoisted
// into `this`
//
// this.model.children cannot be set directly. Not magic
1 - things defined in data, computed, methods, and props can be accessed on the component directly as a shortcut. This is in the fairly short documentation and everywhere in the code examples. It's not inconsistent just because you don't know the rule. It's like complaining that variable names in some languages can start with _ or other special characters but can't start with '1', or start with '2', or start with '3'...
2 - this.model.children definitely can be set directly, and it will be reactive. Unless you're passing an object literal without binding it, or if it's not a data object - although I am not 100% sure on this, since all my props are usually made reactive by Vue and so they are bound. But I know code like this should work, because I've done it.
3 - I don't think that's the correct use of hoisted.
I feel like maybe you'd like vue a lot more if you gave it a chance and went through the documentation (which is pretty good, short and simple). I don't disagree that it has a little bit of magic, but it probably looks worse than it really is if you don't know the rules. Once you know a handful of rules, things are fairly easy to reason about. IMO, easier than angular, and sort of easier than react because there's less code.
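Rule 1 can be sketched in a few lines (a toy model, nothing like Vue's real reactivity system) - keys from `data()`, `computed`, and `methods` all end up reachable on the instance:

```javascript
// Toy component factory: merges data() keys onto the instance, exposes
// computed values as getters, and copies methods straight across.
function createInstance(options) {
  const vm = Object.assign({}, options.data ? options.data() : {});
  for (const [key, fn] of Object.entries(options.computed || {})) {
    Object.defineProperty(vm, key, { get: () => fn.call(vm) });
  }
  Object.assign(vm, options.methods || {});
  return vm;
}

const vm = createInstance({
  data: () => ({ open: false, model: { children: [] } }),
  computed: { isFolder() { return this.model.children.length > 0; } },
});

// Nested data is reachable and mutable through the instance:
vm.model.children.push({}); // vm.isFolder now recomputes to true
```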
I love Vue and I vastly prefer it instead of React (have even been using Nuxt lately, now that is some real magic) but that comment makes really good points about Vue!
JSX is clearly a second class citizen in Vue. Last time I tried, using TypesScript with Vue made JSX unavailable. And it was not documented, so the process was try, fail, look around, find the Github issue about this.
You can't expect most code bases to use JSX as the template. It's not even praiseworthy if the framework provides every possible choice. Just like you can do anything in C++, but in practice it's a terrible language to work with.
For the class part, I'm sure it's just whataboutism...
And for expressions, it's not praiseworthy to put them in the template, but my point is: why not reuse JavaScript semantics rather than implementing your own that differ from JavaScript?
> There's a lot of what I'd consider to be non-trivial transformation going on between the code you pass to the Svelte compiler and what comes out of it and runs in the browser. Personally, I'm less comfortable with that...
How is this different than the "non-trivial" transformations that V8 makes to actually compile and run your code? Does svelte do unpredictable / unexpected things? Don't you make runtime calls to the react lib where they can do whatever they want? I'm genuinely confused.
I don't care one way or the other - I'm not a web dev. It seems from this comment that you're just scared of compilers, which is strange. No matter what you're relying on third party libs in your code. Why is it somehow safer for that third party code to be used at run time rather than compile time? I would probably argue the opposite. Why the strong aversion to compilers?
I don't actually have a strong aversion to compilers. I use tools like Babel and Webpack regularly. However, I've seen the types of transformations the Svelte compiler does and they tend to hide complexity, making it harder to trace and debug code at runtime. Source maps can only do so much. It's much harder to debug code that doesn't resemble what you wrote in the first place.
> However, I've seen the types of transformations the Svelte compiler does and they tend to hide complexity, making it harder to trace and debug code at runtime.
Are you saying the original source code hides complexity that is present in the generated code? If so, I guess that's the whole point, but then a runtime framework also hides lots of complexity that your code doesn't have to manage (which, again, is the entire point of using a framework).
> It's much harder to debug code that doesn't resemble what you wrote in the first place.
With a runtime framework, there's lots of code running that isn't your code, which can also make debugging difficult. With Svelte, at least the generated code is fairly straightforward and easy to step through. In many cases, I think it's actually easier, not harder, to debug.
> Svelte, on the other hand, is pretty compilation-heavy. Personally, I'm less comfortable with that compared to React's runtime library approach.
Svelte compiles, React runs at runtime, that's true.
I've spent the last week (and weekend) doing the UI for a new project in Svelte. The compiler approach is pretty rad as it seems to catch more errors before I test them in browser.
You can download any project from the https://svelte.dev/ tutorial / online REPL and it'll have a rollup file, watching files, compiling them and telling about broken code.
vscode also has a plugin for Svelte components that shows pretty underlines while you work. The compiler approach means I see more warnings faster and save time.
I know that in concept not needing compilation is nice because it’s one less thing to have to worry about, but I don’t think I’d want to use JavaScript without any compilation. Just curious what the use case for not doing compilation is?
>Just curious what the use case for not doing compilation is?
I'll add another one - the code that comes out is the code that goes in. Remember the days of CoffeeScript and minification before sourcemaps?
When most of your work comes from maintaining a codebase, being able to effectively debug your code is crucial, and hitting an error in production that can only be painfully traced back to development will quickly offset any advantage a framework gives you.
I think this is the big reason to prefer plain old JavaScript. Compilation is OK in Java etc., where you can still debug your Java code. But with many of these JavaScript frameworks I don't think that is possible, is it?
I would add that a "debugger" is not mostly a tool for finding and fixing bugs. It is a tool for code understanding, giving you a "live view" of your code, for READING your (or someone else's) code in the order it executes.
>I would add that a "debugger" is not mostly a tool for finding and fixing bugs. It is a tool for code understanding, giving you a "live view" of your code, for READING your (or someone else's) code in the order it executes.
Absolutely! I couldn't agree more.
From the discussion about forking sub-processes from the shell:
The compiler/assembler/disassembler/debugger should be built into the shell, just like ITS DDT at the MIT-AI Lab in 1969! ;)
Some folks simply don't want to use a build system, whether it be for experimentation or not wanting to deal with the overhead of tooling.
Others are looking for options that might minimize overall script / JS size (which is a hallmark of Jason Miller, author of both Preact and HTM).
Another might be to make this easier for beginners. For example, the React docs link to an example HTML page that uses the `babel-standalone` build [0] as a way to try out JSX syntax. However, that's a hefty piece of JS, and it's not at all advisable for real use. HTM might be a good alternative to that.
I think you have answered your own question: it's one less thing to worry about in your stack.
If you're targeting modern evergreen browsers you already have a lot of modern features at your disposal, including ES6 modules, async/await, string interpolation, but we're not using them.
In fact, I'd say that it's way more than "one thing" that you can stop worrying about: you won't need Webpack/Rollup/etc, Babel, NPM/Yarn, Node.js itself, etc.
React is more like 'Lambda the Ultimate Web Component'.
A component is almost a function that returns an element. Expanding a component is like calling a function and giving it the props.
So you can have some abstract common behavior in HOCs f and g, and then compose them: `h = compose(f, g)`.
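The composition idea above can be sketched in plain JavaScript, using strings as a stand-in for React elements (the HOC names here are illustrative, not from any real library):

```javascript
// A minimal sketch of "components are functions": HOCs take a component
// and return an enhanced component, so they compose like ordinary functions.
const compose = (...fns) => x => fns.reduceRight((acc, fn) => fn(acc), x);

// A "component": props in, markup out.
const Button = props => `<button>${props.label}</button>`;

// Two hypothetical HOCs.
const withBorder = Component => props =>
  `<div class="border">${Component(props)}</div>`;
const withLogging = Component => props => {
  console.log('rendering with', props);
  return Component(props);
};

// h = compose(f, g) applied to Button wraps it in both behaviors.
const Enhanced = compose(withLogging, withBorder)(Button);
console.log(Enhanced({ label: 'OK' }));
// <div class="border"><button>OK</button></div>
```

Because HOCs are just `Component => Component` functions, any generic function-composition helper works on them unchanged.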
A quick comparison with Angular: @Component({template, style}) seems composable if we stretch a lot. But why make template and style in the decorator... They are not something we consider most abstract at all.
The Great Quux's Lisp Microprocessor is the big one on the left of the second image, and you can see his name "(C) 1978 GUY L STEELE JR" if you zoom in:
Also, according to Wikipedia[0], Brendan Eich was supposed to create "Scheme in the browser", but in the end it became Scheme with Java syntax. I always wonder whether, had it been a real "Scheme in the browser", the web would have advanced faster; S-expressions, at least, are good at expressing HTML. We wouldn't have had to wait 20 years for JSX...
Whether you get a component via a function call or via a named export identifier (a component class), the two have essentially the same net benefit over time.
A function that returns a component has no more reusability than a component class' identifier. That component itself has a very narrow use case of it being a UI component.
Functional programming folks love React. Pretty much every Clojure web framework is built on React. But yes, with some effort you can come up with a definition of "functional" which React will fail to meet.
I love Vue.js. I've never really caught onto the JSX stuff. If you have ".vue" files then you get nice separation of the template HTML, methods, and the scoped styling. The JavaScript syntax is pretty straightforward, and the templates just add nice directives like v-if, v-for, etc. I think it looks pretty clean and is fairly easy for JS developers to pick up. Integration into a project is pretty straightforward as well. We have a webpack installation that pulls in the Vue files and bundles everything, and it is quite clean.
I've never understood how people view the separation of template, styles, and business logic into separate files as simpler. Now, to work on a single component, I need to open three files in my editor, instead of one.
The "separate files" argument is a red herring. It is really about separate "mindsets" or "modes of thinking".
In effect, JavaScript logic tends to be procedural/imperative, while templates allow declarative semantics, and styles are nearly a 2.5D constraint language. "Separation of concerns" here means only having to think in a particular mode, rather than blending all of those modes of thought into a single eyespan.
Notably, Vue allows for single-file components, while preserving the familiar and intentionally designed separation of declarative (HTML), imperative (JavaScript), and aesthetic (CSS) code.
In Vue (or at least the way the majority of people use Vue), each component is separated into a .vue file. That component's template, style, and business logic are all encapsulated in that one file. A basic .vue file starts out with <template></template><style></style><script></script>. It keeps everything nice and simple, in my opinion. Each different "mode of thinking" is separated out, but still all together in one file.
Check out web components with LitElement and lit-html.
You get a very React-like experience with components and functional templates in JS, but it's all standard JS, and there's no framework, just standard web components. The lock-in and risk is very low for enterprises.
Went down this rabbit hole yesterday and played with LitElement/lit-html for the first time... great experience for folks who don't want much "ceremony". Was also SUPER impressed with AppRun.
The more that I depart from my "bare metal" web tooling the riskier/dumber things get. I always want to see a path back to a basic HTML5 shell, driven by almost-pure JS (w/tiny helper libs), and basic CSS. Just like basic UNIX tooling - basic web tooling just works!
To those thinking about trying lit-html; it IS as simple as the example on the GitHub project page. I was able to build it into a semi-complex application within a couple of hours and it had massive performance payoff w/o compromising how I want to build things. It definitely gets my "KISS" approval stamp.
They allow you to use anything you want, as long as it's not React?
I am very curious about this kind of decision. I realize you may not be able to share details, but whatever you can share would certainly be interesting.
Well, the _license_ certainly shouldn't be an issue at this point. It was changed to a standard MIT license a couple years ago, same as all the other major JS frameworks.
If your company has issues with React being developed by Facebook, that's an entirely different question.
There was a license controversy a couple of years back, yes, but that was solved rather quickly - I understand that you as an intern don’t necessarily have any sway over legal, but they’re not up to date.
Vanilla JS with a good understanding of MVC serves quite nicely in most cases. I wrote a few introductory programs to clarify it (https://github.com/madhadron/mvc_for_the_web).
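The vanilla-JS MVC approach mentioned above can be sketched in a few lines; this is a generic observer-pattern illustration (names are mine, not from the linked repo): a model owns the state and notifies views, so the DOM is rendered *from* state rather than used to store it.

```javascript
// Minimal model/view wiring: the model holds state and notifies
// subscribed views on every change.
class Model {
  constructor(state) {
    this.state = state;
    this.listeners = [];
  }
  subscribe(fn) { this.listeners.push(fn); }
  set(patch) {
    Object.assign(this.state, patch);
    this.listeners.forEach(fn => fn(this.state));
  }
}

const counter = new Model({ count: 0 });

let rendered = '';
// The "view": in a browser this callback would write to element.textContent.
counter.subscribe(state => { rendered = `Count: ${state.count}`; });

counter.set({ count: 1 });
console.log(rendered); // Count: 1
```

The key property is the direction of data flow: the view never inspects the DOM to discover state, it only receives state and renders it.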
Why don't they allow React? One reason I can think of is that they have a server-side rendered architecture and they want people to continue to use that. They don't want new devs to use company time to pad their resumes with unmaintainable learning-project front-end code. At least that's why I generally shoot down attempts at using FE JS frameworks over here. We have some really awful React 0.11 pages that are years old and will take weeks to redo properly.
Anyway, my point is you might want to check if it is okay to use any FE framework at all. It seems like a very strange policy to say "you can use any FE framework except React".
Virtual DOM is an implementation decision made for performance, one a developer shouldn't even need to be very aware of. The main upside of React is that it has a huge ecosystem.
> 1. Changes to the DOM cost a ton more than executing JS code.
The point of the article, and of the performance problems that people actually have with React is that this might be true for a small number of JS operations, but that
1) Tree diffs are computationally complex operations that add up for real-sized apps
2) The diff is actually unnecessary if you simply take into account the structure of templates, so diffs are pure overhead.
So _do_ feel bad when you have a no-op render() in React at least, because the resulting VDOM diff just chewed up CPU and battery for no reason.
> There's a lot of what I'd consider to be non-trivial transformation going on between the code you pass to the Svelte compiler and what comes out of it and runs in the browser. Personally, I'm less comfortable with that compared to React's runtime library approach.
I initially had a similar concern, but so far, the opposite appears to be true. The Svelte compiled code is quite readable and easy to follow, and because there is no runtime, it's much easier to walk through exactly what is happening. With a complex runtime, it can sometimes be difficult to figure why something isn't working as expected without having a deep understanding of the runtime codebase.
> Time slicing keeps React responsive while it runs your code. Your code isn’t just DOM updates or “diffing”. It’s any JS logic you do in your components! Sometimes you gotta calculate things. No framework can magically speed up arbitrary code.
In my experience, as your app grows, the amount of time you spend on dom reconciliation becomes negligible compared to your own business logic. In this case, having a framework like React (especially with concurrent mode) will really help improve perceived user experience over a naive compiled implementation.
> In my experience, as your app grows, the amount of time you spend on dom reconciliation becomes negligible compared to your own business logic. In this case, having a framework like React (especially with concurrent mode) will really help improve perceived user experience over a naive compiled implementation.
In my experience, the exact opposite occurs. If there is ever any heavy computation I need to do, I usually try to spawn a web worker or offload it to the server. In contrast, as your app tree grows, reconciliation costs grow (super?)linearly, and more importantly there is (currently) no way to offload reconciliation.
Same. After building many very complex, deeply nested components, the only time I ran into performance issues with Vue was a page that ground to a halt on re-rendering, simply because I was rendering too many elements into the DOM, with their watchers filling up memory.
After hours of combing through frames of the memory profiler and seeing only highly concurrent framework calls the only solution was to paginate the particular content. 99% of the users never had this issue but it was 1-2 customers who had thousands of components to render instead of the usual hundreds.
I'm really curious now whether Svelte would have helped with that, because it was a huge dev timesink and one where I was never satisfied with the solution. It really should be able to render that amount of data. It obviously wasn't a problem in the jQuery/Rails version I was replacing and improving upon (although page load times were higher).
The new React concurrency model wouldn't have helped from what I've read. I just needed something lighter weight from the rendering model itself. Vue 3.0 is apparently going to come with plenty of performance improvements so I'm looking forward to that as well.
> After hours of combing through frames of the memory profiler and seeing only highly concurrent framework calls the only solution was to paginate the particular content. 99% of the users never had this issue but it was 1-2 customers who had thousands of components to render instead of the usual hundreds.
Did you try something like https://github.com/Akryum/vue-virtual-scroller? The trick is if you know the height/width of the elements, you can only render the elements directly in the viewport (+ some padding) and replace the missing elements with fixed-size blank divs, whose width and height you can find with some math. That way, you don't have to rely on the browser to layout your elements, nor do you have to reconcile hidden elements. (Essentially, element occlusion culling for the virtual DOM.)
Looks like vue-virtual-scroller only works with fixed-height elements (because n * m is easier to compute than n_1 + ... + n_m), but as long as you don't rely on the browser for layout, the same trick works with variable element sizes known in advance.
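The windowing math described above is simple in the fixed-height case; here's a framework-agnostic sketch (function and parameter names are mine):

```javascript
// Given scroll position and viewport size, compute which fixed-height rows
// to actually render, plus the heights of the blank spacer divs that stand
// in for everything off-screen.
function visibleWindow(scrollTop, viewportHeight, rowHeight, rowCount, overscan = 2) {
  const first = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const last = Math.min(
    rowCount - 1,
    Math.ceil((scrollTop + viewportHeight) / rowHeight) + overscan
  );
  return {
    first,
    last,
    topSpacer: first * rowHeight,                    // blank div above
    bottomSpacer: (rowCount - 1 - last) * rowHeight, // blank div below
  };
}

// 10,000 rows of 30px each, scrolled 3,000px into a 600px viewport:
console.log(visibleWindow(3000, 600, 30, 10000));
// only rows ~98-122 are rendered; the other ~9,975 are two cheap divs
```

The `overscan` rows above and below the viewport are the "+ some padding" the parent mentions, so fast scrolling doesn't flash blank space.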
That wouldn't have worked for the problem unfortunately. It was a pretty compact UI with a lot going on so scrolls wouldn't have masked enough components. Thanks for the link though.
You could use svelte just for that heavy component and see if it makes a difference? Svelte compiled output is very small (only brings what you need) so you can quite easily embed it without dragging along a whole extra framework.
I think it neglects Dan's original point. Say you're doing a search input that filters a list of elements using fuzzy matching. No amount of optimization in your components or the framework is going to make the fuzzy string matching library you use work faster. Faster renders might make more room for the main thread to update, but fundamentally the problem persists. Concurrent React would allow you to type while the fuzzy matching happens asynchronously.
This is different from the demo shown. The demo with the charts is very render-heavy. It's an unrealistic experience (and in my opinion, it is regrettable that it was used to show the power of async rendering). In any real application, if you're re-rendering thousands of nodes on every key press when their component instances' input/state are not changing, something is very wrong. A fuzzy string matching filter is a much better example of this, since the visibility of each item depends on the state of the text input.
Sure, not all items in your list will update simultaneously. But are they really all visible on screen? Do they really all need to update instantly? The overhead of the framework is almost certainly negligible here either way. But the scheduling that takes place is going to be critical, because that's what directly affects how the app's performance is perceived by the user.
> Say you're doing a search input that filters a list of elements using fuzzy matching. No amount of optimization in your components or the framework is going to make the fuzzy string matching library you use work faster.
> Concurrent React would allow you to type while the fuzzy matching happens asynchronously.
Are you sure that is the case?
From what I gathered about concurrent React and Fiber, it can split the rendering of multiple components into different time slices, and also give more priority to certain events, so that some re-renders stay smooth and responsive (the input typing in the demo), while others can be split across multiple frames and delayed a bit (the update on the charts).
But all React can do is to split up multiple render() calls (or function component calls) into different time slices. It cannot "slice" a single render() call any further. So, if there's an expensive synchronous computation inside that render(), like the hypothetical `fuzzySearch(query)`, that computation will block the main thread no matter what, and the application will be unresponsive until that computation finishes.
There's no going around that if the fuzzy match function is synchronous. That function would have to be changed not to block the main thread. E.g., it could have a timeout so it doesn't take longer than, say, 10ms, and return a partial list of results on that case. Or be a generator, so you can consume its matches one by one and split that work into different frames. Or you could move that computation to a webworker, thus making it effectively asynchronous, and avoiding blocking the main thread.
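The generator idea above can be sketched like this (the matcher here is a trivial substring check standing in for real fuzzy matching, and the names are illustrative):

```javascript
// The search yields one match at a time, so the caller can consume a
// bounded amount of work per frame instead of blocking the main thread
// in a single synchronous call.
function* searchMatches(items, query) {
  for (const item of items) {
    if (item.includes(query)) yield item;
  }
}

// Consume at most `budget` matches per "frame"; in a browser you would
// call this from requestAnimationFrame / requestIdleCallback until done.
function drainSome(iterator, budget) {
  const out = [];
  for (let i = 0; i < budget; i++) {
    const { value, done } = iterator.next();
    if (done) break;
    out.push(value);
  }
  return out;
}

const it = searchMatches(['apple', 'grape', 'banana', 'pineapple'], 'ap');
console.log(drainSome(it, 2)); // first frame: ['apple', 'grape']
console.log(drainSome(it, 2)); // next frame:  ['pineapple']
```

The generator keeps its position between frames, so no work is repeated; the UI just sees results trickle in instead of freezing.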
I don't think there's much React, or any framework, can magically do in this case to make an expensive synchronous operation not block the app. But maybe I'm completely wrong and that's exactly what concurrent React does; or maybe I misunderstood your scenario entirely :)
Instead of results.filter(fuzzySearch(query).match) at the list level, you'd simply map all results and do the fuzzy matching within each result's render function (returning null if it doesn't match).
In that case, instead of fuzzy matching each result in one render, it's spread over many smaller renders. When those get executed is much less important and can be scheduled for idle time by the browser.
That's actually what I didn't understand in Dan Abramov's response ...
If you run a synchronous function that takes 2 seconds your app will block whether you use Svelte or React or whatever. You need to offload it to a webworker anyway.
Still, I think it's a good idea not to update the DOM more often than needed: only apply the last change to an element every 23 ms and skip the changes that have been overridden. You can do this without a virtual DOM.
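A sketch of that coalescing idea, with no virtual DOM involved (names are mine; flush is manual here for clarity, whereas in a browser you'd drive it with something like setTimeout(flush, 23)):

```javascript
// Queue only the *latest* value per element, and apply the whole batch
// at flush time, so intermediate changes that were overridden never
// touch the DOM at all.
function makeBatcher(apply) {
  const pending = new Map(); // element key -> latest value only
  return {
    set(key, value) { pending.set(key, value); },
    flush() {
      for (const [key, value] of pending) apply(key, value);
      pending.clear();
    },
  };
}

const writes = [];
const batcher = makeBatcher((key, value) => writes.push([key, value]));

batcher.set('title', 'a');
batcher.set('title', 'b'); // overrides 'a'; 'a' is never written
batcher.set('count', 1);
batcher.flush();
console.log(writes); // [['title', 'b'], ['count', 1]]
```

In a real page, `apply` would do the actual DOM write (textContent, class, etc.), so each element is touched at most once per interval.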
Virtual DOM diffs do a huge amount of unneeded work because in the vast majority of cases a renderer does not need to morph between two arbitrary DOM trees, it needs to update a DOM tree according to a predefined structure, and the developer has already described this structure in their template code!
A large portion of JSX expressions are static, and renderers should never waste the time to diff them. The dynamic portions are clearly denoted by expression delimiters, and any change detection should be limited to those dynamic locations.
This realization is one of the reasons for the design of lit-html. lit-html has an almost 1-to-1 correspondence with JSX, but by utilizing the static/dynamic split it doesn't have to do VDOM diffs. You still have UI = f(data), UI as value, and the full power of JavaScript, but no diff overhead and standard syntax that clearly separates static and dynamic parts.
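The static/dynamic split lit-html relies on falls out of how tagged templates work in plain JavaScript: the engine passes the *same* strings array to the tag function on every evaluation of a given call site, so a renderer can cache per-template work keyed on that array and only re-check the expression values. A minimal demonstration (the `tag` function is illustrative, not lit-html's actual API):

```javascript
// A tag function just receives the static strings and the dynamic values
// separately; no parsing or diffing of the static parts is ever needed.
function tag(strings, ...values) {
  return { strings, values };
}

const greet = name => tag`<h1>Hello, ${name}!</h1>`;

const a = greet('Alice');
const b = greet('Bob');

// The static parts are literally the same array across renders...
console.log(a.strings === b.strings); // true
// ...only the dynamic values differ, so that's all a renderer must check.
console.log(a.values, b.values); // [ 'Alice' ] [ 'Bob' ]
```

That identity check is what lets a template-literal renderer skip straight to the dynamic slots, which is exactly the work a VDOM diff spends rediscovering.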
I really think the future is not VDOM, but more efficient systems, and hopefully new proposals like Template Instantiation can advance and let the browser handle most of the DOM updates natively.
> I really think the future is not VDOM, but more efficient systems, and hopefully new proposals like Template Instantiation can advance
Template Instantiation is like a half of a half of 1% "advance" in the best case scenario. It's being rushed forward despite the fact that no one sat down and listed all the benefits vs. all the downsides of implementing it in the browser.
What browsers do need is a declarative DOM API and a native "DOM as a function of state" which renders the whole instantiation proposal moot, and at the same time actually advances the browser as a platform.
One nice thing about React is that it can take care of quoting for you depending on the method call the JSX template translates into (attribute, value, element name). String templates don't have that nice property.
Angular also works in a somewhat similar way; there is no virtual DOM there either.
Instead, the modern compiler is used at build time to generate what looks like a change detection function and a DOM update function per component.
These functions will detect changes and update the DOM in an optimal way without any DOM diffing.
However, because Javascript objects by default are mutable, after each browser event Angular in its default change detection mode has to check all the template expressions in all the components for changes, because the browser event might have potentially triggered changes in any part of the component tree.
If we want to introduce some restrictions and make the data immutable, then we can check only the components that received new data by using OnPush change detection, and even bypass whole branches of the component tree.
This is the current state of things; for the near future, Angular is having its internals rebuilt in a project called Ivy.
One of the main goals of Ivy is to implement a principle called component locality.
Ivy aims at getting to a point where if we change only one component, we only have to recompile that component and not the whole application.
I think the article puts the focus on the wrong thing. The current change detection and DOM update mechanisms made available by modern frameworks, virtual DOM or not, are more than fast enough for users not to notice the difference, including on mobile, once the application has started.
What we need is ways to ship less code to the browser, because that extra payload makes a huge difference in application startup time.
Thanks for the write up - it's a very succinct explanation of how Angular works in comparison.
I found the original article to be a really good read, and the Svelte approach in general seems rather neat. I do however find that in this current front-end framework sphere, there seems to be a huge amount of religiosity and one-upping going on.
I hear routinely (on-line and off) developers vocalising some anti-[jQuery,angular,etc.] mantra, which to be honest saddens me. Yes the jQuery approach was flawed in so many ways in comparison to the modern frameworks. Yes Angular 1.x was flawed in many ways compared to what we have on offer today. But those tools were still great improvements on what we had before (for anyone who knew the DOM-API standardisation nightmares pre-jQuery, or state management / testability woes pre angular/react).
Svelte may take us down the next path, and if it allows us to produce better, smaller, more testable code then it has my full backing. But I think as a community we need to strive to be less polarising - from my perspective its likely to be mostly reductive, and lead to even more JavaScript fatigue.
I would guess this is partly because the modern front-end framework leaderboard is a zero-sum game: you can't sanely use two or more different frameworks for most of your day-to-day work. Maybe you have one for work and one for hobby development, but that's about it. I'd be torn trying to remember the quirks of both React and Vue, for example.
And thus you see it in discussions: people feel the need to pull one framework down to put their preferred one on top. We know what happens to a library without a critical mass of adopters: it loses contributors, which reduces the rate of growth and, in turn, the quality of the library over time.
Which is kinda sad. A lot of work goes into these frameworks, and I really respect that. There's no rule that says new ideas must completely supersede their predecessors. I wonder what needs to be done to get us over that JS fatigue.
>The current change detection and DOM update mechanisms made available by modern frameworks virtual DOM or not are more than fast enough for users to notice, including on mobile and once the application is started. What we need is ways to ship less code to the browser [...]
I wonder how it affects battery usage though. Downloading the code doesn't happen as often as running the code, if it's really an app and not content needlessly packaged as an app.
So glad to see this article, I've long wondered how this "virtual DOM is faster" myth got accepted as gospel when clearly it's pure overhead, compared to a well written app that updates the DOM directly only when needed (which I find is easy to accomplish in most apps).
Can't speak to the svelte approach due to inexperience with it, but good to see this myth challenged - react.js is fine but I worry there's been a cargo cult mentality around it, that it's The One True Modern Way To Do Web Apps, when really it's a tradeoff that involves some extra layers and performance baggage, and like any tool you need to weigh the pros and cons.
React and friends have made themselves show up at every party. Even if the model isn't remotely appropriate for the task at hand, you have to argue about them. I am convinced that half of modern frontend devs don't even know how a "classic" web app could work. SPAs, or frameworks usually used for SPAs, are the default state of things, and there isn't any competitive option with a real name except jQuery/Ajax/HTML. Even talking about the latter will brand you as a dinosaur in many circles.
You can look at my SPA that maintains state perfectly well and persistently without any framework. It isn't hard, but you would have to be willing to write original code.
> compared to a well written app that updates the DOM directly only when needed (which I find is easy to accomplish in most apps).
> Can't speak to the svelte approach due to inexperience with it
Heya! I've been using Svelte for the last week for a new project, knowing React has the lion's share of the community right now, but feeling like Svelte is where things are going to be.
Regarding: "compared to a well written app that updates the DOM directly only when needed" - exactly! Svelte actually does this for you. Given the following Svelte code:
age = 7;
That just updated anything bound to 'age' in the DOM. No set() or setState() or whatever. Or for an array:
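To complete the array case (a sketch from memory of Svelte 3's semantics, so treat the exact idiom as an assumption): Svelte's reactivity is triggered by assignments, so mutating methods like `push` need a reassignment to be picked up.

```javascript
// Assignment is what Svelte's compiler instruments, so either rebuild:
items = [...items, newItem];
// ...or mutate in place and then self-assign to signal the change:
items.push(newItem);
items = items;
```

Either way, anything bound to 'items' in the DOM updates, again with no set() or setState().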
Svelte definitely is compelling, but one thing I really like about using React or Vue.js are the very mature communities and in particular the full-featured styled widget libraries. Projects like Semantic UI React[1] or Buefy[2] (Bulma/Vue.js) give you so many basic components that would be major time sink to create yourself in every project. Does Svelte have anything like this?
Which sometimes is not bad at all. Ever tried to ctrl-click an interface element (a navigation button, a menu) because you want to open its view in a new window?
A lot of (admittedly badly coded) "modern" web apps ignore basic web idioms (like hyperlinks) and assume a single unique user workflow: the one its designer taught the app (and the only one they tested).
A nice example of this: with the new Reddit interface, only visible comments are rendered, so you can't use your browser search functionality to search all comments on the page, just the ones currently in the viewport.
Amen to this. The new Reddit interface is a step backwards in functionality due to this sorta stuff and effectively breaks user experience.
Just to give an idea how bad it is: loading the front page cold of new reddit = 9685KB. Loading the front page on the old reddit (also cold) = 737KB. The compute profile is literally half for old.reddit.com (new peak 16%, old peak 7.5% - both metrics core-distributed over an 8c system; 1s sample rate).
I LOVE all of this talk that front-end devs like to have on optimization/state stability. Talk to a browser automation expert, esp. one that does it at scale. Almost 100% of the time, older/simpler front-end tooling/development is faster and less error-prone. Older also takes a FRACTION of the compute/memory/proxy resources. It's been this way for years!!!
ducks for the inevitable storm of HN hate for having a strong opinion
Yeah, you're right. But then you get others now trying to say "No no no, you have it all wrong. We didn't really say VDOM was faster. You misunderstood."
I thought this was well known years ago. A better description of the VDOM would be "it's not fast, but it's not slow either".
But frankly, what I see in the virtual DOM is not about speed. It's a declarative interface, an abstraction. It's more like a blueprint that's easier to interpret across different environments like React Native or WebGL. Even if you don't need any of these cross-platform benefits, it's still good for testing -- without a real DOM.
As for performance, it could be an aspect of advertising but I doubt it really matters anymore.
I've seen many applications where AngularJS is too slow, and I even worked on one for quite a while; it was just a fairly typical "enterprise application". But I have yet to see a real-world front-end project where React is too slow.
Users won't even care whether it is 10ms or 30ms.
I work on one with React that is too slow in the browser with a team that only has senior devs, and users even filed bugs about the performance - we do heavy computations, and React's model of blocking rendering on having everything updated can freeze our UI for up to 10s while data comes in from various API requests. I believe our app would be performing much better for the end user if we were using Angular 2+ interestingly enough due to its built in incremental updating - there would be other tradeoffs though.
Part of the problem is not having good enough APIs currently (we have to make too many API requests and data payloads are too fat, sometimes up to 2 MB per request), but imperfect APIs tend to be the case in a lot of apps early in their lifecycle. I've actually been a bit disappointed in React's performance from a UX perspective.
This seems almost entirely unrelated to your framework though.
You are blocking rendering on IO. This can be greatly improved by offering
- Good loading indicators that stay consistent for related IO.
- Caching to speed up or remove the need for requests.
- Fetching related data in parallel.
- Prefetching data.
I don't understand what this has to do with React. You are sending 2MB of data to the frontend. Can you paginate it? Request a subset?
If you are fetching data from multiple endpoints and waiting to render anything until ALL of them come back, that has nothing to do with React. You can start rendering the components that already have their data as soon as it comes back, not wait for all of the other components to have fetched.
I'm not sure it's still an "interface", since it's compiling another language to imperative code which operates on the DOM API. So it requires that other platforms implement an imperative API surface similar to the DOM.
Which is the same thing a virtual DOM does: it also has to have an adapter (react-dom in React's case) which applies the VDOM to the real DOM via its imperative API.
> Virtual DOM is valuable because it allows you to build apps without thinking about state transitions, with performance that is generally good enough.
In other words, Virtual DOM is somewhat-valuable overhead. This is a cool alternative, seemingly sort of a compile-time version of Knockout. It's probably worth a try for writing an efficient client app, but I have a hunch that I'd miss the "HTML-in-JS(X)" pattern if I went back to using "JS(?)-in-HTML" instead. A VDOM runtime allows you to write plain JS that "just works", at least until certain parts need to run faster. This means junior programmers can pick it up and become productive quickly, and avoid driving their projects off a metaphorical cliff.
Of course this is bought with bandwidth and CPU overhead, lots of it in some cases. The call you should make when considering a VDOM is whether the safety and familiarity benefits are worth the overhead. If your team is experienced enough to take on a new DSL for rendering markup (which every template-binding tool really is) and meticulous enough to assign instead of mutate and avoid two-way binding pitfalls, go for it. If not, be careful.
This is not meant as a challenge. Personally I wouldn't want to work on a big application that is wholesale optimized in this way, unless there was no alternative. I wouldn't write my own game engine (if it was for a job) either.
I was running some quite complex UI systems in several of my projects with vanilla JS, cached DOM elements, all that jazz. Nothing like a VDOM. Then eventually it started to bog me down. Take rendering an inventory in an RPG system where you can buy stuff from the vendors: I started to get dissatisfied with change operations that lasted upwards of 2-3 ms on a bad day. Then I started caching even more DOM elements, and building local data structures to track what was rendered last, and to what, to avoid re-rendering everything and to optimize down to change-based rendering only.
A few weeks of this, and it dawned on me that had I needed to generalize my solution, I would have arrived at the exact same model that hyperscript uses for its diffing, and something similar to what's underneath React's diffing method (or Preact's, since I prefer that, but they share the API).
So yeah, virtual DOM is just a more clever and straightforward way to map your state to the DOM, identifying exactly where the changes happened and only updating those nodes, instead of doing any queries against the DOM API (costly, and able to trigger reflows, like when checking for bounding boxes, etc.).
It IS more useful, because you no longer need to maintain a hyper-specific update function per project and manually created/maintained diffing code.
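The core of the generalized diffing I converged on fits in a few lines; this is a toy sketch (no keys, props diffing, or components, which real libraries like Preact layer on top):

```javascript
// Build a tiny virtual node, then compare old and new trees and emit only
// the minimal patch operations, instead of a hand-written updater per view.
const h = (tag, props = {}, children = []) => ({ tag, props, children });

function diff(oldNode, newNode, path = 'root', patches = []) {
  if (oldNode === undefined) {
    patches.push({ op: 'create', path, node: newNode });
  } else if (newNode === undefined) {
    patches.push({ op: 'remove', path });
  } else if (typeof oldNode === 'string' || typeof newNode === 'string') {
    if (oldNode !== newNode) patches.push({ op: 'text', path, value: newNode });
  } else if (oldNode.tag !== newNode.tag) {
    patches.push({ op: 'replace', path, node: newNode });
  } else {
    const len = Math.max(oldNode.children.length, newNode.children.length);
    for (let i = 0; i < len; i++) {
      diff(oldNode.children[i], newNode.children[i], `${path}/${i}`, patches);
    }
  }
  return patches;
}

const before = h('ul', {}, [h('li', {}, ['one']), h('li', {}, ['two'])]);
const after = h('ul', {}, [h('li', {}, ['one']), h('li', {}, ['2'])]);
console.log(diff(before, after));
// [ { op: 'text', path: 'root/1/0', value: '2' } ]
```

Only the one changed text node produces a patch; an applier would then perform exactly that one DOM write, never querying the DOM to figure out what changed.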
Not to be grating but your problem sounds more like using the wrong tool for the job. DOM is not made to render video game UI, it is a bad tool to do so as you discovered yourself.
But even putting aside those games which use a webview for their UI, there are still video games made inside the browser, and for those, having sub-1ms re-render times with React is perfectly reasonable. We could get into immediate-mode vs retained-mode debates right about here, but that is a slightly different topic :D
2 things I'm not seeing in the article or in the comments so far:
1) The virtual DOM is an abstraction that allows rendering to multiple view implementations. The virtual DOM can be rendered to the browser, native phone UI, or to the desktop.
2) The virtual DOM can, and should, be built with immutable objects which enables very quick reference checks during the change detection cycle.
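Point 2 can be sketched in a few lines (names here are illustrative): with immutable updates, "did this subtree's data change?" collapses to a reference comparison, so whole branches can be skipped without inspecting their contents.

```javascript
// An immutable update returns a new object but shares every unchanged part.
function update(state, patch) {
  return { ...state, ...patch };
}

// Change detection then degenerates to a single === check.
const shouldRender = (prevProps, nextProps) => prevProps !== nextProps;

const s1 = { user: { name: 'Ada' }, items: [1, 2, 3] };
const s2 = update(s1, { items: [1, 2, 3, 4] });

console.log(shouldRender(s1.items, s2.items)); // true  -> re-render the list
console.log(shouldRender(s1.user, s2.user));   // false -> skip the user subtree
```

This is the check a shouldComponentUpdate / PureComponent-style guard performs; as the replies note, it speeds up deciding *whether* to re-render, not the diff itself.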
There are other ways to represent a UI as data that don't require a diff. JSX's default compiler output throws away information needed to do efficient updates, and instead requires diffing the entire old and new trees.
Immutable objects may optimize for checking for data changes, but only if you do that, as in shouldComponentUpdate or checking inside render(). They don't optimize the _diff_, which is done against the DOM.
On point #2, ClojureScript not only provides immutability out of the box, but also has libraries for replacing JSX with the same stuff everything else is built with. It's an insanely beautiful way to work with React.
1: there’s no reason at all why VDOM should be the abstraction over the multiple view implementations; there’s no need: it’s all duck typed, so make DOM (or at least the subset of it that Svelte will generate) the abstraction that other things must implement. I believe this is how Svelte Native works.
Furthermore, as a compiler, Svelte is well placed to render to multiple implementations, efficiently—though implementing it is likely to take more effort if you’re dealing with a different-shaped API. This is demonstrated by the fact that Svelte has two compilation targets at present. First, dom, which is designed for client-side DOM operation; and secondly, ssr, server-side rendering, which is based on emitting strings, and never constructs a DOM.
2: even if you can do things that way, you're still doing more work than is necessary, because you're calling the whole render method and performing even simple comparisons that just aren't needed. VDOM render methods are allocation-heavy, because they deliberately create new objects all over the place. In the words of the article, which I assert does deal with this, albeit obliquely: virtual DOM is pure overhead.
I keep hearing this and find it really hard to care about. Runtime performance is not a bottleneck for me. Once in a blue moon I'll have to optimize a React component with shouldComponentUpdate but otherwise I have no performance concerns even on old browsers.
There are other characteristics that are very, very important like build size. VDOM is not worth thinking about.
Honestly I don't understand Svelte. It sounds like it's very good at the things it does but the things it does are not the things I need.
If you value build size you might want to take another look at Svelte. Build size is one of its strengths. For example, the Svelte implementation of RealWorld is roughly 10% of the size of the React/MobX implementation:
I feel as if Svelte came at the wrong time. These days, when most people know either React or Vue or some other thing, and computing devices perform better over time, there are diminishing returns on performance optimization. Sure, you do a bit of it, and then you're often better off doing something else, like enhancing developer experience, for example.
I really like the idea, and will play around with it, but fat chance it's getting into production with me. I am much more productive with React now, and I worry more about business requirements than raw performance (that I almost never worry about these days).
> Sure, you do a bit of it, and then you're often better off doing something else, like enhancing developer experience, for example.
Aside from its performance optimizations, arguably one of the biggest selling points of Svelte is its developer experience (which is enabled by it being a compiler). See https://svelte.dev/blog/svelte-3-rethinking-reactivity. And because only the features used in your components get included in the final bundle, the framework is free to add nice extras (like its transitions/animations system) without negatively impacting apps that don't need those extras.
To my understanding, one of (or the singular) author of Svelte was a developer for the NY Times who wanted to easily create visualizations with thousands of data points, and the existing UI libraries weren't cutting it on performance. Depending on the types of applications you build day-to-day, this problem space might be niche. Your standard CRUD app (tables, forms, etc) would not be leveraging Svelete's capabilities; in a sense, it could be a pre-mature optimization.
The ideas behind Svelte are great. It reminds me of snabbdom thunks and Inferno blueprints. If you know the view code ahead of time, you can do plenty of perf optimizations, since you know exactly what changes and what to react to.
But sometimes I dynamically generate vdom nodes. Like markdown to vdom. There vdom shines. It’s a simple elegant idea.
I think svelte is exaggerating a bit.
React and vdom family of libraries are great. Svelte is great too. Not mutually exclusive.
Someone should write a Babel JSX transpiler that does Svelte-like compile-time optimizations for React, while still allowing dynamic runtime diffing when needed.
I do this with React all the time as well, though via ClojureScript and Re-frame[1], in which nodes are represented as plain Clojure data structures.
E.g., send an article from the server, formatted in EDN/Hiccup[2][3]. Insert it into a component in the frontend, and it's converted to VDOM nodes. No further logic or conversion required.
I just noticed that the Re-frame README is very amusing and wickedly hilarious in some parts. It is very refreshing to read compared to other bland/formal/technical READMEs... Maybe in the future I will try it and see if using Re-frame itself is as joyful as its README...
Once you get into it, it's very good. It wrinkled my brain at first though. It does include a bit more ceremony than plain Reagent, which means the app should be sufficiently advanced for the benefits to outweigh the overhead.
My rule of thumb is to start with Reagent, and as soon as I notice the desire to wrap and abstract away plain atom swaps, I switch over to re-frame.
Re-frame is largely an abstraction over atom swaps, and guaranteed to be better than the half-baked, 10% coverage of the Re-frame functionality that I would end up with if I did it myself.
Migrating from Reagent to Re-frame doesn't have to be done in one huge refactoring. It can be done by introducing re-frame into your app function by function.
We implemented a library at my company that does not use a virtual DOM, but instead captures reactive "change functions".
The framework captures dependencies between the reactive "change functions" and underlying variables, and executes the functions whenever a variable's value changes. You can also have dependencies between variables (like computed vars in Vue), and the lib works out the correct order for calculation and execution. All the change functions get queued up and applied in order before the next repaint.
Everything is component based, and there is even a nice kind of inheritance (with lazy, async component loading).
It works rather well. I'd be happy to share it with anyone that's interested! Not open sourcing it yet, as I envisage it would be a full time job to support it!
Edit: the upside of the change functions is that YOU decide how the DOM is updated. It's really quite cool to be able to implement a function like the following and have it run to update this.$dateOfBirth whenever this.data.dateOfBirth changes:
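A generic sketch of the automatic dependency tracking described above (the pattern popularized by Vue and MobX). The names `reactive` and `autorun` are invented for this illustration and are not this library's actual API:

```javascript
// While a change function runs, every variable it reads registers the
// function as a subscriber; writing a variable then re-runs exactly its
// subscribers — no diffing, no polling.
let activeFn = null;

function reactive(value) {
  const subscribers = new Set();
  return {
    get() {
      if (activeFn) subscribers.add(activeFn);  // capture the dependency
      return value;
    },
    set(next) {
      value = next;
      subscribers.forEach(fn => fn());          // re-run only dependents
    },
  };
}

function autorun(fn) {
  activeFn = fn;
  fn();               // first run records which variables it reads
  activeFn = null;
}

// `label` recomputes whenever `name` changes, and only then:
const name = reactive('Ada');
let label;
autorun(() => { label = `Hello, ${name.get()}`; });
name.set('Grace');    // label is now "Hello, Grace"
```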
Yes, it's a similar idea. The only difference I can see is that in our library, the dependencies are automatically tracked, and there is some additional clever scheduling around when a function should be run.
* Compared to doing static analysis and optimizing your UI updates at build time.
While I certainly agree that Svelte's approach may be the future, I think React and others are very much a needed stepping stone (especially when you consider all the work done transpiling JS code).
The virtual DOM was the most performant solution that applied generally to a large number of cases. The reason almost everyone did `x.innerHTML = html` is that it was the most general and widely available solution.
No, you don't need to do anything at build time! (You don't even have to have a build time.)
You can just… instantiate a template, remembering where the "holes" are, to get precise update functions for every data field that gets inserted into the template. This is what lit-html does, and it's such an obvious approach I'm really surprised that VDOM took off before it.
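A runnable sketch of that "remember the holes" idea, with plain objects standing in for DOM text nodes. The `instantiate` API is invented for this illustration; lit-html's real implementation works on `<template>` elements and comment markers:

```javascript
// Instantiate a template once, remembering a "hole" for each dynamic slot;
// later updates dirty-check one value per hole and touch only those spots —
// no tree diffing at all.
function instantiate(parts) {
  const nodes = parts.map(p => ({ text: p.static ?? '' }));
  const holes = [];
  parts.forEach((p, i) => {
    if (p.static === undefined) holes.push({ node: nodes[i], last: undefined });
  });
  return {
    nodes,
    update(values) {
      holes.forEach((h, i) => {
        if (h.last !== values[i]) {     // one cheap check per hole
          h.node.text = String(values[i]);
          h.last = values[i];
        }
      });
    },
  };
}

// The template `Hello, ${name}!` has one hole between two static parts:
const inst = instantiate([{ static: 'Hello, ' }, {}, { static: '!' }]);
inst.update(['world']);                 // writes only the hole's node
```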
Svelte's philosophy on turning the virtual DOM concept inside out sounds like it has merit, and is very promising. But it's going to take a lot more than that, in my opinion, before a large number of people consider switching from React, Ember, etc.
I don't see that as a drawback, I see it as an open opportunity for Svelte to keep building out on improvements other than the DOM updates, and catching up with everything else the SPA alternatives provide that have nothing to do with the virtual DOM.
For example, Ember is just a joy to work with, and makes it easy to rapidly prototype reactive frontends in a way that reminds me of Ruby on Rails's initial appeal to developer happiness, and the tooling is very mature. If you could unlock all those benefits while keeping the blazing fast DOM updates, oh boy!
Tooling is the key word here. Svelte simply doesn't have the tooling needed for any big project.
- testing/testability (unit-tests are easy, but what about functional, e2e?)
- strong-typing support (flow, typescript)
- good IDE support?
- i18n? ICU support, etc? They need to redo what ember-intl or react-intl do.
Without these things it's simply not viable to start bigger projects with new framework.
It is simply mind-boggling how much effort the JS community has put into working around the performance properties of a document layout engine to make it 'interactive' and 'responsive'.
With only 3 major rendering engines left standing, where is the concerted push to turn these document renderers into general-purpose, fast, desktop-quality rendering engines?
Back to Svelte vs. React vs. Reagent vs. Vue.JS vs. Angular vs. (insert framework-of-the-month-here)
One common theme seems to be: Run code to manipulate a tree-like data structure (DOM) efficiently.
This obviously needs to become: Submit data to the rendering engine to manipulate the tree-like structure.
(and in a way .innerHtml is doing that for a sub-tree, but is not suitable for general-purpose tree manipulation)
For an individual developer it is not obvious how you should structure your usage of the DOM API to avoid the performance potholes and other problems. Even a team of experts in the DOM will sometimes take shortcuts or one expert's technique doesn't play nicely with another's.
Some parts of the DOM are extremely slow if the API is used naturally, and obvious usage of the API has other side-effect penalties (e.g. stored state, difficult component destruction, or reference loops causing memory blowouts).
So the vast majority of developers use the native DOM in such a way that the page is slow and buggy.
React et al provide a clean API that avoids the worst performance problems, while providing a framework that steers a team towards good practices, so the average developer can be productive.
The framework has a bunch of extra overhead, but the overhead is far less than the average overhead of not using the framework.
I hope you read the original article, that clearly lays out the (non) optimizations being done in VDOM.
In any compute environment, doing more than necessary spins CPU cycles wastefully. I believe the optimizations you speak of try to limit this work to the least possible by delegating it to the framework. But this is surprisingly easy to achieve without frameworks; see [1] and [2]
This is why I find the flutter approach interesting: it has its own rendering engine. I would guess it is optimized for this sort of thing (dynamically rendering UI elements) without depending on an external entity (DOM).
Maybe there is a future where flutter style renderers become standard, have a container like a browser (to avoid the entire runtime baggage when it's deployed), and people target it instead of the DOM? This gives best of both worlds--write apps in a declarative way, without the need of any external "optimizing" framework.
>The danger of defaulting to doing unnecessary work, even if that work is trivial, is that your app will eventually succumb to 'death by a thousand cuts' with no clear bottleneck to aim at once it's time to optimise.
What a strange post. Yes, virtual DOM is overhead, much like JIT compilation is an "overhead". But this overhead ultimately translates to better performance because many virtual DOM transformations can be buffered into 1 transformation of the real DOM.
For a library like React, which re-renders the DOM tree every time a component's props or state change, a virtual DOM with diffing and patching is indeed a better approach compared to naively re-rendering the whole DOM.
But as Rich Harris said during his talk about Svelte v.3.0, whenever he hears claims about better performance of frameworks based on virtual DOM, illustrated with benchmarks, he runs the same benchmarks with Svelte (not based on virtual DOM), and inevitably gets better results.
>But as Rich Harris said during his talk about Svelte v.3.0, whenever he hears claims about better performance of frameworks based on virtual DOM, illustrated with benchmarks, he runs the same benchmarks with Svelte (not based on virtual DOM), and inevitably gets better results.
That isn't a fair comparison. Consider this analogy: React is the JVM and Svelte is Rust; JS before either is C. Now, it can be shown that in most cases C is faster than anything on the JVM, but in reality it wasn't, and that C code was riddled with bugs. The JIT'd JVM comes along and guarantees safer and more performant code. Then someone comes along, says the JIT is overhead, rewrites everything in Rust, and shows how fast the program is.
What's being ignored is the man-years, technology, and insights that made Rust possible. The fact of the matter is, (1) code like what Svelte generates was incredibly rare and difficult to achieve, and (2) Svelte is pretty radical in how the framework is implemented (at a high level, Svelte is essentially doing static analysis on your code to figure out where the DOM is being updated, serving a purpose akin to Rust's borrow checker).
> Now it can be shown that in most cases that C is faster than anything on JVM, but in reality it wasn't, and that C code was riddled with bugs.
Um, what? Your metaphor breaks down because it doesn’t really connect with reality. The operating system and the browser you typed that in are written in C/C++ because the performance hit of doing that in a language like java would be absurd. Practically every performance sensitive application is still written in C/C++ (games, productivity tools, desktop apps, etc). Rust is, outside of places like HN, just an interesting novelty to like 99.999% of the software industry, and your average java app is glue code between a gui and a database.
I had a feeling it was a poor analogy after I submitted it. I'll leave it up, but the analogy doesn't properly convey the point I was trying to get across. This isn't a discussion about Java's relative performance with C++.
It's more that the performance gains of properly optimizing your DOM modifications aren't generally achievable with static analysis like Svelte employs.
Better performance than completely recreating the DOM, sure. But all the time spent constructing and diffing the virtual DOM is pure overhead compared to simply doing that 1 real transformation directly.
Direct manipulations of the DOM are expensive. It is vastly cheaper to create or update a JS object than to create or manipulate a DOM node. So the claim that the virtual DOM is always overhead is not true. The diff algorithm can produce a set of DOM operations that is less expensive than a typical sequence of manual mutations. So a virtual DOM can be faster if the savings from fewer DOM operations outweigh the extra JS work.
Surely carefully crafted direct DOM mutations will be the fastest approach, but it typically leads to hard to maintain code.
I'm not sure you understood the article. The Svelte compiler does in fact generate code that performs "carefully crafted direct DOM mutations," though it is not hard to maintain, because the compiler handles it. Given code that already knows exactly which DOM updates to make, virtual DOM would indeed be pure overhead.
Typical React code does not know which updates to make. It always builds the virtual DOM from scratch, as if it were the first time. It is the diff algorithm that then figures out the set of changes.
If code knows which updates to make, it essentially embeds a particular form of the diff algorithm. That inevitably leads to more code to write, as besides the initial construction of the DOM one has to track changes. And such manual tracking is not necessarily optimal, as the diff algorithm has a global picture and can target the globally optimal set of mutations, while manual in-component tracking optimizes for a particular component.
> If code knows which updates to make, it essentially embeds a particular form of the diff algorithm.
Having an alternative means of identifying where to make targeted updates on data changes (i.e., via static analysis during a compilation step) is not the same thing as embedding "a particular form of the diff algorithm" (which would be a runtime operation). Svelte does not produce anything comparable to a diff algorithm.
> That inevitably leads to more code to write as besides the initial construction of DOM one has to track changes.
With Svelte, there is actually less code to write, as the compiler handles generation of the change tracking code. And even the generated code is minimal and generally leads to much smaller bundles, as no runtime gets shipped with the code. This leads to faster startup times, which is often the real performance bottleneck for SPAs.
> And such manual tracking is not necessarily optimal, as the diff algorithm has a global picture and can target the globally optimal set of mutations, while manual in-component tracking optimizes for a particular component.
Can you offer an example where a "globally optimal set of mutations" would be different from the set of mutations Svelte would make on a given change?
The author and others did not advocate manual DOM manipulation. He wrote a framework that can generate optimal DOM manipulation code without the need for a virtual DOM.
Personally, I find modern template based approaches like lit-html, hyperhtml/lighterhtml better and faster. And also being far, far smaller. Throw in a CSS framework like bulma or tailwind-css and you are good to go at a smaller footprint and better performance.
The main reason templates are bad is not that they need a build step; it's more about being non-standard (a problem lit-html doesn't have) and not being statically analyzable (think TypeScript's JSX), which lit-html doesn't solve.
Correct me if I'm mistaken since the motivation of the project isn't that clear on the home page: the main thing that makes lit-html special is the fact that it's just standard javascript.
This won't appeal to TypeScript developers because they already have JSX built into the compiler. They can use lit-html and have their templates type-checked too, but it requires extra setup. What's the benefit here? That they can use class= and for=?
lit-html isn't _just_ about using standard JavaScript syntax. It's also about being more efficient than VDOM, very small, and having great HTML integration (you can set any property, attribute, or listen to any event, unlike with React).
Yes, you need a plugin for type checking with TypeScript. That doesn't seem to be a huge problem for users so far.
I appreciate that it's more efficient than VDOM, if that's true and continues to be true in larger-scale apps. I don't see "can set any property, attribute, or listen to any event" as a benefit, though. To me it's such an unpleasant way of writing code to have to keep a map of all these event and prop bindings in your brain.
If you need events, Rx is much better to work with. Probably the one place React falls short of HTML elements is the lack of attributes; this causes a lot of defaultValue- and shouldComponentUpdate-style complications. However, if you really think about it, all props should just be immutable (attributes), and if you want some to be updated or to emit events, just make them Observable and Observer accordingly.
Web Templates support slots and are analyzed at definition time. All this is native code supported by the browser, so big, heavy JS doesn't need to be sent across the network and then run even more slowly in order to produce HTML. All web-template-based libraries beat React/Angular/any VDOM-based alternative in performance and footprint.
"...support slots and are analyzed at definition time"
I don't know what you meant by this. As we speak, is there a standard implementation of this "Web Templates" thing? Can I open an editor, type "native" html, incorrectly assign to some attribute some callback with the wrong type and the editor tells me it's wrong without running the website?
As for "ALL web-template based libraries beat React...", is there evidence to substantiate such claim? Surely SOME of them are not that performant.
Also slots exist in most frameworks including Angular and in various 3rd packages in the React ecosystem.
Only ever having used MFC and Swing, this seems odd to me. A diff of the entire DOM on every state change? You never see anything like that in native toolkits. ELI5: What problem is that solving?
The problem it's solving is the DOM being unsuited to writing the kind of applications that MFC and Swing are used for writing. (which, for the web, is a SPA)
Isn't this pointing to more of a fundamental problem with how browsers/HTML/the document model are designed and implemented than anything else?
HTML was originally a document format, not an app framework, and I think all of these frameworks (large and small) are just workarounds, when what we really need is a fundamental re-imagining of what the browser should/could be.
Sometimes I wonder if Alan Kay was right when he said the browser should have been basically just a byte-code interpreter[1]...
> The original promise of React was that you could re-render your entire app on every single state change without worrying about performance. In practice, I don't think that's turned out to be accurate. If it was, there'd be no need for optimisations like shouldComponentUpdate (which is a way of telling React when it can safely skip a component).
It's shouldComponentUpdate(), not shouldDOMUpdate(). Even if DOM operations are direct, or the virtual DOM is infinitely fast, there are plenty of situations where you want to avoid running application code on every update.
Some frameworks use data binding to track whether a component update is necessary. This is what Svelte does, but because there are no explicit checks, it has some weird conventions around annotating certain bound values:
<script>
export let num;
$: squared = num * num;
</script>
React just happens to implement this behaviour different: it assumes a component needs updating unless the shouldComponentUpdate() hook says otherwise. The advantage (ironically) is that React is "just JavaScript", whereas Svelte needs a compiler that can instrument the code.
This design decision shouldn't be confusing to the author; I assume he would have made this design decision consciously?
A React component can do arbitrary, computationally expensive, logic inside components. React gives you the ability to memoize that work. Without shouldComponentUpdate developers would need to hand roll a caching layer or stick computed values in a store.
With the current release, you can use a hook such as `useMemo`, which will memoize the value for the entire lifecycle of your component. I've been writing React for around 3 years now and have never had to use shouldComponentUpdate; instead I compute whatever is needed either on mount or when props change. Curious what prompted the case you mentioned?
> Here, we're generating a new array of virtual <li> elements — each with their own inline event handler — on every state change, regardless of whether props.items has changed. Unless you're unhealthily obsessed with performance, you're not going to optimise that.
React makes it pretty trivial to prevent rerenders when props have not changed
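The mechanism behind `React.memo` and friends is just a shallow compare of the previous and next props. Stripped of React itself, the idea can be sketched in plain JavaScript (`memoRender` is an invented helper for this illustration, not a React API):

```javascript
// Remember the last props and output; skip re-rendering when a shallow
// compare of the props passes. This is the core of React.memo's behavior.
function memoRender(render) {
  let lastProps, lastOutput, primed = false;
  return props => {
    const keys = Object.keys(props);
    const same = primed
      && keys.length === Object.keys(lastProps).length
      && keys.every(k => props[k] === lastProps[k]);
    if (!same) {
      lastOutput = render(props);
      lastProps = props;
      primed = true;
    }
    return lastOutput;
  };
}

let renders = 0;
const view = memoRender(({ items }) => { renders++; return items.join(','); });

const items = ['a', 'b'];
view({ items });   // renders once
view({ items });   // same `items` reference → render skipped
```

Note that the skip only works if callers keep prop references stable, which is exactly why a fresh array or inline handler on every render defeats the optimization.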
This is why I use Aurelia. It's a Javascript framework many here have probably never heard of or used, it debuted in 2015 and I have been working with it for four years now. Sadly Aurelia debuted at the height of the React hype and soon after, Vue hype.
Rob Eisenberg (the man in charge of the Aurelia project) had the right idea straight out of the gate. A reactive binding and observation system that worked like a virtual DOM (isolated specific non-destructive DOM operations) without the need for an actual virtual DOM. Which allows you to use any third-party library without worrying about compatibility or timing issues with the UI.
This is one area where React falters, at least when I used it: third-party libraries clashed with the virtual DOM. When you start introducing abstractions to solve imaginary problems caused by improperly written code (the myth of the DOM being slow), you introduce issues you have to battle later on as your application scales.
The default behavior for javascript interacting with the DOM is incredibly slow once the page gets complicated enough. I've certainly seen it first-hand. This may not be a problem you have, and indeed maybe not everybody needs react. But the problems things like react/vue/whatever solve (correctly or not) isn't imaginary.
If you like the React documentation with real code examples, you'll love the Svelte tutorial with both code examples and a live playground. The UI is beautiful, too:
I have never used modern JS frameworks like Angular, React and Vue and I have always assumed (hoped) that they contained optimisations that you would be unlikely to use in your vanilla JS code even though you could. Something like FastDOM which batches read/write operations to avoid unnecessary reflows. Do they contain anything like that?
To varying degrees depending on the library, I believe the answer is "yes, they sometimes do". Angular and Ember at least have systems for batching user interactions, which translate into model updates, and thus potentially DOM updates. I believe the respective systems for handling this are called Zone and Backburner for Angular and Ember, but I've been out of touch with those projects for a couple of years.
There's definitely a trade-off, however. They make a huge difference for noisy events (like mouse move, scrolling, dragging, etc), but tend to make debugging much harder in my experience. When things go wrong, the stack traces nest deeply into the event handling systems and code paths no longer resemble the relatively straightforward world of traditional event calls, where a callback handler is invoked directly in response to a single event.
I'm not sure if this is exactly what you mean, but Ember.js has a concept called a "runloop" which batches different actions into queues, which does seem to help with rendering/reflows.
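The read/write batching idea mentioned above (as in FastDOM or these run loops) can be sketched framework-free. `measure`, `mutate`, and `flush` are invented names for this sketch:

```javascript
// Queue layout reads and DOM writes separately, then flush all reads before
// all writes, so the browser never interleaves a layout-forcing read with an
// invalidating write (the pattern that causes repeated reflows).
const reads = [];
const writes = [];

function measure(fn) { reads.push(fn); }
function mutate(fn) { writes.push(fn); }

function flush() {
  reads.splice(0).forEach(fn => fn());   // all reads first: at most one layout
  writes.splice(0).forEach(fn => fn());  // then all writes
}

// In a browser, flush() would be scheduled via requestAnimationFrame; here
// we call it directly to show the reordering:
const order = [];
mutate(() => order.push('write'));
measure(() => order.push('read'));
flush();                                 // runs the read before the write
```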
Seems like just an ad for Svelte, and fairly FUD-ish at that. React will warn you by default when doing that exact example, asking you to provide a key on the `<li>` to avoid re-rendering it unnecessarily. Instead, this states: "Unless you're unhealthily obsessed with performance, you're not going to optimise that."
This is what I use when I need to do very minimalistic UI and feel that bringing Vue/React is an overkill https://github.com/adamhaile/S - reactivity without DOM superglue to it.
I think most people in this thread focus too much on the vdom, and miss the important part of the post:
> Unlike traditional UI frameworks, Svelte is a compiler that knows at build time how things could change in your app, rather than waiting to do the work at run time.
Virtual DOM is a shaky long term bet since it's essentially betting that the cost of DOM operations will always be high enough to justify all the work you're doing (both at runtime and at trying-to-figure-out-how-to-write-this-code time). When it's easy to do your virtualization, you can just go 'well this is an optimization i can remove at any point', but if it's suddenly so complex it introduces bugs, you're in trouble.
Naturally, the cost of DOM operations didn't go unnoticed and while people have been going in on virtual dom solutions like React, the devs of Firefox, Chrome and Safari have all been aggressively optimizing the dom - making the native code bits faster and moving more of the DOM into javascript so all your JS can get inlined and optimized. It gets harder and harder for libraries to compete with regular DOM as a result.
Well, you could also potentially have a future where the VDOM is the primary DOM itself.
As in, the VDom doesn’t need to be converted to HTML for the browser to render, but rather, the browser directly converts the VDom objects into Pixels.
react-dom (https://github.com/facebook/react/tree/master/packages/react...) is shipped as a dep of react, and seems to be where all the heavy lifting is. I believe that, should DOM ops become fast enough that you don't need vDOM anymore, you could easily no-op all of this out with direct DOM calls.
Surely JS engines are also optimising React-style code. e.g. the article says that react style code does a lot of unnecessary object creation (e.g. map) but if that is now a common pattern then JS engines can do a lot to optimise that away.
DOM operations (node insertions, deletions) are trivial. It's the CSS reflow/relayout that's expensive. Though that can be trivially solved by preparing a shadow DOM off-screen, then finally replacing the changed fragment by the newly build-up one. DOM diffing is just a convenient method to do it.
Ew, not really. Even the getters can be performance-heavy in DOM-land. If you store your data in JS-land and only diff against that, you never need to touch the DOM for anything except the endpoint insertions. At worst, it is matched evenly by some custom vanilla-JS DOM manipulation tool (which needs to be perfectly written and hand-crafted to match what you are currently working on). At best, it's several orders of magnitude faster than badly written DOM-manipulation-heavy code.
The idea that the DOM is slow and that v-DOM makes it faster is completely ridiculous. V-DOM can only optimize user land code. DOM manipulation will always be at the mercy of the browser implementation.
Yeah, it is a better approach than badly optimized jQuery style code, no shit, but it's not really the fastest approach either as others like Svelte and Imba have demonstrated.
You overestimate the cost of modern DOM operations. It's a battle between your virtual DOM (to optimize DOM operations without lots of diffing overhead) and the native DOM (which is frequently a bunch of JS that can get inlined and avoid duplicate work)
No I do not. You will have to work very hard to back up the claim that modern DOM operations are not costly. Plenty of lookups can cause repaint or reflow.
Scenario A: (Vdom-like approach)
- Render x with dimensions x,y,z,w.
- Store the reference to the created DOM elements.
- Store dimensions it was created with
If you do not go this way, and actually query DOM-land to get the dimensions you need to check against, you will run into issues. By storing your state alongside the endpoint, you only have to diff against the stored state and props, not the endpoints themselves.
That COMPLETELY bypasses DOM access, and only updates the changed elements.
Show me how bypassing this and just using dom alone is superior. Or how adding layers of layers of js-land checks is not approaching the solution that vdom and differs already arrived at.
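The scenario above, as a runnable sketch: cache the dimensions each element was last rendered with next to its reference, and diff against that cache instead of querying the DOM (which can force reflow). `applyStyles` is a stand-in for real `element.style` writes so the sketch runs anywhere:

```javascript
// Keep last-rendered state beside the element reference; only touch the
// "DOM" when a cached field actually differs from the incoming one.
function makeBox(applyStyles) {
  let cached = null;   // dimensions we last rendered with
  return {
    writes: 0,
    render(dims) {
      const changed = !cached ||
        ['x', 'y', 'z', 'w'].some(k => cached[k] !== dims[k]);
      if (changed) {
        applyStyles(dims);     // the only point where the DOM is touched
        cached = { ...dims };
        this.writes++;
      }
    },
  };
}

const box = makeBox(() => {});              // no-op stand-in for style writes
box.render({ x: 0, y: 0, z: 10, w: 10 });
box.render({ x: 0, y: 0, z: 10, w: 10 });   // cache hit: no DOM write
box.render({ x: 5, y: 0, z: 10, w: 10 });   // one field changed: one write
```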
That's the cost of the operation you're performing, it has nothing to do with whether you're using the DOM or not. If you want the size of the element, someone has to perform layout - either the browser engine, or your virtual DOM.
The dimensions it was created with are in attributes and CSS. You can check those without causing layout. If you want to check the layout result, you have to perform layout. Virtual DOMs are not magic.
I've worked on browser engines. You misunderstand why parts of the DOM are and aren't slow, even if you understand Virtual DOMs. Property accessors (what was the 'width' attribute set to?) are easy to optimize and most of the obvious slow ones have been optimized by self-hosting to the point that they can be inlined.
Writing some basic apps with vanilla JS is fine, but I think we use browsers for much more than that these days. For that we do need this level of overhead, because the DOM is just too simple for the task.
Seems plausible to me, but the examples don't work on mobile (iPhone, iOS 12.3, Safari), nor is the layout mobile-friendly, at least when rendered in portrait mode on my phone.
Hello, very great article. I've been working as a developer for over 20 years now, and I have to say that the more experience you have, the fewer trains you jump on. That's not to say I don't look at frameworks anymore, but to put it plainly: it's often just too much of a good thing. Every framework has its good sides, but you should keep it simple. Ciao.
People were upset about being FORCED to use TypeScript. It was hard for some people to accept the two being bundled together. Not to mention this was back when TypeScript had even less support and there was a chance it would end up like CoffeeScript. Hell, TypeScript adoption is still pretty low according to metrics like the TIOBE index (although that doesn't mean much).
lit-html essentially grabs references to elements automatically when parsing a template. You get React-like rendering without any diffing! And it doesn't require build tools.
When everything is event-based, you know. You can even trivially record the entire call stack on an object attached to the DOM element when your framework is in debug mode. You can see the whole history of call stacks if you so choose.