The idea behind the virtual DOM is by now largely a myth. It stems from the time when Facebook was still a web app running on IE8 and IE6.
It was a decent idea to optimize for those browsers and for FB's nature of featuring hundreds of comments etc. that could be updated somewhere outside your viewport and nevertheless cascade into repaints and reflows. Remember the FB feed?
Also, the Google Sheets results show, overall, that the DOM is faster than any virtual DOM, so why produce the overhead? https://millionjs.org/benchmarks
The virtual DOM is one of those things where people long ago forgot why it was invented and what technology it was invented for. Then came Google Chrome and its optimizations.
The VDom was never intended as a performance optimization over direct dom manipulation.
It is a tool to make functional reactive programming style _almost_ as performant as direct dom manipulation.
The comparison that's relevant here is, how long does it take to repaint the view if I recompute the view as a function of my state every time my state changes?
Compared to rebuilding the DOM tree on every change, using a vdom and diffing offers a huge speedup.
Using jquery/vanilla js to update the DOM in an ad-hoc fashion in response to user input (when the user clicks "next", add a class to this DOM element, remove this other element from this other random place...) has always had faster runtime; it's just more likely to be coded wrong.
FRP is an abstraction that makes writing and reasoning about correct UIs easier. Vdom is a technique that makes this abstraction practical.
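To make that concrete, here is a toy sketch of the pattern (deliberately naive, and not how React or any real library implements it): the view is a pure function of state that returns plain objects, and a diff/patch step turns two snapshots into a small set of DOM mutations.

    // h() builds plain-object "virtual" nodes; create() realizes them as real DOM.
    const h = (tag, props, ...children) => ({ tag, props: props || {}, children });

    function create(vnode) {
      if (typeof vnode === 'string') return document.createTextNode(vnode);
      const el = document.createElement(vnode.tag);
      for (const [k, v] of Object.entries(vnode.props)) el.setAttribute(k, String(v));
      vnode.children.forEach((child) => el.appendChild(create(child)));
      return el;
    }

    // patch() compares two snapshots and only touches the DOM where they differ.
    function patch(parent, el, oldV, newV) {
      if (typeof oldV === 'string' || typeof newV === 'string' || oldV.tag !== newV.tag) {
        if (oldV !== newV) parent.replaceChild(create(newV), el);
        return;
      }
      for (const [k, v] of Object.entries(newV.props)) {
        if (oldV.props[k] !== v) el.setAttribute(k, String(v));
      }
      // children: naive index-based diff (keyed diffing is where real libraries earn their keep)
      const shared = Math.min(oldV.children.length, newV.children.length);
      for (let i = 0; i < shared; i++) patch(el, el.childNodes[i], oldV.children[i], newV.children[i]);
      for (let i = shared; i < newV.children.length; i++) el.appendChild(create(newV.children[i]));
      while (el.childNodes.length > newV.children.length) el.removeChild(el.lastChild);
    }

    // The view is a pure function of state; re-render on every change and let patch() do the rest.
    const view = (state) => h('div', null, h('p', null, 'Count: ' + state.count));
    let current = view({ count: 0 });
    const root = document.body.appendChild(create(current));
    const rerender = (state) => { const next = view(state); patch(document.body, root, current, next); current = next; };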
There are reasons more fundamental than speed why you can't just blow away the DOM and recreate it all the time. You'd lose things like the user's selection, the contents of forms, etc.
The link in the previous comment explains why the virtual DOM is overhead, and you haven’t responded to that. Svelte and other current gen compiler-based approaches allow FRP and do not require a virtual DOM.
> The VDom was never intended as a performance optimization over direct dom manipulation.
Performance was definitely its primary purpose.
Remember Flux? THAT was the architecture that introduced FRP to the masses (based on the Elm architecture). The VDom was invented because large DOM trees were too slow when you got to thousands (maybe tens of thousands, I forget) of nodes. Which is hilarious if you think about it... somehow the C++ DOM update loop was slower than someone clever doing it in single-threaded JS.
> Using jquery/vanilla js to update the DOM in an ad-hoc fashion in response to user input [...] has always had faster runtime
This is very incorrect.
> it's just more likely to be coded wrong.
While not outright wrong, I find this somewhat debatable. React is pretty complicated these days and it's quite easy to get tangled up if you're not conscious of what you're doing. FRP isn't a silver bullet.
The point of virtual DOM is correctness, not performance. The reason performance is so often discussed in conjunction with virtual DOM is exactly because it involves so much overhead, making performance optimization much more important. Carefully handcrafting the minimum number of DOM manipulations for a given state change makes it possible to run much less code than virtual DOM needs, but it’s a nightmare to work with.
It doesn’t become true just because you keep repeating it. The point was always to allow for an API where you only had to think about how to render one snapshot, and not care about what the previous snapshot looked like. Virtual DOM made that possible, but it was never more performant than just writing the resulting DOM manipulations manually.
Then let an old fogey make it clearer: browsers repaint/recalculate if you read after a write. This required batching writes separately from reads if you wanted decent performance on older browsers, and especially on older machines.
Reading an element property, adding/updating a nearby element, and then reading another element's property took FOREVER. Enter the virtual DOM. Since it did not engage the actual rendering engine, the reads and writes did not trigger reflow. At the end of the code segment, the actual DOM actions became effectively write-only. Even though the virtual DOM was slower per access than the actual DOM, the end result was a MASSIVE speed up.
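For anyone who never had to live through it, the bad pattern looked roughly like this (a sketch, not any particular framework's code; the `.row` selector is made up):

    // Interleaved reads and writes: each read after a write forces the engine
    // to recalculate layout, so this loop thrashes on every iteration.
    for (const row of document.querySelectorAll('.row')) {
      const height = row.offsetHeight;            // read -> forces layout
      row.style.height = (height + 10) + 'px';    // write -> invalidates layout
    }

    // Batched: all reads first, then all writes, so layout is recalculated once.
    const rows = Array.from(document.querySelectorAll('.row'));
    const heights = rows.map((row) => row.offsetHeight);                          // reads
    rows.forEach((row, i) => { row.style.height = (heights[i] + 10) + 'px'; });   // writes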
This message brought to you by someone who honed their skills for a decade to batch their reads and writes in vanilla JS only to have those new-fangled frameworks take care of it (and data binding) for you. Jerks.
So what you're saying is that at the granularity of a single tick a VDom increases performance significantly due to not having to wait for the browser to recompute the DOM after writes... correct? It effectively batches writes, and thus the need for the renderer to get involved, which increases read throughput because reads block till after the DOM was recomputed. And the DOM is recomputed on every write that's followed by a read.
Makes a lot of sense, thanks for the input; I was completely unaware of this case. Any idea if this is still the case? Do you happen to remember what browsers and/or hardware that saw dramatic improvements (CPU gen would be great)? I'm thinking of doing some deeper perf investigation/spelunking on the subject to satisfy my curiosity. I remember things one way but a lot of people seem to think the opposite..
The view is recalculated/re-rendered/repainted. The DOM is the single-threaded-access data structure that the rendering engine ties into.
Part of the browser API is querying all current CSS properties of an element, e.g., getComputedStyle(…). The only way to get this is by having the layout engine do all its work, so properties like height and width can return accurate info.
Most virtual DOM implementations just skip parts of the API like this. At best, they make an educated guess without hitting the actual renderer. Or they just allow a pass through to getComputedStyle(…) and warn you away from using it due to performance concerns.
It's all smoke and mirrors on top of a bed of lies.
How big a role does React's ecosystem (libraries, plugins, developers) play, compared to the aforementioned (debated) performance/stability/security etc., in someone choosing it over Svelte?
> The VDom was never intended as a performance optimization over direct dom manipulation.
As written, after a decade of fanboys making this exact claim, this comes across as untrue. I think your point that it's an optimization for a particular style is correct, but opening with this encourages knowledgeable readers to skip your comment.
> how long does it take to repaint the view if I recompute the view as a function of my state every time my state changes?
you don't need vdom to make this fast.
you compute your state in whatever data structure you like and then update the DOM from a requestAnimationFrame callback, if and where it is needed.
VDom is much like double buffering in video games, it was useful when video memory was scarce and access times were slow, but it has lost much of the appeal on modern HW.
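Something like this, as a rough sketch (the element IDs and the setState helper are made up for illustration):

    // Coalesce state changes and flush them to the real DOM once per frame.
    let state = { count: 0 };
    let frameQueued = false;

    function setState(partial) {
      Object.assign(state, partial);
      if (frameQueued) return;                 // already scheduled for this frame
      frameQueued = true;
      requestAnimationFrame(() => {
        frameQueued = false;
        // touch only the nodes that depend on the changed state
        document.getElementById('count').textContent = String(state.count);
      });
    }

    document.getElementById('inc').addEventListener('click', () => setState({ count: state.count + 1 }));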
> VDom is much like double buffering in video games, it was useful when video memory was scarce and access times were slow, but it has lost much of the appeal on modern HW.
Double buffering still has huge appeal on modern hardware - tearing. At 144Hz/fps a single frame is ~7ms, while at 30 a single frame is 33.3ms - the margin for error is significantly lower. At high refresh rates, I'd rather take an extra frame of latency than tearing.
browsers have been doing smooth 60fps rendering for at least 5 years now (you'll find stackoverflow threads about it dated 2015), and in 2022 they can do 144fps on much more powerful, better-supported HW-accelerated devices.
or to make it clearer: VDom is unnecessary because the browser engines have adopted optimizations that made it obsolete, just like many rendering techniques are not used anymore because the HW implements them in a more efficient manner.
For fast rendering speed in browsers, support for HW acceleration has done more than anything else combined.
Double buffering is still widely used by native code (e.g. inside the browser, inside video games, inside native toolkits). Ergo, "VDOM is obsolete the way that double buffering is" makes no sense.
OTOH, "double buffering from code running in the browser is now obsolete" may make perfect sense, following changes in how the browser itself interacts with the HW.
I think we're simply lost in translation; I'm not a native speaker and sometimes I don't make myself clear.
AFAIK (I haven't worked on a game in years), double buffering, when I first heard of it, was a way to overcome a hardware limitation. We had graphics/video cards (EGA was the standard when I started programming), GPUs did not exist, memory was limited, and data transfers were slow. Double buffering meant keeping two in-memory buffers, the active one and the next one being built in the background, which were alternately sent to the graphics card.
Then VGA added page flipping: you could write two buffers into the graphics card's memory and instruct the card to swap the active page by flipping a bit in a register during vsync, then write the next frame into the inactive page.
From then on things have improved exponentially, to the point that now GPUs can buffer multiple high-res frames, so while frame N is being displayed, the CPU can prepare frame N+1 or N+2 or even N+3 on some GPUs. The framework configures the GPU to automatically swap frames (usually in a FIFO queue) on vsync.
I think in Vulkan this workflow is called swapchain.
HW implemented what was previously possible only in SW. Double buffering is still in use of course (habits die hard), but the issue it solved is not remotely as bad as it used to be.
VDom followed the same path, it was invented to overcome a browser's limitation: DOM access was painfully slow, especially on legacy browsers like IE.
Now they are fast enough that the VDom, even though technically still in wide use, is not a hard requirement for fast DOM updates like it was 10 years ago.
It has stuck around in many frameworks, IMO, because you never know what HW/SW combination your users are running: backward compatibility, "if it ain't broke, don't fix it".
> From then on things have improved exponentially, to the point that now GPUs can buffer multiple high-res frames, so while frame N is being displayed, the CPU can prepare frame N+1 or N+2 or even N+3 on some GPUs.
Not sure why you'd call this an "exponential improvement". Using more than 2 buffers increases display latency, which for most (not all) purposes is undesirable. Double buffering (that is, just using an active/inactive buffer) is almost always the best thing to do, regardless of where the memory is located or who is responsible for the buffer swap.
> VDom is much like double buffering in video games, it was useful when video memory was scarce and access times were slow, but it has lost much of the appeal on modern HW.
It's funny to read this when Google Chrome has constant tearing on Windows.
Haven't used Chrome in years, but AFAIK screen tearing is caused by incorrect vsync settings. disabling HW acceleration can fix it because software rendering is synced to the screen refresh rate.
if you render your web app in an animationFrame within the allotted time (usually 1/60 of a second at 60Hz) you won't notice any flickering or tearing even without double buffering.
modern HW (screens and GPU) use complete frames, so flickering and tearing are not an issue if the writes are vsynced.
Vsync needs double buffering. You swap the buffers during vsync to kill tearing. You will have tearing whenever you update the currently scanned-out buffer (except when you do stuff like racing the beam).
Plenty of libraries have functional components and a react-like experience without the need for a virtual dom. Lack of vdom does not necessitate imperative dom usage.
It’s the same observation TDD makes. If you can figure out what a sane interface looks like, it doesn’t matter how it’s implemented underneath. You might even want to change your mind for performance or new feature purposes.
I don't really understand what your point is. Isn't the 'DOM is slow' thing about comparing to e.g.
- creating actual DOM elements during render and diffing that with the DOM, or
- using the existing DOM as the thing being diffed against your new vdom (slow because calling into the browser many times may be slow and you need to check eg there aren’t attributes to be removed from your elements)
Perhaps it’s also necessary to mention that the main point of vdom was to allow react to offer the API it does performantly but maybe with compiler-based solutions like svelte that isn’t necessary. Lots of people still use react, however.
Look at their benchmarks. They compare against innerHTML and against (I assume) inserting new DOM nodes. Million is slightly faster than the former, slightly slower than the latter. DOM manipulations are fast these days
Wow, I'm surprised at how fast that is now - although the benchmarks themselves are doing bulk ops for all the innerHTML stuff, so it will lose some UI state (cursor, etc).
TBH, looking at the benchmark code for partial updates, I'm not sure I have much faith in the benchmarks overall, but it's certainly eye-opening regarding the innerHTML performance.
Yeah, I was surprised at the numbers too. I'm wondering if those numbers actually imply that reconstructing the DOM from scratch on every update is now faster than using a VDom?
InfernoJS uses a vdom and is faster than Svelte in the js-framework-benchmark.
Even if we set that aside, the differences between something like Svelte and React are still likely to be far less than the differences you'll gain from changing to a more efficient algorithm.
Over a certain size, some level of abstraction will usually produce a meaningful reduction in code size in most applications. It's possible for some applications a virtual DOM is the abstraction to achieve that, although I don't think I've seen this in practice. It would require an absolute mountain of "document.createElement() / appendChild() / setAttribute()" code before the lighter syntax of a virtual DOM would net a meaningful size reduction with a modern compressor like Brotli.
Just saying there are probably situations where these might still offer a genuine functional benefit, simply by virtue of offering abstraction.
There are a bunch of replies here focusing on how nice virtual DOMs can be to use; I would gently encourage those folks to find another industry to ruin.
Very few apps that use React require sustained, fast DOM changes for a long period. Most web apps sit idle 99% of the time, and then need to manipulate a handful of DOM elements based on a user action - the user clicks a button or types in a box and a few things around the page update. That means the framework needs to make 4 or 5 DOM changes in one frame (about 10ms including the browser's overhead to render things.) The fact that this is slow in some web apps is beyond disappointing. They're doing practically nothing and they still feel terrible.
This is where I think React is getting things right. The work that's been done on React 18 attempts to work out what events are most important, and schedules the DOM changes from those interactions ahead of changes from other events. It batches the changes over a number of frames if there's a lot of them. This means that a UI made with React will probably be slower than other frameworks, but it'll feel faster. The changes that result from your interaction happen first. That's what users want.
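Concretely, that scheduling shows up in the React 18 API as urgency hints like startTransition. A minimal sketch (filterItems here is a made-up stand-in for any expensive recomputation):

    import { useState, startTransition } from 'react';

    // Made-up helper standing in for any expensive derived computation.
    const filterItems = (items, query) => items.filter((item) => item.includes(query));

    function SearchBox({ items }) {
      const [query, setQuery] = useState('');
      const [results, setResults] = useState(items);

      function onChange(e) {
        const next = e.target.value;
        setQuery(next);                       // urgent: keeps the text box responsive
        startTransition(() => {
          // non-urgent: React may delay or interrupt this behind the urgent update
          setResults(filterItems(items, next));
        });
      }

      return (
        <div>
          <input value={query} onChange={onChange} />
          <ul>{results.map((r) => <li key={r}>{r}</li>)}</ul>
        </div>
      );
    }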
Ultimately every framework has an upper bound for performance, and if you're not hitting it then the framework speed doesn't really matter. If you are hitting it though, then React's approach is better because it optimizes for the bit the user cares about. The fact that Solid, Svelte, etc are technically better, and therefore faster, means there's lots of additional headroom for using that speed, but once you actually cross the threshold of what they can do things will start to feel slow very quickly.
And this is where that matters - many web developers just aren't great at writing code, so if the framework can scale a 'fix' for what they build that will result in a better user experience than simply giving the developer more speed. A faster VDOM is a good thing, and no VDOM at all is an even better thing, but ultimately you could make the fastest framework ever and some developers will still write things in ways that feel slow.
The right approach for fixing UI on the web is to make a framework that focuses on doing the important DOM changes first, even if it does them a bit slower than the other approaches.
I work on a gigantic React app... It would take me days to explain everything this app does... and we've not once had performance concerns with React... so I'm really wondering what the hell kind of apps everyone else works on that they have so many issues with it.
It's been a while since I worked on web apps. I agree with your sentiment that a lot happens up front and then small changes come over time. VDOM and "just refresh the whole page" seem like consequences of how we express our DOM construction.
I'm curious if anyone is working on and had success with using differentiable (in the math sense) expressions of dom construction from state/events in order to allow the runtime to easily calculate diffs given state changes/events.
Poor or no architecture. On the client side: requiring multiple API calls that take tens of ms to respond, doing them serially, and trying to mask this with loading transitions that take up to a second and are hit multiple times during common use.
On the server side, a pile of microservices that are designed around team responsibility, with requests that require requests that require requests to respond to common API calls. The desire to write the backend in JS causes a very low perf ceiling on a single instance which means a medium sized web app needs a dns lookup, load balancer hit plus reverse proxy in place for (m)any of the API calls, even if they're internal/trusted.
These apps are tested and benchmarked on the highest performance professional computing devices on local networks with gigabit connections, and then deployed to 5-10 year old computers running on 10Mb connections shared between 4 people. The servers are deployed to a large number of low cost cloud instances running on a virtualization layer inside a virtualization layer on "enterprise grade" (read: slow) hardware with real world disk and network speeds that are orders of magnitude slower than what is used for testing.
It's comforting reading this. Too often web apps are developed with ease of development/developer convenience alone in mind, on fast HW, with a fast and reliable network.
If those same developers used a 10-year-old PC and a semi-crappy 4G network, I bet the overall quality of the products that come out would be ten times better.
I don't think that extreme is true either. I work in games and we have access to workstation hardware even though most of our players will inevitably end up playing on substantially lower end hardware. We set performance targets for subsystems, have thorough profiling available, and regularly _test_ on consumer hardware to gain the above metrics and work with them. That can be (and often is) done.
I think game devs are on the opposite side of the spectrum from web devs when it comes to performance testing, though.
PC gaming has always been performance-driven, and performance variability/customization has been part of the end-user experience since the first 3dfx cards and graphics settings screen (and before that too).
Web... until recently it wasn't really a consideration for users or devs, because everything was just basic HTML and CSS, and it was big images or videos that were the problem. Then, within like a decade, suddenly all these JS-heavy frameworks took over and everyone jumped on board, and connectivity has struggled to keep pace. Web developers (the humans) struggled to keep up too, with everyone having to relearn the framework du jour every year or two, and all the optimization techniques of the previous generations thrown out or made irrelevant by new frameworks & browser optimizations.

I've never met a web dev who seriously even considered performance beyond some superficial metrics -- I've never seen anyone use the profiler in Chrome or their IDE at all -- much less knew what to do about it even if they did. It's just not really a thing, at least in the small to med business space. Maybe if you're working in big tech or framework development that's different, but otherwise, performance is near the bottom of considerations for web dev.

Which is why we have articles like this every once in a while... it's actually newsworthy when people go "hey, JS is slow again, here's technique #33 million to speed it up", to which most of us will go "oh, that's nice, but I can't replace my whole stack just for one speedup, and besides, this new thing is going to be obsolete by September anyway."
You are saying that web apps are slow because of the server, when we all have seen web apps that are laggy even when showing animations when you hover over elements. Users with slow CPUs exist!
No, I'm saying that web apps are slow because there is no thought put into the architecture of the application, and the structure of the app likely matches the structure of the team, not the design of someone who considered holistically how it would work.
Mostly due to developers writing code that waits for network requests to complete before updating the DOM. The user clicks, then nothing happens while the app sends a request, and the app only updates the DOM after the request completes. It's that 'nothing happens' step that makes things feel slow. They also do things like rerendering part of the app every time any part of the state changes rather than limiting the updates to just the bit that matters to that component, so the app is doing a ton of unnecessary work (React helps here; it ignores state updates that don't change anything).
There are a lot of problems in web apps that make them feel slow, but they mostly distill down to developers following some bad practices that are easily avoidable. Web apps that are slow because they're maxing out what the framework is capable of are very, very rare. Making a faster framework won't fix the ones that are just coded badly. Making a more intelligent framework might.
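The fix usually isn't a faster framework, it's reordering the work: reflect the click immediately, then reconcile when the network answers. A rough sketch in plain JS (the element IDs and endpoint are made up):

    const button = document.getElementById('save');
    const status = document.getElementById('status');

    button.addEventListener('click', async () => {
      button.disabled = true;
      status.textContent = 'Saving…';      // user sees feedback in the same frame as the click
      try {
        const res = await fetch('/api/save', { method: 'POST' });   // made-up endpoint
        status.textContent = res.ok ? 'Saved' : 'Save failed, try again';
      } catch (err) {
        status.textContent = 'Network error, try again';
      } finally {
        button.disabled = false;
      }
    });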
> The user clicks, then nothing happens while the app sends a request, and the app only updates the DOM after the request completes
I've not seen this too often frankly - that tends to result in a "hung" state for a UI where the app appears to not respond. The most frequent issue I've seen is long transition animations (500+ms) to mask network calls, even when they're not necessary.
> Making a faster framework won't fix the ones that are just coded badly. Making a more intelligent framework might.
React does everything wrong: vDom, components rendering multiple times instead of once and then reacting to changes, manual memoization techniques. Hence Svelte and Solid not only have a better dev experience, they're also much, much, much faster, and they will eventually take over as React goes the way of jQuery, Knockout, Backbone, etc.
I've built apps in both react and svelte, and I personally don't see a significant difference between them HTML-wise. .map vs each isn't a big deal, especially when you're usually constructing whatever you're iterating over anyway, etc.
JSX is just syntactic sugar for React.createElement [0]. This means that I can write any valid JS I want in there. For example, if I use a functional programming library with a match statement, I can have that in my JSX, I don't need to adhere to what the templating authors came up with.
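For example, with the classic runtime (the newer automatic runtime emits `jsx()` calls instead, but the idea is the same; `items` here is just an assumed array):

    // This JSX:
    //   <ul className="results">
    //     {items.map((item) => <li key={item.id}>{item.name}</li>)}
    //   </ul>
    // compiles to roughly:
    React.createElement(
      'ul',
      { className: 'results' },
      items.map((item) => React.createElement('li', { key: item.id }, item.name))
    );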
...but what does React.createElement compile down to? HTML and JS. Whether you do it with a clientside runtime or a serverside render, it still gets compiled or transpiled down to DOM objects in the end.
I think of "pure JS" as something more like a standalone node function that takes an input and gives you some abstract data output, vs templating code whose main purpose is to define elements in the DOM.
That you can intersperse JS with DOM-like props in a JSX component (styles, states, handlers, whatever) doesn't mean that JSX isn't a templating language. It's just one that also accepts inline JS. Hell, you can do that with PHP and heredocs/template literals.
Aren't all templating languages "syntactic sugar"? Isn't that their point?
Page speed is key. You can see in Search Console how changes to page speed affect how often a page is re-indexed and how many pages are indexed. For some sites that does matter. It basically comes down to how much money Google will spend on your domain.
If you need your site to be indexed by Google it should be very close to plain HTML and CSS with pretty much no JS that changes the DOM at all. The Venn diagram of 'pages that need fast DOM updates' and 'pages that need to be indexed by Google' should really be two separate circles... That's why I suggested SSR or SSG.
There are very few React+backend websites that need the content to be indexed by Google. That part is important. The overwhelming majority of React code is sat behind a login page that Google can't get past.
Anywhere an app is serving public content using React it should be using some sort of server side generation with hydration and progressive enhancement, which entirely negates the need for a fast VDOM for SEO reasons.
This reminds me so much of MithrilJS, and I also remember how terribly unergonomic it was, a terrible DX. The Vite JSX plugin is nice though, and I can see myself using this for something, but I will probably get tired of the ergonomics once I hit some complex component.
I am currently in the early stages of writing a lightweight reactive framework more along the lines of Alpine, and there are some things in this source that could prove fruitful. The rendering process is one of the most interesting things here imo.
This looks nothing like Mithril.js outside of the superficial comparison of the hyperscript helper using `m` instead of `h`, but almost every popular reactive front-end lib has a hyperscript-like API.
I actually like hyperscript DX more than JSX/HTML. What I've found, though, is that most "prettifiers" and formatters in editors (unlike vim) suck at formatting hierarchical calls.
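For those who haven't used the style, the same markup as hyperscript calls vs JSX looks roughly like this (using Mithril's `m`; `items` is an assumed array, and most `h` flavors are near-identical):

    // Hyperscript: ordinary nested function calls, so formatting is left to your editor.
    m('ul.results',
      items.map((item) =>
        m('li', { key: item.id }, item.name)));

    // Roughly equivalent JSX:
    // <ul className="results">
    //   {items.map((item) => <li key={item.id}>{item.name}</li>)}
    // </ul>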
ya, developer experience. thought it was kind of a lame term at first but now i kinda love it. dx all the way. ain't using no framework i have to fight with even if it produces optimal ux. sorry users, i love me more than you
I just used "dev experience" yesterday when I was going to use "DX" at first, just in case someone didn't know the term. I thought I was being silly and that by now everyone would know the term, but apparently I did the right thing to be a bit over-paranoid here! Thanks for confirming.
it's pretty hard to break into the sub-1.15 factor club without code smell. in fact, mikado is suspiciously fast for not having any caveats listed; something about its impl might be unusually bench-specific.
you're definitely the most qualified to do a js-framework-benchmark implementation. it's pretty odd that this hasn't been done given how much the marketing leans on performance claims.
"do your own benchmarks" is too handwavy a dismissal, imo. btw, my own framework is in this list, and has been for a long time, to keep me honest.
That's a fair point. It seems like the current issue owner is inactive on the task, so I'll go in myself and start work on a benchmark. Million started out as only a virtual DOM (and therefore was never added to the js-framework-benchmark). Only recently was the React compat added, which is when we became comfortable starting work on benches.
My intention wasn't to dismiss your point at all, btw. My intention was: if there isn't a 3rd-party benchmark for a library (like js-framework-benchmark), you shouldn't take its claims at face value unless you've done due diligence. It's great to hear that you're keeping yourself accountable; hopefully Million will also sometime soon :)
Bit of feedback: marketing it as just "a virtual DOM" and not "a framework" or "a React alternative" made me think it would just be a VDOM, and not include things like state/hooks. Instead, it looks more like a Preact type thing: a framework that's mostly API-compatible with React, aiming to make different tradeoffs around bundle size, performance, etc. Is that accurate?
Million was originally created as a Virtual DOM -- and still ships as a Virtual DOM. More recently, Million added an optional react compat library in order to make it easier for users to learn about Million.
I'm open to different ways to market Million if there is a specific tagline you have in mind
Ah, this is important new information for me! I just visited the website, and looked for a virtual DOM API that I can use inside of my library. Unfortunately, when I read the actual "getting started", it showed me how to import react libraries for hooks, and use JSX, and so I concluded that:
(a) This is actually a React replacement, like Preact, not just a virtual dom.
(b) The homepage marketing is misleading! The project lost a bit of integrity for me at that point, and I started trusting it less.
If the homepage marketing is declaring it to be a virtual dom, then I want to see how to use the virtual dom API so that I can verify what it is! If it then on top of that has a React Compatibility layer, that's just awesome and makes me really excited to understand it, and I'll probably want to look at that layer so that I can see what React is beyond a virtual dom.
Anyway, do you have a pointer to how to use it as a plain old virtual dom? I happen to be in the market for one, right now!
Do we need VDOM for fast rendering?
Would it be possible to make expressions that are re-evaluated only when their inputs change and bind them directly to real DOM?
I mean using immutable values and pure functions gives you the first (already used with react) and in the end VDOM mutates the DOM, so why not skip the VDOM altogether?
To better explain myself: VDOM takes two versions of the element tree, makes a diff and patches the DOM accordingly.
Why not take two versions of the state tree, diff that and patch the DOM directly? What benefit does the VDOM bring?
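That's essentially what the fine-grained/compiled approaches do. A toy sketch of the idea (not Svelte's or Solid's actual machinery): wrap state in a tiny observable and bind a computed expression straight to a real DOM node, re-running it only when an input changes.

    // A tiny observable value.
    function signal(initial) {
      let value = initial;
      const subscribers = new Set();
      return {
        get: () => value,
        set(next) {
          if (next === value) return;          // inputs unchanged -> nothing re-runs
          value = next;
          subscribers.forEach((fn) => fn());
        },
        subscribe: (fn) => subscribers.add(fn),
      };
    }

    // Bind an expression directly to a real DOM node: no intermediate tree, no diffing.
    function bindText(node, deps, compute) {
      const run = () => { node.textContent = compute(); };
      deps.forEach((dep) => dep.subscribe(run));
      run();
    }

    const count = signal(0);
    const p = document.body.appendChild(document.createElement('p'));
    bindText(p, [count], () => 'Count: ' + count.get());
    count.set(1);   // updates that one text node directly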
I remember that historically DOM used to be very slow on some browsers and virtual DOM was a huge performance boost. You could compute very quickly the difference in JavaScript and minimise the calls to the slow DOM browser API.
Now that Internet Explorer is really deprecated and the DOM is fast enough in every browser, the VDOM is not necessary.
I've written this library [1] as a clean reimplementation of a more complex beast we created for a (now dead) startup. Although the reimplementation has only seen very light use, we've used the concept extensively, and it's a joy to use. Take a look at the examples [2].
We don't need VDOM, but we can't live without it either. The problem is too many people are bought into not just React, but all of the open source components out there too (not a problem per se, it's created a ton of value for many). React does have an elegant API surface; not sure how possible it is to keep that while somehow giving it Svelte-like functionality. React is the golden handcuffs of front-end today.
I mean generally, yes? Evergreen browsers have been the standard for many years now in all major browsers.
The worst offender I guess is Safari, which has historically tied updates to a few times a year rather than the much shorter cycles used in Edge, Firefox and Chrome.
Whatever delays there may be in those update cycles I’m confident that you shouldn’t need to be catering to 2012 browsers. VDOM just isn’t necessary as a performance hack and hasn’t been for many years at this point.
This was downvoted but as someone not too familiar with modern frontend design I'm actually wondering about the comparison myself. When would you use this?
It is staggering to me how many developers are still made uncomfortable by the notion that you can construct a high-quality, modern web experience without pulling down a single 3rd-party JavaScript or CSS dependency.
I really hope the pendulum starts to swing back the other way. It is so fast & easy now. MDN makes life a breeze. In 2022, there really isn't an excuse to not use direct DOM manipulation and bare-ass JS, other than pedantic ideological arguments such as "why is my hammer this shitty color?" and "My monster.com filter can't find 'vanilla js' skilled employees".
As noted by other posters, VDOM is essentially just overhead. It is both faster & easier to grab that element by its id and change it directly. Playing around with intermediate UI representations, 3rd party frameworks and splitting state between client & server is where 99% of your hell comes from on web development.
When you understand 100% of your javascript, setting breakpoints and tracing things in devtools has a lot more power. Hiding from the realities of the open web is only going to cause a prolonged suffering for the junior web developer. Frameworks will come and go. document.getElementById() will be here until the end of time.
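In that spirit, a lot of small interactions need nothing more than this (the IDs are made up):

    // Direct, debuggable, zero dependencies: a breakpoint here shows exactly
    // which element changes and why.
    const button = document.getElementById('toggle-details');
    const panel = document.getElementById('details');

    button.addEventListener('click', () => {
      const open = panel.classList.toggle('open');
      button.setAttribute('aria-expanded', String(open));
    });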
bare js, or bare DOM specifically, has no way of doing declarative changes. if you're okay writing purpose-specific imperative dom manip logic, it gets quite tedious and verbose. your app that could have been 2k lines with a framework is now 10k lines without one, and will grow at 5x the rate, as will the amount of JS parsed and executed to do the same thing.
bare DOM works fine up to a certain size/complexity app, and up to a certain size dev team in the single digits.
Did some research on this a while ago and I‘d wager that 5x in LOC & growth is a bit off: https://github.com/morris/vanilla-todo - probably it’s more around 1.5x (which can still be problematic). Would be interesting to rewrite the study declaratively and compare again.
Wow, I really love that experiment. Really explains a lot about frameworks to me, despite being an exercise in not using them. Thank you so much for making that.
When I look at what you've written under those restrictions, the biggest problem I flag as a general software engineer is how the events get created, defined, listened to, etc. This tells me that if I was building something "large" like that, the first third-party library I'd want to integrate is something for handling those event streams in a more rigorous and robust fashion.
It is regrettable that "vanilla JS" seems to imply a total lack of structure and abstraction.
In my view, the size or complexity of the app has nothing to do with whether or not you should use a javascript/css framework. I believe it's more about having the experience & leadership required to pull the team together on complex, open-web technologies relative to the problem you are trying to solve.
Once a pattern has been established, it's exceedingly easy for the junior developer to catch on and help out. Starting from zero is where most seem to struggle with the web. I've never had someone come to me and complain about the difficulty of adding a new field to an existing web form.
We were using a blend of Angular JS, Riot JS and Blazor over a period of ~8 years before we had the experience & confidence to dive into a 100% vanilla JS product. I can't fault anyone for reaching for something off the shelf to get started. But make no mistake - If you have the team, opportunity & experience to pull it off, vanilla JS products are the most enjoyable to develop & maintain. It is incredibly empowering relative to managing something like an Angular 1.x/2.x codebase, or even a modern Blazor server-side app.
I will also say this - The state of the open web standards is amazing in 2022, so you might be able to shortcut the training wheels a whole lot faster. Imagine if you had CSS grid from the very beginning. There are a lot of self-professed web developers who don't even know about this today because of the layers of ridiculous abstractions they are buried underneath.
declarative templates are by no means a ridiculous abstraction. graphics programming, like GLSL shaders, doesn't have to worry about what "was" in order to represent what "is"; it simply rerenders the entire scene because that's fast. you cannot do this with .innerHTML because it's slow and loses input state (form inputs, focused elements), resets dom text selection, breaks transitions / animations, undo history, video playback, layout cache, etc.
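A quick sketch of that failure mode (ignoring HTML escaping; the element IDs are made up): the innerHTML version looks declarative, but it destroys the focused input and its caret on every state change, while the targeted update does not.

    // Naive "rerender everything" with innerHTML: blows away input focus,
    // caret position, selection, etc. on every state change.
    function renderAll(state) {
      document.getElementById('app').innerHTML =
        '<input id="name" value="' + state.name + '">' +
        '<p>Hello, ' + state.name + '</p>';
    }

    // Targeted update: only touch the node that actually changed.
    function renderGreeting(state) {
      document.querySelector('#app p').textContent = 'Hello, ' + state.name;
    }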
whatever method you choose to solve this in a uniform, non-ad-hoc manner will be your invented version of existing vdom or dom template frameworks, which save you from this madness. i use direct DOM only for writing high-perf libraries and when there is a need to optimize beyond what the framework provides.
i would like to see an open source, large web app written by a dozen+ engineers that manages without any ui layer abstraction and only uses vanillajs + discipline (jQuery+structure style!); in my experience this "utopia" is mostly a pipe dream, and rarely well justified.
Why don't browsers make their DOMs into Virtual DOMs? Then every application can be fast ... and websites don't have to carry the weight of their own vdom implementation.
I wouldn't be surprised if some people at google are working on a transactional approach to DOM manipulation, or perhaps something even exists already I'm not aware of. I think what we already have is implicit transactions, where updates are batched, so long as they don't depend on each other.
So, is this a replacement backend for JSX? Or what is it actually? Is it tied into React (is React's virtual DOM implementation pluggable by design?)?
I don't quite get how a framework can be described as "it's a virtual DOM that's fast". I kind of get what they mean, but is there no better term? E.g. "virtual DOM engine", "virtual DOM runtime", "virtual DOM renderer", maybe?
It’s not tied to React at all. JSX is (intentionally) designed to have unspecified semantics, and be used as a compiler mechanism to implement arbitrary semantics. Most usage compiles to something resembling a function call or a memo of same. AFAIK all VDOMs use that approach.
Some non-VDOM JSX libraries like Solid use it as a static analysis target for optimal output.
Some libraries including React use it to target non-browser render targets. Yes, React is designed to render to arbitrary views or even non-views. It’s used for rendering WebGL, native mobile apps, TV interfaces, CLIs, PDFs and probably a lot else.
As far as what’s described here, “that’s fast” is meant as “which is fast”, it’s distinguishing itself from slower VDOMs (such as React, which is exceedingly well engineered but seldom wins performance contests in recent years).
It's a subjective context-dependent phrase used by someone to say that you don't need extra things with the tool, e.g. "Python is batteries-included, it can load json, send http and talk to an sql DB without any additional packages"
The imagery that jumps to mind is all T&A and once you've seen it as a mushroom-tip... it's impressive how many NSFW boundaries it manages to blend together.
Also, that reaction is so widespread at this point, it's become boring to hear it.
It's a lot more interesting figuring out how the JS ecosystem has been resilient and has not gone down the drain (unlike a lot more "critically" acclaimed languages like Lisp)
We'll never know what could have happened to JavaScript if browsers had supported more than one language (Lisp, for example: Brendan Eich's plan was to put Scheme in the browser, but then Java came out, Netscape already had a deal with Sun, and so JavaScript was born).
When things were not settled like they are today, Microsoft IE supported multiple languages and many people were using VBscript because it made more sense than JavaScript at the time.
Also, many new languages that transpile to JavaScript exist, because, well, for many people JS simply sucks.
I hear TypeScript is pretty big in that space.
Who knows if JS will survive WASM, which promises to free programmers from having to use a single language to write web apps.
It will definitely be interesting to see how WASM changes the landscape. But if I were to bet my money, I'd say JS will survive well even in that age. The huge ecosystem, plus TS being pretty conducive for large teams, gives JS a lot of momentum.
Pretty much. I keep asking people "Why not Elm?" ( https://elm-lang.org/ ) but no one has a very good answer.
The weird thing to me is that the two main customers or consumers of front-end programming (publishers and users) never see and do not care about the underlying language or implementation! Businesses and other orgs (schools, NGOs, etc.) as well as individuals (FB users, WordPress blogs, etc.) that publish sites/apps on the web and their visitors and users never see under the hood except when something goes wrong or they deliberately click "View Source", eh?
Ergo, this whole JavaScript ecosystem is solely for the developers. It's like a giant and largely irrelevant MMORPG that we play, which produces mostly-working websites and apps almost as a side effect.
Not Javascript because it's objectively a lousy language. (Imagine if Eich had been allowed to embed Scheme!? What a wonderful world that would be. I'm not going to look if there's a Scheme implemented in JS, I assume there is.)
An interesting question is why Elm instead of Scheme-in-JS or Scheme-in-WASM?
The answer is to that is also the answer to "Why Elm?"
Elm is an extremely user-friendly, domain-specific language that makes it nigh impossible to write buggy code. You can write code that does the wrong thing, but it will do it correctly. In practice, what this means is that Elm web apps have had zero front-end bugs.
If we allow that end-users and management do not care what system we use to make the website "go", then it's pretty obvious that they could make do with a much cheaper solution than the JS ecosystem. Elm is so simple and easy that anybody can use it to maintain their own web sites or web apps.
Wondering how it compares with Imba.io … which is where I first heard of the virtual DOM. I vaguely remember watching Imba demo videos and being astonished to discover that, despite the playback controls, they were _not_ videos but rather fully interactive and you could edit them on the fly. Imba seemed amazingly performant at the time, but then I never had a project that could really use it (except one that sunsetted before I could make major alterations to it).
Does this require node for development? From the getting started, it looks like it does, but wondering if there was a way to include the library as a script, like with AlpineJS.
Million looks nice, but I'm not into developing with node at all.
Because it adds to the mystique and yields a huge amount of imaginary internet points, which are the main currency when it comes to trying to impress someone with your vocabulary.
> It was a decent idea to optimize for those browsers and for FB's nature of featuring hundreds of comments etc. that could be updated somewhere outside your viewport and nevertheless cascade into repaints and reflows. Remember the FB feed?
I also worked on virtual DOM optimizations back then for these IEs, but have long since abandoned optimizations. I take it with Svelte: https://svelte.dev/blog/virtual-dom-is-pure-overhead
> Also, the Google Sheets results show, overall, that the DOM is faster than any virtual DOM, so why produce the overhead? https://millionjs.org/benchmarks
> The virtual DOM is one of those things where people long ago forgot why it was invented and what technology it was invented for. Then came Google Chrome and its optimizations.
For benchmarks, I think this is still the best around: https://krausest.github.io/js-framework-benchmark/current.ht...