> The VDom was never intended as a performance optimization over direct DOM manipulation.
Performance was definitely its primary purpose.
Remember Flux? THAT was the architecture that introduced FRP to the masses (based on the Elm architecture). The VDom was invented because large DOM trees were too slow to update once you got into the thousands (maybe tens of thousands, I forget) of nodes. Which is hilarious if you think about it... somehow the C++ DOM update loop was slower than someone clever doing it in single-threaded JS.
> Using jquery/vanilla js to update the DOM in an ad-hoc fashion in response to user input [...] has always had faster runtime
This is very incorrect.
> it's just more likely to be coded wrong.
While not outright wrong, I find this somewhat debatable. React is pretty complicated these days and it's quite easy to get tangled up if you're not conscious of what you're doing. FRP isn't a silver bullet.
The point of virtual DOM is correctness, not performance. The reason performance is so often discussed in conjunction with virtual DOM is exactly because it involves so much overhead, making performance optimization much more important. Carefully handcrafting the minimum number of DOM manipulations for a given state change makes it possible to run much less code than virtual DOM needs, but it’s a nightmare to work with.
It doesn’t become true just because you keep repeating it. The point was always to allow for an API where you only had to think about how to render one snapshot, and not care about what the previous snapshot looked like. Virtual DOM made that possible, but it was never more performant than just writing the resulting DOM manipulations manually.
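The contract, roughly, is something like this (a minimal sketch; render, diff, and apply here are illustrative stand-ins, not React's actual internals):

```js
// You write a pure function from state to a complete description of the
// UI, without ever consulting what the previous frame looked like.
function render(state) {
  return {
    tag: 'ul',
    children: state.items.map(item => ({ tag: 'li', children: [String(item)] })),
  };
}

// The framework, not you, owns the transition between frames.
// diff/apply are hypothetical helpers standing in for the reconciler.
let prevTree = null;
function update(state) {
  const nextTree = render(state);
  const patch = diff(prevTree, nextTree); // compute the minimal set of changes
  apply(document.getElementById('root'), patch); // touch the real DOM once
  prevTree = nextTree;
}
```

You trade some runtime overhead for never having to reason about the previous snapshot.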
Then let an old fogey make it clearer: browsers repaint/recalculate layout if you read after a write. This required batching writes separately from reads if you wanted decent performance on older browsers and especially on older machines.
Reading an element property, adding/updating a nearby element, and then reading another element's property took FOREVER. Enter the virtual DOM. Since it did not engage the actual rendering engine, the reads and writes did not trigger reflow. At the end of the code segment, the actual DOM actions became effectively write-only. Even though the virtual DOM was slower per access than the actual DOM, the end result was a MASSIVE speed up.
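In code, the difference looked something like this (a sketch; the .row selector and the height doubling are just for illustration):

```js
const rows = Array.from(document.querySelectorAll('.row'));

// Interleaved reads and writes: each offsetHeight read after the previous
// iteration's style write forces a synchronous reflow, so this loop
// triggers O(n) layout passes.
for (const row of rows) {
  row.style.height = (row.offsetHeight * 2) + 'px';
}

// Hand-batched version: all reads first, then all writes, so the engine
// recomputes layout roughly once instead of once per row. A virtual DOM
// gets the same effect for free, because intermediate writes never touch
// the real DOM until the final commit.
const heights = rows.map(row => row.offsetHeight);
rows.forEach((row, i) => {
  row.style.height = (heights[i] * 2) + 'px';
});
```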
This message brought to you by someone who honed their skills for a decade to batch their reads and writes in vanilla JS only to have those new-fangled frameworks take care of it (and data binding) for you. Jerks.
So what you're saying is that at the granularity of a single tick a VDom increases performance significantly due to not having to wait for the browser to recompute the DOM after writes... correct? It effectively batches writes, and thus the need for the renderer to get involved, which increases read throughput, because reads block till after the DOM is recomputed. And the DOM is recomputed on every write that's followed by a read.
Makes a lot of sense, thanks for the input; I was completely unaware of this case. Any idea if this is still the case? Do you happen to remember which browsers and/or hardware saw dramatic improvements (CPU gen would be great)? I'm thinking of doing some deeper perf investigation/spelunking on the subject to satisfy my curiosity. I remember things one way but a lot of people seem to think the opposite...
The view is recalculated/re-rendered/repainted. The DOM is the single-threaded-access data structure that the rendering engine ties into.
Part of the browser API is querying all current CSS properties of an element, e.g., getComputedStyle(…). The only way to get this is by having the layout engine do all its work, so properties like height and width can return accurate info.
Most virtual DOM implementations just skip parts of the API like this. At best, they make an educated guess without hitting the actual renderer. Or they just allow a pass-through to getComputedStyle(…) and warn you away from using it due to performance concerns.
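For example (the element and the pending style change here are illustrative; exact flush behavior varies by engine):

```js
const el = document.querySelector('#sidebar'); // illustrative element
el.style.width = '50%';                        // write: invalidates layout

// This read can't be answered from any virtual tree: the engine has to
// flush the pending style change and run a full layout pass so it can
// hand back a resolved pixel value, e.g. "480px".
const width = getComputedStyle(el).width;
```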
It's all smoke and mirrors atop a bed of lies.
How big a role does React's ecosystem (libraries, plugins, developers) play, compared to, say, the aforementioned (debated) performance/stability/security concerns, in someone choosing it over Svelte?