Browsers have been doing smooth 60fps rendering for at least 5 years now (you'll find Stack Overflow threads about it dated 2015), and in 2022 they can do 144fps on more powerful, better supported HW-accelerated devices.
Or, to make it clearer: VDom is unnecessary because browser engines have adopted optimizations that made it obsolete, just as many rendering techniques are no longer used because the HW implements them more efficiently.
For rendering speed in browsers, HW acceleration has done more than everything else combined.
Double buffering is still widely used by native code (e.g. inside the browser, inside video games, inside native toolkits). Ergo, "VDOM is obsolete the way that double buffering is" makes no sense.
OTOH, "double buffering from code running in the browser is now obsolete" may make perfect sense, following changes in how the browser itself interacts with the HW.
I think we're simply lost in translation; I'm not a native speaker and sometimes I don't make myself clear.
AFAIK (I haven't worked on a game in years) double buffering, when I first heard about it, was used to overcome a HW limitation. We had graphics/video cards (when I started programming, EGA was the standard), GPUs did not exist, memory was limited, and data transfers were slow. Double buffering meant keeping two buffers in main memory, the active one and the next one being built in the background, and alternately sending them to the graphics card.
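Schematically (a minimal sketch, not period-accurate code; `copyToCard` is a hypothetical stand-in for the transfer), the pattern was just two buffers whose roles swap every frame:

  // Two in-memory framebuffers: draw into the inactive one while the
  // active one is on screen, then swap roles once per frame.
  const WIDTH = 320, HEIGHT = 200;
  let active = new Uint8Array(WIDTH * HEIGHT);   // being displayed
  let building = new Uint8Array(WIDTH * HEIGHT); // built in the background

  function nextFrame() {
    building.fill(0);                 // rasterize the next frame off-screen
    // ... draw into `building` ...
    [active, building] = [building, active]; // swap: new frame becomes active
    // copyToCard(active);            // hypothetical: one bulk transfer per frame
  }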
Then VGA added page flipping: you could write two buffers into the graphics card's memory and instruct the card to swap the active page by flipping a bit in a register during vsync, then write the next frame into the now-inactive page.
From then on things have improved exponentially, to the point that GPUs can now buffer multiple high-res frames: while frame N is being displayed, the CPU can prepare frame N+1, N+2, or even N+3 on some GPUs. The framework configures the GPU to swap frames automatically (usually via a FIFO queue) on vsync.
I think in Vulkan this workflow is called a swapchain.
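(For what it's worth, WebGPU exposes the same idea to page code: the canvas context owns the swapchain and hands you the image to render into each frame. A minimal sketch, assuming the @webgpu/types declarations are available and omitting error handling:)

  // WebGPU: the canvas context manages the swapchain internally;
  // getCurrentTexture() returns the image that will be presented at
  // the next vsync once the submitted work completes.
  async function init(canvas: HTMLCanvasElement) {
    const adapter = await navigator.gpu.requestAdapter();
    if (!adapter) throw new Error("WebGPU not available");
    const device = await adapter.requestDevice();

    const context = canvas.getContext("webgpu")!;
    context.configure({
      device,
      format: navigator.gpu.getPreferredCanvasFormat(),
    });

    function frame() {
      const encoder = device.createCommandEncoder();
      const pass = encoder.beginRenderPass({
        colorAttachments: [{
          view: context.getCurrentTexture().createView(), // this frame's image
          clearValue: { r: 0, g: 0, b: 0, a: 1 },
          loadOp: "clear",
          storeOp: "store",
        }],
      });
      pass.end();
      device.queue.submit([encoder.finish()]);
      requestAnimationFrame(frame); // paced to the display's refresh
    }
    requestAnimationFrame(frame);
  }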
HW implemented what was previously possible only in SW. Double buffering is still in use, of course, habits die hard, but the problem it solved is not remotely as bad as it used to be.
VDom followed the same path. It was invented to overcome a browser limitation: DOM access was painfully slow, especially in legacy browsers like IE.
Browsers are now fast enough that VDom, even though technically still in wide use, is not a hard requirement for fast DOM updates the way it was 10 years ago.
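(To illustrate the point, this is a hand-rolled sketch of targeted direct updates, not any framework's actual internals; skipping unchanged nodes is, in miniature, what a VDom diff automates across a whole tree.)

  // Direct, targeted DOM updates: touch only the nodes whose data
  // actually changed, so the browser has nothing to restyle or repaint
  // for the rest.
  function renderList(ul: HTMLUListElement, items: string[]) {
    // Grow or shrink the list to match the data.
    while (ul.children.length < items.length) {
      ul.appendChild(document.createElement("li"));
    }
    while (ul.children.length > items.length) {
      ul.removeChild(ul.lastChild!);
    }
    // Write text only where it differs; unchanged nodes are left alone.
    items.forEach((text, i) => {
      const li = ul.children[i] as HTMLLIElement;
      if (li.textContent !== text) li.textContent = text;
    });
  }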
It stayed around in many frameworks, IMO, because you never know what HW/SW combination your users are running: backward compatibility, "if it ain't broke, don't fix it".
> From then on things have improved exponentially, to the point that GPUs can now buffer multiple high-res frames: while frame N is being displayed, the CPU can prepare frame N+1, N+2, or even N+3 on some GPUs.
Not sure why you'd call this an "exponential improvement". Using more than 2 buffers increases display latency (at 60 Hz, every extra queued frame adds ~16.7 ms), which for most (not all) purposes is undesirable. Double buffering (that is, just using an active/inactive pair) is almost always the best thing to do, regardless of where the memory is located or who is responsible for the buffer swap.