I think the charitable interpretation is there are a whole class of web sites/applications (news sites, e-commerce, etc.) that haven't appreciably changed in the last 15 years. Yeah, there's been some incremental improvement, but the core experience is the same. However, the versions today are generally resource hogs. It's not rare for me to leave a browser tab open and find it grows to > 2GB RAM. I had one hit 28 GB and it would have kept going if I hadn't killed it. One thing I miss on the M1 is the fan kicking on so I know when a browser tab has run away.
I think the OP has a point. We've been building a massively complex ecosystem on a very shaky foundation. The web has indeed advanced, but most of the time it feels like duct taping something onto this monster of a creation. Between backwards compatibility and competing browser vendor concerns, it's hard to push meaningful change through. So many security (especially supply chain) and dependency issues in JS would go away if there were a reasonable standard library. But there isn't, so we're doomed to a large graph of interrelated dependencies that break regularly. The constant churn in frameworks is mostly about finding novel ways to work around limitations in the web platform and bridging behavioral differences across browsers.
It's more than just churn in frameworks, though. It's depressing how many person-years have been spent on build systems and bundlers, or on CommonJS vs. ES modules. Participating in the open source ecosystem means being familiar with all of it, so it's not enough to pick one and run with it.
Prior to flexbox, people struggled even to center content with CSS; it was simply bad at layout. There's a ton of old CSS out there, much of it accomplishing the same thing in different ways, often with vendor-specific options. You still run into this if you have to style HTML email. Given the difficulty of mapping CSS back to its usages, it's incredibly challenging to refactor. It requires familiarity with the entire product and a lot of guesswork as to intent, since comments don't exist.
Moreover, there have been massive changes in "best practices". React and Tailwind violate the principles of unobtrusive JS and semantic naming that were prevailing practice not too long ago. My cynical take is that a lot of this is driven by consultants looking to stand out as thought leaders. Regardless, it adds to the complexity of what you need to know if you do have to work with older (often other people's) code.
I'm fairly confident that if we had today's use cases in mind when designing the foundational web technologies we'd have a very different platform to work with. It almost certainly would be more efficient, less error-prone, and likely more secure. It'd be nice if we could take a step back and revisit things. A clean break would be painful, but could work. Developers are already building web apps that only work on Chrome, so much as it pains me to say, having to get all the vendors on board isn't entirely necessary.
The point you appear to be making is the same point made at every step in computing history. It's an old and predictable refrain going back to assembly programmers complaining about the laziness and bloat of C development.
Yes, bad software can suck up all your RAM and CPU time. This was a concern on the family's XT clone with 1MB of RAM. The only reason the bad software back then didn't suck down 32GB of RAM was because they hit the system's limit far earlier than that.
That said, there have always been efforts to reduce the overhead of individual apps. Docker for all its bloat was a dramatic memory and CPU boon over full virtualization. New languages like Go and Rust have much better memory profiles than Java and C# before them. Qwik, Svelte, and Solid are all worlds better in terms of efficiency than React and Angular.
There are clear improvements and also clear highlights of waste within our industry as there has always been. But if you can only see the waste and bloat, you are sadly missing large parts of the whole picture. The truth lies somewhere as a mix of the two extremes, and the overall capabilities of the current landscape are unambiguously more advanced than they were 5, 10, and 15 years ago.
I think your summary is a bit too reductive. I laid out a fairly comprehensive list of issues that are unique to the JavaScript ecosystem, and you highlight one point I made about wasting resources. I don't see as many parallels with the past as you do. Certainly software has bloated. But few application platforms are as difficult to optimize for as the web. Working with HTML fragments, the shadow DOM, reducing unnecessary renders, figuring out the most efficient way to find nodes, etc. are things you don't have to deal with in most GUI toolkits. The closest thing I've seen is avoiding a full window repaint by only painting a region that's changed.
To tackle that complexity we keep creating new frameworks, but those often bring their own bloat. So, we create new frameworks. Any time a new framework comes out, there's a light-weight variant that follows (e.g., React and Preact). There's potentially money to be made by building an influential library or framework and I think that has had an adverse impact as well. The entire JS ecosystem is built around a commercial package management system. That's virtually unheard of. I guess Maven Central gets kinda close?
My contention is that if we took a step back, re-evaluated the platform, and added a real standard library, we could cut a lot of this out. You could take everything that's good and make it better.
> My contention is that if we took a step back, re-evaluated the platform, and added a real standard library, we could cut a lot of this out. You could take everything that's good and make it better.
Many have tried over the years, including in the early years before we had a neutron star's worth of code built on top of it over the decades that followed.
To be pedantic, browser APIs are already a standard library. The behavior of the HTML parser is 100% defined even in the case of imperfect (to say the least) input. Things like classList, dataset, history, tabindex, event handling... they're all there. If you don't LIKE the standard library or you think it is insufficiently elegant, that's your prerogative, but it's a standard library (emphasis on published standard) with multiple implementations.
As a counterpoint, Qt, GTK+, Tk, Swing, Godot, Kivy, FLTK, etc. are not standards, nor are they faster for development, more secure, or easier to start working with. Sure, after you get past the (much larger) barriers to entry and actually get something working on one platform like Ubuntu Desktop, you have to refactor/rewrite for macOS, Windows, and... other flavors of Linux. And no cheating! A Qt app running on macOS needs to look like a Mac app, not a Linux app that happens to run on macOS. Every UI toolkit out there with a "better" API has just one implementation (i.e., not a standard) and does not handle all of the use cases the web has grown into. And any toolkit that approaches all of the browser's use cases inevitably does not have a cleaner API.
This problem is deceptively hard; folks have stepped back and tried to solve it over and over, and it still hasn't been solved.
As for fragments, shadow DOM, unnecessary renders, efficient ways to find nodes, etc., I strongly encourage you to try Svelte, especially as a PWA. You might be pleasantly surprised how many of these issues are addressed and surprisingly cleanly.
> To be pedantic, browser APIs are already a standard library. The behavior of the HTML parser is 100% defined even in the case of imperfect (to say the least) input. Things like classList, dataset, history, tabindex, event handling... they're all there. If you don't LIKE the standard library or you think it is insufficiently elegant, that's your prerogative, but it's a standard library (emphasis on published standard) with multiple implementations.
By standard library, I mean the comprehensive libraries that are shipped out of the box with modern languages. Hundreds of module dependencies (including transitive ones) are added to modern JS applications in order to perform operations that you get out of the box in C++, Java, C#, Go, Rust, Ruby, Python, etc. That means having to pull in 3rd party dependencies just to get a basic app running. It introduces a ton of headaches around compatibility of transitive dependencies, supply chain security concerns, upstream availability, and so on. Yes, ECMA has been progressively adding stuff, but it's a depressingly slow process and often insufficient.
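To make that concrete with a toy example: JavaScript still ships no built-in deep equality check, so every project either imports something like lodash's isEqual or hand-rolls a sketch like this (simplified; this version doesn't handle cycles, Dates, Maps, or NaN, which is exactly why people reach for a dependency):

```javascript
// Sketch of the kind of helper JS projects re-implement or import
// because the language ships no built-in deep equality, while
// Python compares dicts with ==, Java has Objects.deepEquals, etc.
function deepEqual(a, b) {
  if (a === b) return true;
  if (typeof a !== "object" || typeof b !== "object" || a === null || b === null) {
    return false;
  }
  const keysA = Object.keys(a);
  const keysB = Object.keys(b);
  if (keysA.length !== keysB.length) return false;
  // Recurse into each property; arrays are objects with numeric keys.
  return keysA.every((k) => deepEqual(a[k], b[k]));
}
```

Multiply that by string padding (remember left-pad?), date math, UUIDs, and so on, and you get today's transitive dependency graphs.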
As for the web, I'd barely consider what we have a standard any longer. The W3C was largely cut out of the process. Many "web apps" really only run on Chrome. Google implements what it's going to implement and puts out a token document as a standard. If Apple doesn't like it, they say "no" and developers have to accommodate.
> As a counterpoint, Qt, GTK+, Tk, Swing, Godot, Kivy, FLTK, etc. are not standards, nor are they faster for development, more secure, or easier to start working with. Sure, after you get past the (much larger) barriers to entry and actually get something working on one platform like Ubuntu Desktop, you have to refactor/rewrite for macOS, Windows, and... other flavors of Linux. And no cheating! A Qt app running on macOS needs to look like a Mac app, not a Linux app that happens to run on macOS. Every UI toolkit out there with a "better" API has just one implementation (i.e., not a standard) and does not handle all of the use cases the web has grown into. And any toolkit that approaches all of the browser's use cases inevitably does not have a cleaner API.
I don't understand the limitation you're imposing on look and feel. It's not like web applications look like native applications in the slightest. It's a big usability problem since you can't learn one set of widgets and consistently use them. I think that was a major step backwards with CSS. While sites can look nicer today than the stodgy look of Netscape, way too often I have to play "guess where the link is" because some intrepid designer removed the underline and coloring so the link looks like normal text. Now that we've decided scrollbars are bad, whole UI elements are completely hidden unless you stumble upon them. All of this has bled into the desktop space since we now have Electron apps and to me it's decidedly worse than what GTK apps look like on macOS.
I'm going to have to disagree on the cleanliness of the API as well. A good UI toolkit provides the ability to lay out and scale elements. They also come with a stock set of widgets. It's incomprehensible to me that we don't have a standard tree or tab widget in the browser. I've been pleased by what Shoelace is doing with web components, but I'd rather just have a standard set of elements that matches what I can get on any other platform.
The layout story is a bit better with flexbox and grid, but it took a long time to get there. And good luck if you need to support HTML email; you're back to using tables or a limited subset of inlined CSS. That's stuff Win32 solved and shipped decades ago. I wasn't part of the design discussions, but I'd imagine it's because CSS wasn't designed for what it's used for today. It was solving the proliferation of display-centric tags like <font> and <b>. Using the browser as an application platform wasn't seriously considered until MS added XMLHttpRequest for Outlook Web Access.
Consequently, we have a plethora of ways to lay out content on the web. Much of that CSS doesn't actually work if you try to resize things, so we dropped fluid layouts for fixed ones. Responsive design rectifies some of that, but still often works in discrete fixed-width chunks. Moreover, many developers have no idea what the box model is, so they just throw CSS at the page until it looks roughly correct in their browser and never bother testing it anywhere else. Without the ability to use comments or variables (finally being addressed as well), CSS is just a mess to work with. And since you can't be sure where a stylesheet is used, it's hard to refactor and hard to detect dead code in. None of these are problems you generally have when working with a platform designed for application development from the outset.
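To spell out the box model confusion: under the default content-box sizing, the declared width excludes padding and border, so the rendered box is wider than what you wrote, while border-box includes them. A rough arithmetic sketch (per-side padding/border in px, margins ignored):

```javascript
// Rendered width of an element under the two CSS box-sizing models.
// content-box (the default): padding and border are added on top of
// the declared width. border-box: they are carved out of it.
function renderedWidth(declaredWidth, padding, border, boxSizing = "content-box") {
  if (boxSizing === "border-box") {
    return declaredWidth; // content area shrinks instead
  }
  return declaredWidth + 2 * padding + 2 * border;
}
```

So `width: 200px` with 10px padding and a 2px border renders 224px wide by default, which is why two "50%" columns with any padding refuse to sit side by side until you discover `box-sizing: border-box`.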
Just to clarify though, I'm not advocating we abandon the web platform. I'm saying its evolution has made it an at-times thorny way to develop applications. Applications have different needs than the largely text-oriented pages the foundations were designed for. It has advantages over Win32, GTK, and others, particularly in distribution. But, it also makes it much harder to do things that are straightforward in toolkits designed for GUI applications.
> As for fragments, shadow DOM, unnecessary renders, efficient ways to find nodes, etc., I strongly encourage you to try Svelte, especially as a PWA. You might be pleasantly surprised how many of these issues are addressed and surprisingly cleanly.
Much of the discussion on this article is about the huge churn in the JS world. Every year or two I'm urged to learn a new toolkit that's going to fix all the sins of the previous ones. Maybe that works great if you're maintaining one project. In practice, it means I've needed to learn React, Angular, Svelte, Backbone, ExtJS, and others. That's not time that was well spent. Some of it I still need to keep in my head to work on open source projects that can't afford a rewrite. Maybe Svelte is the endgame product and we can all rally behind that; I have my doubts.
I've tried the whole PWA thing out. It's unnecessarily complicated and stymied by iOS limitations. I got it working, but I had to work around some major libraries. Offline GraphQL cache access is an exercise in frustration; I ended up using IndexedDB and custom replication. I needed to add another auth system because JWTs aren't safe to store in browser storage. Debugging service workers wasn't fun, and debugging in a mobile browser even less so. I'd love for PWAs to supplant all these simple mobile apps. Although, the irony that mobile app development might be the great app reset I'm looking for isn't lost on me. I just wish it didn't come with a 30% tax.
By way of background, I've worked extensively with browser rendering engines. For about five years I ran a TechStars-backed startup that detected rendering differences in pages across browsers and over time. It was a sort of automated visual regression testing (called web consistency testing), but it worked by analyzing the DOM and pinpointing the exact element that was a problem (as opposed to doing image comparisons). I became intimately familiar with the major browsers of the time and built up a comprehensive knowledge base of how they render things differently. I had to write native browser extensions to work around JS performance issues or interface with the browser in ways not exposed via the DOM. I ended up contributing pretty heavily to Selenium in the process as well. I don't mean to say I know everything, but I get the sense you think I'm arguing in the abstract and haven't actually worked with all of this stuff.
I've also worked pretty extensively with the web, building sites and applications over the past 25 years. Initially I was excited by it. No more minimum system requirements! Now I'm dejected to see what we actually have compared to what could have been. I've been burnt out on side projects because I'm invariably fighting the dependency graph. I've been told I need to upgrade constantly for security reasons. Semantic versioning doesn't mean much when you have to constantly upgrade because very few projects maintain more than one line of development. TypeScript helps, but I've also had to debug and deal with libraries shipping incorrect TypeScript definitions, undermining the whole system. I've been working to remove dependencies, but in general you run into NIH objections.
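On the semver point, to spell out why that forces upgrades: a caret range like ^1.4.0 happily matches any newer 1.x, but if maintainers only ship fixes on the latest minor, sitting on 1.3.x means no security patches. A simplified sketch of caret matching (real npm ranges have more rules, e.g. 0.x versions are treated specially; this only covers the >= 1.0.0 case):

```javascript
// Simplified npm caret-range check for versions >= 1.0.0:
// ^MAJOR.MINOR.PATCH matches any version with the same MAJOR
// that is not older than MINOR.PATCH.
function satisfiesCaret(version, range) {
  const [vMaj, vMin, vPat] = version.split(".").map(Number);
  const [rMaj, rMin, rPat] = range.replace(/^\^/, "").split(".").map(Number);
  if (vMaj !== rMaj) return false;        // major bump: never matched
  if (vMin !== rMin) return vMin > rMin;  // newer minor: matched
  return vPat >= rPat;                    // same minor: need >= patch
}
```

The range promises compatibility, but the promise only helps if someone backports fixes to the line you're on; most npm projects don't.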
Much of that has little to do with the browser APIs other than that the major browsers all use JS as their lingua franca. I've tried using other languages that compile to JS and I'm currently building something with WebAssembly, but so far I always get dragged back into the JS layer. That's why my critique has a lot riding on the JS layer. I think the web platform as a whole could be vastly improved by modernizing JS. It's not simply used to glue DOM interactions to event listeners. It should have a standard library.
I'm happy to keep discussing, but I think we're probably going to have to just agree to disagree. As an industry, we'll continue to incrementally improve things. We'll almost certainly make backwards steps as well. The bolted together monster is what we have and it'd be almost impossible to replace. That doesn't mean it couldn't be better with a comprehensive rethinking.
> React and Tailwind violate the principles of unobtrusive JS and semantic naming that were prevailing practice not too long ago. My cynical take is that a lot of this is driven by consultants looking to stand out as thought leaders.
You're missing that people are adopting this on their own because they enjoy the benefits during development.
This is also my explanation for why apps with the same complexity as 15 years ago are now slower: what Andy giveth, Bill taketh away. Developers (or rather their management) are choosing to make "lazy" trade-offs, i.e. prioritizing programmer productivity over execution performance. Which makes economic sense.
I wish there was a way to shift the economic incentives because that would change things very quickly. Maybe some browser performance limits.