For years website size was an acceptable proxy for user experience, but that's really not true today. Web browsers are much more sophisticated about what they download, when they download it, and how that impacts the user experience. Looking at page weight and saying "that's bad" is too coarse a perspective.
Is a page downloading a 600KB "main.min.css" that's blocking the rendering path? Yeah, that's bad, especially when only 60% of the rules are used on your most common pages. Downloading 1-2 MB of blocking JS is also a bad experience.
Lazy loading a hero video while also using a "poster" attribute as a placeholder? :shrug: Asyncing in 800KB of 3rd party tags after onload that don't cause reflows or impact Time to Interactive/Total Blocking Time? :shrug:
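For what it's worth, the ":shrug:" cases look roughly like this (the filenames and the tag URL are made up):

    <!-- Hero video: not fetched up front, a lightweight poster shows immediately -->
    <video preload="none" poster="/img/hero-poster.jpg" controls>
      <source src="/video/hero.mp4" type="video/mp4">
    </video>

    <!-- Third-party tags: injected only after the window load event,
         so they never block rendering or first interaction -->
    <script>
      window.addEventListener('load', function () {
        var s = document.createElement('script');
        s.src = 'https://tags.example.com/bundle.js'; // hypothetical vendor bundle
        s.async = true;
        document.body.appendChild(s);
      });
    </script>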
Even those 2 bad examples can be made better or worse. 600KB from a CDN? Better. 600KB from the origin? Worse. 600KB from a totally new hostname? Bad. 600KB from a hostname you already have an active HTTPS connection to? Better.
Not all resource weight is equal. You can't even say all weight for the same resource type (CSS, images, fonts, JS, JSON, etc.) is equal. It depends when and for what purpose a heavy resource is requested. And yes, this still applies even for mobile connections that can suck, be slow, and have high latency.
Of course, there are still operational cost aspects here for the bloated sites. Bloated images on your CDN? Misconfiguring it so it's not using Brotli? Yeah, you can cut significant spend by applying the appropriate optimizations for sure.
>> For years website size was an acceptable proxy for user experience, but that's really not true today. Web browsers are much more sophisticated about what they download, when they download it, and how that impacts the user experience. Looking at page weight and saying "that's bad" is too coarse a perspective.
Has it really changed? Yes, browser perf was an issue then, but perhaps browsers have optimized away other issues. It is unclear to me if anything has improved over time for the large segment of users with limited or expensive bandwidth.
This was a very large segment affected by bloated sites when the problem started being highlighted in articles like this one. Have bandwidth and latency issues been resolved for many people who had significant issues in 2015? I'm guessing 4G and the like have been able to penetrate areas fairly quickly, but I've not looked at global data on this topic for a very long time.
> I'm guessing 4G and the like have been able to penetrate areas fairly quickly, but I've not looked at global data on this topic for a very long time.
Just having 4G available doesn't mean much if the cells are congested, the networks have poor backhaul capacity, or you're just in a bad radio environment. There can also be a lot of jitter in latency even if there's decent bandwidth. A bunch of scripts can load fine, but then grabbing that one necessary resource hangs and the whole page hangs.
There are so many stupid interactivity blockers with big JavaScript pages that are made worse by poor network connectivity. Stuff like infinite scroll or blocking page visibility until an ad is visible breaks in jarring ways.
High average speed is nice but cellular connectivity has issues in good conditions and a lot of unpredictability. Huge numbers of web users are on cellular data only (or shit public WiFi). I find too many websites ignore these users with fragile designs because of a preponderance of "works on my machine".
Also evaluating and profiling all of these things and more has become much more convenient and accessible.
Personally, I tend to avoid third-party frontend libraries where feasible, for example, but we still need JS to give immediate feedback and to make bespoke designs work. But we've found that reasonably bundled JS is almost never the problem.
It’s almost always images, especially user generated ones. JS/CSS/rendering code that is unoptimized and uses the wrong approach can hurt too (often unrelated to actual size). And of course lack of caching.
But all of those things have solid solutions that really pay off.
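For the image side, the "solid solution" is mostly just letting the browser pick a sensibly sized variant and deferring offscreen images; a rough sketch (the widths and breakpoints here are only illustrative):

    <!-- The browser downloads only the variant that fits the layout,
         and skips offscreen images until they're needed -->
    <img src="/uploads/photo-800.jpg"
         srcset="/uploads/photo-400.jpg 400w,
                 /uploads/photo-800.jpg 800w,
                 /uploads/photo-1600.jpg 1600w"
         sizes="(max-width: 600px) 100vw, 800px"
         loading="lazy"
         alt="User-uploaded photo">

For the caching side it's mostly long-lived Cache-Control headers (e.g. max-age=31536000, immutable) on fingerprinted assets.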
There are simply websites that have a ton of content, or designs that you as the dev cannot control. But during concept/feedback/consulting you have to give those things some weight and try your best when things are decided.
Edit: I, and I bet most other devs, hate bloated websites the most, even when we implement them. Because we suffer the most from the poor performance and unnecessary complexity during development and testing. Especially when we need to turn off all of the goodies that make the website fast.
Reminds me of another thing. Compatibility through code generation (Babel) is one of the biggest offenders for JS bloat, especially when you need legacy support like IE11.
> It’s almost always images, especially user generated ones.
After recently doing a lot of work on a system that takes user-originated images for the first time in a while, this has started to become my go-to check. I shouldn't have been surprised at how common it is, but I was and I hope from now on I'm thinking about optimization as soon as user-provided media comes up.
I also think the tooling-takeover on the front end has had a real effect; it automates away decisions at development time in situations where you usually have ideal connectivity. Then when you're doing QA and realize that the site's heavier/slower than will work for some user-device profiles, you face sunk costs for decisions you didn't even really make so much as implicitly invoke. Possibly getting better as tooling gets less naive and more sophisticated about shaking out unused code, but I worry there's also a Jevons paradox that can come into play (your tooling is good at automatically optimizing out stuff you don't use? You might automatically use more of other stuff!).
HTTPArchive's "State of the Web" report from a couple of years ago looked at over 6 million URLs and discovered the median size increased by 354% between 2010 and 2021, from 467KB to 2123.6KB. [1]
That isn't bad in itself.
More data to download could simply mean higher resolution images, or an increased use of video, or more JS functionality that the user has requested. Those can be useful things. The problem is if the sites aren't delivering any additional benefit to the user in return for that bandwidth. If it's things like tracking, ads, and pointless gimmicks then most people consider it wasted. That's what we need to look at. Not simply 'bigger is worse'.
That's the entire point of the article. It isn't a complaint about the size of websites. It's a complaint about the ratio of size to usefulness. News websites in particular are getting bigger, but they're not getting better.
> The problem is if the sites aren't delivering any additional benefit to the user in return for that bandwidth.
While this isn't directly tied to bandwidth metrics per se, there's also something to be said about usefulness from a utilitarian perspective: Instagram infinite scrolling experiences full of HD beach selfies aren't exactly advancing civilization in any meaningful way, both in the sense that the impetus to appear "successful" is essentially glorifying sloth, and in the sense that consuming that type of content takes time away from more productive pursuits. Even political news, which in theory ought to serve to increase understanding of policy, these days often looks more like real life soap operas.
It's tricky to talk about this in terms of metrics, since obviously downtime and "useless" entertainment have their place as decompression activities, but I often get a feeling that nowadays there's too much emphasis on vapid fun at the expense of "productive" fun (e.g. activities like hacking on a side project, or even just reading a properly intellectually stimulating article over yet another shallow hot take).
Sometimes I can't help but cynically think that perverse "engagement" incentives are at least somewhat connected to ballooning bandwidth consumption.
> Sometimes I can't help but cynically think that perverse "engagement" incentives are at least somewhat connected to ballooning bandwidth consumption.
Companies building these user experiences are often incentivized to have fast experiences that could avoid bandwidth consumption. For example, some sites started serving GIFs as videos because the images were used as discussion replies. The video files were smaller for users. I won't claim this is universal.
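The GIF-to-video swap they're describing is roughly this (file names made up); a short muted looping video is typically a fraction of the size of the equivalent GIF:

    <!-- Behaves like a GIF (autoplays, loops, no sound) but compresses far better -->
    <video autoplay loop muted playsinline>
      <source src="/media/reaction.webm" type="video/webm">
      <source src="/media/reaction.mp4" type="video/mp4">
    </video>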
IMO the woes of modern sites and social media are much more connected to everyone now being connected on a mobile device. They are reachable and you can manipulate our vices. A cross-cutting concern is that this has created significant demand for developers and product managers. People are throwing together products more quickly than they will admit and then moving on. The next set of people are only allowed to pile on. Lots of experienced people know the issues and how to fix them, but across the whole industry they have little sway.
Overall, it's a much more complicated human-centric issue than the one you've described.
Can you really say that websites today aren't delivering more value to users than they did in 2010? Just looking at the sheer growth in internet users, online services, online communication, e-commerce etc. in that time frame shows that that isn't true.
News sites are most susceptible to bloat. They deliver exactly the same amount of news as ten years ago, but today they have a million more trackers and other bits of javascript.
Web developers need to take a step back and realize that plain HTML/CSS can fulfill the needs of 90% of the basic text-on-a-page format of most websites out there, especially with the advances in CSS and form/input controls we've had in the past few years.
When I was putting together the Standard Ebooks website[1], I wanted to take cues from the epub books we were producing: having a classic document-centric format without JS, based on traditional, accessible HTML elements, without frameworks, and with an emphasis on semantic structure without <div> soup.
The result is a site that loads almost instantly, whose HTML transfer is small (12KB for the homepage and 24KB for the ebook grid), whose CSS transfer is 10KB, and whose JS transfer is 0. The bulk of the network transfer goes to the initial download of our self-hosted fonts and cover images. Yet the site looks perfectly modern and even has features like grids, form effects, transparency, responsive layout, and dark mode. It's hosted on an inexpensive DO droplet and has survived multiple HN hugs without breaking a sweat.
If you're making a website, chances are high that you're just making a document that maybe is a CRUD app. Plain HTML was designed with documents in mind, HTTP/REST was designed with document display and CRUD in mind, and CSS is advanced enough where we can make it all look really nice. Go back to the basics!
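To illustrate the "looks perfectly modern without JS" point, here's a minimal sketch (invented class names, not our actual markup) of a responsive grid plus dark mode done entirely in CSS:

    <style>
      .ebook-grid {
        display: grid;
        grid-template-columns: repeat(auto-fill, minmax(12rem, 1fr));
        gap: 1rem;
      }
      /* Dark mode with zero JavaScript */
      @media (prefers-color-scheme: dark) {
        body { background: #111; color: #eee; }
      }
    </style>
    <main class="ebook-grid">
      <article>...</article>
      <article>...</article>
    </main>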
Just checked it out, this really is a lovely site you've built.
Side benefits include way better accessibility (zoom isn't broken!), and I can browse with JavaScript turned off with no problems.
My only complaint on usability is an inability to select intermediate pages down at the bottom on the "ebooks" page. My options are basically just 1st, current and last page. I can see some downsides to that, but the filter functionality works well.
The problem isn't necessarily web developers, or certainly not all web developers. Thanks to constant penny pinching, very little gets developed from scratch in-house anymore - instead everything is put together using a myriad of third party code (often cheap third party code) that does things in very opinionated ways, meaning all of their dependencies become your dependencies, and every time you add another third party lib you get more dependencies. So you end up with a gigantic base of dependencies - none of which can easily be eliminated because its just a giant house of cards.
The alternative is every shop building their own functionality from scratch, with the result being 100s of competing libraries all doing very similar things but slightly different.
What needs to happen is the same as in most industries - end users / customers needs to realise the real cost of things and pay that full price. No more cheapest possible solutions. No more making do with lowest common denominators. Pay more money for better solutions.
The point is that front-end libraries are not as critical as most of today's web developers think they are. Sure, you can use a well-known templating library on the back end to assist in constructing HTML before you output it. But once the HTML is in the browser, 90% of it is page-based documents with some CRUD sprinkled on top. Front-end libraries are rarely required given modern HTML/CSS and modern back-end languages like PHP/RoR/Python or even JS, and it's front-end libraries that are causing the bloat.
There's no need to develop from scratch: Use third party libraries on the back end to output plain HTML/CSS on the front end.
I really long for a modern, declarative web without client-side scripting. 99% of websites should not use JavaScript at all. There's a lot we could do if everything was more standardized and implemented client-side.
For example, why can't we have auto-refreshed content standardized? Something like <section refresh-from="https://foo.bar/example-section" refresh-time="60s">, where the user could accept/deny the auto-refresh and adjust the timer, would make JavaScript completely meaningless for websites like Twitter.
The overall experience would be much greater: no trackers, no tab using half of your RAM/CPU to mine cryptocoins, no targeted ads, no weird behavior when switching internet connection (because everything would be handled by the browser itself, not by JS requests which may or may not randomly fail).
Gemini is a really cool protocol, but I appreciate some aspects of the web like images and videos (with the built-in players, no JS please).
We actually do have auto-refreshed content standardised. <frame> instead of <section>, and <meta http-equiv="refresh" content="60; https://foo.bar/example-section" /> (where the URL to refresh from is optional).
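Concretely, something like this does it with no JavaScript (using an iframe rather than the older <frame>; the URL is the one from the example above): the parent page embeds the region, and the framed document reloads itself every 60 seconds.

    <!-- Parent page: the auto-refreshing region lives in its own frame -->
    <iframe src="https://foo.bar/example-section" title="Latest updates"></iframe>

    <!-- In the <head> of example-section itself: reload every 60 seconds -->
    <meta http-equiv="refresh" content="60">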
The web could do all the things we really needed it to (except embedded video and games), a long time ago. But programmers wanted more natural ways of doing it, which converted a document format into an application framework into an operating system.
Yeah, you're correct; what I had in mind was more like dynamically loading new articles, for example for a webchat. Not "refreshing" as in reloading the whole iframe. Like built-in RSS, but using semantic HTML.
> But programmers wanted more natural ways of doing it, which converted a document format into an application framework into an operating system.
I think it's more like hostile actors (including Microsoft itself) had so much of the market share that trying to standardize things was perceived as a lot of effort without success, despite lots of people trying. JS was the worst we could come up with, but everybody supported it more or less...
I make a website builder and it is a constant battle to try to keep websites to a certain size.
Imagery and video are the biggest enemies. Partly because users don't know the filesizes of their imagery (so we either recompress / resample and then image quality suffers, or we leave imagery untouched and then you have a bloated website).
Then if you start embedding 3rd party things like video players, size just explodes. I've noticed, much to my chagrin, that my own homepage (https://pinkpigeon.co.uk) does 8MB in transfers, which on my 10Mbit connection is not exactly snappy.
The irony is that in terms of pure JS and CSS our sites are comparatively light (Wordpress and Squarespace sites are often much heavier on those two metrics).
But, people want rich content. They want imagery and videos. I have considered using WebP, just to find out that Safari doesn't support it. So I'm back to JPEG until Apple leaves the dark ages, or we can all agree on an image format like AVIF etc.
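(The usual workaround, for what it's worth, is to let <picture> negotiate the format, so browsers that understand AVIF or WebP get those and everything else falls back to JPEG; the paths here are made up.)

    <picture>
      <source srcset="/img/hero.avif" type="image/avif">
      <source srcset="/img/hero.webp" type="image/webp">
      <!-- Fallback for browsers that support neither -->
      <img src="/img/hero.jpg" alt="Hero image">
    </picture>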
Page size is pointless if you delay your users for a few seconds with cookie banners, newsletter prompts, ads and SEO keyword soup.
I built a fast website. It's not just small or quick. It's also straightforward. There's the main article, and a sidebar with the table of contents. Way below is a list of related articles. The content itself is also straightforward.
At this point, such websites are so rare that I consider it a competitive advantage. Donors often mention the design, so I figure it's working. Search engines also seem to like fast and straightforward websites.
We need to create a measure of web page size that, like wire gauge, is a smaller number the bigger the size. Call it "performance index" or something like that.
Then disparage the small number earned by bloated sites and demand bigger numbers. Brag about the big numbers you achieve and charge more.
We aren't smart. Not smarter than the people who made the last few thousand years of history, anyway. We're the same dumb animals that are attracted to bigger everything even when it otherwise negatively impacts our own personal quality of life.
Call it the Predicted Impact of Performance on Engagement. PIPE. This web site has a pathetically small pipe. It needs to have a much bigger pipe. Yes, we are this dumb, and we ought to account for it.
The most common device used to access the internet worldwide is still a cheap Android phone on a slow (and likely expensive by local standards) network.
We continue to fail those users completely - no wonder they are forced to stick to Facebook and WhatsApp on zero-rated internet plans.
Most developers have good connectivity and powerful hardware and don't seem to care that 2 billion people in the world have very poor and unaffordable connectivity.
I think it has to do with making money. People on cheap android phones may be less likely to buy your product, so it might not make sense to prioritize their needs.
Anecdotal experience but I have found mobile data rates to be extremely cheap in developing countries compared to western ones, even after adjusting for local purchasing power. "Expensive data" isn't really a thing outside of USA, Canada and Europe and maybe a few other countries. In India, for example, the cheapest 4G data plans will give you 50GB+ a month of usage.
I don't think the issue is always about the cost of plans but the fact the network backhaul isn't great in a lot of places. I don't know if it's still the case but India's 4G had wide coverage but terrible speeds and latency (anecdata from Indian friends).
Cost does then come into play when bloated sites eat into small data caps. A 50GB plan seems like a lot if you've got a wired connection at home you use more often. If that 50GB is shared by multiple lines and is the only Internet service you've got it doesn't go that far.
As someone who runs a SaaS[1] I'm always really surprised by how slow many websites are, even those clearly run by well funded teams pushing hard to grow their service.
The following two things seem axiomatic by now to me:
1. It's well established that faster websites convert better in a strongly correlated relationship. From Google/Amazon scale case studies down to smaller ones, the faster your website is the more money you'll make...
2. How fast your website runs is almost entirely within your control, whereas factors like competitor positioning, Google's ranking algorithm, whether your service is mentioned in a viral twitter thread etc. etc. are all not!
Given these it seems spending a few developer hours after each set of site changes to keep things at a perfect 100 PageSpeed score is a very high return investment of time. Especially when some performance improvements affect 100% of prospective buyers, whereas say adding a specific feature might only improve conversion for a small segment of your funnel's first step.
Your arguments are what make landing pages fast. They're still not enough to push for the SaaS internals to be fast, or really any other page not related to conversion.
This is definitely a good point. I wonder if anyone knows whether the Chrome User Experience Report [2] dataset will be used in the upcoming Google Page Experience Update [1]? If it is included then perhaps this will create pressure for faster whole-site experiences. Basically, if your landing pages are fast but your actual SaaS dashboard is slow, then this will be reported back to Google by Chrome users - and then potentially affect your ranking.
For those who aren't familiar with Chrome User Experience, Chrome collects website performance data while you browse under certain circumstances (I think you need to be opted into sync) and so Google essentially has detailed RUM data for any web origin with a decent amount of traffic.
It would be nice to have a protocol where the HTTP client can say "I'm on a limited connection, give me the good stuff" - as often I only need a few kilobytes of data but I cannot view them until the entire multi-megabyte site finishes downloading - which it never does.
Of course it would immediately be abused to mark advertisements as "high priority".
This exists already. There's a 'downlink' client hint in Chromium that can inform the server what bandwidth the user has available[1]. The server can then use that information to tailor what it delivers to the browser. It wouldn't be that hard to add a dropdown or a dialogue to let the user set the value manually.
There's also a less fine-grained "save-data' hint that just asks for a smaller download[2].
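A rough sketch of what the server side could do with those hints, in plain Node (the header names are real; the 1 Mbps threshold and the responses are just placeholders):

    const http = require('http');

    http.createServer((req, res) => {
      // Ask Chromium-based browsers to send the Downlink hint on subsequent requests
      res.setHeader('Accept-CH', 'Downlink');

      const saveData = (req.headers['save-data'] || '').toLowerCase() === 'on';
      const downlink = parseFloat(req.headers['downlink'] || '');
      const constrained = saveData || (Number.isFinite(downlink) && downlink < 1);

      res.setHeader('Content-Type', 'text/html');
      res.end(constrained
        ? '<!-- lightweight variant: smaller images, no optional embeds -->'
        : '<!-- full-fat variant -->');
    }).listen(8080);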
"Reader" modes mostly seemed to fill this gap in browsers when all you were dealing with was text. However, Google (AMP) and Facebook (Instant Articles) attempts to apply this model to all browser content were largely failures.
We can debate whether this was due to the fact that both companies get most of their revenue from ads (probably correct) or just that the protocols were made to appease publishers - and content owners more generally - as opposed to the people who actually read websites who vastly outnumber the former two groups.
On the plus side, it does bring a feeling of satisfaction that with things like Create React App, I can send a really small payload to my server (~60 LoC) and have a site come out on the other end.
I use Firefox mobile, and sadly, many sites break reader mode. Paywalled sites especially have gotten smart -- if you view an article in reader mode, it will display a different article instead of the one you meant to read.
Highly recommend browsing with JS disabled in your browser. I know it's impractical for most web users, but you can eliminate 95% of the annoyance of the modern web in one fell swoop with just this one option.
- WordPress: "WordPress is actually very noscript-friendly"
- Amazon: "At first glance, Amazon does a cracking job with its non-JavaScript solution...On closer inspection, quite a few things were a bit broken"
- Google Calendar: "I was disappointed that there wasn’t a noscript fallback"
- Facebook: "flat-out refuses to load without JavaScript, but it does offer a fallback option... a site which offers a separate version of its content for noscript or feature phone users."
- LinkedIn: "LinkedIn...never loads, so all I could do was stare at the logo."
- Instagram: "Instagram gave me literally nothing. A blank page."
- GitHub: "GitHub looks almost exactly the same as its JavaScript-enabled counterpart."
Note: some of these sites may have changed (improved?) since 2018.
I guess it depends on the annoyance? I use NoScript, and it's true that sites load a lot faster. Because an empty page that reads "You must enable JavaScript to view this app" would qualify as a very small payload by almost any standard.
I use noscript too. I try to avoid such websites as much as possible, because they obviously don't want me browsing their website. We need to apply pressure on webmins so they make sure the experience stays good with JS disabled.
Can you blame them for not wanting you to browse their website? Using noscript is probably a pretty decent litmus test for non- or anti-consumerist values, which, in turn, implies being a low-value audience for advertising. Why waste any more bandwidth and server resources than you have to on someone who would be, best case scenario, negatively profitable?
Third party Javascript is the problem, and adblockers take care of that. Turning off JS entirely often blocks content on sites, be it images or something else.
Third-party JS isn't behind sites that perform so badly they make the whole system slow down even when you're not on the page, or eat memory like it's free. Both those things can happen on sites that aren't "apps" at all, or that don't need to be as huge and wasteful as they are to achieve the same levels of "app-ness" that they have (gmail is a notable offender in the latter category, but it's far from the only one). Those problems are largely due to really bad judgement, in first-party code, about where (and whether) to attach Javascript event listeners, terrible use of timers, and all manner of programmer-productivity-and-QOL enhancements that invariably (it seems) manifest as much higher end-user memory use and lots of bad (for performance) memory access patterns.
My bosses put up a website for a client that had a 12MB (!) image on it. It took half a minute to download the image from the hosting server to my desktop. (One of the bosses was a photography nerd; I guess he was proud of that photo, and didn't want it compressed.)
I drew their attention to the problem, but they wouldn't even give me the two minutes it would have taken to reduce the image to (say) 512KB.
TBH I don't care that much about size bloat. Yes, some networks are faster/slower. Images can be rendered later, for most articles I don't even care about them. What really annoys me is the useless, endless amounts of javascript crap that are required to render the essential parts, often even the body copy, of websites on the client. I regularly just close many articles now because they keep jumping around in the browser for minutes while some scripts are loading and transforming the website, some idiotic mailing list overlays appear etc etc. This is not bloat, it's cancer.
4GB of RAM is already not enough to browse the web. (My web browser's sitting comfortably at 2GB, but sometimes it reaches the 4GB limit and locks up the entire OS – fortunately, I have a key combo set to kill the browser.)
Content that's valuable has already shifted to other mediums/media: newsletters, Telegram channels, video courses, webinars, Discord etc.
The age of good, free content is gone. Less than 10 years ago you could read every major newspaper (barring FT/WSJ) online for free. No paywall, no article limit. Those days are long gone, and the pageviews that went to them are now going to less reputable pubs that don't do their own journalism and just write 'takes' on what other big publications are saying.
The Internet as we know it has become a bottomless landfill of non-original, third-hand information, written by non-experts either trying to position themselves as thought leaders, book authors or speakers at your next conference.
It would be interesting to see a plot of website size vs average bandwidth over time. I do wonder if a 10MB page on a modern connection is any worse than a 10KB page on a dial-up modem.
This is obviously a simplification that ignores bandwidth prices, the wide spread in connection speeds, latency, etc.
The flip side is that these pages often do a lot more (and often for user benefit) than old websites. Everyone hates ad filled news websites, but we've always had ad/popup issues, and users do increasingly demand interactivity on pages and high quality imagery, which take a lot of bandwidth.
Might be worth looking at median rather than average. Or even some target percentile. My impression is that the bandwidth available to the top quartile of users has increased at a much faster rate than the bandwidth available to the third and fourth quartiles.
For example, last year I moved to a new neighborhood with a different (and wider) selection of internet service providers. My effective bandwidth (as measured by actual, not advertised, speed) increased by two orders of magnitude at the same time that my monthly Internet service bill dropped by ~15%.
It was a particularly noticeable change for me because it made the difference between me having a pleasant experience using my company's product from home, and my company's product being simply unusable from my home.
(But don't mention that to our front-end developers or they'll just get in a huff about the modern web and people needing to get with the times. As if the lack of fiber in my [urban] neighborhood were somehow a choice that I made for myself.)
The key is density. I live near a large coastal city and have 500Mbps-1Gbps fiber internet, my family lives in a very small town in the Midwest and can get a 50Mbps-300Mbps cable internet, but people who live in deep rural areas have limited options.
The reality is nowhere near as cut-and-dried as urban vs rural. I moved 2 miles, from a denser area to a less dense (but still urban) neighborhood. Also the newer neighborhood has a much lower median household income. Still. Two orders of magnitude faster. Not exaggerating.
I wouldn't look at the average or even the median. I would look at the 90th percentile.
In cities and for many tech workers it's easy to get really fast Internet connections. What about rural locations? For example, 90% of the US is rural. What about suburb or low income locations? What about countries that do not have the infrastructure investment of the west?
As more things move online only or mostly online it's important to not look at the experience for those with the top speeds but those who are near the bottom. Especially for those who are on the bottom end of speeds and bandwidth limits.
We're so far past the days when competitions were held for the best website under 5 kilobytes... which was then increased to 10kB to allow for JS [1]. Last time it was run was 2002.
Just goes to show that software devs always squander more resources faster than hardware developers can create them. It is kind of like when you increase the radius of a ball by 2X: its volume grows by 8X.
Except this shouldn't be a law of physics. Most of it seems to be for developer convenience and faddishness. Perhaps if they noticed just how much everyone rushing off to the latest framework resembled every teenager rushing off to buy the latest fast-fashion trend, it wouldn't happen so much.
It is just incredibly disappointing to see performance of even major websites visibly degrading every month to the point of unusability (yes, even on a CAD-/gaming-level rig), while obscure sites are still snappy. These are the very well-funded sites that should be able to optimize and streamline everything.
"Most of it seems to be for developer convenience and faddishness."
Something like that.
The devs want to use the latest frameworks and libraries, because it makes their CV look good. Or the bosses want them to use a CMS like Drupal (it's possible to build a slim Drupal site, but in practice everything is conspiring against you).
Brochure websites are typically written once, thrown over the wall to the client, and never edited again for 5 years. The client knows no better, the web-shop has been paid, and any techie reporting problems with bugs or performance is ignored.
I particularly deplore the custom of loading Javascript libraries from some CDN. If you do that, you're jamming code into my browser that you didn't write. It can change from one day to the next, without the website operator knowing that's happened, let alone reviewing the changes (I've never known such a review to happen).
There's a handful of sites I care about enough to make a NoScript exception (banking, government). Apart from those, if a site demands that I run arbitrary code on MY computer, and fails to load if I refuse, then there's always other sites. Byeee!
> The devs want to use the latest frameworks and libraries, because it makes their CV look good
could be this, could also be due to bootcamps that teach the latest and greatest to keep up with the latest tech fads, also maybe because of the mirage that "if I become a react wiz, then I have better chances at landing a job at Facebook! $$$"
The author calls out the Apple Watch review website from The Verge for hijacking navigation, but honestly I remember reading this review and being really delighted by all the magical effects.
Do people actually enjoy browsing "scrollytelling" web pages? That's not a name I made up; there are enough of them out there that someone else has graciously come up with the moniker.
I always feel like I have to scroll slowly, much slower than I normally do, so I don't accidentally miss something. That's awful UX IMO. They would be better off creating two pages: one for regular users who want to simply read this extremely lengthy article, and the other a self-contained "web app" type interactive experience where you aren't limited to the CMS's page template and can design the layout the way you want.
> Out of an abundance of love for the mobile web, Google has volunteered to run the infrastructure, especially the user tracking parts of it.
Very well written article, even if it is almost as long as a Russian novel :) But I don't know how to check - can someone tell me how big the page is? And maybe, by comparison, how big this HN page is.
The article is about 1 MB as transferred; essentially all of that is composed of the images (the HTML itself is small at ~23.3 kB uncompressed, and the page loads no other resources).
The Hacker News homepage, when I tested it, transferred about 13.1 kB.
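(If you want to check this sort of thing yourself, the Network tab in your browser's devtools shows the total transferred per page load; for just the HTML document, a curl one-liner also works, though it ignores images and other subresources:)

    curl -so /dev/null -w '%{size_download} bytes\n' https://news.ycombinator.com/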
There are a lot of images, and 1MB, by the exacting standards of the author, is a (little) bit big, I think. Also, the page seems to be a bit... boring? Static images are unexciting, especially the stock ones the author uses. Not only that, but the text seems to be unnecessarily narrow and long. I do think a case could be made for so-called "bloated" webpages.
I can't browse Twitter anymore with my 3rd gen iPad. It was impossibly slow as recently as three years ago, and sometime in the last year they blocked my OS/browser version altogether with a "not supported" page.
This is a site that hosts 148-character text strings.
The worst imo is the trendy "app shell" approach where the skeleton loads and then the content. Never quite sure if things are ready to interact with. Also ideally the shell should load the parts appropriately sized so when the bloated json payload is eventually rendered, the page doesn't jump around. That's quite rare in practice though.
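The "appropriately sized" part mostly comes down to reserving the final dimensions in the shell's CSS up front, something like (class names and numbers made up):

    /* Skeleton placeholders hold the space the real content will occupy,
       so nothing jumps when the JSON payload finally renders */
    .card-thumbnail { aspect-ratio: 16 / 9; width: 100%; }
    .card-title     { min-height: 1.5rem; }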
I was a contributor to http://alt-f4.blog/ and helped get the site down to 1.2-1.9 MB, depending on thumbnail size.
500KB of that is the background image alone, and each of the 10 thumbnails is ~50KB.
Images take a lot more room than text.
Google's Core Web Vitals and its changing search ranking algorithm are going to force the issue. As soon as marketers are all aware that "obese" websites are going to be penalized in search, things will shift quickly.
Google's own sites don't pass CWV. Amazon product pages will never pass them. Facebook is a walled garden and doesn't care about SEO.
That's 3 of the largest sites in the world. Your ad- and tracker-ridden local newspaper website isn't trying to compete against them, and likely won't implement any changes.
The problem here is that Google has a severe conflict of interest; those very same marketers are the ones who provide Google with its advertising revenue.
I'm both lucky and unlucky to have used dial-up for a significant period of time. Unlucky because there has always been content that made my modem cry. Lucky because I know first hand what it's like to have a shitty connection and remember the experience vividly.
There's always Some Asshole that demands a website be some all-singing, all-dancing multimedia clusterfuck. That's a problem as old as the web. In 1996 it was a single giant GIF with a client-side image map or, worse, the instrument of Beelzebub, the server-side image map. Just like today, until that giant image downloaded there was no good way to navigate the page.
So web bloat has always been a problem. However I think the problem is worse today than it's been in the past. In the 90s there were fewer connection "classes": dial-up, ISDN, and fast enough not to matter. A vast majority of end users were on dial-up so the pain of web bloat was pretty egalitarian. Device capability also had a narrow range. The capabilities of the crappiest machine able to run a graphical web browser wasn't too far below a top of the line consumer system.
Today there's a far wider range of connection classes and device capabilities. Any given end user might even use different connections and devices in the same day. There are still some dial-up users, broadband ranges from terrible to fantastic, and cellular connections can go from great to terrible by moving between rooms. The capability delta between consumer devices has a huge range in some areas. A bottom tier Android phone or Netbook is vastly inferior to a flagship phone or high end computer.
While even the crappiest devices today are orders of magnitude faster than devices from twenty five years ago, web bloat severely taxes their resources. Even high end devices on a shitty connection have issues with bloat. If a web page is unusable until a 5MB JavaScript payload is received your iPhone 12 can't do shit with that page when you're in some area with spotty 4G and no WiFi.
Besides the Some Assholes making stupid demands about page content, I think there's a whole cohort of web developers that have been a bit spoiled with always connected high bandwidth connections and powerful development machines. Technological progress is cool but now there's even senior web devs that have never known the pain of dial-up or 2G. Even if their media isn't stupid large there's a whole design paradigm of the client pulling in disparate resources and rendering a page all client-side. This is very much informed by thinking a network connection is always on, always reliable, and always low latency.
There's only so much you can do about demands from Some Asshole, but designs that simply don't work without JavaScript or work poorly in bad network conditions really frustrate me. It's entirely possible to load content without JavaScript; browsers are really fucking good at it now. Building a usable page without images or video being loaded is also entirely possible.
While I don't think the world needs to go back to console browsers, unless your site needs some multimedia capability (YouTube, Imgur, etc) it should at least be viewable in lynx. If lynx can view it so can the world's microbrowsers, web spiders, and screen readers.
I shouldn't have to download the equivalent of ten copies of Doom to read a fucking blog or newspaper article.
To be clear: I hate bloated websites.