I have seen a lot of front-end developers who have no clue about the technical aspects of their work. They just use multi-megapixel images because 'they will look sharper'. When they need an icon from Font Awesome they just include the whole library. And because Bootstrap told them to do so, they nest as much DOM as possible.
I have seen websites where the homepage's HTML alone was over 1 MB in size! The only thing that got excited was my CPU.
Let me tell you: if you want a header with a background image where the text is aligned at the bottom you can just write:
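For example, something like this minimal CSS sketch (the class name and image path are illustrative, not from any particular site):

```css
/* Header with a background image and the text aligned at the bottom:
   a flex container with its content pushed to the bottom edge. */
.header {
  display: flex;
  align-items: flex-end;   /* align children to the bottom */
  min-height: 40vh;
  background: url("header.jpg") center / cover no-repeat;
}
```

No JavaScript, no framework, just a few declarations.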
And if you want to stick the header to the top you can use `position: sticky` in CSS instead of including a huge Javascript file that can do all kinds of fancy stuff you don't need.
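In its simplest form that is two declarations (assuming the header sits in a normally scrolling page):

```css
/* Sticky header: two lines of CSS instead of a scroll-listener script. */
header {
  position: sticky;
  top: 0;   /* the offset at which the element sticks */
}
```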
But I am not sure I can blame those front-end developers. Deadlines are tight and it takes effort to learn about the technical aspects of front-end development.
My personal standard is that a page should be ready in 1 second. For huge sites 3 seconds max. I've been creating small and huge websites for over 20 years now and never had a problem with these goals. This includes webapps built with Javascript.
The thing is that most people here belittle web developers but they do not have a clue what we do. At all.
In your pretty short answer you've already caught a mistake. Let's see:
> And if you want to stick the header to the top you can use `position: sticky` in CSS instead of including a huge Javascript file that can do all kinds of fancy stuff you don't need.
Your "simple trick" to avoid "including a huge JavaScript file" doesn't work in: Opera, Chrome for Android, IE and a couple more[1]. It does work in Safari, but not as you would expect. Also, position: sticky is still a Working Draft; it is subject to change at any time. Want the same functionality across all browsers? Better use JS.
Another thing is that most people here think "huge JS bundles" are what make websites work. Those are meant for web apps, though. If you want anything more than a static site, you'll need JavaScript. No way around it.
> doesn't work in: Opera, Chrome for Android, IE and a couple more[1]
Are we looking at the same compatibility tables? Caniuse lists it as working in Chrome for Android and Opera, with the exception of <thead> elements, which are not relevant here because OP's example was about <h1>, not <thead>. I'll give you IE, but that's already ignored by many.
> If you want anything more than a static site, you'll need JavaScript.
Forms are still a thing. Then there are hover animations, summary/details, video elements and a few other interactive things that don't need JavaScript.
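A quick sketch of script-free interactivity (the elements are standard HTML; the form endpoint is made up for illustration):

```html
<!-- A native disclosure widget: expands and collapses without any script. -->
<details>
  <summary>Shipping options</summary>
  <p>Standard (3-5 days) or express (next day).</p>
</details>

<!-- A plain form: the browser handles validation and submission itself. -->
<form action="/subscribe" method="post">
  <input type="email" name="email" required>
  <button type="submit">Subscribe</button>
</form>
```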
There also is a middle ground between "static page" and "huge JS bundle". A little javascript usually isn't what makes web pages slow.
Come on, even the caniuse page you linked says `position: sticky` is supported by 94.33% of browsers. For the ~5% left (very old browser versions or specific cases like Opera Mini, and likely even fewer real cases due to bot noise in the statistics) there is a thing called "graceful degradation". If you do things right your page won't break at all if that CSS property is not supported. Your header will just be positioned fixed or even static.
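The fallback can live in the cascade itself. This sketch relies on browsers simply ignoring declarations they don't understand:

```css
/* Graceful degradation for a sticky header. */
header {
  position: fixed;    /* older browsers stop at this value */
  position: sticky;   /* supporting browsers override the line above */
  top: 0;
}
```

Either way the header renders and the page keeps working; only the sticking behavior varies.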
So, really, no huge (or small) JavaScript file needed at all in this case. Like most other bloat in web dev, that's just laziness that accumulates.
Well, statistics don't work in that linear way. For a start, that 94% is a global percentage that isn't evenly distributed worldwide, let alone on HN...
Anyway in my comment I was actually saying that if you do things right it will work even for that 5%, without any JS. Maybe it wasn't clear enough?
Flip it around: you're fucking up the browsing experience of 9.4 million people for a minor graphical effect that wrecks scroll behavior and hammers the CPU.
Just use CSS. No one will miss the sticky bar if it doesn't show up on their browser.
> If you want anything more than a static site, you'll need JavaScript.
Who doesn't want static sites? I certainly do. They're faster and easier to use. I've never heard a user say "I wish my back button didn't work and the page rendered wonky so that buttons move as I try to click them".
It's web developers that push to turn simple HTML documents into "applications".
Every now and then I wish browser writers would compare timestamps between the link click and the last relayout, and throw away stale clicks instead of navigating to who knows where.
The problem, as always, is (poor) developers making poor decisions. Implementing static sites as a JS-required, JS-heavy pages is something that some poor development teams are doing.
Bungie, the game developer, has been replatforming their site, and they've turned what is mostly static content pages into a purely client-side single-page app which requires JavaScript to display anything: https://www.bungie.net/7/en/Destiny/BeyondLight It's sad, because there are extremely productive ways to build this out using their preferred technologies (React) but deliver the static content from the server in the initial page load AND continue doing SPA-style navigations if desired.
---
There are lots of webpages out there, that means there's lots of developers making them. Some developers will be good, some will be average, and some will be bad.
The 'problem' with websites is that it's a lot easier to see when they've been done poorly, compared to, say, native apps.
"Want the same functionality across all Browsers? Better use Js."
This is not the point. The point is some developers just include big Javascript files because they don't know what other methods and options there are.
I have seen developers include JavaScript to 'fix' things across browsers, but it made things worse because the `let` keyword was not supported in some browsers at the time.
So it is not about whether you should use JavaScript. It is about the fact that a website or webapp can and should be fast. But when a developer does not know about the technical aspects, we end up with multi-megabyte pages that take 10 seconds to load.
--
About `position: sticky`: it is not smart to promise a client a pixel-perfect experience across all browsers (unless they want or need it and will pay the extra price). In the case of a sticky header you can support all modern browsers with two lines of CSS (`position` and `top`). The other ~6% won't have a sticky header, but still a perfectly working website. So imho you did not catch a mistake, but a real-world example.
> If you want anything more than a static site, you'll need JavaScript. No way around it.
I’ve been noticing this kind of rebranding of what static means a lot. It seems to create lots of confusion (you can see it in this thread already).
Traditionally, “static” websites were websites without a server-side programming language backing them—sites whose content was unchanged by any users. Javascript, CSS, and HTML are the tools used to build static websites.
Today, some developers seem to use static to mean Javascriptless.
The term fits both situations pretty well, so I get why it happened. I don’t know if there’s a better less-ambiguous term used for either of these things now?! Non-database-backed?! Nonjavascript’d?
The term for JS enabled pages used to be DHTML back in the day. You'd be talking about a static-server DHTML site. Or something like that. https://en.m.wikipedia.org/wiki/Dynamic_HTML
I'm no expert, but I hope someone at quora has some sort of light-bulb moment reading this....
Looked at the source for one answer because of their login wall (couldn't be bothered to re-create my account), and getting to the actual answer takes about 30 (!) nested divs. Then each line of text (provided you've returned to the next line in the visual text editor while writing your answer) is in its own <p>, which is fine on its own, but each <p> is styled with a boatload of classes for no apparent reason....
I'm fully with you as far as loading times go, but even a simple WordPress install these days adds incredibly useless stuff that makes it non-trivial for the average blogger to simplify...
1. Lack of expertise/interest, as you say. One root cause here is that there are many devs in web development who are self-taught, with no CS background.
2. Time pressure and contractual agreements: things need to be finished ASAP to reduce cost, and implementation contracts usually specify features and looks, but not performance or maintainability.
3. Many effects, such as sticky elements, couldn't be done in CSS just a few years ago, so devs got used to including libraries for these things. In fact, I think many modern CSS features got introduced because they were already widely used, but with JS implementations.
> there are many devs in web development who are self-taught, with no CS background.
That's an excuse when you're just getting started, but after 3 or 4 years everyone has had time to figure these things out. There are papers, blog posts, YouTube videos... even Wikipedia can help in understanding performance on the web. If you've been doing web development for 5 years and still don't know these things, it's not because you're self-taught; it's because you're lazy and/or being overworked with not enough room for learning.
> But I am not sure I can blame those front-end developers.
Also in a lot of cases you can build the fastest website with minimal markup and only the necessary CSS/JS libraries for the base functionality but if then for "business reasons" you need ad networks, google tag manager, analytics scripts, third party support popups there's only so much you can do as a front end developer to keep the site fast.
This all sounds true, but it breaks down on the fact that bare HTML and CSS are too low-level and quirk-ridden for high-level programming under reasonable budgets.
Also, it's not the programmers who are guilty of such bloat, but the tools they use, HTML and CSS included. There is no compiler that takes an idea (or a representation of it) and turns it into a highly compressed, DCE'd, -O3, LTO'ed bundle. There is some progress in this direction, but it is not that great and requires effort to use.
because Bootstrap told them to do so, they nest as much DOM as possible
Bootstrap also does that for reasons. Reasons that are unclear to Bootstrap itself, because they all lie in the inconsistencies, legacy-ness and incompatibilities of the allegedly most compatible platform ever. And when we try to discuss how to extend it (not even replace it!), someone brings idealistic arguments about how it should stay as is, because otherwise (again, allegedly) they couldn't print a webapp or screen-read an HTML5 platformer game.
It sounds like what is needed is an 'optimising compiler' for the entire front end that would spot patterns like those you mention and rewrite them into something better, and that would tree-shake out unused stuff.
I work for a company dominated by non-CS people. They brought in a contractor because the contractor's CEO is a good talker. This contractor is full of full-stack JS devs. Now we have their ideas implemented on edge devices. Noice.
Edge devices are usually those operating on the edge of your network, topologically speaking (e.g. end-user devices which only have one connection to the network).
Noice is just an awkwardly phoneticized spelling of the word nice intended to convey a sarcastic tone.
What is going on here? Why have none of the commenters read the article? Perhaps because it's phrased as a question and people didn't realize it's a link?
Anyway, this matches my expectations: people tend to be overly negative and only remember the good parts. The mobile web as a whole has gotten faster due to network speeds+cpu improvements.
It is worth noting that pages are doing more after loading now than they used to be though. This won't show up in onload or first meaningful paint, etc. So the first paint is fast, but then if you try to scroll immediately afterwards you'll probably hit some jankiness while the rest of the page loads asynchronously (but only kind of asynchronously, since there's a single main thread).
Some other things that could cause the regression are that more people own a budget Android phone now than before. People may not realize how slow these phones are. The single core performance of the top budget phone, the Samsung A50, is comparable to an iPhone 6 which came out in 2015.
> It is worth noting that pages are doing more after loading now than they used to be though. This won't show up in onload or first meaningful paint, etc. So the first paint is fast, but then if you try to scroll immediately afterwards you'll probably hit some jankiness while the rest of the page loads asynchronously (but only kind of asynchronously, since there's a single main thread).
The question is whether those pages are doing more for me, or whether they are doing more to me. When I load a page that would have been a normal hypertext document 10 years ago, instead I get a clown show filled with "we have cookies" pop-ups, tracking scripts, ads, ad-blocker-blockers, and more.
> Some other things that could cause the regression are that more people own a budget Android phone now than before. People may not realize how slow these phones are. The single core performance of the top budget phone, the Samsung A50, is comparable to an iPhone 6 which came out in 2015.
If I owned an iPhone 6 in 2015 and own a Samsung A50 today, and the web is slower and jankier today than it was five years ago, then isn't it fair to say that the web got slower?
Remarkable, isn’t it? A bit like paying for cable and getting more ads than content. Puts that recent post on auto play videos in new light, too.
It really sickens me how advertisers are allowed to tell us whatever they want if they just pay. Not surprisingly, the CCP took out a full page ad in my country’s most prominent newspaper to spread propaganda about the Hongkong protests.
> If I owned an iPhone 6 in 2015 and own a Samsung A50 today, and the web is slower and jankier today than it was five years ago, then isn't it fair to say that the web got slower?
Yes. But what do you suggest as an alternative? Refuse to take advantage of technological progress to cater to the small portion of the population who haven't updated their phones in half a decade?
I have, and the article starts off with the assumption that it hasn't, then ties itself in knots to reach that conclusion.
The network speeds have increased, network latency has decreased, the hardware has gotten faster and we're at best stuck in the same place we were in 2010.
>Page weight has increased over time, but so has bandwidth. Round-trip latency has also gone down.
>Downloading a file the size of the median mobile website would have taken 1.7s in 2013. If your connection hasn't improved since then downloading this much data would now take 4.4s. But with an average connection today it would only take 0.9s.
The problem today is that the average website sends a couple dozen to a couple of hundred requests to complete a load. The average website 10 years ago sent a couple to a couple of dozen requests for the same thing.
So after 10 years of constantly improving technology and spending $5000 on phones to keep up with the latest CPUs the performance is pretty much the same.
Imagine if you had to buy a new car every 5 years to drive at the speed limit. That is the situation we are in.
It's not about "the web" it's about various schools of web building.
This is most noticeable when a property fundamentally changes its approach (Reddit), or when Twitter did it a while back and then (sensibly) retreated.
A better, more sensible approach than, say, graphing CPU clock speeds would be to segment web development into these various schools, give them names, and then characterize them accordingly.
There's really only two ways to talk about this problem: one is hopelessly divisive and factional and the other is irrelevant and useless.
That sounds unpleasant? Correct! That's why it's still a problem and getting worse.
When the "make things better" axe falls on the fingers of the "mostly harmless" it's the passions of the axe wielder that get the focus and the blame. So instead we all slide into mediocrity together. It's the path of human institutions and the web isn't immune from the pattern.
> The mobile web as a whole has gotten faster due to network speeds+cpu improvements
Making something faster by throwing more hardware at it doesn't meaningfully count as making something faster IMO, you can make the most inefficient piece of software "fast" by throwing the biggest CPU and network you can find at it.
The real issue is why should an otherwise capable CPU from 5 years ago struggle to render the average website today, when it really shouldn't be that hard. Scrolling through someone's marketing website _should_ be a painless experience on even a low-end budget phone.
I don't know the direct answer, but I have a deduction in my head to work with. I'll just put it out there: I think speed is deeply impacted by high-level frameworks that parse or compile at runtime. React Native is a framework on Android, which is a framework on Java, which compiles at runtime. I don't know if anything in that chain compiles down to assembly or machine code before you open an app made in React Native. That, tied with the bloat of just-in-case background services sitting idle, eats bandwidth. Garbage collection checks operate in a loop, checking again and again whether all these unused but loaded processes still exist at their addresses. Pile those on top of each other, and it seems relatively easy to see how modern CPUs don't seem much faster than chipsets from 5-6 years ago.
I stopped programming around 8 years ago because I hate the current MVC model most software is created and maintained with. What got me interested recently in dipping back in was a video on branchless programming. I love the idea of unit testing at the machine code level for efficiency, and then figuring out how to trick the compiler or runtime and the chipset into making quick, predictive outputs to reduce idling on branches or making 15 steps for something doable in as little as 4.
That feels like a completely opposing direction to take given the current priorities of engineers across almost all industries, even oldtime ones like Gaming.
> The real issue is why should an otherwise capable CPU from 5 years ago struggle to render the average website today, when it really shouldn't be that hard. Scrolling through someone's marketing website _should_ be a painless experience on even a low-end budget phone.
Because marketing/product/design decides to add bells and whistles. Optimization is also not zero-cost effort. The business has to pay for it. I would assume that most business don't think it is worth it.
>The real issue is why should an otherwise capable CPU from 5 years ago struggle to render the average website today, when it really shouldn't be that hard.
That depends on what counts as a capable CPU. I would say even the iPhone didn't have a capable CPU 5 years ago (iPhone 6 era; the iPhone 6s was not released yet).
And it is even worse on Android, and the current state of things isn't much better. Hopefully ARM will catch up in the next 5 years, but that means it will take another 5 years to filter down to the market.
iPhone 6 came out in 2014; the iPhone 6s on this day five years ago. Both are capable smartphones, the latter capable enough to support the latest version of iOS.
There are actually cases where faster infrastructure has slowed down a system significantly. E.g., British railways (in various organisational forms over the years) ran rolling post office trains, which grabbed mail bags on the go, sorted the mail and dropped it off again without any halts, starting in 1838. This played quite a role in the evolution of fast delivery of national newspapers, up to 8 deliveries of mail per day in urban centers, etc. By the 1960s the procedure had become too dangerous for the increased speed of trains (with several firemen losing their heads in accidents involving the scaffolds for handing over the mail bags), and the last Travelling Post Office ceased operations in 1971. Moral: by speeding up the network by a few miles per hour, mail delivery slowed down by a day.
Similarly, as mobile network speeds increased, expectations of what could be done with them rose faster than the actual speed of the infrastructure. Add high-res resources and previously unheard-of page loads, and you've established a system of ever-increasing expectations and visions, which will always be bound to significantly outclass the real-life capabilities of the infrastructure. As long as we stick to this paradigm, increasing network speed will always result in a slower web, due to the Wirth factor involved. I'm afraid this will be even more true for any further significant speed-ups, like those promised by a fully operational 5G network. (Also, visions and concepts that are apt to exploit and even challenge the capabilities of 5G will probably pose a new challenge to any hardware on the endpoints, which may eventually prove financially challenging for the average user, thereby introducing yet another significant gap and respective drops in average real-world performance.)
You can see the (preserved) system in action in this video ("Absolute History" YT channel) together with a bit of backstory on it: https://www.youtube.com/watch?v=GeMkOruNht8
Fascinating. I know what I'm doing for the next hour.
It does make an interesting comparison to the internet today. The Victorians had the wired telegraph for information, but without the trains most of the advancements of the 19th century would not have been possible. Looking naively at it, it seems the closest technology we have today would be flying drone swarms.
Another interesting aspect of Victorian railways: originally, passenger coaches had compartments spanning the full width of the coach with doors on both sides, typically with room for 8 passengers. While this puts a maximum of passengers in a coach, each of the compartments is totally isolated and there is no shared infrastructure, like bathrooms or a chance to collect any sort of food, etc. Hence the train has to make more halts at stations to allow for passenger needs (which may also collect a bit of extra profit at the stations). At some point, coaches with corridors were introduced, now offering room for just 6 passengers in each compartment. A drop of 25% in capacity! On the other hand, based on average speeds and frequency on your network, you may more than compensate for this with less frequent and shorter stops, thereby increasing the overall throughput of the system. Where is the exact point, in the evolution of the technology, of your system, and of market acceptance, at which this becomes a viable option? (Include any losses on side business at the stations in your considerations.)
I once saw a humble, but quite astounding artefact on TV: a box for mailing eggs, of course, Victorian. Behind this hides an entire system of postal service and mail train delivery. A, say, Cornish farmer would put fresh eggs for an individual customer in said box in the early morning. Those boxes were then collected by the postal service and shipped by train to London, where it was delivered to the customer's home, just in time for breakfast, the very same morning. (Amazon next day delivery pales in comparison.)
Trains were too fast to safely hand off bags of mail between train stations and trains while the trains were in motion, so instead they had to stop the trains entirely, which was slower than before. (I'm not sure why they couldn't just slow down the trains; maybe it was more a combination of higher speeds and changing priorities).
These were just special coaches added to high speed express trains. So the system was interconnected to the system of HS passenger trains as a shared infrastructure.
The network is an interconnected system relying on average speeds and throughput. Slowing down or adding halts for the safety of a particular service probably wasn't an option. Hence the end of service. (The speed of that particular service consequentially dropped back to stationary infrastructure and transit between those hubs, roughly what it had been before 1838.)
I think I’m getting it now. When high speed passenger service ran at 50mph it could also carry mail and deliver 8 times a day. When passenger service increased to 75mph it was too dangerous to carry mail the same way so mail service dropped back to once a day, probably on dedicated or slower trains.
I remember that the web was very usable with a 133 MHz CPU, 32 MiB of RAM, a 3600 RPM hard disk, and a 1.5 megabit (0.128 megabit up) connection.
That budget Android phone blows away the hardware that I was using. A quick search for the Samsung A50 tells me: "on Verizon's network in downtown Manhattan [...] average data speeds of 57.4Mbps down and 64.8Mbps up". That is 38 to 506 times faster. It has an absurdly fast 8-core CPU running at 2300 MHz. Ignoring the fact that MHz is a terrible benchmark, that is a factor of 138 faster. The RAM is bigger by a factor of 128 or 192. There isn't really any hard drive latency on the phone.
Yes, the web is slow.
The trouble is that browsers make no attempt to stop web sites from using infinite resources. The assumptions are that web sites will politely cooperate to share my computing resources, and of course I couldn't possibly want to actually use tabbed browsing to access lots of web sites, and we all discard our hardware as electronic waste after just a few years.
It's impossible to make any sane limits that would universally apply to web pages. An isolated environment for an arbitrary application is the web's purpose, not just loading a text document.
You can actually play hw-accelerated doom3 on a browser today no problems. No add-ons, no nothing needed.
You could require user consent for resource increases.
For example, start the RAM at 12 times the number of CSS pixels. When the limit is hit, freeze the allocations until the user authorizes a doubling of the limit. Web sites would need about 10 authorization clicks to go from 4 MiB to 4 GiB. That goes for everything on the page, all sharing the limit.
Web sites would quickly change to minimize that, out of fear that users might not keep accepting the resource usage.
CPU usage could be similar, probably based on threads. The default is that only a single tab in a single window gets any time at all. Everything else is suspended. Users can grant permission for stuff like music players.
Network usage would also need to be limited, though limiting the RAM and CPU will tend to limit network usage as a side effect.
It's more likely users would be mad that their browser update makes them play cookie-clicker to get to their sites.
By and large users are unaware of how much resources something uses, or should use. They don't really care about anything but getting from A to B as fast as possible, with as few interruptions as possible.
Computer resources are just like any other resource: expendable. Users will always use more if that means more convenience. Human time is very valuable. Accepting multiple dialogues would take even more time than loading a fat page.
As an addendum: there is no single resource you can bind the multiplier to; many sites use no CSS but lots of JS, or WebGL, Wasm, tables... There is simply no possible way to foresee what will be slow and what won't.
The "CSS pixel" is just a pixel, unscaled for high-dpi displays. The point is to avoid revealing the hardware while allowing a bigger starting amount for a bigger window. If you prefer, just pretend I wrote "32 MiB".
Human time is valuable. That is the whole point of this. My time is wasted when my computer gets so slow that it takes 10 seconds for the Caps Lock light to respond. My time is wasted when the mouse lag is so awful that it takes me half an hour to kill a few tasks. My time is wasted when I have to walk away from an unusable computer, checking back every few hours to see if the OS might have killed the biggest task.
Web browser resource consumption is why I have a fresh new HN account. I had to power cycle the computer today, and it seems that Chromium won't save passwords over a restart unless I upload them all to Google.
All problems are fractal in nature; they require a problem scope in order to be solvable.
In this case, the question would be "how fast is fast enough?", which is: fast enough that the human operating the application doesn't lose focus.
For most pages I've viewed with my €150 phone, submit-to-interactive is between 1-3 seconds for the first non-cached load, and much faster when revisiting. This is a sufficient worst case for any conceivable tasks performed in web apps today.
Some exceptions that come to mind (like Reddit) have ulterior motives to force slowness so that people have to use their native app instead (which isn't much better, from what I've heard).
You say yourself that websites are now janky. That is also my experience.
In fact, despite new tech, it is a relatively recent phenomenon that I feel my phone being slowed down by a freaking website.
Furthermore, phones may be faster, but websites load slower and are janky due to unnecessary async loading.
So what’s the point of faster phones?
The point is that webdev is terrible.
The article goes into all sorts of hoopla to claim that the web isn't slower. Thing is, I can still open those websites of yore on my phone right now!
And to no one's surprise, the new web tech is indeed slower and imo also worse in user experience.
So yeah.
> The mobile web as a whole has gotten faster due to network speeds+cpu improvements
This point is agreeable, though if you browse the web on the same phone for 5+ years (my dad still uses his iPhone 6S), you may notice a difference over time.
Also, it's not a good sign if consumers are forced to go along with planned obsolescence just to keep their Internet browsing experience from becoming increasingly slow. The principle of progressive enhancement means that websites built today should work well on phones made years ago.
I have a plan for this, and it involves scraping content and using the web that way. This is something I have started work on, on a small scale for the stuff I care about the most.
Ideally (well, really ideally, companies themselves would provide APIs for accessing the content, but unfortunately that makes it difficult for them to make money, both directly through loss of ad revenue when clients don't show their ads and indirectly by making it easier to pirate stuff), we'd have a joint open effort to do this on a massive scale. For now I am doing it on a small scale on my own, writing tools for my own use only.
In addition to this I have also started work on retrieving the content that is walled off in apps that I use. For example, there are some magazines that I used to subscribe to for a while, and I’d rather be able to keep my access to the content indefinitely than to see it disappear whenever the publisher decides that the magazine has run its course and subsequently stops updating the app and then shuts down the servers that host the data.
On top of this, I have for a longer period of time (years now) been using, for example, Facebook as little as possible. I only use it for Messenger and for upcoming events, mostly. Meanwhile, Instagram (also owned by Facebook) is worth it for me to continue to use much more actively for now. But I am also slowly working to make something of my own to host content that I myself produce, with the intent of continuing to consume content shared on Instagram but cutting back on posting to Instagram and instead posting my stuff on my own server. It's not like any of my stuff gets much attention anyways, so for me it will not be a big difference in terms of engagement. Mostly the way I use Instagram, in terms of the content that I myself post, is that I post pictures and videos that I have made that I think are worth sharing, and then when in conversation with friends and acquaintances I sometimes pull up my phone to show them in person something that I did or made recently. A self-hosted service could serve the same purpose.
As for the increasingly slow experience of browsing the web, I've come to realize that this might in fact contribute to what the parent comment said about people on HN not reading the linked article. At least for myself, I find that I often don't click through to the linked articles, and I think the experience of slow-scrolling, megabyte-heavy pages contributes to this. I try, however, not to comment on the story itself unless I have read it first. Meanwhile, HN itself is lightweight and comfortable to be on. And often the comments will encourage one to click through to the linked story if it is worth reading, either directly stating that it is worth reading or indirectly, by quoting something good from the page or talking about some good data points or novel information from the linked page. (Novel to me, I should note.)
I went down this road once and ultimately didn't have the patience for what a nightmare scraping the modern web is; parts of what I built I still use myself. I wish you luck though.
> The mobile web as a whole has gotten faster due to network speeds+cpu improvements.
What's distinctly lacking in that assessment is web applications getting more resource efficient, or more conservative with storage. I'd argue that hardware and CPU improvements are enabling bad tech stacks, like a friend might enable an alcoholic. Sure, you can minify, tree-shake, etc but with sufficient hardware, you don't strictly have to.
I also don't see TFA as an actual rebuttal of the hypothesis, since it focuses on the US. Only about half the planet has an uplink at all, so you're going to end up with skewed results if you focus only on the top end of the technology distribution. While a rural connection in the US might be just as bad as a connection in rural India, I'd wager the mean and median connection speeds and latencies are still way better in the US. The Internet is for everyone, not just Silicon Valley engineers on a MacBook Pro connected through fibre optics.
I did, I just have a different interpretation of it. For instance, this sentence:
"Still, I don't think the mobile web – as experienced by users – has become slower overall."
To me it's hardly a positive. People are paying for faster speeds, faster phones, faster CPUs, and yet they get nothing in return.
The fact that hardware is faster is no excuse for the very real web bloat, especially when most of this bloat is due to stuff that adds zero value to customers, like tracking scripts, annoying pop-ups, overly complex and intentionally confusing tracking disclaimers, etc...
>People may not realize how slow these phones are. The single core performance of the top budget phone, the Samsung A50, is comparable to an iPhone 6 which came out in 2015.
Yes. And also.
The single-core performance of the entry-level iPhone, the iPhone SE, is faster than that of flagship Android phones.
And that's before counting system and software efficiency.
Yes, demonstrably and provably. Count the number of pageload indicators (spinners, throbbers...whatever you like to call them) in your daily browsing. They are everywhere...now.
AJAX (and, I suspect, the shadow DOM model) has proliferated in recent years, and there is no site design simple enough that someone hasn't thought to put every page element behind a JS call. Don't forget to put CSS in there, too!
Frontend developers are at fault for all this. There, I said it.
> Frontend developers are at fault for all this. There, I said it.
Let me trade one dead horse for another: Product and marketing teams are at fault for all this. Or if you prefer, the advertising industry is at fault.
Take the payload of any typical blog or (God help you) your small-town newspaper website and count how many requests correspond to the actual content. Now ask yourself, who's paying someone to bloat this website, and why?
Eh. If one wants to play the blame game, we can be here all day. The marketing dept asking to add sharing widgets is to blame. The team lead pushing for Squarespace so they can get away with hiring junior devs under them is to blame. The frontend dev using half of the React ecosystem on a static site + simple form is to blame. The designer insisting on huge eye-candy images is to blame. The UX person set on adding a carousel is to blame. The project manager prioritizing feature count is to blame. Etc etc etc.
The bottom line is that nobody really has any strong incentives to advocate for performance, so it falls by the sidelines.
I don't think there's really a problem with a couple hundred milliseconds of latency for web services. However, what often happens is that "modern" websites take many seconds to perform simple operations, which is clearly horrible design.
I literally despise the new Gmail UX. I've switched to the HTML version simply because I don't need the bloated crap they try to push at me while checking my mailbox...
Yet they keep adding more and more crap; soon Gmail is going to take over a minute to load.
> The frontend dev using half of the React ecosystem on a static site + simple form is to blame.
I once saw a website that took a full minute to load, and it was literally a page with images, text flowing around them, and then some download links. But the way it was built... my god, the way it was built.
That company brought in a new "CTO", and his solution was to dump the entire thing in Google Cloud. I shit you not, this CTO initially installed an EOL version of PHP, and when errors started happening, he started submitting PRs to revert the PHP code to be compatible with the EOL version.
His explanation was that that's what Google Cloud defaulted to, although I have a really hard time believing that.
I'm more inclined to believe he googled and blindly followed an old guide with an old ppa than I am to believe that google cloud by default installed an EOL PHP server.
I'm pretty sure he was a node guy who found himself in the PHP world.
I'll raise you automotive. Store sites are the worst. The industry's reason: it's all about tracking conversions and answering the question of how much it cost me to sell this car.
I hadn't touched "web dev" since the PHP/Dreamweaver/MySpace era.
I started working with some web developers on a simple project recently; my mock product was built with Vue.js using the standard way of "including" it. But what really threw me for a loop was that the web developers told me that that's wrong: they are now _compiling_ (or, semantically, "packing") JavaScript blobs to make even very simple websites.
I don't know why we as a community have started doing that, but it feels like an anti-pattern: it makes adding new dependencies opaque, and we can very quickly end up including dozens and dozens of lines of code which must all be downloaded and executed by every device that comes into contact with my site.
Often this happens because we want only a small bit of the functionality too.
So... there are benefits to this for complex apps, but 90% (charitably) of this kind of code is not in a complex app.
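For context, the "standard way of including it" described above is presumably the script-tag include from Vue's getting-started docs; a minimal sketch (the CDN URL points at the Vue 2 build and is one common choice, not the only one):

```html
<!-- Drop-in Vue, no build step: one script tag, then plain JS. -->
<div id="app">{{ message }}</div>
<script src="https://cdn.jsdelivr.net/npm/vue@2"></script>
<script>
  new Vue({ el: "#app", data: { message: "Hello, no bundler needed" } });
</script>
```

For a mock product or a simple page, this gets you reactive templating with zero build tooling.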
I think it's for two main reasons:
1. If you don't use complicated new technology you won't get a job doing web dev.
2. Many new JS devs don't understand JS/HTML/CSS well, they only understand a particular framework - and even then not in a sufficiently deep way.
When you come across a seemingly normal site that requires JS in order to display anything, it's pretty unlikely that anyone involved ever considered not doing it that way.
> Often this happens because we want only a small bit of the functionality too.
To be fair, this 'compiling' also allows for tree shaking which, if you use a well-structured library like lodash or date-fns, lets you get that small bit of functionality without paying for the whole library.
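As a sketch of the point above: with a bundler, the import style decides what ships, and often a few hand-rolled lines replace the library call entirely (the import lines are shown as comments so the snippet stays self-contained; `lodash-es` is the ES-module build that bundlers can tree-shake):

```javascript
// With a bundler, how you import decides what you ship:
//   import _ from "lodash";               // whole library in the bundle
//   import { debounce } from "lodash-es"; // tree-shakable: only debounce
// And sometimes a few lines replace the library call entirely:
function debounce(fn, waitMs) {
  let timer = null;
  return (...args) => {
    clearTimeout(timer);                          // drop the pending call
    timer = setTimeout(() => fn(...args), waitMs); // schedule the new one
  };
}

// Three rapid calls collapse into a single trailing call.
let calls = 0;
const bump = debounce(() => { calls += 1; }, 20);
bump(); bump(); bump();
setTimeout(() => console.log(calls), 60); // prints 1
```

Whether this is worth it depends on how much of the library you actually use; for one or two helpers, the hand-rolled version keeps the bundle (and the build chain) honest.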
Often this is to do with making a site work across browsers. IE11 is the bane of our lives! Plus, transpiling allows using things like TypeScript, which is a godsend on a complex site.
As a recent front-end dev, I fought tooth and nail not to include all kinds of tracking dreck in the company's site. I lost every time because management overruled me.
At some point, it occurred to me that it's not my site and I shouldn't be taking personally that which doesn't belong to me.
This is not about tracking though. Even with an ad blocker, just about every website and blog now presents multiple spinning boxes while it's loading. This is unnecessary, time-wasting, environmentally unfriendly programming. And yes, 100% of the time it's devs that choose to build a site this way.
It's come to the point where I feel bad for using plain old single-request-loading HTML, because it doesn't look 'spinny' enough, and users may think it's uncool.
If the business wants something quick and the cost of the website being slow doesn't matter... why bother?
On the other side, I've been building websites that get 95+ on mobile on Google PageSpeed. With no image above the fold, it gets 100. On mobile, yes. And I still have analytics, YouTube and images...
> just about every website and blog now presents multiple spinning boxes when its loading
The vast majority of "page" content is baked into prerendered HTML and CSS. I'm not sure what spinners you're talking about unless you're referring to web apps with timelines/news feeds that dynamically load.
Users love sites that load the important content and are interactive ASAP. Trust me, they appreciate your boring (fast) HTML.
An old colleague of mine in the 90's used to say "Software is like a gas. It expands to fill available space."
The behaviour of desktop software developers in 1990's was a prelude to web frontend and app developers today. As the resource capacity of PC's increased through the 90's, software developers grabbed it for themselves; for the end user, the experience of using the software never got any faster.
As the storage, processing power and network speeds have increased, web frontend and app developers seem to have grabbed it for themselves as well; the end user experience has generally not gotten faster (if the user follows the instructions: use popular, Javascript-enabled browser, enable images, video).
In the 1990's, users were asked to upgrade their computers so they could run the new, larger but not experientally faster, desktop software. Today, we see the same sort of OS and device upgrades; "unsupported" versions of software that will not run on newer version of the OS. Plus we have a culture of allowing automatic, remotely-controlled installation of new software. Automatic upgrades. Telemetry. "Field testing" of new features. A sizeable amount of user choice and necessity for user consent has been successfully eliminated.
The number of outgoing TCP connections made by today's average websites, usually triggered through cascades of Javascript files, is simply staggering. It would never have been acceptable to do that with the networks and server software of the 1990's and 2000's. Users were not involved in the decision of developers to grab the newly available network resources and CPU for themselves.
I do not use a popular browser for most web use. This allows me to only make a single TCP connection to a website when I access it and can retrieve dozens or hundreds of pages over a single connection, thanks to old, standard features of the web documented in RFCs (pipelining). It is very fast and efficient. For me, the web has gotten much faster, but only because I choose the software and I decide whether and when to upgrade. It is my computer, after all. I am not really impressed by the popular browsers of today because it does not feel at all like the user is in control. Users of these programs have been reduced to guinea pigs.
To be honest, we have decided that lowering the cost of building software is the top priority, so we choose more and more abstracted software that requires more resources.
The JavaScript cascade from hell starts when you add a single ad. We wrote a test param to remove ads, and the page load went from 35MB to 1.2MB... all of it was JS and ad-tracking code.
Product managers add features and/or speed up development by using more resource-hungry runtimes up to the point where the users still tolerate the performance but barely so. Once the product has enough traction and switching costs, the user would rather tolerate the mild pain of slowness than the more acute pain of switching and adapting to a (so far) faster competitor.
This is very much like the idea of the optimal price point where the customer already cringes but still buys.
It's a culture problem in the web dev world. There is an obsession with new and shiny and building something "cool". It's all about the developer experience and not the product.
That's why there's such insane churn in the tooling and ecosystem. People rewrite and redesign constantly in order to work with the new cool framework.
Insane churn? WordPress powers over 1/3 of the web. jQuery is still doing most of the heavy JS lifting on most blogs/CMS-driven sites.
React has dominated "real" frontend webapp development for a solid 5 years.
These things don't make headlines or get retweeted. The web produces a lot of variety because it has the widest distribution and the lowest barriers to entry.
Not sure what you mean. You're welcome to hand-write all of your web apps from the ground up with zero dependencies and no build system. It's still the web.
This gets parroted a lot around here but the last 5 years or so there's really only been a few major players in the framework space. Maybe the "framework of the day" mentality was pretty strong in the early 2010's but things have cooled off significantly since then and you'd be hard-pressed to find most shops using more than the big 3-4 contenders.
My experience is that it's just as relevant today.
I had a front-end developer come in and add node, Bootstrap, and SCSS as dependencies to a very simple app. We went from having a super simple build chain to one that now needed maintenance. And part of his argument? He "thought better" in SCSS.
People have zero respect for the risk of dependencies, complicating the buildchain, or long term maintenance.
It's a combination of arrogance, lack of skill and sheer stupidity. For example, look at the Nuxt.js docs: there is an example config, which everyone seems to use, that lints on each build. Because code using four-space indentation is slower than code using two spaces...
The last UI dev I had to work with managed, in about a year, to produce: a very complicated Dockerfile that was spewing out, among other things, an image for e2e testing (but no actual tests), plus a huge Makefile you needed to actually build the thing; a soup of copy-pasted config files that resulted in a vendors.js bundle of about 100 megs (there were more config files than source code); and two branches named <username>.wip.somedate with commits that commented out parts of a config, just to uncomment them again in the next commit. The app itself, with SSR no less, was a logo and some text, using some custom components built with a third-party Vue component that rendered CSS in JS, so you had to wait for the entire vendors.js to download before anything rendered. His answer to anyone questioning his choices was "this is how modern UX development works".
After he was let go, I took over the UI, and along with another coworker we managed to have a working version of the app in 9 days, with no prior knowledge of Vue.js or Nuxt.js. You can imagine how complicated the app is...
I find this universally true in every corner of the stack and it has more to do with experience than anything else. It’s hardly “all front enders do this always.”
Still, to my point, the tools mentioned here have existed for a very long time with very large communities and are hardly the new shiny thing. Your complaint here seems to be more around using the wrong tool for the job.
I don't disagree, but it's particularly bad on the frontend.
This particular frontend person had convinced the owner of the company that there were things you could do in SCSS that you couldn't do in CSS. I had to explain to said owner that SCSS was compiled down to CSS because the browser didn't understand SCSS. That you literally couldn't do anything in SCSS that wasn't allowed in CSS.
The owner ended up having a come to jesus moment when he tried to take all of the tools that frontend developer added and make them available in docker so that no one else had to install custom development tools. He spent several days, failed on it, and realized exactly why I was being so hardheaded about the build chain.
We never ended up getting rid of those dependencies, but I know the owner finally understood my point. Instead the team bifurcated into those who had their local environment setup to be able to successfully work on the frontend, and those who didn't.
A news site. It does precisely the same thing as news sites did a decade ago.
On my machine, it loads slowly. Then, it needs to re-load every time you scroll. It's extremely slow. Indeed, on Firefox, it doesn't load everything consistently.
If you click on an article, things get even worse because it needs to reload text when you scroll. This website is what I would call "barely usable".
Let me reiterate: This site does nothing new compared to old news websites. There is zero benefit to the user. It simply is slow.
I don't know who is to blame for this. But if even your text loads slow, you are building a bad website. I am sure, some frontend developer got paid for making this website. I am not sure they should have been.
It’s important to make a distinction between webapps and websites here. We use the web now like we used desktop in the 90’s and 00’s.
For most users, your browser is your OS. Hell even when it isn’t, most desktop apps use HTML+CSS for their UI. Hell even many user-facing “embedded” apps (like TVs) are running linux with chrome and showing a webapp as their UI.
The layout engine is just that good and convenient. And downloading fresh app source on every visit solves a lot of problems.
This part of the web is getting bloated and slow.
On the other hand are websites. These are fat as heck thanks to CDN and broadband and advances in server compute power. They load faster than ever.
Remember when downloading memes required eMule? I do.
Now I go to imgur and watch 30MB high def gifs like it’s nothing. My 13 year old self would shit his pants in awe.
Are ads and trackers bloating websites? Yes. Are most websites built like webapps even when there’s no need? Yes.
Blame tooling. Go help. What can we as a profession do to make it easier for random developers with no particular skill to build faster, better websites?
Right now we’re actively telling everyone they need to build as if they’re FAANG. Then we complain when a part time dev working for a mom&pop shop can’t wrangle all this tooling built for teams of 1000’s into a solid experience.
This might be an opportune time to mention my attempt at helping. I made barleytea.js [1] as a light framework alternative to React that needs no webpack. I also recommend much more polished, production-ready things like µce [2] and heresy [3].
Anyway I just want people to know that there are lightweight alternatives out there, and they're worth at least checking out.
I've stayed with Angular v1 for a long time[0] due to its simpler nature (no compiler, simply JS); but since that is going EOL I'm definitely going to look into Barleytea.
That description fits Braess's paradox https://en.wikipedia.org/wiki/Braess%27s_paradox more than the Downs-Thomson paradox. The latter is more about public transport being the dominating factor in congestion whereas the former is about adding roads in itself sometimes causing traffic to get worse.
In either case it's often overstated, expanding the road network (especially intelligently) usually results in less congestion. It's just a poor investment compared to other means of transportation investment in large cities.
If "adding capacity to existing lanes" was your intended wording it absolutely can - widening a lane, simplifying lane markings, restricting turns in/out of a lane, and more can be done to increase the efficiency of a lane.
If you meant adding capacity to existing roads (by adding lanes) it certainly can drop congestion as well. Expanding roads blindly CAN also make congestion worse but that's a possible outcome not a guarantee.
The point of the original "paradox" is that if you want the least amount of road congestion by private vehicles the best investment is to grow public transport but many places dump more money into roads and lower the percentage of users using public transport because "roads are where the congestion is". Hence the paradox, increasing road spending increases road congestion (in places with existing public transport).
The point of the second paradox is that there exist ways to add more paths to a network that result in less flow than the network had before, regardless of other conditions like public transport being available. The paradox is NOT that adding additional capacity to a network ALWAYS or NORMALLY results in less flow. In fact, most of the time it does help flow (marginally); it's just a poor investment compared to the same amount spent on public transport.
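The textbook worked example from the Wikipedia article linked above (4000 drivers; each of two routes is one fixed 45-minute link plus one variable link costing T/100 minutes for T drivers) can be checked in a few lines:

```javascript
// Braess's paradox, standard numbers: 4000 drivers, two routes,
// each route = one fixed 45-min link + one variable (T/100)-min link.
const drivers = 4000;

// Without the shortcut, drivers split evenly between the two routes.
const before = (drivers / 2) / 100 + 45; // 20 + 45 = 65 minutes

// Add a zero-cost shortcut joining the two variable links. At equilibrium
// everyone chains both variable links, since 40 + 0 + 40 < 40 + 45,
// yet everyone's trip is now longer than before.
const after = drivers / 100 + 0 + drivers / 100; // 40 + 0 + 40 = 80 minutes

console.log(before, after); // 65 80
```

So the added link makes every individual driver's choice rational while making everyone slower, which is exactly the "more paths, less flow" case described above.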
I've always had a question about this. If more people are deciding to use the road afterwards surely it means that trips are being taken that weren't before? Wouldn't that be a good thing, economically speaking?
> Wouldn't that be a good thing, economically speaking?
Yes, because it means people can live in places they couldn't before, or could get to jobs they couldn't before. Of course, people who already used the road won't perceive these benefits; they are in the same situation as before.
Only applies "to regions in which the vast majority of peak-hour commuting is done on rapid transit systems with separate rights of way" according to your link. I think that's an important qualification.
That's certainly a useful concept, but I don't think it has anything to do with internet speeds here.
The page speeds that either have or have not gotten slower are generally understood to be purely a phenomenon of larger/bloated pages -- not a result of constricted internet "pipes".
If the paradox applied to the internet, then what you'd be seeing is that if pages loaded twice as fast, people would visit twice as many pages. But that's not at all how it works.
> If the paradox applied to the internet, then what you'd be seeing is that if pages loaded twice as fast, people would visit twice as many pages.
It is in fact a similar phenomenon. The cars are the bytes that developers are stuffing into what they build. It's not people visiting twice as many pages, it's developers continuously expanding the size of what they can stuff into sites to fill the capabilities of servers, end user systems and bandwidth today.
Some of it is unnecessary byte expansion because they can get away with it. The Web was very slow when everyone was on a 56k modem as well. The network pipes were smaller, the servers were drastically weaker, the consumer systems were weaker; everything was obnoxiously slow. It sucked to wait five to ten seconds or more to load a very simple website in 1996. It sucked to watch little RealMedia files and wait a long time to load a couple-MB file.
Developers are expanding their byte footprint to fill the available limits of patience, and often going over that line, exactly as they always will.
This article isn't very good, so I'm just gonna skip commenting on it.
Yes, the web is slower, but not in the pure measurement of "is this JS bigger, and does it take longer to execute" sort of way. It's more in the sense that now you have 39 JS files trying to run on a page that's just some text and a few pictures, whereas that used to be 0-3. For those of you who use uBlock Origin but not uMatrix: uMatrix will really open your eyes to this proliferation.
It's mostly management decisions that get shoved down the throats of devs, but I would say it's also devs who love to throw a thousand frameworks in-between content and delivery.
Simplification of the stack is a market advantage that I think more and more companies will start to realize... at some point in the future.
I'm not so sure about that. Throwing layers between content and delivery seems to be pretty popular among developers that I work with. Particularly younger ones.
I've been working in the web performance monitoring space (https://speedcurve.com/) for 3 years now, and the 3 years prior to that I was doing a lot of performance work at the BBC. I can't share the data for obvious reasons but I can confidently say that the web is getting measurably slower every year, despite connection speeds increasing drastically over the last few years.
We like to over-simplify and try to attribute it to things like JS frameworks, advertising, media-heavy pages, etc. The truth is that it is all of these things, and so much more. Yes, devices are more powerful, but we are also asking our devices to do more. Yes, connections are faster, but bandwidth doesn't help with things like TCP slow start or browser concurrency limits. On top of all of this, our perception of speed is changing, so things can _feel_ slower than they really are.
This is one reason I created Trim [0]. I didn’t want to have to load 4-7 MB of stuff to read a stupid article. It often reduces an article page weight by 99% and uses no JavaScript.
This is one of the main reasons why I do 90% of my web browsing in emacs-w3m, which does not support Javascript.
Avoiding Javascript not only lets me avoid all that bloat and slowdown, but also avoid Javascript-based tracking, malware, and exposing myself to Javascript vulnerabilities.
I also have the power of the entire Emacs ecosystem at my fingertips when I surf the web this way, which can be very helpful in many ways.
Unfortunately, some sites I find essential will not work without Javascript, and for them I go back to Firefox.
In Firefox, I use uMatrix and uBlock Origin to only allow through the minimal amount of stuff that'll let the web page work, and filter out ads.
I have a really old, slow laptop, so web browsing is slow for me, but not nearly as slow as it would be if I consented to swallow all the crap that the modern web tries to force down my throat.
I yearn for the good old days without Javascript or Webassembly. There was Flash back in those days, but fortunately not a single serious site I ever visited required it, so I could avoid every Flash-using site like the plague. But today the Javascript plague is unavoidable.
uMatrix not only allows you to block JS on a site you're visiting, but gives you fine-grained control over blocking JS from sites that site calls out to, and subdomains of that site as well.
I find uMatrix much more flexible than uBO in this regard. I don't even know if doing all this is possible in uBO, and suspect it's not, or that at least it's hidden away pretty well, while this is front and center in uMatrix's interface.
You can do some of that with uBO (see "hard" blocking mode), but my understanding is that uMatrix gives you more granular control. I need to learn more about using uMatrix to confirm, though.
I started using w3m primarily a few weeks ago. It’s been awesome for actually grabbing the information I need and not getting sucked into attention sinks.
Similar, different in design and execution. As far as I know, Outline processes the article in browser using JavaScript. Trim uses no JavaScript but instead does the processing server-side. The response of the form post is the article.
I know most of the comments right now are an anecdotal resounding "Yes" from most folks (and I agree, anecdotally), but I'd like to respond to the conclusion of the article, which is that as internet speeds increased over the years, page load times have roughly stayed constant. This makes sense in retrospect, because product development tends to consider what the baseline acceptable speed/time-to-load is and then utilize that allowance fully, loading whatever is possible (or optimizing down to that point during the development process).
I only wish this weren't the case, and that the internet speed gains over the years had actually meant the browsing experience for consumers improved.
Page load times, indexed for available bandwidth, appear to follow a sort of inverse Moore's law. We could call it Gerdes's random stab in the dark.
My first home internet connection ran at 9600 bps, except when it clocked itself down to rather slower. At work at the time I had a synchronous pair of modems running at less than that for an IBM System/36 site-to-site link. Later on, oooh, V.fast: 56 Kbps, except it wasn't really. More like 48 Kbps plus a bit on a good day and a decent marketing department.
I am now using a 1Gbps connection.
My PC on the end of the 9600 bps connection was an 80486-based beast running at 25 MHz. It cost me £1600 (thanks, Granddad, for the unexpected bequest that was a major factor in getting me where I am today).
I now use an 8-core (plus hyper-threading) Core i7 beastie of a laptop running at 1.8 GHz; it's getting on a bit now, but it can still churn out packets at quite a rate and crunch my CAD efforts.
Page sizes back in the day were rather small, say 10 KB. Nowadays 10 MB is pretty common (dodgy assertion with no proof).
Web pages do different things than they used to as well.
I don't think it is quite as simple as you suggest, wrt page load speeds. What page, on what, with what and why!
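A quick back-of-the-envelope check on the figures above (9600 bps vs 1 Gbps, 10 KB vs 10 MB pages) supports the "not quite so simple" point: raw transfer time alone actually got much faster, so the perceived slowness must come from elsewhere.

```javascript
// Raw transfer time for the anecdotal figures above.
const bits = (bytes) => bytes * 8;

const thenSecs = bits(10 * 1024) / 9600;       // 10 KB over 9600 bps
const nowSecs = bits(10 * 1024 * 1024) / 1e9;  // 10 MB over 1 Gbps

console.log(thenSecs.toFixed(1) + " s then, " +
            (nowSecs * 1000).toFixed(0) + " ms now");
// Bandwidth grew ~100,000x while page weight grew ~1,000x, so pure
// transfer got roughly 100x faster. The slowness people feel comes from
// round trips, parsing, and executing all those bytes, not the pipe.
```

Which is exactly why "what page, on what, with what and why" matters more than the headline bandwidth number.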
Contrast this with chip development, where baseline acceptable metrics were for decades set by an exponentially decreasing curve. If only we could proclaim a Moore’s law for web development that would be self-fulfilling for years to come…
It's not only the web that's getting slower; it's actually everything. I've read a similar article about input latency (from keystroke to display on screen), which is also steadily growing.
The reason behind it might be the same: we have far more computing power than ever, so we start abusing it by spending lots of it on visual elements that are nice to have but not essential.
Computers aren't getting that much faster, though. And it's not like new affordances are being added to pages; it's mostly aesthetics and... trendiness that is driving the crazy monstrosity that most pages are today.
Doesn't it have more to do with current software development practices? The requirement for reusable code, and the practice of coders having no loyalty and switching companies every few years, leads to the adoption of anti-optimization standard practices. If Carmack were making Doom today he would be ridiculed for not bundling his functional code and looked down upon for reinventing math functions. The idea that speed doesn't matter is just too prevalent.
> And it s not like new affordances are being added to pages , it's mostly aesthetics
I agree on the point of websites getting slower, but not that it doesn't add value. Websites nowadays replace a lot of what would've been desktop applications ten years ago (see Dropbox, Slack, Google Docs, ...). It's not perfect, of course (a lot of data now lives on PCs owned by somebody else), but skipping the installation/uninstallation part entirely, while also knowing that the app has only limited access to my PC, is absolutely awesome.
Part of it is that modern screens have much higher latency than CRTs did. Largely, I don't think it really matters. My desktop and phone feel very, very responsive; it's only websites that are visibly slow to me. And most of that is waiting on the server. I click a link or submit a form and I have to wait seconds for the page to load, whereas with a local program it's instant.
I keep hearing this come up. Do you have a modern citation for this with modern monitors, say made within the last 5 years?
It seems to be a constantly repeated myth or something of no consequence. (one article I read said LCDs have 2 ms higher latency than CRTs, which I would count as of no consequence)
The 3-year-old 4K Samsung TV in the living room can't be pushed below 50 to 60ms of latency, even with processing minimized, and compared to an audio signal NOT routed through it (the TV's line out obviously compensates for that, but is only stereo).
Back in 2011/2012, when I got Rocksmith, I used a DSLR camera to determine the latency of my LCD compared to my old CRT. I just took a snap of a millisecond counter racing up on both screens (mirrored). I can't recall the exact difference, but it was definitely much more than 2ms. I measured because I noticed it during gameplay; I'm pretty sensitive to delayed video/audio, but not 2ms sensitive, that's one beat off at 30,000bpm. You can use that technique to check for yourself with a more modern display.
Yep, as a Dance Dance Revolution player, flat screens were pretty unusable from 2000 (at least for PS2 games) until recently because of lag even on settings that "minimize" it on upscale.
Similar issue for emulation: for the SNES games on the Wii emulator, the game was much harder because you have to adjust for noticeable lag between your presses and the response. I also saw that on the Virgin in-flight Pac-Man: with their emulator, you have lag that makes the game harder. I was able to get a lot farther on the first try by playing on a classic machine.
I specifically asked about monitors. TVs have a huge amount of latency and post processing that’s only recently gotten better. Computer monitors, on the other hand, have long since been very fast.
I ask for evidence because I’m not the one presenting the claim and suspect it’s no longer true.
Yes!!!! React/Redux and other SPA client-side rendering frameworks, I'm talking about you! Just because you can doesn't mean you need to make a page an SPA acting like a desktop application. Stop it!!!! Stop making a fashion industry out of web development. [End of Rant]
KISS: SSR is fine most of the time for web pages!!!
One exception to the above rule of SPA frameworks is Mithril.
However, I understand not everyone can code in the LISP/Scheme way, so I don't blame it for not being more popular than the React camp. It seems to have JSX support now; I'm not sure if the performance is still the same when using JSX.
I have the feeling that most modern websites use too many CPU & GPU resources. My computer is just 5 years old, and each time I visit a modern website it really suffers.
Please, designers and engineers: I don't own the latest MacBook Pro with maxed-out specs, and my internet connection is quite normal.
Start creating for the rest of us who don't have the resources or interest in upgrading the computer every couple of years!
It's not just web pages; modern software engineers just make terribly unoptimised programs. Why does a recent game on a powerful PC take longer to open than old DOS games? Not just loading a level: even getting to the start screen takes ages. Wtf takes 20+ seconds to show a menu of New - Load - Options... (DRM, probably)?
I do own the latest MacBook Pro and have a mesh network with 300Mbps download speeds, and even I have started finding it painfully slow to load my banking website, CNN, etc. In particular, layout shift has become awful, and I often click on the wrong link as the page is still loading and widgets are being rearranged on the page.
Just like most things, the answer is "it depends" since it's really not that simple. This is a great article but the answer seems obvious to me.
The web today is getting divided into apps and sites. HN is a site, for example. It is small, loads fast, and works well for its intended use case.
When I listen to music or podcasts on a website, I want it to act like an app, and for that to happen more stuff has to happen in the client, leading to longer load and execution times. This is something I can live with though, since I can do stuff on the web that was impossible just a few years ago. I am also developing apps that were impossible to build on the web a few years ago.
I want to use both: sites, for things like looking up a store and ordering some stuff, and apps, for doing design or consuming music and video.
I prefer web apps that are done well rather than native apps. I don't have to download anything and they are free from the shackles of Apple, Google and Microsoft. Also, you don't have to make them bloated and big. You don't have to use a framework. You can use web components and maybe some small router library and you have the most important stuff a front end framework gives you.
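A "no framework" baseline really can be tiny. Here's a minimal custom-element sketch (browser environment assumed; `hello-card` and its `name` attribute are made-up names for illustration):

```html
<hello-card name="HN"></hello-card>
<script>
  // Registers a tiny component with no build step and no framework.
  class HelloCard extends HTMLElement {
    connectedCallback() {
      // textContent avoids injecting markup from the attribute value
      const p = document.createElement('p');
      p.textContent = `Hello, ${this.getAttribute('name') ?? 'world'}`;
      this.appendChild(p);
    }
  }
  customElements.define('hello-card', HelloCard);
</script>
```

Pair that with a few-KB router (or just the History API) and you have most of what a front-end framework gives a small app.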
Just check Fastmail: their client is super fast and a very well done SPA. Then look at Reddit, which is a horrible mess. As with any app, in any language, you can make the experience shitty or you can make it awesome.
On my system, Firefox on Windows, these constant delayed loadings of JavaScript garbage make every website laggy.
Yes, the web has become slower.
The author then himself posts that page weights have increased by over 300 percent.
Clearly, usability of today's web is generally worse.
Web development has changed from a field where competent people cared about user experience given simple and limited tech, to a field where people care about ads and using the latest fad in terrible JavaScript framework tool blah.
Which should tell you something. The vast majority of tech advancements are because of hardware, not software. I think that software people should feel a bit of shame at this point in time.
No one feels any shame because the reason is obvious. Normal users are constantly demanding more features, not more speed. The average person just buys a new phone when things are slow. Companies don't have infinite time, so when faced with the choice between adding a new feature and speeding up an old one, they always pick the new feature because that's worth more.
Now imagine if the people working on the lower level stuff had the same attitude. Imagine if the codec implementations were slow because we need more features. Imagine if the graphics people never gave a damn about perf, just like the "programmers" you're talking about. Imagine if your database was the main perf hog in your system etc. etc. I'm sorry, but here's an unpopular opinion: those people are just not good engineers. The increments of software development should be abstractions, not features.
Yes. It's Jevons' paradox, enabled and amplified by the corporate web abusing JavaScript (bloat, ads, trackers, and other attention-grabbing shenanigans).
You'd be able to boot an x86 emulator in JavaScript to a Groovy prompt in just a few seconds, so probably faster than you can load any major news site except text.npr.org ;)
Sometimes it's not because of the marketing team and all their tracking pixels. Here the dev team, in the name of innovation, slowed their own product to keep up with the Jetsons [1]. Also, if you know any Angular developers...
It's not only getting slower, but shit. Animations everywhere. Unnecessary whitespace everywhere. Infinite scroll bullshit everywhere. I still can't find shit on webpages without Ctrl+F.
Why do current webdevs think this is a good idea? Well, yes, "mobile first", but even on mobile it usually looks like absolute bullshit, and animations are even more annoying on mobile. Why can't we have plain webpages that actually work?
If you want an example: I recently tried to find an apartment using Airbnb. Since the last time I used it they changed their design, and it's super slow, the animations don't even make sense, but there are rounded corners everywhere. Fuck. This. Shit.
I can remember when my home internet went from 20Mbps to 1Gbps. There was no perceived difference in web speed. 20Mbps was fast enough, and the conventions in place forced a low ceiling on performance.
As far as JavaScript goes, any mention of performance improvements often results in hostility from JavaScript developers. Try taking away their 300MB framework, or eliminating DOM navigation via clunky query selectors. The result is hostility, even though you can demonstrate performance increases of 500x in Chrome or 20000x in Firefox.
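For what it's worth, the query-selector point doesn't require a framework fight; it's mostly about not re-walking the document on every call. A hedged sketch (browser environment assumed; `#price` is a placeholder id, and the speedup factors above are the commenter's claim, not mine):

```html
<span id="price"></span>
<script>
  // Slow pattern: re-query the document every time the value changes.
  function updateSlow(value) {
    document.querySelector('#price').textContent = value;
  }

  // Faster pattern: resolve the node once, keep the reference.
  const priceEl = document.querySelector('#price');
  function updateFast(value) {
    priceEl.textContent = value;
  }
</script>
```

In a hot path (say, updating on every animation frame), caching the reference removes the selector matching entirely.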
The article explains that there is no way to really measure the page load of modern sites from a user's point of view. It then analyses the biased data, using various flawed metrics that do not match the modern web experience. Consequently, its overall conclusion that network speed gains have made the average web faster is not convincing at all.
Here is an example where these metrics make no sense. I've been using the website of the national weather forecast (Météo-France) for years. It used to load the DOM in less than a second, and since the content was in the HTML, the user's perception matched DOM-loaded. Now, with the same DSL connection, it loads in 2.2s (DOM). On mobile, since the network is many times faster than a few years ago, the DOM is probably loaded faster. But the real content is not in the initial HTML anymore; it is loaded through XHR among the 80+ HTTP requests sent. The forecast is now displayed 5 to 6 seconds after the initial request for the page.
I've seen quite a few sites that got slower over time because their content was no longer static but wrapped in some JS framework. It is not only slower, but less robust and harder to monitor: I've seen a few blank or broken pages, and I'm pretty sure this was not logged on the server.
This doesn't say anything about the average web, and I'm not sure any comparison would make sense, but it does show that some areas of the web have regressed over the last decade.
I don't have numbers for it, but I am sure the size of web pages has increased by many multiples over the years and is only getting worse. There aren't any simple pages anymore; every page includes a bunch of embedded JavaScript libraries, tons of ad network code, tons of performance and tracking code, and lots of images replacing text. I am not even talking about WebAssembly, WebSockets, client-side rendering, etc.
edit: I just poked around a bit to find the best worst examples. Unsurprisingly, CNN is the worst I could find after a few minutes - https://tools.pingdom.com/#5d1bf596ffc00000 - 543(!) requests, 9MB, 6 seconds. Just stroll through the list of bullshit it sucks into the front page... what a mess.
The reddit API is a legacy feature that is on the chopping block any day now. Its only real purpose now is 3rd party clients which don't show adverts or insert tracking scripts. It has a small amount of use for bots but those are mostly a negative user experience and likely to be killed soon as well.
Reddit is pushing to become facebook without your real name.
This seems very likely. Reddit admins have mentioned that they are trying to crack down on alt accounts and banned users signing up 2 seconds after being banned. A phone number requirement would solve that easily.
Ha ha, now we're back to secret incantations like when AdBlock first came out! You'd tell your friends and family, "Oh, you gotta get it!" for something whose "discoverability" is very, very low.
Also, holy shit, I just clicked on the first website I saw on that site and I'm getting about 3fps scrolling it, and my desktop is a Ryzen 9 3900X with 24 threads, 48GB of RAM and an RX 5700 XT. You don't even get the excuse of "it works on my machine", because this desktop is about as good as it gets. I tried a couple of sites and not a single one could scroll smoothly; one of them was live-applying CSS transforms to about 30 images as I scrolled.
They're nice and smooth on my Ryzen 5 4500U/Vega 6 with 8GB of RAM. Of course the 5 seconds of artsy chaos spinner each of them has on load is still absurd. But it's fine after that. Could it be the browser? I loaded them in Firefox.
Would love a CPU usage analysis of new Reddit vs old Reddit. You can feel your laptop choke under the effort it takes to render the post plus the 3 of its 200 comments it shows you, versus the old version loading the post and the entire comment tree.
Trying to open new.reddit.com on my ~4 year old phone is slow enough that I have time to get a coffee before the annoying Install Our App popup appears.
Imgur is even worse; it takes multiple seconds to load a webpage whose only purpose is to display a single jpg image.
yeah, I've navigated there several times by accident on my phone while out and about. I never got past the loading screen. at least now you can just go to old.reddit.com and don't have to request the desktop site every time
New reddit vs Old reddit would be an interesting experiment. What if reddit asked you which version you want on your first load? How many people would go for new?
It genuinely feels like being on dial-up in the old days. I frequently visit reddit using Safari on my iPad and it often takes a good 3-5 seconds for the new page to load after I tap on a link. If anything, it at least dissuades me from wasting time on the site.
The article is a nice attempt at deconvoluting the various factors that may have changed.
From my personal experience, the subjective sluggishness comes from the unbelievable number of third-party connections on every webpage visit. Setting up a Pi-hole has done wonders for the responsiveness of most websites.
It's definitely slower and the easiest way to realize it is to use plugins like uBlock Origin combined with uMatrix and compare load times with them enabled, and then disabled. For me it's at least 50% faster when they're on.
No, it's faster. Disable JS, and what continues to work (which is quite a bit) is snappy. It's also snappier than it was a decade ago, even 3 years ago, very clearly so.
But while people will accept JS and general online abuse, it will get worse.
There’s a disconnect between the author and the comments section here.
The author claims that yes, websites are getting heavier but improvements in bandwidth, CPUs, protocols (like http2) and browsers offsets this to make the web as fast or faster for the median web user.
Comments here express frustration with those increased bundle sizes, especially in cases where we aren’t getting any features in return.
Both sets of folks are talking past each other because there is disagreement over the metric. The author wants to use load times as seen by users in wealthy countries. The comments folks want to use KB or features per KB (subjective).
There’s no right or wrong here, just different metrics.
From a hardware perspective, everything is many times faster than it was just ten years ago.
From a web user's perspective, I cannot even find the right words to describe the madness that front-end developers are causing with their utterly clueless use of absolutely ridiculous JavaScript, on top of frameworks, on top of whatever other useless crap they throw into the mix, just to display basic static HTML!
The web is not getting slower, it's getting faster, but even amazing technology cannot compensate for front-end developer stupidity!
Network improvements are not only slowing down but also increasingly non-uniform as the limits of long copper runs are reached.
It's the problem with averages again: they are significantly inflated by a minority with ever-increasing broadband speeds. It's easy to improve a select few areas to the extreme, but hard to improve for the majority. I suspect pointing at figures of increasing averages to justify larger assets is part of the problem.
It's not only about stats/data, but also the user experience. Many web pages have many more ads, videos, gifs, and clutter than they did before. There's also more spam and more low-quality content. That makes finding the info that you want harder than before. I've noticed this especially when searching for cooking recipes online.
> Mobile page weight increased by 337% between 2013 and 2020
That is a huge span of time, with the birth and death of stars in between in the Web Development cosmos. So much has changed in Web Dev, especially for mobile.
The issue is not the Web getting slower. It's the developers and their companies focusing on time to iterate. We have traded consumer convenience for speed of development, using shiny new tech/methodologies.
We have traded doing things server-side for having most things in an SPA; we've gone from using tracking pixels and Google Analytics to also incorporating New Relic, Optimizely, a host of CDNs, bot protection scripts, social media scripts for "better personalisation", and CAPTCHAs.
I feel we are moving backwards. Back to when developers used ActiveX plugins because it was more convenient for them. Only ActiveX is now a combination of "Javascript in the browser, and a powerful enough laptop to handle it without choking".
In relation to Web development, a lot can (and has) happened within 7 years.
If you were a Web developer in 2013, you could find plenty of jobs if you knew Django (Python), Rails (Ruby), Zend, CakePHP, Symfony (PHP), or Spring (Java). Those were the frameworks used by either companies (Zend, Spring, etc.), or loved by devs (especially Rails and Django). Node was starting to come into its own for Web development, but nowhere close to the others.
Fast forward to 2020. Rails and Django are on the decline from their fame peak. Most of those frameworks are still being used for legacy reasons. Web dev work I see is primarily related to Javascript (React, Vue, Typescript) or the JVM (Scala, Kotlin, Java). The other interpreted languages are nowhere close to their 2013 status.
Between 2013 and 2020, we also saw a huge ecosystem around Go and Rust emerge, not to mention Docker. All of these influenced or played a crucial role in developing websites and applications.
While a lot of these technologies were around in 2013, they definitely weren't as pervasive or mature as now, and developing in 2013 was a whole different animal than doing so in 2020.
Not even the majority of the tech you mentioned is 7 years old or less. And you can't name a single piece of tech on that list that isn't actively developed today, because they're all actively developed, including Java, the tech that's been around for 25 years.
This is specifically why people talk about fad driven development and make fun of younger people who have no sense of history.
7 years just isn't a lot of time, anyone who thinks it is is a junior who thinks they're a senior.
-- I have deleted a long comment I was going to reply with as I'm sure you are not open to hearing actual counter arguments. I don't want to waste my time. --
My reply instead is to read more carefully, think more carefully, and do not throw puerile thoughtless sentences out like the last one in your reply above.
Watch as my eyeballs roll out of my head, onto the floor, and out the door.
Oh also...
> -- I have deleted a long comment I was going to reply with as I'm sure you are not open to hearing actual counter arguments. I don't want to waste my time. --
> My reply instead is to read more carefully, think more carefully, and do not throw puerile thoughtless sentences out like the last one in your reply above.
Weird... it seems I too can go with the better than thou act.
From the conclusion:

> I don't think the mobile web – as experienced by users – has become slower overall.
Even as a subjective result this seems really bad. Most web pages are text and images; outside of apps like maps/email, most content is static, and while CPUs and bandwidth have skyrocketed, the UX has stayed about the same... we think.
Since we're blaming here, I blame the domination of the web by multi-billion-dollar tech companies. This requires any site/app that wishes to be taken seriously to have a look and feel comparable to the big guys', and the only way you do that on a budget is with a hefty heap of framework abstraction.
This article makes the reasonable point that the web is likely getting faster for people with cutting edge devices. For example, at one point they say
> Someone who used a Galaxy S4 in 2013 and now uses a Galaxy S10 will have seen their CPU processing power go up by a factor of 5. Let's assume that browsers have become 4x more efficient since then. If we naively multiply these numbers we get an overall 20x improvement.
> Since 2013, JavaScript page weight has increased 3.7x from 107KB to 392KB. Maybe minification and compression have improved a bit as well, so more JavaScript code now fits into fewer bytes. Let's round the multiple up to 6x. Let's pretend that JavaScript page weight is proportional to JavaScript execution time.
> We'd still end up with a 3.3x performance improvement.
But then the author concludes
> The web is slowly getting faster
Which ignores a pretty large fraction of users. A part of the article acknowledges that this all depends on the device, etc., but this is ignored in the conclusion!
Let's say that, as a first approximation, the first set of quotes is correct. I think most developers who look at user experience with respect to latency or performance today (or even ten years ago) would agree that we should not only consider the average and that we should also look at the tail. If we do so, we see that device age is increasing at the median and the tail, quite drastically in the tail even if we "only" look at p75 device age: https://danluu.com/android-updates/.
If we consider a user who's still using a 2013 Galaxy S4 and ask "does your phone feel 5 times faster than it did in 2013?", based on some js benchmarks improving by 5x, I think they'll laugh in our face. I've used a couple of Android devices that I tried to keep up to date (to the extent that's possible on Android) and each one became unbearably slow after taking some big Android update. Those updates probably included improvements in the Android Runtime as well as V8, and yet, the net effect was not positive. I don't think I'm alone in this -- if you read any forum where people discuss taking updates for the phones, one of the most common complaints is that their previously usable phone became unusable due to performance degradations caused by the update.
Sure, my personal user experience on my daily-driver phone is OK, because I have a very fast phone and I'm often using it on fast wifi. But the experience is terrible anywhere in the U.S. with an old phone, as I found when I took a road trip across the country. I don't think we should just write off the experiences of people with old phones, or who live in places where they can't get high-speed internet, even if life is good for people like me when I'm at home on my 1Gb connection. When I looked at this with respect to bandwidth and latency (inspired by a road trip where I found every website from a major tech company to be unusable, excluding a few Google properties), I found that, on a slow connection like you get in many places in the U.S., websites can easily take more than 1 minute to load in a controlled benchmark: https://danluu.com/web-bloat/. My experience in real life (where I probably had higher variance in latency, packet loss, and effective bandwidth) was that many websites simply wouldn't load.
One thing this post looks at is the 75th-percentile onLoad time. When I travel through the U.S. on the interstate (major thoroughfares which will, in general, have better connectivity than analogous places off major thoroughfares), most pages are so slow that they don't even load, so those attempts aren't counted in the statistics! I don't dispute that things are getting faster for the median user, or even the 75th-percentile slow user measured in a specific way, but there are plenty of users whose experiences are getting worse who won't even show up in the stats in this post, because their experience is too slow to be counted at all.
> Sure, my personal user experience on my daily-driver phone is OK, because I have a very fast phone and I'm often using it on fast wifi. But the experience is terrible anywhere in the U.S. with an old phone.
This is a pet peeve of mine. In my (perverted) mind, developers/programmers (not only web-related) usually work on high-performance hardware and connections (which is fine, it's their work, they deserve the best of the best), but they should also have a low-performance setup (simulated in a VM, or simply some oldish hardware with a limited amount of RAM and a slowish processor) on which to test what they release to the public for interaction, responsiveness, etc.
In my experience, barring professional developers or programmers and a few designers, architects, engineers, etc., the only people with high-end hardware are gamers; all the rest (both at home and in the office), for different reasons, tend to have relatively underpowered machines.
I’ve been using w3m as my primary browser for the past few weeks. Pages load faster, but the real awesome part has been recapturing attention from image based ads I didn’t even realize I was losing.
> Mobile page weight increased by 337% between 2013 and 2020. This is primarily driven by an increase in images and JavaScript code.
Not surprising. Most webpages are over-engineered blogs. If you're not organizing data other than text and a few multimedia elements, then you probably don't need millions of lines of JS libraries. Not only do they slow the transfer, they take time to execute and make most web pages do quirky things for 1 to 10 seconds before the page finishes loading.
I've recently tried to make a simple webpage snappier. Replacing hi-res images with appropriately sized ones, and including heights (not only widths) to prevent re-layouts, was easy enough, but then…
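For reference, the two fixes mentioned look roughly like this (filenames and dimensions are placeholders):

```html
<!-- Serving an appropriately sized image via srcset/sizes, and
     declaring both width and height so the browser can reserve the
     box (and derive the aspect ratio) before the bytes arrive,
     preventing re-layout. -->
<img src="hero-800.jpg"
     srcset="hero-400.jpg 400w, hero-800.jpg 800w"
     sizes="(max-width: 600px) 400px, 800px"
     width="800" height="450"
     alt="Hero image">
```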
The single biggest offender in both download and render time, which I can do nothing about, is YouTube loading over 400KB of JavaScript for an embedded video, taking ages to first render and making everything feel glacially slow.
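One common workaround (a pattern, not a YouTube feature) is a click-to-load facade: render only the thumbnail, and swap in the real iframe, with all its JavaScript, only when the user clicks. A sketch (the video id is a placeholder):

```html
<!-- Shows a plain thumbnail; the heavy embed loads only on click. -->
<button class="yt-facade" data-video-id="VIDEO_ID" style="padding:0;border:0">
  <img src="https://i.ytimg.com/vi/VIDEO_ID/hqdefault.jpg"
       width="480" height="360" alt="Play video">
</button>
<script>
  document.querySelectorAll('.yt-facade').forEach((btn) => {
    btn.addEventListener('click', () => {
      const iframe = document.createElement('iframe');
      iframe.width = 480;
      iframe.height = 360;
      iframe.allow = 'autoplay';
      iframe.src =
        'https://www.youtube.com/embed/' + btn.dataset.videoId + '?autoplay=1';
      btn.replaceWith(iframe); // only now does YouTube's JS download
    });
  });
</script>
```

The page's first render then costs one image instead of hundreds of KB of script.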
> However, the 1.6Mbps 3G connection emulated by HTTP Archive is very slow, so we should expect significant performance improvements as bandwidth improves. The average website downloads 1.7MB of data in 2020, which will take at least 9s to download on the HTTP Archive connection.
I have family in an African country where the only reasonably priced home internet connection is about 56kbps over DSL (yes, dialup speeds over DSL, very confusing). Web pages have gone from very slow to unusable very quickly.
I believe and appreciate that the users most people target have faster devices now, especially in the core hubs of Web development like Silicon Valley. However, this trend of "we can fit more data in because devices are faster" is horrible for anyone already behind in available technology or even on a limited data plan.
My father uses my old smartphone, a OnePlus One with the latest release of LineageOS. This device runs a Qualcomm Snapdragon 801, a chip that was considered flagship-fast at the time of its release. WiFi speeds are over 100Mbps, but websites and applications are still getting slow somehow.
Even a mid-range or cheap smartphone has a better GPU than older devices, and there are many of those out there. The advice to test on budget smartphones is solid, but people often go out and buy a new cheap phone instead of using something that was popular a few years ago. People who can't afford or don't see the value in getting a new S20 don't necessarily have cheap smartphones, they often have second hand phones or hand-me-downs as well. Frameworks like Flutter and browser engines using canvas are very noticeably slow on those older devices because of the advancements GPU tech has made, as CPU tech improvements in smartphones have begun to slow down over the years.
Is the web slower? No, not for the people you want to sell your product to. The question is, why should we accept this bloat? Web applications can be as slow as you want to make them, with features and pretty designs for all I care, but the web in general doesn't need two megabytes of JavaScript to render a manual or a forum post. The metrics discussed here are wrong in the context they were originally used in, so the article does have a point. However, I don't think we should say the web hasn't gotten slower just because computers have increased speeds to compensate for the load. Every byte you save, every script you drop, every image you compress can have a significant impact on hundreds of millions if not billions of users. If you can't justify that to your team lead, use the argument that the fewer resources you use, the better the users' battery lives and the more responsive the site feels. The impact is about the same.
I'm interested in whether there's a breakdown of how the web slows down by region. I imagine uptake of new web technologies and hardware differs according to GDP and baseline mobile or network capabilities.
TCP slow start (or the QUIC equivalent) should also be considered. A speed test usually downloads large files, so the TCP window has time to grow, but small websites are sometimes not big enough to grow the window.
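To put rough numbers on that, here's a back-of-envelope sketch, assuming RFC 6928's initial window of 10 segments, a 1460-byte MSS, and a window that doubles every round trip; real stacks (and QUIC) differ in detail:

```javascript
const MSS = 1460; // bytes per segment (typical Ethernet-path MSS)

// Count round trips needed to deliver totalBytes under idealized
// slow start: one full window per RTT, window doubling each RTT.
function rttsToDownload(totalBytes, initCwnd = 10) {
  let cwnd = initCwnd;
  let sent = 0;
  let rtts = 0;
  while (sent < totalBytes) {
    sent += cwnd * MSS;
    cwnd *= 2;
    rtts += 1;
  }
  return rtts;
}

console.log(rttsToDownload(14 * 1024)); // a small page fits in the very first window
console.log(rttsToDownload(1700000));   // a 1.7MB average page needs several round trips
```

So a speed test pushing hundreds of megabytes runs almost entirely at the fully grown window, while a small site spends most of its transfer still ramping up; measured "line speed" and perceived page speed diverge.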
It’s that time again when we need to swing from fat clients to thin clients, and think about what went wrong. It swings back every 20 years or so, so don’t worry.
Well, in a way, yes, especially when using older hardware like the Thinkpad T42p I'm using right now. Go to the Wayback machine or Internet Archive, find a page from when the machine was made (2004) and compare its loading time to the current version. Now take a recent piece of hardware and load the current page and marvel at the fact that the old page on the old machine loads about as fast as the new page on the new machine. The old page looks dated, of course, but that is more a matter of the layout than it is of the lack of 'modern' technology. It would be possible to make the page look close to the way the modern version looks without incurring all that extra load time. Yes, this would probably entail server side rendering and some judicious use of older but still useful tricks like server side includes but it is certainly doable. It isn't being done because modern pages load fast enough on modern hardware and developers are incentivised to track the latest technologies to keep their market value up.