- Reducing DNS calls and server round trips. Loading fewer resources from fewer domains makes a huge difference. Using server push also helps, although it might get deprecated.
- Responsive images. Load small images on small displays. Don't load 2x images on lower density displays.
- Using the right load event for JavaScript. load fires long after DOMContentLoaded on some pages, especially on slow connections (see the sketch after this list).
- Setting the size of elements before they load, to prevent the content from jumping all over the place.
- Loading fewer fonts, and serving them yourself. Google Fonts is significantly slower.
- Gzipping everything you send. It makes a significant difference on slow connections.
- Static caching. This will shave whole seconds off your page load time. Cache both on the server and in the browser, but mostly on the server.
- Consider perceived performance too. It doesn't matter how fast you force a coercive cookie notice on your readers.
- Performance extends to the content of the page. Don't make users chase answers. Give clear, concise information and use typography to guide your users to it.
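A minimal sketch of the load-event point from the list, with console.log calls standing in for whatever your page actually does:

```html
<script>
  // Fires as soon as the HTML has been parsed: no waiting on images,
  // fonts or iframes, so DOM-only setup can start right away.
  document.addEventListener('DOMContentLoaded', () => {
    console.log('DOM ready: wire up menus, handlers, anything DOM-only');
  });

  // Fires only after every subresource has finished, which can be many
  // seconds later on a slow connection, so park non-critical work here.
  window.addEventListener('load', () => {
    console.log('Fully loaded: analytics, prefetching, other nice-to-haves');
  });
</script>
```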
> Setting the size of elements before they load, to prevent the content from jumping all over the place.
This is especially jarring on a phone. Multiple times a day, I see this, and it pisses me off each and every time. Nothing like reading an article only to have some image load somewhere (likely not even visible on screen) suddenly shove what you're reading off screen. And I'm not on a slow connection, either. I'm typically on uncongested wifi connected to fiber when at home, 5G otherwise.
JavaScript in general is so bad, or rather is being used for so many bad practices, that I've disabled it on my main mobile browser. What little good it's used for doesn't outweigh all of the bad out there. I'm tired of malicious ads hijacking the page, either rendering popups or redirecting me somewhere I don't want to go. If a site doesn't render anything useful without JavaScript (Twitter is a prime example), I typically turn around and go elsewhere.
Layout shift due to responsive images is now easy to prevent, because most up-to-date browsers now correctly set the aspect ratio of a responsive img element if you add the height and width attributes. Before a year or so ago, you had to use hacks like padding percentages to correctly set the aspect ratio.
There are extensions (Dark Reader, a dark/light mode extension for Firefox, is an example) which trigger multiple redraws of a page on load. On an e-ink device (my principal driver these days), the extension is a near necessity (ironically, to force black-on-white themes on sites, which are much more readable on e-ink), and page repaints are all the more jarring and expensive.
Simple pages still load quickly and without disruption. Many complex sites don't. (NPR's website comes particularly to mind.)
> This is especially jarring on a phone. Multiple times a day, I see this, and it pisses me off each and every time. Nothing like reading an article only to have some image load somewhere (likely not even visible on screen) suddenly shove what you're reading off screen.
This is the worst on picture rich recipe blogs. I hate it.
Nothing saves as much as just writing the HTML, CSS, and JS by hand. No build step or anything but an editor. No need to minify anything, keeping it nice and simple.
Responsive images are kind of funny in the sense that my webpages are nearly twice the size on mobile vs my laptop. The screen may be smaller, but the CSS pixel ratio is 1 on my laptop vs 3 on an iPhone.
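For anyone unfamiliar with the syntax, here's a rough sketch of the density-descriptor form of srcset (file names are made up). The 1x laptop pulls the small file while the 3x phone pulls the large one, which is exactly where that size difference comes from:

```html
<!-- The browser picks the candidate matching its device pixel ratio:
     a 1x laptop downloads photo-800.jpg, a 3x phone photo-2400.jpg. -->
<img src="photo-800.jpg"
     srcset="photo-800.jpg 1x, photo-1600.jpg 2x, photo-2400.jpg 3x"
     width="800" height="600" alt="Example photo">
```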
While true in theory, I believe that keeping things handmade means that the payload couldn't realistically become big enough for the parent's suggested optimizations to actually matter.
An unoptimized payload isn't that big of a deal if it's still smaller than the best optimized one.
Perhaps not all of the advice applies, but when it comes to images and fonts, I don't think any amount of keeping things handmade is going to outstrip the advice regarding static caching and loading fewer fonts.
The best advice for images is to delete them. There's a widespread belief that meaningless pictures at the top of a post are critical. Here's the first link that comes up on Medium: https://equinox-launch.medium.com/trade-hard-win-big-dopewar... The value added of that image at the top is negative. This is, sadly, the rule rather than the exception.
Heavy images & fonts on an otherwise plain HTML page will still be acceptable as they would be loading in parallel - in most cases the text will come first anyway.
I'd rather have a half-loaded page with images/fonts missing than a JS-based page which won't give me anything until the full JS payload is downloaded & executed.
Writing HTML by hand is pretty dubious advice (slightly impractical but doable, so not necessarily something to rule out on those grounds, but the factor that dominates in deciding whether or not to do it comes down to negligible benefit). But writing CSS and JS by hand is both way more practical than people think, and the gains are real.
If HTML allowed you to trivially import reusable components like you can with JSX it’d be killer. That’s the one thing that stops me writing my site by hand.
I’m looking at Astro for my site’s re-write as it seems to find a happy medium here.
>If HTML allowed you to trivially import reusable components like you can with JSX
Sounds like WebComponents.
>I'm looking at Astro...
Never heard of it and just gave it a quick glance. Hadn't heard of the "islands architecture" it implements either, although it is similar to some familiar concepts.
In fact, it reminds me of the way we built sites years ago (pre-AJAX even), when static HTML was just delivered by the server and there was little-to-no JS involved, save for some effects.
And, that's the funny thing: in spite of all the myriad paths and paradigms, we seem to be coming full circle. It's funny in 2021 to see a "new" project advocating for a "new" approach that is essentially "SSR static HTML with minimal client JS FTW!"
WebComponents have been discussed for a while. They were just a dream not too long ago, and I'm not sure what their usability is right now. So, word of warning that I'm not advocating for or against their use at this point. Just mentioning that the description you gave previously reminded me of WebComponents.
If your project is so small that you can manage and implement it on your own, without teams having to work together, and if you don't care about older browsers, then maybe you have a point.
If you work on bigger projects you will run into many problems really soon.
Like some Safari strangeness when it comes to CSS, or some old browser which you as a developer would never use but which your user base still does, and which accounts for 12% of the revenue.
This really only works for relatively small sites that don't change much and aren't developed by multiple people. After that you'll at least want some static site generation (which by itself doesn't slow down the experience), bundling of multiple JS sources together (allows you to handwrite JS over multiple files without taking the perf hit of serving multiple files), and likely a CSS preprocessor (e.g. SCSS).
I write my CSS by hand. I just don't like setting up preprocessors unless I have colleagues working on the same code.
No, it does not help. CSS is trivially small, especially after it's compressed. Same for HTML and JS. JS will balloon in size if you start using Webpack, but your own code will stay fairly small.
A single image will outweigh your text files, so the effort isn't really worth it.
Plus careful image scaling, quality and progressive display. Often images appear to be print quality. You need to know about the difference between web and print, however.
I would hate giving any random piece of software full access to my computer instead of using a sandboxed version in a browser. The perf tradeoff is worth it for stuff like games and photo/video editors, but every random webapp? No way.
Wrong. Of course if you use a minifier you save bytes, that’s true by definition. A decent build tool can still start from static files but also optimize them as much as possible. If there’s little CSS it might even just inline it.
How about no serving of fonts? I don't think I've ever looked at a webpage and thought it would be improved by a particular font family. What exactly can't be accomplished with standard web fonts? Icons? Haha, don't even go there.
I actually disabled loading of webfonts. The only two websites I've encountered where I actually notice it, and remember that I did that, is Google Translate and a particular government website I use about once a year.
Everything, everywhere else, seems to work just fine with using the browser defaults.
> The only two websites I've encountered where I actually notice it, and remember that I did that, is Google Translate and a particular government website I use about once a year.
That's probably because of the Google Material Icons font, I believe? It shows or used to show text instead of icons when it couldn't load.
> That's probably because of the Google Material Icons font, I believe? It shows or used to show text instead of icons when it couldn't load.
All icon fonts _should_ do this. It's just that the Google Translate site is designed in such a way that the fallback text doesn't render correctly, overflowing and overlapping everything else, which makes it noticeable. It seems to very much be a 'pixel perfect' design, rather than responsive.
Everywhere else, the fallback fits naturally, or renders to a standardised UTF-8 symbol, so that it isn't noticeable to me for day-to-day usage.
Absolutely, especially with the recent "invention" of the native font stack as a solution that does not require loading external files while showing a nice modern font.
Imagine if the trade-off was more explicit to the visitors: would you want something that made your web browsing slower, made your text flash in like a cheap PowerPoint slide, and sent your data to Google, in exchange for being able to read in a different font? External web fonts are purely for the ego of the web developer, not for the reader.
Another option is that you keep 90% of your text in the default font (e.g., -apple-system or the equivalent stack elsewhere), and then load only the specific weights for whatever title(s) or so on you need.
Hell, I've outright just crammed a few characters into a base64 encoded font in inline CSS, which - while I'm sure makes many here grimace - loads faster than anything else I've tested.
(I'm a big fan of just shipping everything down the pipe as one file, short of images - I do this on my personal site and it makes me happy that it "just works": https://rymc.io/ )
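For reference, a sketch of that "system stack plus one heading weight" setup; the font file name is a placeholder:

```html
<style>
  /* Body text stays on the system stack: zero font requests. */
  body {
    font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, sans-serif;
  }

  /* Only the one weight actually used for headings is self-hosted.
     font-display: swap keeps the fallback visible instead of blocking. */
  @font-face {
    font-family: "Heading";
    font-weight: 700;
    font-display: swap;
    src: url("/fonts/heading-700.woff2") format("woff2");
  }

  h1, h2 { font-family: "Heading", -apple-system, sans-serif; }
</style>
```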
In my case it guarantees a certain level of readability that is consistent across platforms. I spent quite a bit of time fine tuning the font weights to help readability. That costs about 60 kb, so it's hardly a huge sacrifice.
I host a personal photo gallery online and I've been dreading implementing this for images to get rid of the jump when lazy loading. I'm not even sure there's a good way to do it.
Using a css flexbox or grid layout means you don't have to recalculate image div widths and heights when the browser viewport width changes, but you still need to give it enough information to back into the correct aspect ratio.
Also, using a srcset helps only load the necessary bytes, but means you'll need to either build the resized image on the fly (which is cpu-expensive) or prebuild them (which takes up some disk space).
In the interests of a performant UI, PhotoStructure extracts and remembers the width and height of photos and videos during imports, but if you're only loading a couple of images in a gallery, you could fetch this metadata on the fly as well.
Depending on your UI, you can also load the images into a hidden div (display:none), catch the load event for each image, and display the div once you've detected that the entire image set has loaded.
I want to say I did something like this for an infinite-scroll grid layout some time ago where I didn't have the image dimensions to help the browser. Details are fuzzy, but there seemed to be a gotcha here and there, some of which was around the load event.
Of course, the user's perceived wait time goes up a bit, as you're not displaying each image as it loads. But, that can be mitigated somewhat by fewer images per pagination and/or preloads.
The overall solution worked well for our use case.
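A bare-bones version of that hidden-div approach (selectors and image paths are just placeholders):

```html
<div id="gallery" style="display: none">
  <img src="/photos/1.jpg" alt="">
  <img src="/photos/2.jpg" alt="">
  <img src="/photos/3.jpg" alt="">
</div>
<script>
  // Reveal the gallery only after every image has settled (loaded or failed),
  // so nothing jumps around as individual files trickle in.
  const gallery = document.getElementById('gallery');
  const images = [...gallery.querySelectorAll('img')];
  let remaining = images.length;
  const done = () => { if (--remaining === 0) gallery.style.display = ''; };
  for (const img of images) {
    if (img.complete) {
      done(); // already cached; its load event may have fired before this ran
    } else {
      img.addEventListener('load', done, { once: true });
      img.addEventListener('error', done, { once: true });
    }
  }
</script>
```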
For a grid-style gallery you can have a container for each image with a fixed size (the maximum size you want an image to be) and the image itself can use max-width and max-height set to the same size as the container. If your aspect ratios are too varied, you might have to fiddle with the ideal size to get an optimal result, but it gets the job done.
If it's another kind of gallery it could work if you accept those constraints. If you don't, then it's better to use width/height anyway.
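Roughly what that looks like, assuming 300px is "the maximum size you want an image to be":

```html
<style>
  /* Each cell reserves a fixed 300x300 box up front, so late-loading
     images can't shift anything around them. */
  .gallery {
    display: grid;
    grid-template-columns: repeat(auto-fill, 300px);
    gap: 8px;
  }
  .gallery .cell {
    width: 300px;
    height: 300px;
    display: flex;            /* centre the image inside the reserved box */
    align-items: center;
    justify-content: center;
  }
  .gallery .cell img {
    max-width: 300px;         /* scale down to fit, whatever the aspect ratio */
    max-height: 300px;
  }
</style>
```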
This is off the top of my head, but if your images are set to a percent-based width so they scale (mine are) and you know the aspect ratio of the image, then you can set the CSS ‘aspect-ratio’ property (or there are some other hacks with padding on a wrapper div) to maintain the height.
You’d probably want to integrate this into the build process but I have not done this.
You don't even need the recently-added aspect-ratio property. In most up-to-date browsers you can now just add the height and width attributes to the img tag and the aspect ratio will render correctly even for responsive images:
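(Something like the following; the file name and dimensions are placeholders, and the max-width/height rule is the usual responsive companion.)

```html
<img src="hero.jpg" width="1600" height="900" alt=""
     style="max-width: 100%; height: auto;">
<!-- The width/height attributes give the browser the 16:9 ratio up front,
     so it reserves the right amount of space even while CSS later scales
     the image down to fit the viewport. -->
```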
Is there any chance in the future that browsers are going to support / implement lazy-loading of images themselves (i.e. based on what's visible in the viewport)? Currently you have to (as far as I'm aware anyway, but I'm only just getting re-acquainted with web-dev) use JS to do it (things like LazySizes), which adds complexity, and is even more complicated if you want to combine that with responsive images...
Or am I missing something that's possible already?
You can set the attribute loading="lazy" on an <img> tag and the browser will do it for you. Looks like Edge and Chrome and Firefox support it, but not Safari yet (https://caniuse.com/?search=loading).
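It also composes with responsive images without any JS, e.g. (placeholder file names):

```html
<!-- The request is deferred until the image nears the viewport, and the
     browser then picks the right candidate from srcset/sizes. -->
<img src="cat-800.jpg"
     srcset="cat-400.jpg 400w, cat-800.jpg 800w, cat-1600.jpg 1600w"
     sizes="(max-width: 600px) 100vw, 600px"
     width="800" height="600"
     loading="lazy" alt="A cat">
```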
Server push has been deprecated by browsers for a while now.
To reduce round-trips, consider using TCP Fast Open and 0-RTT with TLS 1.3. Note that 0-RTT can enable replay attacks, so I'd only consider using it for static content. HTTP/3 and QUIC have improved support for 0-RTT.
Gzip is good for dynamic compression, especially with zlib-ng. For static compression, I'd rather use Brotli.
It’s points, not pixels. A modern smartphone usually has a scale factor of 2 or 3, giving the same logical screen space with a significantly sharper image.
It's not (as sibling comment says) points, it's logical pixels as declared by the device/user agent's pixel density. Device pixels are usually (but not always) some integer multiple of the logical pixels.
> find yourself greeted with a 75vh image carousel, social media links and banners, a subscribe card, a "Chat Now!" banner with a GPT bot typing, xyz.com wants to know you location, or a prompt for cookies
Also don't forget an "Allow xyz.com to send notifications?" pop-up and an overlay ad banner with a 10-second timeout.
As admirable as the intention is, this is a losing battle. The biggest websites in the world have enormously bloated sites, because they can, and they won't be penalized for it.
Your mom-and-pop website's rank is entirely at the discretion of Google's Core Web Vitals measurements (and other SEO KPIs), but Amazon with its dumpster fire search pages with shifting layouts as content is dynamically loaded in....will be just fine.
The biggest websites in the world (Amazon, Google, Facebook, Twitter) try to push you to use their app whenever possible anyway. The web seems to be an inconvenience for them; they certainly don't want you blocking any of the myriad scripts they have running under the hood to tally analytics, A/B testing, user session recording and whatever else.
If you have a personal website, as I do, chances are it's relatively lean, because we're the only ones designing it and keeping it updated. Chances are also that very few people visit it, because it's outranked by the thousand other bloated websites that will rank higher for whatever query it was that you typed.
Sure, but complaining and proving it is possible to have smaller/faster websites is important because it can inspire and lead to better tools, libraries, services, standards, techniques and even new fads (!) in the future.
I understand that lots of software engineers complain about new tooling, and would rather have nothing changing, but the reality is that things will change. So we might as well fight for them to change to the better, by proving it is possible and showing/teaching others.
I also understand that people don't think FAANGs will ever fall, but just as Google disrupted every search engine before it and Amazon disrupted brick and mortar stores, there can be something with a differentiator that will disrupt them in the future, and that differentiator might be going back to the web. Of course, there's the possibility of Google killing this new disruptive corner of the web as soon as it's born, but oh well...
The point would seem to be having a superior website for your use case. Discoverability through search is unimportant for many things like websites for private events. As long as people can find a personal site by typing the person's name in either as a search or the url, it's probably good enough.
The point is that MY website will be accessible, and so will the websites I link, and I will be able to enjoy my little corner of the Web from any device and location I choose. A nice bonus is saving a lot of time on maintenance.
I would want to consider the points in the article, but I cannot focus on reading the text on my 27-inch/4K screen, as the lines are too wide. This also makes me doubt that whoever wrote those lines cares about UX and not only about having a minimal webpage.
For some reason this website is a lot worse than Wikipedia for me. I think it has to do with the font size and spacing. On Wikipedia the font size is smaller, so even though the line is long, the sentences are pretty short and you can read almost an entire sentence without moving your eyes/head. On this site it feels like a journey to read a sentence.
Exactly. Apparently these people don't stop to think why the majority of books are made in portrait format, and magazines, newspapers and some books split texts in columns.
250kb over <25 requests should really be an established standard at this point. It's not difficult to achieve and makes a massive difference in energy usage and server requirements for high traffic sites. There are plenty of <25kb frontend frameworks, and some of them are arguably better than their >100kb competitors.
I recently made a site that frontloads an entire Svelte app, every page and route, entirety of site data (120kb gzipped), all of the CSS and a font full of icons, a dozen images, service worker and manifest. Clocks in at exactly 250kb, scores 100 on LH, fully loads in 250ms uncached, and all navigation and searching is instantaneous.
That's not a brag, it just goes to show that anyone can make sites which are bandwidth and energy friendly, but for some reason very few people actually do that anymore.
Disabling SSR and prerender within SvelteKit turns the output into an SPA, and the manualChunks option in vite-plugin-singlefile ensures the output is one file that includes every page of the site.
That's pretty nice. Surely if you have a big enough site with enough content, loading all of it in one go is not a pleasant experience on slow connections?
Depends on the site and content, for sure. This particular site is aggregating a subset of public profiles and posts from a social media network. To compare, the site I'm pulling the data from uses nearly 20MB to display a single uncached page of post titles and 2MB per subsequent page, while my site is able to display the entire userbase for a cost of 250kb and a single request for a ~5kb image on subsequent pages. The only difference is the choice of frameworks and component construction.
I would be utterly embarrassed to produce a 1mb payload on anything but an image- or video-heavy website (where the images and videos are the actual content, not superfluous crap). There's absolutely no reason for websites that are primarily text and hyperlinks to ever approach anything like 1mb. Even 250kb is excessive for many cases.
My latest website[0] is 7.5kb, and I built it after getting fed up with the official MotoGP no-spoilers list which sends 2mb to surface the same information in a much less user friendly way. This is how the bulk of the internet should look.
> There's absolutely no reason for websites that are primarily text and hyperlinks to ever approach anything like 1mb. Even 250kb is excessive for many cases.
Yep. My personal site (70+ posts) is 26kb. 17kb is the syntax highlighting library.
Another project site of mine is 2mb but that's because it has a lot of screenshots. I should probably shrink those screenshots though...
I used to run out of high-speed data on my cell phone plan towards the end of each billing period. Many websites/apps don't work at all at such low data speeds. Google Maps is basically unusable at low data speeds. Of course this trade off between performance for users vs ease of development has been with software for decades... see Andy and Bill's Law.
I design for those, because the experience is similar to that of someone in the Berlin U-Bahn. If you're just displaying text on a page, it shouldn't require a 4G connection.
I optimized my website the other week. Some things that I did that were relatively easy included: removing Font Awesome (I replaced the relevant icons with SVGs since I was only using a few of them), removing jQuery (replaced with vanilla JS), lots of CSS (removed a lot of cruft from unused rules, etc from my SASS rules), and got rid of as many full size images as possible (replaced with webp thumbnail links). I got the size of my website down to less than 100 kb gzipped for the pages with images and less than 50kb for those without.
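For anyone weighing the same Font Awesome swap, the inline-SVG version is roughly this (the RSS-ish path here is only an illustrative placeholder, not a faithful glyph):

```html
<!-- One small inline SVG instead of the whole icon-font download;
     currentColor makes it follow the surrounding text colour. -->
<a href="/feed.xml" aria-label="RSS feed">
  <svg width="16" height="16" viewBox="0 0 16 16" fill="currentColor" aria-hidden="true">
    <circle cx="3" cy="13" r="2"/>
    <path d="M1 6a9 9 0 0 1 9 9h-3a6 6 0 0 0-6-6V6z"/>
    <path d="M1 1a14 14 0 0 1 14 14h-3A11 11 0 0 0 1 4V1z"/>
  </svg>
</a>
```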
I have been thinking about images on the web. And I'm still unsure, undecided on what sort of file size we should really be aiming at.
>1500x1750 and weighing in at 3MB! The webp version of the same size is ~79KB.
79KB is a very low 0.24 BPP (bits per pixel). For 1 BPP it would be 328KB.
AVIF is optimised for BPP below 1, while JPEG XL is optimised for BPP above 1. On a MacBook Pro, the first fold of the browser has 3456 x ~2200 pixels; an image filling up that screen at BPP 1 would equate to 950KB.
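Spelling that arithmetic out (taking 1 KB as 1,000 bytes):

$$\mathrm{BPP} = \frac{79{,}000 \times 8\ \text{bits}}{1500 \times 1750\ \text{px}} \approx 0.24, \qquad 1\ \mathrm{BPP} \Rightarrow \frac{1500 \times 1750}{8} \approx 328{,}000\ \text{bytes}, \qquad \frac{3456 \times 2200}{8} \approx 950{,}000\ \text{bytes}.$$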
In a perfect world you would want JPEG XL to further optimise BPP below 1.0 and do progressive loading on all images.
We have also bumped our monitor resolution by at least 4x to Retina standards.
Despite our broadband getting so much faster, file size reduction is still the most effective way to get a decent browser experience. At the same time we want higher quality images and beautiful layout ( CSS ) etc.
Balancing all of these is hard. I keep wondering whether some of this is over-optimising, given that we can expect consumer internet speeds to increase over the next 10 years.
YMMV, but for 99% of the images I see on the web, I don't care about their quality. A large part of them are annoying, and most of the others could be low-resolution and low-quality and I would not even notice, except for the faster page load. Once in a while, a photo is artistic or interesting enough to gain from high quality, but most web sites I visit have far too many heavy images.
Great to see Puerto Rican tech content on Hacker News! I grew up in PR and have always dreamed that PR could become a Singapore or similar with regards to its tech ecosystem. Saludos boricua from Boston!
FYI, you probably know already, but there are lots of crypto/blockchain people living in PR now.
And, PR could definitely become a Singapore if the government would embrace that possibility (they're already part way there with the low taxes for expats), but I think they are too populist and just don't have that kind of vision for the future of the island.
Many sites are becoming practically unusable as more and more crap is loaded for not-good-enough reasons. When websites from social media to banks often take tens of seconds to load, it is obvious that the designers and coders could not care less about the UX.
Even the definition of a "small" site has grown three orders of magnitude. There used to be a 5KB website competition [0], which grew to a 10K contest. Now, the link is to a One Megabyte Club [1] list of small websites.
I don't know how to reverse the trend, but managers and coders need to realize that just because some is good, does not mean that more is better.
It's not Webpack, minifiers or build tools that make things bigger: if you feed them small inputs, you get small bundles.
Build pipelines don't merely concatenate a few scripts; they're meant to automate the boring parts we used to have to do by hand (like copying assets with cache-busting filenames and updating matching references, setting the correct image sizes, generating SRI hashes, generating sourcemaps, generating sitemaps, checking the quality of pages, etc). This way you can focus on writing the application and keeping it maintainable without having to waste time remembering to use javascript hacks (like `!1` instead of `false`) to save a few bytes like we used to.
It's when people indiscriminately feed them big frameworks that things bloat, same way you won't be lightweight with a junk food diet.
This is down to no-one caring about speed for brochure websites. The client just wants their website up as cheaply as possible as quickly as possible. The quickest, cheapest way of building a brochure site is usually Wordpress using a off-the-shelf template. And no-one in that entire chain cares about the loading speed of the site.
Also, I'm not sure who the article is aimed at? Local lawyers are not going to read this and be able to fix their site. The owner of the web design agency that they bought the site from isn't going to read it because they moved from tech to biz. The web dev who is implementing the site already knows this, but has to knock out sites as quickly and cheaply as possible, and so this isn't something that matters.
One area where I'll disagree is minification. If you're gzipping (which you should be), try running your build without the minification step and compare the resulting over-the-wire file sizes. In my tests, the difference has been negligible. Which makes sense--you're just doing a limited text-based compression step before you throw the files at a real compressor, so it tends to be wasted effort.
You're certainly never going to enjoy anything like the big shrink of using gzip, optimizing images, or not importing a big library you don't need.
I think it's a big misconception that rendering code is responsible for most of this bloat. The great majority in my experience comes from tracking scripts and adware that get layered on top. This can be just as true on a React-based site as it is on a classic server-rendered Wordpress site, and is often out of the hands of the actual engineers. I suspect Google Tag Manager has done more harm to the web experience than any other single factor.
Working with designers, I'm provided final images at spec and can only optimize them losslessly; compressing an image changes its appearance. The CLI commands in this article use lossy compression. The point stands, alternative image formats should be used whenever possible, but I need a lossless compression pipeline to generate WEBP, AVIF, etc.
As someone who also works with designers, I understand at a deep level how you end up in this situation—but I also don't think it should be this way. Image compression should be primarily an engineering issue, and designers should be giving you lossless or effectively lossless[1] images to compress as you see fit.
The one thing I'd want designers to be responsible for is image resolution, since this is intrinsically tied to website layout and affects which source photos can be used where.
[1] For example, a 100% quality jpeg is perfectly fine to work with.
PNG is lossless, the 49KB WebP will be lossy (WebP can be either). Few images beyond the size of icons warrant losslessness, which comes at an extreme size penalty.
On that note, has anyone tried to "compress the Web"? I know the Internet Archive is archiving it, but is there a browser or a tool in general that makes websites smaller while preserving functionality?
Gzip does that, Chrome mobile will proxy and recompress things in data saver mode, and Cloudflare will minify pages and scripts, recompress images and lazy load things for you if you let it.
You can also just run an ad blocker and most of the weight goes away
Probably the coolest trick I've seen is inlining everything into a single HTML file: JS, CSS, images, icons and fonts. One single large request seems to work much better than loads of small ones.
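In its simplest form it's just something like this (a throwaway sketch, with a 1x1 SVG data URI standing in for a real inlined image):

```html
<!doctype html>
<html>
<head>
  <meta charset="utf-8">
  <title>Everything in one request</title>
  <!-- CSS inlined instead of a separate stylesheet request -->
  <style>
    body { max-width: 40em; margin: auto; font-family: sans-serif; }
  </style>
</head>
<body>
  <h1>One file, one round trip</h1>
  <!-- Small images can ride along as data: URIs -->
  <img src="data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20width='1'%20height='1'/%3E"
       width="1" height="1" alt="">
  <!-- JS inlined as well, so nothing else blocks on extra requests -->
  <script>
    document.querySelector('h1').title = 'Loaded with zero extra requests';
  </script>
</body>
</html>
```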
This approach prevents any of those resources from being cached and reused on other pages. Visitors will end up re-downloading all that content if they click on any links on your page.
Mind you, most web developers assume users spend a lot more time on their site than they actually do. Most visitors to your website will not click on any internal links on your page.
Deliberate inlining tends to make you more sensitive to size issues, so that’s also less likely to be an issue. Inlining everything also makes it so that you can at least theoretically do more effective dead code removal on CSS and JavaScript, page-by-page, though I’m not sure if I’ve ever seen that actually being done, though the tools mostly exist; but compared with things like GCC and LLVM, web tooling is atrocious at optimisations.
It's still worth it depending on the page size and average page views per visit. However, we're talking about a very small, very fast website here: the time spent on an extra request is longer than the time spent downloading the 1-6kb of inlined resources.
We're doing this right now with our latest generation web app.
It's 1 big file of 100% hand-rolled html/css/js. The entire file is 1600 lines last I checked. Loads about as fast as you'd expect. I actually found a bug with the gzip threshold in AspNetCore vs chromium (the file is that small), so had to pad the index file with extra content to force gzip. Most of the dynamic UI happens over a websocket using a server-side rendering approach similar to Blazor.
That’s… misleading, at best. Depends on how many resources you have, whether you’re using different domains, and whether you’re using HTTP/1 or something newer. Generally speaking, what you’re saying is only true if you have a lot of decently large resources, loaded from different domains or over HTTP/1.
If everything was coming from the same origin already, then:
• If you’re using HTTP/2 or HTTP/3 your statement is almost entirely false: whether you load in parallel or in serial, it’ll be downloading it at the same speed, except that by not inlining you’ve added at least one round trip for the client to know that it needs to request these other URLs; and now you mostly can’t control priority, whereas with everything inlined you know it’ll load everything in source order, which you can normally tweak to be pretty optimal.
• If you’re using HTTP/1, the use of multiple connections (typically up to 6) only kicks in if you’ve got plenty of stuff to download, but involves at least a couple of round trips to establish the connection, so even if using a single connection reduces the available bandwidth, it’s still normally worth it for many, probably most, sensibly-done sites.
If you want a fast overview of a page's JS and coverage:
We developed a quick way to view a treemap of all your page's JS (including inside bundles) from a Lighthouse report. Do it through the DevTools or from PageSpeed Insights - no need to reach for a CLI tool.
Of course, you need to have source maps accessible to see inside bundles.
It seems obvious that website designers, and tools/orgs that provide websites to others don't have "small download size" or "fast" as project requirements.
The History API could improve the user experience. But I'm having issues with certain browsers on iOS and the swipe gestures that take you to the previous and next page. Firefox, Opera GX, etc. will simply lose the scroll position whenever I swipe, while it doesn't happen on Safari and Google Chrome. This seems related to infinite-scroll pages vs fixed page height.
Most of the sites I have explored didn't appear to solve that annoying issue; it happens with YouTube content on mobile too.
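One workaround sketch, assuming the page takes over scroll restoration itself (with infinite scroll you'd still have to re-render enough content before the stored offset means anything):

```html
<script>
  // Opt out of the browser's automatic restore, which breaks once the page
  // height depends on lazily loaded content.
  if ('scrollRestoration' in history) {
    history.scrollRestoration = 'manual';
  }

  // Stash the scroll offset in the history entry before the user leaves.
  window.addEventListener('pagehide', () => {
    history.replaceState({ ...history.state, scrollY: window.scrollY }, '');
  });

  // On back/forward navigation, jump straight back to the stored offset.
  window.addEventListener('pageshow', () => {
    const y = history.state && history.state.scrollY;
    if (y != null) window.scrollTo(0, y);
  });
</script>
```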
> it's easy to forget that there is a large number of users that are browsing the same websites as I am, however they are using an underpowered mobile device over a 3G network
What is even more interesting is that in some parts of the world 3G is about to be phased out.
Everybody on 4G and 5G?
I'm afraid not, the people who now are limited to 3G will be limited to 2G unless they invest into something new and shiny.