The shared CDN model might have made sense back when browsers used a cache shared across sites, but they don't even do that anymore.
Static files are cheap to serve. Unless your site is getting hundreds of millions of page views, just plop the JS file on your webserver. With HTTP/2 it will probably be almost the same speed, if not faster than a CDN, in practice.
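For reference, "plop the JS file on your webserver" can be as little as this (a minimal sketch using Node's built-in http2 module; the cert paths and bundle name are placeholders):

    import { createSecureServer } from "node:http2";
    import { readFileSync } from "node:fs";

    // Minimal static file server over HTTP/2 (sketch only).
    // allowHTTP1 keeps older clients working on the same port.
    const server = createSecureServer({
      key: readFileSync("privkey.pem"),    // placeholder path
      cert: readFileSync("fullchain.pem"), // placeholder path
      allowHTTP1: true,
    });

    server.on("stream", (stream, headers) => {
      if (headers[":path"] === "/bundle.js") {
        stream.respondWithFile("public/bundle.js", {
          "content-type": "application/javascript",
          // a long cache lifetime is fine if the filename is fingerprinted
          "cache-control": "public, max-age=31536000, immutable",
        });
      } else {
        stream.respond({ ":status": 404 });
        stream.end();
      }
    });

    server.listen(8443);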
If you have hundreds of millions of page views, go with a trusted party - someone you actually pay money to - like Cloudflare, Akamai, or any major hosting / cloud provider. Not to increase cache hit rate (what shared CDNs were originally intended for), but to reduce latency by moving resources to the edge.
Does it even reduce latency that much, unless you have already squeezed latency out of everything else you can?
Presumably your backend at this point is not ultra optimized. If you send a Link header and are using HTTP/2, the browser will download the JS file while your backend is doing its thing. I'm doubtful that moving JS to the edge would help much in such a situation unless the client is on the literal other side of the world.
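Something like this is what I mean - flush the headers (with a Link preload) before the slow backend work is done, so the JS download overlaps with it (a rough sketch with Node's built-in http module; paths and the fake slow render are placeholders):

    import { createServer } from "node:http";

    const server = createServer(async (_req, res) => {
      res.statusCode = 200;
      res.setHeader("content-type", "text/html; charset=utf-8");
      res.setHeader("link", "</static/app.js>; rel=preload; as=script");
      res.flushHeaders(); // headers (and the preload hint) go out immediately

      const html = await slowBackendRender(); // stand-in for real backend work
      res.end(html);
    });

    // Pretend the backend is not ultra optimized.
    async function slowBackendRender(): Promise<string> {
      await new Promise((resolve) => setTimeout(resolve, 300));
      return '<!doctype html><script src="/static/app.js" defer></script><p>hello</p>';
    }

    server.listen(8080);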
There of course comes a point where it does matter; I just think the crossover point is way later than people expect.
Stockholm <-> Tokyo is at least 400 ms here; anytime you have a multi-national site, having a CDN is important. For your local city, not so much (and of course you won't even notice it when testing locally).
I understand that ping times differ by geography. My point was that in fairly typical scenarios (worst cases are going to be worse) it would be hidden by backend latency, since the fetch can be made concurrent with Link headers or HTTP 103 Early Hints. Devil's in the details, of course.
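The HTTP 103 variant looks roughly like this (sketch; assumes Node 18.11+ where writeEarlyHints is available, and the paths are placeholders):

    import { createServer } from "node:http";

    const server = createServer(async (_req, res) => {
      // 103 Early Hints: tell the browser about the JS bundle before the
      // final 200 response (and the slow backend work) is ready.
      res.writeEarlyHints({
        link: "</static/app.js>; rel=preload; as=script",
      });

      const html = await slowBackendRender(); // stand-in for real backend work
      res.writeHead(200, { "content-type": "text/html; charset=utf-8" });
      res.end(html);
    });

    async function slowBackendRender(): Promise<string> {
      await new Promise((resolve) => setTimeout(resolve, 300));
      return '<!doctype html><script src="/static/app.js" defer></script>';
    }

    server.listen(8080);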
I'm so glad to find some sane voices here! I mean, sure, if you're really serving a lot of traffic to Mombasa, Akamai will reduce latency. You could also try to avoid multi-megabyte downloads for a simple page.
While there are lots of bad examples out there - keep in mind it's not quite that straightforward, as it can make a big difference whether those resources are on the critical path that blocks first paint or not.
It’s not an either/or thing. Do both. Good sites are small and download fast even when served locally. The CDN will work better (and be cheaper to use!) if you slim down your assets as well.
Even when it "made sense" from a page load performance perspective, plenty of us knew it was a security and privacy vulnerability just waiting to be exploited.
There was never really a compelling reason to use shared CDNs for most of the people I worked with, even among those obsessed with page load speeds.
In my experience, it was more about beating metrics in PageSpeed Insights and Pingdom than actually thinking about the cost/risk ratio for end users. Often the people pushing for CDN usage were SEO/marketing people who believed their website would rank higher for taking steps like these, rather than working with devs and having an open conversation about the trade-offs. But maybe that's just my perspective from working in digital marketing agencies rather than companies that took the time to investigate all the options.
I don’t think it ever even improved page load speeds, because it introduces another DNS request, another TLS handshake, and several network round trips just to what? Save a few KB on your JS bundle size? That’s not a good deal! Just bundle small polyfills directly. At these sizes, network latency dominates download time for almost all users.
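A small polyfill is often just a few lines anyway; rough sketch of the kind of thing you can keep in your own bundle (the file layout is illustrative, and in practice you might take the equivalent from a library):

    // polyfills.ts (illustrative file name) - shipped inside the main bundle,
    // so there are no extra DNS/TLS/HTTP round trips to a third-party origin.
    if (!("at" in Array.prototype)) {
      Object.defineProperty(Array.prototype, "at", {
        value: function at(this: unknown[], index: number) {
          const len = this.length;
          let i = Math.trunc(index) || 0; // NaN becomes 0
          if (i < 0) i += len;            // negative indexes count from the end
          return i >= 0 && i < len ? this[i] : undefined;
        },
        writable: true,
        configurable: true,
      });
    }

    // In the (illustrative) entry file, import it before the app code:
    //   import "./polyfills";
    //   import "./app";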
> I don’t think it ever even improved page load speeds, because it introduces another DNS request, another TLS handshake, and several network round trips just to what?
I think the original use case was when every site on the internet was using jQuery, and on a JS-based site this blocked display (this was also before fancy things like HTTP/2 and TLS 0-RTT). Before cache partitioning, you could reuse a jQuery file requested from a totally different site that was already in the cache, as long as it had the same URL - which almost all clients already did, since jQuery was so popular.
So it made sense at one point but that was long ago and the world is different now.
I believe you could download from multiple domains at the same time before HTTP/2 became more common, so even with the latency you'd still be ahead while your other resources were downloading. Then it became more difficult when you had things like plugins that depended on the order of download.
You can download from multiple domains at once. But think about the order here:
1. The initial page load happens, which requires a DNS lookup and a TLS handshake before the HTML is downloaded. The TCP connection is kept alive for subsequent requests.
2. The HTML references JavaScript files - some of these are local URLs (locally hosted / bundled JS) and some are on 3rd-party domains, like the polyfill host.
3a. Local JS is requested by having the browser send subsequent HTTP requests over the existing connection.
3b. Content loaded from 3rd-party domains (like this polyfill code) needs a DNS lookup, a new TCP connection handshake, a TLS handshake, and then finally the polyfills can be loaded. This requires several new round trips to a different IP address.
4. The page is finally interactive - but only after all JS has been downloaded.
Your browser can do steps 3a and 3b in parallel. But I think it'll almost always be faster to just bundle the polyfill code into your existing JS bundle. Internet connections have very high bandwidth these days, but latency hasn't gotten better. The additional time to download (let's say) 10 KB of JS is trivial. The extra time to do a DNS lookup, a TCP and then a TLS handshake, then send an HTTP request and wait for the response can be significant.
And you won't even notice when developing locally, because so much of this stuff will be cached on your local machine while you're working. You have to look at the performance profile to understand where the page load time is actually spent. Most web devs seem much more interested in chasing some new, shiny tech than in learning how performance profiling works and how to make good websites with "old" (well loved, battle tested) techniques.
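For example, something like this in the browser console gives a rough breakdown of where each resource's time went (sketch using the standard Resource Timing API; cross-origin entries report zeros for these phases unless the server sends Timing-Allow-Origin):

    // Break each resource's load time into DNS, connect (TCP+TLS),
    // time-to-first-byte and body download.
    const entries = performance.getEntriesByType("resource") as PerformanceResourceTiming[];

    for (const e of entries) {
      const dns = e.domainLookupEnd - e.domainLookupStart;
      const connect = e.connectEnd - e.connectStart; // includes the TLS handshake
      const ttfb = e.responseStart - e.requestStart;
      const download = e.responseEnd - e.responseStart;
      console.log(
        `${e.name}\n  dns=${dns.toFixed(1)}ms connect=${connect.toFixed(1)}ms ` +
        `ttfb=${ttfb.toFixed(1)}ms download=${download.toFixed(1)}ms total=${e.duration.toFixed(1)}ms`
      );
    }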
Aren't we also moving toward giving cross-origin scripts very little access to information about the page? I read some stuff a couple of years ago that gave me a very strong impression that running 3rd-party scripts was quickly becoming an evolutionary dead end.
Definitely for browser extensions. It's become more difficult with needing to set up CORS, but like with most things that are difficult, you end up with developers who "open the floodgates" and allow as much as possible to get the job done, without understanding the implications.
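Roughly the difference between these two handlers (a sketch with Node's http types; the allowed origin is a placeholder):

    import type { IncomingMessage, ServerResponse } from "node:http";

    // The "floodgates" version: any site on the internet may read the response.
    function corsWideOpen(_req: IncomingMessage, res: ServerResponse): void {
      res.setHeader("Access-Control-Allow-Origin", "*");
      res.setHeader("Access-Control-Allow-Methods", "*");
      res.setHeader("Access-Control-Allow-Headers", "*");
    }

    // The deliberate version: only the origins that actually need access,
    // and only the methods they use.
    const allowedOrigins = new Set(["https://app.example.com"]); // placeholder

    function corsRestricted(req: IncomingMessage, res: ServerResponse): void {
      const origin = req.headers.origin;
      if (origin && allowedOrigins.has(origin)) {
        res.setHeader("Access-Control-Allow-Origin", origin);
        res.setHeader("Vary", "Origin"); // keep shared caches from mixing origins
        res.setHeader("Access-Control-Allow-Methods", "GET");
      }
    }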