> It may not have escaped your notice that 662,754 bytes is a lot of bytes for JavaScript for a single HTML page, and 296 files is a lot of files. "What are you doing on that page?" you may well be wondering.
> We are wondering that too. The page in question is the Khan Academy homepage for logged in users, and it's acquired a lot of, um, functionality over the years.
> The end result of our analysis is that our page isn't going to load any faster until we reduce the amount of JavaScript we use on the page. Tricks like trying to load a bunch of stuff in parallel, or aggressive caching, might seem like appealing shortcuts, but nothing replaces just auditing the code and making it need less "stuff."
At the end of the day, de-bloating is the best way to make your pages faster.
I love webpack and highly recommend it, but it is not trivial to integrate it into an existing project and pretty much nothing about webpack can be described as "done in 5 seconds".
I did it in a big project. It took a couple of days and really wasn't that big a deal, and legacy JavaScript still goes through webpack and works unmodified. It then took a couple of weeks to iron out the kinks in the CI scripts, but we were still able to develop during that time.
A couple of days sounds about right and jibes with my own experience, but I feel like "do this in 5 seconds" is pretty misleading, especially for someone who has never used webpack before. It's absolutely worth the effort, but I wouldn't want someone to be discouraged because they expected a "5 second" solution and then realized it's not quite that simple.
Webpack's algorithm for doing this is not black magic -- it has to consciously consider the same tradeoffs as the ones discussed in this post. How many chunks should you have?
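For reference, here's roughly what those knobs look like in a webpack 4+ config using `optimization.splitChunks`. This is only a sketch; the paths and numbers are illustrative, not recommendations:

```javascript
// webpack.config.js -- a sketch of the chunk-count trade-off knobs.
const path = require('path');

module.exports = {
  entry: './src/index.js',
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: '[name].[contenthash].js',
  },
  optimization: {
    splitChunks: {
      chunks: 'all',
      minSize: 30 * 1024,     // don't split out chunks smaller than ~30 KB
      maxInitialRequests: 5,  // cap parallel requests needed for initial load
      maxAsyncRequests: 6,    // cap parallel requests for lazy-loaded code
    },
  },
};
```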
They need to use push to get the most out of http2.
If they prebuild a manifest, they can add X-Associated-Content to the response headers and App Engine will take care of the rest. They do, of course, need to make sure that their static file handling has very little overhead, since push done this way still fetches each resource via a request.
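To make that concrete, here's a rough sketch (in generic Node terms rather than App Engine's own runtimes) of generating the header from a prebuilt manifest. The manifest file and helper names are made up, and the quoted, comma-separated header value is my recollection of App Engine's documented syntax, so verify it against the docs:

```javascript
// Hypothetical push-manifest.json: maps a page path to the resources
// the App Engine frontend should push for it, e.g.
//   { "/": ["/js/app.js", "/css/app.css"] }
const manifest = require('./push-manifest.json');

function associatedContentHeader(pagePath) {
  const resources = manifest[pagePath] || [];
  // Comma-separated list of quoted URLs -- double-check the exact format.
  return resources.map((url) => `"${url}"`).join(', ');
}

// Generic Node-style handler; the App Engine frontend, not this code,
// is what actually pushes the listed resources to the client.
function handlePage(req, res) {
  const header = associatedContentHeader(req.url);
  if (header) {
    res.setHeader('X-Associated-Content', header);
  }
  // ...render and send the page as usual.
  res.end('<!doctype html>...');
}
```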
Without using server push or server hinting, why would you even expect http2 to be faster? But as you point out, the article doesn't mention using either of these, and the HAR seems to back that up. Weird!
> Without using server push or server hinting, why would you even expect http2 to be faster?
Because http/2 can fetch more resources in parallel, and avoid blocking one fetch behind another. http/1.1 can only open a few connections to the server at once, and due to widespread bugs, each of those connections can't pipeline by sending multiple requests and waiting for multiple responses. So you can't eliminate all the round-trip latencies. With http/2, you can send all the requests and then wait for all the responses. So even without server push or server hinting, you could request the HTML, get it back, then request all the CSS and JS files in parallel, and get them all back.
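As a concrete illustration (not from the article), here's what multiplexed fetches look like with Node's built-in http2 client; the host and paths are placeholders:

```javascript
// Fetch several resources over a single HTTP/2 connection. All requests
// go out immediately and the responses stream back in parallel, instead
// of queueing behind a handful of HTTP/1.1 connections.
const http2 = require('http2');

const session = http2.connect('https://example.com');
const paths = ['/js/app.js', '/js/vendor.js', '/css/app.css'];

const fetches = paths.map((p) => new Promise((resolve, reject) => {
  const req = session.request({ ':path': p });
  let bytes = 0;
  req.on('data', (chunk) => { bytes += chunk.length; });
  req.on('end', () => resolve(`${p}: ${bytes} bytes`));
  req.on('error', reject);
}));

Promise.all(fetches)
  .then((lines) => lines.forEach((line) => console.log(line)))
  .catch(console.error)
  .then(() => session.close());
```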
No, I meant why would you expect it to be faster than bundling, not faster than HTTP1.
When writing articles like this, there should really be more detail. Were all the scripts just script tags in the HTML? Were AMD dependencies discovered as the tree was downloaded?
Unless you have an inefficient bundling scheme, bundling performs near optimally. Its biggest downside isn't perf but the annoyances: having to build the bundles, scaling issues on really large apps, debugging inconvenience, much worse caching characteristics, etc. HTTP2 can beat bundling perf, but you're going to have to make use of http server push to do it.
It's understandable that individual files would be slightly larger in aggregate than a single combined file, due to the compression issue that was mentioned. However, since there is no more head-of-line blocking, the overhead of a few extra bytes should be negligible compared to the elimination of round-trip delays. Also, if you are expecting to serve HTTP1 clients for a while, it's a good idea to use a hybrid approach where you combine files, just less aggressively than you may have in the past.
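For what it's worth, a hybrid split along those lines is easy to express in webpack's splitChunks cacheGroups; this is just a sketch with made-up group names:

```javascript
// A "hybrid" bundling sketch: one vendor bundle, one shared-code bundle,
// plus whatever per-route chunks dynamic import() creates -- i.e. a few
// files rather than one giant bundle or hundreds of tiny ones.
module.exports = {
  entry: './src/index.js',
  optimization: {
    splitChunks: {
      chunks: 'all',
      cacheGroups: {
        vendor: {
          test: /[\\/]node_modules[\\/]/,
          name: 'vendor',   // changes rarely, so it caches well
        },
        common: {
          minChunks: 2,     // code shared by at least two chunks/routes
          name: 'common',
        },
      },
    },
  },
};
```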
> The traditional advice for web developers is to bundle the JavaScript files used by their webpages into one or (at most) a few "packages."
They're called bundles, not 'packages', which already has a very distinct meaning in the JavaScript community (e.g., https://www.npmjs.com/ and similar systems).
Reading this article didn't inspire much hope for a bundle-free JS environment - but if you want to play around with one, we've been working on such an environment that provides two applications:
1. A Koa + HTTP2 static asset (primarily JavaScript) server
2. A rendering application that loads a route-based JavaScript dependency tree using JSPM + react-router in order to isomorphically render a web page.
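If it helps to picture it, #1 is roughly this shape. This is a minimal sketch with placeholder cert and asset paths, not our actual server:

```javascript
// Minimal Koa static asset server running over HTTP/2 (Node's http2
// compatibility API accepts Koa's callback directly).
const fs = require('fs');
const http2 = require('http2');
const Koa = require('koa');
const serve = require('koa-static');

const app = new Koa();
app.use(serve('./static'));  // serve JS/CSS assets from ./static

http2.createSecureServer(
  {
    key: fs.readFileSync('./certs/server.key'),
    cert: fs.readFileSync('./certs/server.crt'),
    allowHTTP1: true,        // let non-HTTP/2 clients still connect
  },
  app.callback()
).listen(8443);
```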
Isn't this whole article completely specific to the Google App Engine use case? Google App Engine loads all of your static files onto shared servers used by many other customers and builds up some kind of lookup table for serving requests. It's not entirely clear how they've architected this, but it's obviously unlikely that they store everything in RAM.
So the most likely cause for the random slowdowns is JavaScript files being loaded from disk into RAM, or some similar process.
For most setups, your JavaScript files would be sitting in RAM in a dedicated static file server such as Apache or nginx, and you would not experience this kind of random slowdown. So more than likely this article does not apply to anybody not using Google App Engine.
Since we are on the topic of effectively distributing JS code to clients...why is there no push for delta updates on the web? There was a proposal for delta caching back in 2002: https://tools.ietf.org/html/rfc3229