JS bloat can impose significant server overhead when data is loaded dynamically. It’s generally more efficient to serve one GET request that can be heavily optimized than a lot of tiny XMLHttpRequests that each need to be parsed separately. That may flip around when someone spends a long time interacting with a SPA, but there is plenty of truly terrible code in the wild.
I've built embedded web interfaces that served static pages precompressed with gzip and then used XHR to fill in dynamic content. I kept it under 100K for the uncached download (zero third-party scripts). Everything worked well and stayed reasonably lightweight as long as you avoided framework bloat. Not having to compress anything on-device also helps a bit with energy usage, although that wasn't a concern here.
> It’s generally more efficient to have one GET request that can be heavily optimized than a lot of tiny XMLHttpRequest that need to be parsed separately.
Without context, this statement is misleading at best and downright false at worst. You’re right that splitting up a single request into multiple would incur a small performance penalty, but you also generally gain other advantages like more localized caching and the ability to conditionally skip requests. In the long run, those advantages may actually make your app significantly more efficient. But without discussing details like this, it’s pointless to make wild assumptions about performance.
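To make the caching point concrete, here is a hedged sketch of the kind of localized caching you get with fine-grained requests: a stale entry only re-fetches its own resource, and fresh entries skip the network entirely. `fetcher` stands in for `fetch()` and all names are illustrative:

```javascript
// Wrap a fetch-like function with a per-URL cache and a TTL.
// With one monolithic GET, any change invalidates the whole payload;
// with per-resource requests, only the stale entries are re-fetched.
function makeCachedGet(fetcher, ttlMs) {
  const cache = new Map(); // url -> { body, expires }
  return async function cachedGet(url) {
    const hit = cache.get(url);
    if (hit && hit.expires > Date.now()) {
      return hit.body; // request skipped entirely
    }
    const body = await fetcher(url);
    cache.set(url, { body, expires: Date.now() + ttlMs });
    return body;
  };
}
```

Real apps usually lean on HTTP itself for this (ETag / If-None-Match, Cache-Control), but the trade-off is the same: per-request overhead bought back by requests you never make.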
The context was JS bloat, so we are specifically talking about the subset of poorly written sites. When it’s possible to shoot yourself in the foot, many people will do so.
That said, if you ever actually profile things you will find the overhead per request is quite high. There is a clear tendency to request far too little data at a time.
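The usual fix for "too little data per request" is coalescing: collect the tiny lookups that arrive within a short window and issue one batched request. A sketch under stated assumptions (`fetchBatch` is a hypothetical endpoint-specific helper that takes a list of ids and returns results in the same order):

```javascript
// Coalesce many tiny per-item requests into one batched round trip.
// Callers still get a per-item promise; only one network request is made
// per batching window.
function makeBatcher(fetchBatch, windowMs) {
  let pending = [];
  let timer = null;
  return function getItem(id) {
    return new Promise((resolve) => {
      pending.push({ id, resolve });
      if (!timer) {
        timer = setTimeout(async () => {
          const batch = pending;
          pending = [];
          timer = null;
          const results = await fetchBatch(batch.map((p) => p.id));
          batch.forEach((p, i) => p.resolve(results[i]));
        }, windowMs);
      }
    });
  };
}
```

A few milliseconds of batching window is usually invisible to the user but turns dozens of tiny XHRs into one request the server can handle cheaply.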