I agree with much of what is written here, but the emphasis on code splitting seems like it risks hitting the wrong target. My first question is usually what caused even relatively simple web apps to need multi-megabytes of "optimised" JS code in the first place.

All too often, the answer is not the application code itself but rather the direct and transitive dependencies, where the developers yarn-added first and didn't ask questions later. While that obviously offers some advantages in terms of code reuse, it also comes at a high price in bloated deliverables, particularly given that the tools for optimising JS are still relatively immature compared to those for traditional compiled languages.
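
To give one generic illustration of how a single dependency can dominate a bundle (my own example, assuming the CommonJS build of lodash and a typical bundler setup):

    // Illustration only: with the CommonJS build of lodash, the named import
    // below usually defeats tree-shaking and drags the entire library into
    // the bundle, even though only one helper is used.
    import { debounce } from 'lodash';

    // Importing the individual module (or using the ES-module build,
    // 'lodash-es', with a modern bundler) keeps only what you actually use.
    import debouncePerMethod from 'lodash/debounce';

    // The call sites are identical either way; only the bundle size differs.
    const onResize = debounce(() => console.log('resized'), 200);
    window.addEventListener('resize', onResize);

Multiply that pattern across a dozen dependencies, each with its own transitive tree, and "multi-megabyte" stops being surprising.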

Maybe we should be looking at that problem first, before we start assuming that any large JS application will necessarily produce some huge monster outputs that need to rely on code splitting and all the overheads that brings just to get acceptable performance?




I think the author still has a point when talking about "very" large applications. His example of all the Google widgets is a good one. Even if you were to hand-craft all of them with optimised code (which I imagine is what they do at Google), it would still likely be too much code for a single bundle to load in a reasonable amount of time.

But I agree completely with you as far as other projects go. If you drop the "very", it should be entirely possible to build a large project without the need for code splitting. Almost nobody is doing anything at the scale of the example talked about in this article, yet almost everybody is using code splitting.

In my experience, there are two root causes for this.

Cause #1 is that barely any front-end developers understand or consider the cost of abstractions. Something like code splitting is commonly seen as "free" because it takes a couple of lines to implement. The permanently increased complexity of the software, and all the extra brain power required to grok it over the development lifetime, are never taken into account. At least half of the devs I know are happy to bang in code splitting at the start of a project with zero thought, and I guarantee they'd read this article and not understand the "sync -> async" example given to explain the downsides of code splitting.
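
For anyone who hasn't hit it, here is a minimal sketch of that "sync -> async" trade-off as it typically looks in React, using the standard lazy/Suspense API (the SettingsPanel module is hypothetical):

    import { lazy, Suspense } from 'react';

    // Before code splitting: a plain synchronous import; the component is
    // always available and rendering it is just a function call.
    //   import SettingsPanel from './SettingsPanel';

    // After code splitting: the module loads asynchronously, so every render
    // path that can reach it must now handle a "not loaded yet" state
    // (and, in a real app, usually a failure state too).
    const SettingsPanel = lazy(() => import('./SettingsPanel'));

    export function App() {
      return (
        <Suspense fallback={<div>Loading settings...</div>}>
          <SettingsPanel />
        </Suspense>
      );
    }

Those couple of lines look cheap, but the loading state, the extra network round trip and the failure mode are now part of the application's behaviour for as long as it lives.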

Cause #2 is that devs are too eager to add new dependencies. Almost by definition, a popular library is going to be bloated relative to your use case: to be popular it has to cater for many use cases besides yours. When you 'yarn add react-redux-awesome-date-picker', what you're usually adding is an overly generic solution with a large amount of customisability and functionality that's irrelevant to your particular needs.

My go-to example from my own experience is with rc-collapse. rc-collapse is a container that animates to and from 0 height. It's like 200 lines and has 4 dependencies (god knows how many KB that adds). I've been using my own version that's 50 lines and 0 dependencies in production code for years and never run into a problem with it. I'm sure rc-collapse works around some fancy edge cases or supports older browsers or something, but I'm almost positive the extra weight isn't necessary in 90% of the projects that use it.
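
I haven't seen the commenter's 50-line component, but a zero-dependency version in that spirit can be as small as the classic max-height trick. A sketch under that assumption, not production code:

    import { useRef, type ReactNode } from 'react';

    type CollapseProps = {
      open: boolean;
      children: ReactNode;
      duration?: number; // ms
    };

    // Animates between 0 and the content's measured height by transitioning
    // max-height. Known limitation of the technique: the height is measured at
    // render time, so content that resizes while open won't re-animate.
    export function Collapse({ open, children, duration = 250 }: CollapseProps) {
      const innerRef = useRef<HTMLDivElement>(null);
      const maxHeight = open ? innerRef.current?.scrollHeight ?? undefined : 0;

      return (
        <div
          style={{
            overflow: 'hidden',
            maxHeight,
            transition: `max-height ${duration}ms ease`,
          }}
        >
          <div ref={innerRef}>{children}</div>
        </div>
      );
    }

Something like rc-collapse no doubt covers edge cases this doesn't, which is exactly the trade-off being weighed above.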

This is kind of tangential, but another real problem with this mentality is that by implementing my own collapsible container, I learnt some important lessons about React, the DOM, browsers etc. Devs that play npm lego aren't generally going to get that extra knowledge, which will cost time in the long run.



