Not to be too negative, but I'm seeing a lot of purely anecdotal comments on HN with lots of fuzzy gut feelings and very little to back up what seemingly amounts to elitism against JavaScript.
It's very easy to write fast JavaScript with React or pick-your-framework. I would argue that these frameworks go out of their way to help you do that. But carelessly throwing modules at your problem is not going to get you there.
There are many reasons why SPAs can be slow. Whether that is due to truly lazy developers, overly tight deadlines, or management that just doesn't care is often left out of these discussions.
Everyone likes to complain about SPAs, as if terrible load times are a fundamental trait of frontend frameworks. That is not at all the case.
> I disagree. SPA frameworks attempt to re-implement browser UI and DOM within Javascript. Of course the result is buggy and slow.
SPA frameworks don't re-implement the DOM, they provide an interface to the DOM that is more usable. That may or may not have significant overhead, but it's not a given.
You have these benchmarks to show that "vanilla JS" is faster, but that's not real-world application code. Because of the usability issues, application authors are way more likely to not write optimal vanilla code. In particular, any DOM modification is extremely slow, so redundant modifications are very wasteful. Avoiding those is something that a framework/virtual-dom can do for you, to let you write simpler code.
Furthermore, anybody who writes a real application ends up abstracting the DOM interface, so they're already halfway at an ad-hoc framework. Not abstracting the DOM would result in an insane amount of boilerplate code, which would hamper developer productivity, which would result in even less time spent considering performance.
Disclaimer: All of this is true for applications, not for simple websites with minimal dynamic interaction. You're likely better off rendering those server-side and then attaching a couple of event handlers on page load.
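To make the point about redundant DOM modifications concrete, here's a minimal vanilla-JS sketch (names are illustrative, not from any framework): the naive version rebuilds the list on every call, while the guarded version skips writes when nothing changed, which is essentially the bookkeeping a virtual DOM automates for you.

    // Naive: rebuilds all children on every render, forcing style/layout
    // work even when nothing changed.
    function renderNaive(listEl, items) {
      listEl.innerHTML = items.map(item => `<li>${item}</li>`).join('');
    }

    // Guarded: diffs against the existing DOM and only touches what changed.
    function renderDiffed(listEl, items) {
      items.forEach((item, i) => {
        const li = listEl.children[i] ?? listEl.appendChild(document.createElement('li'));
        if (li.textContent !== item) li.textContent = item; // skip redundant writes
      });
      while (listEl.children.length > items.length) listEl.lastChild.remove();
    }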
The problem with the UI/DOM is the mental model of working with it. For me, SPA frameworks need to provide an abstraction layer that makes it easy to work with.
Thus React. I can think of it in a functional pattern and let it abstract the DOM/UI for me. Of course I understand there's a cost to it, but that cost is greatly overshadowed by the development & maintenance costs it saves.
Whatever happened to "think of the user experience first"?
I am sad this shift to "it's easier for me to develop, who cares about the users" happened.
This is because front-end web development has been taken over by back-end developers.
Few developers complained about JavaScript until Back-end developers became 'Fullstack' and had to start working with it and tried to apply the same mental model they had used with Java or Python etc.
CSS was always something you had to dedicate time to in order to master it and learn the quirks of current and past browsers, until Back-end developers decided it was 'terrible', a race to the bottom of CSS frameworks was created, and nobody remembered proper selection or specificity as BEM was adopted.
HTML was a tool for layout, and developers made thoughtful choices about semantic code so that it was easily readable by both man and machine, even though it was a manual process. Then Back-end developers didn't have the time to learn something new, and now everything is a div, and if you're lucky it will have a role.
This is the "if all you have is a hammer, everything looks like a nail" kind of view.
What I believe is that the complexity of developing software has been increasing, and we are under a lot of pressure, especially from time constraints, so we are always reinventing the wheel in order to meet deadlines and try to make things easier.
I'm not convinced that software has been increasing in complexity; I just think that more complex sites are being created now than in the past, and less-capable developers are having to work on them.
I agree that pressure is high and time constraints are tight and this forces people to look for an answer outside of themselves.
Complaints and jokes about JavaScript have been around since JavaScript was first released, many of them fully justified. There are even years-old memes about it, like the thousands of variants of this: https://img.devrant.com/devrant/rant/r_1585_DY1Kk.jpg
The complaints about CSS being hard to learn and remember, and about its shortcomings and difficult maintenance, have also been around for years. What people acknowledged was that it was an improvement over writing it all into HTML.
Original HTML contained everything that CSS contains now, unseparated. And for that matter, everything is a div because CSS people and designers like it that way. It has nothing to do with backend people, who would happily use tables everywhere if only they were allowed to. Then again, CSS people and designers did not push for divs on a whim; they want it that way because it is practical for them.
Yes, people acknowledged that CSS was an improvement, but most of the shortcomings were not because CSS was at fault; it was the browser vendors' implementations that were slow and inconsistent.
CSS has always been relatively simple, the complexity came about by having to juggle workarounds and hacks to support all the different browsers.
I completely disagree with your last statement. Divs had been around for a long time before they became the go-to element for everything; nobody needed or wanted them.
Once CSS 2.1 was adopted and table layout was no longer required, semantic HTML ruled.
Divs have only become ubiquitous since CSS and front-end frameworks became so popular.
Pure CSS is not comfortably maintainable. It lacks variables, for example. That is a massive shortcoming: the inability to say that this color is the same as that color, or that this selector here is the same as the one over there.
Also, even things like making elements the same height were absurdly difficult with CSS, even absent browser differences. It is an oddly inconsistent language with hard-to-remember rules.
And it often ends in house-of-cards constructions that break the moment anything changes.
Actually, pure CSS can be comfortably maintained, if you're willing to limit your project to specific parameters or simply use a well-defined methodology (e.g. BEM, OOCSS, ITCSS).
I've read some developers limit their CSS file(s) to 50 lines (max.) while other developers build their CSS on a page-by-page basis dependent on a primary global style sheet (ex: content-sidebar.css is loaded on every static page and post while content-page.css is only loaded on static pages).
Where I currently work, every static page gets its own CSS if it meets the following two parameters: (1) it is accessible from the root, and (2) it is not part of a larger (or "nested") set of pages. These two rules become necessary when you have a team that hates nested pages and prefers an internal taxonomy system. The styles are encapsulated by adding an additional HTML class (usually a sanitized version of the page title and its ID) to the body tag.
For pure CSS websites, OOCSS will be your best friend, as it forces serious thought into how you define your style rules. Additional rules include limiting the use of CSS vars to elements that have an ID attribute (whose use is itself limited).
Sorry, that is a crazy thing to say. With Node (front-end tech at heart, made by front-end people), programmability invaded the front end, which gave myriad new ways to solve problems. Given that all the front-end problems are new and the community is fairly young, the whole thing is pretty wild currently. No back-end dev would care about SPA madness if he had the choice.
A developer who doesn't think of that is going to do that with React or without React. Using a framework such as React makes it more possible to free up development time to focus on a better user experience.
The imperative DOM “framework” available is far more error-prone than the state->view, declarative model of React. Its ugliness is more than skin deep.
Depends on the target audience. Many customers will be sold on how pretty an app is, sometimes to the point of overlooking missing features. Having a good looking app becomes a competitive advantage, and therefore important to spend time on.
Such is life.
Your point is orthogonal, even if true. My point is that SPA frameworks are necessarily buggy and inefficient - after all, their whole point is to reimplement in a bespoke fashion the stuff that the browser already implemented.
Buggy and inefficient stuff sometimes makes sense to a business, this is true.
That’s a common misunderstanding and absolutely not true. They are abstractions, not reimplementations, and in practice they can help achieve far better performance than what would naively come out. React is not a particularly good example of that.
They also solve problems like state synchronization and mutations that the browser has zero facilities for. As mentioned in another comment above, in any medium sized project you’ll have implemented more than half a framework abstracting the DOM already.
A SPA is predictive of a web application that has substantial performance, UX, and accessibility issues. A high schooler can build a page with some simple HTML and some jQuery sprinkled in that makes for a substantially more pleasant experience than teams of professional developers regularly manage to do with SPAs. I'm sure there are SPAs that are pleasant and add value above the alternative, but they are clearly very hard to build correctly, as evidenced by how many multi-million dollar SPAs are pieces of shit. It's kind of beside the point whether that's an organizational or engineering failing, because the very fact that SPAs lend themselves to those kinds of failings is an indictment of them.
The fact that very large SPAs are harder to build and maintain and thus easier to mess up isn’t a very compelling argument for the people who have the ambition to build great SPAs. More ambitious projects are easier to mess up, of course.
Before I was a professional developer, I made a really small application using just HTML, CSS and some jQuery: just a site that listed about ten recipes. When you selected the next recipe, it would change the page's background colour, doing a full repaint in the process. It was terribly slow.
As a user of many different websites, I have to say: I get extremely annoyed at SPA websites that should be server-rendered sites. Over a 3G connection they are extremely slow and annoying, especially when it should just be a simple page. And I know that it isn't the SPA itself that is the problem, since Gmail loads its website extremely fast. I think the issue is poor practices in caching, etc. But really, they wouldn't need to worry about all those good practices if they would just render on the back end to begin with. Just give me simple pages with little unnecessary content so I can get on the page, get what I need, and be off without having to wait 10 seconds between each page load (and yes, that is with the SPA).
It's elitism against low quality. Webdev has inflated dramatically and cut costs to the absolute minimum, so quality has dropped severely. It's not only JavaScript; everything in webdev is low quality. That's not a fundamental trait, it's a real-life trait. The Turing-complete part is just more sensitive to low quality. Flash is what Web 2.0 really wants, but it has to emulate it with enormous hacks while having similar security implications; history repeats itself.
How to optimise front-end apps? Don't use a purely JavaScript-based frontend. Build all normal CRUD apps with regular server-side code. E.g. a normal Rails CRUD app is much faster, even with the intermediate page loads factored in, than a heavy React app with a spinner to make it appear fast. Want to give the feel of a JS app? Use Turbolinks (see the sketch below). Really want something fancy that dynamically updates your content from the server? Use something like Phoenix LiveView. I love Vue.js and React, but these days I've switched to using LiveView so much that I don't miss them at all. The benefits are also huge: near-instant updates with very little server load (scalable) and an ultra-light frontend. Win win win.
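For reference, wiring up Turbolinks is about this much code. A minimal sketch, assuming a bundler-based setup; turbolinks:load is the Turbolinks 5 event name.

    // Turbolinks intercepts same-origin link clicks and swaps the <body>
    // over XHR/fetch, so a server-rendered app navigates without full reloads.
    import Turbolinks from "turbolinks";
    Turbolinks.start();

    // Page-setup code must re-run on every Turbolinks visit,
    // not just on the initial full page load.
    document.addEventListener("turbolinks:load", () => {
      // ...attach event handlers, init widgets, etc...
    });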
Front-end performance optimization is surprisingly lacking when it comes to web apps. We focus so much on shaving off 50 ms when it comes to server response time, but we're fine with a 7 second load time for front-end. It's very surprising that there isn't as much focus on front-end architecture performance.
Wait, are you replying to a guide on FE performance, completely ignoring the content, to say nothing is being said on FE performance and then promote your own stuff??
I'm not ignoring the content. I'm saying it's a big problem and there should be more focus on it. It's great that people are starting to pay more attention to it.
I removed my links although I think they're complementary to the article.
I know it is a chore, but since it seems to be unwelcome in your top level comment, do you mind posting the links here? I actually am interested in checking them out, and I bet I am not the only one.
In my experience, it's OK to have a (slightly) longer load time for content behind a login wall. I also spend extra effort caching whatever I can for return visits. Refreshing the page should give you an almost instant load time.
Yea, I've run user feedback sessions. Page loads never come up as a complaint (plenty of other things to complain about :P)
The goal was to keep an average of <3.0s for the first meaningful paint, after login. Before login, all pages were treated with the same respect you'd treat any marketing page; render what you can as fast as you can.
Back with AngularJS, when code-splitting and lazy-loading modules/controllers wasn't an option, you had to get creative. Since moving to Vue, initial page load has never been an issue.
Just for balance, the company behind the blog post is also a developer-run performance monitoring service: https://www.debugbear.com/ (Disclosure: this person is not me, but I have used the service and I do know the person.)
Well the main difference is, 50ms of processing time and memory usage on the server I'm paying AWS for, vs. 7 seconds of processing time and memory usage on the client machine the user brought to the party. The goals of performance engineering in the two spaces are different.
Sure, you are right that the goals are different. But good engineers respect the user. And that includes paying close and careful attention to client side performance. While you may not be directly paying for it, crappy performance there will cause users to bail on you - then you won't need to worry about how much the server is costing you.
Indeed, I think the notion that the user’s resources are “free” is deeply flawed. High load time, CPU usage, RAM usage, etc is merely tolerated, not accepted, by users depending on the value of your product, making whatever gains in productivity or reduction in expenses made by shoveling it all off onto the client more like a high interest loan than a freebie.
Good job. Just a minor nit: I assume most websites these days have HTTP/2 support. Otherwise, considering the ROI, I would work on that first; it should be optimization #1.
> In practice you'll rarely be able to optimize on all fronts. Find out what's having the biggest impact on your users and focus on that.
Can’t agree more. If your engineering team is going to spend several sprints focusing on improving app load time, it has to be highly impactful on your conversion rate. Otherwise it’s just a waste of resources.
In my experience it does. A place I worked once had a mysterious dip in user activity after a deploy, and the deploy only included a shadow feature that nonetheless still loaded the required, but not yet optimized, assets. When we removed the loading of those assets, user activity went back up.
This was a decade ago so I don't remember the exact specifics. The dip was high enough that we basically stopped everything else and immediately investigated.
This is a very shallow, short-term view of things. Performance also impacts customer retention, loyalty, satisfaction, and the word-of-mouth advertising that comes from those. It makes engineers feel good and proud of what they do, rewarded and motivated to keep going rather than leaving to do something else, etc. Hardly "a waste of resources".
Assuming "fast enough for a good user experience", expecially early on in a product lifecycle, I prefer to focus on development speed than product speed.
I get your point. This is an outlier, but hear me out:
For some of the workloads I run on https://workers.dev, I can tell you that going from 100ms to 10ms is something I look for every single day. I am also very interested in anything that takes the memory usage down into KBs from high MBs, say.
In 2020, a webpage that "only" takes 10 seconds to load is a miracle of performance and user-friendliness. Load times of up to a minute are the new normal.
I'm pretty sure dismissive comments such as yours lead us to this bad place.
Of course by "load" I don't mean TTFB, I mean actually loading the full page, including all the spinners, reflows, banners and toolbars popping into your page, images, etc.
1) Use a framework/language that is optimized by default instead of one that is slow by default. E.g. Svelte recalculates only the part of the UI affected by a change, while most other frameworks recalculate the whole virtual DOM.
2) When you can write something fast or slow in the same amount of time, go for the fast one (see the sketch below). It seems trivial even mentioning it, but sometimes people with a "make it work first, make it better later" mindset do not even spend a few seconds pondering this.
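A hypothetical example of that choice: joining two lists by ID takes the same effort either way, but the nested lookup is O(n*m) while building a Map index first is O(n+m). The data and names here are purely illustrative.

    const users = [{ id: 1, name: 'Ada' }, { id: 2, name: 'Grace' }];
    const orders = [{ id: 10, userId: 2 }, { id: 11, userId: 1 }];

    // Slow: scans the whole users array once per order.
    const slow = orders.map(o => ({ ...o, user: users.find(u => u.id === o.userId) }));

    // Fast, same amount of code: build an index once, then look up by key.
    const byId = new Map(users.map(u => [u.id, u]));
    const fast = orders.map(o => ({ ...o, user: byId.get(o.userId) }));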
Practically, not much can be quickly built in Svelte. There's thousands of React components I can download right now to solve my problems. Shipping features to users has a much bigger impact on a business than subpar page load times.
“not much can be quickly built in Svelte” - is this an experience you actually have from trying it? Because it’s very far from what I’ve experienced. So many things in Svelte can be done with surprisingly little code.
Also in my experience, those thousands of React components are going to be a source of new problems. The more of them you have, the more likely you’ll run into frustrating limitations that contorts your code to an unmanageable mess.
I do not. However, I have used React and Angular, and they all take roughly some baseline amount of effort to achieve anything. And in all of them, I've at times needed to rely on the community to ship product features faster. Have you tried implementing accessible drag-and-drop lists? Atlassian, for example, has spent years working on that problem, making a well-polished, enterprise-grade, cross-browser DnD solution (react-beautiful-dnd). And DnD might be just a small feature of an otherwise much larger app. Forgoing all of these existing tools to start with a new UI framework is not a decision to be made lightly.
I spent the last 6 months optimizing front end performance at my last workplace.
What I can say is measure before just blindly following a checklist.
Chrome devtools performance and audit tools are awesome.
Measure, fix biggest bottleneck, repeat.
This post has some great recommendations, but things could be simpler. For example, use HTTP cache headers instead of service workers; they help both the CDN and the browser. Use immutable URLs for assets.
The best way to improve perf is to simply load less JS and CSS. Less is more. It’s not always possible but helps trim down large swaths of network load when you can keep things simple.
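As a sketch of the cache-header advice above, assuming an Express server and fingerprinted asset filenames (both are assumptions, not from the comment):

    const express = require('express');
    const app = express();

    // Fingerprinted assets (e.g. app.3f2a9c.js) never change content under
    // the same URL, so both CDNs and browsers can cache them for a year.
    app.use('/assets', express.static('dist/assets', {
      immutable: true,
      maxAge: '1y', // sends Cache-Control: public, max-age=31536000, immutable
    }));

    // HTML should revalidate on each visit so users pick up new asset URLs.
    app.get('/', (req, res) => {
      res.set('Cache-Control', 'no-cache');
      res.sendFile(__dirname + '/dist/index.html');
    });

    app.listen(3000);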
I love checklists but you may end up with people dumbly following them.
That may be good when a checklist is accurate.
Unfortunately, models work within the constraints and simplifying assumptions they were built on.
"Web development" is a pretty large subject to cover. Best practices for a dynamic, image-heavy site do not match best practices for systems heavy in static content.
A checklist for web development is going to be pretty messy if you want to account for all cases.
I'm surprised I haven't seen nuxt (an SSR framework for Vue.js) mentioned here. It makes many of these performance enhancements out of the box.
It enables some cool options, like pre-rendering static content with the generate flag. You can use it as a static site generator, or wrap dynamic stuff in a <client-only> component and only that portion will be rendered client-side.
Service workers & PWA stuff, code splitting, and dynamic routes: it's all made pretty easy.
I've been finding it quite easy to build performant front-ends with nuxt. It is opinionated, and it abstracts away the webpack config, so it certainly isn't perfect for everything. But for me, it has really been hitting a sweet spot.
Oh man, I hate to be negative, but this word is my #1 pet peeve in tech today. It's becoming more and more prevalent and is driving me crazy. Why do people keep using this non-word? What do they think it means, and why? It's pure jargon, at best. The word's meaning is completely ambiguous and loosely implies numerous qualities without actually committing to any of those potential meanings. Does it load fast? Consume little memory? Respond promptly/clearly to user input? Do an acrobatic dance?
When someone uses this word, I can't help but feel that they are trying to gain the approval/validation of people who like to hear that something "performs well", without having to support the assertion with any concrete substantiation.
My pet peeve is "monolithic". It implies that anything without microservices is Flintstones technology, encouraging clueless managers to bloat up software with microservices to get things like "separation of concerns". It's often separation of productivity from reality. A lot of IT press is "fake news".
The IT press is unnerving. It implies a whole ecosystem: managers making technical decisions they don't begin to understand, and predators grooming the managers' egos in order to pounce on their budgets.
I get that the right person to lead a large organization might not be technical, but you'd hope they'd delegate those calls to someone who is. Not try to figure it out personally based on what they read in a trade publication for "visionary thought leaders."
It is a word because people keep using it. That's where words come from. Apart from the number of characters, it is no better or worse than other words/phrases having the same meaning. Sometimes the precise meaning is clear from the background context. Sometimes additional details need to be supplied outside of the title or bullet point where it is used.
What does "it" mean above? Devoid of context, it could be just about anything.
I understand the concept of "it's a word because people use it", but what does it actually mean? No one can give a consistent definition, because it's nebulous jargon with no clear or specific meaning. Without fail, every single usage is obfuscating or otherwise hiding the intended meaning.
Check out this Unity blog post I just came across: "Achieve beautiful, scalable, and performant graphics with the Universal Render Pipeline"
What exactly are "performant graphics"? Does that mean high frame-rate? High-poly? Extremely vibrant colors? HDR? Can run on a 486 with software rendering?
It doesn't tell me anything, and is essentially "vocabulary clickbait" - it "sounds good", without really communicating anything. This is why I despise this word.
In context, I would expect "performance" to refer to how quickly a scene can be rendered, and another buzzword, "scalable", to refer to the size and complexity of scenes that can be drawn. "Beautiful" probably means "lots of detail and snazzy effects."
Actually, the article says "scalable across platforms", which is confusing to me. Maybe they mean scalable to different screen sizes?
It seems pretty obvious to me in context what the word means. You could play that game with literally every word in the sentence.
What does "achieve" mean? It's that way out of the box? If I invest in a special team of Unity developers it's possible? They'll give it to me if I work hard enough?
What does "beautiful" mean? High poly? Vibrant color? Critics or fans say it looks good? There's an object beauty score that it has high marks in?
> it is no better or worse than other words/phrases having the same meaning
It is demonstrably worse than phrases like "low latency" which clarify the desired property of the system being discussed.
I'm not a huge fan of "high performance" but at least I kind of know what "high performance computing" means (usually systems that are capable of processing massive data sets with high throughput.)
In contrast, "performant" as it is commonly (mis)used doesn't seem have a precise meaning other than "good according to some unspecified metric."
What do you mean by "low latency"? Low latency before the user sees anything? Low latency before the user can interact with the app? Low latency navigating to the next page? It's hard to answer every conceivable question in a headline.
Everything is vague to some degree. The hand-wringing over "performant" is just a meme.
"Performs well" doing... what, exactly? And how is "performance" measured? (Some basic and often conflicting measures include latency, throughput, power/cooling, reliability, cost...)
As sensible as that is in a general sense, this article is actually heavily about dependencies and making sure the delay before the browser learns about its dependencies is small.
I don't know what/how Fastmail does it but their front end is amazing. So fast. I assumed their name was about speedy delivery but the whole UX is the fastest I've ever seen.
With a service worker, you can stream the header part of a new document instantly from the service worker while fetching the content. That essentially gives you an instant first paint.
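A rough sketch of that technique (file names and cache names are illustrative): the service worker responds with a stream that emits a pre-cached header/shell immediately, then pipes in the network response as it arrives.

    // sw.js -- assumes '/shell-header.html' was added to the cache at install time
    self.addEventListener('fetch', event => {
      if (event.request.mode !== 'navigate') return;
      event.respondWith((async () => {
        const cache = await caches.open('shell-v1');
        const shell = await cache.match('/shell-header.html');
        const contentPromise = fetch(event.request); // starts in parallel
        const stream = new ReadableStream({
          async start(controller) {
            const pump = async body => {
              const reader = body.getReader();
              for (;;) {
                const { done, value } = await reader.read();
                if (done) break;
                controller.enqueue(value);
              }
            };
            if (shell) await pump(shell.body);           // first paint happens here
            await pump((await contentPromise).body);     // then the real content
            controller.close();
          },
        });
        return new Response(stream, { headers: { 'Content-Type': 'text/html' } });
      })());
    });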
I don't think that's what the example is showing, though.
That's a good point, I didn't consider having the browser cache the document. That means you can achieve the same thing with the HTTP cache. (I'm not sure if the retention logic between the service worker cache and the HTTP cache differs much.)
Service workers give you more control about what's in the cache, for example you can serve a stale version of the HTML and then fetch an up-to-date version in the background. But since last year you can also use `Cache-Control: max-age=1234, stale-while-revalidate=86400`, so it should be possible with the HTTP cache as well now.
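For comparison, stale-while-revalidate in a service worker is only a few lines. A minimal sketch, ignoring cache versioning and finer error handling:

    // sw.js -- serve from cache immediately, refresh the cache in the background
    self.addEventListener('fetch', event => {
      event.respondWith((async () => {
        const cache = await caches.open('pages-v1');
        const cached = await cache.match(event.request);
        const network = fetch(event.request)
          .then(response => {
            cache.put(event.request, response.clone()); // revalidate for next time
            return response;
          })
          .catch(() => cached); // offline: fall back to whatever we have
        return cached || network; // stale response wins if present
      })());
    });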
Do you happen to have any link on a presentation or post about that? I did a quick search and couldn't find more about it, though maybe it has a more specific name or something? No worries if you don't know offhand, just curious :)
Modern front-end architecture has "fixed" the 1 second page load and replaced it with a 0.5 second page load... And a 3 second spinner... And a 1 second spinner... And a 4 second progress bar... But hey the TTFB is under 1s now!
This is such a tired old meme that conflates bad architecture with current professional standards. Bad architecture of anything has existed since forever but some bad apples don't spoil the barrel. Page render should occur within the first 300ms and page load should be complete in a second.
“Should” being the key part of your comment: it’s quite rare in practice. Most SPAs are lucky to have page render in 2-3 seconds, or fully loaded in 10 - and my experience is using the latest iPhone in a major metro area around 10 ms from most major CDNs.
Yeah, it would be great to see some numbers from anyone who might have them. I'd put money on SPAs being slower, inside the bell curve, than the average traditional page-load app.
It's a shame more companies don't keep the old version of their website around when they launch a redesign. It's pretty easy to visit https://i.reddit.com/ and https://m.reddit.com/ on your phone to see which one feels faster.
Forget “feels faster”, I use that for stability every time new reddit hangs - usually 2-3 times a day I have to do that because they don’t handle errors yet, and they recently added a new bug where you can’t tell whether a reply was sent.
SPAs don't make sense if it's not an 'application', but wouldn't an SPA be faster over time by not having to reload the whole page and all that duplicate markup? Wouldn't simple JSON calls be smaller than refetching and rerendering all of the HTML? Front-end applications offer more than just no-reload, though, like push notifications, real-time data, and offline caches.
It’s not that simple: you need to factor in size and latency, too. If my SPA loads 2MB of JavaScript and then makes 50 API calls, it’s going to be a lot slower than the server sending 20kb of HTML in a single response.
JSON may or may not be smaller or faster: if you have to load data you don’t need or, worse, chase links it’ll be worse. GraphQL may help but that’s bringing it closer to server-side performance, not exceeding it.
Things which aren’t possible otherwise are the best argument for SPAs, but another approach is progressive enhancement: you can load quickly and then load features as needed rather than locking in a big up-front cost if all you need are real-time updates or push notifications. There’s a spectrum of capabilities here and there won’t be a single right answer for every project.
It depends, as always. (Compressed) JSON is probably smaller than (compressed) HTML, but that doesn't necessarily make up for more round trips. And browsers are pretty good at rendering HTML, while the SPA itself sits between the JSON and the rendered HTML on screen.
A 5-second page load is fine. This endless tuning is a great analytical puzzle, but implementing a new feature trumps these complex micro-optimizations that the consumer cares less and less about, with mobile having already become the common access case (mobile access is already going to be slow from the network anyway). It's rare that you get a complaint. Once it's loaded and you have cached as much as possible, you're fine for returning customers, which matter more to appease.
That analysis is from 10 years ago. Again, the landscape is different, and spending time on this is a borderline foolish discussion (interesting from a technical perspective). It matters if you are the owner of a business in the mass-retail space. Save yourself the time; add to your features or verticals.
5s is a pretty shit standard to set, and in general the customers or visitors you lose to a poor website won't tell you about it unless it directly impacts them. It's also laughable that 5s should somehow become more acceptable even though we have better tools, faster networks, and faster hardware on all fronts.
It seems like your argument isn't that it's fine across the board, but rather that it's tolerable in a few circumstances. If I try and load a website that is packed with features but loads slowly and maybe has a choppy experience, I'm going to go somewhere else if I can. It's only in the few circumstances that you can't go somewhere else will you be relatively fine.
Hmm. There is a lot of difference between a sales brochure site, and a business to business site / app, with things in between.
If you are running a marketing or sales oriented site, you want to minimize friction.
It sounds like you are doing more of a “captive user” app, similar to the government/healthcare oriented stuff I have been working on the last few years. In that case, you still want to tune things a bit, but you do have goals and criteria which might not match this article.
Make sure things which can be cached are. (And don’t sweat the size of static assets too much, the second visit is just an HTTP header swap)
Minimize the number of requests.
Schedule requests for supplemental data in a component or form to load after the primary data (see the sketch after this list).
Minimize the size of dynamic data messages, though this can be hard when you work with devs who like very long names.
Accept the fact that many of your web clients may have round trip latency well over 200 milliseconds, as well as the occasional multi second delay or request loss, and plan to work around that without hanging.
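A sketch of the supplemental-data point above (the endpoints and render functions are hypothetical):

    async function loadPage() {
      // Primary data blocks the first render...
      const invoice = await fetch('/api/invoice/42').then(r => r.json());
      renderInvoice(invoice); // hypothetical render function

      // ...supplemental data does not: it fills in after the user sees the page.
      fetch('/api/invoice/42/history')
        .then(r => r.json())
        .then(renderHistory) // hypothetical
        .catch(() => { /* the page stays usable without it */ });
    }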