Yes, the rendering delay caused by a non-streaming HTML parser exists, but unless you screw things up elsewhere, that's going to be less than 1ms of delay.
Yes, bloated frameworks are annoying, but not because of their computational overhead per se; rather, because they increase download size and because JavaScript is generally slower than native code.
I find it very telling that this article doesn't mention network latency even once. That 50ms to 100ms of round-trip time between the user and the server is what causes most of the visible delays.
Ship your users an app including a copy of the data ahead of time, and you have no visible delays. It's as easy as that :)
And before you criticise that this won't work for every use case, consider that IMAP is basically an offline-capable newsfeed. Also, I can buy a DVD with a directory of all US businesses.
The only reason Facebook and Yelp work the way they do is that otherwise they would have to give up on some of the tracking. Because for some reason, people are more tolerant of privacy-invasive tracking if it's a "web app" instead of a native app.
> Ship your users an app including a copy of the data ahead of time
I did exactly that with a hobby project (https://www.acceleratul.ro/). The funny thing is that I had to add a short (~50ms) artificial delay with a spinner before showing search results, because people completely missed the fact that the page had refreshed.
Yeah, I did the same with https://james.darpinian.com/decoder/. It's an underused technique for sure. It's enraging when I have to repeatedly wait multiple seconds to load a bloated results page containing 10 additional rows from a database query (e.g. paginated product listings) when I could have easily downloaded the complete query results or even the entire database in less time.
Nice, helpful project; I think I may have used it once while in Romania. Yeah, for small static data it absolutely makes sense to embed it in the app itself. But imagine your app was worldwide, or it needed to account for dynamic data; I'm not sure it would make sense to ship a few dozen MBs all at once.
I believe this entire technique is known as SSR - Server-Side Rendering. It's built into Vue and React, and you can find examples on their respective sites.
Well, the idea is that you render the first pass of the SPA and serve that HTML to the client. Rather than just injecting the data into the SPA source code, you inject the data and render the page, and then serve that page. Then whatever changes the user makes to the page after that initial load gets handled by the SPA. This way the user avoids having to wait for the SPA and all its dependencies to load before the page can even appear, and the developer doesn't have to differentiate between what happens on the client side and server side (remember using PHP to inject variables into JS source code?) because they can just bake it into their SPA.
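As a rough sketch of that flow, assuming Express and React (App, loadData, and the bundle path are placeholders, not anything from the article):

    // server.js: render the SPA's first pass on the server
    const express = require('express');
    const React = require('react');
    const { renderToString } = require('react-dom/server');
    const App = require('./App'); // the same component tree the SPA uses

    const server = express();
    server.get('/', async (req, res) => {
      const data = await loadData(); // hypothetical data fetch
      const html = renderToString(React.createElement(App, { data }));
      res.send(`<!doctype html>
        <div id="root">${html}</div>
        <script>window.__DATA__ = ${JSON.stringify(data)};</script>
        <script src="/bundle.js"></script>`);
    });
    server.listen(3000);

    // client entry: hydrate instead of re-rendering from scratch
    // ReactDOM.hydrate(React.createElement(App, { data: window.__DATA__ }),
    //                  document.getElementById('root'));

(In real code you'd escape the serialized data before embedding it in a script tag.)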
"unless you screw things up elsewhere, that's going to be less than 1ms of delay"
Are you using an iPhone?
In my experience SPA performance falls apart on cheaper Android devices, which are far more common among regular users but tend not to be in the pockets of developers, who spend way more money on their devices.
A few years ago, while working on a big redesign of a high-traffic web app used by people on a variety of devices (where maintaining conversion rates was critical to the project), I bought my team a handful of the cheapest current-gen Android devices I could find on Amazon to test with.
It's definitely a worthwhile exercise if your audience isn't just tech company employees with brand new flagship devices.
I think the question is whether developers actually care if users with cheap devices can't use their SPAs. If you're trying to sell them something, you probably want customers with deep enough pockets to afford a somewhat modern phone. If you don't want to sell them something and you just want to provide information (e.g. restaurant website, digital flyer, event RSVP, etc) you could serve a simple static page, or forego the website entirely and post your information on social media which can then be consumed by optimized native apps.
If you don't want to sell them something and you don't want to tell them something, why are you building a complex SPA at all?
> forego the website entirely and post your information on social media which can then be consumed by optimized native apps
...and force the user to install the "optimized native app" for the social media site du jour? Or simply let them put up with Facebook's "yes-this-is-a-public-page-but-I'm-still-going-to-ask-you-to-log-in-or-sign-up-just-because-I-can" shenanigans? Great UX, that!
Are we really going to assume that your average user doesn't already have FB installed and already signed in? They have over one billion users, you know. It's not exactly rare to find Facebook on any arbitrary person's device.
Even assuming all of those accounts are still active (and none of them are duplicates), that still leaves around 7 billion people without a Facebook account. Personally, I do have a Facebook account, but I only use it on my private desktop PC, not on my work laptop and also not on my phone.
I agree with you; I'm only logged into my work FB account on my personal device. However, I'm also a power user (and I'm guessing you are too), so I don't think our experiences extrapolate to the population as a whole. Sure, 7B people don't have FB, but are you trying to invite 7B people to your event or business? Not really; you're probably trying to target a very specific kind of person, or a group of people you already know. In that case you can just do some surveying / focus-grouping to figure out what people already use and push your content on the platforms that cover the majority of users (which in most cases, I'm guessing, would include FB).
That doesn't jibe with my experience. If you are using something like React to do client-only rendering, you're gonna be waiting a lot for blocking critical-path JS to run, as well as the actual paint time (which alone can easily be in the hundreds of milliseconds).
> Ship your users an app including a copy of the data ahead of time
That's not really feasible though, is it? Most SPAs in the wild fetch data on demand, so you still pay the network latency no matter what (on top of the non-streaming rendering delay each time). This is why you get those skeleton placeholders on sites like Facebook.
> which alone can easily be in the hundreds of milliseconds
This actual paint time is surely not any different than if you rendered the exact same application from pre-hydrated HTML though.
Plus, if your application is so complex that paints are on the order of magnitude of network latency, it's probably very important to have a framework to handle interactivity; otherwise everything ends up a mess if you try to reload the whole HTML page just to change some small DOM subtree.
Overall, I don't really see the situation where an SPA is performing worse than a server-rendered application, given that some in-browser interactivity is required.
> This actual paint time is surely not any different than if you rendered the exact same application from pre-hydrated HTML though.
Well, it's different in different ways, depending on which goalpost you're standing on. If by that you mean do SSR, then run React on top, then you've just incurred the same TTFP as plain HTML, plus the cost of React hydration, plus potentially a second repaint (or even third and fourth repaints, depending on how your data trickles in). In the wild, this pattern gets particularly egregious when people decide that it's ok to have loading icons and widgets pushing each other down as data comes in.
If you mean to compare to client-side rendering without SSR, then you're still looking at the cost of at least two repaints (the initial TTFP repaint, plus any repaints from data fetching after that).
You misunderstand - I mean that literally. Arriving to the same initial application state from any means, surely the HTML painter isn't going to be slower or faster. Other parts of the stack will of course be, but I really didn't understand how the painter would induce latency.
I personally see extremely limited value in SSR, I prefer to let the client machine deal with assembling related data objects into a nice UI, so you won't catch me out here arguing for it. I load an HTML page that displays an animated spinner inside #root, and then once my bundle loads it lays the application out.
In single-page apps, navigating between pages is hard to get right. https://vector-of-bool.github.io/ implements its own JavaScript-powered page turning. However if you press End on a long page, then navigate to a short page and back to the long page, the browser's scroll position gets truncated.
In the "Instant Multi-Page App", navigating between pages "just works".
The fastest experience will always be without a special framework: nothing except the content, served in a single minimal request/response. That means no JS (except for issuing requests), no browser framework, pure content. The browser sends the simplest request and receives the simplest content; the user reads it or looks at it. Fast as hell. It can't be faster than that, because a JPG cannot be compressed further and gzipped HTML cannot be compressed further. With later HTTP implementations, the connection can already be open and re-used.
To explain this with an analogy: bloatware comes when you decide that your true content is not enough to be appreciated as-is and needs to be beautified, made interactive, and so on. Just like make-up. Without make-up, you only need to show your face. When you decide to use make-up to cover the true content (your true face), you need a huge amount of overhead: you need to buy the make-up, learn how to apply it, clean it off, and re-apply it. You need robust make-up. You need waterproof make-up. Then you need a special cleaner, because water will not be able to remove it. Then you start needing a make-up framework to help you clean, apply, and re-apply the make-up consistently and easily. So reinventing make-up frameworks will do nothing to alleviate the overhead. Dropping the framework is the best way to go.
I hope this analogy makes clear what I meant.
You know what also uses the streaming parser? Standard server-rendered pages.
The idea that SPAs are faster because "they just download the content" is rarely true: HTML compresses well, and the content takes up about the same space either way. The real bottlenecks are re-executing huge amounts of JS per page (don't have huge amounts of JS?) and page transitions (being worked on).
If the majority of users only view the one page they land on, throwing a load of JS at them to make subsequent page loads faster is only slowing down their experience.
It's also pointless for users who aren't the lowest common denominator, since opening links in new tabs also defeats the whole premise. That's how I'm browsing HN right now: middle-clicking several links on the front page, so I don't lose my place, opening the links and then comment pages in new tabs.
> since opening links in new tabs also defeats the whole premise
You should still have assets cached and not have to refetch them, but on most SPAs you still have to fetch that page's content dynamically, so in theory you're reducing load time.
Of course, you could just reduce the entire bundle size to be reasonable, statically render everything and save a whole bunch of complexity, but that wouldn't be very Web 4.0 of us.
The initial motivation for building intercooler.js (and then htmx) was because I found that slamming a large html table rendered on the server into the DOM directly was orders of magnitude faster than running the JSON equivalent through a local template engine.
Sure, but what's your use case? Strict data table where everything is a string or number? Great. Full fledged application where your table rows are actually selectable items that modify app state and cause rerenders elsewhere? Surely a slight performance hit in the table is worth the smoothness of an application.
For example, I have a workflow execution tool. You have a Table serving as a left column, and when you select a workflow (by clicking its table row), it populates the right column with the graph of that workflow.
I often feel as if arguments against SPA are constructed against the worst examples of the thing, when the UIs it enables are smooth, performant, and best-of-all not requiring user installs.
You don't have to use a SPA to achieve that. With a couple of lines of JavaScript, you can insert snippets of html into a webpage that were rendered on the server. And remove them with one line.
Your entire use case is achievable with server-side partial rendering and a few lines of native js.
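Concretely, a minimal sketch of that (the endpoint and element IDs are hypothetical):

    // Fetch a server-rendered HTML fragment and swap it into the page
    async function loadWorkflow(id) {
      const res = await fetch('/workflow/' + id + '/graph'); // hypothetical endpoint returning partial HTML
      document.querySelector('#graph-panel').innerHTML = await res.text();
    }

    // ...and remove it again with one line:
    // document.querySelector('#graph-panel').innerHTML = '';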
So your proposition is that instead of downloading a client bundle that knows all of these actions, my server:
* renders a Table for me
* my thin client injects that into the page
* my onClick handlers for the Table are going to do what? Make an API call to the server so it can render the focused Graph section of the page now? Or is it supposed to hook gracefully into some client state that is a part of the aforementioned thin client?
Either way it sounds like instead of having one async stage at application load (which can be optimized with multiple techniques), you are proposing that each user action has to wait for a server to receive the request, render HTML, return it to the client, and either refresh the page or inject the server-rendered HTML.
I fail to see why this is better than an SPA. Especially given that there's context menus and workflow actions everywhere.
Anything that actually changes the data requires a server roundtrip. The point is that it's faster and simpler to just have the server render the entire HTML and replace that rather than making an API call with client-side JS rendering.
> Anything that actually changes the data requires a server roundtrip
I mean, of course, but clearly I've outlined a situation where you can easily prefetch some objects to have an instant user experience. These are plentiful.
Additionally, with an SPA you enable responsiveness in ways server rendering can't accomplish. You can display the loading status of async resources, and you can optimistically display changes so the user can begin editing another piece of data without waiting on latency-bound UI, etc.
Finally, "simpler" is a hard sell. Simpler for who? Someone who is writing a simple HTML data table page? Yes. For someone who is building a collaborative graph editor? Give me a break.
This is my point this whole comment thread - the anti-SPA crowd here sometimes hates JS and browsers so much they are willing to completely sacrifice user performance inside application use-cases because they find the idea of an SPA unappealing. It's pretty absurd, facile really, to claim "it's faster AND simpler".
Where's the prefetching? The table loads with the page. Loading it after with JS will only go slower.
> "server rendering can't accomplish with an SPA"
Again, you don't need a SPA to have some interactive DOM manipulation. Nobody is saying everything requires fresh HTML from the server, but something like jQuery Datatables will give you 99.99% of instant client features while working with server-rendered tables. htmx.js is another great library that solves most use-cases.
> "For someone who is building a collaborative graph editor?"
Obviously not, because that's one of the few times where a SPA is needed.
> "the anti-SPA crowd here sometimes hates JS and browsers so much"
What's with the extremes? Nobody is against SPAs or JS or browsers. In fact we want browsers to be used for what they do best, which is rendering URLs into pages very quickly. SPAs make a tradeoff in trying to emulate most of the browser performance and reliability in return for complex interactivity. The issue is that most sites don't need that interactivity, and what little they do need can easily be done with much less.
Yes, I said that in the context of this thread, specifically the GP post that said:
> "a large html table rendered on the server into the DOM directly was orders of magnitude faster "
Ignoring all context and calling this a "childish analysis" does nothing to further your arguments, and is yet another example of your comments going to the extremes.
You show a dogmatic support of SPAs without any consideration for the repeated statements that they are overused and not fit for the majority of the interactivity that people use them for. If you refuse to accept this, then there's nothing more to discuss.
I don't know what I'm talking about, yet I build slick UIs for a living. Ok.
You are advocating for every action to take a server round trip to render, a client UI that consists of some cobbled-together mess of client JS, endpoint-rendered HTML, and (worst of all) extra server infra to facilitate it, when your client presents you with a perfect execution environment for a single application to transform data into semantic UIs.
I doubly enjoy that you didn't answer my earlier interpretation of your proposal, nor name any specifics of how you work, instead opting to swing back by once other people had sufficiently derailed the conversation.
> For example, I have a workflow execution tool. You have a Table serving as a left column, and when you select a workflow (by clicking its table row), it populates the right column with the graph of that workflow.
That is trivially implementable in pure HTML interactions.
If you were making a 3D game or something, sure... but that's just normal web navigation.
Basically, you are proposing what? That I have a server that renders this table, pre-populated with onClick handlers that magically know what function to call to tell my browser state that it should focus a specific workflow?
Or are you suggesting that I reload the entire application to use "pure HTML interactions"?
I really want to understand - how are you making this construct both not fragile to iterate on AND not require a page reload or server round-trip after the user clicks on a Table row.
You would have rows in one table; those rows would be annotated with two attributes:
hx-get="/workflow/<workflow id>"
where <workflow id> is the ID for the given workflow. This would issue a GET request to, er, get the details for the given row, in partial html format.
And also
hx-target="#other-table"
This would tell htmx to take the received content and put that into the other table on the screen, by id. Since this target is the same for all rows in the table, it could be moved to a parent element, where it would be inherited from.
Finally, you could also use
hx-push-url="true"
If you wanted to push the URL into the navigation bar, creating a history entry for the navigation, which would allow users to copy a URL to a particular spot in the two-level navigation and enable back-button support.
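Putting those together, a minimal sketch of the markup (IDs, URLs, and row contents are illustrative):

    <table>
      <tbody hx-target="#other-table" hx-push-url="true">
        <tr hx-get="/workflow/1"><td>Deploy pipeline</td></tr>
        <tr hx-get="/workflow/2"><td>Nightly backup</td></tr>
      </tbody>
    </table>

    <div id="other-table">
      <!-- the server's partial HTML for the selected workflow lands here -->
    </div>

Clicking a row issues the GET (click is htmx's default trigger for non-form elements) and swaps the response into #other-table.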
Thank you for the tutorial. It is quite a nice library, to be sure, and the freedom to work outside of the JS ecosystem, instead writing your servers however you please, seems nice. However, I feel you are talking right past me to show me your library & dislike of JS. From my last message:
> how are you making this [...] not require a page reload or server round-trip after the user clicks on a Table row
The whole appeal of SPAs is they take perhaps 2s longer to start up than your normal webpage, and then you get lots of instant interactions. The example above is one of them. Your model requires server round-trips for every interaction.
I get it - you don't like JavaScript. I find your model quite nice and graceful for non-application purposes. It's certainly a brilliant way to render some types of data.
But for fully interactive applications where your user would want to have context menus showing applicable actions; good feedback on app / loading state; and avoid server latency on any type of event; I feel like SPAs are far superior.
I also feel like you have removed yourself from writing JS-flavored HTML, but the solution of server-siding it introduces two problems. One is that you still need to hydrate and format that HTML somehow; I again think you can go a long way with templates, but if you're just using React on the back-end it seems ridiculous. Second, you have to take on all the rendering load instead of just passing through some JSON-formatted DB results, which puts your server infra under far more pressure at less load, to my eyes.
Final gripe: that interaction is not "pure HTML"; you are using a JS library...
The primary advantage of SPAs is the increased interactivity of the UI. Instant navigation is nice, but that's an implementation detail: an SPA could just as easily lazy-load that data; if it doesn't, it must chew up memory on the client and then deal with synchronization complexity with the back end.
It's a trade off, of course. SPAs are a reversion to the classic client-server setup, and there are advantages and disadvantages to both that and the newer web model.
> One is that you still need to hydrate and format that HTML somehow
htmx expects HTML back from the server; there isn't any hydration or client-side templating (unless you want that).
> Second, you have to take on all the rendering load instead of just passing through some JSON-formatted DB results,
Formatting database data into a JSON string is not significantly less CPU-intensive than formatting it into an HTML string, and both are a rounding error compared with the expense of connecting to the data store.
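To make the comparison concrete, a sketch of the two response styles in Express (the routes, the db helper, and the template name are hypothetical):

    // JSON: the client runs it through a template engine
    app.get('/api/workflows', async (req, res) => {
      res.json(await db.listWorkflows());
    });

    // HTML: the server renders it, and htmx swaps it straight into the DOM
    app.get('/workflows', async (req, res) => {
      res.render('workflow-rows', { rows: await db.listWorkflows() });
    });

Either way, the expensive part is the db.listWorkflows() call, not the serialization.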
> Final gripe...
Yes, an irony of my life is that I had to learn a lot of javascript in order to avoid writing much javascript.
I think my biggest complaint about what you've written here is that you just shunt these "complexities" you wield against SPA to the back-end and then gloss over them.
I strongly disagree that "formatting database data into a JSON string is not significantly less CPU intensive than formatting it into an HTML string" - business logic is always the most expensive code to inject. This means you're either, again, not really making full-blown applications, or they're exceedingly simple. Postgres can craft nice JSON output straight from the horse's mouth. Can't say the same for HTML.
> Yes, an irony of my life is
The code looks great. Congrats on finding a development path that suits your tastes.
Nothing stops you from loading all of those graphs in the initial HTML payload if you want to, and then displaying them without a round trip to the server. These things are not exclusive, you don’t need to go all-in with a client-side framework to achieve basic interactivity (like tabs).
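A minimal sketch of that idea (element IDs are illustrative):

    <div id="graph-1">...first workflow's graph...</div>
    <div id="graph-2" hidden>...second workflow's graph...</div>

    <script>
      // Swap between graphs shipped in the initial payload; no network involved
      function showGraph(id) {
        document.querySelectorAll('[id^="graph-"]')
          .forEach(el => { el.hidden = (el.id !== id); });
      }
    </script>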
This is such a silly thought. I have to have a server capable of transforming data into valid HTML snippets that are aware of the context they'll be put into, a client capable of listening to clicks and then putting data or snippets into some context it has to be aware of, and round-trips for everything even though I have a stateful client; and it's best if that stateful client is an orchestrator for remote systems which can somehow inject interactivity into my stateful client.
It's spaghetti architecture and I don't see what the gain is. Why do my servers need to know how to describe my layout!?
You are not listening. You don’t need round trips. The same way your SPA receives its initial data from the server, you can send it as HTML (hidden) and just use JS to switch between the views. And there is nothing special about “understanding layout” on the server, you can run the exact same rendering code in the server with node. I’m not going to bother mentioning streaming html / progressive rendering, SEO, caching, semantic content or accessibility since you obviously won’t care much for that.
We’ve been doing this on the web for over two decades now. Tell me about spaghetti architecture once you work on a real world project using React.
> You are not listening. You don't need round trips.
Ok, so you've already proposed that my server not only needs to know how to load all the data, it also has to have an HTML renderer baked in. And in your haste to say my SPA doesn't have any advantages, you've already forced your server to execute a layout algorithm on N graphs of unknown complexity, on your own machine & dime instead of your client's, just to accomplish what my SPA got for free: instantly rendering the next content when the user clicked.
The fact that you can come up with a list of things orthogonally related to serving web content that aren't solved any more gracefully with server-side rendering than with an SPA (SEO, semantic content, accessibility) is just silly. Same with this idea that you're going to pre-render the HTML into an ingenious layout where the user's series of actions uncovers nested HTML and doesn't require round-trips. It's as if you're ignoring that a primary benefit of SPAs is that you can hide data loading really easily by prefetching JSON blobs and writing JS to navigate the tree. It's literally the language for the task.
It's exactly why I've not been shy to say this: anti-JS anti-SPA users get so bent out of shape that SPAs build nice UIs that they throw the baby out with the bathwater.
My reason for SPAs has always been complex behaviors in the app; also, state persistence is a PITA if you don't end up with a nice MVVC on the front-end.
Not saying it's not possible to make complex pages without SPAs, but there are some things that are harder, and some that are easier.
Personally I love writing SPAs, I find that I end up making less complex code which ends up being easier to maintain. Not counting by the number of lines of code, but counting by how easy developers find it to change the logic without breaking everything in a complex codebase.
The sort of garbage we ended up doing prior to SPAs justify SPAs.
Writing a server to dynamically create an HTML page that includes first-pass content, as well as potentially dynamic JavaScript to make the page interactive in "later passes", is just an ugly mix of client-side execution and server-side execution in one spot.
The application just feels a lot more consistent when the client is just pulling data instead of being thrown away and recreated in the middle of a request made by a browser for a particular URL that is just one of a very long series of requests.
There are techniques for controlling complexity there, and often you can make a single-page react page which will encapsulate complex behaviors.
Often a mix of the two helps. But you end up having one set of code managing the static page and another for the API-based interactions, and then once you have API-based stuff, product asks to add a feature, and another feature, until you have multiple SPAs. :P
Code reuse is a big reason too: why write 10 different SPAs for one site when you can write 1 SPA with a ton of code/tool re-use?
I always end up taking 1 SPA and make 2-4 different SPAs out of it, each facing a different set of users, with different security/look requirements, but all underpinned by the exact same technology.
So much headaches saved.
Just to make it clear:
- I agree that SPAs didn't come out of developers being idiots, but rather because the shit we had to do before SPAs was craaaaaaaaaaazy. Jesus I remember the old jquery things. And before jquery... shiver...
- I think SPAs are definitely not a one-tool-for-all-problems kind of thing.
- Interestingly enough, SPAs can be a one-tool-for-all-problems kind of thing with SSR, that's what linkedin does. LI is an SPA with the ability to behave like a non-SPA via isomorphism.
I am curious what tools you are using. I am trying to upgrade Angular 1.4 to 1.7 and it is a nightmare. Has your method had any sort of time test for upgradability?
I've been developing React for the last 5 years, times change and "best practices" change with them, but my 3 year old model / graph editor seamlessly upgrades across React versions, even if my eyes don't love the code like they once might've.
If you have a messy codebase with logic mimicked on both the back and front end, it's not just the developer experience that suffers - the user experience will eventually suffer too.
I mean, isn't that what Ruby on Rails is? RoR's minimum response time is WAY slower than most other frameworks'. Getting a Ruby on Rails page rendering in under 100ms is work, while getting it under 50ms in other frameworks isn't much effort at all.
Tradeoffs everywhere. If I needed to serve my pages to an area where downloading 1mb would take a minute and that was my core demographic, I will be making very different choices in tech than what most people have to write for.
Please. Please don't do this. Service workers cause so many problems, I have never seen an implementation that didn't completely explode at some time or another, and as far as I can tell the only fix is to open the dev tools and delete service workers. They're a bad solution to a non-problem.
Twitter's implementation just exploded on me the other day on my phone. Since then I cannot open ANY links to twitter anymore. I just get a page from the browser that says "Cannot complete request".
It's on my phone so I can't just clear a single site's data, and I cannot be bothered to clear everything and relog everywhere.
Hook up your iPhone to your Mac via lightning cable. Open Twitter on your iPhone and then on your Mac open Safari, enable develop mode, and then go to Develop > Your iPhone > Twitter tab. This will open up inspector on your Mac allowing you access to the local Twitter data on your iPhone.
Have you used the internet lately? Service workers are ubiquitous. Google.com is using them. Amazon.com is using them. Your uncle's blog is probably using them too.
Sure they can be tricky, but don't dismiss their success.
It'll even have 2x the throughput at the same CPU level. Plus users can choose when they want to run your app, as opposed to you accidentally draining their battery because they left an old tab open.
"Better" is subjective. Web apps sacrifice performance, but you get an app that you don't have to install, that can't do anything malicious to your system without your consent, that's (at least in theory) easily shareable and deep linkable. Maybe that tradeoff isn't worth it for you, but it is for a lot of people!
Not really. Non-programmers have been using computers ever since the PC revolution and are quite used to downloading, installing, and running programs, provided they have a usable GUI. After all, web apps are very recent. Even now, most programs are installed, and the rest are websites.
Web apps are relatively recent, sure. But the web is three decades old at this point. The majority of people using computers today grew up with web apps being a thing.
It's difficult to verify, but my guess is your stats are backward: most programs are websites, and the rest are installed.
Clearing browser cache used to be our "have you turned it off and on again?" Looks like we need a "restore website to factory settings" which clears cache, cookies, service workers, offline storage, JIT of the code etc.
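Until browsers grow such a button, something like this pasted into the dev tools console is the closest equivalent (all standard browser APIs):

    // Unregister every service worker for the site
    const regs = await navigator.serviceWorker.getRegistrations();
    await Promise.all(regs.map(r => r.unregister()));

    // Wipe the Cache Storage API
    const keys = await caches.keys();
    await Promise.all(keys.map(k => caches.delete(k)));

    // Clear web storage
    localStorage.clear();
    sessionStorage.clear();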
First, I think service workers are essential if we don't want all future apps to live and die by arbitrary app store restrictions. Service workers provide a lot of functionality that make web pages competitive (feature-wise) with apps. Second, I'm sure you've seen plenty of implementations that were just fine, you just didn't notice. Angular comes with a default implementation that does caching and plenty of sites use it.
This is seriously nifty, but you really don't need SPAs or client-side request interception for page loads to feel "instant". All you need is server-side rendering without a zillion third-party scripts. Clicking around Hacker News is a case in point.
We have different definitions for “feeling instant”. Most clicks on HN take well over 2s for me on a 4g connection, and often about half a second even on fast broadband. Sure that feels snappy compared to most websites, but it is still a completely different experience to using an app, or indeed a website that has been designed to eliminate perceptible delays as opposed to making them shorter. These are separate (and sometimes even conflicting) design goals.
Yeah it took over a second to get to this comment page. A lightweight js app could have popped up a modal, I put in my comment, click send, and hide the modal and show a spinner on the bottom right changing to a checkmark when the comment goes through. That's feeling instant, not waiting for the back and forth to a server. The other stories could be lazy loaded in the background as I scroll.
If the delay you are experiencing is caused by your connection, having the data served via a JS worker will not be faster. The additional overhead compared to a pure HTML page will hurt even more.
Sure, if you preload all your sites' content, which I'm sure the users on slow mobile internet with limited volume and those on weak mobile devices will just love. But I don't see that happening in the linked article nor on the demo page.
I'm in New Zealand. HN page loads are about as instant as you can get on the web. Certainly faster than the vast majority of SPA interactions and their requisite API calls.
Maybe I am missing the point. The problem is speed, and the proposed solutions seem to be either a single-page app with bloated JavaScript or the thing the poster suggests.
Why isn't the solution just simple, classical HTML without any bloated JavaScript (which also needs to be executed)? The CSS is cached, so it doesn't need to be reloaded, and the next likely pages are prefetched via the appropriate <link rel="prefetch"> tags.
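For reference, a minimal sketch of that head markup (URLs are illustrative):

    <link rel="stylesheet" href="/style.css">       <!-- cached after the first visit -->
    <link rel="prefetch" href="/next-article.html"> <!-- hint: fetch the likely next page during idle time -->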
> it all started with the desire to eliminate blank screens in between pages and reduce payload sizes
Hell, no.
The "two main reasons" for single page app given at the beginning (faster app, reduced network traffic) are actually not the main one. The main one is that the browser loses its state at each page reload, and that the state on the server, if any, needs to be reconciled every time with that on the browser. It is fine only as long as your app doesn't have any client state. The more state you put in your client (e.g. a text field with its contents) the harder it is to maintain the state across page reloads.
This is why now we design our applications with all the state in the frontend, closer to the user, and a stateless backend. SPAs are in fact enormously simpler than equivalent non-SPAs.
> it all started with the desire to eliminate blank screens in between pages and reduce payload sizes
That isn't even a thing anymore, except in SPAs or Internet Explorer; Chrome (seems to?) waits until some content is loaded before it switches to the clicked page, so there is no blank screen.
Payload size is not really a problem either, given that content is not that much data. Images and media take up the brunt of the bandwidth; payload size is not the issue.
> The main one is that the browser loses its state at each page reload, and that the state on the server, if any, needs to be reconciled every time with that on the browser.
Yes, I also noticed this flaw. There is no good way of dealing with this other than saving state into localStorage which is less than ideal.
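A minimal sketch of that workaround (the field ID and storage key are illustrative):

    // Persist a text field's contents across page reloads
    const field = document.querySelector('#comment-box');
    field.value = localStorage.getItem('draft-comment') || '';
    field.addEventListener('input', () => {
      localStorage.setItem('draft-comment', field.value);
    });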
And that really underscores the OP's point. With SPAs your servers no longer need to scale linearly with your user base. You don't have to solve all the myriad problems that come with maintaining state for all users in a centralized location.
I think those are definitely some key advantages of SPAs, but my impression when they started becoming adopted, especially among consumers (users), was of one seamless thing with no new page loads, etc. Certainly development is more manageable with client-only state, but I don't see that as the first big sell.
With service workers disabled in Firefox, the site just breaks; no graceful degradation / progressive enhancement. When clicking links in the header, the URL changes, but the page content doesn't.
The trouble is trailing slashes and inconsistent behaviour between the service worker and the web server.
https://instantmultipageapp.com/images/ loads from the server, and not the service worker. Observe how it lacks the Blog link in the header, which is missing in what the server is rendering—see /images/index.html in the repository.
https://instantmultipageapp.com/images loads from the service worker, and the server serves /index.html for it (which is very wrong) rather than /images/index.html.
This is purely a bug in the demo. Kind of a fundamental bug that leaves me unimpressed, but a simple bug nonetheless. If you were to actually run the service worker on the backend, it’d resolve this kind of problem, and also the inconsistency between what the two platforms render, as seen in the missing Blog link.
I think it’s better characterised in this way: service workers are dangerous because you can break things; so it’s very important that the foundations of your service worker are sound. If the foundations are sound (which takes effort), there’s nothing wrong with them. But ad hoc service workers, like this, are extremely dangerous.
Off-topic: how/why is Medium still around with their very aggressive techniques? They started as a minimalist blogging platform, and once they acquired the content, they started putting paywalls in front of their users' content.
I don’t subscribe to Medium yet but I started using Apple News because the ad-driven model has led to publisher pages that are near unreadable / terrible UX. So I won’t be surprised if Medium is successful.
I don't even know how to find articles on Medium. There used to be a whole bunch of articles listed on the front page. Now it just seems like one big advertisement.
In my experience if you really need fast page loads, SPAs are off the table. So the premise of the article makes no sense to me. In fact, what do people on HN think the main reason to choose an SPA is these days?
For me, SPA is either chosen due to "fancy factor" or because the app is too complex to easily implement as a traditional (SSR) web page. That could be a complex search+filter with autocomplete, or a social network-like UI with infinite scroll and dynamic widgets.
Yes, in practice this is what I see as well. I would also add "developer likes framework X tooling". The other use case that SPA frameworks sell is developing for all platforms at once: web, mobile, and desktop. It sounds nice on paper, but I have no experience with it.
For sure. I have a complicated SPA app that is as complex as any desktop application; but with the flexibility of it being a simple web app.
I never, ever, ever could have managed to pull this off before SPAs. Especially as a non-primary web developer; Angular basically does all the grunt work for me and allows me to keep a very structured application.
Looking forward to attempting a Blazor experiment at some point, as I still hate having to define the same object on the front-end vs back-end, and I use dynamic data a little too much. But the current state of being able to use an SPA blows the options from before out of the water.
In my mind, you used to have to be a 100% JavaScript/web developer to create any kind of usable, complicated SPA equivalent, but now I can do a structured, (mostly) typed front-end.
> In fact, what do people on HN think the main reason to choose an SPA is these days?
The only reason to choose a SPA is when you need an _application_.
A blog has no reason to be a SPA. A news site has no reason to be a SPA. HN has no reason to be a SPA.
An internal support tool has a reason to be a SPA. A messaging app has a reason to be a SPA (slack et. al). A music player has a reason to be a SPA (spotify).
This is impressive and would work really well in a lot of sites, especially ones with static content.
However it misses two important reasons to use a single-page app:
1. SPAs allow for highly interactive interfaces with custom components, drag & drop, audio etc.
2. SPAs can cache UI state in the client. The alternative is to store it in a server-side session specific to each client. This is possible, of course, but it will either be lost every time the server is restarted, or it will need to be stored in the database. Sending UI state back and forth to the database is another increase in load on the database and network.
There is also the more general point about SPAs exchanging less data with the server so putting less stress on the network but that is touched upon in the post.
Regarding point #1, is there anything inherent to the single page application that makes something more interactive? I think this article touched on an alternative to the one factor of routing being necessary to eliminate transitions, but otherwise you still have the same interactive capabilities within a page.
> Blank screens make for a bad user-experience. Users don’t want to wait for content to arrive from a server when they click a link or a button. They expect websites to be fast like native apps.
Why would users expect that? Most websites use multiple pages, and users are used to that. We change pages when reading paper books too. It's not instant.
In fact, some SPAs are much harder to navigate because they are over engineered.
People are impatient and expect everything on the computer to be as fast as the fastest things on it. Anything slow can feel like there is something wrong.
They don't expect things to be instant, but your page-turning example is relevant: if loading a new page takes the time of a physical page turn or two, then it is probably fine and feels smooth. More than a few hundred ms and things feel more "janky". This is something some people cite as a reason they don't like e-ink devices.
> some SPAs are much harder to navigate because they are over engineered
This is definitely true, but also holds for many non SPA sites too.
> > blank screens
This part of the parent post is significant IMO. If the previous page vanishes I find I'm less patient waiting for the new information. This is worse if there has been a delay before that happens too.
Though likewise, if nothing changes I might think something is amiss: did my click even register? Some extra indication of action (the click target changing colour is sufficient) helps here, when the background request doesn't show in the browser's built-in activity indicators.
> People are impatient and expect everything on the computer to be as fast as the fastest things on it. Anything slow can feel like there is something wrong.
True, but an SPA doesn't really solve that either. You still have to make a request to the API and wait for it just like you have to wait for some HTML. The only difference is with an SPA you can show a spinner to the user.
Also, on an SPA, the initial hit of JS can be alleviated with code-splitting (if your main chunk is small enough) but when you switch pages you still need to download and parse a new chunk of JS.
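For example, with React's built-in lazy/Suspense, each page becomes its own chunk (component names are illustrative):

    import React, { lazy, Suspense } from 'react';

    // Downloaded and parsed only when the user first navigates to it
    const SettingsPage = lazy(() => import('./SettingsPage'));

    function App() {
      return (
        <Suspense fallback={<div>Loading...</div>}>
          <SettingsPage />
        </Suspense>
      );
    }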
An SPA, if properly designed, can help interactive use significantly though, compared to full page reloads. You may pay for the reduction in interactive latency with an upfront loading cost, but when comparing to desktop applications that is often not too bad anyway (Excel doesn't load immediately on most PCs, games of any modern design generally don't either). People are more willing to wait a second or few initially than they are to wait half a second when getting more data from an app that they consider to already be loaded.
The problem is that many are not properly designed. In fact, the pattern is often used when it is not really appropriate, and a fairly static site (again with that caveat: if properly designed) would be much more efficient and just as user-friendly.
I agree, but it really depends on the context and if it is really properly designed (which takes a lot of effort).
Personally I don't mind waiting a couple of seconds for Gmail to load, but I hate waiting that long for Twitter, which is also an SPA. Not sure what the psychology is there.
> Why would users expect that? Most websites use multiple pages, and users are used to that.
I concur with this sentiment. Most SPAs I have interacted with end up being a bad experience because when a page change action does not result in a page refresh, I expect the content to be delivered with native-like performance. Obviously, mobile and desktop browsers don't deliver that just yet. I doubt they ever will until mobile platforms figure out a way to charge a cut of any transactions happening on these apps.
"In conversations with web performance advocates, I sometimes feel like a hippie talking to SUV owners about fuel economy.
They have all kinds of weirdly specific tricks to improve mileage. Deflate the front left tire a little bit. Put a magnet on the gas cap. Fold in the side mirrors.
Most of the talk about web performance is similarly technical, involving compression, asynchronous loading, sequencing assets, batching HTTP requests, pipelining, and minification.
All of it obscures a simpler solution.
If you're only going to the corner store, ride a bicycle.
If you're only displaying five sentences of text, use vanilla HTML. Hell, serve a textfile! Then you won't need compression hacks, integral signs, or elaborate Gantt charts of what assets load in what order.
Browsers are really, really good at rendering vanilla HTML."
Probably, although my question was not about the bloat but about the article's claim that users prefer the UX of SPAs.
I should rephrase that: they have big JS payloads to make the website feel as close to an SPA as possible.
> my guess is that if Github were created today, most of it would probably be an SPA
Github.com today has many components that are pure JS (most new features); writing it from scratch today, my guess is they would go almost all client-side JS.
> So we build single-page apps, where only the content that changes in the page is replaced, avoiding a full page reload, so navigating to another page feels instant.
HTTP caches and the absence of overcomplicated JS frameworks will do this better than any developer (simply because there's less to be done).
This article is basically a web dev admitting native apps are better but refusing to behave accordingly, choosing instead to make their own lives harder.
Edit: Also, makes this whole statement then publishes their article on medium...
React + Redux + ReactRouter is way too complicated, and it gets more complicated with every passing day, with things like React hooks and so on. But 90% of web apps don't need that complexity.
In web development, a polyfill is code that implements a feature on web browsers that do not support the feature. Most often, it refers to a JavaScript library that implements an HTML5 web standard.
It covers some pros and cons; the main advantages of using an app shell are:
> whether your web app is best modeled as a single-page application (advantage: App Shell); and on whether you need a model that's currently supported across multiple browsers' stable releases (advantage: App Shell)
I think it's reasonable to argue that lots of sites that are SPAs don't need to be, if they can do this instead.
Frankly, I do not experience any of the problems mentioned by the author in my single-page apps. Maybe because I avoid giant frameworks and my apps don't drag in megatons of dependencies. Instead they feel nice and respond instantly. Customers love it.
I've found SPAs to be less work. Instead of having two templating systems (server-side and client-side) there's just one on the client side. Now I don't have to share code between client and server, but more than that, the two are logically separated. The client can just be a client, rather than a strange half-extension, half-app of the server. Secondly, if you also want to expose an API, do you put it in the server that does the rendering, or do you have a third component that the server talks to? With an SPA, there's only the app and only the API and the two are separated out by their respective duties quite cleanly.
Once I started building out SPAs for data-driven applications, I never looked back to the old model of generating HTML server-side. It seems barbaric to me. The only hesitation I would have is if the SPA needs to be SEO optimized, in which case you can go either isomorphic (again, not a fan) or scrape pages via a headless browser as .html into a directory that your webserver serves up.
I agree having two templating systems is a problem (more on this later) but by going SPA your front end now becomes massively more complex than sprinkling some JS here and there. For example you now need to manage application state which is not a trivial problem on large projects. You also need to replicate native browser UX if you're serious about making a good SPA (eg: your forms are filled when going back, the content and scroll position are preserved when going back, etc).
As for the duplicate template problem, this is easily solved by doing SSR + hydration. I concede this is generally complex to set up (unless using something like Next.js) but it's worth it IMO. It's really the best of both worlds (MPA vs SPA). You are still creating components and having a sophisticated interactive UI just like in the SPA world, but front-end development is greatly simplified since you're only dealing with one page at a time. Also, in many cases, you might not even need an API. How many times have you created an API only to feed your SPA? It's a lot of overhead.
Also these days I don't think the decoupling argument is so strong unless you're working on a complex project with multiple teams.
Fail!
The demo site does not work.
Failed to register/update a ServiceWorker for scope ‘https://instantmultipageapp.com/’: Storage access is restricted in this context due to user settings or private browsing mode.
The article does not even try to mention this.
I thought this article was going to be about HTTP 2 Server Push which allows the server to deliver pages ahead of time before you click them and thus allows for the instant transition between preloaded pages.
It's strange to me that people do not appreciate that technology or discuss it more. I think it's just something that hasn't caught on.
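For the curious, a minimal sketch of push using Node's built-in http2 module (paths and file names are illustrative):

    const http2 = require('http2');
    const fs = require('fs');

    const server = http2.createSecureServer({
      key: fs.readFileSync('key.pem'),
      cert: fs.readFileSync('cert.pem'),
    });

    server.on('stream', (stream, headers) => {
      if (headers[':path'] === '/') {
        // Push a likely next page before the client asks for it
        stream.pushStream({ ':path': '/next.html' }, (err, pushStream) => {
          if (!err) pushStream.respondWithFile('next.html',
            { 'content-type': 'text/html' });
        });
        stream.respondWithFile('index.html', { 'content-type': 'text/html' });
      }
    });

    server.listen(8443);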
The idea that the DOM-based browser can be used as a GUI toolkit was wrong to begin with. It has been stretched to absurd proportions over the years. Maybe we need to go back to Java applets?
I disagree. The only problem is that people insist on using it for desktop apps. Web apps are their own thing. They should stay in the browser instead of leaking and spilling everywhere.