Browsers are pretty good at loading pages (duodecima.technology)
592 points by csande17 on July 21, 2019 | 317 comments



I don't get client side navigation. It's a worse experience in every way. It's slow, often doesn't support things like command-click, it usually breaks the back button, and even if it doesn't it breaks the restoration of the scroll position.

The only thing worse is a custom scroll UI.

Why do people try to reinvent the most basic features of a web browser? And if they do, why do they always only do a half-assed job at it?

It's infuriating.


Because it's faster. If you don't have to download all the content again and force the browser to re-render everything, then by design you get just the new content from the server faster, if you have to download anything at all. The idea has existed since the introduction of AJAX.

Furthermore, you don't lose state, which makes things much simpler.

Imagine a simple image gallery. You just update the <img> tag, update the URL with the history API and everybody is happy. If you were to navigate via links, you get the same behavior.
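
A minimal sketch of that pattern (the showImage helper and the data-full attribute are just illustrative, not from any particular library):

    // Swap the gallery image and record the change in browser history.
    document.querySelectorAll('.thumb').forEach(function (thumb) {
      thumb.addEventListener('click', function (event) {
        event.preventDefault();
        showImage(thumb.dataset.full);
        history.pushState({ src: thumb.dataset.full }, '', thumb.href);
      });
    });

    // Back/forward fire popstate; restore whatever image that entry points at.
    window.addEventListener('popstate', function (event) {
      if (event.state && event.state.src) showImage(event.state.src);
    });

    function showImage(src) {
      document.querySelector('#gallery img').src = src;
    }

Because each thumbnail is still a real link, middle-click and open-in-new-tab keep working even with the script in place.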

Of course shitty implementations exist and you only notice the bad ones. If done right, you don't notice that it's happening at all.

Lastly, and most importantly, context matters! It's not a silver bullet, but it can be really useful.


> Because it's faster.

The whole point of the article is that it's not true. It's not the only article that disproves it, and honestly, it's not difficult to notice it. Just go to any blog running off a static site generator; loading times of pure HTML webpages on a good connection are so fast the whole thing outruns client-side page switches even if the page is already in memory.


> The whole point of the article is that it's not true.

Only for the narrow condition of loading a whole page. A lot of Web apps make async requests much smaller than loading the whole page: imagine deleting one record in a list of 50 items; the async response could be the HTTP 200 header alone.
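
For example, removing one item might be as small as this (the /items/42 endpoint and the data-item-id markup are made up for illustration):

    // Ask the server to delete one record; an empty 200/204 response is enough.
    fetch('/items/42', { method: 'DELETE' }).then(function (res) {
      if (res.ok) document.querySelector('[data-item-id="42"]').remove();
    });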

I think it would be useful for developers to separate web applications from web sites and consciously make trade-offs (such as state management: for the projects I have worked on, adding "Undo" logic was far simpler with client-side state-management. YMMV)


Web apps making async requests should be making much smaller requests than loading the whole page - but they're not. They're loading a page and parsing out the "less than the page" bit to stick into the current page; or they're loading a JSON blob bigger than the current page to update the three visible elements on the page.

Imagine Software Engineers that actually engineered solutions - the use of small async requests would be a boon to everyone! But no one's doing this.

I know first hand of one company doing this nonsense with requesting a full JSON payload (describing the whole house that goes along with the proverbial kitchen sink), rather than requesting updates for the one property on the user's screen. I've proxy-sniffed at least two other, unrelated companies doing exactly the same thing.


> They're loading a page and parsing out the "less than the page" bit to stick into the current page; or they're loading a JSON blob bigger than the current page to update the three visible elements on the page.

This is... disgusting. I was speaking of applications I have personally worked on; the teams/organizations I have worked with were concerned about performance (and measured it with regression testing). I suspect the oversized responses and cherry-picking are a result of the back end and client being the responsibilities of independent teams, with the client team being told "use this pre-existing kitchen-sink endpoint".


It’s certainly possible to write good client-side code. I’ve worked on teams that really care and are given room to do it (because they were able to demonstrate its worth after so much effort), and on teams that don’t.

JavaScript, as she is spoke, is just an awful language. It has brilliant ideas and if applied correctly could make everyone’s lives better but that’s just not how it’s used.


So your argument is that many teams don't care and do a bad job and it is the fault, somehow, of the language they use?


It's a powerful argument. Defending client-side rendering here (or SPAs) is almost no-true-Scotsmanish. It is technically possible to do a good job, but it's almost never done. Yours and sangnoir's teams may care about performance and do actual software engineering - but it doesn't help me much when my bank doesn't do it, the places I shop don't do this, big sites like Reddit don't do this, and seemingly none of the SPAs I've visited in the past 5 years do it.


Have you used the bank's mobile applications, and are they fast? I suspect you're right to say there's a cultural issue, but it's not with the language; it's with the organization building on the platform. If they don't care about the user experience on the web, they wouldn't care about it on a native desktop or mobile application either.


Yeah, but how is this the fault of the language?

There's nothing inherent in JS to say "you must do a shitty job of optimising your page speed"

I'm working on replacing a PHP app that is currently taking 20 minutes to refresh the index page because their SQL doesn't scale. Is that the fault of PHP, SQL, or the developers who wrote it?

You can write crap code in any language (even Rust!).


It's probably not the fault of the language per se, just the culture surrounding the use of that language.

It could be argued that the same language, given another chance, would produce a similar culture, though I'm not 100% convinced of that. Anyways, what we need is a reboot of web development culture.


Maybe it's a fault of Sturgeon's law - I sort of wonder if the "necessity" to have so much web output - so many applications, so much new development does not create a situation where there is a pressure to make 95% of everything crap, because you need to get a lot of developers to make things and some of those developers are going to be crap, and you need to make lots of decisions and some of those decisions are going to be crap, and you need to do a lot of changes in short periods of time and that results in a lot of crap.

It just seems more likely to me than any culture about a language per se.


Might be, but I wouldn't discount culture as a mechanism reinforcing it. People aren't working in isolation; they build on each other, and enshrine "best practices" that are often enough the sources of these problems.

But thinking of it, Sturgeon's law may be at play. PHP used to suffer from a reputation similar to JavaScript's, and only started regaining its status as a proper server choice once the masses moved to greener pastures. Sure, the language was a "fractal of bad design" and had footguns galore, but it wasn't that bad, and most of the traps were avoidable when you had half a brain and used it. The web may very well be crap because it's where anyone fresh to programming can find a high-paying job, and you can become a "senior engineer" after one year of job experience.

But that, still, is a problem. Outside of programming, there are quality standards on the market - often enforced by governments. Even if 90% of chairs are crap, you can't go and sell that crap to the public. Quality standards filter most of the crap out.

If that's the case, I'm not sure what to do. Introducing quality regulations to programming might help solve the problem of website bloat and constant leaks of private data, but it would also destroy the best thing about the web and programming in general - if you have an idea and a computer, you can make it and show it off to everyone.


PHP is considered a proper server choice these days? When did that happen? It's been a long time since I encountered a new project being written in PHP.


I don't do php but I assume any resurgence would have something to do with Laravel.


> the use of small async requests would be a boon to everyone! But no one's doing this.

Some of us are! RFC8620 is purpose-designed for building network efficient APIs: https://tools.ietf.org/html/rfc8620


I apologize for the hyperbolic “no one.” I was rushing...


GraphQL was made to fix that issue.


GraphQL's nice if you want a third option to the choice between many bespoke endpoints or few generic endpoints, but if your problem is sending a list of 400 widgets with every single page load, then you have an easier and better way to increase performance sitting right in front of you.


When I want to read an article, I don't need a "web application". I want a static page with the text and the pictures. That's it. Not a single line of JavaScript. And certainly absolutely no image lazy-loading nonsense. If my data is that limited, I'd disable images myself in my browser settings, thank you very much.

What I'm trying to say is that most of the web isn't really that interactive. It's mostly comprised of content that is static in its nature.


> If my data is that limited, I'd disable images myself in my browser settings, thank you very much.

Yeah, because most visitors to most websites can and know how to disable images in browser settings.

In fact, let me know how to do this in iOS Safari. Before you tell me to throw my iPhone and iPad in the trash, get an Android, root the damn phone, install F-droid, compile Chromium with patches, or write my own operating system, now is a good time to stop.

I seriously hope this sort of condescending “elitist” bullshit will die off.


> Yeah, because most visitors to most websites can and know how to disable images in browser settings.

Maybe they would, if today's UX culture weren't removing every feature that isn't used on every interaction, and devs weren't trying to build increasingly complex reimplementations of browser features on each page.

In an ideal world, lazy loading of images would be something handled purely by the browsers, and users would be aware how to operate it. The site's job is to declare what it wants to show; the User Agent's job is to decide what to show, when and how. But nah, the web culture prefers to turn browsers into TVs.
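
For what it's worth, browsers do expose a declarative version of this in the form of the loading attribute (support varies by browser): the page states its intent and the browser decides when to actually fetch.

    <!-- The site declares intent; the User Agent schedules the fetch. -->
    <img src="photo-1.jpg" alt="Photo 1" width="800" height="600" loading="lazy">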


> In fact, let me know how to [disable image loading] in iOS Safari

Given any browser, how do I reload a lazy-image that failed to load, without resorting to whole page refresh or diving deep into the Web Inspector?

Most browsers have a context menu option to reload regular images, but they cannot and will never handle a bunch of dynamic block elements with background-image option.


Any reasonable img lazyloading implementation should produce plain img tags once loaded. Not sure why you would end up with background-image’d block elements.
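
A typical implementation is only a few lines (a rough sketch; the data-src convention is just the usual ad-hoc one, not a standard):

    // Upgrade placeholder <img data-src="..."> elements once they near the viewport.
    var observer = new IntersectionObserver(function (entries) {
      entries.forEach(function (entry) {
        if (!entry.isIntersecting) return;
        var img = entry.target;
        img.src = img.dataset.src; // from here on it's a plain <img>
        observer.unobserve(img);
      });
    }, { rootMargin: '200px' });

    document.querySelectorAll('img[data-src]').forEach(function (img) {
      observer.observe(img);
    });

If the swapped-in src then fails, you are left with an ordinary broken <img> that the context-menu reload can retry.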


"once loaded" is the key. If it fails to load, then bummer, no img tag.

I'm also not sure about the background-image'd block elements, for what it's worth.


If it failed to load, it would leave behind an img tag that failed to load, like any non-lazy-loaded image. Unless you’re talking about JavaScript code failing to generate an img tag (e.g. from a data-src attribute), which would be bizarre.

Edit: by “once loaded” I meant once loading is triggered, if I wasn’t clear enough.


>It's mostly comprised of content that is static in its nature

Yes, but the readers of the content are not the only customers the site is built for. The people writing and editing content are also consumers of the site. The article isn't (and doesn't have to be) interactive, but the CMS on the backend is. The advertiser's portal is. The reporting dashboards are.

Just like how Ruby on Rails is slow but we deal with it because it makes programming so much faster, dynamic websites are slow but we deal with it because it makes their administration so much faster.


The authoring tools and the published article don't have to (and I'd argue shouldn't) be joined at the hip like that, though. Obviously the editing tools benefit from being JS heavy. That doesn't justify polluting the published article itself with JS (unless the readers are editing the article themselves? Even then, though; Wikipedia seems to get by just fine without trying to replace half my browser with shitty JavaScript code).

> Just like how Ruby on Rails is slow but we deal with it

Not all of us :)


In pretty much the same time it takes for the HTTP 200 to arrive I can also download a few kilobytes of HTML. Latency will still dominate.


> Only for the narrow condition of loading a whole page

It is an interesting future we live in, is it the best one?


Case in point, navigation on my site is pretty fast (to me, at least) and doesn't use much JS at all: https://www.stavros.io

(I promise I'll reply to your email soon)


I'll admit it is pretty fast. Assuming you are using a mouse. But keyboard navigation is non-existent. So you have to ask yourself: is it worth going against decades of effort put into standard web navigation, and for what gain? Obviously only you can answer that for your blog. I'm not having a go.

But I will give you credit for the fact that it does work with Lynx!


By "keyboard" I assume you mean TAB-key-based navigation (I don't know of any other built into browsers)? If so, it looks to me like links are in fact TAB stops, but they're not being highlighted. It's something that should be solvable with a CSS adjustment.
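
Something along these lines restores a visible indicator if the stylesheet stripped it (a sketch, not the site's actual CSS):

    /* Make keyboard focus visible again on links. */
    a:focus {
      outline: 2px solid #4a90e2;
      outline-offset: 2px;
    }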


If the links are not highlighted by default, then tab navigation is basically non-existent since I cannot see where I'd be redirected and I personally bother to write custom CSS only for the websites I visit very often.

Is there an actual reason to disable highlighting? It lowers usability and accessibility, but I'm not sure what you get in return?


Oops, I'll fix that, thank you. I'm assuming the designer thought it looked "better".

Also I use Vimium so I never noticed, since that mode of navigation is much faster.


> Is there an actual reason to disable highlighting?

Some browsers, Chrome especially, show the focus outline when elements are clicked with a mouse and some people think it looks unacceptably bad.

:focus-visible is a CSS pseudo-class meant to solve that, but it’s only supported in Firefox and requires browser heuristics to do the right thing.

https://caniuse.com/#feat=css-focus-visible

https://css-tricks.com/keyboard-only-focus-styles/
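
The usual pattern, where it's supported (or via the polyfill), looks roughly like this:

    /* Hide the ring for mouse clicks, keep it for keyboard focus. */
    :focus:not(:focus-visible) {
      outline: none;
    }
    :focus-visible {
      outline: 2px solid currentColor;
    }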


...also known as "directional navigation" — widely used by screen readers and browsers on Android TV or some Android Auto devices.


Yeah, but you force me to stare at blank spaces while your webfont downloads [0]. I guess I should be thankful you’re not using that font for the body text, too (most sites do!).

[0]: https://imgur.com/a/FAX6BDW


I just set browser.display.use_document_fonts to 0 in about:config.


That's conflating two different issues


It's the same meta-problem, though: modern webdev practice involves adding gimmicks for no good reason that introduce extra complexity and resource use, and then adding even more complexity and resource use to fix all the expected browser behavior and features that the initial gimmick broke... and doing it poorly.


I hate all these gimmicks you are talking about (slow to load fonts, unnecessary videos, fucked up scrolling, spinners everywhere, etc.).

That being said I challenge the assertion that they are used for no good reason.

- Custom fonts are used because they help shape a brand. Properly used, the choice of fonts communicates a lot (at a subconscious level) about the company or person behind it.

- Unnecessary videos unfortunately work (i.e., help grab and retain user attention). Not for me (quite the contrary!), but for the majority of people stumbling upon a company's website.

- Same for weird gimmicks involving animations and whatnot. I can feel them draining my battery and it physically hurts, but most people like them and take away the impression that "this is a modern person/company".

All in all, many websites' primary goal is not to communicate factual information, but to capture user attention and/or communicate at a subconscious level, and gimmicky things work for that purpose :S


This idea that users need to see all text on your website in a particular font face (which is usually just a poorly packaged riff off of a famous font with minor changes that will largely go unnoticed by the unwashed masses) in order to market your product is absolute BS. Aside from a very few iconic font associations (e.g. IBM), there's no actual evidence that it actually works.

I fully support using a custom font for your visual assets - that's what SVG with text exported as curves/outlines was created for. But why should I use your horribly hinted, terribly rendered, absolutely illegible webfont (and have to download it to boot) just to read the copy on your website? Why should anyone?

Look at Apple. Despite what I'm sure their design team tells them, even they don't have an iconic font. They've bounced around between Helvetica, Myriad, Lucida, and a half-dozen other sans serif fonts that share certain design traits (which people do identify and associate in general), yet each time they introduce a new font they update their website to trigger your browser to download the webfont to render the page. It's a pointless exercise in the name of job security.

Companies have had websites going back 30 years. Web fonts have existed for a long time. This trend of each company having to pay tens of thousands to commission an unrecognizable, undistinguishable typeface that all text on their website must appear in is a brand new phenomenon, and there's zero proof it does anything besides (poorly) accomplish what someone thought was a good idea.


I agree; I cheated by purposefully using both meanings of "good". The reasons you mention I consider bad in ethical sense, and I believe the world would be better off if sites didn't do it.


About as fast as mine on my underpowered machine, and mine is plain HTML without much attention paid to optimizing it further.

> (I promise I'll reply to your email soon)

(Take as much time as you need; also, I didn't expect a reply over the weekend :).)


It's not as fast as https://dev.to/, which is an SPA. I.e. client side routing.


And it took me 2 minutes clicking around to break its idea of the page state. I am partially scrolled down the home page, and it just decided to deactivate scrollbars and the ability to scroll.

A great example of how it's quite difficult to reimplement stuff that works perfectly well on traditional pages.

(At least they seem to have gotten rid of some of the dark patterns they had in the past, that's nice to see)

EDIT: and within a minute more found another state bug :D

Yes, you can make perfect SPAs, but many people fail and it's a good question if the effort required to do it properly is worth it.


I've never found a bug on there, and I've been on it many times.

I'd love if you can show me how to reproduce this bug.

I just don't have this experience with SPAs breaking. I actually have no idea where it's coming from.


I just reproduced it following the commenter's instructions -- clicked the sidebar link, got a popup, pressed the back button and I have no scroll bar.

I can see what is happening there: the popup removes scrolling (because of the overlay) but the back button doesn't restore it.

This certainly does lend to the conclusion that managing page state in a SPA is not trivial.


It doesn't happen 100 % of the time, but right now going to the homepage, clicking one of the listings in the "newest listings" box, and then returning to the homepage through the browser back button triggered it.


I am able to reproduce this bug with Safari on a Macbook. I clicked on an item in "newest", then quickly pressed Cmd-Left to return to the previous "page". The front page reappears, but I'm unable to scroll with arrow keys or trackpad. An additional press of Esc returns the expected functionality.

It seems to be a fast and responsive site when it works, though.


> It seems to be a fast and responsive site when it works, though.

LOL.

Y'all, I love speed as much as anyone, but your development priorities should be 1) it works and 2) it's fast.

Using HTML links where 1) is never in doubt and all focus can be placed on 2) seems like good engineering to me.

Reinventing browser navigation is like building a rocket. You should be really, really sure that you need to do it before you try.


Ah, the good old “footer at the bottom of an infinitely long page.”

Facebook and Google used to be guilty of this, but it’s been a while since I ran into that particular brand of user-hostile web design.


Addendum - I may have been too harsh here. The page most likely is not infinite. You should be able to scroll through the entire backlog of dev.to content to access the footer.

Seriously though, you can't just bolt infinite scrolling into the middle of an existing page if you have content at the bottom.

If anyone's curious, the footer contains: Home, About, Privacy Policy, Terms of Use, Contact, Code of Conduct, DEV Community copyright 2016 - 2019

They've duplicated most of that (but not the copyright) in the sidebar's "Key Links" box, so it's not as big a problem as I've seen on other sites.

If they hadn't, I wonder about the legal implications of making your privacy policy, terms of use, and copyright notice completely unreachable. And why keep them in the footer if you never leave it on screen long enough to click it? Just a "not my job" issue with whoever implemented the continuous scroll? Clearly someone thought about it long enough to put the links somewhere reachable, but not long enough to get rid of the old ones?


It is fast, but it breaks even faster. I managed to get myself into a locked-up state in some 15 seconds. Couldn't go back, the displayed page was incomplete, but no scrollbars were to be seen.


Good: it detects when you’re offline and displays a fun error page.

Bad: this only works on every other click. Half the time the links just silently fail....


For me, it always displays the "you're offline" page.


A good demonstration of one of the major hazards of reinventing basic functionality like this: it’s really easy to end up with something that behaves differently on different browsers or just for different people.


You just reminded me that slate.com frequently tells me I'm offline after clicking the back button.


How do I access the footer on mobile? It just keeps scrolling it down as soon as it appears


Scrolling is extremely laggy on a current mid-range Android phone using Chrome. Almost unusable.


That definitely seems faster, I'll give you that:

https://i.imgur.com/bx1LUZD.png


Scroll, click a story, click a link off the site, and then hit the back button twice. You will incorrectly jump to the top of the homepage.


Sure, I mean all you're doing is basically loading up static content, in which case SSR HTML will perform just fine. But you do happen to use client-side logic for your comments section, which also makes sense to me. Your site isn't really the use case for an SPA.


Scrolling on the hamburger buttons is in my opinion much smoother on your site than on sites that force you to click their links in the page itself. I'll have to check out the source when I get home; it's overall a great UX with a focus on functionality over form, but the form is still nice enough to get the job done!


Your site is great and the perfect usecase for traditional links, but it's literally like 99% content, with very little markup. The difference is more stark on design heavy pages.


... loading times of pure HTML webpages on a good connection are so fast the whole thing outruns client-side page switches even if the page is already in memory.

Unfortunately for us front end engineers, we can't rely on the user having any internet connection, let alone a good one. Most of the push behind static site generators is to get as much of the code necessary to display the whole website into your browser as fast as possible, so it's there no matter what happens later. In cases where the user has a fast, robust connection that may well be slower than loading each page on demand, but in cases where the user's connection is slow and flaky (eg on a train) static site generators do work better.

Perhaps the next generation of websites will take your connection into account better. It depends on whether browsers and users will be willing to give up that information, though. As far as I'm concerned I will use everything I can to improve the user experience on the sites I build.


Well. I can speak from a pretty solid experience here. I’ve travelled the US by train, the length of the UK by train and large swathes of Europe by train.

The sites that tend to be the best to use are the ones that don’t take much to load. Connection tends to be spotty: you get bouts of “some” data and then you’re dry again for a while. If you can squeeze a page load in there, it’s infinitely better than a half-opened page.

Your first page load is -incredibly- important here. It’s the difference between a usable site and an unusable one.

The sites that work the best are the ones that do not try to do very much fancy stuff, because that fancy stuff only half-loads most of the time; leaving you to keep refreshing and hoping the spotty connection finally lets you bring in that 2KiB that will allow the page to actually load.


Your first page load is -incredibly- important here. It’s the difference between a usable site and an unusable one.

This is the point I was making. If the server can send the user enough data on the first load to make the whole app/site usable then the user won't need to wait for the network if they're in a tunnel. They've already got the necessary resources (which shouldn't be everything, just what's necessary). In that scenario client side routing beats server side completely because server side rendering just doesn't work when the user doesn't have a network connection.

That said, though, it's wasteful and entirely unnecessary if the user has a good connection. Really, websites should have a good mechanism for testing the connection. The Network Information API doesn't have particularly good cross-browser support and it isn't especially reliable yet.
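
Where the API is available (mostly Chromium-based browsers at the moment), the check is roughly this; disablePrefetching is a stand-in for whatever the site's own code would do:

    // navigator.connection is not exposed everywhere; feature-detect it.
    var conn = navigator.connection;
    if (conn && (conn.saveData || /2g/.test(conn.effectiveType))) {
      // Skip prefetching and heavy assets on constrained connections.
      disablePrefetching();
    }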


If. There are two failure points here, both of which are so frequent that I can't even recall seeing an exception.

One: the first page load tries to load a full page, instead of just some JS that bootstraps loading the rest of the page (which would let the first load finish before execution). Better yet, it should load the absolute minimum JS site kernel. Then the first load is likely to succeed on a slow/spotty connection, and we can skip to problem #2 below. This isn't being done correctly on most of the sites I visit for some reason; either the first page is downloaded in full, or the "skeleton" of the UI is the piece that always takes the longest to load.

Two: loading UX. You have a loaded UI skeleton with boxes that need to be filled via further requests. Or I've clicked on something and a subsection of the site needs to be refreshed. What happens is either nothing, except the SPA getting unresponsive, or I get the dreaded spinners everywhere. If the requests succeed, the spinners eventually disappear. If they don't, they don't. Contrast that with the pre-JavaScript style: if something needs reloading, my page is rendered essentially top-to-bottom, complete bits of content popping up as they're loaded; the site is usable in a partial state, and if anything breaks, I get a clear error message.

Can these two problems be solved correctly in client-side rendered code? Yes. Can an SPA be faster than full page loading? Yes. Is it usually? No, because web development is a culture. When a company decides "let's do an SPA" or "let's do client-side rendering", they unfortunately inherit almost all the related dysfunction by default.


I think you're not making the same point as me at all.

I'm going to take the common case of a news article;

Imagine for a second, you're on a train and you have low bandwidth internet, when it works, which is rare. Now you're on hackernews and you've loaded a whole comment thread, you're reading through and someone posts a link to your article.

Now, the article can load with client side routing, but will take longer. And depending on implementation might not actually have the whole article.

The page which is pure html with minor javascript is going to load, in full, and I don't need subsequent requests. And, it's guaranteed to be smaller than the one that you're over-engineering.


I have a family friend who lives in a part of the US where the only options are dial-up and satellite. He thus uses dial-up.

Without fail, the sites that rely heavily on JS to do page loading end up performing significantly worse (and in fact outright bugging out, and often failing to load entirely) than sites which just send ordinary HTML docs. A disturbingly-high number of the JS-heavy "web apps" out there seem to have little regard for actually handling failures on a sketchy connection.

Your point would make more sense in the context of an Electron app or something with a permanently-locally-cached copy of the site. That would at least give my elderly friend the means to predownload it when he piggybacks off the public wifi when he goes into town.


> Unfortunately for us front end engineers we can't rely on the user having any internet connection let alone a good one.

Surely some web apps need to work offline. But most web pages do not, and I don't want most sites I visit to store a bunch of data on my machine on the off chance that I'll use them offline.

"Offline first" seems really misguided to me as a rallying cry for all things on the web.


The article only tested one site with one browser, which is hardly a good test. It may be correct anyway, but it is not proof.

It may also be much harder to implement something like a comment section that is fast and correct with only static html and a server backend.


I don’t understand why doing things the non-JavaScript way wouldn’t be correct. Surely you still need to do all the correctness things on the server anyway, even if you do some in JS, because client-side validation won’t stop other people (or spammers) from sending invalid requests to the server. When I think of a correctness problem, it would be keeping the JS-rendered comment section synchronised with the server-side comment section, which seems harder than making it work with no JS.


> webpages on a good connection are so fast

Not everyone has this, especially when you're someone like me who writes informational websites for people who won't be connected to any internet for hours at a time.

Super fast response times don't cut it when your response time is non-existent at the moment.


If your internet connectivity breaks constantly, there is a good chance that a JS heavy client side app is going to irrecoverably break, require a hard refresh and take much longer to load because of your crap internet.

I should know, my internet connectivity sucks, and single page applications are almost without exception, a completely awful experience.


I don’t think blogs are a good example. How much complex logic do they have, and how much database querying and data processing do they do before they serve their content? Furthermore, how much interactivity do they have? Not much.


I'd guess as much as 90% of pages out there - fetch a blob of data from the server, render it, and have most interactions not touch the server at all.

You can view e.g. ecommerce sites as blogs with one post per combination of (search query, filter switches, page selected). This necessitates frequent trips to the server, but the site otherwise transfers roughly a page's worth of data per viewed page. I've never seen an online shop that was made better by being an SPA, compared to an old-school page reload on every click.

Similarly discussion forums - there's Discourse, which is arguably more gimmicky with its client-side magic; beyond that, if you want to see what would happen if you turned HN into an SPA, look no further than the dumpster fire that is the new Reddit design.


> Because it's faster.

In my experience sites with these kinds of navigation are typically extremely slow with initial page loads taking anywhere from a couple seconds (bad) to 10-20-30 seconds, sometimes even a minute (on a 100 MBit/s connection) and subsequent navigations are often slow as well.

It can be hypothetically faster, because you can theoretically get away with less data transfer and less client work, but in practice the exact opposite materializes.


Reminds me of JIT compilation, with the large initial load cost and the theoretical-but-mostly-unmaterialized reasons it could be faster.


HN is very fast. Stackoverflow is very fast. Lots of other well engineered sites are very fast. As the article shows, browsers are well optimized and really don't download all the content again. In most cases it's just the HTML, which is streamed and rendered as it comes in. All the assets are cached, and scripts are even stored in compiled state to skip reparsing.

Some sites might be slow at generating that HTML but then they would be equally slow at generating whatever JSON/API responses used in a SPA, along with loading all the heavy JS in the first place to render it all.


This is what I don't really get. It seems like some people are under the impression that generating and parsing HTML is slow or takes a lot of resources. In almost all cases it's going to be faster and less resource intensive than generating JSON - especially if you are just using a templating language to interpolate some values. I agree that JSON could lead to less data being transferred over the wire, but that assumes your client already has cached the megabytes of JavaScript needed for your SPA. For something like a news site it doesn't make sense.

On the client, parsing is fast; the slow part is the browser laying out the page and fetching new resources - but that is going to be slow anyway, even if you do client side rendering. To make that fast you need to do something more intelligent than just rendering a different React component, as well as prefetching resources in the background. But how many SPAs actually bother trying to do that?

I agree that SPAs have their place, and they have a lot of advantages over what we had before, but I just don't understand how it has seemingly become the default for any kind of web development - with such disregard for performance.


> I agree that JSON could lead to less data being transferred over the wire

I haven't benchmarked the difference, but I bet HTTP compression removes most of the difference

> but that assumes your client already has cached the megabytes of JavaScript needed for your SPA

A cache that will need to be busted every time you deploy new code. You do deploy often, right?


The idea exists but it's clearly not true, except perhaps in specific, niche uses like re-rendering a continuously refreshed graph. Even for your image gallery, it makes for confusion. Back button does what? Shift-refresh does what? Just let the browser do what it does. If you want the images to render fast, use HTTP/2.

Writing a web app with server side pages forces you to think about where the state lives. This is a beneficial discipline.


Google Maps would be the classic example of client-side refresh working so well that it's now the universal choice. At the time, it was a revelation, as the Mapquest-ish predecessors (if I recall) required a click and server-side refresh to scroll or zoom the map.

Of course the revelation here was that <a> tags weren't what we needed to move a map, but rather a click-and-drag plus scroll-wheel behavior to explore a huge image at various levels of detail. If the server-side page-by-page navigation paradigm is a lousy fit for information delivered over the internet, then it may make sense to re-invent the page load.

To use the language of a sibling comment, this brought things to a much more app-ish behavior. And eventually Internet maps have become, especially on mobile devices, an app. Hence the need to break server-side navigation may have foreshadowed the need to break out of the browser.


I think this can be generalized as: if you need to break out of "click and wait a moment and see a changed screen" paradigm, for something like a continuously scrolling map or a smoothly flowing server load graph, then you can make good use of client-side loading.

If you are just trying to re-create it, don't.


Exactly. But why a blog platform or news site or other content-focused website would feel the need to do that is beyond me. Not everything on the internet needs to be an app.


The back button would do the same as it would do after you click a [next image] link :)

Not sure about your point, the history API improves UX if you do it right:

https://developer.mozilla.org/en-US/docs/Web/API/History_API

It does everything a hand-written, pure HTML page would, just faster.


"You don't have to download all the content again" is also true if you version your assets and use a CDN with far-future expiry headers.

If you need an HTTP connection to download a section of HTML for a new part of an SPA it won't be that much different from a full page of HTML, presuming you compress the transfer as you should.

"Of course shitty implementations exist" is true of a non-SPA setup too.


While it's true that the browser won't have to download the content again, it will have to re-instantiate various resources (eg execute all JavaScript over again..., restart gifs). If implemented correctly, JavaScript navigation should seamlessly appear like normal navigation. Not supporting streamed requests is a serious drawback.

Of course browsers have actually gotten pretty good at AJAX-like loading instead of completely re-rendering the page. These systems tend to rely on heuristics though, and I don't think there's any documentation for them, but even when they fail the browser tends to be more competent than even the best JavaScript solutions.


I think there are good use-cases for SPAs (Google Maps, for example), but the majority of cases that I've seen aren't good ones, and the extra complexity involved in managing state etc. far outweighs any marginal gains in not re-instantiating JS.


> While it's true that the browser won't have to download the content again, it will have to re-instantiate various resources (eg execute all JavaScript over again..., restart gifs).

Your JavaScript shouldn't be blocking page load anyway. Defer, defer, defer.
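
That is, something like the following (app.js being whatever bundle the page needs):

    <!-- Fetched in parallel with the HTML, executed only after the document is parsed. -->
    <script src="app.js" defer></script>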


JS is cached in its compiled state in modern browsers. There is no download or parsing step for repeated loads.


without a unique key in the js name/path, or any server action to enable it? If so I would like to read about this particular development - can you point me to an article on how they're doing it?


Yes, Chrome uses V8 which has Isolates (also used by some FaaS platforms like Cloudflare Workers), and adds more optimizations on top like disk-based caching to share across processes. The script is keyed from a hash of its contents.

https://v8.dev/blog/code-caching-for-devs


Thanks! I guess I will have to see if FF and Safari support the same thing. Perhaps in another year we can remove cache busting from builds.


They do, it's linked in the blog:

https://blog.mozilla.org/javascript/2017/12/12/javascript-st...

https://bugs.webkit.org/show_bug.cgi?id=192782

Also did you mean http caching? Not sure why would want to remove that. It's still important for the browser to get the latest script content before the bytecode caching happens.


How can the download step be skipped then if you are using the hash of the content as a key??


That's what HTTP caching is. Browsers use headers and heuristics.


What if there's no content to download? The client could run the same algorithm that the server would use to render.

For example create a melody with seed: 4564342

The client can render it and if you access it from the server the server does the rendering with the same seed.
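
The trick is a deterministic, seeded generator shared by both sides; a tiny sketch (mulberry32 is a well-known small PRNG, the melody bit is made up):

    // Same seed => same sequence, whether it runs in the browser or in Node.
    function mulberry32(a) {
      return function () {
        var t = (a += 0x6D2B79F5);
        t = Math.imul(t ^ (t >>> 15), t | 1);
        t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
        return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
      };
    }

    var rand = mulberry32(4564342);
    var notes = ['C', 'D', 'E', 'F', 'G', 'A', 'B'];
    var melody = Array.from({ length: 8 }, function () {
      return notes[Math.floor(rand() * notes.length)];
    });
    // Nothing to download: the seed alone reproduces the melody anywhere.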

Caches also exist, and now with PWAs, offline modes would benefit from the History API.


>>Imagine a simple image gallery. You just update the <img> tag, update the URL with the history API and everybody is happy. If you were to navigate via links, you get the same behavior.

Please, don't! Just give me a page with thumbnails that are direct links to the original pictures! That's a million times easier, works blazingly fast, and I wouldn't need to spend so much time clicking multiple times to save each pic, and that's the best case! The worst is serving me blobs of PNG instead of the original pics. That's pure hostility.


> Because it's faster

That was the initial benefit. (Gmail, for example)

Then everyone started using the client-side model as the de facto architecture for every new website and web app.

The old, "give a man a hammer" adage.

Like any tool, it depends on the job.

But sometimes a tool becomes so popular that it takes courage to choose not to use it.

"Err...well... sure I can explain why I chose to go with plain old HTML.. (gulp)"


> Furthermore, you don't lose state, which makes things much simpler.

Or not... I'm thinking of infinite scrolls with no proper paging.


New Reddit.. shakes fist


It's probably not the network round trip or the actionable content making reloading the site so slow. It's almost certainly tracking scripts, unoptimized database queries, and unoptimized assets.


Sure, web performance is an afterthought at many places and the more people work on a certain project the worse it gets because each team has its own motivation, but they all have the same target to shoot at.

I think there's a connection between the organizational structure and the bad frontend experiences and this is almost always overlooked in these discussions. This is no surprise of course, we only see the crappy end result and blame the technologies, but this is superficial.

It's usually not the particular technology that's problematic (SPA, history API, React, Angular, WebAssembly...) but the use of it without understanding the problem first. That's why I find it funny when I constantly read general comments here like "SPAs are the worst".


SPAs are the worst, because companies deploy them to avoid separation of responsibilities and turn every employee into easily replaceable "full-stack developer".

Unfortunately, a lot of people write terrible applications regardless of the chosen technology. When this happens to purely server-side applications, the company is forced to optimize them to keep the hosting bill low — a positive feedback loop in action. SPA applications cause the opposite — companies move everything to client-side in order to reduce Amazon bills, and don't care if those client-side scripts are poorly optimized and contribute to global warming by causing hundreds of thousands of machines to spin up their CPU fans.

Clearly, there should be a heavy tax on single page apps. They are

1) addictive — by making more Javascript devs, who in turn write more Javascript websites

2) act as luxury goods ("Look guys — we have created a new version of our website. It looks so cooool (but loads a bit slow)!")

3) have ugly externalities, completely ignored by most of their creators


> companies deploy them to avoid separation of responsibilities and turn every employee into easily replaceable "full-stack developer"

SPAs are much, much harder to develop when more teams are working on them. So your first sentence makes little sense.

> companies move everything to client-side in order to reduce Amazon bills

This is never the reason why it happens. Seriously? The costs are not saved, just moved around. SPAs are developed because they can provide a much better UX. As a side benefit, server-side development becomes simpler by providing some REST or GraphQL API. You don't want to be in a place where tens of thousands of lines are generated on the backend side by backend developers.

> client-side scripts are poorly optimized and contribute to global warming by causing hundreds of thousands of machines to spin up their CPU fans.

I appreciate your sense of humor :D


Yes, but equally, replying to these comments with "but page loads are slow" is not helpful.


CSS doesn’t require javascript, and allows for nearly all the features of a SPA.


Like plotting an equation to canvas? Editing video? Handling drag and drop events?

I've seen that blog post where a guy demonstrated that many UI elements can be done with CSS, I like that. I try to do that myself as much as I can, but let's not pretend that CSS is a programming language and it can replace ANY JavaScript.


It is debatable whether CSS is or isn’t Turing complete, but as long as the task isn’t totally automated without user input, CSS could replace most JavaScript.
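
The classic demonstration is the checkbox hack: interactive show/hide with zero script (a sketch):

    <!-- A disclosure widget with no JavaScript at all. -->
    <input type="checkbox" id="menu-toggle" hidden>
    <label for="menu-toggle">Menu</label>
    <nav id="menu">...</nav>

    <style>
      #menu { display: none; }
      #menu-toggle:checked ~ #menu { display: block; }
    </style>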


> Furthermore, you don't lose state, which makes things much more simple.

Nah, it makes it more complicated. You still have to handle reloads and back/forward, but now it's on you.


It is not faster, ever. It is always snappier to render everything server side and avoid as much JS as possible. If you can consolidate the whole page down to a half dozen total HTTP requests or less that's ideal.

The narrow case where you make an XHR call to just reload a small slug of data instead of the whole page effectively does not exist in the real world. I mean I'm sure a few developers have implemented that exact pattern a few times, I think I remember even doing it myself probably? But it's not something that actually happens.

What's actually happening instead is people are just adding more and more intricate tracking and analytics to all these SPAs. The XHR call to reload a blurb of text kicks off three other XHR calls to register the event with the various analytics/advertising tracking partners. Oh and we're adding a new ad partner next week so make sure you refactor all the AJAX to register event handlers for all these new interactions. And can we add mouse position tracking too?

I'm starting to see some pages break into the tens of megabytes of JS scattered across hundreds of files, nobody is paying any attention to what's fast.

I'm not sure how you fix this, I've largely given up on the web. It's just a thoroughly terrible experience from top to bottom, and effectively unusable if you're a layperson.


All major browsers cache the relevant assets (images, CSS, JS) in-memory between navigations in the same frame/tab and origin (at least).


Try a blog not built on client side tech: https://www.lukew.com - the page loads are insanely fast, just click around.


In my experience, for whatever reason, GitHub's turbolinks often take longer than opening the link in a new tab.


Github is simply a very slow website.


> Furthermore, you don't lose state

I'd count breaking my back button or otherwise mucking with my browser history as "los[ing] state". Maybe we have different definitions of "you" in mind.


Do you have an example of a site which does it well?


You can try PhotoStructure (disclaimer, I'm the author). It's an image gallery built with Vue and vue-router (for your images). If you look at the vue-router documentation, they've got some examples to follow.

The back/forward buttons, command-click, and ctrl-shift-t work on all modern desktop and mobile browsers.

I started with traditional page loads, but even with minimal CSS and no JS, screen flashing between pages was prominent (especially on mobile), and visual transitions between pages (like swipe navigation, where both the prior and next page content is concurrently visible) are difficult.


There are many good ones, but after a quick bookmark search, this shop is done really well, imho: https://www.shopflamingo.com/


This is what makes it worse.

If you scroll down on the page, and click on the link, it takes you to a new page. Works great.

If you use browser back button, it takes you to the previous page, but scroll position is lost.


It requires special attention to do it right. It should be more about what works best for your product.

https://reddit.premii.com - Uses client-side navigation. Try it on mobile and then desktop. It's not perfect, but it works really well for what I want. It's hosted as a static site. I make requests to reddit directly to get the content.


The blog of Svelte, the framework/compiler that was featured on HN some time ago, works really well in my opinion: https://svelte.dev/blog


I don't know if their javascript is to blame, but I got this when I tried to use the back button: https://imgur.com/a/YcRtDFj


>Because it's faster

Did you read the article? The entire point was that “its faster” isn’t actually true.


Do modern browsers really re-render everything without optimizations?


A media gallery website is a good example of a use-case for client-side routing.

I worked on a porn site that was basically an endless-scroll video gallery. Clicking a thumbnail opened the video in a modal overlay. All pages on the site were modal on top of the gallery in the background. You could deep link to a page and the gallery would load in behind it.

It worked really well and had great UX.

This generalizes to any website that has some sort of overall state between page transitions, like soundcloud.com playing a track as you click around.


> I worked on a porn site that was basically an endless-scroll video gallery.

So, if you're a horny teenager, you scroll down for a huge amount of time to find "the video" that will get you off... and you hear your mom coming up the stairs, Control+w (close tab), and when she goes downstairs again, you press Control+shift+t (reopen last closed tab), you're back at the beginning, and have to search for that video again? That sucks.

Endless scrolling sucks. You go down and down and down, and something breaks (eg. bad wifi), and you lose your position, since refresh takes you back to top.


I hate infinite scroll with a passion. I've often been quite a way down someone's interesting Twitter feed and lost my place somehow, then just given up and gone somewhere else in frustration rather than trying to scroll down a few hundred tweets, waiting each however-many tweets for the next batch to load, just to get back to where I was.


Also the slowdown. It didn't matter whether I had 8, 12 or (currently) 32 GB of RAM; a couple minutes of scrolling down a Twitter or Facebook feed and the whole page slows down so noticeably that I simply give up.

Also: something breaks, you press F5, and now the feed is gone, or is in a completely different place than it was before refresh.

Infinite scroll should be labeled a dark pattern. Its only benefit is to the companies exploiting intermittent rewards; for users, it's just bad ergonomics and a bad experience.


Funny. I agree with you that infinite scroll brings a bunch of UX issues.

But a dark pattern? Definitely not. I have had multiple projects this year where the feedback from UX workshops has overwhelmingly been to use infinite scroll. This is feedback from real users, customers, and clients.

We need to be careful to align the website UX to the correct target users. Are you building something for a very technical market or power users, such as software engineers? Sure, ensure you don't interfere with the experience.

However, if you’re targeting business or social users, you need to base your decisions on their priorities. This means the optimal path for their primary use cases. This means optimizing for the 98% of the time the user just scrolls down the feed, not the 2% of the time they scroll a bit and refresh.


> you're back at the beginning, and have to search for that video again? That sucks.

and yet, that's the same behavior as the social media sites; you will spend more time there (= more ads shown to you) because you are searching for that damn video again


>you're back at the beginning, and have to search for that video again? That sucks.

1. browsers have some sort of cache[1] that allows them to restore closed/previously visited pages without doing a page reload. granted, it's not very reliable, but it'd probably work most of the time as long as you're not memory constrained or visiting too many pages in-between.

2. if the infinite load mechanism also updates the url (via the history api), then this wouldn't be an issue.

[1] https://developer.mozilla.org/en-US/docs/Mozilla/Firefox/Rel...


Discourse has the only implementation of infinite scroll that I've ever seen consistently preserve state over a reload. Either everyone but them is incompetent, or it's much more difficult to do well than it seems.

(I still don't particularly like Discourse's implementation of infinite scroll because their custom scroll thing is awful to use.)


Media galleries (carousels) have been client-side things at least since the original XHR implementation in IE5. Maybe well before that, if changing the src of an img let the browser load the new one from the server. I can't remember.

But sites were small and even loading all the page again was not that bad.

By the way, Rails Turbolinks [1] are a way to get the same result, with the server rendering only the HTML body and the JS code in the browser swapping it with the current one.

[1] https://github.com/turbolinks/turbolinks


IIRC, Turbolinks work by loading pages in the background in response to a mouseover event - by the time a click has registered, the remote content has already been downloaded and just needs to be injected into the page. The speed-up comes from anticipating clicks, not from JavaScript tomfoolery.


I checked the README and it never mentions mouseover

> Turbolinks intercepts all clicks on <a href> links to the same domain. When you click an eligible link, Turbolinks prevents the browser from following it. Instead, Turbolinks changes the browser’s URL using the History API, requests the new page using XMLHttpRequest, and then renders the HTML response.

I don't have a Turbolinks application to check but I found this https://github.com/turbolinks/turbolinks/issues/313

and this

https://www.mskog.com/posts/instant-page-loads-with-turbolin...

The behavior you describe is possible but it's not the default and requires adding other libraries.


I think you're thinking of https://instant.page/ , which is pretty much as simple as "when link is moused over, tell browser to load the page in the background".
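
The whole trick fits in a handful of lines (a rough sketch of the idea, not instant.page's actual code):

    // On hover, hint the browser to fetch the target page into its cache,
    // so the eventual click is served (mostly) locally.
    document.addEventListener('mouseover', function (event) {
      var link = event.target.closest && event.target.closest('a[href]');
      if (!link || link.origin !== location.origin || link.dataset.prefetched) return;
      var hint = document.createElement('link');
      hint.rel = 'prefetch';
      hint.href = link.href;
      document.head.appendChild(hint);
      link.dataset.prefetched = 'true';
    });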


What happened to http prefetch or server push? Does anyone use those things?


> And if they do, why do they always only do a half-assed job at it?

Oh, that's the easy one. It's half-assed because it's hard as hell to do a good job replacing the browser navigation. It shouldn't be surprising: browsers are huge pieces of software made basically for displaying content and navigation, and it's not reasonable to expect every web page to competently replace half of that job.


Just use the history state pushing API and it's not that hard. Put state information in the url and load the right content on refresh.


Scroll restoration is very difficult to reproduce perfectly. Particularly when content changes between navigating from one page to another and then navigating back, or when the user closes and re-opens the tab/browser.
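
One way to approach it (a sketch; renderRoute is a stand-in for the app's own routing code, and dynamic content can still invalidate the saved offset):

    // Take over scroll restoration and stash positions in history state.
    history.scrollRestoration = 'manual';

    function navigate(url) {
      history.replaceState({ scrollY: window.scrollY }, '', location.href);
      history.pushState({ scrollY: 0 }, '', url);
      renderRoute(url);
    }

    window.addEventListener('popstate', function (event) {
      renderRoute(location.href);
      // Only correct once the content above the saved offset has rendered.
      window.scrollTo(0, (event.state && event.state.scrollY) || 0);
    });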


I have no idea. I do a lot of front end work in React, and the assumption that an SPA is a better experience for people because you don't have to do a page reload to see a new page is really baffling to me. It's widespread too - across industries and disciplines and age ranges, as if the people suggesting these things have never used SPAs.


Because page reloads are jarring and discontinuous experiences. They run counter to a good user experience. That's not to say that every SPA is a good user experience, but just that a page reload is not part of the recipe for a good user experience.


HN uses the "old fashioned" approach of rendering everything server side and every link forces a page reload and I wouldn't describe the experience as "jarring and discontinuous".

I'd rather have a fast full page reload than looking at a spinner while complex client side stuff does its stuff.

[NB I really like React, and good SPAs are very, very good - but a lot aren't].


On that note, it means HN supports for free one feature too-often forgotten about in single-page apps: middle-click, or, right-click-open-in-new-tab.


Well, the up/downvote transition is client side, but otherwise, yes.


AFAIK it works without js as well :-)


I occasionally suggest to people who complain on HN about the bad old days of table based layouts to do a "View Source" ;-)


The "bad old days of table based layouts" weren't really that bad, as evidenced by every generation of web developers reinventing tables in weird new ways. From "semantic" div soups through flexbox to CSS grid, it seems to me that most of layouting work is just building tables without using the <table> tag ;).


There were real arguments against layout tables back in the day (though the situation may have changed): https://stackoverflow.com/questions/83073/why-not-use-tables...

One thing for sure: layout tables are undeniably powerful, that's why people want to recreate them but without the penalties that come with real <table>.


Yeah, they were. Though arguably not in the link you posted - those are mostly clichés, as correctly pointed out by the original poster. Especially the "separation of content from layout" and CSS Zen Garden were obviously[0] nonsense, and you can observe how SPAs of today go against both.

Tables had performance problems when they got large and content got potentially dynamic. That I learned only many, many years later - I never did sites big enough to run into such problems then.

The accepted answer in this post is cringeworthy. So much rationalization these days, makes you wonder what we're rationalizing today.

--

[0] - I admit I bought into CSS Zen Garden for a while; it took me some time and experience to realize that, really, no one does that in practice, and it requires ridiculous amounts of either forethought or after-the-fact hacks to do it.


There are definitely parts that are awkward, though. I can't see the context of your comment in my reply, for example, as it's on a completely different page. HN has never really been a great UI: it used to be a massive set of nested tables that didn't render properly on mobile, it has tiny fonts and hard-to-click buttons, and it makes downvoted comments less accessible by lowering their contrast.

It's good enough to read the content and the content is the vital part, but I wouldn't point to HN for a good user experience (beyond the content and lack of dark patterns).


Yet even this crucible of anti-design is more usable than many a designer-blessed SPA. Such is the power of server-side rendering.


It's better than many server side rendered pages, the things that work have nothing to do with where the rendering happens.


I feel the HN experience is awesome. Everything is accessible with one or two clicks. I never wait for something to load, never curse at it because of some obscure behavior. It's simple and efficient. The content is perfectly served. No frills. Even on mobile I don't really find the buttons that hard to click, even if they are tiny. Maybe we don't use it the same way.


HN is not what I would consider an “app” though. The content is mostly static, with a few interactive bits (upvote/downvote etc.) sprinkled in.


Page reloads are not jarring, they are expected, well understood, and often add that subtle hint that something has indeed changed. People want reliability and familiarity over speed, and speed from the lack of heavy JS and wonky click handling is a bonus.

Compare the site you're on right now (HN) to Reddit's new SPA frontend. Which one is faster to browse?


A page reload is part of the expected experience on a traditional site when you're navigating to a new page. I expect it to look like I'm going to a new screen, not simply replacing the content on the existing screen. Not every website is an app or should act like one.


I believe the only upside is that site framing (headers, navbars, side menus, etc etc) doesn't flicker or jump around reflowing as you navigate across pages. Which is I think why almost every "webapp" out there does this - to provide visually continuous experience, even if this client-side intra-screen navigation is slower.

Surely, on a fast-enough connections this flicker is essentially invisible, but if your connection is far from perfect (crappy hotel WiFi or poor cellular reception), it is certainly well noticeable.

Also, it's easier to persist some state across the navigations. Like making it trivial for those (annoying) on-page support chat overlays not losing message history.

This rationale only applies to web "apps", not documents, of course.


I much prefer getting normal pages to behave more like an SPA than an SPA trying to recreate the web browser experience. Most importantly, it should automatically degrade back to normal web behavior if the transitions aren't supported or the JS doesn't load.

like https://github.com/turbolinks/turbolinks


For a company I worked for, we tested client side navigation and non-client (traditional) navigation for the admin interface. And then we asked the admins who use the site in question: all of them loved the client side navigation.

So you see, I suspect most users are not like the HN crowd and don't really care about command-click or the back button.


I've got a customer with a complex UI where a lot of things happen. Basically a desktop application in a browser. We built a SPA for that.

Then there are the AWS and Google consoles where nearly every click loads a new page and I don't complain, because it's ok. And Amazon, the shopping site. Every click into a product loads a new page. They seem to be doing pretty well.

I would build SPAs where it's difficult to give a different URL to what's on screen after every click. Server side rendering in every other case.


Not every project is the same. I suspect there are many applications where an SPA is a much better experience than a traditional website. In my opinion, a documentation site is not one of them.


It's also really not that hard to make cmd-click and the back button work properly with client-side nav.


Can you support iOS's "peek" menu as well?


Yes, I tried it just now on a random website I built in Vue. Any sane implementation of client-side navigation uses the History API[1]. It is indistinguishable from "real" navigation in the browser UI, apart from how fast it is.

If, on the other hand, you encounter code like the abomination below (which breaks cmd+click, peek, and forward/back), you should not (imo) conflate it with actual client-side navigation. I suspect that's what some comments here are referring to.

    <a href="#" onclick="loadContentIntoPane('contact')">Contact Us</a>
1. https://developer.mozilla.org/en-US/docs/Web/API/History_API


Is there anything special needed to support that? I’m pretty sure you just need an <a> tag with an href. With any SPA framework/library I’m aware of, you have to go out of your way to make it not work.


But you can support both with client side navigation


I certainly can. It was just not necessary because the people who are using the site didn't care.


I think it's possible but it's hard to execute well.

I tried 4.5 years ago with https://vivavlaanderen.radio2.be/ - disclaimer: the experience isn't great on mobile (design issue, not tech) and the JS/HTML is massive (it was my first JS project ever, so I messed a bit with Webpack etc).

One of the tricks I used is partial rendering. If you click an artist page (the square/rectangular people photos with a name) and have JS enabled it'll first only render the header, then add a few body items, then the rest of the body items. Since we used a horribly inefficient handmade Markdown to JS thing with a renderer in old naive React it took way too long to render them all at the same time, it'd easily lock up the browser for two seconds on a large page.

Another ridiculous thing was to preload enough data to render the top part of every single artist page while you're on the homepage. A complete waste of data, but otherwise navigation would need a request before being able to render something useful and that defeated the point.

I did pay a lot of attention to making it feel like real navigation though. Almost any meaningful interaction modifies the URL, and the site is practically fully functional without JS. Navigation and Cmd-clicking should all work perfectly, including scroll position handling. SEO worked really well too.

So for a classic website format, to make client side rendering work, you need:

- a URL for every view change, with regular old <a> elements that have those URLs as href.

- server rendering that actually handles all of those URLs

- something that restores scroll positions on navigation

- phased/batched rendering on click if your initial new view doesn't render in say 100ms (basically faking progressive rendering)

The development experience was a bit frustrating back in the day, and I don't think it paid off. Which is why for the next project (https://www.klara.be) we decided to go back to server rendering, but using React on the server. Some parts of the page (like the coloured box top right on desktop) are rendered as a skeleton on the server, and then React on the client reuses the same exact code to make these parts interactive again. Partial universal React basically, and it worked very well. I think klara.be feels like a nice and snappy site and it was way easier to develop for.


> with regular old <a> elements that have those URLs as href.

This is the most common problem I have. I middle-click and nothing happens (or it loads in the same page)

That being said, this is partially an API issue, since the website needs to guess what a "regular click" is. It would be nice if there was an "onnavigate" event which would trigger for regular clicks (or other ways to follow a link in the same page) but not for things such as "open in new tab" shortcuts.
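
Until something like that exists, the usual guess looks roughly like this (a sketch of the common heuristic, not any particular router's code; `link` is an <a> element and clientSideNavigate is a placeholder):

    link.addEventListener('click', function (event) {
      // Leave anything that isn't a plain left-click to the browser, so middle-click,
      // cmd/ctrl+click, shift+click and "open in new tab" keep their native behavior
      if (event.defaultPrevented) return;
      if (event.button !== 0) return;
      if (event.metaKey || event.ctrlKey || event.shiftKey || event.altKey) return;
      if (link.target && link.target !== '_self') return;
      event.preventDefault();
      clientSideNavigate(link.href);  // placeholder for the in-page navigation
    });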


It's justified if you run a web app that needs to maintain complex state between clicks, such as a text/graphical editor.


I don't think anyone disputes that. Maintaining a whole editor state in server side sessions or re-rendering everything from localStorage would be rather silly indeed. This is about changing pages, such as Github annoyingly does when browsing a repository.


Sadly I think this is true for a lot of sites. However my experience has been that many (particularly in the last 3-5 years) support command-click, back button navigation, and scroll position restoration.

More often than not, I've seen that tracking scripts break this more than anything else. Particularly command+click.

Still - it'd be great if there was an API to preserve UI state when moving into a new entry in browser history. I don't want to lose the scroll position of a sidebar if I'm drilling down into a large list of data.

Being able to add hints through markup for when to preserve UI state could go a long way towards making page-refresh based form and navigation behavior competitive again.
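
Some of that can be approximated today by stashing UI state in the current history entry before pushing a new one. A sketch, with made-up element and function names:

    // Before navigating, stash the sidebar's scroll offset in the current history entry
    function navigateTo(url) {
      var sidebar = document.querySelector('#sidebar');  // assumed element
      history.replaceState(
        Object.assign({}, history.state, { sidebarScroll: sidebar.scrollTop }),
        '',
        location.href
      );
      history.pushState({}, '', url);
      renderView(url);                                   // placeholder
    }

    // When the user comes back, put the sidebar where it was
    window.addEventListener('popstate', function (event) {
      renderView(location.pathname);
      var sidebar = document.querySelector('#sidebar');
      if (sidebar && event.state && event.state.sidebarScroll != null) {
        sidebar.scrollTop = event.state.sidebarScroll;
      }
    });

It's a workaround rather than the markup-level hints you're describing, and it only helps if you're already doing client-side navigation.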



It also breaks a lot of disability features, like text to speech (in many instances) or navigation using keyboard keys instead of a cursor.

On the plus side, not having to reload a new page for each click can have some interesting benefits. These systems remind me of flash content, except they are perhaps less terrifying and intrusive.


I see lots of responses to this article asking "why client-side navigation?". I can share my own experience with building an app a few months ago, and how/why I switched to a client-side single-page app.

The app is this: https://osmlab.github.io/name-suggestion-index/index.html

It is a worldwide list of brands that have been seeded with OpenStreetMap data, and which volunteers have linked to Wikidata identifiers. We use this data in OpenStreetMap editors to help people add branded businesses. Pretty cool!

1. We had data in `.json` files (but not too much data) and we wanted to show it to people working on the project so that they could review the brands.

2. I spent a day or two and built a static document generator. It took our data and spit out an `index.html` and few hundred `whatever.html` files. This worked really well. As the article says, "browsers are pretty good at loading pages". A side benefit - Google is really good at indexing content like this.

3. Then users made the obvious request: "I want to filter the data. Let me type a search string and only show matching brands. Or brands that appear in a certain country".

4. OK, so... if your data is spread out over a few hundred files, the short answer is: you can't do this.

5. But the data is _really_ only a few megabytes of `.json`. I spent a few days to learn React and switch to a single-page client side app so that we can filter across all of it. The new version uses hooks to fetch the few `.json` files that it needs, `react-router` to handle navigation between the index and the category pages. It works pretty ok! Most people would stop here.

6. The first version with client-side filtering performed well enough, but not great. The reason was that, as users type, these things happen: the filters get applied, the React components get a new list of brands passed in as props, and React re-renders these new lists to the virtual DOM, and eventually, slowly, the real DOM.

7. It's really easy to build React code like this, and many people do. But it is better to avoid DOM changes in the first place. I changed the components so that the lists stay the same, but filtered-out items just get a hidden class (`display: none`) instead of being added to and removed from the DOM, and performance is much better now.
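
Roughly the difference, as a simplified sketch (component and field names here are made up, not the actual name-suggestion-index code):

    import React from 'react';

    // Instead of brands.filter(matchesFilter).map(...), which mounts and unmounts rows
    // on every keystroke, render every row once and only toggle its visibility:
    function BrandList({ brands, matchesFilter }) {
      return (
        <ul>
          {brands.map(b => (
            <li key={b.id} className={matchesFilter(b) ? '' : 'hidden'}>
              {b.name}
            </li>
          ))}
        </ul>
      );
    }

    // CSS: .hidden { display: none; }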

Anyway hope this is helpful to someone!


I believe that a multi page app/website would have the data in a database rather than distributed across html files. The server would then populate the html files before serving them up to clients.

It sounds like you started by trying to build a static website and then decided you wanted something more dynamic... so you shifted to React, where you do the dynamic data manipulation on the client as opposed to a server. Not a like-for-like comparison.


The problem isn't client side apps. YouTube is a single page app and it works very well for me. No issues with ctrl clicking or anything like that. The real issue is poor SPA implementations.


Curious, how would you implement Facebook's infinite scroll with only static pages?


Don't ask me how to solve a problem that doesn't need to be solved in the first place! :p #deletefacebook



Dude! #scrollhijacking is the worst!


Slow? I'm quite sure the whole point of client side apps is for them to be substantially faster - and most are.


I have yet to encounter a site where “client side navigation” is faster if there’s any real content.

Page loading is dominated by two things:

* The network. Browsers are written knowing that networking is terrible, so they aggressively optimize their use of any data, including starting layout, rendering, and even JS before the page has completed loading. That's not possible for client-side JS that uses XML or JSON data, as neither will parse until the data is complete (this is part of why XHTML was so appallingly slow and fragile).

* JS execution - people love shoehorning JS into early page rendering. If you care about your page's performance, the first thing to do after the network is to push all JS out of the path to initial render. By design, "client side" navigation blocks all content rendering on executing JS to build out the HTML/DOM that a browser is much faster at handling directly.

As a side benefit of making your site faster by not manually reimplementing things the browser already does for you (and does better), you also get better back/forward, correct interaction with the UI, correct behavior with any accessibility features that are operating, correct scrolling behavior, correct key handlers, ...

The benefit of client side rendering is you get to say you spent a lot of time doing something that already worked.


Did you read the article?


I read the article.

Perhaps I'm misunderstanding, but did they do this test on a 128 kilobit cellular Internet connection?


I made the video in the article using Chrome's "low-end mobile" throttling preset, which simulates a ~300 kbit connection IIRC. But I saw very similar behavior on my actual phone with an actual 128 kilobit connection in Canada.


Thanks for the clarification. Do the conclusions hold up when testing with a broadband network connection?


As I mention in the article, I'm not really able to tell the difference between old MDN and new MDN on the fast network connection I usually use. They both load pretty much instantly for me.


I sometimes wonder if website authors actually test their pages on slow connections before declaring that they’ve improved the experience for them…


Do all MDN users have low-latency, high-bandwidth connections?


I’m a bit lost. How would using a broadband connection affect the relative speed between the two versions ?


For one, latency has a proportionally higher impact on "time to render" at lower connection speeds.


I’m guessing no?


This may get downvoted to oblivion due to the HN bias against js.

The correct answer is it all depends. Certain things are faster to do in JS. Certain things are faster as a page load. One has to profile and see what makes sense.

There is a reason Atlassian is dog slow and Trello runs circles around it in terms of UI performance. The immediate <100ms navigation between views is probably why Atlassian ended up buying Trello. It would have eaten their lunch.

Once you have a substantial amount of JS and you're building an app-like site, which a lot of sites are, it's usually faster not to make the browser parse and compile it all again. Just stay in JS land as a single-page application and communicate with the server purely via REST.


> This may get downvoted to oblivion due to the HN bias against js.

Now, come on. Very few people in here would say that js had no place in the web. A lot of people are against js when alternatives exist. Trello is an application, so js makes sense there. A blog article that doesn't even display when js does not load, that's where people have a problem.


It seems a large chunk of people would rather trello was built entirely out of html forms so you press a <button> to make the card move to the left and then the page refreshes with the card moved.


This is an uncharitable interpretation: many people would like it if things which were billed as being faster were in fact consistently faster and degraded well. That doesn't mean that Trello should trigger navigation every time you move a card but it does mean that anyone taking over a core browser function is taking on a higher level of responsibility to do performance, accessibility, and compatibility testing.

A similar issue comes up with Google AMP: it's billed as a web performance move but it's regularly slower and the failure mode is that you don't see anything at all. That doesn't mean that the problem is completely intractable but the act of taking over a core function put the onus on them to do a less shoddy job.


This x 1,000,000. It really depends on what the website is trying to do. If the site is basically just text and images, then yeah. You have to fetch all that every time you go to a new page either way, so why try to re-invent the wheel? The more your website is an application, the more it usually makes sense to have a SPA, because you want everything on the website to react instantly to interaction. In that case, you're building a classic client-server application, and you want the interface code running on the client, not the server. Otherwise you introduce latency due to the communication between the two.

So maybe this is kind of obvious, but the whole purpose of client-side scripting is to make the site more interactive, right? If your site is completely static, you shouldn't be using client-side scripting. If your site is very dynamic and interactive, you should be relying a lot on client-side scripting. Imagine writing a video game for the web where every key stroke was sent to a web server and every video frame was retrieved from the server. That would be way too slow due to the amount of interactivity involved and the latency introduced by the communication with the server between every frame.


It's two years later, and I'm still waiting for those improvements in Jira. Until then Jira is a great case study of how not to do things in JavaScript, it's slow, it's inconsistent and I doubt the accessibility is very good.

(It's also a great case study of how it doesn't matter how good or bad your app is, the success of your business doesn't depend on that)


We're definitely very slow on load at Atlassian, but our accessibility is actually really good. One benefit of a good component library (in our case React) is it allows mapping of accessibility labels fairly easily. Jira should be fully navigable with just a keyboard and headphones.


> Just stay in js land as a single page application and communicate with server purely in REST.

This is a sensible conclusion. Unfortunately, "js land" tends to grow as application grows. And nobody actually unloads their Javascript when it is no longer necessary (can you even do that?)

Server-side rendering means that you shouldn't leak memory — otherwise your app will literally stop working.


(Disclosure: I'm Carter's desk-neighbor at Triplebyte.)

I think there's actually a middle ground where you can utilize some of the more modern techniques to actually do better than the pure static pages approach, while still using normal browser-based page navigation.

As a personal challenge, I wanted to see what could be done about performance for a recently-launched side project: the Ultimate Electronics Book [1], which is a free online electronics textbook that has interactive circuit simulations built in. Here's what I ended up with in the spirit of progressive enhancement:

1) Static site generated by Jekyll (with a few custom plugins) and hosted on S3+CloudFront with appropriate caching headers

2) No JS blocking initial page load

3) No custom CSS fonts to download

4) Prefetch and dns-prefetch headers

5) Tell browser to preload next page on link mouseover (instant.page)

6) Lazy-loading of schematic images (lozad / IntersectionObserver) when they're nearly within scroll range (sketched just after this list)

7) Client-side instant search index (awesomplete + custom code)

8) Tooltips with section descriptions on internal navigation links (balloon.css)
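
For what it's worth, the lazy-loading piece from item 6 boils down to something like this (a sketch of the IntersectionObserver pattern, not lozad's actual code; the data-src convention is an assumption):

    // Load each schematic image only when it gets close to the viewport
    const lazyObserver = new IntersectionObserver(function (entries, observer) {
      entries.forEach(function (entry) {
        if (!entry.isIntersecting) return;
        const img = entry.target;
        img.src = img.dataset.src;   // swap the placeholder for the real image
        observer.unobserve(img);     // each image only needs to load once
      });
    }, { rootMargin: '500px' });     // start loading ~500px before it scrolls into view

    document.querySelectorAll('img[data-src]').forEach(function (img) {
      lazyObserver.observe(img);
    });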

There are heavier components to this too:

1) Equation rendering (MathJax) is heavy and slow, but it starts rendering the equations after the initial page load and prioritizes equations within the first few thousand vertical pixels. (Sadly I need the full power of MathJax; the faster KaTeX engine can't handle all of my equations.)

2) The schematic editor / client-side circuit simulation engine is an entirely separate SPA. (But by modern standards it's probably a smaller payload than many sites today use for serving totally static content with a few forms.)

The result is a site that loads pretty darn fast -- maybe faster than a static page due to preloading and lazy-loading -- but packs in a lot of functionality appropriate to the problem. Any techniques I'm missing?

[1] https://ultimateelectronicsbook.com/


Loads about as fast as I'd expect from a static site.

Hover is a bit messy. When I move my mouse down the table of contents, the tooltip obscures the next chapter title, and sometimes grabs the click as well.

Back button handling seems to be buggy. If I click from the table of contents to a chapter, then hit back, then click to another chapter, then hit back again and do that a few times, the history becomes filled with many instances of table of contents.

Also something weird happens when I click a chapter link. Like, the whole table of contents scrolls for an instant and then I see the chapter.

Edit: if I disable JS, all these problems go away and the site feels just as fast. So good job on that :-)


Thanks for the feedback. Tooltip hover on TOC is annoying -- need to think about that... The scrolling on TOC click was intentional as I wanted to highlight the section you clicked and center it if you do go back to TOC to "remember your place" in the book, but maybe I overdid it here.


MathJax can render server-side if you want to speed that up: https://github.com/pkra/mathjax-node-page


For the love of god, please, never, ever do lazy image loading. As a user, I expect the page to be 100% complete when the progress bar in my browser disappears.


Note that some browsers are planning to start doing lazy image loading themselves. See https://groups.google.com/a/chromium.org/forum/#!msg/blink-d...


As another lazy loading hater, I think that's a good thing. If a website wants to use lazy loading, it should be handled by the browser, rather than using a custom JavaScript code that the webpage requests. There being one universal method means that there is only one setting that I need to disable to avoid lazy loading. Currently I use a userscript based on heuristics to load all lazy-loading images, but it often doesn't work, as websites use a wide variety of methods to implement it.

Also, incorporating lazy loading into the browser would make NoScript viable in far more websites. Most of the time the only reason why I consider enabling JavaScript for a website is to view lazy-loading images, everything else works good enough (or better) without JavaScript. With lazy loading being done by the browser, almost no website I come across would have this problem, unless they choose to intentionally break without Javascript. Of course, this wouldn't be a problem in the first place without lazy loading, but alas, here we are, with most websites using it.


The NoScript point is a good one.

I don't necessarily expect browsers that implement it to have a user-visible way to disable lazy loading, though, so you may be out of luck there. You _might_ be able to create userscripts that force non-lazy loading on all potentially-lazy elements by using the opt-out frob browsers would add for websites.


I would hope that Firefox would have an about:config setting to disable it, but with the direction they are going, removing customizability left and right, who knows. Normally I would have also expected addons to disable this browser-side lazy loading, but since Firefox 57 they are more like glorified userscripts, so they probably wouldn't be able to.

On Chromium side, maybe Vivaldi and the like could provide the option to disable it, if it is not too difficult of a change on Chromium. But for sure, I would be very surprised if Chrome offered that option, so not much of an improvement for Chrome users, or maybe even a deterioration if sites start to adopt it even more, now that it is supported natively.


Firefox does not have plans to implement lazy loading at this time, fwiw. If we do, yes, I expect we will have a way to disable it.


> Currently I use a userscript based on heuristics to load all lazy-loading images, but it often doesn't work, as websites use a wide variety of methods to implement it.

Would you mind sharing this script? I use NoScript heavily and something would certainly be better than nothing in a lot of cases.


Mine was based on the answer here[1], now I see that the script in the answer was edited after I started using it. I should try the new version to see if it works better.

My experience with the previous version was that while it sometimes worked, on some websites it broke more than it fixed; and when I modified the script to account for that issue, it broke something somewhere else, as websites use very different methods to display images that I couldn't imagine (hence I welcome a standard way of doing it). So I ended up using the script on a whitelist basis instead. Maybe the new version works more properly, so that I can enable it globally again.

[1] https://superuser.com/questions/938345/load-all-images-even-...


> Any techniques I'm missing?

Not sure what would make it faster, but thought I'd report that with Safari 10.1.2 the site is currently broken for me. The MathJax equations never load, and I get text mixed with spinning circles.

Javascript Console contains: [Error] SyntaxError: Cannot declare a let variable twice: 'e'. (anonymous function) (compiled_postbody-544dfbba31d691e567064ba96e9eec29f4257fe5909a4ddb5218cd36d823e215.js:1).


Thanks for the bug report. I’ll try to reproduce.

Edit: I'm unable to reproduce this with Safari 12.1.1. Did some searching and this is a known bug in Safari 10 which was resolved in 2017: https://github.com/webpack-contrib/uglifyjs-webpack-plugin/i... / https://bugs.webkit.org/show_bug.cgi?id=171041 -- I will try to workaround.


I use Jekyll for my personal site and it renders MathJax, and even MusicXML with JS, amongst other small things. It's great, all static.

https://github.com/nixpulvis/nixpulvis.github.io


Interesting. Am I missing something? I went to https://nixpulvis.com/math/01-fibonacci and it appears to be effectively the same as what I have for MathJax -- pushing the raw LaTeX commands into a specially-labeled HTML element, and then loading the MathJax JS from cdnjs.cloudflare.com in order to process and render them client-side.


Yep, that part is pretty easy, and not too bad in terms of fallback while loading. I love how easy it is most of the time to write nice looking math.


He says the reason people code client-side navigations is to load pages more quickly. That's not the reason. Client-side navigation evolved from single page apps, where we use javascript to create dynamic content on the fly, for example a chat application. Now that you're a stateful single page app, we need to rebuild page navigation if our single-page-app has more than one "page". I understand that projects like Gatsby have used this technique for performance, but I see no reason that today's performance optimizations will be valid tomorrow. Page load performance is not the need driving this pattern generally.


> He says the reason people code client-side navigations is to load pages more quickly. That's not the reason.

That was the reason given by MDN.


He claimed in the article that a MDN developer said that was the reason.


> He says the reason people code client-side navigations is to load pages more quickly.

No, he says one of the developers of a specific website (MDN) told him that is why they did it in that particular case.


Try visiting indiehackers.com, then come back to HN for a comparison. I would love to love that site, but I value my time and sanity more. Waiting 10 seconds for every click is just torture.

Ajaxy navigation has its place, I guess, but your website has to be fast for that too. The worst offenders I have seen must be the advertising managers like Google AdWords and Facebook Ads. Each request takes a horrible amount of time to load, which makes the interface completely unintuitive. Is the browser loading a page or not? Is this popup from the first time I clicked the button or from the second time? Where did this other popup come from, and why now? It's like reading HTML sent through UDP.


I stopped using Indie Hackers because of its terrible UX.


This is definitely in the same vein as http://boringtechnology.club/ After several years of drinking the client-side kool-aid, I've come to realize that it almost never has a positive ROI. On top of the false premise that it "makes pages faster", it doubles the cost of the entire pipeline/stack, it doubles the amount of documentation that needs to be read/written, and it doubles the statefulness of the app, amongst other things. Maybe under the best conditions by the most knowledgeable devs, client side apps could be impressive, but your typical company doesn't have those resources.


There's such an "ego load" associated with this investment too. I find that even when a developer comes to realise that the SPA mess is largely unjustified, they're rarely willing to admit or change anything.


Agreed. It's very much a sunk cost/escalation of commitment fallacy. https://en.m.wikipedia.org/wiki/Escalation_of_commitment

Also bad is that junior engineers see shoddy client side code and internalize it. As an example, one junior engineer asked me how he was supposed to iterate over an array server side when I told him he didn't need to use Vue on a page. Server-side rendering fixtures had been completely forgotten.

I accept my responsibility in causing this mess though. I naively pushed the SPA on myself, my coworkers, and our end users years ago and have to live with that sin daily.


That must be humbling. Props for identifying it and adjusting course tho.


HTML needs to be split into two: A spec focused on dynamic applications using JS and components, and a spec specifically focused on documents, CSS and hypertext.

The fact that in 2019, we still don't have standard browser features that let us create rich text documents in a WYSIWYG interface is ridiculous.

We've ended up with a dozen editors all producing slightly different markup, workarounds like Markdown and AMP, and browser engines that are so complicated even a massive company like Microsoft finally gave up creating its own.

This isn't about technology at this point, it's about standardization. We need to have a simple tag which marks the beginning and end of textual information, with massive restrictions on what tags are used and the JS and CSS in that section, so that a simple, standard and ubiquitous editor can both read and write hypertext 100% the same while still including all the basic functionality of a common word processor, like fonts, colors and sizing.

This isn't a technical challenge on the level of WebGL, web sockets, HTTP/2, media extensions, etc. It's just a matter of specifying a subset of functionality for a specific use-case, standardized for the sake of simplicity and interoperability.


I installed noscript to test this, and yes, the beta rewrite, whose selling point is react, is faster when react isn't allowed to run.


The current generation of less-experienced developers tends to default to building every project in React, even if there's no tangible benefit to accepting this complexity. It's unpopular to express, but the truth is that many junior devs don't know how to do it any other way. I don't blame them for this, because they literally haven't been doing it long enough to have mastered multiple techniques.

Managers go with it because it's still hip and easy to hire for. If things go wrong, well, it was good enough for Facebook.


We can’t blame junior devs for using React and other frameworks to try to build more responsive applications. That’s where the demand is.

I wouldn’t say it’s easier to build in these SPA frameworks nor harder. Attention to detail is something people recognize or learn over time.


I actually blame the old guard for not putting in enough time and energy to mentor juniors. It didn't have to end up like this.

As for whether an SPA is harder or easier, that's not really the relevant dimension. You should not be making technology decisions for your company based on what your new junior devs are comfortable with. They will actually level up faster if they are forced to take what they've learned and apply it to something they weren't working on in their nine-week intensive.

Meanwhile, there's nothing "easy" about React and co when you factor in the layers of abstraction and bikeshedding involved in a typical full-stack deployment today. Compared to when I learned, you suddenly have to also be confident with bash, git, docker, AWS, postgres, webpack, and the whole concept of a virtual DOM before you even start modelling your data or thinking about state transformations. Now go compare that to the original Rails "blog in 15" video and you'll have a hard time claiming that anything is easier. The drop in developer ergonomics over this golden era of JS tooling is stunning in its unneccesary masochism.


> bash, git, docker, AWS, postgres, webpack, and the whole concept of a virtual DOM

That's very far fetched. A React app can just be the default template from CRA + some place to host the generated files (like Netlify if you want something simple). You don't need to know any of the above.


Sure, you can get an example of a React component working on a webpage. I'm talking about the daily lived experience of a working junior developer trying to build something real using typical tools for 2019.


There's nothing wrong with react here. You just have to also use a router.


It reduces server costs. You're able to process data on the client, including mathematical computation via WebAssembly, which in effect allows you to create "web apps" rather than just a news site, blog, e-commerce store, etc. figma.com is an excellent example.

Not to mention that SPA have been used for video game UI. Rather than building that system from the ground up.

If your client's needs are just to display a static site without dynamic updates from the server, then ignoring the SPA approach is very wise.

Client-side navigation handled poorly is just that: handled poorly. It should be handled by the senior developer and not the junior, as an SPA is just another tool for a specific use case.


Honestly, what do you think we did before the Facebook gods deigned to grace us with React? You understand that we didn't need SPAs to build everything, right?

Most of what's gotten better on the web has been about standardization and drastic improvements to EcmaScript. Unfortunately, we've taken all of these new toys and made sites that load slower, on average, on today's phones than typical sites did in 2009. The problem is far worse in developing nations where phones are many generations behind.

Don't get me wrong: I'm glad that React exists and if I was building Trello, it would obviously be an excellent tech choice.

However, there's exactly nothing simpler or faster about a React app vs a properly-cached SSR fronted by Turbolinks and a tiny framework like Stimulus for 98% of applications. I don't know how this became a sacred cow but at some point the Reactive defenders started reminding me of Scientologists.

People were really, really stoked about Java when it came out, too. One codebase that will run without modification on every device, they said.


I use Angular and Svelte professionally. Reactive to me is reactive programming via the rx package that spans many languages. However I only use those frameworks if the job calls for it.

Call me a bleeding-edge buff, but once WebGPU rolls out you can blur the distinction between a web site and a native program. I would not use it on a statically rendered website, but I would use it on a portable version of Photoshop, Blender, AutoCAD, or Minecraft.

Can you do the above today? Inefficiently, yes. But once WebGPU (GPU multiprocessing via compute shaders with shared instructions) hits, in combination with WebAssembly workers (shared-instruction CPU processing with proper floats across X cores), it will be an entirely different story.

What is insane to me is that these UI frameworks run in combination with the DOM on the main thread, and that they are not designed from the ground up to run SSR; it is an afterthought.

Or, in the case of the third world, it can even be managed by a Service Worker that drip-feeds updates and presents them on the site the next time they visit.

1 million and one ways to skin a cat. No way is better, it depends on the client's budget and use case.


Facebook loads pretty slowly for me.


It's highly amusing to find out about this, as someone who disables ECMAScript on every browser I use. Why is it that certain groups of web authors seem to be under the impression that more "tech" = better? Why can't they be satisfied with what works in a simple and easily-accessed manner?

For that matter, why does anyone engage in this constant race to the bottom? If more web authors and developers were to make a stand for decency and refuse to implement these unnecessary hacks, the Web might be a better place.


It's self-fueling phenomenon and reasons are plenty. My random thoughts:

- Everyone who thinks of themselves as doing "frontend" needs to do the hot tech of the day because that's what everyone talks about on the internets (FOMO). If you leave the race, you can't fill your CV with buzzwords and won't find a job in a few years (would you hire a Java developer who still writes Java 4?)

- New people come in and that's the reality they start with and it feels normal. They never wrote a clean piece of HTML+CSS by hand.

- When you're in a company where everyone has their mind set on a given tech because of the reasons above, it's impossible to suggest an alternative to the currently fashionable one. You can't tell dozens of people to commit career suicide. Anyway, it's typically a very small group (or a single dev) who chooses the tech stack and architecture for new projects, and they typically have very limited time to do so. Going with the flow is the "safe" option.

- Hiring: you don't want to use unpopular, unsexy tech, because you won't find candidates; they will be afraid of taking a job in a tech stack that isn't future-proof (COBOL job, anyone?)

- Sometimes you just have to do something complex which requires a ton of JS anyway (any highly interactive widgets/apps) and then having a single well-understood framework is better than gluing together many things in a random way.

- Finally, doing complex stuff in a huge team is tricky. To make stuff not collapse, the developer experience is generally favored over user experience.


I'm going to push back against the idea that proliferation of client navigation is caused by inept developers. I think that component architecture is a fundamental improvement for web development. Breaking the interface into smaller, well defined chunks promotes code reuse and makes the relationship between parts of the interface more clear. Additionally, almost any web application is going to include some functionality that is best handled with client side rendering. I posit that this puts developers in a position where client side rendering eats more and more of their application, creating SPAs where they are not needed.


You can also make a mess of server side nav. Just take a look at PyTorch documentation: 2.5MB of shit loaded in 26s. At least 10 fonts, 300KB of CSS and 455KB of JS. What I hate most about it is that the 'in page search' function doesn't work for many seconds after the page starts showing, and it's the only way to navigate such a long-ass page. Even with such a huge page most functions have no example of usage, it was better with the TurboPascal 7 help in the MS-DOS days.

https://pytorch.org/docs/stable/nn.html

Apparently I am not the only one who noticed this bad behaviour:

https://github.com/pytorch/pytorch/issues/20984


And the rustdoc for Iterator [0] (everything else is great, though).

[0] https://doc.rust-lang.org/std/iter/trait.Iterator.html


As of the next release, it will be a lot better!


It's the "app-like" experience that everyone is chasing.

Loading a new page feels like browsing the internet.

Single Page apps feel like a native app on your phone.

At least that's what I think the reasoning behind the decision to go client side rendering is.


Nah, it doesn't "feel like" a native app. It feels like an incredibly fragile app-like porcelain on top of a webpage. A native app, a webview packed as an app and a website looking like an app are all easy to tell apart if you've used all three categories in the past and have at least little attention to details.

But yes, I can buy this is part of the reasoning. I think most of it is "we need to make SaaS; cool kids use React, so we'll use React too; now we have an SPA, so the easiest path is client-side navigation".


What I can't get over is Chrome and Gmail. They're the same company, but managed to make an abstraction between these two groups where it takes several seconds to load and display a list of text fields on a mid-range PC. The HTML version is instant.


It takes a little longer to load, but it also lets you continue working while your train goes through a no-internet zone.


textarea manages client-side state just fine in the HTML version.


Can I open and reply to 25 different emails, and have the replies sit in my outbox until I reconnect? Because that’s a lot more useful to me than how fast it loads.


It sounds like a traditional email client would fit your needs perfectly. Check out Thunderbird: https://www.thunderbird.net/


The modern web is all about bloat and ever more complicated ways of doing meaningless things, and about making a web page far more expensive and slower than applications that did ten times as much ten years ago. The web has good things going for it, the biggest being nearly universal access across devices and platforms, but being bloat-free is not one of them. It might be fast, but I feel like it's never going to be efficient in my lifetime. All the browser advancements, if they ever make things faster, will do so at an ever-increasing use of resources.


Yes, it is true. It is too complicated and messy, and also stupid. There are other file formats (e.g. plain text) and protocols for other purposes anyways, which is sometimes useful.


I remember once I had to access the benefits page (Workday) at my employer but I was in a remote location (the middle of the Greenland ice sheet) with a slow/high-latency connection and the required page just would not load properly (presumably the client-side navigation wasn't robust to a slow connection...). I had to VNC (slowly) to a computer in the US in order to fill out some shitty HR form.


The biggest advantage of using modern SPAs is that it forces the developer to build the backend as an API with which you can interact programmatically


I do agree with this bit, even though I hate the end product.

I've seen exactly what happens when you have a server side application worked on by a lazy developer, you end up with tightly coupled code inside all your templates and controllers and it devolves into a maintenance nightmare.


This doesn't work as well as you think once you consider multiple endpoints and round trips. You're better off with something like GraphQL, or having the API ship server-rendered views to clients.


The API between the SPA and the server is an internal implementation detail that doesn’t need to support interoperability or backwards compatibility.

I see no reason to believe interacting with it directly is easier than scraping an HTML page. (In fact, I’d expect it to actually be much harder and more fragile than scraping html in practice)


In my experience scraping both, APIs have tended to be more stable than the structure of HTML pages. I'd guess this is because there's an incentive to keep your API stable (a breaking change for my scraper is also a breaking change for the front-end), whereas there's basically no reason not to rewrite giant chunks of the HTML to accommodate a design change.


The only good reason I've seen to have client side navigation is when there's a permanent sound player embedded in the page, so you can keep listening while you browse around the site (possibly for other things to listen to).

Although really, for longer-term listening, playing the stream in a separate media player is a better idea. But sites tend to hide their stream addresses, and this is also usable.


> Most web browsers also store the pages you previously visited so you can quickly go back to them.

Not anymore, they don’t. Most of the time I get a huge lag when I hit the back button as all kinds of stuff reloads.

I’d prefer if the back button worked like going to a previous tab, where nothing has to reload, but they don’t work like that.


> Not anymore, they don't.

Hell yes they do, it's the websites that make it impossible most of the time. Try it on a standard website that doesn't use many megabytes of resources (which it would evict for memory reasons). I notice it sometimes when using such websites, it's amazing how fast it feels compared to even so much as a 304 Not Modified round-trip.


No, they don't make it impossible, because browsers could just implement the same behavior as if I had opened the page in a new tab, and indeed, many people do (and are advised to do) this as a workaround.


Very concise and well written article.

I remember when smartphones became popular, it was an occasion to reset the trends and do basic pages without much JS, no endless hovering menus, no 5 column and 3 popup layouts, no flash intro, etc.

Then it started to creep back as power increased.

I wish the next big thing resets the field again, one can dream.


If you're into this and you still use Gmail, switch to HTML mode. I've done it for all my accounts and it's so much better than the JS-laden garbage they've switched to recently.



Why is management paying for all those overdesigned web pages?


Because they don't know a thing about underlying technologies.


Just as a side note: SPA-style design often claims that the initial load is slower for the benefit of successive loads. But Firefox's reload does override client-side caches by default.

Relying on warm caches for a good user experience is pretty bad, imho.


For many sites, transferring HTML works fine. I'm surprised that MDN would be implemented with client-side rendering.

On the other hand I've had the pleasure of working with both Meteor and Elm the last few years. It was the first time I had fun implementing client-side code. The resulting apps cannot be reproduced with HTML generated on the server. Often page switches in those apps don't take a round-trip to the server because the data is already loaded and can be rendered right away. Meteor also does incremental rendering by design. So the parts of the data you already have client-side are displayed right away while the rest is being loaded.

It is more work to get it right because you have to reproduce functionality the browser would otherwise do for you. But often you just don't want to do a full page load, and then fetching and patching the right HTML snippets to update parts of the site gets messy real quick.


I browse using Brave, with JavaScript disabled by default and full "shields up" on every site.

Anecdotally I would say that 10% of the web is unusable, another 20% is barely usable... but to my utter surprise the other 70% is functional to some basic level.

The one I am most impressed with is Amazon. I am not an Amazon fan at all, and try to avoid shopping there or using their services, but when I do happen to need something in a small quantity, and fast, Amazon is very good. So when I visited recently it took me a while to comprehend that the site looked right, felt right, hadn't degraded the experience to any shocking degree, and was fully functional even without JavaScript, third-party cookies, etc.

There is still a lot to be said for the very simple approach of make it all in HTML on the server first, and only use JavaScript to add small enrichments that can only be done in the client.


Page load speed is not the only reason why client-side rendering is important. When traditional server-side rendered pages need to update state on the page they have two choices: Use DOM manipulation or reload the entire page from the back-end just to update one small piece of the page. The more state that needs updating, the more complex it becomes to manage rendering on both the server and the client. Code redundancies creep in as html templates are duplicated on both sides.

In my experience, as soon as a website has any app-like qualities at all, a tool like React just makes sense. It's a great tool that solves a very real problem. I always use universal rendering so that you get the best of both worlds and the page still renders fine for the no-JS folks out there (respect).
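
A minimal sketch of that universal rendering setup (Express is just for illustration; file and component names are placeholders, and a bundler/Babel step is assumed for the JSX):

    // server.js: render the same React component to an HTML string on the server
    import express from 'express';
    import React from 'react';
    import { renderToString } from 'react-dom/server';
    import App from './App';

    const server = express();
    server.get('*', (req, res) => {
      const html = renderToString(<App url={req.url} />);
      res.send(`<!doctype html><html><body>
        <div id="root">${html}</div>
        <script src="/bundle.js"></script>
      </body></html>`);
    });
    server.listen(3000);

    // client.js: hydrate() attaches event handlers to the already-rendered markup
    import React from 'react';
    import { hydrate } from 'react-dom';
    import App from './App';
    hydrate(<App url={window.location.pathname} />, document.getElementById('root'));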


Client-side rendering/navigation makes sense when you have high ratio of markup to content, or fixed content to changing content. In the olden days people used frames for such fixed navigation stuff etc, but I think we can agree that it was not all that great of a solution.


> It’s not worth it to try and go behind their backs—premature optimizations like client-side navigation are hard to build, don’t work very well, will probably be obsolete in a couple years, and make life worse for a decent portion of your users.

I don't know what is wrong with me -- when I read this sentence all my brain can see is "job security."

______________________________________________________

My take on client side navigation is the desire to cater to smartphone users instead of desktop users. Every few days it seems a popular website does a redesign that is centered around smartphones. They're increasingly bloated and wasteful. Why Twitter, why???


It's funny, actually frustrating, that people create all these frameworks to mimic "native" looks and behavior. But there are soooooooooo many elements that are overlooked, and usually only tested on a few platforms. Don't you just love sites that capture your scrolling? Or buttons that have different hotspot areas or don't support canceling by dragging out of them?


We've also spent 10 years getting to the current state.

Backbone was notoriously difficult to manage state in. AngularJs was quick to prototype (though often slower to develop in than HTML) but had massive memory leaks. Even modern Angular and React were new iterations that didn't solve the state problem well until Ngrx/Redux became mainstream.


I have just read this article again. The author has a very pleasant style of writing, with very good explanations and good reasoning in the conclusion. What could also be mentioned is the side effect of having n different proprietary "routers" from different framework vendors, as well as "code splitting" and so on.


If you are thinking of things as "pages", this is of course true.

I think a good example of a site that makes good use of this sort of thing is Wikipedia (the non-mobile version). I love that I can mouse over links and get more information on something, without having to click to its actual page. It pulls down only the information needed, and leaves everything else in place. This makes browsing far more efficient, at least for me. Even on a fast computer with a fast connection, going to a whole new page is jarring, comparatively. Obviously if you need more information than the summary provides, great, go to the new page. But if not, this works great.
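
That hover behavior is also a nice example of small, targeted JS. Fetching just the summary can be as simple as something like this (a sketch against Wikipedia's public REST API, not the actual Page Previews code):

    // Fetch only the article's summary, not the whole page
    async function fetchSummary(title) {
      const response = await fetch(
        'https://en.wikipedia.org/api/rest_v1/page/summary/' + encodeURIComponent(title)
      );
      const data = await response.json();
      return data.extract;  // short plain-text summary shown in the hover card
    }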

Maybe what Wikipedia does isn't what the article is complaining about. But at least I think the article should mention that "page" is not the only meaningful unit of information on the web, or at least it doesn't have to be.


This feels more like the old "progressive enhancement" model than the full blown SPA style that's popular today.

Most navigation on Wikipedia is plain old browsers loading HTML pages, but if you happen to support JavaScript, it's used to augment the page to make it a little easier to browse.

(I don't actually know for sure whether Wikipedia pages are primarily server rendered, but what you describe is perfectly possible using progressive enhancement).


> I don't actually know for sure whether Wikipedia pages are primarily server rendered

Yes, Wikipedia works perfectly fine without Javascript


I think you actually make the opposite point you're trying to make. Wikipedia doesn't use JS for navigation, just to provide those little popups. If you have no JS, the site works exactly the same.


Like other comments have noted, that's not what the article is complaining about.

In the context of this article, Wikipedia's design is analogous to the existing MDN site. I think Wikipedia is a website other websites should seek to emulate, because it loads quickly on all devices and connections I've had. Instead, many text-based, informational sites are shifting to SPAs or similar JS-reliant architectures.


In Google Maps, Gmail, and other places (where data shared between pages is generally already loaded), I am very content with client-side navigation.

As far as I see it really depends whether it is an app or a site that is being developed.


I've been working on search UI. This search UI is substantially more complex than the average one you might come across.

There are potentially multiple search bars (added on command), to which many different variants of filters and search terms can be added. They provide an aggregate search expression which is not reducible to boolean logic. The search results update interactively when the search terms are altered.

It's specialized tool, and meant to be used by people of a certain vocation rather than the general public.

The reason that the search results update interactively is to provide a short feedback loop on the effect of the search terms. Building a mental model ahead of time for the interactions between the query and the hundreds of millions of potential results is a tall order. It therefore seems sensible to provide affordances which facilitate exploration and (in some sense) experimentation. To this end, search results also expand inline to provide a way to inspect results quickly, with the purpose of validating whether the search terms were effective or need alteration.

Now, where do I draw the line between what counts as a "new page" or the "same page" with this? Do I reload the page on every alteration of the search terms? Do I scrap automatic reloading between alterations of search terms and only do so explicitly when the user clicks a button? Should it navigate to a new page when interacting with search results rather than previewing them inline?

This is all rhetorical, of course. All of those changes would make for a much poorer experience for this particular tool and the particular target user group.

Today the web is used to build a lot of stuff where "page navigation" is a poor model to build on. I can see there being a case for it where a "page" really is a self-contained unit of information. But there are also a lot of cases where declaring some particular state in a long user journey of interconnected actions as a "different page" is going to be arbitrary.
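
For what it's worth, the compromise I've landed on is treating the URL as a snapshot of the search state rather than as a "page": every change to the terms is mirrored into the query string with history.replaceState, so reloading, sharing and the back button still behave sensibly without any full navigations. A rough sketch (the serialisation and renderSearch are made up, not from any particular framework):

    // Mirror the current search expression into the query string without
    // triggering a navigation, so the URL stays shareable and restorable.
    function syncSearchStateToUrl(searchState) {
      var params = new URLSearchParams();
      params.set('q', JSON.stringify(searchState));
      history.replaceState(searchState, '', '?' + params.toString());
    }

    // When the user moves through history, rebuild the UI from the stored
    // state instead of starting from a blank search.
    window.addEventListener('popstate', function (event) {
      if (event.state) renderSearch(event.state);  // hypothetical renderer
    });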


> A big one we’re seeing here is called progressive rendering: browsers download the top part of the page first, then show it on the screen while the rest of the page finishes downloading

I question your reasoning here -- I don't think this is how progressive rendering works in browsers. The browser has to download the entire document to construct the DOM, and CSS in the head tag all has to be downloaded and parsed into the CSSOM before the browser can make smart decisions about what to render.

For a simple page, I also doubt that progressive rendering could account for a >1s difference in rendering time. I would suspect the disparity in loading time is related to network IO.


> The browser has to download the entire document to construct the DOM

Actually, the browser can construct the DOM for the first part of the page while it's waiting for the rest of it to download. Like, if the browser starts downloading a page and it sees this:

    <body>
      <h1>My Cool Website</h1>
      <p>Hello there
...it can add the <h1> element to the DOM, since no matter what comes after this in the HTML code, the <h1> will always be first on the page. (It might actually be able to add the <p> tag, too. I'm not 100% on the details.)

A cool demo of this is https://harmless.herokuapp.com/main , a JavaScript-less chat app that works by holding the connection open (essentially never "finishing the page load") and sending new chat messages as they arrive.
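
(For anyone curious how that trick works on the server side, here's a rough sketch in Node -- not the actual app's code: the response is simply never ended, and every new message is just another flushed chunk of HTML.)

    var http = require('http');

    http.createServer(function (req, res) {
      res.writeHead(200, { 'Content-Type': 'text/html; charset=utf-8' });
      // The browser can render this part immediately...
      res.write('<!doctype html><body><h1>Chat</h1>\n');

      // ...while we keep the response open and append a chunk per "message".
      var timer = setInterval(function () {
        res.write('<p>' + new Date().toISOString() + ': hello</p>\n');
      }, 1000);

      req.on('close', function () { clearInterval(timer); });
      // res.end() is never called, so the page load never "finishes".
    }).listen(8080);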

> CSS in the head tag all has to be downloaded and parsed into the CSSOM before the browser can make smart decisions about what to render.

That's true, but in this case, the CSS is already present in the browser cache, so it doesn't need to be re-downloaded.


Do you know how to contact the author? There's an issue with the CSS (it sets the background of the textarea to #fff, which is the same color as the text when the browser/OS uses a dark theme).


That's very cool, thanks for the info.


Strongly agree with the premise, but

> it just isn’t possible for a highly dynamic language like JavaScript to run as fast as the C++ code in browsers

With modern tracing JITs, this isn’t always true ;)


As far as I know, no production browser uses tracing anymore. The only modern tracing implementation I'm aware of is LuaJIT.


Yes, most browsers have moved away from tracing JITs, but I think PyPy still uses it in addition to LuaJIT. The reason I brought it up was because tracing JITs, as far as I am aware, can give very good performance in the best case where your trace actually works (which is of course a bit of a dirty trick, but when you're up against C++ it's hard to play fair…)


Do you know why browsers moved away from tracing JIT?


Well, one reason is probably that they don’t perform well on programs that don’t trace well.


Let's just rethink the discussion without the iOS/macOS crowd and we wouldn't even have to talk about it...


Client side navigation is for single page applications like applications written with ExtJS.


This is the first time I've seen mentions of React being difficult to optimize, and I'd like to know more. Is there any in-depth, quality article about it that anybody knows of, beyond the usual light blog posts a search finds?


No there isn’t because it’s BS. React is super easy to optimise. https://reactjs.org/docs/optimizing-performance.html


People say React is easy to optimize, but many of the React apps I use in practice have crummy 50-100 millisecond response times on basic operations like "click button" or "press key in text field". And a significant portion of the performance difference between beta MDN and old MDN was the ~half second the React code took to run in the beta.

I'm pretty sure front-end developers aren't writing slow code on purpose. The most reasonable explanation I can come up with is that React makes it easier to write reliable UI code but harder to reason about performance -- features like "by default, all of a component's descendants are recreated and diffed with the DOM on any state change" point strongly in this direction.
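
React does give you escape hatches (React.memo, shouldComponentUpdate) to skip re-rendering subtrees whose props haven't changed -- you just have to know to reach for them, and keep props referentially stable so the shallow comparison actually helps. A sketch with a made-up list component:

    // Without memo, every keystroke in a parent's text field re-renders every
    // row; with it, rows whose props are shallowly equal are skipped.
    const Row = React.memo(function Row({ item }) {
      return <li>{item.label}</li>;
    });

    function List({ items }) {
      return (
        <ul>
          {items.map(item => <Row key={item.id} item={item} />)}
        </ul>
      );
    }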


File under: "Read, then ignore". Crappy JS navigation (and scrolling!) is here to stay, unfortunately :(


He lost me when he described Canada using a dictionary-like wording but also adding some pretty personal perspective. Describing an entire country as Socialist seems super opinionated to me.

Client-side navigation is quite tricky to get right - since there is no definition of right. Browser back buttons and scroll positions between back and forth page loads are not standards-based things, and the only way to study them is by just using the browser.

The only fashion that I have come to hate in client-side navigation is endless scrolling on product pages. In my view it is completely pointless, and hitting the back button NEVER returns you to the exact point you were at. On product pages I think page-based pagination is the only way to go. Amazon does that. I think feeds are the only use case where infinite scrolling makes sense.
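
If a site insists on infinite scrolling, the least it could do is stash the scroll offset in history state so going back can put you where you were. A rough sketch (in real code the scroll handler would need throttling, since browsers limit how often replaceState can be called):

    // Take over scroll restoration and remember the offset ourselves.
    if ('scrollRestoration' in history) history.scrollRestoration = 'manual';

    window.addEventListener('scroll', function () {
      history.replaceState({ scrollY: window.scrollY }, '');
    });

    // On returning to the page, jump back to the remembered offset.
    window.addEventListener('load', function () {
      var state = history.state;
      if (state && typeof state.scrollY === 'number') {
        window.scrollTo(0, state.scrollY);
      }
    });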

P.S. I wonder if client-side ajax calls are GZIP compressed🧐


> He lost me when he described Canada using a dictionary-like wording but also adding some pretty personal perspective. Describing an entire country as Socialist seems super opinionated to me.

I think this was pretty clearly supposed to be a joke. Author is probably American - the joke is that healthcare and free (very slow) internet are considered socialist by Americans.


> Author is probably American

He goes to college here in the US (but I think he might have lived in Canada in the past?)


>P.S. I wonder if client-side ajax calls are GZIP compressed🧐

I thought HN filtered out emojis and other graphical characters?


This is news to me too!


>Describing an entire country as Socialist seems super opinionated to me.

Not to mention the fact that he's wrong; contrary to popular belief, "socialism" has been described as many things, but it is not the simple fact of offering healthcare, market regulations and some amount of free Internet access(?). Socialism is a mode of production in which the means of production are operated and managed (and some would say "owned") collectively by the workers, i.e. the majority of the adult population. It is also a form of society in which abstract labour is not valorized. A modern nation with money, capital, rent, predominant wage labour, and capital accumulation is in no way "socialist" - never mind by Marx's usage, in which "socialism" and "communism" are one and the same thing.


I, uh, was kidding about Canada being a socialist state...

For what it's worth, I had a 128 kilobit connection in Canada because American T-Mobile plans give you one for free when you're outside the country.


Unfortunately it's not always clear who's joking and who's not; many (most?) people who feel the need to comment on socialism and capitalism have done very little reading in political philosophy.


... are you being serious? I ask this question pragmatically. Perhaps my ear for sarcasm, after spending my entire life in the UK & Australia, is more finely tuned than most.

The author went as far as to describe Canada as "a socialist state in North America" and later followed on with:

"(T-Mobile told me I could buy an “all-access high-speed day pass” for $5 on the black market, but I didn’t want to get arrested by the Royal Canadian Mounted Secret Police.)"

I don't think it's possible to be more overtly tongue-in-cheek without having to resort to using Reddit's sarcasm tags.

All in all, it seemed quite clear to me that it was in playful jest.

Also, dear author, thank you for the laugh.



