
Just my opinion, but server‑side rendering never really went away; the web is finally remembering why it was the default. First paint and SEO are still better when markup comes from the server, which is why frameworks as different as Rails + Turbo, HTMX, Phoenix LiveView, and React Server Components all make SSR the baseline. Those projects have shown that most dashboards and CRUD apps don’t need a client router, global state, or a 200 kB hydration bundle—they just need partial HTML swaps.

The real driver is complexity cost. Every line of client JS brings build tooling, npm audit noise, and another supply chain risk. Cutting that payload often makes performance and security better at the same time. Of course, Figma‑ or Gmail‑class apps still benefit from heavy client logic, so the emerging pattern is “HTML by default, JS only where it buys you something.” Think islands, not full SPAs.
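To make the "HTML by default" half concrete: server rendering is, at bottom, just string generation, with no hydration step at all. A minimal sketch in plain JS (function names are mine, not from Rails/HTMX/LiveView/RSC):

```javascript
// Minimal sketch of "HTML by default": the server renders full markup
// as a string, so first paint needs no client JS. Function names here
// are illustrative, not from any of the frameworks mentioned.
function escapeHtml(s) {
  return String(s).replace(/[&<>"]/g, c =>
    ({ '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;' }[c]));
}

function renderRow(item) {
  return `<tr><td>${escapeHtml(item.name)}</td><td>${item.qty}</td></tr>`;
}

function renderPage(items) {
  return `<table><tbody>${items.map(renderRow).join('')}</tbody></table>`;
}
```

The browser paints this immediately; a JS island would then be attached only to the pieces that actually need interactivity.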

So yes, the pendulum is swinging back toward the server, but it’s not nostalgia for 2004 PHP. It’s about right‑sizing JavaScript and letting HTML do the boring 90 % of the job it was always good at.



Having a server provide an island or rendering framework for your site can be more complex than an SPA with static assets and nginx.

You still have to deal with all the tooling you are talking about, right? You’ve just moved the goalpost to the BE.

And just like the specific use cases you mentioned for client routing, I can also argue that many sites don’t care about SEO or first paint, so those are non-features.

So honestly I would argue for SPA over a server framework as it can dramatically reduce complexity. I think this is especially true when you must have an API because of multiple clients.

I think the DX is significantly better as well with fast reload where I don’t have to reload the page to see my changes.

People are jumping into nextjs because react is pushing it hard, even though it’s a worse product with questionable motives.


> I think the DX is significantly better as well with fast reload…
As a user, the typical SPA offers a worse experience. Frequent empty pages with progress bars spinning before some small amount of text is rendered.


> As a user, the typical SPA offers a worse experience.

Your typical SPA has loads of pointless roundtrips. SSR has no excess roundtrips by definition, but there's probably ways to build a 'SPA' experience that avoids these too. (E.g. the "HTML swap" approach others mentioned ITT tends to work quite well for that.)

The high compute overhead of typical 'vDOM diffing' approaches is also an issue of course, but at least you can pick something like Svelte/Solid JS to do away with that.


My biggest annoyance with SPAs is that they usually break forward/back/history in various subtle (or not so subtle) ways.

Yes, I know that this can be made to work properly, in principle. The problem is that it requires effort that most web devs are apparently unwilling to spend. So in practice things are just broken.


An additional one for me is stale state. I can leave most webpages open for days, except SPAs. Especially on mobile.


A small but silly one: breaking middle and right click functionality for links.

An auction site I use loads in the list of auctions after the rest of the page loads in, and also doesn't let you open links with middle click or right click>new tab, because the anchor elements don't have href attributes. So that site is a double-dose of having to open auctions in the same tab, then going back to the list page and losing my place in the list of auctions due to the late load-in and failure to save my scroll location.
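For what it's worth, client-side routing only breaks middle click and right click>new tab when links aren't real anchors (or when every click is intercepted). If the anchor keeps its href and the handler bails out on modified clicks, native behavior survives. A hypothetical guard, not from any particular library:

```javascript
// Let the browser handle middle clicks, ctrl/cmd-clicks (new tab),
// shift-clicks (new window) and alt-clicks, so "open in new tab"
// keeps working; only intercept a plain left click.
function shouldIntercept(event) {
  return event.button === 0 &&
         !event.ctrlKey && !event.metaKey &&
         !event.shiftKey && !event.altKey;
}

// Usage sketch (clientNavigate is a made-up router call):
// anchor.addEventListener('click', e => {
//   if (!shouldIntercept(e)) return;   // fall back to native navigation
//   e.preventDefault();
//   clientNavigate(anchor.href);
// });
```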


I would submit this as product feedback if you haven't. One of my favorite things as a dev working on client-facing things is when I get negative feedback that presumably has a pretty easy fix to at least part of it ("add 'href' to these links") where I can pretty quickly make someone's life a little easier.


Unless it's not a link but a <div onClick={loadItem}>...


This is not exclusive to SPAs. Even MPAs/SSR apps can have this issue. But I guess MPAs are probably not built with post-load interactivity in mind, and maybe that's why it's less prevalent there.


This issue doesn't get enough attention; apart from the obvious bad-UX implications, I find myself losing interest in a project after realising it's broken in so many subtle and non-subtle ways due to the underlying tech. I, like many others, got into programming for the joy of creating something beautiful, and attempting to follow (influencer-led) JS trends nearly killed my interest in this field at the time.


I still have some ptsd from payment gateway integrations via iframes about 6-7 years ago. If you thought SPAs are bad by themselves for history tracking imagine those banking iframes randomly adding more entries via inside navigation/redirection that you have to track manually.


A lot can be said for just putting a "back" button on a page. I still do it occasionally for this very reason. Then again, my user base for the apps I write are the most non-technical folks imaginable, so many of them have no concept of a browser back button to begin with. I am not being hyperbolic either.


Thing is, the browser back button is still there, though. So now you have two identical buttons that do different things. And that's really bad UX.


I’m split on this. I used to agree with you but when I talked to internal users and customers, they really liked having a back button in the app. I would tell them the browser back button is there and we haven’t broken history so it should work to which they just often shrug and say they “just” prefer it.

My hypothesis is that they’ve had to deal with so many random web apps breaking the back button so that behaviour is no longer intuitive for them. So I don’t push back against in-app back buttons any more.


It's okay if both buttons do the same thing. But OP (if I understood them correctly) proposed the in-app Back button as a hacky solution to the problem of browser one being broken, which kinda implies that they don't behave the same.


I think you're right on the money—those bad web apps that told people emphatically "do NOT use your browser's back button!" did the rest of us a lot of damage, as I really do agree that it trained many people to never press it unless they actually want to leave the app they're using.

I myself am guilty of (about 14 years ago now) giving an SPA a "reload" button, which had it go and fetch clean copies of the current view from the server. It was a social app; new comments and likes would automatically load in for the posts already visible, but NEW posts would NOT be loaded in, as they would cause too much content shift if they were to load in automatically.

Admittedly this is not a great solution, and looking back on it now, I can think of like 10 different better ways to solve that issue… but perhaps some users of that site are seeing my comment here, so yeah, guilt admitted haha.



Bad UX that provides a functionality that otherwise isn't there at all is still better.


> Your typical SPA has loads of pointless roundtrips

This is an implementation choice/issue, not an SPA characteristic.

> there's probably ways to build a 'SPA' experience that avoids these too

PWAs/service workers with properly configured caching strategies can offer a better experience than SSR (again, when implemented properly).
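"Properly configured" usually means a different caching policy per resource type rather than one blanket rule. A sketch of just the routing decision, with made-up policy names and URL rules (the actual service worker wiring is omitted):

```javascript
// Pick a caching strategy per request type. The rules below are
// illustrative assumptions, not a real service worker API:
// hashed static assets can be cached forever, API data should be
// fresh, and plain pages tolerate being a little stale.
function cachePolicy(url) {
  if (/\.(js|css|woff2?)$/.test(url)) return 'cache-first';
  if (url.includes('/api/'))          return 'network-first';
  return 'stale-while-revalidate';
}
```

In a real service worker, a fetch handler would dispatch on this result to the matching Cache API strategy.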

> The high compute overhead...

I prefer to do state management/reconciliation on the client whenever it makes sense. It makes apps cheaper to host and can provide a better UX, especially on mobile.


Except for a user on a lower specced device that can’t performantly handle filtering and joining on that mass of data in JS code, or perhaps can’t even hold the data in memory.


> Except for a user on a lower specced device that can’t performantly handle filtering and joining on that mass of data in JS code, or perhaps can’t even hold the data in memory.

Just how low-spec and/or how much state-data are we talking about here? I ask only because I am downloading an entire dataset and doing all the logic on the client, and my PC is ancient.

I'm on a computer from 2011 (i7 870 @ 2.9GHz with 16GB of RAM), and the client-side filtering I do, even on a few tens of thousands of records retrieved from the server, still takes under a second.

On my private app, my prospect list containing maybe 4k records, each pretty substantial (as they include history of engagements/interactions with that client) is faster to sort and filter on the client than it is to download the entire list.

I am usually waiting for 10s while the hefty dataset downloads, but the sorting and filtering happens in under a second. I do not consider that a poor UX.


10s is painful. A server-rendered app should be able to deliver that data, already rendered, in closer to a fifth of a second. Fast enough that the user doesn’t even notice any wait.


> 10s is painful. A server-rendered app should be able to deliver that data, already rendered, in closer to a fifth of a second.

How do you know how large the dataset is? All you know from my post is that a dataset that takes 10s to download (I'm indicating the size of it here!) takes under a second to filter and sort.

My point is that if your client-code is taking long to filter and sort, then your dataset is already so large that the user has been waiting a long time for it already; they already know that this dataset takes time.

FWIW, the data is coming in as CSV, compressed, so it's as small as possible. It's not limited by the server. Having it rendered by the server will increase the payload substantially.


The JS processing and rendering time on an underpowered CPU is the issue, not the payload size. It’s difficult to describe how excruciatingly slow some seemingly simple e-commerce and content sites are to render on my 2019 laptop or how slowly they react to something as simple as a mouseover or how they peg the CPU - while absolutely massively complex and large server-rendered HTML loads and renders in an eyeblink.


Which sites are you thinking of?

I can't really speak for those sites anyway, or why they are so slow doing things on the client, but like I said, I've written client-side processing and used my 2011 desktop, and there has been no pegging of the CPU or large latencies when filtering/sorting data client-side.

> while absolutely massively complex and large server-rendered HTML loads and renders in an eyeblink.

I've not had that experience - a full page refresh with about 10MB of data does not happen in an eyeblink. It takes about 6 seconds. There's a minimum amount of time for that data to download, regardless of whether it is pre-rendered into `<table>` elements or whether it is sent as JSON. Turning that JSON into a `<table>` on the client takes about 40ms on my 2011 desktop. Sorting it again takes about 5ms.

For this use-case (fairly large amounts of data), doing a full-page refresh each time the user sets a new sort criteria is unarguably a poorer experience than a bit of JS that goes through the table element and re-orders the `<tr>` elements.

In this case, using server-rendered HTML is always going to be 6000ms whenever the user re-sorts the table. Using a client JS function takes 5ms. On a machine purchased in 2011.
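The re-sort being described is just an in-memory array sort; the DOM work afterwards is re-appending the existing `<tr>` nodes in the new order. The sort itself is a few lines of plain JS (field names are illustrative):

```javascript
// Sort a copy of the row data by one field, ascending or descending,
// leaving the original array untouched so re-sorts stay cheap.
function sortRows(rows, key, dir = 'asc') {
  const sign = dir === 'asc' ? 1 : -1;
  return [...rows].sort((a, b) =>
    a[key] < b[key] ? -sign : a[key] > b[key] ? sign : 0);
}
```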


For sorting the table, perhaps. But for displaying the table in the first place? Static HTML is invariably faster in my experience. IMO there is just no excuse for placeholder elements with loading animations on a content or e-commerce site where it should be possible to provide all the content up front and add some very light JS on top to handle things like image carousels.


> But for displaying the table in the first place? Static HTML is invariably faster in my experience.

Okay, let's assume it is faster by whatever the latency is for a network request.

What sort of use-case are we talking about where a table is displayed on a content or e-commerce side and the user is not allowed to re-sort it?

It's all about the user's experience, not the developer's, and I can't see how a UX that prevents sorting is a better UX. Ditto for a sortable table that refreshes the page each time the user sorts it.


What mass of data? Can you give me an example of the kind of device and the kind of use case you're talking about?


As an issue, yes, often ignored by QA, Product and engineers


Yes, pure old school SPAs have at least one additional roundtrip on the first visit of the site:

1. Fetch index.html

2. Fetch JS, CSS and other assets

3. Load personalized data (JSON)

But usually step 1 and 2 are served from a cdn, so very fast. On subsequent requests, 1 and 2 are usually served from the browser cache, so extremely fast.

SSR is usually not faster. Most often slower. You can check yourself in your browser dev tools (network tab):

SPA: https://www.solidjs.com/

vs.

Poster child SSR: https://nextjs.org/

So much complexity and effort in the nextjs app, but so much slower.


> Poster child SSR

That's not really SSR though, it's partial SSR but then hydrated into a client-side React app, so a SPA. If you really want to compare try an htmx page.


Requires additional engineering.


> Your typical SPA has loads of pointless roundtrips. SSR has no excess roundtrips by definition

SSR also has excess round trips by nature. Without JavaScript, posting a form or clicking a like button refreshes the whole page even though a single <span> changed from "12 likes" to "13 likes".


I think most of the thread is talking about SSR with partial HTML replacement.


This exactly. It seems like the last 10 years of JavaScript framework progress has been driven by DX, not UX. Like at some point everyone forgot this crap just needs to work at the end of the day, no user benefits from 3 rewrites over 5 years because the developer community decided functions are better than classes.


In my view, DX should be renamed "Developer Convenience" as we all know that convenience is often a trade-off.

Please forgive the self-promotion but this was exactly the premise of a conference talk I gave ~18 months ago at performance.now() in Amsterdam: https://www.youtube.com/watch?v=f5felHJiACE


That was super annoying when I was just picking up react.


We have been moving to localized cache stores and there aren't any client side loaders anymore outside of the initial cache generation. Think like Linear, Figma, etc

It just depends on what you are after. You can completely drop the backend, apis, and have a real time web socketed sync layer that goes direct to the database. There is a row based permissions layer still here for security but you get the idea.

The client experience is important in our app, and we have found that a backend just slows us down.


>and have a real time web socketed sync layer that goes direct to the database

you might be able to drop a web router but pretending this is "completely drop[ping] the backend" is silly. Something is going to have to manage connections to the DB and you're not -- I seriously hope -- literally going to expose your DB socket to the wider Internet. Presumably you will have load balancing, DB replicas, and that sort of thing, as your scale increases.

This is setting aside just how complex managing a DB is. "completely drop the backend" except the most complicated part of it, sure. Minor details.


I assumed they meant a client-side DB and then a wrapper that syncs it to some other storage, which wouldn't be terribly different from, say, a native application that relies on a cloud-backed storage system.

Which is fine and cool for an app, but if you do something like this for say, a form for a doctor's office, I wish bad things upon you.


> We have been moving to localized cache stores and there aren't any client side loaders anymore outside of the initial cache generation. Think like Linear, Figma, etc.

That's never the case.


There's no way around waiting for the data to arrive, be it JSON for an SPA or another page for an MPA/SSR. For an MPA the browser provides the loading spinner. Some SPA router implementations stay on the current page and route to the new one only after all the data has arrived (e.g. SvelteKit).


With SSR all the data usually arrives in 100-200ms, in SPAs all the data tends to take seconds to arrive on first load so they resort to spinners, loading bars etc.


> For MPA the browser provides the loading spinner.

Yes, that one. I want that experience please.


If you employ a "preload then rehydrate" data sync paradigm then you should never see a blank page -- except on initial JS load. This is just an improper data sync strat and has nothing to do with SPA.

I built a lib specifically designed for this strat: https://starfx.bower.sh/learn#data-strategy-preload-then-ref...


"Devs are doing SPAs wrong" is irrelevant when 90% of devs do SPAs that way. That it's wrong doesn't help the fact that most SPAs have garbage user experience.


“just add more complexity”


The obsession with DX tooling is exactly why JS is such an awful developer experience. They always chase something slightly better and constantly change things.

Maybe the answer was never in JS eating the entire frontend, and changing the tooling won’t make it better, as it’s always skirting what’s actually good for the web.


> The obsession with DX tooling is exactly why JS is such an awful developer experience.

I used to agree but these days with Vite things are a lot smoother. To the point that I wouldn't want to work on UI without fine-grained hot reloads.

Even with auto reload in PHP, .NET, etc you will be wasting so much time. Especially if you're working on something that requires interaction with the page that you will be repeating over and over again.


> I used to agree but these days with Vite things are a lot smoother.

Didn't everybody say the exact same thing about Node, React, jQuery...? There is always a new and shiny frontend JS solution that will make the web dev of old obsolete and everyone loves it because it's new and shiny, and then a fresh crop of devs graduates school, the new shiny solution is now old and boring, and like a developer with untreated ADHD, they set out to fix the situation with a new frontend framework, still written in JavaScript, that will solve it once and for all.

I still build websites now the same as I did when I graduated in 2013. PHP, SQL, and native, boring JavaScript where required. My web apps are snappy and responsive, no loading bars or never-ending-spinning emblems in sight. shrug


Except you can't really build PWAs with those technologies and most web content is now consumed on mobile. I used to do it like that as well, but clients want a mobile app and management decided to give them a PWA, because then we could use the existing backend (Perl, Mojolicious, SQL). I now agree with them if it keeps the lights on.


> I used to do it like that as well, but clients want a mobile app and management decided to give them a PWA

I'm quite surprised to hear this is a common thing. Besides myself, I don't know a single person who has ever installed a PWA. For people in tech, despite knowing they exist. For people outside tech, they don't know they exist in the first place.

Does management actually have any PWAs installed themselves?


People outside tech just get installation instructions and do not care if it’s app store or something else. This is how sanctioned Russian banks continue to serve their customers via apps, when they cannot get into app store. The number of users of PWA is probably on the scale of millions.


I had no idea! Cool to learn.

It definitely makes complete sense in that scenario, but remains a very niche usecase where people have no other option.

>People outside tech just get installation instructions

People outside of tech don't need instructions to install non-PWA, store apps. So all this does to me is reinforce that no one is installing PWAs outside of niche scenarios where 1. people basically have to use the app due to a connection to a physical institution 2. they are explicitly told how to do it 3. the app is not available on the stores for legal reasons.


> People outside of tech don't need instructions to install non-PWA, store apps.

Depends on age and tech awareness. Many still do, when they cannot rely on a family member to do it for them. Overall installing PWA is no more complicated than getting something from a store.


That sounds like roughly all work related software? Not so niche at all.


Who uses PWAs for work-related apps though? There too, the standard is MDM, which uses a curated version of the app stores.


For me, the whole worker setup of filtering requests and storing results in local storage looks like a gimmick.

They should have designed it as a proper native experience.


They don't want to be subject to app store approval policies, shitty TOS, nor pay Google or Apple a 30% cut. Installing the app is easy: visit the web site, click the install banner, add to home screen and you're good to go. On the developer side you get to deploy as often as needed.

Yes, the service worker thing is annoying but you possibly don't need it if you have a server backend. It's basically a glorified website with a home screen icon. Most of the native vehicle, asset or fitness tracking apps need a backend anyway and they fail miserably when disconnected from the network.


Might be easier for the user, sucks as developer experience.

Better do a mobile Web friendly website and leave it at that.

Most users hardly tell the difference anyway.


> They don't want to be subject to app store approval policies, shitty TOS, nor pay Google or Apple a 30% cut. Installing the app is easy: visit the web site, click the install banner, add to home screen and you're good to go. On the developer side you get to deploy as often as needed.

And the metrics are saying that people click it?


We don't care about people clicking it as it's not tiktok but an app that complements a certain hardware solution. If you don't have the hardware, you don't need the app.


Just focus on being mobile friendly, hardly anyone cares about PWAs and the crazy setup of JavaScript workers to make it work.


> Especially if you're working on something that requires interaction with the page that you will be repeating over and over again.

That’s honestly not that many things IRL. If you look at all the things you build, only a minority actually demand high interactivity or highly custom JS. Otherwise existing UI libraries cover the bulk of what people actually need to do on the internet (i.e., not just whatever overly fancy original idea the designers think is needed for your special product idea).

It’s mostly just dropdowns and text and tables etc.

Once you try moving away from all of that and questioning if you need it at every step you’ll realize you really don’t.

It should be server-driven web by default with a smattering of high-functionality islands of JS. That’s what Rails figured out after changing the frontend back and forth.

> Even with auto reload in PHP, .NET, etc you will be wasting so much time

Rails has a library that will refresh the page when files change without a full reload, using Turbo/Hotwire. Not quite HMR but it’s not massively different if your page isn’t a giant pile of JS, and loads quickly already.


> Rails has a library that will refresh the page when files change without a full reload

What if you have a modal opened with some state?

Or a form filled with data?

Or some multi-selection in a list of items that triggers a menu of actions on those items?

Etc.

And it's true Vite can't always do HMR but it's still better than the alternative.


> What if you have a modal opened with some state?

Stimulus controllers can store state.

> Or a form filled with data?

Again, you can either use a Stimulus controller, or you can just render the data into the form response, depending on the situation.

> Or some multi-selection in a list of items that triggers a menu of actions on those items?

So, submenus? Again, you can either do it in a Stimulus controller (you can even trivially do things like provide a new submenu on the fly via Turbo), or you can pre-render the entire menu tree server-side and update just the portion that changes.

None of these are complex examples.


> Stimulus controllers can store state.

Yes, obviously, but do these maintain state after hot reload?


We're talking about two entirely different things that lead to the same outcome. The approach we're describing is not a "hot reload" per se, it's just selectively updating the changed contents of the page. For the vast majority of the changes you do during development, this is invisible to you.

If you change the JS or the controller itself, obviously, state stored in JS would be lost unless you persisted it locally somehow.


Eh, I recently stumbled into an open bug in Npm/vite and wasted two days before just reinstalling everything and re-creating frontend app. Hot UI reloads are cool, but such things kill any productivity improvements.


>To the point that I wouldn't want to work on UI without fine-grained hot reloads.

No -- but you could. And it wouldn't be the end of the world. So I'm just saying, DX doesn't eclipse all other considerations.


Hmm, JS is "different" because it has completely unique challenges that no other programming language has to deal with: https://bower.sh/my-love-letter-to-front-end-web-development


There’s just no way for the abominations that are HTML, JS, and CSS to be used in an accessible and maintainable way. It’s absurd that we haven’t moved on to better technologies in the browser or at least enabled alternatives (I weep for Silverlight).


> (I weep for Silverlight).

'Twas before my time. What was so great about it? I remember needing it installed for Netflix like 15 years ago. Did you ever work with Flash? How was that?


> dramatically reduce complexity

If you ever worked seriously on anything non-SPA you would never, ever claim SPAs “dramatically reduce complexity”. The mountain of shit you have to pull in to do anything is astronomical even by PHP's standards, and I hate PHP. Those days were clean compared to what I have to endure with React and friends.

The API argument never sat well with me either. Having an API is orthogonal: you can have one or do not have one, you can have one and have a SSR app. In the AI age an API is the easy part anyway.


I disagree, the problem with an SPA is that now you have two places where you manage state (the backend and the frontend). That gives you much more opportunity for the two places to disagree, and now you have bugs.


You had to manage state on the frontend even before SPAs though, if you wanted anything but the most basic experience.


No you really don’t. I’ve worked on exceptionally complex legacy applications with essentially no state in the front end. At most, you’re looking at query parameters. You just make everything a full page reload and you’re good to go.


So you make incrementing a counter a full page reload?


You don't need an SPA to handle incrementing a counter. If a page needs dynamic behavior you add JS to it, whether it's just adding an in-memory counter or an API call to store and retrieve some data. It's not difficult to write JavaScript.

The problem with SPAs is that they force having to maintain a JS-driven system on every single page, even those that don't have dynamic behavior.


> You don't need an SPA to handle incrementing a counter. If a page needs dynamic behavior you add JS to it, whether it's just adding an in-memory counter or an API call to store and retrieve some data. It's not difficult to write JavaScript.

I agree with this. Sprinkle in the JS as and when it is needed.

> The problem with SPAs is that they force having to maintain a JS-driven system on every single page, even those that don't have dynamic behavior.

I don't agree with this: SPAs don't force "... having to maintain a JS-driven system on every single page..."

SPA frameworks do.

I think it's possible to do reasonably simple SPAs without a written-completely-in-JSX-with-Typescript-and-a-5-step-build-process-that-won't-work-without-25-npm-dependencies stack.

I'm currently trying out a front-end mechanism to go with my high-velocity back-end mechanism. I think I've got a good story sorted out, but it's early days and while I have used my exploratory prototype in production, I've only recently iterated it into a tiny and neat process that has no build-step, no npm, and no JS requirement for the page author. All it uses is `<script src=...>` in the `<head>`, with no more JS on the rest of the page.

Very limited though, but it's still early days.


A codebase doesn't need that toolset to be an SPA. An SPA is just a website where all the site's functionality is done on the "root page", and it uses JS to load the data, handle navigation, etc. Doesn't matter whether that's all done through React in TypeScript and compiled by Vite or by handrolled JavaScript fetched in .js files.


> A codebase doesn't need that toolset to be an SPA.

That's kinda the goal I'm trying to reach. If you know of any SPA that doesn't come with all the baggage and only uses `<script src=...>`, by all means let me know.


That's still state on the frontend, which the commenter claimed sites don't need.


True, I shouldn't have said in memory. As the GP mentioned, you can store the counter value in a URL param. There are ways to achieve dynamic behavior without having to load or store values into JS memory.


That is more work both for the developers and the servers though. You need to re-render the whole page every change, rather than make a local change (or a tiny request if it needs to persist)


You misunderstood what I was saying. I was saying that you could write some plain old JS to catch the increment event and update the URL and the UI, plus some JS to read the data from the URL on page load and set the UI. No new server render, and that's maybe 5 minutes of writing JavaScript code (compared to, say, setting up a React project and instantiating that whole beast from the page root down to the specific UI element that needs to be dynamic).
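That URL-param counter is short enough to sketch. URLSearchParams does the parsing in both browsers and Node (the 'count' param name is made up):

```javascript
// Read the counter out of the query string on page load;
// defaults to 0 when the param is absent or malformed.
function readCount(search) {
  return Number(new URLSearchParams(search).get('count')) || 0;
}

// Produce the new query string after an increment, which the page
// would push via history.replaceState without any server round trip.
function withCount(search, count) {
  const params = new URLSearchParams(search);
  params.set('count', String(count));
  return '?' + params.toString();
}
```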


What's the business usecase for incrementing a counter?

We can sit here all day and think up counterexamples, but in the real world what you're doing 99% of the time is:

1. Presenting a form, custom or static.

2. Filling out that form.

3. Loading a new page based off that form.

When I open my bank app or website, this is 100% of the experience. When I open my insurance company website, this is 100% of the experience. Hell, when I open apartments.com, this is like 98% of the experience. The 2% is that 3D view thingy they let you do.


> What's the business usecase for incrementing a counter?

Notification count in the top right?

Remaining credit on an interactive service (like the ChatGPT web interface)?

So, maybe two(!) business use-cases out of thousands, but it's a pretty critical two use-cases.

I agree with you though - do all normal HTML form submissions, and for those two use-cases use `setInterval` to set them from a `fetch` every $X minutes (where you choose the value for $X).


Notification incrementing is not purely client-side state though. It's triggered by an event coming from a server. E.g. let's say you're on a product page and you click 'Add to Cart'. You want the cart icon's counter badge in the top right corner of the page to increment. With e.g. htmx, when you click 'Add to Cart' we send a request to the server, it updates the cart state in the backend, then sends an HTML fragment response that updates the necessary parts of the page, including the cart item counter badge.
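Sketching that flow server-side: the handler mutates the cart, then renders only the fragment that replaces the badge. htmx's hx-swap-oob attribute lets a response update elements outside the main swap target; the function and field names here are invented:

```javascript
// Hypothetical 'Add to Cart' handler: mutate server-side state, then
// return just the badge fragment. hx-swap-oob="true" tells htmx to
// swap this element by id, outside the main response target.
function addToCartResponse(cart, item) {
  cart.items.push(item);
  return `<span id="cart-count" hx-swap-oob="true">${cart.items.length}</span>`;
}
```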


In my experience it's just exceedingly rare to require this. My insurance company website has a notification thing, and it's actually static. You need to refresh the page, and considering how few and far between notifications are, and how common refreshes are, it works fine.

There's an entire domain of apps where you truly need a front-end. Any desktop-like application. Like Google Sheets, or Figma. Where the user feedback loop is incredibly tight for most operations.


Let's be honest -- the alternative is an API call with poor or no error-handling with the brilliant UX of either hanging with an endless loading indicator, or just flat out lying that the counter was incremented...


> or just flat out lying that the counter was incremented...

Which is what HN does and it sucks. It's very common for me to vote on a couple things and then after navigating around I come back to see that there are comments that don't have a vote assigned.

Of course the non-JS version would be even more annoying. I would never click those vote buttons if every vote caused a full page refresh.


That sounds like the most basic of experiences...


Not between page loads.


You absolutely did. It was common practice to stuff things in cookies or query strings to retain state between trips to the server so that some JS could do its job.

Every form also normally ends up duplicating validation logic both in JS for client-side pre-submit UX and server-side with whatever errors it returns for the JS to then also need to support and show to the user.


Right, but validation logic and state transferred by the server isn't in-memory state. The fact that the pages completely reload on each request clears a lot of cruft that doesn't get cleared on pages whose lifetime is tens or hundreds of views.


Every SPA I come across, especially when using React, uses persistent state so that in-memory changes are synced to cookie/localStorage/server so they survive refreshes. Every popular state management library even supports this natively. And all of that state combined still requires less memory than any of the images loaded, or the JS bundles themselves.


I absolutely loathe that. State is the source of most bugs. If the page crashes then refreshing it should clear out the state and let me try again.

Anecdotally, it seems like I encounter a lot more web apps these days where refreshing doesn’t reset the state, so it’s just broken unless I dig into dev tools and start clearing out additional browser state, or removing params from the URL.

Knock it off with all the damn state! Did we forget the most important lesson of functional programming: that state is the root of all evil?


Minimizing state simplifies the codebase but it’s a trade off.

There are times the user experience is just objectively better with more state, and you have to weigh the costs.

If I am filling out a very long form (or even multi-page form) I don’t really want all that state lost if I accidentally refresh the page.


I can remember vastly more instances where I've been frustrated as a user of an app or website by losing state than by it remembering state.


Unless you can guarantee RTT under 100ms, you have to manage some state on client side, else your UI will feel sluggish.


I’d rather have sluggish UI with proper feedback than potentially inconsistent states which I often experience with modern SPAs. At least that represents reality. Just today I was posting an ad on the local classifieds page, and the visual (client) state convinced me that everything was fast and my photos are uploaded. Turned out all state was lost and never reached the server, and I had to redo everything again.


It’s trivial to achieve under 100ms in the US with even just one server.

Most companies aren’t international.


Without keep-alive, any HTTPS request requires multiple round trips to complete.


Fortunately every browser made in the last 25 years supports keepalive. e.g. Firefox (and, according to the reporter of this bug, Chrome) won't even let you disable it[0].

[0] https://bugzilla.mozilla.org/show_bug.cgi?id=879002


Barring the interactivity, SPAs will also end up talking to the server anyway. So even SPAs will feel sluggish in a high-latency env.


Phoenix LiveView works pretty well without client-side state. Sure, if you just have a mobile menu toggle you might sprinkle some JS for it, but other state lives on the server and the delta is sent to the client via websockets.


Client-server model is known for decades, state management between them isn’t hard.


Who says your backend needs to manage state?


You don't have a database?


But on the flip side, you can program the backend in anything you like, instead of being bound to javascript.


You haven't had to deal directly with JS on the front end since Dart was released over 10 years ago.


No, you still need to deal directly with JS even with a transpiler like Dart or whatever other language you want to use. When things go wrong, and they will, you'll need to deal with the JS errors. When you're trying to debug or even call out to JS APIs, you better be intimately familiar with how your transpiler interops with JS, otherwise you're kinda screwed.


Dart hasn’t been much better in my experience, but you have reminded me to revisit Kotlin/JS!


I tried getting json deserialization into my app and ended up with a 2MB runtime, so it’s not going great.


Does anyone use Dart without Flutter? I've never seen it used separately.


Yeah sorry I meant Flutter ... 99% of people use Dart with Flutter, they are basically synonymous


JS/TS is fine. Why switch back and forth between languages and frameworks and data models and…


If your axiom is ‘JS is fine’ then yeah. It isn’t, though. TS is much closer to ‘fine’, but still can’t avoid some dumb JS decisions.


I've been a professional programmer for ~20 years and worked in a variety of languages on a variety of different types of projects, and TypeScript with Bun is mostly just fine. It lacks some low-level primitives I'd like to have available for certain projects (e.g. Go channels), and the FFI interface isn't as nice as I'd like, but it's basically serviceable for a very broad range of problems.

You should still know a language like Rust or Zig for systems work, and if you want to work in ML or data management you probably can't escape Python, but Typescript with Bun provides a really compelling development experience for most stuff outside that.


I agree. Nowadays I work on a mostly TS backend, with some parts in JS written before async/await was introduced, and I'm inclined to say TS is better than Python at most things backendy. I'm missing SQLAlchemy and a sane numerical tower, pretty much.


Python suffers from the same problems: its type system has many escapes and implicit conversions, making soundness opt-in and impossible to statically verify. Any language with an implicit cast from its bottom type to an upper type is unsuitable for use.


It reminds me of an older dev I met when I was just beginning who had worked even more years and said Fortran 95 was "fine". And he could use it to build pretty much anything. That doesn't mean that more powerful language features couldn't have increased his productivity (if he learned them).


There's something to be said for using the right tool for the job. There's also something to be said for maximizing your ability to hire developers. Software is a game of tradeoffs, and while I can and do still pick up modern hotness when warranted (e.g. Zig), sometimes the path to minimum total software cost (and thus maximum company value) is to take well trodden paths.

As a fun side anecdote: if you're doing scientific computing in a variety of fields, Fortran 95 is mostly still fine ;)


It is fine, though.


No, it’s footgunny and riddled with bugs. Most JS barely works and edge cases just aren’t addressed.

I’ve seen undefined make it all the way to the backend and get persisted in the DB. As a string.

JS as a language just isn’t robust enough and it requires a level of defensive programming that’s inconvenient at best and a productivity sink at worst. Much like C++, it’s doable, but things are bound to slip through the cracks. I would actually say overall C++ is much more reasonable.


You really need to learn a language to use it. As for undefined vs null, I find it useful, particularly in a db setting. Was the returned value null? You know, because the value is null. Did you actually load it from the database? Sure, because the value is not undefined.
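The distinction being described, sketched with a made-up record shape:

```javascript
// Simulating a record fetched from a db layer: nickname was selected
// and is NULL in the database; email was never selected at all.
const row = { name: "Ada", nickname: null };

console.log(row.nickname);      // null      -> loaded, and it's null
console.log(row.email);         // undefined -> never loaded
console.log("nickname" in row); // true
console.log("email" in row);    // false
```

`null` is a value someone deliberately stored; `undefined` means nobody ever set anything.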


> I would actually say overall C++ is much more reasonable.

This is where I know that, some people, are not actually programming in either of these languages, but just writing meme driven posts.

JS has a few footguns. Certainly not so many that it's difficult to keep in your head, and not nearly as complex as C++, which is a laughable statement.

You've "seen null make it to the database," but haven't seen the exact same thing in C++? Worse, seen a corrupted heap?


I haven't seen null make it to the database, I've seen undefined. And here you demonstrate one of many problems - there's multiple null types!

In C++, there's only one null, nullptr. But most types can never be null. This is actually one area where C++ was ahead of the competition. C# and Java are just now undoing their "everything is nullable" mistakes. JS has that same mistake, but twice.

It's not about complexity, although that matters too. C++ is certainly more complex, I agree, but that doesn't make it a more footgunny language. It's far too easy to make mistakes in JS and propagate them out. It's slightly harder to make mistakes in C++, if you can believe it. From my experience.


I am really trying to get on board, but I don't know what you mean. I see your anecdote about two different codebases and you found the C++ one easier, and I'm sure it happens.

> And here you demonstrate one of many problems - there's multiple null types

In JS, null and undefined are not meaningfully different unless you're checking for exactly one or the other. And there's little reason to do that. It has never come up for me that undefined made it through where null wouldn't have. But yes, you need to check if things are defined.

> In C++, there's only one null, nullptr. But most types can never be null

C++ absolutely has undefined, it just doesn't tell you about it. If I declare `int myVar;` it's uninitialized: there's no telling what value it holds, but C++ will happily treat it as an `int`. And it can be a lot worse. Variables can be memory reinterpreted as the wrong type. They can be truncated memory, dangling pointers, memory freed twice.

But with JS, if I access memory that wasn't explicitly set, it says "that's undefined." That's a good and explicit thing to tell me, and I can actually check for it. And the GC, obviously, avoids a whole class of cases where undefined would come up.

> This is actually one area where C++ was ahead of the competition

For the reasons above, I would say C++ is literally the worst option for null safety. Unless we define safety as "don't tell me this is undefined, even though it is."

> It's far too easy to make mistakes in JS and propagate them out

I'm just not sure why. I would say C++ has absolutely every footgun JS has and more.


C++ is at least somewhat strictly typed and - again - most types cannot be null.

Make std::string null. You can't. Make std::vector null. You can't. It's one of the benefits of value types.

That, alone, eliminates a whole class of bugs that JS has. Yes, there are edge cases in C++ too. But the edge cases are the common cases in JS in this regard, and that's why I class them differently. Really, every language can do everything, basically. But how easy is it, and how common is it?

Also, for the record, using an uninitialized variable will result in a warning or compilation error.

None of this is to say that C++ does not have other glaring problems that JS does not. But, in my experience, it's slightly more difficult to create logic bugs in C++. I have also worked with PHP, a similar situation to JS: so footgunny that in practice the codebase is riddled with bugs that rarely manifest, but are there. Everything you do has, like, a dozen implications and edge cases you need to consider.


C# introduced nullable reference types back in 2019, so it's been some time and now the vast majority of the ecosystem uses null-aware code. The only remaining warts are around a. codebases which refuse to adopt this / opt out of it and b. serialization.


Yeah, I don't know how someone can say that with a straight face to other engineers.

It's like people just talk in memes or something.

This is how a lot of discourse feels these days. People living in very different realities.

Though in this case, seeing the most complex C++ app they've built would illuminate what's going on in theirs.


It's not a different reality. To give perspective to what JS I've dealt with - I worked a couple years on a legacy webapp. It used vanilla JS and the only library used was jQuery. It heavily used iframes for async functionality in combination with XSLT to translate backend XML apis to HTML.

Opening up a 10K-line JS file is like jumping into the ocean. Nothing is obvious, nothing makes sense. You're allowed to just do whatever the fuck in JS. Bugs were always ephemeral. The behavior of the code was impossible to wrap your head around, and it seemed to change under your feet when you weren't looking.

Now, the backend was written in old C++. And yes, it was easier to understand. At least, I could click and go to definition. At least, I could see what was going in and out of functions. At least, I could read a function and have a decent understanding of what it should be doing, what the author's intention is.

The front end, spread across a good thousand JS files, was nothing of the sort. And it was certainly more buggy. Although, I will concede, bugs in C++ are usually more problematic. In JS usually it would just result in UI jankyness. But not always.


> You're allowed to just do whatever the fuck in JS.

I think that’s a feature not a bug.

But then again, I generally like and use Typescript.


It's definitely a feature when you're starting out. But as the codebase grows and ages, it becomes a bug.

The problem is that the behavior becomes so complex and so much is pushed to runtime that there's no way to know what the code is actually doing. There's paths that are only going to be executed once a year, but you don't know which ones those are. Eventually, editing the code becomes very risky.

At this particular codebase, it was not uncommon to see 3, 4 or 5 functions that do more or less the same thing. Why? Because nobody dared change the behavior of a function, even if it's buggy. Maybe those bugs are the only thing keeping other bugs from cropping up. It's like whack-a-mole. You fix something, and then the downstream effects are completely unpredictable.

It becomes a self-eating snake. Because the codebase is so poor, it ends up growing faster and faster as developers become more risk-averse.


> The problem is that the behavior becomes so complex and so much is pushed to runtime that there's no way to know what the code is actually doing. There's paths that are only going to be executed once a year, but you don't know which ones those are. Eventually, editing the code becomes very risky.

It's clear you've worked on a really bad codebase, but this is totally different from what you were originally suggesting, which was null being a bigger problem in JS than C++.

What you're describing is frontend development with JQuery, not the capabilities of JS. Now, I don't like NextJS for a lot of reasons, but code organization is not one of them. Everything you're complaining about is dead simple in NextJS. Functions are localized, tracking localized state is easy, and even the older React state libraries have lots of support for tracking and visualizing the more global state changes.

I will die on the hill that JQuery, outside of very small interactivity, is fucking bad. People use JQuery terribly and it creates spaghetti code where you can't tell what the execution path is. People don't document QA testing. And JQuery makes you target html attributes to make changes. That's fun if you have to do it once, it's terrible if all of your app interactivity relies on it.

But that's not due to "undefined", and it would be 100x worse in C++. I am assuming the C++ codebase you compared it to was not a DOM manipulating app?

I don't like NextJS for performance and market reasons, but the codebases I've worked on have been very clean.


IMO jQuery is a symptom, not a cause. It's the natural end-state of a stringly-typed "do whatever the fuck you want" attitude. You can create just as much spaghetti without jQuery; jQuery just lets you do it with fewer characters.

The horrors I’ve seen. Constructing a custom string based off of user input and then tacking on () and calling it as a function? We can do that? Apparently yes we can. The most cursed type of polymorphism where nothing makes sense and you’d have more success communing with the dead than deducing the control flow.


If you already know another backend language and framework, all you need to do is tell an LLM or some code generator to convert your models between languages. There is very little overhead that way.

I greatly prefer Java with Spring Boot for larger backend projects.


What is 0.1 + 0.2 in JavaScript? I'll give you a hint: it's not 0.3. Is that fine?


That's not a JavaScript issue. It's the same for almost any language where you don't use some bignum type/library. This is something all developers should be extremely aware of.


https://en.m.wikipedia.org/wiki/IEEE_754

To answer your question directly - yes, it’s fine, it’s actually expected behavior.
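To make the expected behavior concrete, here it is along with the usual epsilon-comparison workaround:

```javascript
// IEEE 754 double-precision arithmetic, identical in Python, Java, C, etc.
console.log(0.1 + 0.2);          // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3);  // false

// The common workaround: compare within a small tolerance
// instead of testing floats for exact equality.
console.log(Math.abs((0.1 + 0.2) - 0.3) < Number.EPSILON); // true
```

If you need exact decimal arithmetic (e.g. money), use integers in the smallest unit (cents) or a decimal library instead.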


I have repeated this elsewhere. APIs for UI tend to diverge from APIs in general in practice.

For applications that are not highly interactive, you don't need a lot of tooling on the BE, and since you need to have a BE anyway, a lot of standard tooling is already there.

React style SPAs are useful in some cases, but most apps can live with HTMX style "SPA"s


Agreed. We started with one API to rule them all. What happened? Now we've got two... and now we have to communicate like this:

“So the backend gave this weird …”

“What backend?”

“The backend for the frontend…”

“So not the backend for the backend for the frontend?”

I jest, but only very slightly.


Exactly. And state is in two places now. It's like building two applications and trying to somehow keep them in sync.


> APIs for UI tend to diverge from APIs in general in practice.

I'm arguing to just use a single API, not creating one for UI, at least when you want things to be simple for multiple clients.



So here's the kicker: React Server Components don't need a server. They are completely compatible with a static bundle and still provide major benefits should you choose to adopt them (dead code elimination, build-time execution). This is effectively the design of Astro Islands, natively in React Server Components. Letting you write static and client-side dynamic code in a single paradigm through componentization and composition.

If you are curious, my most recent blog post is all about this concept[0] which I wrote because people seem to be misinformed on what RSCs really are. But that post didn't gain any traction here on HN.

Is it more complex? Sure–but it is also more powerful & flexible. It's just a new paradigm, so people are put off by it.

[0] Server Components Give You Optionality https://saewitz.com/server-components-give-you-optionality


Then they are poorly named.


I generally agree. Naming things is among the hardest problems in computer science


You can accomplish the "don't have to reload the page to see my changes" with htmx, and it's still "server-side rendering" (or mostly server-side rendering). Legendarily, the fastest website on the internet uses partial page caching to achieve its speed.


What do you like about HTMX? I come from a world of plain JS usage -- no SPAs or the like. I felt like HTMX was just a more complicated way to write what could be simple fetch() requests.


I like that it still feels like HTML. I think that's its biggest selling point.

You write:

  <div id="moo"></div>
  <form hx-put="/foo" hx-swap="outerHTML" hx-target="#moo">
     <input hidden name="a" value="bar" />
     <button name="b" value="thing">Send</button>
  </form>
Compared to (ChatGPT helped me write this one, so maybe it could be shorter, but not that much shorter, I don't think?):

  <div id="moo"></div>
  <form>
     <input hidden name="a" value="bar" />
     <button name="b" value="thing" onclick="handleSubmit(event)" >Send</button>
  </form>

  <script>
  async function handleSubmit(event) {
    event.preventDefault();
  
    // the form submit stuff
    const form = event.target.form;
    const formData = new FormData(form);

    const submitter = event.target;
    if (submitter && submitter.name) {
      formData.append(submitter.name, submitter.value);
    }

    // hx-put
    const response = await fetch('/foo', {
      method: 'PUT',
      body: formData,
    });

    // hx-swap
    if (response.ok) {
      const html = await response.text();
      // hx-target
      const target = document.getElementById('moo');
      const temp = document.createElement('div');
      temp.innerHTML = html;
      target.replaceWith(temp.firstElementChild);
    }
  }
  </script>
And the former just seems, to me at least, way way *way* easier to read, especially if you're inserting those all over your code.


Yeah, the JS could technically be shorter, but your example is functional enough to get the point across.

Going with your example, how would you do proper validation with HTMX? For example: the input element's value cannot be null or empty; if the validation fails, then a message or something is displayed; if the validation is successful, then that HTML is replaced with whatever.

I have successfully gotten this to work in HTMX before. However, I had to rely on the JS API, which is outside the realm of plain HTML attribute-based HTMX. At that point, especially when you have many inputs like this, the amount of work one has to do with the HTMX JS API starts to look a lot like the script tag in your example, but I would argue it's actually much more annoying to deal with.


> At that point, especially when you have many inputs like this, the amount of work one has to do with the HTMX JS API starts to look at lot like the script tag in your example

Not sure exactly what you're talking about w.r.t 'the amount of work one has to do with the HTMX JS API', but I've found that form validation with htmx and a backend server works really well with just a tiny bit of JS and a little care crafting the backend's validation error response: https://dev.to/yawaramin/handling-form-errors-in-htmx-3ncg


two possible approaches:

1. Do the validation server side and replace the input (or a label next to the input, see https://htmx.org/examples/inline-validation/)

2. Use the HTML 5 Client Side form validation API, which htmx respects:

https://developer.mozilla.org/en-US/docs/Learn_web_developme...
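A sketch of approach 2 (the endpoint and field names are illustrative): htmx respects the browser's built-in constraint validation, so it won't issue the request until the checks pass, with no extra JS for the common cases:

```html
<form hx-post="/signup" hx-target="#result">
  <!-- the browser blocks submission (and htmx blocks the request)
       until these constraints are satisfied -->
  <input name="email" type="email" required />
  <input name="age" type="number" min="18" required />
  <button>Sign up</button>
</form>
<div id="result"></div>
```

Anything the browser can't express (e.g. "this username is taken") falls back to approach 1, server-side validation with a fragment response.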


Well, I never expected to get a reply from the man himself. Thank you for taking the time to respond.

So, I did end up going with #1 with a slight variation.

You also commented on another comment of mine stating:

> if you are using the htmx javascript API extensively rather than the attributes, you are not using htmx as it was intended

There seems to be some confusion, and I apologize. I extensively used attributes. That wasn't the part of the API I was referring to. Rather, I should have specified that I was heavily relying on the htmx.on() and htmx.trigger() methods. My usage of htmx.trigger() was predominantly due to something being triggered on page load, but also needing to be triggered again if a certain condition was met -- to refetch the HTML with the new data -- if that makes sense.

I should also preface that I was working on this project about two years ago. It looks like a lot has changed with HTMX since then!



I appreciate the suggestion. Not sure I am a fan of this implementation though. It looks near identical to the HTMX JS API that is already baked into HTMX. Most of the annoyances I dealt with were around conditional logic based on validation.

After enough of the HTMX JS API, I figured, "What is HTMX even buying me at this point?" Even if plain JS is more verbose, that verbosity comes with far less opinions and constraints.


if you are using the htmx javascript API extensively rather than the attributes, you are not using htmx as it was intended


If you truly need for MVC to manage all things state, component communications, and complex IxD in the front-end, sure, but not every app has that level of front-end complexity to warrant a SPA, in my opinion.


With an SPA you're writing two apps that talk to each other instead of one. That is, by definition, more complex.

> You still have to deal with all the tooling you are talking about, right? You’ve just moved the goalpost to the BE.

Now you're dealing with 2 sets of tooling instead of 1.

> And just like the specific use cases you mentioned for client routing I can also argue that many sites don’t care about SEO or first paint so those are non features.

There is no app which would not care about first paint. It's literally the first part of any user experience.

> So honestly I would argue for SPA over a server framework as it can dramatically reduce complexity. I think this is especially true when you must have an API because of multiple clients.

So SEO and first paint are not necessary features, but an API for multiple clients is? Most apps I've worked with for over 15 years of web dev never needed to have an API.

> I think the DX is significantly better as well with fast reload where I don’t have to reload the page to see my changes.

With backend apps the reload IS fast. SPA's have to invent tooling like fast reload and optimistic updates to solve problems they created. With server apps, you just don't have these problems in the first place.


> You still have to deal with all the tooling you are talking about, right? You’ve just moved the goalpost to the BE

This. From a security perspective, server side dependencies are way more dangerous than client side.


As somebody with expert-level knowledge of MVC frameworks like Ruby on Rails and Phoenix, and experience building large-scale enterprise-grade apps using simpler front-end technologies (jQuery, StimulusJS, and plain old JavaScript, with a little React thrown in here and there), I found development cycles to be much faster with these simpler stacks overall. The complexity of the codebase never turned into a liability creating significant overhead, or a bottleneck for new engineers trying to jump in and understand the end-to-end workflows.

Fast forward to what I am doing today in my new job. We have a pretty complex setup using RedwoodJS along with several layers of abstraction around GraphQL (which I approve of) and a ton of packages and modules tied together on the front end with React, Storybook, etc., plus some things I am not even sure why they are there. I see new engineers joining our team and banging their heads to make even the smallest of changes, having to touch code in multiple different places to implement new features. I find myself doing similar things from time to time, and I always can't help but think back to those MVC frameworks and how ridiculously easy it was to just put logic in a controller and a service layer, and the UI stuff in view templates. It all fit together so easily, and shipping features was super simple and quick.

I wouldn't discount React as a framework, but I am also starting to see some cracks caused by using TypeScript on the backend. This entire JavaScript world seems to be a mess you don't want to mess with. This is probably just me with an opinion, but using Turbo, Stimulus, and sprinkles of LiveView got me really far very quickly.


I never get these comments. I would choose a Next.js / React project to work on 99% of the time compared to the hellish nightmare that is jQuery.


Interesting, because I think jQuery, although a nightmare, is a much smaller one than today's stack of React single-page apps. Everything from bundling to package management, and the hell that is modules and dependencies, seems to be too much to maintain. I can probably take it on the front end, but I cannot take JavaScript on the back end.


I would choose neither; there are much easier options available.


What would you say the good and bad of GraphQL are? Like, when it is a value-add, and when should it be avoided?


The good news is GraphQL is very quick and easy to pick up, and it gives you built-in functionality to fetch exactly the data you need. On top of that, it has enough flexibility to integrate with your business logic, so it can be a straightforward replacement for a traditional REST API that you would otherwise have to build by hand.

For the disadvantages, I cannot think of any. It is a bit slower than hand rolling your own REST API, but the difference is not severe enough to make you give up on it.


GraphQL APIs can easily DOS your backend if you don't configure extra protections (which are neither bulletproof nor enabled by default), they suffer from N+1 inefficiencies by default unless you write a ton of extra code, and they require extra careful programming to apply security rules on every field which can get very complex very fast.
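For example, a query like this is attacker-controlled unless you add depth or complexity limits (the schema here is hypothetical):

```graphql
# Each level of nesting multiplies resolver calls; without a depth
# limit, a client can keep nesting until the backend falls over.
query {
  posts {
    author {
      posts {
        author {
          posts { title }
        }
      }
    }
  }
}
```

Each `posts` level can also trigger one query per author unless you batch lookups, which is the N+1 problem mentioned above.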

On the plus side, it does offer communication advantages if you have entirely independent BE and FE teams, and it can help minimize network traffic in network-constrained scenarios such as mobile apps.

Personally, I have regretted using GraphQL every time.


My biggest gripe is losing the entire layer of semantics that HTTP gives you. POST is the only verb and different error states are conveyed via error objects in the returned JSON.


The biggest issue is security. More often than not, the API allows you to see more than you should.


This is probably true, and it can only be uncovered by rigorous testing. There is a bunch of layers of abstraction that won't be very obvious if you are using GraphQL as opposed to rolling your own REST API.


People also forget just how far you can get without using client side JavaScript at all today. HTML and CSS have a lot of features built in that used to require JavaScript.
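A few examples that used to require JavaScript and no longer do (all broadly supported in current browsers):

```html
<!-- native disclosure widget (used to need a JS accordion) -->
<details>
  <summary>Show more</summary>
  <p>Hidden until the user expands it.</p>
</details>

<!-- autocomplete suggestions without a JS dropdown library -->
<input list="browsers" name="browser" />
<datalist id="browsers">
  <option value="Firefox"></option>
  <option value="Chrome"></option>
</datalist>

<!-- smooth in-page scrolling, once a JS staple, now one CSS line -->
<style>
  html { scroll-behavior: smooth; }
</style>
```

None of this needs a bundle, a framework, or an npm install.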


New inputs types have been glacially slow to come out and often underwhelming. Every new HTML thing I've seen (modals, datetime inputs, datalist select, etc) had better JS versions out for years before they released. I understand that the HTML spec is setting a baseline of sorts but most of the UI is ugly and sometimes not themeable/styleable.


The best approach is to use both. Which is why I never understood the pure server side or the pure "reactive" approach. Having to manage rendering in server side code is pure pain, and having to return DOM markup from inside a function is likewise just madness. They both break the separation of concerns.

The first framework I ever got to use was GTK with Glade, and Qt with Designer shortly thereafter. These, I think, show the correct way to arrange your applications anywhere, and it works great on the web too.

Use HTML and CSS to create the basis of your page. Use the <template> and <slot> mechanisms to make reusable components or created widgets directly in your HTML. Anything that gets rendered should exist here. There should be very few places where you dynamically create and then add elements to your page.

Use JavaScript to add event handlers and respond to them, and just run native functions on the DOM to manage the page. The dataset on all elements is very powerful, and WeakMaps exist for when that's not sufficient. You have everything you need right in the standard environment.

If your application is API driven then you're effectively just doing Model-View-Controller in a modern way. It's exceptionally pleasant when approached like this. I have no idea why people torture themselves with weird opinionated wrappers around this functionality, or in the face of an explosion of choices, decide to regress all the way back to server side rendering.


> Every line of client JS brings build tooling, npm audit noise, and another supply chain risk.

IME this is backwards. All that stuff is a one-off fixed cost, it's the same whether you have 10 lines of JS or 10,000. And sooner or later you're going to need those 10 lines of JS, and then you'll be better off if you'd written the whole thing in JS to start with rather than whatever other pieces of technology you're using in addition.


It's absolutely not a one-off fixed cost, it's a constant treadmill of new vulnerabilities, upgrade breakages, and ecosystem incompatibilities. Just because you need 10 lines of JS, doesn't mean you 'might as well' have 10,000.


10 lines of JS fits into a screen and can be reasoned about quite easily. Now do the same for 10000.


I don't know.

Many interactions are simply better delivered from the client. Heck some can only be exclusively delivered from the client (eg: image uploading, drag and drop, etc).

With HTMX, LiveViews, etc there will be challenges integrating server and client code... plus the mess of having multiple strategies handling different parts of the UI.


HTMX has a very nice drag and drop extension I just found, though. And old-school forms can include image files. The little image preview can be a tiny "island of JS" if you have to have it.


> The little image preview can be a tiny "island of JS" if you have to have it.

I would consider that the bare acceptable minimum, along with an upload progress indicator.

But it can get a lot more complicated. What if you need to upload multiple images? What if you need to sort the images, add tags, etc? See for example the image uploading experience of sites like Unsplash or Flickr.

HTMX just isn't the right tool to solve this unless you're ready to accept a very rudimentary UX.


None of what you described requires anything more than an isolated island with some extra JS. No need for complex client-side state, no need for a SPA framework, no bundling required, not even TypeScript. If you relied on DOM and a couple of hidden fields, 90% of this would be a few dozen lines of code plus some JSDoc for type safety.
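A rough sketch of that island idea (all ids, names, and the endpoint are invented; a real version would add tagging and drag-to-sort on top of the same hidden field):

```html
<form action="/photos" method="post" enctype="multipart/form-data">
  <input type="file" id="pick" name="images" multiple accept="image/*">
  <ul id="previews"></ul>
  <!-- Sort order travels back to the server as ordinary form data -->
  <input type="hidden" id="order" name="order">
  <button>Upload</button>
</form>

<script>
  const pick = document.getElementById('pick');
  pick.addEventListener('change', () => {
    const list = document.getElementById('previews');
    list.replaceChildren();
    for (const file of pick.files) {
      const li = document.createElement('li');
      const img = document.createElement('img');
      img.width = 80;
      img.src = URL.createObjectURL(file); // local preview, nothing uploaded yet
      li.append(img, ' ', file.name);
      list.append(li);
    }
    // Reordering the list items would rewrite this field.
    document.getElementById('order').value =
      [...pick.files].map((f) => f.name).join(',');
  });
</script>
```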


> No need for complex client-side state

Please implement a multi image upload widget and then come back to argue about this.


> along an upload progress indicator

I could be misremembering, but didn't browsers used to have this built in? Like there used to be a status bar that showed things like network activity (before we moved to a world where there is always network activity from all of the spying), upload progress, etc.

I don't remember if it was in Firefox, but SeaMonkey even has a "pull the plug" button to quickly go offline/online in the status bar.

Bizarre that "progress" is removing basic functionality and then paying legions of developers to re-add it in inconsistent ways everywhere.


How so? I’ve found that Phoenix LiveView has made integrating the server and client code much simpler. It’s dramatically reduced the need to write JavaScript in general, including for things like image uploads. Or are you speaking of one of its many clones?


This is not my field, but my mental model was that server side mostly died when mobile apps became mainstream, and treating the web app as just another frontend for your common API was considered the best way to handle client diversity.

Was this not the case? And if so, what has fundamentally changed?


It's one of those things that's like "write one HTML file with zero styling, then you can have multiple different CSS files style the same content completely differently! Separation of Concern!" Sounds perfect in theory but just doesn't always work.

Having one API for web and mobile sounds good but in practice often the different apps have different concerns.

And SEO and page speed were always reasons the server never died.

In fact, the trend is the opposite direction - the server sending the mobile apps their UIs. That way you can roll out new updates, features, and experiments without even deploying a new version.


>In fact, the trend is the opposite direction - the server sending the mobile apps their UIs. That way you can roll out new updates, features, and experiments without even deploying a new version

Is that allowed by app stores? Doesn’t it negate the walled gardens if you can effectively treat the app as a mini browser that executes arbitrary code?


My previous workplace did this.

What app stores don't like is you reinventing JavaScript, i.e. shipping your own VM. What they don't mind is you reinventing HTML and CSS.

So it is common today for servers to send mobile apps payloads like

  {"Elementtype": "bottom_sheet_34", "fg_color": "red",..., "actions": {"tap": "whatever"}, ... }
However, the code that takes this serialised UI and renders it, and maps the action names to actual code, is shipped in the app itself. So the app stores don't mind it.
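The client-side half of that mapping might look something like this (a hypothetical sketch; the element type, action name, and output format are all invented to mirror the example payload above):

```javascript
// Registry of renderers shipped in the app: the server can only ask
// for element types the installed version already knows about.
const renderers = {
  bottom_sheet_34: (spec) =>
    `<sheet color="${spec.fg_color}" onTap="${spec.actions?.tap ?? ''}">`,
};

// Action names in the payload map to real functions compiled into the app.
const actions = {
  whatever: () => 'did the thing',
};

function renderServerUI(spec) {
  const render = renderers[spec.Elementtype];
  if (!render) return '<unsupported/>'; // old client, unknown element
  return render(spec);
}

// Payload mirroring the example above:
const payload = {
  Elementtype: 'bottom_sheet_34',
  fg_color: 'red',
  actions: { tap: 'whatever' },
};

console.log(renderServerUI(payload));
// <sheet color="red" onTap="whatever">
```

Old app versions hitting a new element type just fall through to the unsupported branch, which is why these systems degrade gracefully without forced updates.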

This is what the GP is talking about.

It covers a surprising number of use cases, especially since many actions can be simply represented using '<a href>' equivalents -- deeplinks. With Lottie files, even animations are now server controlled. However, more dynamic actions are still client-controlled and need app updates.

Additionally, any new initiative (think a new feature, or a temporary page for, say, Valentine's Day) is done with webviews. I'm not clued in on the review process for this.

Nevertheless, if your app is big enough then almost every rule above is waived for you and the review process is moot, since once you become popular the platform becomes your customer as much as you are theirs. For example, TikTok ships a VM and obfuscated bytecode for that VM to hinder reverse engineering (and, of course, to hide stuff).


It's not really allowed but they aren't policing it, so, the technique ("code push") continuously grows.


Expo, the most popular React Native framework, markets remote updates as a feature and highlights that they let you skip App Review, and Apple hasn't stopped them. (Expo updates aren't exactly server-side mobile UI, but it's a similar idea.)


> Of course, Figma‑ or Gmail‑class apps still benefit from heavy client logic, so the emerging pattern is “HTML by default, JS only where it buys you something.” Think islands, not full SPAs.

Figma is written in C++ compiled to WebAssembly.


"Right-sizing" is probably the most diplomatic take on all tech churn. It's the right way to look at it. It's not that we're done with it once and for all, it's just it's not the end all be all that conferences/blogs/influencers make things out to be. It's more of an indictment of the zealotry behind tech evangelism.


I recently blogged about [0] how I felt after some experiments with a traditional multi-page application with simple Web Components as "Progressive Enhancement". Taking an "offline first" approach to the "simple CRUD" of the application (saving everything all the time in Local Storage and then doing very simple "3-way merges" with remote data as it catches up) made it feel enough like a SPA that I was thrilled with overall performance. Think I just need to add CSS View Transitions for one last bit of "SPA-like" polish.
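The merge logic for that kind of offline-first sync can stay tiny. Here's a hedged sketch of a simple field-level 3-way merge (the data, field names, and last-write-wins conflict policy are my own assumptions, not the blog post's actual implementation):

```javascript
// Compare local (Local Storage draft) and remote copies against a
// common base snapshot kept from the last successful sync.
function threeWayMerge(base, local, remote) {
  const merged = {};
  const keys = new Set([
    ...Object.keys(base), ...Object.keys(local), ...Object.keys(remote),
  ]);
  for (const key of keys) {
    const localChanged = local[key] !== base[key];
    const remoteChanged = remote[key] !== base[key];
    if (localChanged && remoteChanged && local[key] !== remote[key]) {
      // Real conflict: here the local edit wins; a real app might
      // prompt the user or keep both versions instead.
      merged[key] = local[key];
    } else {
      merged[key] = localChanged ? local[key] : remote[key];
    }
  }
  return merged;
}

const base   = { title: 'Book club', notes: '' };
const local  = { title: 'Book club', notes: 'offline edit' };
const remote = { title: 'Book Club!', notes: '' };

console.log(threeWayMerge(base, local, remote));
// { title: 'Book Club!', notes: 'offline edit' }
```

Because only changed fields move, each side's independent edits survive the merge without any heavyweight sync framework.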

I am starting to think now is a great time to return to some of the Knockout-era ideals of "Progressive Enhancement". Web Components, the template tag, local storage, CSS view transitions, and a few other subtle modern things seem to be getting close to the point where the DX is as good or better than SPAs and the UX feels similar or better, too.

[0] https://blog.worldmaker.net/2025/04/27/book-club/


> they just need partial HTML swaps.

Been a web dev for over a decade, and I still use plain JS. I have somehow managed to avoid learning all the SPAs and hyped JS frameworks. I used HTMX for one project, but I still prefer plain JS.

I was a jQuery fan back in the day, but plain JS is nothing to scoff at these days. You are right though: in my experience at least, I do not need everything I write to happen on a single page, and I am typically just updating something a chunk at a time. A couple of event listeners and some async HTTP requests can accomplish more than I think a lot of people realize.
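That pattern, in full, is about this much code (the endpoint and ids are invented for illustration):

```html
<div id="orders">…server-rendered table…</div>
<button id="refresh">Refresh</button>

<script>
  document.getElementById('refresh').addEventListener('click', async () => {
    // Ask the server for just the fragment and swap it into place.
    const res = await fetch('/orders/fragment');
    document.getElementById('orders').innerHTML = await res.text();
  });
</script>
```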

However, if I am being honest, I must admit one downfall. Any moderately complex logic or large project can mud-ball rather quickly -- one must be well organized and diligent.


I think the confession that "Figma‑ or Gmail‑class apps still benefit from heavy client logic" is a telling one, and the reason I politely disagree with your thinking is that it relies on the app staying small forever. But that's not what happens. Apps grow and grow and grow.

I've heard people say they just want "Pure JS" with no frameworks at all because frameworks are too complex, for their [currently] small app. So they get an app working, and all is good, right until it hits say 5000 lines of code. Then suddenly you have to re-write it all using a framework and TypeScript to do typing. Better to just start with an approach that scales to infinity, so you never run into the brick wall.


The problem is that people are frequently using SPA JS frameworks for things that are clearly not Gmail or Figma -- i.e. news websites and other sites with minimal interactivity or dynamic behaviour. If you are genuinely building an 'app'-like thing, then of course you need some kind of JS SPA framework, but too often people are reaching for these tools for non-app use cases.


I say if you have any reactivity whatsoever you need a framework. If you don't your code will be crap, and there's really no getting around that. Once you start doing DOM calls to update GUI you've got a mess on your hands instantly, and it will only get worse over time.


Yeah, wouldn't want to rewrite the frontend in a new framework. Good thing the SPA frameworks are so stable and solid; when I choose one I will surely be able to use it for a good, oh, 3 to 6 months.


React hooks came out in 2019. That's 6 years ago. And they are still the way to write client components. Unless you're moving everything to server components (which you most likely can't and shouldn't anyways) you would be writing the same react code for 6 years.

This is just intentional ignorance.


We still have Vue 2 apps running strong. Our experimental stuff is on Vue 3, which is backwards compatible with Vue 2 for the most part if you avoided mixins (which even in the Vue 2 days was the common advice).

People who say stuff like this have obviously never actually used modern day FE frameworks, because they have all been very stable for a long while. Yes, APIs change, but that's not unique to JS/frontend, and also nothing really forces you to update with them unless you really need some shiny new feature, and at least IME Vue 3 has been nothing but gold since we got on it.


I agree. My preference is React, but I've got 4 years of Vue experience and so I know Vue is good too, and mature. There are just some people who are anti-framework entirely, and they're never actual professional web developers, but mostly hobbyists or college dabblers, who've never been involved with a large project, or else they'd know how silly their anti-framework attitude is.


This feels sarcastic but in reality ever since react switch to using hooks I’ve largely written the same style of react code for years. You don’t have to live on the edge.


The idea that apps can never be done and can never stop adding new features is the key determiner of web bloat. This is the problem.


Well any non-framework interactive app that ever reaches the point of being "bloated" is necessarily a train wreck, regardless. You can have massive apps that use a framework, and has a good design too. However if you have a massive app without a framework then it's absolutely guaranteed to be a dumpster fire. So frameworks help manage bloat, but lack of frameworks fails completely once the project is large.


I’ve been saying this for a long time. It takes very little effort to spin up a react app so there’s little point in starting a project without it or whatever front-end framework you prefer.

As I’ve become more senior I’ve realized that software devs have a tendency to fall for software “best practices” that sound good on paper but they don’t seem to question their practical validity. Separation of concerns, microservices, and pick the best tool for the job are all things I’ve ended up pushing back against.

In this particular case I’d say “pick the best tool for the job” is particularly relevant. Even though this advice is hard to argue against, I think it has a tendency to take developers down the path of unnecessary complexity. Pragmatically it’s usually best to pick a single tool that works decently well for 95% of your use cases instead of a bunch of different tools.


I agree, just use React from day one. The reality is that web pages are hardly ever perfectly static, and once there's any dynamic nature to it at all you need something like React or else you'll have a train wreck of JS DOM-manipulation calls before you know it. React is perfect. You just update your state and the page magically re-renders. It's a breeze.


Multiple times in this thread you have been taking the hardline stance that a framework is always necessary while stating that others are saying the same in the opposite direction. In reality, most people seemingly advocating for non-React are actually saying to start simple and add the complexity where and when it’s needed.

Further, being against a bloated framework is not the same as being against frameworks. Those frameworks are actually principles. It’s possible for a team to come up with or use existing principles without using a framework.

Finally, “always use React” brings other costs. You need a team to build your system twice. That means you need bigger teams; more funding to do the same thing, and so on. You add complexity at the team level and at the software level when using frameworks. The person above you said that blindly “following best practices” is bad while stating a “best practice” of always start with React. That particular “best practice” not always being the best practice is the entire point of this thread.


Your reading of my point is very strange. When I said “best practices” I clearly meant the most commonly repeated best practices. If I cast doubt on those best practices then clearly I intend to replace them with other best practices that I think are better, and that’s what I did. And suggesting better practices doesn’t imply that I think people should blindly follow them.

> Most people seemingly advocating for non-React are actually saying to start simple and add the complexity where and when it’s needed.

In my experience that’s actually not the case. That might be what people claim, but in my professional experience some people really don’t like frontend work and they try to avoid frontend frameworks because they think it’ll make their work more tolerable, but what usually happens is they start out “simple” but pretty quickly product requirements come in that are hard to do without some framework, then there’s a scramble to add a framework or hack it into some parts of the app.


So there are a couple of problems with your characterization. Firstly, your blanket assumption that people who want to avoid frontend complexity are really just frontend haters. Sure, there are some people who hate 'web shit', but there are many others, like myself, who want to build useful webapps that are as simple and performant as possible. It doesn't make us frontend haters, it just makes us different.

The second assumption where I believe you are going wrong is that 'product requirements' will always come in and force using a framework. Imho if I look around at most of the webapps I am using, very few of them actually need a SPA framework. Take Jira for example. Does it really need to be fully a SPA? It has some highly interactive parts, which could be done with eg web components, but it's mostly a boring CRUD app that would work fine with eg htmx.


Exactly right. The "start simple and add the complexity where and when it’s needed" approach is a badly flawed way of thinking in web apps, because once an app becomes too big to manage without BOTH a framework AND a type-safe language (TypeScript), you realize everything you've done up to that point must be reworked line by line, which costs you weeks, and you have to retest everything, and will make mistakes as you try to fix the mess. It's a mess that's easily avoided, just by using a framework from day one.

You can't just switch horses in the middle of the stream. You have to ride back to the ranch, saddle up on a different horse, and restart your entire journey on a better horse.


I went out of my way to say it's only my preference to use React, but that Vue is fine too. So the thing I have a "hardline stance" (your words) on, if anything, is that a framework should be used for any interactive web app.

Having been a web developer for a quarter century, I know how tempting it is (yes, for small projects) to try to just wing it and do everything without a framework, and I know what a tarpit that way of thinking is. If you disagree, then you are certainly welcome to share your own opinion.


> most people seemingly advocating for non-React are actually saying to start simple

This assumes that non-React approaches are simple.


That's a good point. non-Framework approaches are not simpler actually. It's a tarpit that looks enticing. You think you can keep stuff "simple" by not using a framework, and in only 2 weeks you've got such complexity you're begging for a framework that does it all automatically, which is kinda...ya know...what frameworks are for. lol.

So you delete the non-framework garbage code and start over. lol.


That’s absurd, that’s like saying we should only use C++ for backend code because my CRUD business app might one day scale to infinity. Better be safe than sorry and sling pointers and CMake just in case I need that extra juice!


imo, even if the only "interactivity" a web app has is just a login page, then even that alone is enough to warrant using a framework rather than doing direct DOM manipulation (or even worse, full page refreshes after a form submit).

It's not about using the most powerful tool always, it's about knowing how to leverage modern standards rather than reinventing and solving problems that are already solved.


> ...a web app has is just a login page, then even that alone is enough to warrant using a framework rather than doing direct DOM manipulation (or even worse, full page refreshes

Maybe, but it doesn't necessarily need to be a SPA framework though. There are simpler libraries/frameworks like htmx that considerably reduce complexity and also let you avoid direct DOM manipulation.
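For the login-page case, the whole htmx version is roughly this (a sketch; the endpoint path and ids are invented, and the server is assumed to return an HTML fragment):

```html
<form hx-post="/login" hx-target="#result" hx-swap="innerHTML">
  <input name="email" type="email" required>
  <input name="password" type="password" required>
  <button>Sign in</button>
</form>
<!-- Validation errors from the server land here, no page refresh -->
<div id="result"></div>
```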


Yeah, the debate about which framework is best is like debating which programming language is best, in that they all have pros and cons, that can be debated endlessly.

However, in general, whatever's most popular is a strong signal (like a crowd-sourced signal) that it's probably indeed "the best" all around, which are two words that can also, of course, be debated endlessly based on one's personal definition of "best".


Not really. If you use React Router, you can have a client-side JS app and add SSR with a couple of hours' work. You get to have your cake and eat it too.


>Figma‑ or Gmail‑class apps.....

Figma is a definite yes. But Gmail is an example we've been citing since the late 00s and somehow still repeat. I thought it had been proven that we don't need an SPA for an email client. Hey is perfectly fine other than being a little slow, mostly due to server response time and not Turbo / HTML / HTMX itself.

I still believe we have a long way to go and innovate on partial HTML swaps. We could push this to the limit so that 98% of the web doesn't need SPAs at all.

I really hope Rails has more in store this year.


I believe allowing the dynamic loading of scripts was a mistake, and we should undo support for it. So were iframes.

Everything after ready should have been static content.


I completely agree with the sentiment that we don’t need SPAs and similar tech for news sites and dashboards and the myriad crud apps we use on a day to day basis but I think what you’re proposing is throwing the baby out with the bath water. How would a site like google maps, which I’m sure we can all agree is extremely useful, work in a Web 1.0 style world? It needs to dynamically load tiles and various other resources. The web being a place where we can host and instantly distribute complex cross-platform interactive software in a fairly secure sandbox is a modern marvel.


You misunderstand me, I’m not proposing we get rid of JavaScript.

I am saying that allowing for JavaScript to be dynamically downloaded and executed after the page is ready was a mistake.

You can build your Google docs, your maps, and figmas. You don’t need JS to be sent after the page is ready to do so.


Wouldn't this make users pay for every possible feature they could ever use on a given site? For instance, in Google Maps I might use Street View 1% of the time, and the script for it is pretty bulky. In your ideal world, would I have to preload the Street View handling scripts whenever I loaded up Google Maps at all?


If you’re asking if it would incentivize us to be more careful when introducing additional interactive functionality on a page, and how that functionality impacted performance and page speed, I expect it would.

Thinking about how the web was designed today, isn’t necessarily good when considering how it could work best tomorrow.


> If you’re asking if it would incentivize us to be more careful when introducing additional interactive functionality on a page, and how that functionality impacted performance and page speed, I expect it would.

Not quite, I wasn't trying to make a bigger point about is/ought dynamics here, I was more curious specifically about the Google Maps example and other instances like it from a technical perspective.

Currently on the web, it's very easy to design a web page where you only pay for what you use -- if I start up a feature, it loads the script that runs that feature; if I don't start it up, it never loads.

It sounds like in the model proposed above where all scripts are loaded on page-load, I as a user face a clearly worse experience either by A.) losing useful features such as Street View, or B.) paying to load the scripts for those features even when I don't use them.


“Worse” here is relative to how we have designed sites such as Google maps today. The current web would fundamentally break if we stopped supporting scripts after page load, so moving would be painful. However, we build these lazy and bloated monolith SPAs and Electron apps because we can, not because we have to. Other more efficient and lightweight patterns exist, some of us even use them today.

If you can exchange static content, you need very little scripting to be able to pull down new interactive pieces of functionality onto a page. Especially given that HTML and CSS are capable of so much more today. You see a lot of frameworks moving in this direction, such as RSCs, where we now transmit components in a serializable format.

Trade-offs would have to be made during development, and with a complex enough application there would be moments where it may be tough to support everything on a single page. However, I don’t think supporting a single page is necessarily the goal or even the spirit of the web. HTML Imports would have avoided a lot of unnecessary compilers, build tools, and runtime JS, for example.

https://www.w3.org/TR/html-imports/


How are you going to stop it, when you already are running JS? I can write a VM in JS that I can load, then I can load static assets after the page has loaded, and execute them in the VM. How would you block that?


I am thinking about a different time, when JS did less, and these decisions were being made.

Today, what you are saying is definitely a concern, but all APIs are abused beyond their intended uses. That isn’t to say we shouldn’t continue to design good ones that lead users in the intended direction.


> How would google maps work in a Web 1.0 world?

We had that in the form of MapQuest, and it was agonizingly slow. Click to load the next tile, wait for the page to reload, and repeat. Modern SPAs are a revelation.


What’s cool is that with HTML includes (the HTML Imports specification) we could have supported this without any JS.

https://www.w3.org/TR/html-imports/


That ship has sailed. The web is nowadays an application delivery platform and there is no going back. Dynamic loading, iframes, and a whole host of other features all have their uses within that context - the issue is really their misuse and overuse.


I don’t disagree at all mind you. I’ve been developing websites for 25 years or so now. I just think we could have done better with the standard.


For me it definitely never went away, as I've mostly done Java (JSP, JSF, Spring) or ASP.NET (Web Forms, MVC), with sprinkles of JavaScript islands.

And what makes me like Next.js, besides the SaaS SDKs that give me no other framework choice, is that it's quite similar to those experiences.


This honestly feels so much closer to the old jQuery approach of using JS for enhancements but ensuring you could gracefully degrade to work without it.

I loved building things that way.


This is ChatGPT



