Maybe I'm missing something, but ultimately, isn't this just a re-invention of a site-wide cache?
Back in the day, in Django/Drupal/any CMS, for anonymous users, you'd just dump each page into memcached with a path URL.
Then when the next anonymous visitor arrives, you serve it straight from memory, which is extremely fast – maybe even faster than reading from the file system. Nginx can even query memcached directly, bypassing the app server entirely. It feels like we've gone full circle.
You're not totally wrong, but think of it this way. You have native support in your application stack for this static 'caching' along with one-click edge distribution. Which means you can write extremely complex, interactive markup (it's just React) on top of this very simple distribution system. And you don't have to build a whole magic caching system or deal with web servers at all.
I've worked with systems similar to the one you've laid out -- tell me you've never had to jump into the cache and clear it by hand because a client wasn't seeing their updates, or because a page was being improperly cached. Even when it was simple and effective, it was often a pain.
These days, there are a number of services that will store your HTML and other static files on their edge CDN servers (Cloudflare Workers, Vercel, and Netlify, among others), so your users download those files remarkably fast. You can reach a pretty amazing TTI while maintaining nigh-infinite scalability.
I will say that while I personally use Next.js, I stick entirely with server-side rendering. I find it a better all-around user experience for my use case, with several distinct advantages. But I only serve ~150,000 users a month, they're all relatively close to my server, and I don't have much static content. A modest EC2 instance handles it with ease.
But if I was building an application that targeted many regions and needed infinite scalability, or one that heavily utilized static data, I'd definitely consider pursuing the approach laid out above.
We've gone 'full circle', sure. Because React offered a way to write web applications that was far more consistent and interactive than anything before it. But we spent a few years letting our bundle sizes grow out of control and structuring our applications poorly, so TTI ballooned and everything felt slow and clunky. Companies that cared and had good engineering could navigate those problems, but your average developer couldn't. Now, with a system like this, they can not only match the performance of classic applications but in many cases exceed it.
Hosting is separate from the framework, and the static-site hosts you mentioned are just a web server + CDN packaged together, with an optional build/CI layer and some extra APIs to handle form submissions and user logins. Considering the number of CDNs and one-click hosts, and how easy they are to put together, it's all pretty much the same.
The big difference is that these static-site frameworks use React/Vue/etc. for the frontend templating and interactivity instead of a server-side language that has its own templating and might require complex JS integration. But the trade-off is that you give up that server-side ability, or have to use third-party services like a headless CMS instead.
Not quite. Next allows you to dynamically generate static content both at build time – and with this latest update – at _runtime_ as well. It generates code-split bundles for each page, enabling extremely thin pages to minimize the amount of content loaded with zero configuration. You nailed the big advantage though, more eloquently than I was able to articulate: a unified code-base for both initial rendering and interactivity.
Next _also_ supports rendering things on the server if you'd like. So you're not giving up anything at all, there's no trade-off with respect to rendering patterns, unless you just prefer to use another language to build your 'frontend'.
Hosting is 'separate', but the framework snugly fits into a modern hosting paradigm in a way that most others do not.
It was a huge ordeal getting it set up (with React SSR, etc.), but it's definitely doable. I've open-sourced it, but it sorely needs documentation (incoming):
Basically, despite the tooling (not because of it), once you get going with JSX, TypeScript and the like, nothing else comes close. At least in my experience.
But again, none of the above stops me from dumping that output into a single cache key per request. It's still dynamic, just cached robustly.
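That 'single cache key per request' can be as small as a decorator. A toy Python version, loosely in the spirit of Django's `cache_page` (the 'request' here is just a path string to keep it self-contained, and `invalidate` is the manual escape hatch mentioned upthread):

```python
import functools

_page_cache = {}  # path -> rendered HTML

calls = {"n": 0}  # render counter, just to make cache hits observable


def cache_by_path(view):
    """Cache the rendered response under one key: the request path."""
    @functools.wraps(view)
    def wrapper(path):
        if path not in _page_cache:
            _page_cache[path] = view(path)
        return _page_cache[path]
    return wrapper


def invalidate(path):
    """The 'jump into the cache and clear it' escape hatch."""
    _page_cache.pop(path, None)


@cache_by_path
def article_view(path):
    # Imagine an expensive template render plus DB queries here.
    calls["n"] += 1
    return f"<h1>Article for {path}</h1>"
```

The output stays fully dynamic at render time; only the delivery is cached.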
In my opinion: nothing, and I do mean nothing, comes close to the productivity of Django + Forms + FormSets + Admin. I've tried everything under the sun.[1]
The model->form and presentation layer is so intuitive and robust that I'm surprised no JS framework has rebuilt it. Too much is focused on REST instead of domain-specific inputs/outputs.[2]
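For anyone who hasn't used it: the productivity win is that the form is derived from the model, so fields, types, and validation are declared exactly once. A stdlib-only Python toy (not Django code, just the shape of the ModelForm idea) using a dataclass as the 'model':

```python
from dataclasses import dataclass, fields

# Toy version of the model -> form idea: introspect the model's field
# names and types, and build a validator that coerces raw string input.
# Purely illustrative; Django's ModelForm does far more than this.


@dataclass
class Article:
    title: str
    body: str
    views: int = 0


def form_for(model_cls):
    """Build a validator/coercer derived from the model's declared fields."""
    def validate(data):
        kwargs, errors = {}, {}
        for f in fields(model_cls):
            raw = data.get(f.name)
            if raw is None:
                continue  # fall back to the model's default, if any
            try:
                kwargs[f.name] = f.type(raw)  # coerce str -> declared type
            except (TypeError, ValueError):
                errors[f.name] = f"invalid {f.type.__name__}"
        if errors:
            return None, errors
        return model_cls(**kwargs), {}
    return validate
```

Change the model, and the 'form' follows automatically – that single-source-of-truth loop is the thing REST-centric JS stacks keep making you hand-roll.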
So yea, I've focused on making Django more React-friendly. I want to stick to it.
> isn't this just a re-invention of a site-wide cache?
> It feels like we've gone full circle.
You're not wrong; it is the same thing. Now we're just waiting for the next revolutionary invention in caching - partial caching. It'll probably be called RESI or NESI - something like that for sure.
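The joke lands because partial caching at the edge already exists as ESI (Edge Side Includes): cache the page shell once, fill the dynamic fragments per request. A tiny Python sketch of that idea, with a made-up placeholder syntax standing in for ESI tags:

```python
import re

# The shell is cached once and shared by every visitor; only the
# fragments are rendered per request. Placeholder syntax is invented.
_shell_cache = {
    "/home": "<html><body><h1>Welcome</h1>{{fragment:user_box}}</body></html>"
}


def render_fragment(name, request):
    """The only dynamic bit: rendered fresh on every request."""
    if name == "user_box":
        return f"Hello, {request.get('user', 'guest')}!"
    return ""


def serve(path, request):
    shell = _shell_cache[path]  # cache hit on the expensive 99% of the page
    return re.sub(
        r"\{\{fragment:(\w+)\}\}",
        lambda m: render_fragment(m.group(1), request),
        shell,
    )
```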
My take is that it delivers the benefits you describe to simple sites that might have less infrastructure at their disposal. It seems like you still need a Node runtime to rebuild the files when the cache needs to be regenerated, however.