Incremental Static Site Regeneration is really compelling; I wonder what the gotchas are.
For instance, I have a project where the front page gets server-side-rendered for logged out users. We can't do client-side rendering because we care about SEO. And we can't do static generation at build time because the content of the front page regularly changes (it summarizes activities of other users).
But with incremental static site generation... even if we care about the community always having up-to-date data on that front page, we could presumably set the timeout to like five seconds. That's a heck of a savings compared to generating it on every request.
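To make that concrete, here's roughly what that front page could look like as an ISR page (a sketch assuming Next.js 9.5+; getRecentActivity is a hypothetical stand-in for whatever query summarizes user activity):

    // pages/index.tsx - sketch of ISR with a 5-second revalidation window
    import { GetStaticProps } from 'next';

    type Activity = { id: string; summary: string };

    // Hypothetical stand-in for the real query that summarizes recent user activity.
    async function getRecentActivity(): Promise<Activity[]> {
      return [{ id: '1', summary: 'example activity' }];
    }

    export const getStaticProps: GetStaticProps<{ activities: Activity[] }> = async () => ({
      props: { activities: await getRecentActivity() },
      // Keep serving the cached page, but regenerate it in the background
      // at most once every 5 seconds.
      revalidate: 5,
    });

    export default function Home({ activities }: { activities: Activity[] }) {
      return (
        <ul>
          {activities.map((a) => (
            <li key={a.id}>{a.summary}</li>
          ))}
        </ul>
      );
    }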
Maybe I'm missing something, but ultimately, isn't this just a re-invention of a site-wide cache?
Back in the day, in Django/Drupal/any CMS, for anonymous users you'd just dump each rendered page into memcached, keyed by the URL path.
Then when a new anonymous visitor comes along, you load it right up from memory, which is extremely fast – maybe even faster than loading from the file system. Nginx can actually ask memcached directly, bypassing the app server entirely.
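(For anyone who hasn't seen that pattern, here's a rough sketch of the same idea in Node/Express terms – the original would be Django + memcached, and renderPage below is just a hypothetical stand-in for whatever actually builds the page:)

    // Sketch of the classic full-page cache for anonymous users:
    // key the rendered HTML by URL path, serve straight from memcached on a hit.
    import express from 'express';
    import memjs from 'memjs';

    const app = express();
    const cache = memjs.Client.create(); // defaults to localhost:11211
    const TTL_SECONDS = 300;

    // Stand-in for whatever actually renders the page (templates, DB queries, ...).
    async function renderPage(req: express.Request): Promise<string> {
      return `<html><body>rendered ${req.path} at ${new Date().toISOString()}</body></html>`;
    }

    app.get('*', async (req, res) => {
      const anonymous = !(req.headers.cookie || '').includes('sessionid');
      const key = `page:${req.path}`;

      if (anonymous) {
        const { value } = await cache.get(key);
        if (value) return res.type('html').send(value.toString()); // hit: no app work at all
      }

      const html = await renderPage(req);
      if (anonymous) await cache.set(key, html, { expires: TTL_SECONDS });
      res.type('html').send(html);
    });

    app.listen(3000);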
You're not totally wrong, but think of it this way. You have native support in your application stack for this static 'caching' along with one-click edge distribution. Which means you can write extremely complex, interactive markup (it's just React) on top of this very simple distribution system. And you don't have to build a whole magic caching system or deal with web servers at all.
I've worked with similar systems to the one you've laid out -- tell me you've never had to jump into the cache to clear it because a client wasn't seeing their updates. Or a page was getting improperly cached. Even if it was simple and effective, it was often a pain.
These days, there are a number of services that will store your HTML and other static files on their edge CDN servers (Cloudflare Workers, Vercel, Netlify, among others), leading to your users downloading the files at remarkably fast speeds. You can reach a pretty amazing TTI while maintaining nigh-infinite scalability.
I will say that while I personally use Next.js, I stick entirely with server-side rendering. I find it to be a better all-around user experience for my use case, with several distinct advantages. But I only serve ~150,000 users a month, who are all relatively close to my server, and I don't have much static content. A modest EC2 instance handles it with ease.
But if I was building an application that targeted many regions and needed infinite scalability, or one that heavily utilized static data, I'd definitely consider pursuing the approach laid out above.
We've gone 'full circle', sure. Because React offered a way to write web applications in a much more consistent, interactive way than anything before it. But we spent a few years growing our bundle sizes out of control, and improperly structuring our applications which led to TTI growing and everything being slow and clunky. Companies that cared and had good engineering were able to navigate these problems, but your average developer couldn't. Now with a system like this, they can not only match the performance of classic applications, but in many cases exceed it.
Hosting is separate from the framework, and the static site hosts you mentioned are just a web server + CDN packaged together with an optional build/CI layer and some extra APIs to handle form submissions and user logins. Considering the number of CDNs and one-click hosts, and the ease of putting them together, it's all pretty much the same.
The big difference is that these static site frameworks use React/Vue/etc. for the frontend templating and interactivity, instead of a server-side language that has its own templating and might require complex JS integration. But the trade-off is that you give up that server-side ability, or have to use third-party services like a headless CMS instead.
Not quite. Next allows you to dynamically generate static content both at build time – and with this latest update – at _runtime_ as well. It generates code-split bundles for each page, enabling extremely thin pages to minimize the amount of content loaded with zero configuration. You nailed the big advantage though, more eloquently than I was able to articulate: a unified code-base for both initial rendering and interactivity.
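(To make the "at runtime" part concrete, a sketch of how new paths get generated on first request and then served statically afterwards; the data-fetching helpers are hypothetical:)

    // pages/posts/[slug].tsx - sketch of runtime static generation via fallback
    import { GetStaticPaths, GetStaticProps } from 'next';
    import { useRouter } from 'next/router';

    type Post = { title: string; body: string };

    // Hypothetical stand-ins for the real data layer.
    async function getPopularSlugs(): Promise<string[]> { return ['hello-world']; }
    async function getPost(slug: string): Promise<Post> { return { title: slug, body: '...' }; }

    export const getStaticPaths: GetStaticPaths = async () => ({
      paths: (await getPopularSlugs()).map((slug) => ({ params: { slug } })),
      fallback: true, // any other slug is rendered on first request, then cached as static
    });

    export const getStaticProps: GetStaticProps = async ({ params }) => ({
      props: { post: await getPost(params!.slug as string) },
      revalidate: 60,
    });

    export default function PostPage({ post }: { post: Post }) {
      const { isFallback } = useRouter();
      if (isFallback) return <p>Loading…</p>; // shown while the page is being generated
      return (
        <article>
          <h1>{post.title}</h1>
          <div>{post.body}</div>
        </article>
      );
    }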
Next _also_ supports rendering things on the server if you'd like. So you're not giving up anything at all, there's no trade-off with respect to rendering patterns, unless you just prefer to use another language to build your 'frontend'.
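(The per-request option is just a different export from the same kind of page file; a minimal sketch, with a hypothetical session helper:)

    // pages/dashboard.tsx - sketch of opting a single page into per-request SSR
    import { GetServerSideProps } from 'next';

    // Hypothetical stand-in for a real session lookup.
    async function getUserFromSession(cookie?: string): Promise<{ name: string } | null> {
      return cookie ? { name: 'member' } : null;
    }

    export const getServerSideProps: GetServerSideProps = async ({ req }) => {
      // Runs on every request, so it can read cookies and return fresh, per-user data.
      const user = await getUserFromSession(req.headers.cookie);
      return { props: { name: user?.name ?? 'guest' } };
    };

    export default function Dashboard({ name }: { name: string }) {
      return <p>Hello, {name}</p>;
    }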
Hosting is 'separate', but the framework snugly fits into a modern hosting paradigm in a way that most others do not.
It was a huge ordeal getting it set up (with React SSR, etc.), but it's definitely doable. I've open sourced it, but it sorely needs documentation (incoming):
Basically, despite the tooling (not because of it), once you get going with JSX, TypeScript and the like, nothing else comes close. At least in my experience.
But again, none of the above stops me from dumping that output into a single cache key per request. It's still dynamic, just cached robustly.
In my opinion: nothing, and I do mean nothing, comes close to the productivity of Django + Forms + FormSets + Admin. I've tried everything under the sun[1]
The model->form and presentation layer is so intuitive and robust that I'm surprised no JS framework has rebuilt it. Too much is focused around REST instead of domain specific inputs / outputs.[2]
So yea, I've focused on making Django more React-friendly. I want to stick to it.
> isn't this just a re-invention of a site-wide cache?
> It feels like we've gone full circle.
You're not wrong, it is the same thing. Now we're just waiting for the next revolutionary invention in caching: partial caching. It'll probably be called RESI or NESI, something like that for sure.
My take is that it delivers the benefits you describe to simpler sites that might have less infrastructure at their disposal. It seems like you still need a Node runtime to rebuild the files when the cache needs to be regenerated, however.
I haven't really kept up to date in this area, so I'd love to hear some experts weigh in here: Do search engines (or really just Google) still penalize sites that are pure SPAs on that fact alone?
Or does it have more to do with properties generally associated with SPAs, like large bundle sizes or slow time to first interaction?
I'm mainly wondering if you can build an SEO-friendly SPA marketing page today using techniques like dynamic imports and code splitting to lazy-load just the scripts and content needed for the initial render, instead of going all the way to static generation or server-side rendering.
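(For what it's worth, the lazy-loading half of that is just dynamic import() plus code splitting; a rough sketch, where HeavyWidget is a hypothetical component that ends up in its own chunk:)

    // Sketch of deferring a heavy chunk so the initial render stays small.
    import React, { Suspense, lazy } from 'react';

    // Bundlers split this into its own chunk, fetched only when it actually renders.
    const HeavyWidget = lazy(() => import('./HeavyWidget'));

    export default function MarketingPage() {
      return (
        <main>
          <h1>Above-the-fold content ships in the initial bundle</h1>
          <Suspense fallback={<p>Loading…</p>}>
            <HeavyWidget />
          </Suspense>
        </main>
      );
    }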
> Do search engines (or really just Google) still penalize sites that are pure SPAs on that fact alone?
Google says that they don't. They use an evergreen Googlebot, which is the latest (or thereabouts) release of the headless Chrome browser; it waits for the scripts to load and executes them. You can watch Martin Splitt's talk from the recent web.dev live event for the latest updates.
Server-side rendering is still important if you care for other search bots or if you want to share your site's pages on social media (with page preview).
If it's important that your site's content can be shared on social media, then SSR is a must, because of the meta tags Facebook/Twitter use to create the post preview.
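(Concretely, these are the kinds of tags the scrapers look for, and they generally need to be in the server-rendered HTML since the Facebook/Twitter crawlers don't reliably execute JS; a sketch using next/head, with example content:)

    // Sketch of the social-preview meta tags that need to be in the initial HTML.
    import Head from 'next/head';

    export default function ArticleHead() {
      return (
        <Head>
          <title>Example article</title>
          <meta property="og:title" content="Example article" />
          <meta property="og:description" content="One-line summary for the preview card" />
          <meta property="og:image" content="https://example.com/preview.png" />
          <meta name="twitter:card" content="summary_large_image" />
        </Head>
      );
    }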
I don't know what architecture your website uses, but on our site, for example, we just cache the view in Redis for however long we want. It's built into a lot of web frameworks, but it shouldn't be too crazy to implement by hand either.
I'm still exploring this, but my sense is that with Next.js, the cached version is still deposited on the edge cache at build time, so you don't have that first expensive query. Not sure if you can do something like that with Cloudflare alone.
Second is that with Next.js, expiration means that the page gets rebuilt and re-deposited onto the edge cache in the background, without the visitor ever waiting on that expensive query. Again, not sure if you can do something like that with Cloudflare alone.
Developing a Next.js app is a really nice experience. I admit that when it first came out I didn't pay much attention to it, because I thought the way it did routing was weird and I don't think the server was as extensible. Fast forward to 9.5 and there's pretty much no real reason for me to build a custom Webpack-based setup for my next project.
It's interesting that the React Router team, which is building the competing Remix framework (similar to Next but based on React Router), is arguing that instead of supporting static page generation, they will just urge their users to use CDNs with aggressive caching rules.
Still, good to have options. It will be interesting to see how Remix is received once it launches. Next is currently peerless for DX-friendly React frameworks.
Any rebranding will have backlash. It solved the issue of ZEIT Now being one _product_ whereas Vercel is the _platform_. It also brought a new focus on Jamstack and front-end developers.
I'll admit I'm biased though, as I'm a Vercel customer.
They could've just used ZEIT as the platform name, then. Vercel sounds like some generic VC-created tech name. Their customer base is literally the exact opposite: indie developers, people with GitHub accounts, technical engineers, engineering leaders. Not some Salesforce-style CRM enterprise selling marketing BS to CEOs. "Vercel", lol.
Curious, does anyone know how this would compare performance-wise for something like Reddit versus Livewire/LiveView? I know those would be more reactive, but I like the idea of static pages that are dynamic as well.
I love Vercel and use it for all my projects, right up until I need to add auth or an API, at which point I switch off of it immediately for security reasons, because there are no static IPs unless you're an "enterprise" customer.
It's unclear why they're rolling out performance features while this critical security hole persists. No doubt I'll get flamed for it, but it's a really bad idea to connect to your database from dynamic IPs with only a password...
Oh yeah, I remember running into that problem when I was using a database on DigitalOcean and wanted to add Vercel API routes as a trusted source. I had also switched off of it because of that. However, I had a look at their blog the other day and apparently they have an Anycast IP range now, which probably solves this issue: https://vercel.com/blog/new-edge-dev-infrastructure#vercel's.... I haven't tried it yet, though.