I've started to play around with LiveView by migrating something from AJAX to it [0] and it seems very nice! It's still in beta though, and there's still a lot to do... but very promising indeed :)
It's quite impressive. Took me two hours to figure everything out, and just one hour to turn much of an existing 'old-fashioned' server-side rendered Phoenix app into a quite dynamic one where:
- various page elements update when the server/db does (new users, new items, etc.)
- sorting tables works dynamically
- table cells are updated on click (toggle-type stuff)
What made this so easy is that all I had to do was:
- move chunks of controller logic to the LiveView mount() function.
- add a phx-click and phx-value attribute to my existing templates (and rename them from .eex to .leex)
- add click handlers to the LiveView modules
- use the built-in PubSub functionality in my DB logic and subscribe from the LiveViews
Obviously there are legitimate reasons why LiveView won't be good enough for every case, but it's an incredibly simple solution provided you 1) are willing to use Phoenix, and 2) can assume your clients have an internet connection and you don't need offline capabilities
I've honestly worked on very few SPA projects where I can't see LiveView being a good option.
I've just recently gotten into Elixir, and it really is the first time in a long while that I've wanted to "do things" with a language just for the sake of doing them.
The main advantage of server-side rendering of client-side templates is cutting your initial load time to a bare minimum (not having to wait for scripts to load/compile, taking advantage of page caching, etc.). However, this usually only matters when you're afraid of users getting impatient and bouncing, which tends to be true of sites that mainly display content and don't have a ton of interactivity, and sites like that usually shouldn't be using client-side rendering in the first place.
So you have two buckets: web apps that don't care too much about shaving milliseconds from the initial load, and content sites that shouldn't be shipping heavy JavaScript in the first place. The target market for hybrid rendering is the tiny slice of sites in-between those two, so it's a minority of use-cases. Though it is also used by sites in the latter category that are shipping heavy JavaScript anyway and having regrets.
A big reason for needing server-side rendering is SEO. Without it, most web crawlers don't have access to your content and you can't rank.
Given the prevalence of client-side rendering, I think all the major players have learned to circumvent this. They can run a full headless browser instead of just looking at the raw HTML, and detect when the JavaScript seems to have rendered the content.
That said, user experience (in this case site speed) can be a factor in how Google actually ranks pages, so that could be another motivation.
Well... as with everything else about how Google's crawler and algorithms work, there is no official or definitive answer, but what has been communicated is that it can work, though it's definitely more difficult and can lead to more issues.
Knowing a bit about SEO, if you want to invest in your ranking, I wouldn't go without server-side rendering.
Still, the two buckets I described above hold true. If someone is returning to your web app - Slack, or Trello, for example - you don't care as much about SEO because most of your visitors are return visitors. Your splash page, on the other hand, needs to be optimized for SEO but won't have much interactive complexity, so it doesn't need client-side rendering.
I haven't had a chance to try it myself yet, but from what I've read the extra delay is sub 100ms. Caching or pre-rendering the HTML output for bots in advance is an option too depending on the particular use case.
I would certainly consider this ahead of SSR driven by "isomorphic" Javascript for frontend-heavy apps.
I pre-render my pages with puppeteer on build, and then hydrate with the js bundles. Everyone gets the faster experience of an HTML page (not just crawlers), while the full js loads async.
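Roughly, the build step looks something like this (a simplified sketch, not my exact setup; it assumes the built app is already being served locally and that you have a fixed list of routes to snapshot):

```js
const puppeteer = require('puppeteer');
const fs = require('fs');
const path = require('path');

// Assumptions for this sketch: the built SPA is served at ORIGIN
// and ROUTES lists the pages you want pre-rendered.
const ORIGIN = 'http://localhost:5000';
const ROUTES = ['/', '/about', '/pricing'];

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  for (const route of ROUTES) {
    // Wait for the app's network activity to settle before snapshotting.
    await page.goto(ORIGIN + route, { waitUntil: 'networkidle0' });
    const html = await page.content();

    // Write the rendered HTML where the static host will serve it;
    // the client bundle hydrates it on load.
    const outDir = path.join('build', route);
    fs.mkdirSync(outDir, { recursive: true });
    fs.writeFileSync(path.join(outDir, 'index.html'), html);
  }

  await browser.close();
})();
```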
SSR is only easy if your frontend is written exclusively in SSR-compatible libraries and your backend is Node.js, and even then you need to adjust your frontend code for SSR, because Node.js doesn't have all the browser APIs your code could be using (like XHR or the real (not virtual) DOM).
On the other hand, puppeteer executes your application in virtually the same environment as the user's browser, without polluting your frontend architecture with SSR concepts and limitations.
Yes, you'll need to set up a bit of infra initially. Depending on the use case it may very well pay off.
It feels so much faster, though, to not have websites slow down when my browser slows down. Server-side rendering should be considered a better experience for the end user, especially if it can prefetch the next pages.
Universal rendering is widely used in the React ecosystem, in large part thanks to Next.js[0] making it simple to set up and run.
Many large sites - Marvel.com, Nike.com, Invisionapp.com, hulu.com, and more - run on server-side rendered React; see the Next.js showcase for a more complete list: https://nextjs.org/showcase
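To give a sense of how little ceremony is involved: a page opts into SSR just by defining getInitialProps, which runs on the server for the initial request and on the client for subsequent navigations. A toy example (the endpoint and data shape here are made up):

```js
// pages/items.js
import React from 'react';
import fetch from 'isomorphic-unfetch';

function Items({ items }) {
  return (
    <ul>
      {items.map(item => (
        <li key={item.id}>{item.name}</li>
      ))}
    </ul>
  );
}

// Runs server-side on the first request, client-side on route transitions.
Items.getInitialProps = async () => {
  const res = await fetch('https://example.com/api/items'); // hypothetical API
  const items = await res.json();
  return { items };
};

export default Items;
```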
Agreed. I'm a big fan of Next.js and haven't had any major problems with it. While there have been some quirks and nuances I had to learn[0], it still provides the easiest platform for SSR out of the box.
I'm evaluating Next.js for a new project. But coming from create-react-app, I'm a bit skeptical of these "frameworks" that seem to provide little benefit beyond developer tooling (e.g. setting up webpack, hot reload, etc.). create-react-app was great in the beginning, but months later it was really annoying. I get the feeling Next.js could end up like that. At least you can eject from CRA.
I'm curious if anyone has gone the "roll your own" route with SSR + react, starting with a simple approach and adding complexity only as it becomes necessary. Has anyone done this and maybe also used next.js? I would love to hear a comparison, because at the moment I'm unconvinced next.js is really necessary if you know what you're doing, or you intend to maintain the project for more than 2 years.
I'd highly recommend trying out Next.js. Once you've got the hang of the framework-specific concepts, it's quite refreshing to base an app on. There are plenty of "escape hatches" to customize webpack, Babel, the server-side API, etc.
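For example, webpack tweaks go through next.config.js; a minimal sketch (the alias here is purely illustrative):

```js
// next.config.js
module.exports = {
  webpack(config, { isServer }) {
    // Add a module alias; any other webpack customization works the same way.
    config.resolve.alias['@components'] = __dirname + '/components';

    if (!isServer) {
      // Client-only adjustments can go here.
    }
    return config;
  },
};
```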
That said, I don't use it much anymore - I rolled (and continue to use) my own SSR, adding features as I needed them: taking care of async server-side actions like fetching data; rendering the app; passing initial state to the client; dynamic chunk-splitting of routes; a way to export all app routes to static files...
There are many moving parts and "tricks" I had to discover - like how to build the client/server bundles with webpack (chunk-split on the client, all bundled together on the server), and creating script tags on the server so route-specific chunks are fetched faster on the initial page load...
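To give a feel for those moving parts, here's a stripped-down sketch of the core server piece of a hand-rolled setup like that (Express, an assumed `<App />` component, and a hypothetical `createStore`/`loadDataForRoute` helper; the server entry is assumed to be compiled with Babel/webpack so JSX works there too - none of this is my actual code):

```js
import express from 'express';
import React from 'react';
import { renderToString } from 'react-dom/server';
import App from './App';               // assumed root component
import { createStore } from './store'; // hypothetical state container

const server = express();
server.use(express.static('public'));

server.get('*', async (req, res) => {
  // 1. Async server-side actions: fetch whatever this route needs.
  const store = createStore();
  await store.loadDataForRoute(req.url); // hypothetical helper

  // 2. Render the app to an HTML string.
  const markup = renderToString(<App url={req.url} store={store} />);

  // 3. Pass initial state to the client so it can hydrate without refetching.
  res.send(`<!doctype html>
<html>
  <body>
    <div id="root">${markup}</div>
    <script>window.__INITIAL_STATE__ = ${JSON.stringify(store.getState())}</script>
    <script src="/client.js"></script>
  </body>
</html>`);
});

server.listen(3000);
```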
So, unless you're into understanding and building/customizing every feature, your time might be better spent letting Next.js provide all that as a foundation. It's well-organized and documented, so I think it would be valuable for teams working together also.
I had that exact experience with Next. Ended up hating it for a variety of reasons (routing, file structure, build) and fortunately no longer have to support that project.
Now I roll my own. I pre-render with react-snap and use react-imported-component for code-splitting and hydration. The pre-rendered static HTML pages are every bit as good as SSR. It adds an extra few seconds to the build process, but with Netlify's CI, I push to GitHub and it's live once I'm back from the kitchen.
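The setup is roughly: react-snap runs as a postbuild step, and the entry point hydrates when pre-rendered markup is already present (a sketch based on react-snap's documented usage, not my exact config):

```js
// package.json (excerpt)
// "scripts": {
//   "build": "react-scripts build",
//   "postbuild": "react-snap"
// }

// src/index.js
import React from 'react';
import { hydrate, render } from 'react-dom';
import App from './App';

const rootElement = document.getElementById('root');
if (rootElement.hasChildNodes()) {
  // Snapshot HTML is already in place: just attach event handlers.
  hydrate(<App />, rootElement);
} else {
  // No pre-rendered markup (e.g. local dev): render from scratch.
  render(<App />, rootElement);
}
```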
It gives you SSR with React out of the box. It's really nice to work with, and flexible enough that we haven't had issues with more customized situations.
While not JavaScript, there's now Razor Components[1]. It doesn't work exactly like isomorphic JS, as it operates more like a terminal session (the server sends down VDOM mutations to the client). I'm not entirely sold on the approach yet, but I am going to be keeping an eye on it.
I wonder what took so long for DOM-deltas-over-WebSockets tech to show up. It's only in the last two years or so that stuff like Plotly's Dash and Elixir Phoenix's LiveView has become popular, even though we've had WebSockets and event-driven networking for ages.
If I understand correctly, Chris McCord actually did something similar back in his Rails days (https://github.com/chrismccord/render_sync), but the issues he ran into actually led him to create both Phoenix and, eventually, LiveView.
I'm sure others are better at explaining the details, but from what I gather this approach is now more viable than it was in the past with other stacks because 1) Phoenix is really efficient and fast at rendering templates, 2) the Erlang/Elixir channels/process approach makes it much easier to keep tons of WebSocket connections open while maintaining state, and perhaps 3) JavaScript got a lot faster.
I played around with a Blazor POC not too long ago. It's something else to hit a breakpoint in Chrome in a DLL running on the Mono runtime. It's wild, tbh. I was able to leverage the trusty old C# Timer class to increment a counter every second and it ran just fine.
Razor Components are quite different to Blazor (although I think you can use the same code on both). If you weren't already aware of this difference it's worth taking another look.
Great for FB/twitter share scraping, google scraping, etc. Also much better startup speeds.
Once you get it configured it's very little additional work to maintain.
I have it set up to do things like check the `user-agent` to determine which prerendered page (mobile or desktop) to serve, and then the client-side JS takes over depending on the viewport size and will either a) rebuild (less than 1% of the time), changing the markup from desktop->mobile or mobile->desktop, or b) continue happily
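Roughly, the serving side of that looks like this (a hypothetical Express sketch; the regex and paths are made up and the real logic is more careful):

```js
const express = require('express');
const path = require('path');

const app = express();
const MOBILE_UA = /Mobile|Android|iPhone|iPad/i;

app.get('/', (req, res) => {
  // Crude check: pick the prerendered variant by user-agent,
  // then let the client-side JS correct it based on viewport size.
  const ua = req.get('user-agent') || '';
  const variant = MOBILE_UA.test(ua) ? 'mobile' : 'desktop';
  res.sendFile(path.join(__dirname, 'prerendered', variant, 'index.html'));
});

app.listen(3000);
```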
I built a solution for server side rendering using Elixir / Plug and Chrome Headless via Chroxy. It simplifies server side rendering greatly since it doesn’t care about the client side library you use. I have been using it to power my site[1] which is written in React.
I also happen to make the Elixir Foundation series of videos[2] that show how to do server-side rendering using Elixir. So I'm dogfooding the dog food.
For sites using React, Angular, Vue, and Ember, the "talk" has died down because it's just a standard feature of the platform. Server-side rendering is part of the table stakes at this point for JS frameworks.
I don't know of any off the top of my head or from a quick Google search. I do know, from the latest ElixirTalk podcast, that it's very careful about what it sends over the wire: only the parts that have changed. If you're interacting with an API to get the same information for a SPA, you're probably going to have similar latencies.
1) Lack of a reliable, real-time ability to detect device and compute size.
2) Increased computational complexity of deciding what to render - why put that load on the server when devices are now commensurately fast, or even faster, given the trend toward distributed services that maximize network throughput over power.
I'm not even sure why it was so popular in the first place; client server separation is a good thing, IMHO.
Best practice would be to wrap anything that is both unnecessary for SEO and non-trivial into <no-ssr> tags. No need to render a datetimepicker widget on the server, especially if Google bot is requesting the page.
Remember that Googlebot uses Chrome 41[1], so some features from ES6+ might not be supported. It's always a good idea to run Babel when compiling a SPA without SSR.
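A minimal sketch of what that looks like with @babel/preset-env, targeting an engine of roughly that vintage (the exact targets and polyfill strategy are up to you):

```js
// babel.config.js
module.exports = {
  presets: [
    [
      '@babel/preset-env',
      {
        // Compile syntax down far enough for an old renderer like Chrome 41.
        targets: { chrome: '41' },
        // Inject core-js polyfills only where the code actually uses them.
        useBuiltIns: 'usage',
        corejs: 3,
      },
    ],
  ],
};
```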
I would say it's more relevant now. The build times and complexity of SPAs have made many companies switch to multi-page SSR React with additional React JS for the browser. For SSR, SSR React is superior (because of components) to stuff like Pug, IMO.
> There was a lot of talk about Isomorphic rendering of JavaScript, but seems to have died down recently.
It's nowhere near dead, just look at Nuxt.js/Next.js/Gatsby. They are getting more popular day by day. Recently I worked on an e-commerce platform using Next.js and Gatsby.
I have to say it's really cool and gives a good user experience overall. The only issue I had was the complexity of the code: you have to think about backend and frontend at the same time. I will definitely use it where it makes sense.
Beware though: Google has still not officially said it is able to parse SPAs as well as server-side rendered pages, and has on multiple occasions (including recently) said that although it can deal with JS, it's not perfect and can lead to issues. So if SEO is important I wouldn't skip SSR.
It's still relevant, just difficult. I think the herd jumped into SPAs before they understood the implications.
I was one who jumped in, then tried to automate my way out of client-side slow renders with https://www.roast.io/ (uses a headless browser to server-side render a JS app)
I'm still surprised how many people are not aware that client-side rendering can be unstable, because they don't see the final result each visitor gets, only a prediction of it most of the time. Cross-browser testing is another story; though it can help a bit, it is also usually omitted.
The talk died down because it became a standard way of doing things now. It isn't something special anymore and the tooling has evolved to make it more seamless.
If we are using Reddit as the standard for what's difficult to do right, then apparently everything is too difficult to do right. Case in point: their two mobile websites before React were both unusable garbage. I write SSR JS apps, and React/SSR is _not_ the one to blame in Reddit's scenario.
1) What do you mean by server-side rendering of JavaScript? It sounds like you mean executing JavaScript on the server that otherwise is destined to execute on the client-side. I execute JavaScript on the server/terminal all the time and don't call it server-side rendering.
2) What is your motivation? What problem do you think you are solving with server-side rendering?
I am not being pedantic. You and the OP are having difficulty with specificity. I was clear about that, and somehow that means I should be called names and downvoted for asking an appropriate question.
If I wanted to be trolled by framework children I would go back to Reddit.
Demo: https://youtu.be/Z2DU0qLfPIY?t=2628
Links: https://leveljournal.com/why-phoenix-liveview-is-a-big-deal https://elixirforum.com/t/phoenix-liveview-info/16569
PS: please stop calling it isomorphic; it's a disgrace to mathematics.