How important is Elixir in all this? Is it merely the frontrunner of a useful new pattern? E.g., do Erlang, the BEAM, or functional programming play a critical role, or will other stacks effectively replicate this? How are they doing so far, and which would be the next most mature?
LiveView is well integrated with the lightweight process/actor model that is central to Erlang and Elixir, where state and asynchronous events are much easier to code and troubleshoot than in other languages.
So while you can replicate LiveView anywhere, it's never going to be as convenient or expressive as the real thing.
On the project I am working on, I have a LiveView dashboard that's updated in real time by distributed async events coming from any server in the cluster, in less than a dozen lines of code, using nothing fancier than standard Phoenix libraries (Phoenix.PubSub) and BEAM primitives (pg2, nodes, mailboxes, etc.). Porting LiveView to Go, Python, Rust, or JavaScript is the easy part. You'll still miss the BEAM dearly.
Part of the “magic” of LiveView is having a server-side process per connected user to keep state and enable fast re-renders. This is where Elixir’s runtime (BEAM/OTP) really shines. To my knowledge, no similar technology has accomplished this so far; instead, they put state in Redis or another database and need to reload it from there every time an event needs to be processed server-side.
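To make the shape of that idea concrete, here's a toy sketch in Python asyncio (not Elixir): one long-lived task per connected user that keeps state in a local variable and re-renders on each event, with no database round-trip. All names here (`user_session`, the event strings) are invented for illustration, and asyncio tasks share one heap and one core, so this only mimics the structure of a BEAM process, not its isolation or scheduling.

```python
import asyncio

# Hypothetical sketch: one long-lived task per connected user, holding its
# state in a local variable instead of reloading it from a store per event.
async def user_session(events: asyncio.Queue, renders: list) -> None:
    state = {"count": 0}           # lives in the task, like a LiveView process
    while True:
        event = await events.get()
        if event == "stop":
            break
        state["count"] += 1        # update in place, no round-trip to Redis
        renders.append(f"<span>{state['count']}</span>")  # fast re-render

async def main() -> list:
    events: asyncio.Queue = asyncio.Queue()
    renders: list = []
    task = asyncio.create_task(user_session(events, renders))
    for e in ("click", "click", "stop"):
        await events.put(e)
    await task
    return renders

print(asyncio.run(main()))  # ['<span>1</span>', '<span>2</span>']
```

In Phoenix, this per-user loop is the LiveView process itself; the point is that state survives between events in memory instead of being rehydrated from a database.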
It is very hard for other languages to pull this off:

- Erlang/OTP uses a shared-nothing architecture
- each green thread has its own GC
- communication happens only via message passing
- it is OK to fail, since the threads are cheap and easy to replicate

Go might manage it via Ergo, since Go gets concurrency right, but it doesn't have OTP's evolution systems (supervision, hot code upgrades) or the BEAM. Haskell via Cloud Haskell might replicate LiveView... but these aren't the sensible defaults the way they are in Erlang/Elixir/OTP. Python never will: its async/concurrency/parallelism story is ill thought out, with GIL limitations. Rust would find it very, very hard: its async concurrency design is extremely complex...
It's all about a persistent connection between the client and server, with the server pushing updates to the client as new data/events come in.
If you don't have a good async model, you're down to a thread/process per connection, which can get very expensive since you will have a thread for every user on your site.
You need something like Erlang's lightweight internal "processes", Go's goroutines, or maybe even Node.js's evented async I/O model to handle this well.
All client state is replicated server-side. That means spinning up and tearing down completely isolated processes very quickly. Only a stack with immutability at its core and VERY efficient processes (such as BEAM processes) can guarantee this while also providing nearly instantaneous spawning from any state (thanks to immutable data structures). Every client that has a page open to your site has a tiny process running on the BEAM (it does not use OS-level threads).
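For a rough sense of how cheap green-thread-style spawning is, here's a Python asyncio stand-in (a sketch only: asyncio tasks are cooperatively scheduled and share a heap, whereas BEAM processes are preemptively scheduled with per-process heaps and GC). The `session` function and its state dict are made up for illustration.

```python
import asyncio

# Sketch: spawn thousands of cheap, isolated "sessions" quickly, in the
# spirit of one lightweight process per connected client (no OS threads).
async def session(client_id: int) -> str:
    # Each task owns its own state; nothing is shared between sessions.
    state = {"client": client_id, "renders": 0}
    state["renders"] += 1
    return f"client {state['client']}: {state['renders']} render"

async def main() -> int:
    tasks = [asyncio.create_task(session(i)) for i in range(10_000)]
    results = await asyncio.gather(*tasks)
    return len(results)

print(asyncio.run(main()))  # 10000
```

Spawning 10,000 of these completes in well under a second on commodity hardware; doing the same with OS threads would exhaust memory long before that.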
I've been using Hotwire[0] for a while now (Turbo + Stimulus) which works with any back-end, it's been great. It builds on everything you already know and doesn't require rewriting your app. Everything is treated as progressive enhancements.
It's not the same exact model and I wouldn't classify it as a replication of LiveView, it's more of an extension of the Turbolinks model from 2016 with a decent story for adding in frontend JS where needed.
I think it's comparable from a bottom-line point of view: it's a solution that lets you send snippets of HTML over the wire (literally the tagline of Hotwire) to very quickly build nice-feeling web apps with minimal to no JavaScript.
Instead of going all-in with state and Websockets like LiveView, Hotwire sticks to a stateless HTTP model and sprinkles in Websockets only when necessary such as when broadcasting an event to any connected clients. I've used it in Flask before and it took a few minutes to get going. For the broadcasting bits server side, it works with Rails out of the box since this toolset was extracted from Hey and Basecamp. Other frameworks have their own ports tho.
> solution that lets you send snippets of HTML over the wire
I haven't touched rails in a long time but isn't that how it all used to be done way back when? I feel like my original rails apps were making requests and I was returning HTML partials to be (crudely) diffed into the DOM?
> I feel like my original rails apps were making requests and I was returning HTML partials to be (crudely) diffed into the DOM?
They weren't diff'd into the DOM client side. Optionally, there was SJR (Server-generated JavaScript Responses) from 10 years ago[0] that you could do to get similar behavior but now it's much easier and much more maintainable with Hotwire.
In your original Rails apps you could make a request, render a template that has zero or more partials, and the HTTP response from Rails would by default be all of the HTML, from <html> to </html>. Your browser would parse all of that and produce the page.
Then Turbolinks came into play around 2016. It's a one-line JS drop-in, so the above still happens, except now instead of <html> to </html> being rendered by your browser, only <body> to </body> gets rendered. This is a massive improvement for page load speeds because all of the JS/CSS in your <head> no longer gets parsed on every page load. Even with caching this is a big win, since your browser still needed to parse it. It intercepts your links and makes them ajax calls transparently. It worked for form submissions too, but you had to do a little bit of work on the back-end to make it work (Rails did it by default). This works with any tech stack though.
Then Hotwire came along:
- Now you can swap the whole body with Turbo Drive in the same way as Turbolinks did in the past except it also works for form submissions now too with no backend modifications. It's IMO the single biggest bang for your buck that you can do on a site to make it feel fast with no effort at all.
- You can also use Turbo Frames to only swap a portion of the page instead of the whole <body>. It's 1 new HTML tag, you just wrap your content into a frame tag and you're done. Everything "just works". Only that frame's HTML gets sent over the wire, the other content on the page remains untouched. For example if you had a main content area and a sidebar you could have the main content be in its own frame and as you click links and do things in that area the sidebar will never get re-rendered unless you purposely break out of the frame by setting a target on the link.
- Then there's Turbo Streams which works for non-GET requests by default. So if you posted a new comment on the bottom of a blog post you can tell the stream to append it to a div (or prepend, update, delete, etc.) so you end up only sending a tiny bit of HTML over the wire for that single comment's markup. Optionally you can also instruct that to be broadcast over a websockets channel. In Rails this is seamless, it's 1 extra line of code and now you have live updates broadcast to anyone connected to that channel. You can also use streams to update multiple areas of a page if you wanted to, such as append the comment and add a (1) notification to your nav bar or page's title.
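As a concrete picture of what actually goes over the wire for that comment example, here's a sketch in Python. The `turbo_stream` helper is made up (it's not part of any library), but the `<turbo-stream>` element format — an `action`, a `target`, and the markup wrapped in a `<template>` — is what Turbo Streams really uses.

```python
# Illustrative helper (invented name, not a library API): build the
# <turbo-stream> fragment that a server would send for one new comment.
def turbo_stream(action: str, target: str, html: str) -> str:
    return (
        f'<turbo-stream action="{action}" target="{target}">'
        f"<template>{html}</template>"
        "</turbo-stream>"
    )

# Appending one comment only ships that comment's markup, not the page:
fragment = turbo_stream("append", "comments", "<p>Nice post!</p>")
print(fragment)
```

On the client, Turbo sees the `append` action and inserts the template's contents into the element with id `comments`; the rest of the page is untouched.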
You as an end user don't need to write much JS to do all of the above. Then there's the Stimulus part of Hotwire for when you do need to write JS for more client side interactivity. All of this works with any back-end tech stack.
I do a mix of Flask and Rails personally. Hotwire Turbo works with Flask out of the box minus the websockets bits, which you could make work, but you'd need to handle the server-side broadcasting yourself since Turbo is a client-side library.
With contract work I get exposed to a number of different tech stacks too.
There is a port for Go: https://github.com/jfyne/live . However, the API is not nearly as clean and nice, and it's probably not as efficient as Phoenix either. But overall it's really made a number of applications much easier to build and it has allowed me to avoid mountains of JS.
No one is mentioning that in Elixir/Erlang (and a bunch of other FP langs) a string is implemented as an immutable linked list of integers. Linked lists have the property that their nodes are all stored separately in memory, which means we don't really need to worry about the space/time complexity of adding/deleting nodes in the list.
For example, A -> B -> C -> D is a linked list, but so is B -> C -> D, C -> D and so on, and each linked list is just a sub structure of another linked list.
This is interesting, because now we can confidently say that A -> B -> C -> D is a different list from B -> C -> D (even though one recursively contains the other). Because we have that guarantee (along with the fact that a list CAN'T change), we don't have to define the same data twice, and we can reuse existing linked lists! This is called structural sharing. Adding to this list has an extremely low memory cost.
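The sharing is easy to demonstrate with Python tuples standing in for cons cells (a sketch only — real Erlang lists are built this way by the runtime, not by a user-level `cons` function):

```python
# Sketch: cons cells as nested tuples, (head, tail). Prepending builds one
# new cell whose tail *is* the existing list -- structural sharing.
NIL = None

def cons(head, tail):
    return (head, tail)

bcd = cons("B", cons("C", cons("D", NIL)))
abcd = cons("A", bcd)      # one new node; the old list is reused as the tail

assert abcd[1] is bcd      # the tail is literally the same object in memory
assert abcd != bcd         # yet the two lists compare as different values
```

Because nothing can mutate `bcd`, reusing it inside `abcd` is always safe, and building the longer list costs one cell no matter how long the tail is.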
If we go back to our html snippets that we are sending down the wire, you can see why all of this would have an advantage over how an imperative lang might represent strings under the hood.
Your explanation of structural sharing makes sense but there's a lot of nuance there:
1. Elixir strings are Erlang binaries (byte arrays), not charlists. Charlists are becoming rare nowadays even in Erlang because they're so memory-inefficient.
2. Comparing two lists for equality requires traversing them; you can't just look at the first node. If they were built independently, they won't share structure.
3. HTML templating is (reasonably) efficient because of iodata, not charlists.
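To illustrate the iodata point: templates build nested lists of fragments and only flatten them once, at write time, instead of concatenating strings at every step. Here's a rough Python analogue — `render_comment` and `to_bytes` are invented names, and in Erlang the flattening effectively happens in the runtime's socket write, not in user code.

```python
# Sketch of the iodata idea: nested lists of string fragments, flattened
# in a single pass only when writing to the socket.
def render_comment(author: str, body: str) -> list:
    # Static template parts can be reused; only the dynamic bits are new.
    return ["<p><b>", author, "</b>: ", body, "</p>"]

def to_bytes(iodata) -> bytes:
    # Recursively flatten the nested structure at the very end.
    if isinstance(iodata, (list, tuple)):
        return b"".join(to_bytes(part) for part in iodata)
    return iodata.encode() if isinstance(iodata, str) else iodata

page = ["<div>",
        [render_comment("ana", "hi"), render_comment("bo", "yo")],
        "</div>"]
print(to_bytes(page))  # b'<div><p><b>ana</b>: hi</p><p><b>bo</b>: yo</p></div>'
```

No intermediate concatenated strings are ever built while composing the page; the nesting is free, which is why Phoenix templates can re-render cheaply.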
Just to add to your explanation, Lisp cons cells work that way as well, and are implemented as linked lists.
The great and often unmentioned pro of cons cells/linked lists is that they are a very simple form of persistent data structure. They make immutability and memory sharing very easy, so they are an excellent primitive to build a garbage-collected, immutable, shared-nothing language upon, like Erlang or Clojure.
I'm working on a VDOM in Ruby using Haml, kinda like how JSX is used in React. It performs pretty well, but I don't know how it scales. The syntax is probably the best part. Memory usage, not so much; however, I haven't done any optimizations.