I'm so excited about Elixir these days. I would love to see more cookbook examples of using Elixir but NOT as a Ruby/Rails replacement.
While the concept is not new to me, I have recently become totally infatuated with the concept of green threads, actor models, etc... for dealing with concurrent processing. This is literally what Erlang was designed to do, and Elixir makes it user friendly.
The real beauty of Elixir/Erlang is being able to run hundreds of thousands of concurrent "threads" (co-operative in userspace, not OS-level).
Anyway I do not mean to detract from the post here -- just feel like a todo list is not a shining-star example of Elixir and would love to see some more hackers post their concurrency work here.
Phoenix is frankly a lot more than Rails. If you pop open observer on a running phoenix app (:observer.start), you'd see that it's actually a collection of dozens of processes. The power of phoenix is that it's really just those processes running on BEAM in a dependency tree, so you can easily extend that and add more processes to the startup list. My phoenix app that's running in production isn't just a rails replacement; it also has long-running genserver processes for caching, communicating with key/value stores, etc, with supervisors to recover gracefully from crashes.
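To make that concrete, here's a minimal sketch (module names like `MyApp.Cache` are hypothetical) of adding a long-running GenServer cache to a supervision tree:

```elixir
# A hypothetical long-running cache process added to an app's supervision tree.
defmodule MyApp.Cache do
  use GenServer

  # Client API
  def start_link(opts), do: GenServer.start_link(__MODULE__, %{}, opts)
  def put(server, key, value), do: GenServer.cast(server, {:put, key, value})
  def get(server, key), do: GenServer.call(server, {:get, key})

  # Server callbacks
  @impl true
  def init(state), do: {:ok, state}

  @impl true
  def handle_cast({:put, key, value}, state), do: {:noreply, Map.put(state, key, value)}

  @impl true
  def handle_call({:get, key}, _from, state), do: {:reply, Map.get(state, key), state}
end

# Extending the startup list is just adding to this list; the supervisor
# restarts the cache if it ever crashes.
children = [
  {MyApp.Cache, name: MyApp.Cache}
]

{:ok, _sup} = Supervisor.start_link(children, strategy: :one_for_one)

MyApp.Cache.put(MyApp.Cache, :greeting, "hello")
IO.inspect(MyApp.Cache.get(MyApp.Cache, :greeting))
```

In a real Phoenix app the `children` list lives in your application module next to the endpoint and the Repo; the point is that your own processes sit in the same tree as the framework's.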
If you're looking for cool Elixir applications that aren't web-oriented, look into the Nerves project (http://nerves-project.org/).
>I would love to see more cookbook examples of using Elixir but NOT as a Ruby/Rails replacement.
I would love to see more examples that get away from web dev with Elixir. While it's great with the Phoenix framework, there is so much more to Elixir than that. Trying to deploy an Elixir daemon has been a pain and I haven't found much around that. There are escript and Distillery; while I haven't tried Distillery for non-Phoenix apps, escript works, but deploying is still a pain.
Thanks for the link, but the issue is deploying non-Phoenix apps. There is a ton of documentation (well, relatively) on Phoenix apps and their deployment, but console apps in Elixir have very little, from what I have found.
For what it’s worth, the collaborative to do list is giving you an example of websockets with channels. Channels dedicates 3 server side processes per socket IIRC. One for the connection, one to supervise the connection and one to represent the user state for that connection. That’s only possible because of how cheap those Erlang/Elixir processes are. Because of that, just about any websocket example is a decent representation.
Plus, when you really start thinking about how the entire platform is designed specifically for passing small messages around between processes, websockets become an even better example, since that's exactly what they do with the browser. You enable passing small messages between millions of connected clients and a server-side environment built for passing millions of small messages between clustered and horizontally scalable servers.
> The real beauty of Elixir/Erlang is being able to run hundreds of thousands of concurrent "threads"
Erlang's units of concurrency are processes, not (green) threads. Unlike threads, they are completely isolated from each other.
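A quick illustration of that isolation: an unlinked process can crash without taking anything else down.

```elixir
# Unlinked processes are fully isolated: one crashing leaves the rest untouched.
parent = self()

# This process raises immediately; only an error report is logged.
crasher = spawn(fn -> raise "boom" end)

# This process keeps waiting for work, unaffected by the crash.
worker =
  spawn(fn ->
    receive do
      {:ping, from} -> send(from, :pong)
    end
  end)

# Give the crasher time to die.
Process.sleep(100)

send(worker, {:ping, parent})

receive do
  :pong -> IO.puts("worker survived; crasher alive? #{Process.alive?(crasher)}")
end
```

With `spawn_link` or a supervisor you opt in to crash propagation deliberately; by default nothing is shared and nothing leaks across.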
> (co-operative in userspace, not OS-level).
Erlang's scheduler is preemptive, which means it has the ability to context switch between tasks without cooperation at any time.
> etc... for dealing with concurrent processing. This is literally what Erlang was designed to do, and Elixir makes it user friendly.
What exactly is user unfriendly in Erlang's concurrency model? You only need to understand 3 concepts, spawn, receive and send in order to perform any concurrent task. I'm not familiar with Elixir, what does it do differently?
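For comparison, here's what those same three primitives look like in Elixir; the only real difference from Erlang is the syntax:

```elixir
# The same three primitives, in Elixir syntax: spawn a process, send it a
# message, and receive the reply.
pid =
  spawn(fn ->
    receive do
      {:square, n, caller} -> send(caller, {:result, n * n})
    end
  end)

send(pid, {:square, 7, self()})

result =
  receive do
    {:result, value} -> value
  end

IO.puts("7 squared is #{result}")
```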
Elixir is a lot more readable for newbies (especially people with a Ruby background), which is what I assume they mean by "user friendly". Elixir also has some nice libraries as well... one I've been digging into recently is GenStage, which gives you a way to organize process communication sort of like a conveyor belt, except that processes request more work from producers, allowing you to build a system that isn't going to break down as processes get overwhelmed.
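To illustrate the demand-driven idea (this is a toy sketch with plain processes, NOT the actual GenStage API), the consumer asks the producer for a batch, so it can never be flooded faster than it can process:

```elixir
# Toy demand-driven producer: events flow only when the consumer asks for
# them, which is the core idea behind GenStage's back-pressure.
defmodule ToyProducer do
  def loop(counter) do
    receive do
      {:ask, demand, consumer} ->
        # Produce exactly as many events as were requested.
        events = Enum.to_list(counter..(counter + demand - 1))
        send(consumer, {:events, events})
        loop(counter + demand)
    end
  end
end

producer = spawn(fn -> ToyProducer.loop(1) end)

# The consumer pulls 3 items, and only when it is ready for them.
send(producer, {:ask, 3, self()})

batch =
  receive do
    {:events, events} -> events
  end

IO.inspect(batch)
```

Real GenStage layers subscriptions, dispatchers, and buffering on top of this, but the "consumer drives the pace" principle is the same.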
fwiw green/userspace threads are not a good thing[1]. They're only used because for the most part system level threads don't yet have the scaling properties applications need. Once that deficiency is fixed (yes...we're still waiting after 30 years) green threads can go away.
[1]Because insight at the system level is required to do effective scheduling and because the transition between the userspace's concurrency model and the system's typically introduces unpleasant limitations and sometimes bugs.
If you're saying that a thread/actor/CSP model of concurrency is better than "async" (callbacks) as used with JavaScript, then of course -- 100% agree on that.
So what you're really saying is that green threads are a good thing.
One of the biggest strengths of Erlang (and Elixir by extension) is that they take the approach of finding practical solutions to real problems.
Sure, theoretically there might be a better solution than green threads, but like you said, we're still waiting for that better solution 30 years later. In the meantime, Erlang has been powering highly reliable and concurrent systems for ~30 years now.
> typically introduces unpleasant limitations and sometimes bugs.
The BEAM VM does a very good job. It serves financial, large telecom, messaging, web and many other services. It is one of the marvels of software engineering. Not only does it provide this N:M scheduling well, it also provides process heap isolation between concurrency units (processes).
Heap isolation is very important: it increases safety and fault tolerance, and provides guarantees that are just not there with a shared memory model. Heap isolation is also what other CSP/Erlang-like clones ignore when they say they provide an Erlang-like environment. The only real equivalent is something like forking hundreds of thousands of OS processes [+]
> Once that deficiency is fixed (yes...we're still waiting after 30 years) green threads can go away.
I am not sure when it will be fixed. We'd need to be able to run one million OS processes effectively with the same latency and memory usage as what Erlang's processes currently have. We haven't gotten there in 30 years as you said, I don't think we are getting any closer. Sure there are machines that could probably run a lot more OS processes than before, but then putting an Erlang VM on the same hardware would allow running even more of its lightweight processes and so on.
[+] Another interesting exception could be Rust. If, say, threads were very cheap to create and you could have millions of them, it could provide the same memory safety guarantees at compile time.
REST over websockets. What's the advantage over HTTP? I know websockets allow for pushing data to clients, but this is pull so apparently there should be no advantage, only more code to write.
Edit: maybe you're sharing updates among all the users connected to the server. Still, for sending requests HTTP is enough. Any performance advantage by using websockets?
This is interesting when you want near-realtime performance, as you don't have all the overhead of an HTTP request: you basically just send a few bytes (a JSON document, or even just a single value), and you don't have to build a request and negotiate a connection.
This is not obvious because we have helpers to make HTTP requests, but there are many things going on while issuing an HTTP request (building headers, encoding the body, etc.) [1]. A socket (be it a websocket or a classic socket) has the advantage that all the negotiating is already done and you just have to send the message, in whatever format you want. Decoding the request on the server side is way easier too.
Clients don't have any API to get at data pushed by HTTP/2 - it is not a websocket replacement and can't be used for the same things. HTTP/2 will be used to multiplex static resources to clients, not application data.
I've been out of front-line web development for a while. Are Websockets supported widely enough that a developer can reasonably assume they are available and not have to worry about workarounds such as comet/long polling?
Yes, I would say they're an expected browser feature these days, with over 94% global browser availability [0] (basically anything >= IE10).
The cool thing about Phoenix specifically is that they provide a JS client to integrate with a Phoenix Channels [1] backend that will automatically fail-over to long-polling if window.WebSocket is not available [2]. These Phoenix Channels are set up to be transport agnostic, so you don't have to write any special backend or client code to handle one way or the other, it "just works" for the most part.
The last time I did anything with Websockets was about 5 years ago and they were relatively well supported by browsers even then. I used socket.io because it automatically falls back to long polling when native Websockets are not supported. I am guessing if you needed 100% support this may still be a good option.
[EDIT] I see another commenter in this thread mentions phoenix.js beating socket.io. Thanks, I will have to look into this!
Please anyone correct me if I'm wrong, but there's no raw socket interface in browsers, as far as I know. Websockets are the closest thing, while still ensuring usual client side security (like preventing cross site request).
A bit tangential, but one of the things I've enjoyed about using GraphQL lately is how easily it can be used over websockets. All the API work you do to support HTTP can be immediately used over websockets because it never required a specific transport to begin with.
Phoenix channels make it super easy to setup and use websockets, but you're sort of on your own as far as figuring out how to work REST conventions into it since all the tooling that exists for REST generally assumes and requires HTTP.
It's great to be able to write the code for the API once and support it across a broad variety of transports.
That's HN for you. I should have realized after I looked over issue 156 this morning. No questions, I was just curious about experiences with it and wanted to see if that's what you were referring to. Thanks for releasing it, I can't wait to try it out.
Websockets provide full-duplex communication channels over a single TCP connection without the overhead of a http request/response cycle. For realtime applications such as the one described in this post or for a chat application, there is a significant performance benefit over plain http. Maintaining open websocket connections is also very cheap in terms of memory used as well as cpu resources.
Apart from the obvious push capabilities, yes, websocket has much less overhead and is faster than HTTPS.
Once the connection is established, you can send requests starting at 1 byte size, whereas HTTP needs the full headers on every request.
HTTPS also has a lot of handshake overhead, where you need 2-3 round trips per request instead of a single one with websocket. This can make a world of difference in a 150+ ping situation, where HTTP feels sluggish and websocket feels instant.
actually, both HTTPS and WebSocket over TLS (wss://) require a TLS handshake, so it's not like you're constantly doing that after you've loaded the page, but HTTP request/response header sizes could def be a concern.
SSL Session Reuse needs to be configured as well, otherwise every single request will have to perform a new handshake, independently of keepalive.
In practice I've always observed HTTPS requests taking at least twice as long compared to a back and forth through a websocket, even when SSL session reuse and keepalive are enabled. I don't know if having both theoretically allows for single-roundtrip HTTPS requests.
Performance advantage is pushing messages from the server instead of having the client poll.
The tradeoff is dealing with half-open TCP connections (need an in-process ping to ensure the connection is still alive) and designing the app for state recovery in the case of missed messages.
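The in-process ping can be sketched as a GenServer that schedules a message to itself (names and intervals here are hypothetical; a real implementation would also send an actual ping frame over the socket):

```elixir
# Hypothetical application-level heartbeat: schedule a :ping to ourselves on
# an interval; if the peer misses too many pongs, assume the connection is
# half-open and stop.
defmodule Heartbeat do
  use GenServer

  @interval 30_000
  @max_missed 2

  def start_link(opts \\ []), do: GenServer.start_link(__MODULE__, :ok, opts)

  @impl true
  def init(:ok) do
    schedule_ping()
    {:ok, %{missed: 0}}
  end

  @impl true
  def handle_info(:ping, %{missed: missed} = state) when missed >= @max_missed do
    # Peer is presumed gone; stopping lets a supervisor clean up.
    {:stop, :normal, state}
  end

  def handle_info(:ping, state) do
    # In a real app, send a ping frame over the socket here.
    schedule_ping()
    {:noreply, %{state | missed: state.missed + 1}}
  end

  def handle_info(:pong, state) do
    # Any reply from the peer resets the miss counter.
    {:noreply, %{state | missed: 0}}
  end

  defp schedule_ping, do: Process.send_after(self(), :ping, @interval)
end
```

Phoenix's channel transport does this for you, but the recovery-from-missed-messages part is still application design work.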
Non-representative anecdote: I've recently heard of two teams that started using Elixir. Both came from a Ruby background, and in both cases they weren't using OTP.
They both justified using Elixir because it was "closer to Ruby" (???), and both seemed to be having a pretty difficult time with the language. Which makes me wonder: why would you ever want to use such a special language (purely functional isn't for everyone) if you don't have any scalability issues, and don't even use OTP?
People usually dive into Elixir expecting Ruby, but it’s a very different language. Most of this confusion comes from not wanting to sit down and read a good book about it. The only thing it takes from Ruby is variable and namespace naming (since José Valim was a ruby syntax enthusiast). Ruby syntax is good IMHO, but that’s not the only reason to go for Elixir (far from it).
I recommend the book "Programming Elixir". Even if you end up not picking the language, it makes a good point about how OOP is a natural enemy of concurrency (because of state) and about how transforming data (i.e. functional programming) is a better match. The language runs atop the Erlang VM, which successfully navigated the troubles of concurrency in ISP nodes a long time ago.
I don't know if I'd move a project without scalability issues (and that's an important caveat) to Elixir purely based on the language facilities like pattern matching, but I'd definitely start one with it. Pattern matching alone is something I feel the lack of every time I get into a Ruby project these days.
For a project where scalability is a concern, I'd have no qualms about moving it over. At the very least I'd carve out the slices which most impact performance and implement them in Elixir and deal with the PITA of having two app servers. In a real world project with these kinds of issues I've seen some pretty huge improvements by doing this piece by piece.
There is a significant learning curve, but anyone can learn it. It is not much like Ruby, yet it captures much of the joy of programming in Ruby and is far better for structuring large programs. There is optional typing support through Dialyzer, but even without that the compiler will catch many mistakes that annoy me and slow down development in Ruby.
Also - the performance can really make a perceptible difference in user experience even if you don't "need" the scalability.
I've seen a few (as in 2 ;) ) examples of this myself. And this is what "scares" me the most for Elixir's future/potential. I think it's a great language/tech, but Ruby enthusiasts rushing into it thinking that it's "just like Ruby but faster" may have a very bad influence on where the language goes. They may bring a lot of Ruby best practices that don't translate at all to the Elixir/Erlang world.
That was indeed my underlying point. I see elixir more as "erlang with easier syntax", than "ruby with a more powerful runtime".
i.e.: you should get to understand Erlang's design decisions prior to jumping into the tech. Which probably means having more experience than building the typical low-load website made with RoR.
We already had a ton of Rails devs who didn't know Ruby or OOP well. If Elixir becomes very popular, we will have Elixir devs that don't know OTP, functional programming (outside of pure functions/immutability), or Erlang. That's the way it goes.
I'm in the fourth month of a Phoenix project, which shields me from almost all of OTP, and I agree with your point. Maybe even "Erlang with a Ruby syntax" would be too much. Similarities with Ruby don't go any further than "def", "do", "end" and many library modules and functions deliberately named to mimic corresponding modules and methods in Ruby. Everything else is different, both syntax and semantics.
After 10 years of Rails, all things considered Elixir and Phoenix are pretty good.
The best feature of the language so far is pattern matching. It can be used in the arguments of function definitions. It makes all the "if"s go away. It seems strange after 30 years of programming, but a program without conditionals is easier to read.
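As a small illustration of that style (the `Shipping` module is a made-up example), each function clause handles one shape of input, so no `if` is needed:

```elixir
# Pattern matching in function heads replaces conditional branching: each
# clause handles one shape of input.
defmodule Shipping do
  def cost(%{country: "US", weight: w}) when w <= 1, do: 5
  def cost(%{country: "US"}), do: 10
  def cost(%{country: _other}), do: 25
end

IO.inspect(Shipping.cost(%{country: "US", weight: 0.5}))
IO.inspect(Shipping.cost(%{country: "FR", weight: 3}))
```

The dispatch logic that would otherwise be a chain of `if`/`elsif` lives in the clause heads, where the compiler can also warn about unreachable or missing cases.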
Not having to use an external background job framework is great news too. Just spawn your processes to send a mail, process something asynchronously and store the result in the database. No Sidekiq (or Celery if you're into Python.)
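A minimal sketch of that, using the standard library's `Task` (the mail-sending function is a stand-in; a production app would run this under a `Task.Supervisor`):

```elixir
# Fire-and-forget background work with Task, instead of an external job queue.
# send_mail is a stand-in for an SMTP call or any other slow side effect.
send_mail = fn address ->
  Process.sleep(100)
  IO.puts("mail sent to #{address}")
end

# The request-handling process moves on immediately.
{:ok, _pid} = Task.start(fn -> send_mail.("user@example.com") end)

# When you need the result later, use async/await instead.
task = Task.async(fn -> 40 + 2 end)
IO.inspect(Task.await(task))

# Give the fire-and-forget task time to finish before the script exits.
Process.sleep(200)
```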
On the awful side, Ecto is too low level and it feels like a self-inflicted pain. I understand the reasons behind its design decisions, but the typical web application or mobile backend doesn't need all of that. Actually, the team working with me on this project wrote a lot of code to build queries and wrap them into another layer to return {:ok, result} / {:error, reason} tuples. We're calling them from controllers more or less as Model.get_something(args). I checked just now and we have zero occurrences of " from " in controller code. All the queries are confined inside a model. If we were using Rails, ActiveRecord would have written all that code for us. Instead we had to use a more verbose query syntax and write our own functions and tests. This is a net loss of productivity.
There are some modules that can be mounted on the top of Ecto to make it look like ActiveRecord or other ORMs for other languages. I'm looking forward to using Ecto.Rut in my next project. I hope I'll never have to use Ecto directly.
We almost didn't use the supervisor layer of the language (somewhat like systemd/init, for those not familiar with it.) I used it in smaller projects and I feel like it's a little too complicated. Same for defining a GenServer: more complex and verbose than exposing the same functionality with methods on an object. I know we have to split the code between what runs in the client process and what runs in the server process, but I feel there should be a vanilla GenServer that handles the most straightforward case inside the behaviour, which is no code in the client with the exception of passing the arguments to the server. Maybe use the name, sigils or module attributes to declare whether there will be a synchronous return value.
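For what it's worth, the standard library's `Agent` already covers the simplest case: the functions you pass run inside the server process, with no callback module to write:

```elixir
# Agent: GenServer-style state holding with no callback module. The functions
# passed to update/get are executed inside the server process.
{:ok, agent} = Agent.start_link(fn -> %{} end)

Agent.update(agent, &Map.put(&1, :count, 1))
IO.inspect(Agent.get(agent, &Map.get(&1, :count)))
```

It deliberately trades away the client/server code split, which is exactly what makes it lighter for state-only servers.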
Anyway, Elixir is still young and there is time to smooth the rough edges. So far it's an acceptable contender and definitely better than the popular scripting languages when concurrency is important.
Don't take this the wrong way, but it sounds like you're approaching this from the perspective that Rails' way is the "correct" way when in fact almost everything about Rails' doctrine will cause you problems over the life of a project.
> We're calling them from controllers more or less as Model.get_something(args). I checked it now and we have zero occurrences of " from " in controller code. All the queries are confined inside a model.
Yeah, that's almost a best practice, except you're regarding the "model" (the terminology is 'schema' in Ecto) as primary instead of building it as a separate context or application (depending on your preferred approach to modularity within Elixir/Phoenix). For that matter, I fail to see the advantages of `Model.get` over `Model |> Repo.get(id)`.
> If we were using Rails ActiveRecord would have written all that code for us.
There's not a substantial difference between the kind of AR usage that's on the happy path and the equivalent query in Ecto.
> I know we have to split the code between what runs in the client process and what runs on the server process, but I feel there should be a vanilla GenServer that handles the most straightfoward case inside the behavior, which is no code in the client with the exception of passing the arguments to the server
First of all thanks for the detailed answer. It contains many interesting points.
I don't think that AR is correct, I think that AR is more convenient than Ecto in many cases we run into in web development. With AR I don't have to write my own queries and encapsulate them into functions (but it's kind of what we do with scopes) and it has a more compact syntax. Ecto is more flexible, but that flexibility is not needed in most cases. Anyway when we must really be flexible we write queries in SQL and manually unmarshal resultsets. ORMs/Data Mappers are for simple and medium use cases.
I still didn't run into problems with the AR approach. Maybe it's because all of my projects were medium or small sized. I found AR to be perfect for them to the point that I want to disguise Ecto as AR using Ecto.Rut.
> you're regarding the "model" (the terminology is 'schema' in Ecto) as primary instead of building it as a separate context or application
This sounds interesting but I fail to understand what you mean. Would you mind explaining or posting a link? Thanks.
> I fail to see the advantages of `Model.get` over `Model |> Repo.get(id)`
It's shorter but not by much. One reason is that Repo doesn't mean much to me, so it could be hidden. What I care about is Model. But getting values out of the db is not such a pain. Inserting and updating is, because they are more verbose. I quote Ecto.Rut for the insert
If all of those extra characters are for extra flexibility (maybe for using more repositories in future?) then it smells of premature optimization. I'll happily do without it.
Regardless of this discussion, IMHO one thing Ecto should fix is requiring developers to write both the migration and the schema. It's either the AR way, migration first and auto-generated model, or the Python way, model first and auto-generated migration. We got some bugs because we didn't write the same things in the migration and in the schema. Mistakes happen and the tools we use should help us not make them. Ecto is not DRY but it should be.
Btw, the compactness argument applies more or less to all of Ruby vs Elixir, because the object.oriented.notation.is.shorter than the Module1.functional |> Module2.way |> Module3.of |> Module4.composing. Aliases help to some degree but they add clutter to the top of the file. It's a little nuisance, but pattern matching more than evens it out.
I have to look more into Agent (thanks) but it seems exactly the opposite of what I want: write in the module only the code that should run in the server. Probably what I'm looking for is a macro that writes a vanilla GenServer for me hiding all the functions run on the client.
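Such a macro is feasible; here's a toy version (inspired by, but much less capable than, ExActor's real `defcall`) that generates both the client wrapper and the `handle_call` clause from one definition:

```elixir
# Toy macro (NOT the real ExActor API) that writes both halves of a GenServer
# call from a single definition. var!(state) exposes the server state to the
# user-supplied body.
defmodule MiniActor do
  defmacro defcall(call, do: body) do
    {name, _meta, args} = call
    args = args || []

    quote do
      # Client-side wrapper: runs in the caller's process.
      def unquote(name)(pid, unquote_splicing(args)) do
        GenServer.call(pid, {unquote(name), unquote_splicing(args)})
      end

      # Server-side callback: runs in the GenServer process.
      def handle_call({unquote(name), unquote_splicing(args)}, _from, var!(state)) do
        {:reply, unquote(body), var!(state)}
      end
    end
  end
end

defmodule Calc do
  use GenServer
  import MiniActor

  def start_link(initial), do: GenServer.start_link(__MODULE__, initial)

  @impl true
  def init(initial), do: {:ok, initial}

  # One definition yields both the client function and the handle_call clause.
  defcall add(n) do
    var!(state) + n
  end
end

{:ok, pid} = Calc.start_link(10)
IO.inspect(Calc.add(pid, 5))
```

ExActor's real macros also handle casts, state updates, timeouts, and so on; this sketch only shows that the boilerplate is mechanical enough for a macro to absorb.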
If you don't need flexibility... well, usually that's a bad idea: you'd want to carefully control what can be mass-assigned and what should be deliberately set by hand.
In the case of Ecto, you'd use a changeset that casts only an explicit list of fields.
> If all of those extra characters are for extra flexibility (maybe for using more repositories in future?) then it smells of premature optimization. I'll happily do without it.
I wouldn't say it is to use more repositories in the future (I also hate future-proofing code) but rather to make it explicit what is happening on the database side. It aligns well with other ideas in Ecto, such as letting the database do uniqueness checks.
Most of the problematic Rails projects I worked with were because of this coupling between business logic and database that ActiveRecord encourages. But this is nothing new, it is one of the top 3 complaints about Rails.
I think I came off snarkier than I intended to above, thank you for reading that gracefully.
> I still didn't run into problems with the AR approach. Maybe it's because all of my projects were medium or small sized. I found AR to be perfect for them to the point that I want to disguise Ecto as AR using Ecto.Rut.
Perhaps it's a stylistic thing, but recent versions of Ecto feel quite natural to me. I frequently use Ecto without a database by using it for embedded schemas (which I don't embed); they're basically just structs that I can use with changesets a little more cleanly.
As someone else noted, you don't have to use changesets for the happy path interactions like simple inserts. It really just comes down to whether you can live with the Repo as the first thing you type instead of the schema.
> This sounds interesting but I fail to understand what you mean. Would you mind explaining or posting a link? Thanks.
Rails teaches us to look at the model as the primary point of business logic. In my view this is putting the cart before the horse. By putting the Model (big M, not little m) first, we limit our thinking around abstractions to what we can represent structurally in the database. A well-designed, thought-out application model (little m, not big M) encompasses much more than database tables. With AR, even if you try to create behaviors on fat models, ultimately the only vocabulary you have to work with is the set of nouns the database allows you.
Ecto does something subtle but important: it demotes your data schema to just that, data. Behavior is modelled in the messages you pass between processes, which is why you see so many people in the Elixir community jumping into architectural techniques like event sourcing. While things like Ecto.Rut add some conveniences, they also encourage promoting the data to the central artifact of the system. After building Rails applications from small to very large over 10 years, I can say for my part I want to stay as far away from that as possible. It's convenient until it's not, and once you start to feel the pain of it, it's very difficult to unwind its effects throughout your system.
Worth noting, especially if you're in the first 12-18 months of using Elixir: OTP is incredible, but it takes time to wrap your head around all its pieces. I usually recommend new users coming from an MVC framework just try to muddle along using Phoenix as they would Rails until they start getting comfortable with things like supervision trees and GenServers... and then the fun really starts. All the rest of it, like Ecto's hands-off approach to the database, really starts to make sense around that time.
ExActor is more or less what I was looking for. It deserves its 491 stars (492 now.) A big thanks for that!
After so many years of software development I don't like unnecessary complexity, that's why I'm keen to shave off features from Ecto and GenServer (and a lot of other tools, not only in Elixir) and settle for a subset with an easier API.
That said, you have a point when you write that AR's approach is "convenient until it's not, and when you can start to feel the pain of it, it's very difficult to unwind its effects". The project I'm working on is an MVP. Who knows where my customer is going to be in 12 months. What I know for sure is that using Ecto cost them some extra time to deliver because of all the boilerplate we had to write. Would I add an extra layer between the db and the logic in a Rails application if its requirements implied complicated logic and interactions? Probably yes. We can put any kind of objects in the models (or lib) directory of Rails, not only ones derived from AR.
I think I'm still left with the impression that you may be making things a little harder on yourself than they need to be, or perhaps better stated, than they need to be now... Ecto's API has gotten streamlined over the last few versions, such that for most common operations I can't see much more than a cursory cosmetic difference in the APIs, like starting the call chain from the Repo instead of the Schema. What I will readily concede is that support for things widely considered to be antipatterns, like STI and polymorphic associations, is more complicated.
> After so many years of software development I don't like unnecessary complexity, that's why I'm keen to shave off features from Ecto and GenServer (and a lot of other tools, not only in Elixir) and settle for a subset with an easier API.
This is what I'm missing: there's essential and accidental complexity, and when I look at GenServer, I see an API that's been shaved down by decades of practice in Erlang to its most essential complexity. Even ExActor is just a set of macros for generating those essential parts; it basically saves on typing without skipping functionality. Ecto hasn't had the years of legacy that OTP has, so its API has been in flux over the last few years, but it's still something I think is cut down to the bare minimum for healthy database interaction.
This kind of discussion is best handled by email, i think, and mine is in my profile—feel free to send a gist of something that illustrates what you're talking about, and maybe I'll have a better understanding of what aspects of the API are headaches. I'm really curious what I'm missing about your perspective here.
It is more flexible, but often that flexibility is not necessary. I replied in more detail to the sibling answer but in the Ecto version of this sample query, don't I have to write the definition of actual and by_email? AR gives me by_email for free. Instead, actual would be a scope and I have to write that code with AR too.
It's worth pointing out that José Valim both created and maintains the language, and he's a core team member of Rails. He knows what to take from Ruby and what to leave behind, and has been incredibly thoughtful so far.
To elaborate on this, a "traditional" stack of Phoenix for the web and Ecto for the DB uses quite a bit of OTP, particularly if websockets are involved (every connection is a GenServer).
The thing is, going from Ruby to Elixir involves two learning curves: grokking functional programming, and then grokking processes. They really do need to be learned in that order since, within any given process, you're just operating in a relatively normal functional world.
HTTP requests are very easily envisioned in a simple functional way. You've got request data in, and you're gonna return response data. The interaction you have along the way with OTP based database pools and so forth is as a client, you aren't really having to manage the lifecycle of any of that stuff. In other words to get going with basic web stuff it's more critical to just become familiar with functional programming, and less critical to understand how OTP works. This is totally fine from a learning curve perspective.
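That "data in, data out" view can be sketched as plain functions over maps (a hypothetical Plug-style pipeline; real Plug works on `%Plug.Conn{}` structs):

```elixir
# A hypothetical Plug-style pipeline: the request is a plain map, each step is
# a pure function, and the response is just the final transformation.
defmodule Pipeline do
  def handle(request) do
    request
    |> authenticate()
    |> route()
  end

  # Tag the request with a user based on its headers.
  defp authenticate(%{headers: %{"authorization" => _token}} = req),
    do: Map.put(req, :user, :known)

  defp authenticate(req), do: Map.put(req, :user, :anonymous)

  # Turn the enriched request into a response map.
  defp route(%{path: "/hello", user: :known}), do: %{status: 200, body: "hello, friend"}
  defp route(%{user: :anonymous}), do: %{status: 401, body: "who are you?"}
end

IO.inspect(Pipeline.handle(%{path: "/hello", headers: %{"authorization" => "abc"}}))
```

Nothing here needs a process or OTP at all, which is why plain web request handling is such a gentle introduction to the functional side of the language.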
What's cool about Phoenix channels, though, is that each user's connection, and each channel they run, is a GenServer. It provides a great opportunity to get some hands-on interaction with the concept in a way that naturally fits into the web application they're building.
TL;DR: You can be a client / user of OTP processes without knowing the details of how they work, and that's generally enough for HTTP.
I did (i.e. paid for) the Pragmatic Studio course, and a large part of it is building a project that mimics an HTTP server. I found it a great way to discover functional programming and pattern matching (it doesn't touch Phoenix though).
I worked on a project that used elixir without OTP. There’s a lot of nice things that elixir gives you and I didn’t personally feel like there was a significant learning curve. OTP is used under the hood for lots of analytics and reporting things, but those pieces were built by other people. The project I was working on had very specific requirements for response times that simply weren’t realistic with Ruby. We could have squeezed Ruby a bit to get what we needed, but it would have introduced complexity. We also knew we’d need to add features in the future while maintaining the same performance, so siding with the simpler solution made more sense to us. At some point in the near future we’ll probably use OTP, and I’ve already got a few ideas of how we’ll be leveraging it for some of our upcoming features. OTP is the killer feature, but I find working with elixir in general to be a very pleasant experience even on projects that don’t use it.
That seems to be the case with the majority of vocal (i.e. hosting or being on podcasts) people who use Elixir. In a sense, I'm lucky since I came from a C# background and don't have to worry about doing things the "Ruby way" instead of the "Elixir way".
>why would you ever want to use such a special language (purely functional isn't for everyone) if you don't have any scalability issue, and don't even use OTP ?
There are a few reasons. One is the hype train that has surrounded the language over the past year+ (at least that's when I noticed it gaining steam). Another is that they just don't know enough to get into OTP. Elixir is still young, and there is a lot of OTP functionality that isn't represented in Elixir, so you need to drop down to Erlang to use it. Erlang is not the prettiest of languages and has its own learning curve as well. I think eventually the teams you've heard of will get to OTP as they get more familiar with the language.
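"Dropping down to Erlang" sounds scarier than it is: an Erlang module is just an atom from Elixir's point of view, so anything in the Erlang standard library is callable directly. For example, Erlang's `:queue` has no Elixir wrapper, but you can use it as-is:

```elixir
# Calling Erlang's FIFO queue module directly from Elixir. The data
# structure is immutable, so each operation returns a new queue.
q = :queue.new()
q = :queue.in(1, q)
q = :queue.in(2, q)

# :queue.out/1 returns {{:value, item}, rest_of_queue}
{{:value, first}, _rest} = :queue.out(q)
# first is 1, since queues are first-in first-out
```

The syntax is Elixir throughout; only the module name tells you it's Erlang underneath.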
If you're doing development with a web socket layer, it's entirely worth it to try out Phoenix Channels before you commit to another stack. It makes your life so much easier.
There are a lot of other great reasons to use Elixir though.
Phoenix Channels feels like cheating because it's just so intuitive and you get so much performance straight out of the box. It's obscene! Maybe it's because I'm used to Ruby perf that this blew me away so much.
On a free-tier Heroku dyno and the $7 Postgres hobby-dev plan, performance was around 27,000 messages sent per second before Postgres started complaining. It wasn't even the app itself that hit a limit; it was Postgres. The messages were sent over websockets using Phoenix Channels and persisted to the database.
If you're a backend guy, add this tool to your repertoire - you'll become a wizard and just have tighter code with fewer headaches.
The reason Elixir pulls people from Ruby is the language and community emphasis on developer productivity IMO.
It’s the one thing that Ruby excels at so well it justifies using Ruby in the face of plenty of other “fast” options. Dev time and time to market are still infinitely more important than hosting costs unless you are already at scale.
What Elixir brings to the table is that productivity and time to market with an architecture that scales for almost every scenario in server-side dev aside from heavy math. And that architecture is designed in a way that makes maintainability a first-class citizen, because of the lack of long-term dependency-entanglement growth.
> and not second rate Ruby on Rails refugees looking for a faster Rails
That's pretty harsh, and not at all true in my experience. There's a growing community and I had no trouble hiring two remote people. I ended up with a lot of qualified candidates.
Moreover, a lot of great people came over from Rails. José Valim and Chris McCord were both formerly Rails people.
I've met a lot of extremely talented Elixir devs, and Elixir enthusiasts who're eager to work with the language full-time. You're actually in a really good position if you're looking to hire Elixir devs, because there's more excitement around the language than open jobs for it.
That's why I love Phoenix Contexts, introduced in 1.3 - they let you really structure your code instead of just throwing everything into a "model" like in Rails.
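For anyone who hasn't seen one, a context is just a plain module that acts as the public boundary for a slice of domain logic, so web code never touches schemas directly. A minimal sketch (the module and schema names are made up, and this assumes an Ecto repo is configured):

```elixir
# Hypothetical Accounts context: the web layer calls these functions
# and never reaches into the User schema or the Repo itself.
defmodule MyApp.Accounts do
  alias MyApp.Repo
  alias MyApp.Accounts.User

  # Fetch a user or raise if it doesn't exist.
  def get_user!(id), do: Repo.get!(User, id)

  # Validate attrs through the schema's changeset, then insert.
  def create_user(attrs) do
    %User{}
    |> User.changeset(attrs)
    |> Repo.insert()
  end
end
```

Because controllers only ever call `Accounts.create_user/1` and friends, you can later swap storage details without touching the web layer.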
Hi, I used gtk-recordmydesktop (beware it will freeze your desktop under Ubuntu 17.10 Gnome but works fine under Unity). It gives you an .ogv file and then there's some editing to get a gif. I described the process at the end of: https://blog.openbloc.fr/javascript-es2015-starter-kit/
Mainly: convert the .ogv to .mp4 with ffmpeg (I don't have the command here), create a PNG palette from the video so your gif colors are more accurate, then use ffmpeg to create the gif.
The Ghost blog lets me directly upload images to my CDN; it's a little more work to publish a video, and I was too tired when finishing this article yesterday to think of better options anyway :)
Elixir is pure pleasure. Coming from a Ruby background, I didn't have huge difficulty understanding the syntax, and unlike other functional languages like Clojure, it's not an abrupt change from the procedural paradigm. You can still assign variables in functions, etc.
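That "you can still assign variables" point is worth illustrating, since it's the thing that softens the jump from imperative languages. Elixir lets you rebind a name inside a function (it's rebinding, not mutation), while pattern matching handles the branching. A small made-up example:

```elixir
defmodule Rebind do
  # Rebinds `count` like an imperative language would, then pattern
  # matches on the list's shape instead of using if/else chains.
  def describe(list) do
    count = length(list)
    count = count + 1          # rebinding the name, not mutating data
    case list do
      []         -> {:empty, count}
      [head | _] -> {:first, head, count}
    end
  end
end
```

Calling `Rebind.describe([5, 6])` matches the `[head | _]` clause with `head` bound to 5, and `count` ends up as 3.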