Reasons to use Phoenix instead of Rails (medium.com/elviovicosa)
122 points by elvio on Nov 26, 2017 | 108 comments



I've used Rails for a long time, and Phoenix since early versions (0.6 or something like that?).

Phoenix is amazing, but I haven't seen it get anywhere near the productivity levels you can get with Rails. Other developers I've spoken to say the same thing. This is even more true with the recent release of Phoenix 1.3 encouraging the use of contexts. I think it's a good pattern to extract to once you have more knowledge about your application, but trying to think about it up front has slowed development down, and it has been hit or miss whether the context was "correct". https://hexdocs.pm/phoenix/contexts.html
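For anyone who hasn't seen 1.3 yet, a context boils down to a plain module that owns a slice of the domain. A minimal sketch (MyApp.Accounts and the User schema are hypothetical, generator-style names):

    defmodule MyApp.Accounts do
      alias MyApp.Repo
      alias MyApp.Accounts.User    # hypothetical schema

      # The web layer calls these functions instead of touching
      # Repo or the schema directly.
      def get_user!(id), do: Repo.get!(User, id)

      def create_user(attrs) do
        %User{}
        |> User.changeset(attrs)
        |> Repo.insert()
      end
    end

The hard part isn't writing this; it's deciding up front which modules belong in Accounts at all.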

I think Rails and Phoenix have a heavily overlapping place in web development but my personal tl;dr is that Rails is great for getting shit done and Phoenix is great for scaling, especially when it comes to websockets.


> Rails is great for getting shit done and Phoenix is great for scaling

I think that's a fair assessment. Rails is super productive, but if you're building, say, a financial exchange, Phoenix would probably serve you better.


Yes! Contexts are the equivalent of building a greenfield Rails app with a bunch of properly namespaced and isolated Rails engines. You end up with some really awkward names and boilerplate code that just doesn't feel right. You can get that sense while reading the Phoenix context documentation.

The Phoenix story should include an area where the app can evolve, and as it matures and becomes more well understood, pieces of it could be moved into contexts.


100% agreed. I see what the Phoenix team was trying to accomplish, but I don't think it will pan out in projects how they expect. It reminds me a lot of fat model and skinny controller when that was popular.

I think the underlying issue I have with contexts is that it's forcing us to predict the future. We have always had to predict/plan, but I think contexts push it a bit further than we can reliably predict/plan. This is especially true for new apps, which tend to change rapidly.

Like I said before, I think it's a great pattern to extract existing code into, but it feels too heavy to start with.


Would you have any thoughts on how they compare in maintainability?


I do. I think it all boils down to the team. I'll admit Rails apps are a bit harder to untangle, but poorly written code is poorly written code.

A lot of clever metaprogramming tricks can make that poorly written code a lot more painful though. :)


I've used Django and Phoenix and Laravel. And they all have one thing in common: They're not the same at all.

I don't know why people compare MVC-framework-X-that's-inspired-by-Rails'-foundations-and-fundamentals to Rails. Rails isn't great because of MVC. Rails is great because of the vast and mature tooling and ecosystem that exists in Ruby (bundle, rake, thor; Rubygems), and the effortless ability to metaprogram (which is why Rails is what it is).

Per Rubygems, there are literally hundreds of thousands of people who are smarter than me who have developed libraries that solve the majority of things that I will/may need to implement. I'm not smarter than them and I'm okay with that. I want to focus on domain-specific stuff. Not anything else.

Those other frameworks... They're just not the same.

It's like comparing Ghost to Wordpress. Sure, they both allow you to blog, but Ghost doesn't have the Wordpress ecosystem of themes and plugins (blah blah blah security, I know).

I don't want to spend my time reinventing the wheel and neither do my clients or users. I'm not trying to earn style points with my stack and neither should anyone else. I'm not trying to impress HN or colleagues or friends. I'm trying to impress my clients and my users.

Rails will, for at least the next 5 years, allow me and anyone else who's familiar with OOP to greatly outpace any other web framework, given it's the right tool for the job[0].

There are other comments about scaling below and I'd love to comment on each of them but I won't. Rails scales just fine. I proudly serve an Alexa Top 5000 website that peaks at 500k rpm a few times a month (Nginx/Puma/Postgres/Redis). It all sits on a $30 VPS (2 cores @ 2GHz/8GB RAM). Sub-50ms response times and as stable as can be. It's not the most trivial of applications, either (pushing 50k writes/min and 1000 open pg connections). Sure, you'll spend a day tweaking your setup, but the cost of a day's time is low in comparison to switching to something like Phoenix.

[0] No, you're not going to mine crypto with Ruby.


Phoenix is way overhyped, particularly on HN. This is the same as what happened with Golang and then with Rust.

A few years ago, I remember that all these articles kept popping up about people switching from Node.js to Go.

Now it's funny because fairly recently I read a well thought out article on HN encouraging developers to dump Go for Node.js... These days you hardly read anything about Go at all. Not that there is anything wrong with it, but reality caught up with the hype and it's no longer pretending to be a silver bullet.

Right now Phoenix is pretending to be a silver bullet, but it's really not.


It's underhyped, because the concepts Elixir brings to the table are most likely alien to most Rails developers (myself included at the time!)

Most people think it's about a faster ActiveRecord; it's not. It's so much more!


I really like Ecto for several reasons, but mostly because it makes every database call super obvious. If you Repo.something(), you're probably hitting the database.

A couple other reasons I like it: 1. It makes shooting yourself in the foot with N+1 queries really difficult. 2. It's not married to Phoenix. I haven't tried this in a while and maybe this isn't representative of the current state of things, but last time I tried to use AR in a non-Rails project, I ended up switching to the Sequel gem.
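As a rough illustration of both points (Post and Comment schemas assumed here, not taken from the talk below):

    import Ecto.Query

    # The Repo call is the only line that touches the database, and the
    # preload turns would-be N+1 lazy loads into one explicit extra query.
    posts =
      from(p in Post, where: p.published, preload: [:comments])
      |> Repo.all()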

For a nice, if slightly biased, comparison of AR and Ecto, I recommend Darin Wilson's talk "Thinking in Ecto"[0].

[0]: https://www.youtube.com/watch?v=YQxopjai0CU


I'm old enough to remember when people said Rails was overhyped. :)

Hype aside, I really like Phoenix a lot.


For what it's worth, this submission is a comparison of Phoenix -- a framework explicitly inspired by Rails and which has some core members who were/are Rails core members -- and Rails.


I had similar thoughts initially. After understanding Erlang's OTP concepts, it all became clear. Phoenix's simplicity combined with the power of Erlang & OTP is fantastic.


Mind linking me that article about dumping Go for Node.js? I'd love to hear the reasoning behind that!



That's because "Rails with performance" was one of Phoenix's selling points from day one. Phoenix was inspired by Rails, and Elixir is heavily influenced by Ruby; prior to 1.3 killing the models in favor of contexts, Phoenix had a structure extremely similar to Rails MVC, with fewer implicit conventions.


"I don't know why people compare MVC-framework-X-that's-inspired-by-Rails'-foundations-and-fundamentals to Rails. Rails isn't great because MVC. Rails is great the vast and mature tooling and ecosystem that exists in Ruby (bundle, rake, thor; Rubygems), and the effortless ability to metaprogram (which is why Rails is what it is)."

This is exactly it, IMO. My employer uses both Rails and ASP.NET MVC, and it is not uncommon for the .NET developers to need to hand-roll one or two of the proposed features of a project, where Rails developers could have just used a well-tested gem. Soft-delete and content versioning are obvious examples: ActiveRecord plus community gems make these features trivial in Rails.

Rails does fall down on raw performance: this becomes noticeable on large batch jobs and big test suites, but it's never been a blocker.

Personally, I am very interested in Buffalo[1], which is essentially Rails for Go, but it will take at least a few years for the Buffalo ecosystem to be able to match Rails.

[1]: https://gobuffalo.io


I'm in love with the simple explicit composability of everything in the Elixir ecosystem. In Phoenix everything is a pipeline from the connection leading down through the routes, controller, etc. Phoenix is just a plug in a Mix application and it doesn't impose itself in everything I do. I'm not a Phoenix developer, I'm an Elixir developer who happens to be using Phoenix to manage web traffic.
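For anyone who hasn't seen it, this is roughly what that looks like in a generated router (module names are the generator defaults, assumed here):

    defmodule MyAppWeb.Router do
      use MyAppWeb, :router

      pipeline :browser do
        plug :accepts, ["html"]      # each plug takes a conn and
        plug :fetch_session          # returns a transformed conn
        plug :protect_from_forgery
      end

      scope "/", MyAppWeb do
        pipe_through :browser
        get "/", PageController, :index
      end
    end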

I think this is where Rails and Phoenix diverge in philosophy: Phoenix prioritizes explicitness and minimal assumptions about how and what you're going to do with their tool, whereas Rails is famously opinionated, providing a 'Rails way' of doing most things.

What Rails does, it is very good at, but when you move outside of its expertise, you may find yourself in hot water fast. Phoenix can be what you need it to be, and when you need to do something outside of Phoenix's domain, everything is composable, so pick and choose what you need.


So, I've used Rails (Ruby), Django (Python), Flask (Python), Revel (Go), Spring (Java), Node.js (JavaScript), and have even used C, PHP and Go to roll my own website from scratch[1].

The thing is, I've always scaled my website(s) to thousands or tens of thousands of requests an hour, with one website even getting close to a million an hour... all with no problem. In the case of Rails (as is discussed in the article), my bottleneck (from a usability perspective) is always bandwidth. Most applications (that I build, anyway) require a hefty amount of data. Waiting 200ms for the database + an additional 20ms for rendering, the 20ms is not noticeable. Scaling Rails (or any modern web app) is as easy as just launching another instance and load balancing.

Given that, I really don't see any advantage to Phoenix. Plus, the reason I personally love Rails is all the gems, which are super powerful. Most other frameworks simply don't have Rails' simple logic or its easily extendable gems.

[1] http://austingwalters.com/building-a-web-server-in-go-handli...


I mean... 1 million requests an hour isn't that many.

That's around 300 requests per second. Assuming 1000ms upper bound on requests, you need 300 workers to handle that load. Assuming 150MB per worker, that's 45 GB of memory required to handle the load. So like... 5 m4.xlarge instances on EC2 (to give redundancy and allow loss of 2 hosts). That's $700/month.

That's not that much. We've got a Rails app that pushes 5000 qps. And to be fair, we just dial up the number of instances and it handles it fine. It runs on over 100 instances. It costs us around $10k/month. Not the end of the world, cost wise, but we have multiple Go services that handle similar levels of traffic and run on a dozen instances. Additionally, deploys take a long time (rolling restarts plus health checks on 100 machines takes time).

Moving to Go (or Elixir) allows us to handle far more requests per unit of hardware. While latency would indeed improve, it's not the primary motivator for us moving away from Rails.

I haven't even mentioned the websocket story on Rails. That's a whole new can of worms.


*1 million requests an hour isn't that many.*

What percentage of websites serve 1 mil or more requests per hour, .01%? .001%? Meaning Rails is going to be performant enough for 99.9+% of projects, and for those projects it would have been a mistake to trade dev time for performance you’ll never need.


1M request per hour at peak?

A SPA backed by rails is probably going to make at least 10 requests on page load. So in terms of actual traffic, 100k page loads during a peak hour. Assume a roughly linear peak increase/decrease and we've got roughly 1M page loads per day. 30M page loads per month.

How many websites have 30M (non-unique) page loads per month? After some rough scouting on Alexa ranks, I'd put the over-under at probably 10k US sites, and 50k worldwide. Assuming Alexa has 50M sites, then 0.1% of publicly facing sites serve that much traffic.

Rails is used often on private, internal sites and tooling. Those sites wouldn't come up. That would definitely skew the 0.1% number.

Not making a huge argument here, I just started down that analysis path out of curiosity and figured I'd share it.


Great analysis! According to Netcraft, there are > 600 million websites, so assuming Alexa ignores the 550 million with near-zero traffic, you can get to the .01% pretty easily. Another area I'd tweak is the skew towards SPAs. Most sites aren't SPAs, especially as you slide down the traffic rankings.


Yeah, the skew towards SPA is more Rails focused. I haven't worked on a non SPA Rails app in over 5 years. P(SPA | Rails) is higher than P(SPA).


Do rails apps actually use 150MB per worker or was that just a lofty ceiling estimate? Or is that with an app server that doesn't utilize process forking, and loses the benefit of shared libraries/heap/etc?


Yes, but you only need one process per core, just like NodeJS.

Since 1.9 you can use real OS threads to achieve parallel IO, and certain parallel computations which can proceed without holding the Global Interpreter Lock.

JRuby offers completely unrestricted threading with a single process, plus a 3x performance boost, plus the advantage of an incredible amount of work put into their VM and GC. It's a really underrated option these days.

The problem with forking is that MRI Ruby is unaware of what memory is actually inherited by forking, so eventually the entire heap inherited from the parent gets copied into the child.

I'm actually working on a patch to fix this. The solution is simple, just don't mark, collect, or allocate into inherited pages, but the implementation itself is fiddly.

What's really exacerbated this is that most Linux distros now have Transparent Huge Pages turned on by default, and flipping a single bit in an inherited page causes a 2MB copy instead of a 4KB copy!


I'm fairly certain this was fixed in 2.0? https://medium.com/@rcdexta/whats-the-deal-with-ruby-gc-and-...


Nah, unfortunately simply moving the GC bits from the object itself to a bitmap in the page header made Ruby CoW friendlier but not CoW friendly! Each Ruby page is 4x OS pages on Linux, so marking into the header of each Ruby page still causes 1/4 of the parent heap to copy into the child process.

The bigger issue is heap fragmentation. Lots of inherited pages have a couple of free slots that Ruby will happily consume, effectively causing you to pay a 4KB copy for a 40-byte object.

This means allocating a few hundred objects can cause basically the entire heap to copy. Combine this with Transparent Huge Pages, where flipping one bit causes a 2MB copy rather than 4KB, and a few hundred object allocations can cause the entire parent process memory to be duplicated into the child.

Aaron Patterson is doing some great work to bring GC compaction to MRI, which will help reduce fragmentation, but it's a huge task.


Yes, they often do :/


> It costs us around $10k/month.

As someone who develops web apps for startups not in SV, that seems incredibly expensive to me.


It's the difference between 30 developers and 31 developers on payroll.


Haha, yeah. It's always interesting when you put it in that context. I often joke that AWS is our hardest working employee.


That's amazing. I make less than 1/5th of that and it is considered among the top wages over here.


Well, it's a sliding scale isn't it? If you have less traffic/paying customers you can pull that back down.


Have you ever considered JRuby on Rails? We had 10,000 students choosing lectures online at the same time using TorqueBox back in 2012, on just two server instances with multi-master replicated MySQL.


I share your opinion: if you have any bottlenecks that are unsolvable because Rails or Ruby is slow, your app has probably scaled to a point where you have the resources for a creative solution that is not Phoenix/Elixir.

Having said that, while I don't find any major reasons to opt for Phoenix instead of Rails, my main reason to work with Elixir/Phoenix right now on side projects is that it won't make me unlearn Rails, which I can always go back to. In fact, using Phoenix for a while made me miss a lot of the available libraries, plug-in solutions and wide range of community knowledge.

So at the moment I have more reasons to choose Rails over Phoenix, but I suppose that's why these sorts of articles are important: publicity to grow the community and fulfill Elixir's biggest need right now.


> in fact, using Phoenix for a while made me miss a lot of the available libraries, plug-in solutions and wide range of community knowledge.

This is my main reason for focusing on Rails rather than Elixir at present. The sheer breadth of gems available for Rails to quickly build sophisticated applications is hard to beat for any up-and-coming language/framework.

That said, my gut feel is that many of the developers who build and maintain these gems have moved on and that Elixir is the new hotness and will probably catch up in the next few years.


I wonder if you might elaborate on which libraries and plug-in solutions you missed in Phoenix.


My bottleneck with Rails was always memory usage. Our concurrency was always limited by how many web and task workers we could run in the amount of memory we could afford, which was not very many. It was not straightforward to optimize this and ignoring it didn't work for us, so we put lots of time into it and it changed the whole productivity argument for Rails.


You should really only need one process per core, plus a few threads per process. Obviously, this is a problem on Heroku where they give you 8 "cores" but only 512MB of RAM per 1x dyno, but on a DigitalOcean or AWS server you shouldn't be running out of RAM before maxing all cores.


Agreed. I have a sidekiq job that imports data into elasticsearch from a mysql database using activerecord, and it uses hundreds of megs of memory, which is ridiculous considering how little data is actually being imported.


This is basically my job at ChartMogul and we've pretty much solved this problem. The two biggest issues for us were: Ruby prefers to grow the heap really quickly rather than spend much time in garbage collection. You can turn this growth factor down at runtime using an environment variable (RUBY_GC_HEAP_GROWTH_FACTOR).

The second problem is importing a huge chunk of rows at once means they have to exist in RAM at the same time. Use batched iterators to reduce peak memory usage. All GCed languages have this problem, Go included.

You'd think Go's GC was somehow revolutionary from the way they talk about it, but it's basically the same as the Ruby GC, plus a little more parallelism. What helps Go is that the compiler can optimise and use stack allocation and re-use temporary variables. If that fails, it causes a nightmare, and the Go standard library is full of tricks to convince the compiler to do stack allocation.

Java, OTOH, has compacting garbage collection, so after high peak memory usage, it can release the memory back to the OS. Aaron Patterson has been working on doing the same for Ruby. If you use JRuby, you'll get this right now, plus it's about 3x faster for shuffling data around.


Another difference (which, as I recall, mattered for us) is that class objects in Ruby take up quite a bit of permanent space in memory.


Ruby is actually pretty good in this regard. If you define your module or class anonymously, but give it a name using a constant, Ruby will GC it when possible. The standard way of defining modules and classes obviously means they can never be GCed.

Java doesn't, or at least didn't, collect anonymous classes without an additional GC flag being enabled, which could bite you quite hard using JRuby with gems which made heavy use of anonymous classes.


In practice, modules and classes are not defined that way - likely not in your own application, and certainly not in all the gems you depend on, or in the Rails framework itself. That entire set of transitive dependencies can take up a lot of memory.


The class definitions will be a tiny fraction of memory usage. In a template Rails app, only about 20% of memory usage is managed by Ruby. The rest is the VM, C libraries, maybe long strings, etc. Definitely not class definitions pulled in from gems.


Yeah we struggled to keep our memory usage under 500M per worker. Sidekiq helped because it was able to use threads effectively, but especially when we were running resque workers, we could only handle tens of tasks per second, and often fell behind. I think Rails' autoload-the-world philosophy was a big part of the problem and we spent some time trying to untangle dependencies, but it was swimming against the stream.

I'm not sure if these same problems do or don't come up with Phoenix, but when I briefly used it, it did seem to have a smaller memory footprint.


500MB per worker is totally standard. What happens is a job causes a huge array or hash to be allocated, and after it's finished the memory can't be returned to the OS due to heap fragmentation. Java does some crazy stuff with compaction. C programs typically try to allocate internally into arenas to avoid it.


The thing is we're not even using Rails; it's a simple Sinatra app with ActiveRecord, so there's not much being loaded that's not being used. Could be that ActiveRecord itself is the problem, though.


Tbh, 200ms for the db is massive. I have a 100ms limit at which my MongoDB writes into the slow log. It's an almost empty log. My point being, apps can have all sorts of bottlenecks, and you only have to throw the right brain at the problem to fix it; no need to replace the tech stack.

Yes, gems. The ecosystem in Ruby is superb. I've started working with Node a lot more these days and it's a) confusing and b) small packages for all those things whatevertheycallitthesedaysscript is missing. 243 gems in my 7yr old Ruby app vs 918 node modules in an app we started this month. What's Elixir/Erlang like in this respect?


even for "old databases" (mysql, postgresql) 200ms is huge or to it differently say probably his database is not the bottleneck. He probably fetches big lists, and even lists with over 100 entries are extremly slow on python/rails, whatever. on java/c++/go/rust these things are way faster.


The database doesn't need to be an issue; it just currently is. We have a fairly complex query on a several-terabyte PostgreSQL database and didn't want to spend more money. Basically, we just haven't sharded it yet, and don't want to pay for a larger machine. The point is: typically the issue isn't the web app. Almost all web frameworks are designed to be scalable from the start. Just launch another instance.

Also, we do about 99 writes for every 1 read, so query times don't matter as much. For the given app I was using as an example.

Network I/O and large databases are usually the bottlenecks from what I've seen consulting and developing.


This was probably true 10 years ago, but these days it's basically FUD. Since then Ruby improved performance something like 5-80x from 1.8 to 2.5 and moved from an interpreted language to a VM nearly as fast as LuaJIT. Go only has 3-5x the throughput for tasks like: grab 100 rows from PostgreSQL, serialize to JSON, respond via HTTP. Lots of the hot code in Ruby around fetching rows from the DB, serializing to JSON, and HTTP parsing have also since been implemented as native functions in C, massively improving performance with zero effort required by the developer.


yeah this is cool, until you make a single map...


Do you mean: "take a few thousand rows and map them to a different data structure"? Because I benchmarked that recently and mapping 16,000 rows of GPS points using haversine distance in pure Ruby takes about 12ms, and about 5ms in Go. It's not that much slower. There are tasks where Ruby can be ~100x slower than Go, but a simple map isn't one of them.


MySQL and PostgreSQL are considered “old” now? I’d guess that’s much like Rails being considered “old” and “legacy.”


Well, there is always the question of money. If running on Phoenix would let you save $100/month, it's one thing; if it's $100k/month, things might look a bit different; and when it's $1M/month, that will change the equation even more. (This is not going into areas where Ruby is not a usable option, such as massive real-time apps.)


> Waiting 200ms for the database + an additional 20ms for rendering

Those are pretty awful numbers. https://blog.gigaspaces.com/amazon-found-every-100ms-of-late...


One of my favorite perks of Phoenix vs Rails is no wasted time trying to figure out where the heck a method/function came from. Is it from the parent class? One of the included modules? Or perhaps not defined anywhere at all, with method_missing magic? Going from that to explicit imports is refreshing.
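A tiny sketch of what that explicitness looks like (Calculator is a hypothetical module):

    defmodule Invoices do
      import Calculator, only: [total: 1]   # explicit and greppable

      def print(invoice) do
        # total/1 can only have come from Calculator, by construction
        IO.puts("Total: #{total(invoice)}")
      end
    end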


(note: I use both Elixir & Ruby, for different reasons)

You can most of the time rely on object.method(:blank?).source_location to quickly determine where a specific method is defined.

More useful debugging tips can be found here:

https://www.schneems.com/2016/01/25/ruby-debugging-magic-che...


If you use Pry with Rails, you can also call binding.pry before the method in question, then type show-method [methodname] in the Pry console.


That is a real productivity booster. I also enjoy the clarity and explicitness present in the framework.


One reason not to use Elixir is the absence of vectors/arrays. Yes, you can import them from Erlang, but they ain't pretty. Elixirists try to pretend that lists and tuples are all you need, but Elixir's lists are Lispy lists, not the Python variety. It's one of those small-print things you only discover after spending time with Elixir, but it can be a deal-breaker. Ask on the Elixir lists and you'll get some very defensive responses which basically add up to vectors/arrays being difficult to optimise in a dynamic functional language. String processing is also not quite as straightforward as in Ruby and Python due to how Erlang/Elixir uses binary representation.


You've brought this up before. [0]

[0] https://news.ycombinator.com/item?id=12013088


Is there a limit to the number of times one can state the same opinion on HN?


Can you explain a bit more what you mean? What advantages do "Python variety" lists give you over what Elixir does with lists and tuples?


Python's lists are really arrays/vectors, as in Ruby, i.e. well-optimised for index-based access. Erlang/Elixir lists are linked lists, which are optimised for prepends rather than index-based access, though you can achieve the latter with Enum.at(list, index). With small lists/arrays the difference isn't noticeable, but it will affect performance with large lists.
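A sketch of the difference, including the Erlang fallback the grandparent mentions:

    list = Enum.to_list(1..100_000)
    Enum.at(list, 99_999)          # O(n): walks the whole linked list

    arr = :array.from_list(list)   # Erlang's :array module
    :array.get(99_999, arr)        # O(log n): tree-backed, but a clunkier API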


If we're being honest, Python lists aren't arrays either; they are arrays of pointers which need to be unboxed before using. If you really, really need array performance, then drop down to C, or use a language like Julia. Lists give you the ability to quickly pass these structures around as immutable objects, which in turn you want because of safety guarantees in concurrent environments.


The lack of true arrays or vectors hasn't been an issue for me across a couple of domains. Of course maps can always be used as index-based lookups for mid-sized problems (similar to Lua's combined array/map type).

What problems have you been facing where it’s an issue? It’d be nice to know to avoid said problems upfront.


Arrays and vectors are different. I’d suggest adhering to C-like conventions of data structures.


I never bought into the pure functional programming hype. Most programs are made up of many functions and as the program gets more complex, the code paths keep getting longer... If you force everything to always be passed by value and returned by value (never by reference) it's clear that the costs of constantly cloning all these objects would quickly add up.

The problem with pure functional programming is that it prevents the developer from writing well optimized code.

State change side effects might be dangerous, but they're also a really good way to boost performance and sometimes they're totally worth it.


> If you force everything to always be passed by value and returned by value (never by reference) it's clear that the costs of constantly cloning all these objects would quickly add up.

That’s not what happens. See: http://erlang.org/pipermail/erlang-questions/2013-March/0727...

“Pass by value does NOT imply copying and never has; copying is only required for mutable data structures, and Erlang hasn't any.”


See Rich Hickey's description of structural sharing in Clojure. Persistent data structures are not simple copies.
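The same idea is visible in Elixir, even though the sharing itself is hidden:

    map1 = %{a: 1, b: 2}
    map2 = Map.put(map1, :c, 3)   # new map that shares :a and :b with map1
    # map1 is untouched, and no deep copy happened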


After many projects with Rails and one with Phoenix, these are my remarks on the list in the post:

* The directory structure. Rails has a simpler one. Phoenix has some weirdness (why are the migrations in priv/db? Are they private while all the other modules are not?)

* The naming conventions. About the same.

* The database migrations. About the same, but with Phoenix we have to duplicate the schema definition in the model, which is not DRY at all and encourages bugs. Pick either the ActiveRecord way (the truth in the db) or the Django way (the truth in the model).

* The use of dependencies. About the same.

* The ActiveRecord features. AR is much easier to use than Ecto, which is more general but often unnecessarily so. Proof: there are modules on top of Ecto to make it look like AR [1] [2]. Personally I won't use plain Ecto in an Elixir project of my own.

* The ERb templates. About the same.

* The form helpers. I really don't know: my Phoenix project was a backend to a SPA, we generated JSON plus some email templates.

* The built-in support for testing. About the same.

Advantages of Phoenix:

* no need for Sidekiq, just spawn processes and send emails and the like (see the sketch after this list)

* create some GenServers to run long-running processes side by side with the main web application

* the websocket server is an example of the previous point and it performs better than the one in Rails.

* Elixir's pattern matching is so good to use compared to any language without it (Ruby, Python, etc.)
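The sketch promised above, for the first point (MyApp.TaskSupervisor is assumed to be started in the application's supervision tree; Mailer is hypothetical):

    # Fire-and-forget background work without a job queue: a crash here
    # is isolated to this task and won't take down the request process.
    Task.Supervisor.start_child(MyApp.TaskSupervisor, fn ->
      Mailer.send_welcome_email(user)
    end)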

Advantages of Rails:

* ActiveRecord is so much easier to use that it translates into visible productivity gains (of course it's also 10 years of Rails vs 4 months of Phoenix). What it means for the long-term maintainability of the application is up to you to decide. In my experience the impact is zero, because none of my projects was meant to scale to millions of users or to a complex architecture, and none did. Your project might be different.

* The object-oriented notation is more compact than the functional one: object.method.method.method vs value |> function |> function |> function, maybe with some Module.function thrown in to make it even more verbose.

[1] https://github.com/sheharyarn/ecto_rut

[2] https://github.com/MishaConway/ecto_shortcuts


'priv' is an Erlang/OTP application's directory for files that are part of an application but not source or compiled beams.


When I go back to Rails, Ecto is something I miss a lot. Once I got used to "thinking in Ecto" and its explicitness and composability, I fell in love with it. Ecto also makes it much harder to shoot yourself in the foot.

For anyone that wants to learn more about Ecto I recommend this talk: https://youtu.be/YQxopjai0CU


I've worked with Django, Flask, Rails, Express, and Go. It gets to the point where it's not "which framework is the best" but "which framework is the best for our company". Having interviewed at a ton of start-ups recently, it's been overwhelmingly Node, because it's super easy to hire for and onboard, with the benefits of dynamic programming and Node's ecosystem. If your company has a great hiring pipeline or is super attractive to candidates, you could do something more niche like Erlang, Scala, Elixir, Go, etc., to more closely match technical needs.

Every time I see "x is better than y" I just chuckle and slowly step away.


For someone who hasn't learnt Rails or any other backend framework and has only briefly dabbled with Node.js, is it better to learn Rails first or should I learn Elixir/Phoenix directly? Thanks.


Do you prefer functional programming? Do you like to "know how everything works?" i.e., you prefer explicitness over magic? Are you OK with documentation that isn't always correct?

Try Phoenix.

Are you a beginner? Do you want to learn a more marketable skill? Do you just want to get stuff done as quickly as possible as soon as possible? Do you prefer OOP?

Try Rails.

I use both in production. Rails is awesome. It does everything, is performant enough for nearly every use case, and has some of the best documentation ever. You really can't go wrong with it.

I switched to Elixir and Phoenix mostly because I prefer functional languages. Honestly, if Elixir weren't functional I wouldn't be using Phoenix over Rails.

I also felt - after some years - constrained by the style of modularization Rails enforces - which, to be fair, is a benefit when you're just starting and is suitable for nearly every project. But, if you're an experienced developer, it might bother you in larger projects that you can't break up your code the "right" way. It's a minor detail for some people, and a major pain for others. ymmv.

tl;dr - use Rails unless you're willing to trade initial productivity for specific and possibly not very important language preferences.


Probably Rails, since most web frameworks are inspired by features from Rails.

Then you can build the missing "parts" in Elixir for Phoenix.


Downvoting this makes no sense. Rails is an excellent backend framework to learn because it's been around long enough for people to know what it does well and what it doesn't, and chances are you won't be big enough to hit the Rails scaling ceiling (it's way higher than most people think).


I think it depends a bit on the purpose.

If your goal is to learn a new framework and get a job, I believe Rails would be the best option, since there is a huge number of jobs available for Rails developers. Of course you can still get a job with Elixir and Phoenix, but there are not as many compared to Rails.

If your goal is to learn a new web framework and maybe use it to play with personal projects, then both Rails and Phoenix would be great options.

Keep in mind that you can also learn both frameworks and benefit even more.


Thank you all for the great replies. I was leaning towards learning Elixir/Phoenix but I also felt it's probably the 'new shiny thing' factor attracting me. This kind of validates it. So, I will probably learn rails and keep an eye out for Phoenix.


Most of the comments seem to answer what I was going to ask after reading the post: "what about the gems?"

My question now would be "does Phoenix have something like devise and carrierwave gems?" Please link if so.



Thank you.


For authentication I like: https://github.com/riverrun/phauxth


Strange list, to be honest. The fundamental reasons to choose Phoenix would be: 1) unparalleled capabilities for real-time features; 2) the BEAM VM and all its capabilities, available via Elixir's more conventional syntax compared to Erlang's.


Real time? Not really. There's no time bound of any sort offered by any part of the Erlang ecosystem. What they mean is "quick enough", for some definition of quick and enough.


There’s various types of real-time systems: https://en.m.wikipedia.org/wiki/Real-time_computing#Criteria...

You’re talking about hard real-time. Erlang, as it says on the official site, targets soft real-time.


I know Erlang says that and I have always been a bit mystified by it.

Definition of soft real time: "the usefulness of a result degrades after its deadline, thereby degrading the system's quality of service".

Well, duh. That's pretty useless. Any system in production is soft real-time by that definition.

There's really nothing special about Erlang that makes it amenable to "pretty quick" responses. It is not as if admission and rate control are baked into BEAM. If you don't pay attention to your messaging architecture, head-of-line blocking will kill you.

I know that they claim that their sharded-GC design helps with shorter pauses, but there's no real evidence to back the imputation that other GC designs have really held back the industry; consider the large number of sites that have been implemented in Java/Python/Ruby. I have put Java/Scala/Go/Erlang systems in production, and rarely have I ever had to worry about GC tuning.


> Any system in production is soft real-time by that definition.

Most systems don't fit in that definition because they don't have deadlines in the first place. A site like Hacker News is not soft real-time; the faster it loads the better, but there's not a set number of seconds after which a user would give up.

A video streaming service, or most API with response time guarantees, would be soft real-time systems. There is a deadline, but it doesn't need to be respected 100% of the time, as users can live with a few dropped frames or out-of-spec responses.

> There's really nothing special about Erlang that makes it amenable to "pretty quick" responses.

It's not about "pretty quick" but about predictable, reliable response times. The key feature is the preemptive scheduling. Processes can only block a given scheduler for a very short time; all functions in the language that can take a long time to execute are built to yield to the scheduler periodically. So you don't get a slow request because some other request decided to turn a huge map into a string, or run an expensive loop, or block waiting for I/O, or run a GC, etc.

This is of course a tradeoff between throughput and latency, because all the yielding and checking comes at a cost. Go is a bit in the middle ground, it also uses lightweight processes, but does not yield as much so a hot loop can block for some time (but also runs faster as a result).
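A toy illustration of that guarantee:

    # The hot loop below cannot starve the second process: the VM
    # preempts each process after a budget of reductions.
    spawn(fn -> Enum.reduce(1..1_000_000_000, 0, &+/2) end)
    spawn(fn -> IO.puts("still scheduled promptly") end)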


> A video streaming service, or most API with response time guarantees ...

Preemption, the design of Erlang's GC etc. don't contribute to predictability or reliability any more than other frameworks.

Consider a system written in Erlang, and another written using a very different architecture, say Python/C++ (YouTube).

In both cases, Python and Erlang are basically used for orchestration; the action happens in the systems layer below.

In both cases they use non-blocking I/O underneath, and some sort of adaptive bitrate streaming if not enough bytes are going through within the time bound. There is nothing that is particular to the Erlang system that monitors bitrate and does something about it. In both cases, the soft real time guarantee has to be accounted for explicitly; you don't get it in any shape or form from the Python or Erlang architecture. The only 'guarantee' you get is the promise of using the underlying API in the most sensible way possible, and to push out bytes with as little overhead as possible.

What you get in both cases, is convenience. As a side note, here's BBC's Kamaelia framework (http://www.kamaelia.org/Home.html) written in Python. (I'm not sure if they still use it though).


What's missing from your thought experiment is concurrency. Imagine there's 100k people connected to the server. If you make the system in Erlang, you can handle each connection in a separate process, which individually monitors its own bitrate and fetches or sends data with synchronous calls. This makes the code very straightforward.

The VM ensures each process won't block the scheduler for more than 1ms or so, so that processes that have short but time critical operations to do (e.g. push a video frame down a socket) get a chance to run quickly, no matter what other processes are doing and whether it's I/O or CPU bound.

Kamaelia, gevent, Node.js, etc. do not provide that guarantee. OS threads do, and there are good frameworks based on them (e.g. Celluloid), but they don't scale to more than a few thousands.
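A rough sketch of that per-connection style (all helpers are hypothetical placeholders):

    defmodule StreamSession do
      # One process per viewer, each running this plain sequential loop.
      def serve(socket, bitrate) do
        chunk = fetch_chunk(bitrate)
        :ok = send_chunk(socket, chunk)
        serve(socket, adjust(socket, bitrate))   # tail call, loops forever
      end

      defp fetch_chunk(_bitrate), do: <<>>       # placeholders so the
      defp send_chunk(_socket, _chunk), do: :ok  # sketch compiles
      defp adjust(_socket, bitrate), do: bitrate
    end

    # for s <- viewers, do: spawn(fn -> StreamSession.serve(s, :hd) end)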


That's a fair point.


There is something very special about the Erlang VM's fully preemptive scheduling.


But is it really?

I'm sure those who know Erlang well will correct me on this, but if it is doing IO of any sort, it is limited by the number of IO operations (POSIX/db calls) under the covers. There can be at most as many as the number of kernel threads created by the VM, which is typically equal to the number of processors.


Scheduler threads use async IO internally, they context switch to another process while the first one is blocked waiting for the IO to complete. When async is not possible, they dispatch operations to a thread pool.


In the context of web apps, real time generally means things like chat, voice, etc., not hard real-time as in an RTOS.


Seems kind of circular to say, "Use Phoenix instead of Rails because Phoenix is similar to Rails." Which is pretty much points 1, 2 and 3 of this list of 5.


After 10 yrs of Rails I have some high expectations, and I'm missing information about Elixir's incarnation of RubyGems. Is there anything like RSpec?


I've been using Rails since 2005, used RSpec all the time (and still do), and I must say the built-in testing framework (ExUnit) is "good enough" for me. I haven't felt the need to use something that would mimic RSpec more.


There is an RSpec clone called ESpec https://github.com/antonmi/espec.

I've used it in my own applications; however, I would probably not recommend using it, as it relies on a feature of the language (tuple modules) that might be removed in the next major version.

Elixir ships with a unit testing library called ExUnit, the documentation for which can be found here. https://hexdocs.pm/ex_unit/ExUnit.html
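For reference, a minimal ExUnit test looks like this:

    ExUnit.start()

    defmodule TruthTest do
      use ExUnit.Case, async: true   # tests in this module run concurrently

      test "the truth" do
        assert 1 + 2 == 3
      end
    end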


I've been programming Ruby and using RSpec for quite some time, but quite honestly I wouldn't use it for a new greenfield Ruby project tomorrow. I've consistently found that it's too difficult to teach and for a team to use effectively. I'd likely stick with MiniTest. Similarly, as I wade into Elixir, I'm sticking with ExUnit. It's simple, concise, and gets the job done.


Hex (the package manager) is pretty great along with the rest of Elixir's core ecosystem (ExUnit, hexdocs, Mix, etc).

In Ruby-land I use RSpec almost exclusively, but in Elixir I stick to ExUnit and it's great. I feel like there's not as much of a need for something RSpec-like in Elixir-land due to it being functional.


the package manager is here: https://hex.pm

With Elixir having a lot of former and present Rails devs in the ecosystem, you can usually find "equivalent" packages ("gems")..

say search for rspec https://hex.pm/packages?_utf8=&search=rspec&sort=recent_down...


"Author of http://www.phoenixforrailsdevelopers.com" All of this looks like a sales pitch to me. No doubt that elixir / erlang are an extremely useful and brilliant technologies but the article is simply bollocks measuring apples and oranges.


As a technology gathers hype around it, it is normal to see people try to make a quick buck out of it through books or online courses.

I'd recommend interested developers to go read Phoenix' own website and form their own opinion.



