Hacker News
My favorite Erlang Program (2013) (joearms.github.io)
189 points by Tomte on Aug 31, 2016 | 38 comments



What I found interesting (or very, very boring, depending on how you think about it) about this when I first saw it is that this ability is just due to the nature of how receive works. That is, you can receive any kind of message, at any time. Want to listen to a different set of messages? Just evaluate a different receive expression, be that in a different function, or inline.
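For context, this is (roughly, quoting from the linked post) Joe's universal server: a process that waits to be told what to become, then becomes it.

```erlang
%% A server that can become anything: it waits for {become, F}
%% and then tail-calls F, replacing its own behavior.
universal_server() ->
    receive
        {become, F} ->
            F()
    end.

%% One possible thing to become: a factorial server.
factorial_server() ->
    receive
        {From, N} ->
            From ! factorial(N),
            factorial_server()
    end.

factorial(0) -> 1;
factorial(N) -> N * factorial(N-1).

%% Spawn a universal server, turn it into a factorial server, use it.
test() ->
    Pid = spawn(fun universal_server/0),
    Pid ! {become, fun factorial_server/0},
    Pid ! {self(), 50},
    receive X -> X end.
```

Note that nothing here is special syntax: `become` is just an atom in a message, and the "becoming" is an ordinary tail call.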

I remember having a coworker who was in love with Akka. He was trying to sell everyone on it, despite my team's having a strong Erlang presence. He showed us Akka's "become", talking about how powerful that could be, and how Erlang didn't have anything comparable. He didn't seem to realize that Akka's needing a special keyword for it was actually a limitation rather than a feature, that Erlang didn't have a special keyword to allow you to do it because it didn't -need- a special keyword to allow you to do it. That's not a dig at Akka, or Scala (it's an impressive language in its own right), but rather to point out how sometimes we miss the complex capabilities a simple set of abstractions can give us.


Re: Akka, to paraphrase James Gosling's comments on Java "We brought a bunch of Java programmers halfway to Erlang".


FYI that wasn't Gosling, that was Guy Steele: http://people.csail.mit.edu/gregs/ll1-discuss-archive-html/m...


My mistake! And in retrospect, Guy Steele saying that makes way more sense.


halfway is a very optimistic interpretation :)


So was the original comment about bringing C++ programmers halfway to Lisp :)


very true


You don't need to use "become" in Akka - you can just call a function blindly if you want. It's useful to have a distinct concept of "become", because it's useful to not have arbitrary invisible behaviour changes everywhere, but rather demarcate which parts of the system do what. (I mean, just imagine trying to debug a system based on this universal server).


How does that work? I've never seen an example, and I don't see how you could do it cleanly without growing the stack, given the JVM's constraints.

Also to add onto yetihehe's comment, it's -really freaking easy- to debug such a server as Joe illustrates, or anything else that behaves in such a way (changing behavior by calling another function, to eventually wind up in a different receive statement), in Erlang. It's effectively an FSM determining what kind of messages you handle, the stack traces are clean (TCO means any time you get an exception, it's only in the current 'state'); the only complexities are the fact you're building an FSM, i.e., making sure your transitions are right, and what to do about out of order messages (i.e., a message that doesn't match the current state). There's nothing specific about the Erlang code that makes it harder to debug than the Akka code; in fact, just the fact it's pure Erlang makes it easier; your stack traces are going to be just your own code, no libraries muddying them up.
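To illustrate the FSM point with a hypothetical example (the names here are mine, not from the article): each state is just a function with its own receive, and tail calls between the functions are the state transitions.

```erlang
%% Hypothetical two-state lock FSM. Each state is a function with its
%% own receive; tail calls are the transitions, so the stack never grows
%% and a crash's stack trace only shows the current state.
locked(Code) ->
    receive
        {try_code, Code} ->           % matches only the bound, correct Code
            open(Code);
        {try_code, _Wrong} ->
            locked(Code);             % wrong code: stay locked
        _OutOfOrder ->
            locked(Code)              % message meant for another state: drop it
    end.

open(Code) ->
    receive
        close -> locked(Code)
    end.
```

The two complexities the comment mentions show up directly: you have to get the transitions right (which function you tail-call), and you have to decide what to do with messages that don't match the current state (here, the `_OutOfOrder` clause just drops them).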

As for demarcation...it's not really any different. That is, Scala's

  def func: Receive = {
    //cases
  }

  def receive = {
    case "foo" => context.become(func)
  }
is equivalent to Erlang's

  func() ->
    receive
      % cases
    end.

  loop() ->
    receive
      "foo" -> func()
    end.
But you'll note that in Joe's example he even uses the atom 'become' to indicate the transition is happening, i.e., receive {become, F} -> F() end. It's totally equivalent. I'm not saying Scala is less clean, or what have you, just that Akka felt they had to create a new keyword, for some reason, rather than leave it up to the user to implement. Exactly why, or whether it's equivalent to calling a function, I don't know (having never seen an example of the latter, nor commentary on its characteristics re: call stack); my point was simply that it's a natural extension of the language fundamentals in Erlang, but Akka felt the need to create a new keyword just for it, and that people often miss that needed complexity can arise out of a solid set of fundamentals.


I really tried to follow here but I just don't get it. Having a "become" concept in the language/framework seems pretty arbitrary. The whole "become" example in erlang is just a language demo. You would usually have subtle variations of this same code with various different intents and it would be too obscure to define each separately. For example some "become"-s would have context state, or they will be temporary, or for the purpose of pooling the underlying process/actor data structure. Just "become" is not an appropriate name or abstraction in most real world cases.


Oh, agreed, the original example Joe has is just a language demo. But the ability to change one set of receives for another (that is, "this process listens for messages of type A, and in this one case, changes to start listening for messages of type B") can be useful, and both Erlang and Scala can do it just fine. Again, my point was just that something you can do out of the box in Erlang required a special keyword in Scala, which led someone familiar with Scala to think you could not do it in Erlang. I posted on someone else's comment in this thread with a real-world example I had, where I had to change what my actor was listening for.


Not a problem; in stack traces you can see where it was called and which line causes errors. You can also dynamically trace this, and even trace only those servers which become what you want. I've debugged systems which behave similarly; it's easy once you learn you can write "debugging servers" which are as simple as this "become server".


Is he aware of how Akka came into existence :)?


We've been experimenting with Elixir at work, and it's been really cool to drop down into the Erlang ecosystem and see how people think when they're designing OTP programs, especially when it comes to things like Riak and Mnesia and those sorts of things. My only complaints about this ecosystem are that everything feels scattered in specific places (I've had to read a lot of books, versus being able to read tutorials) and that it's very hard to tell an abandoned project from a feature-complete project on GitHub. You can generally look at downloads in the Hex package manager for a rough idea of popularity, but then again build servers can inflate unpopular packages. Someone in the Elixir slack joked to me "75 stars? That's super popular for Erlang!". I can deal with it, but I wish it weren't so.

All that said, I think I'm going to stick around with these languages. Everything feels very well designed and battle ready, in stark contrast to using JS. Elixir also seems to remove a lot of the cruft and awkward syntax of Erlang, and having Lisp-style macros makes the GenServer boilerplate go away entirely.


Elixir did make Beam/OTP more accessible and more "fun" :)


And not to diminish Jose's work, but a lot of it was a coat of paint -- read the Kernel module, where most of the language core is located: it's pretty much entirely macros. Most of the work in setting up the language, outside of the standard lib, was bootstrapping the macro processor. The better docs, build system, package manager, etc. were all just icing on the cake to make it a 21st-century language, with Phoenix as the cherry on top.


The thing that constantly gets me about Elixir and Phoenix is the default tends to be the correct way of doing it. Let me explain.

For example, the other day, I was looking at filtering a next parameter to remove the domain... however it's unnecessary, as redirect/2 in Phoenix will throw an error by default if passed a URL rather than a path. As an aside, different exceptions will return different HTTP error codes - for example, a database not-found exception from Ecto will cause a 404. There are hundreds or thousands of these little decisions, and each tends to be the correct/simpler one.

Elixir and Phoenix, for want of a better way of saying it, show great programming taste. So to say it's a coat of paint is a bit harsh; it's the right coat of paint, in my opinion.


> Elixir and Phoenix, for want of a better way of saying it, show great programming taste. So to say it's a coat of paint is a bit harsh; it's the right coat of paint, in my opinion.

Fair enough. On rereading my comment, it came off more brash than I intended. I completely agree with this statement.


I am in no way equating the 20-year effort by the Erlang team to that of the Elixir team. But I think Elixir is making the whole ecosystem much more accessible. I once tried to bring Erlang into our shop about 6-7 years ago but could not get the team excited about it; with Elixir we will be going into production this fall and everyone loves it.


For sure, I wasn't trying to suggest you were. I guess the point I was trying to make without actually saying any of these words is that it's really interesting that the same ecosystem with a few extra things had that effect. I always liked the idea of Erlang and I've read Joe's thesis and stuff, but never could be bothered to learn the language because it felt niche, and same as your coworkers, we're all really excited about Elixir.


How much does your team get into actual Erlang/OTP code? I'm wondering since I'm taking the long route, learning Erlang/OTP first and a distinct second for Elixir. Is this not optimal?


I think that's optimal, but for us it was more people getting into Elixir first and then starting to dig into Erlang/OTP. Basically Programming Elixir -> Programming Phoenix -> Elixir in Action -> Designing for Scalability with Erlang/OTP


See also: Little Elixir and OTP Guidebook. Currently reading that book, it's good so far.


About to finish it myself, also very good! Elixir in Action's second part moved a bit too fast for me, but I'll revisit after I am done with Guidebook.


Fun is in the eye of the beholder. I'm worried that Elixir will take off because it's "hip" and ruin the chances for other people to use a pure and consistent language (Erlang) at work.

It is amazing what kind of middleware bloat is forced onto people these days.


One could argue that Erlang/OTP has had decades to be used at work, and no one wanted to (sadly). Elixir taking over doesn't affect Erlang, because Erlang wouldn't have become more popular over time anyway.


I personally am doing BEAM now mainly because of Elixir. It's undeniable that a "hip" and, let's face it, young crowd having chosen Erlang/BEAM gives it a big breath of fresh air (this is NOT to insult the experienced Erlang devs - it's just a fact that a new young crowd choosing your tech is an endorsement). Interestingly, though, I'm more and more intrigued by the underlying technology. Personally I am not a fan of some of Elixir's design choices (f.() versus f(), for example), and maybe, just maybe, I'll do Erlang now.

Overall though, it was a total blast the other day to just run 10 000 processes so easily, within milliseconds, and watch all my cores go to max without messing with threads or even the (somewhat obtuse) Go channels. Actors really are easy to reason about. I like this model so much that I'm even looking at it for C++ (the C++ Actor Framework).


I'd be interested to hear about your experience with the C++ Actor Framework. It seems like without the larger ecosystem of Erlang/BEAM for fault tolerance CAF is just a nice coroutine library.


One person's middleware bloat is another's power tool.


hip is one thing, but we primarily do web dev, so good Unicode support & Phoenix are big selling points.


If you haven't read the code for how Elixir handles Unicode support [0] I highly recommend doing so. At compile time it reads from several Unicode Character Database text files [1] and then defines a whole slew of functions using the contents of the files that take advantage of pattern matching via unquoting. I already knew all about pattern matching before I came across this but I was still absolutely floored by how awesome this is. In the past when working with C I've needed to do something similar but it required running an external tool whose output was a .c file. It worked but it was hardly elegant.

[0]: https://github.com/elixir-lang/elixir/blob/master/lib/elixir...

[1]: http://www.unicode.org/Public/8.0.0/ucd/


Yep very elegant solution


"It is no exaggeration to regard this as the most fundamental idea in programming: The evaluator, which determines the meaning of expressions in a programming language, is just another program." -- Abelson & Sussman, SICP.


What I find amusing about this is that raw receives are actually kind of frowned upon in the Erlang community, but the creator is still using them, and the language is still kind of beautiful and cool.

This isn't in any way to speak negatively about Joe Armstrong or Erlang, but more to show how cool the language is: even non-idiomatic code scales and is wonderful.


Slightly tangential, but I think it's more that not using gen_* in a supervisor hierarchy is frowned upon. Using raw receives within ~that~ is sometimes the right solution.

For instance, I had a case where I needed to serialize requests out to an external piece of hardware. Various user or system events would determine hardware control events that needed to be sent out, and then responses needed to be listened for to determine whether they were successful or not, with any error returned to the user. Essentially an asynchronous, but serial, process. I had a gen_server to synchronize access to the hardware.

Now, the way I implemented this was: with each call that came into the gen_server, I'd send a message via gen_tcp, and then listen for the response as a raw receive inside of the same call. The alternative, to stay in the gen_* structure, would have been to send and finish, and then handle the response in handle_info (since the TCP response comes back as a raw message). But that would have allowed more sends to occur between the initial send and getting a response back, which would break the serialization we needed to ensure, and would have lost the reference to who had sent the initial request (since I was reusing the socket, and the acknowledgements from the hardware didn't include any sort of session), making it impossible for me to respond properly to the original caller.

So with a raw receive in there, it effectively became like a 'become' server: at the top it was a gen_server that implemented a gen_tcp "send" server, and then after sending it temporarily took on the characteristics of a gen_tcp "listen" server via a raw receive (though obviously it was listening on the socket even before sending), before reverting back to the "send" server that waited for the next thing to send.

In fact, as I recall, while we were serializing our sends, there were certain types of events being broadcast by the hardware that we needed to drop even in the raw receive, so it was recursive. That is, our send server became a listen server, and stayed as a listen server until either we got the response we wanted, or a timeout was hit, at which point we'd revert back to the send server.
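A minimal sketch of that pattern, with all names and the wire protocol hypothetical (the comment doesn't give the real details), assuming the socket is in active mode so data arrives as {tcp, Sock, Data} messages:

```erlang
%% The gen_server serializes access to the hardware: each handle_call
%% sends one command and then blocks in a raw receive until the ack
%% arrives (or a timeout fires), so no other send can slip in between.
%% State here is just the reused socket.
handle_call({send, Cmd}, _From, Sock) ->
    ok = gen_tcp:send(Sock, Cmd),
    Reply = await_ack(Sock, 5000),
    {reply, Reply, Sock}.

%% Temporarily "become" a listen server: loop on the raw receive,
%% dropping broadcast events, until we see an ack/nak or time out.
await_ack(Sock, Timeout) ->
    receive
        {tcp, Sock, <<"ack">>}  -> ok;
        {tcp, Sock, <<"nak">>}  -> {error, nak};
        {tcp, Sock, _Broadcast} -> await_ack(Sock, Timeout)
    after Timeout ->
        {error, timeout}
    end.
```

The recursion in await_ack is the "stayed a listen server until we got the response or a timeout" part; returning from it is the reversion to the "send" server.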


This is exactly how we are using a raw `receive` in one of our production systems -- multiple asynchronous, but serial within, processes.

Thanks for sharing this. It gives me more confidence in what we have done!


What would this program look like translated to Haskell?


id

(I'm about two-thirds joking)



