Let’s Talk Concurrency: Panel with Sir Tony Hoare, Joe Armstrong, Carl Hewitt (erlang-solutions.com)
250 points by mpweiher on Feb 20, 2019 | 113 comments



In a wide-ranging discussion, there were some fundamental disagreements among the panelists as follows:

I disagreed with Tony Hoare about using synchronous communication as the primitive because it is too slow for both IoT and many-core chips. Instead, the primitive for communication should be asynchronous sending and receiving, from which more complex protocols can be constructed.

Also, I disagreed with Tony about sequential actions (using ";") as being foundational. Instead, concurrent actions are foundational for digital systems as follows:

    * Receipt of a communication activated sending other communications
    * An Actor received one communication before it received another communication
Consequently, a computation is a partial order of causality. Tony and I did agree that tooling is needed for navigating the partial order. We just disagreed about whether sequential actions (using ";") are foundational.

Furthermore, class hierarchies are not a suitable foundation for Scalable Intelligent Systems. Interfaces instead of subclassing should be used for IoT communication. Also, entities and descriptions in large ontologies do not fit in an object class hierarchy, e.g., Java and C++. Subclassing is not secure because it allows a subclass to impersonate a superclass.

I disagreed with Joe Armstrong about requiring use of external mailboxes because they are inefficient in both space and time. Instead of requiring an external mailbox for each Actor, buffering/reordering/scheduling should be performed inside an Actor as required.

Of course, Tony and Joe made other great points with which we agree entirely. See the following for more information:

  http://web.stanford.edu/class/ee380/Abstracts/190123.html


One thing I can't see the actor model explaining is how to scale an actor vertically, which obviously has its purpose. Amdahl's law and all that.

However, these days we see people build actor frameworks using CSP (golang) and "restricted shared memory" in Rust (http://actix.rs).

Isn't it ironic? Yet it shows that the actor model can be inclusive of many "competing" paradigms, and gain from them, perhaps even need them to scale vertically?


This is a very good question.

Citadels are larger-scale Actors which can be incorporated into other Citadels, where a Citadel is an Actor for a system of Actors (perhaps including IoT).

See the following:

   http://web.stanford.edu/class/ee380/Abstracts/190123.html


Amdahl's law imposes no performance limitation on a system which does not have a sequential part. Having a sequential part is a bad idea because it is a single point of failure.


The Actor model sounds really great, and while I haven't used Erlang in a serious context, I have used Scala Akka Actors extensively and find them very difficult to reason about. Actors might be great if you have a huge system over the network that you need to manage, but I don't find them that great for single-computer programs.

For example, I tried to implement an Akka BitTorrent client and found it very difficult and confusing. You end up with a big graph of actors in which most of the messages sent are related to keeping track of state and actors.

Moreover, actors are untyped, so it's not obvious what messages they accept or send. Without a lot of documentation and discipline, the code becomes extremely unreadable very quickly. Also, bugs are very hard to find and very easy to create.

Maybe it's less elegant, but for a lot of problems, just using select or epoll is a lot easier.


It's helpful to keep in mind some parallels between actor systems and inheritance hierarchies.

- The tendency is to start extremely elaborate and become much more minimalistic with experience.

- The educational materials do too good a job at explaining advanced techniques without warning about how important it is to keep things simple in practice.

- A complex actor system or inheritance hierarchy should both reflect AND help tame complexity in the problem you're solving. If it doesn't meet both of these criteria, it should be simplified.

- There is no semantic reason that any relationship in a problem domain needs to be modeled by inheritance or actors if doing so is painful to program. The technique should only be applied where it helps.

I think (not from personal experience, but from what I read) that the big exception to the above is when you're using actors to optimize latency or throughput on a big multi-core machine. Then you will find yourself taking on complexity for the sake of maximum performance. I think educational materials often present techniques from these difficult, carefully-built systems as if they're helpful for writing simple, small services. This is also a parallel to inheritance, if you take a step back: high-performance data processing systems are the canonical problem for this era, just like complex native GUI programming was the canonical problem in the heyday of big inheritance hierarchies. Lessons from those domains can be misleading when applied to simpler problems.


I worked on a hierarchical, distributed control system written in Erlang, and I found the actor model well suited to that problem domain.

We had to control various devices based on sensor data, but we also needed to make sure that the workload was distributed across the devices in an efficient way. To implement that, we had one actor per device that was solely responsible for adjusting the device output based on sensor data. Then we had another actor that was responsible for looking at the behavior of a cluster of devices and distributing the load between them. This "load distribution" actor wouldn't control the devices directly, but instead would send adjustments to the actors directly controlling the device.
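For concreteness, here is a rough, untested sketch of that shape using plain Erlang processes (module and message names are invented for illustration; a real system would use gen_server and supervisors):

    %% One process per device adjusts its own output from sensor data;
    %% a separate load-distribution process only sends target adjustments.
    -module(device_ctrl).
    -export([start_device/1, start_balancer/1]).

    start_device(DeviceId) ->
        spawn(fun() -> device_loop(DeviceId, 0) end).

    device_loop(DeviceId, Output) ->
        receive
            {sensor, Reading} ->
                %% adjust our own output based on local sensor data
                device_loop(DeviceId, adjust(Output, Reading));
            {set_target, Target} ->
                %% adjustment sent by the load-distribution actor
                device_loop(DeviceId, Target)
        end.

    adjust(Output, Reading) ->
        Output + (Reading - Output) div 2.   % placeholder control law

    start_balancer(DevicePids) ->
        spawn(fun() -> balance_loop(DevicePids) end).

    balance_loop(DevicePids) ->
        receive
            {cluster_load, Total} ->
                Share = Total div length(DevicePids),
                [D ! {set_target, Share} || D <- DevicePids],
                balance_loop(DevicePids)
        end.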

I find it very natural to anthropomorphize actors. That is, if you were to think about how you would organize a group of people to do the work of the system (in a very Taylorist sort of way) the roles of those people map neatly onto the actors one would create.

One common pattern is the manager/supervisor/worker pattern[0]. I see this as mapping fairly neatly onto the project manager/people manager/employee structure that you see in many organizations.

[0] https://zxq9.com/archives/1311


I think what most people want to know is whether they could re-write a multi-threaded program that is currently using synchronization primitives (locks, semaphores, mutexes, etc) with actors.

Like, show me the producer-consumer pattern using actors.


I mean, that's a pretty standard actor pattern..

https://github.com/foobatman/producer-consumer-akka-actor/tr...

There are many more examples out there. Anything that a multi-threaded program can do, an actor can do..

I just think that actors are inherently more complex than threads for simple problems. For complex problems, I think actors are probably a better choice.

A good example of a simple program is managing multiple TCP connections. At least in my (somewhat limited) experience with actors, polling TCP file descriptors is easier to manage than creating an actor for each one.


> I think what most people want to know is whether they could re-write a multi-threaded program that is currently using synchronization primitives (locks, semaphores, mutexes, etc) with actors.

Really, it's not obvious? Of course you can: instead of shared memory, each actor now has its own memory; instead of using locks and working on shared memory, you now have messages and can ask actors to do something with their memory and send you the result back.
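As a minimal, untested Erlang sketch of what that looks like for the producer-consumer case (names invented), with the consumer's mailbox standing in for the shared buffer:

    -module(prodcons).
    -export([start/0]).

    %% The consumer's mailbox plays the role of the shared buffer,
    %% so there is no lock or condition variable anywhere.
    start() ->
        Consumer = spawn(fun() -> consume() end),
        spawn(fun() -> produce(Consumer, 1) end),
        ok.

    produce(Consumer, N) when N =< 10 ->
        Consumer ! {item, N},
        produce(Consumer, N + 1);
    produce(Consumer, _N) ->
        Consumer ! done.

    consume() ->
        receive
            {item, N} ->
                io:format("consumed ~p~n", [N]),
                consume();
            done ->
                ok
        end.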


You just have to think a bit differently about your synchronization primitives. You don't have locks; you have a message queue, and that is your synchronization primitive. A resource that should only be accessed by one thread is abstracted away using an actor, and the queue guarantees that only one process at a time has access, as the actor can only process one message at a time.

You can implement locks using actors if you really want to. An actor represents the state of the lock: it stores which actor locked it, responds only to the unlock message from that actor, and ignores lock and unlock messages from all the other actors until it is unlocked.
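A tiny, untested Erlang sketch of that idea (names invented); note that with a selective receive, lock requests from other processes simply wait in the mailbox rather than being dropped:

    -module(lock_actor).
    -export([start/0, acquire/1, release/1]).

    start() ->
        spawn(fun() -> unlocked() end).

    unlocked() ->
        receive
            {lock, From} ->
                From ! {locked, self()},
                locked(From)
        end.

    locked(Owner) ->
        receive
            {unlock, Owner} ->              % only the owner's unlock matches
                Owner ! {unlocked, self()},
                unlocked()
            %% lock requests from other processes stay in the mailbox
            %% until the current owner releases the lock
        end.

    acquire(Lock) ->
        Lock ! {lock, self()},
        receive {locked, Lock} -> ok end.

    release(Lock) ->
        Lock ! {unlock, self()},
        receive {unlocked, Lock} -> ok end.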



"You end up with a big graph of actors" - you are almost certainly doing something wrong, the actor graph shouldn't need to be "big", and the supervisor hierarchy almost never deeper than 2-3 levels. A typical mistake when doing actor-based system design is to think that actors are replacements for modules or classes (or taken to the extreme: function calls), which is not the case.


Massive Inconsistency Robust Ontologies will have trillions of Actors on a many-core computer.

Please see the YouTube video here:

  http://web.stanford.edu/class/ee380/Abstracts/190123.html


Didn't watch the video, but my understanding of actors is that you can have trillions of instances of your actors without having a large number of actually different actors. For example, let's say you have a server that handles requests. You implement the request handler as an actor, and for every request you spawn an instance of that actor to handle that request.
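Roughly like this, as an untested Erlang sketch with invented names:

    -module(per_request).
    -export([listen/0]).

    %% One long-lived listener process; each incoming request gets its own
    %% short-lived handler process (an instance of the same "actor").
    listen() ->
        receive
            {request, From, Payload} ->
                spawn(fun() -> handle(From, Payload) end),
                listen()
        end.

    handle(From, Payload) ->
        From ! {reply, Payload}.   % echo; real handling would go here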


There will be zillions of different kinds of Actor in a massive inconsistency robust ontology, each with a different implementation although constructed on a common system.


According to Carl Hewitt, the actor model is based on physics (especially quantum physics). Doesn't this then imply that the natural scope of such a system entails huge numbers of actors, interacting in a probabilistic way?

So in order to understand our systems with huge graphs of actors, we should attempt to develop something akin to the heat equation for actors. Trying to understand the behaviour of every individual actor in the context of the whole system is just too much.


I will admit that I do not have much experience creating actor systems. But the actor model is inherently complex. To be fair, that's because it's trying to solve a complex problem. But unless you actually need thousands of concurrent processes, often you can do without it.


I played with erlang at uni, and now I'm working with scala, and I find the latter much messier than the former


Scala and Akka are... less simple. It's also not like the actor model takes over the whole paradigm. Actors are largely a runtime concern. You're still using modules and functions. The JVM is also less suited to implementing an Actor model. A lot of the BEAM's power comes from the preemptive scheduler.


That (the preemptive scheduler) and OTP/gen_server, which covers about 90% of the cases Erlang should be used for anyway. These are the foundations of Erlang-land; sadly, neither of them has an equivalent in Akka or on the JVM.


I have for a long time been evangelising Concurrent ML. It generalises the actor model, and for me it has always hit the sweet spot between "do everything yourself" and "one inflexible way or the highway".


IIRC, it tries to "generalize" CSP, or rather just make it usable, because in its default form it isn't at all. CSP is not as general as the actor model. But you can't generalize the actor model into a more general concurrency model; unbounded nondeterminism is as far as we can go.


Actually, the Actor Model generalizes Concurrent ML because messages and types are Actors.

For example, if anAccount:Account then the following

     anAccount.deposit[$5]
is defined as follows:

     Account.send[anAccount, deposit[$5]]



I used to drink a lot of kool-aid from erlang. It is true that they have got the concurrency model right. They embraced actor model, messages between processes are copied over for a good reason, etc. However, I don't think it is enough to call it a day. Comparing to modern languages, erlang lacks a lot. Elixir is trying to fill those gaps, but how many layers of abstraction can you add to an ecosystem before it is unusable? With go and rust available, erlang/elixir looks like a very good tool but for a very limited pool of use cases - routing/filtering/messaging.


I fundamentally disagree with your premise, despite seeing how you came to that conclusion. Elixir/Erlang are particularly optimized for the operations you're speaking about, but Elixir is very Lisp-ey under the hood. Macros are a game changer. Combine that with a strong standard library (much of which is delegated down to Erlang calls anyway), the pure developer joy that comes from coding in Elixir in both the small and the large, and the increased debuggability from having highly readable, functional code.

But the real power comes from the BEAM. Turns out modern servers map very strongly to phone switches of the past, and the distributed system primitives given by the BEAM keep on ticking, 30 years later. Modeling a web server as a single process per request, the supervision model, and the power of preemptive scheduling is something I don't see in other languages, at least as explicitly. Preemptive scheduling is really a wonderful thing, and I don't think Go or Rust provide this. Please correct me if I'm wrong. This is to say nothing of the observability, hot code reloads, or any of the more fundamental parts of the BEAM that you wind up needing in practice.

I'll be frank, I think Go is an unnecessarily verbose language. I don't like reading it, and any time I've had to write it, I have not enjoyed it. I find Go's concurrency model worse than Erlang's despite being similar at first glance. GenServers are a much better abstraction to me than goroutines and friends. If it weren't for Rob Pike and the marketing of Google, I don't think it would be nearly as popular as it is. The type system from Rust is great, and the borrow checker is a fantastic addition to type systems, especially in that class of language, but I have no use for Rust in my daily life. It is on my short list of languages to become more familiar with, though.


"Modeling a web server as a single process per request, the supervision model, and the power of preemptive scheduling is something I don't see in other languages, at least as explicitly."

That's how most production websites of the past 20 years have been built, but these services are pushed up to the OS level rather than the language level. Apache, PHP, CGI, and everything built on that ecosystem used a process-per-request model. The OS provided preemptive scheduling. If you were doing anything in production you'd use a tool like supervisord or monit to automatically monitor the health & liveness of your server process and restart it if it crashes. The OS process model restricts most crashes to just the one request, anyway.

There was a time in the early-mid 2000s when this model gave way to event-driven (epoll, libevent, etc.) servers and more complicated threading models like SEDA, but the need for much of that disappeared with NPTL and the O(1) scheduler for Linux, though process-creation overhead still discourages some people from using this model. Many Java servers are quite happy using a thread-per-request or thread-pool model with no shared state between threads, though, which is semantically identical but with better efficiency and weaker security/reliability guarantees.

Now, there continues to be a big debate over whether the OS or the programming language is the proper place for concurrency & isolation. That's not going to be resolved anytime soon, and I've flipped back and forth on it a few times. The OS can generate better security & robustness guarantees because you know that different processes do not share memory; the language can often be more efficient because it operates at a finer granularity than the page and has more knowledge about the invariants in the program itself. One of the interesting things about BEAM (and to a lesser extent, the JVM) is that it duplicates a lot of services that are traditionally provided by the OS or independent programs running within the OS. In some ways this is a good thing (batteries included!), but in other ways it can be frustratingly limited.


I think you're right that this will flip back and forth; but the key difference in my mind between the process per request model of Apache and friends, and the process per connection model of Erlang is that in Erlang, I can do a million connections/processes per machine, and that would be very unfeasible with Apache.

Both approaches _do_ give me a very straightforward programming environment for isolated processes, although the isolation guarantees are smaller in Erlang. I'd like to think it's easier to break the isolation for cross process communication with Erlang, but that's probably debatable.

In my mind, the Erlang model is validated by the Apache model, but it adds scale in a way that doesn't require a mental flip to event-driven programming (although, beam itself is certainly handling IO through event loops with kqueue or epoll or what have you underneath).


"I can do a million connections/processes per machine, and that would be very unfeasible with Apache."

It's somewhat less infeasible now than it was in the early 2000s. The main barriers to C1M with an OS process per connection are:

1. Stack size. With 8M stacks 1M processes would take up 8 TB of RAM.

2. Process creation overhead - loading the executable into memory, setting up global context, opening sockets.

3. Context-switching overhead: swapping page tables, TLB flushes, saving registers, etc.

For #1, recent versions of Linux will happily let you create threads or processes with 4K stacks now. They also don't actually allocate the memory for the whole process; they just map pages, and the page fault is what assigns a physical page to a virtual address, so if you never touch a memory location it doesn't exist in RAM. For #2, new processes get COWed from their parent and can inherit file descriptors as well, so all the read-only data (executables, static data, MMapped files, etc.) is essentially free. #3 is a legitimate reason why language-based solutions are faster (they don't have to flush the whole TLB on context-switch, and know exactly which registers they're using), but mostly affects speed rather than concurrent connections.


"in Erlang, I can do a million connections/processes per machine, and that would be very unfeasible with Apache."

Very niche use case, and even more so in the context of serving HTTP requests, where the JVM/Go/C#/Rust and even nodejs will smoke erlang because it can't compete in raw performance.


One reason why I occasionally look in on DragonflyBSD is that its implementation of lightweight kernel threads seems like a compelling approach to addressing some of those trade-offs.


Go is not fully preemptive, but in practice it usually is (it preempts at function calls): https://github.com/golang/go/issues/10958.


> If it weren't for Rob Pike and the marketing of Google, I don't think it would be nearly as popular as it is.

This is only true in part. The fact is that Go is a supremely accessible language (warts and all) that helps you get things done.


Case in point, Go has its roots in Oberon-2 and Limbo, and we all know how they both went.

Had Go been created while its authors were working elsewhere, it wouldn't have taken off like that.

Naturally one can use Dart as a counterexample, but it only failed because the Chrome and Angular teams weren't willing to keep pushing it forward.


Well, looks like Dart is getting back into the game thanks to Flutter


I don't have any high hopes for Flutter until I see it on https://developer.android.com/about

Fuchsia is currently also exploring other UI stacks, as per available commits.

Currently it looks like internal teams competition.


"true in part ..."

Sure, it would not get the initial visibility. But we would not keep using it if it was an ineffective language.


Actually we would.

For example, even though I am not a big fan of Go, I have to use it when customers require us to deal with Docker or K8S.

Just like I have my issues with C, but would certainly use it when writing a UNIX driver.

Programming languages are products, and get used because of the eco-systems they carry along, bullet point features are usually secondary to that.


Well, I guess we diverge in our views (besides affection for Go) in that I see the adoption of Go for Docker (initial release 2013) and K8S (2015) as merit-based choices. Go was made public in 2009.

> Programming languages are products, and get used because of the eco-systems they carry along, bullet point features are usually secondary to that.

Non-sequitur.


K8S was initially developed in Java; the decision to switch to Go came later, and they are still fighting the language, including having to maintain their own generics workaround.

https://fosdem.org/2019/schedule/event/kubernetesclusterfuck...


That says far more about the K8S development team than about the Go language.


It's absolutely relevant to the point that "we wouldn't keep using it if it wasn't an effective language" (modulo any disagreements about what "effective" means!). Many languages are heavily used due to network effects (popularity, marketing, community) and platform effects, not solely on technical merit. JavaScript and C come immediately to mind as examples of the platform effect on language selection. (The fact that modern JS transpilers exist merely papers over JS' dominant footing in the Web space.)


I wrote a thing about the economics of programming languages a while back:

https://www.welton.it/articles/programming_language_economic...

And it absolutely does make sense to view them as products in order to understand their uptake, or why they don't become popular.


You know, the Ford Edsel was a "product" as well.

I maintain that it is a non-sequitur, if not patronizing, to state the obvious facts about software language eco-systems. My perception remains that Go sufficiently delighted a critical mass of developers, who then proceeded to create said eco-system. Mere marketing cannot engender a vibrant community.


I don't think those things are obvious to a lot of people.

> a critical mass of developers

How do you reach a critical mass of developers without something that looks like "marketing"?


Please see my first post in this thread. As mentioned, I do agree that sans Rob Pike, Ken Thompson, and the Google host, the language would have likely languished in semi-obscurity. But if it were an entirely flea-ridden dog, no amount of marketing would have afforded it the mind share that it possesses.


Sure, you have to have something that is reasonably high quality for marketing to work, for many products.


Your article puts it very clearly, I enjoyed reading it.


Yeah, I could have omitted that line, but I do still think there's truth in it. If it weren't for a large company writing a ton of tooling in it (Kubernetes in particular), I think adoption would be significantly lower. It would be nonzero, and I don't mean to suggest it would be zero, but it would not be in the "top popularity class", in my opinion, were it not to have that marketing arm behind it. I also think it's more optimized for Google's developers (read: huge army of disparate technical levels) than for small/medium or even some larger shops. It's great that Kubernetes can be written in it, and that's a point in favor of it. But that doesn't make it a great language.


What marketing? The only time I hear that is from people mad at Go being popular and not liking it; I've never seen marketing from Google toward Go. The language is popular because it's powerful and yet very simple to onboard, and the standard library and documentation are good. That's why it's popular, not because of Google.

You mention Kubernetes, but forgot all the other widely used projects that are not from Google: Docker, Grafana, etcd, all the HashiCorp tools (Terraform, Packer, Consul, ...), Prometheus, InfluxDB, Hugo, CockroachDB, etc.


Rust offers no specific scheduling; only type system affordances for describing important properties around parallelism and concurrency. The standard library gives an API for the OS’s threads, and soon, an API for defining cooperative tasks. Being a low-level language, you can implement any sort of primitives you want. There’s an actor library built on top of said tasks, for example.


The thing is, the BEAM model doesn't have a bright future because it can be replaced by Kubernetes, which is language-neutral; almost all the features BEAM provides are better done in k8s (HA, deployment, etc.). As for hot code reload, I've never seen why you would need that since you can use blue / green or canary / rolling deployment, the only reason I see is to keep some state in your app, which I think is a terrible idea.

Two others things:

- deploying an Erlang/Elixir app is difficult (even with distillery...)

- Erlang is slow, much much slower than Go


> As for hot code reload, I've never seen why you would need that since you can use blue / green or canary / rolling deployment, the only reason I see is to keep some state in your app, which I think is a terrible idea.

Most applications at least have connection state, at the least a TCP connection. It is at minimum disruptive to disconnect a million clients and have them reconnect. Certainly, your service environment needs to be able to handle this anyway [1] in case of node failure, but if you do a rolling restart of frontends, many active clients will have to reconnect multiple times which adds load to your servers as well as your clients. Actually disconnecting users cleanly takes time too, so a full restart deploy will take a lot longer than a hot code reload, unless you have enough spare capacity to deploy a full new system, and move users in one step, and then kill the old system.

Certainly, hot loading can introduce more failure modes, but most of those are things you already need to think about in a distributed system -- just not usually within a single node; ex: what happens if a call from the old version hits the new version.

[1] There are some techniques to provide TCP handling, but I'd be surprised to hear if anyone is using them at a large scale.


It depends on what you mean by state; I was talking about internal state in the application. Your example is about network state like websockets, not REST APIs (what 99.9% of people use). Even with that, it's easy to roll out new connections with canary deployment, and with a load balancer in front you replace old instances with new ones with no disruption and can drain your old instances. Even if the connection is cut, your client logic should have a proper reconnection mechanism.

Hot code reload is imo a bad practice and should be avoided.


Hot code reload is imo an enabling practice, and should be done everywhere possible. Restart to reload may be useful or practically required for some deployments, and it's sort of a test of cold start, but it's so disruptive and time consuming. I've done deploys both ways, and time to remove host from load balancer, wait for server to drain, then add back is time I won't get back. You can do a lot more deploys in a day when the deploy takes seconds; which means you can deploy smaller changes, and confirm as you go.


If it's disruptive and time consuming it means you're not using the right process/tools. If your CI/CD pipeline is properly set up (and it's actually easy to do) you don't have to do anything.

https://kubernetes.io/docs/tutorials/kubernetes-basics/updat...

That's the power of Kubernetes, and since it's very popular the community and tooling are great. Good luck replicating that with BEAM.


I'm not sure you can totally replace the lightweight BEAM processes with k8s equivalents. Sure, if throwing more resources at the problem to scale horizontally is not a top concern for you, then it probably doesn't matter much. But BEAM does make things much more efficient and less costly in general.

Also, message-passing and the actor model is not a particular design focus of k8s compared to BEAM.


Have you tried edeliver? It makes use of distillery and I find it easy to deploy with. I guess it all boils down to your server architecture, but you should give it a try someday.

https://github.com/edeliver/edeliver


> Comparing to modern languages, erlang lacks a lot.

The "a lot" part is emphasized. It has to be something massive? Classes? Objects?

By the same token, one can say most other languages lack a lot as well. That "lot" would be fault tolerance. Notice how most operating systems today use isolated processes instead of sharing memory like Windows 3 or DOS did. There is a reason for that. When the word processing application crashed, it would take down the calculator and media player with it. So modern operating systems have isolated concurrency units. And so do languages built on the BEAM VM.

And of course you could still spawn OS processes and run a language that uses shared memory between its concurrency units (threads, goroutines, callback chains). But you can't spawn too many. Or even worse, everything has to run in a container, so now you have a container framework as another layer. And the question then becomes "how many layers of abstraction can you add to an ecosystem before it is unusable?" ;-)

> Elixir is trying to fill those gaps, but how many layers of abstraction

Elixir is not built on top of Erlang the language. It compiles to the same BEAM VM bytecode. But the intermediary representation layers between the running bytecode and the language syntax were already there. They didn't add another layer on "top", so to speak.

> They embraced actor model,

Don't think so. The creators at the time had no idea about the actor model. They embraced fault tolerance, concurrency and low response time most of all.


> That "lot" would be fault tolerance. Notice how most operating systems today use isolated processes instead of sharing memory like Windows 3 or DOS did.

Rust has full memory safety for concurrent code (if you restrict yourself to the Safe Rust subset), unlike other commonly-used languages such as C/C++, Go, JVM/CLR-hosted languages etc. This provides a far more general model of concurrency than something like the Actor model; it can also express other patterns very naturally, such as immutability (like Haskell), channels/CSP (like Go), plain old synchronized access etc. Of course Actors can be expressed too, as can isolated processes ala Erlang (that's what exclusive mutability is for!) but you're never restricted by the model.


Good point. Rust does have good memory safety in regard to concurrency and checks it at compile time. I'd say it's probably the most exciting development that happened in programming languages in the last 10 years or even more.

But I also think Rust has a steeper learning curve and is too low level for many cases. It wants the user to think long and hard about lifetimes, memory layout, ownership, whether to use threads (can you spawn 1M threads easily?), whether a crashed thread can be supervised and restarted, or whether to use futures (promises?) instead. Those are useful things to think about and might make sense when writing a storage layer, a fast proxy, or a network driver, but that's too many low-level choices to think about when, say, I want to add an item to a shopping cart.


> This provides a far more general model of concurrency than something like the Actor model

It doesn't. Actor model is as general as it gets for concurrency. Any constraint only makes it less general.


> Elixir is not built on top of Erlang the language. It compiles to the same BEAM VM bytecode.

Actually it is, at a semantic level. Elixir source is reduced to Erlang's abstract syntax tree format, which is represented in Erlang terms. The same Erlang compiler is used to generate BEAM code for both languages. This isn't just a detail - the ramifications of using Erlang semantics permeate the language. But not to the language's detriment in any way.


You're right. I thought it went straight to the core representation like LFE does, but I checked and see it translates everything to an Erlang AST. It's just that the Erlang AST allows for representations that are not necessarily valid in Erlang (variables don't have to start with an uppercase letter, rebinding, etc.).


The Actor Model is a formalization of what needs to be done for IoT and many-core computers. The ideas were circulating widely before work began on Erlang even if the engineers did not read the literature.


Thank you for clarifying.

I do like how the Actor Model and Erlang work arrived at the same concepts independently. It seems like an extra validation of the concept.


The problem is that Erlang and its BEAM-based descendants still seem to be the only languages that actually do get concurrency completely right. Lots of languages have some form of an actor model, sure (whether as libraries or - like Pony - baked into the language), but all of them seem to rely either on OS threads per process (which are obscenely heavy) or green threads / coroutines (which lack preemption) (if you happen to know of any other languages/runtimes that offer lightweight preemptive concurrent actors/processes, let me know).

Until that happens, BEAM is unfortunately a hard dependency on getting the concurrency model fully "right".


> Comparing to modern languages, erlang lacks a lot.

Like?

> With go and rust available,

Yeah, tell me when those “modern languages” will be distributed, stop using stop-the-world GC, and contain at least 1% of the introspection Erlang has.


Rust, at least, doesn't use GC. But yeah, introspection and distribution are something BEAM does super well. I don't know of a better platform for that.


(Actually a meta-reply to several comments.) Erlang got a lot right a long time before anyone else, but when that happens, it is a strong temptation to assume that everything about that early success is fundamental to the solution and anyone lacking it can't possibly succeed. But you don't really have the evidence for that, because for a long time, you had only one data point, and you can't draw lines (let alone n-dimensional hyperplanes) through one point meaningfully.

The evidence says that modern people mostly don't care about distribution, don't care about stop-the-world GC in a very large percent of the cases (especially as stop-the-world time is getting shorter and shorter), and don't need Erlang-style introspection very often. I know how useful the latter in particular is, because I also used Erlang for a long time and I used it a lot. But again, what happens, especially when you've got the only solution, is that the one solution selected early gets worked into the system very deeply and looks very fundamental, but that doesn't mean that it's the only viable way to do it. So when running an Erlang system, I needed a lot of introspection to keep it running. But when I run non-Erlang systems, I do not simply collapse into a heap and wail that I don't have introspection and watch helplessly as the system collapses around me... I solve the problem in other ways. Entire communities of people are also solving the problem, and sharing their solutions, and refining them as they go as well, just as Erlang did with their solutions.

The Erlang community has basically been complaining for the whole nearly 10 years of Go's 1.0+ existence that it doesn't have every single Erlang feature, but it was never about having every single feature that Erlang has. Erlang is a local optimum, and while I think it's a very interesting and educational one (and I mean that, quite sincerely and with great respect; anyone designing a new language today ought to at least look at Erlang and anyone designing a language with a concurrency story ought to study it for sure), I'm not even remotely convinced it's a global optimum. To get to any other optimum does mean that you'll have to take some steps back down the hill, but if all you look at are the steps down the Erlang slope but not the steps up some other slope, you won't get the whole story.

(I would call out the type system in particular as something deeply sub-optimal. I understand where it came from. I understand why someone in 1986, with 33 fewer collective years of experience than we have now, would say that the easiest way to have total isolation is to use total immutability, and the simplest way to ensure types can't go out of sync is to not have types. But it crunked up the language to have immutability within a process (that is, "A = 1, A = 2" could have totally worked like any other language without breaking Erlang itself; have separate operators for what = does today and an operator for "unconditional match&bind" and everything works fine), when all it needed was to ensure that messages can't contain references/pointers but can only copy concrete values. And it doesn't solve the problem of preventing things from going out of sync to simply not have types, because you still have types in all of the bad ways (if two nodes have two different implementations of {dict, ...}, you still lose in all the same ways), you just don't get types in any of the good ways. It was a superb, excellent stab at the problem in 1986, and again, I mean that with great respect for the achievement. But in the cold harsh world of engineering reality, it is one of the huge openings for later languages to exploit, and they are. There are others, too; this is in my opinion one of the biggest, but not the only one.)
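For anyone who hasn't run into it, the single-assignment behaviour being discussed looks like this in the Erlang shell:

    1> A = 1.
    1
    2> A = 2.
    ** exception error: no match of right hand side value 2
    3> A = 1.    % matching the same value again is fine
    1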


The type system is far from the static type system we get in many other languages nowadays. Though I would say from a productivity/code maintenance perspective I haven't found it to be a problem yet. It's very hard to introduce bugs in a functional language unlike in many other dynamic languages. If you mean efficiency and dynamic typing being a hindrance to AOT then yeah this is one big sore point of Erlang, I agree, though in the majority of cases the system can still function well even with this hindrance.

Syntax-wise, Elixir already allows you to reassign a variable, so the "A = 1, A = 2" example you mentioned is moot from the developer-productivity point of view (though I understand that it's still valid from an efficiency standpoint, since Elixir actually just creates new variables with different suffixes under the hood).


>don't care about stop-the-world GC in a very large percent of the cases

Stop-the-world GC is a pain to deal with in tons of domains. Games, communications, control, various system-level tools. Just because a lot of web developers don't care about those doesn't mean the domains are small.


One of the most balanced, insightful and respectful critiques I've read on the topic. Brightened up my day reading it. Thanks.

I've stated previously I'm an Erlang fan, for several of the reasons you've highlighted. I similarly don't believe it's a "global maximum".

Perhaps the most saddening observation is the number of languages that have come after Erlang - intended for server-type loads - that haven't learned from and built on its strengths in concurrency and fault tolerance.

I remember a separate discussion between Joe Armstrong and Alan Kay[0] where Kay posed the wonderful question (paraphrasing): "what comes next that builds on Erlang?"

That's a tempting prospect. My personal wish list would include 1st class dataflow semantics and a contemporary static type system and compiler that's as practically useful as Elm's today.

The key point is to build on what's been proven to work well, not throw it away and have to re-learn all the mistakes from scratch again.

[0] https://www.youtube.com/watch?v=fhOHn9TClXY


Rust doesn't use GC unless you explicitly use a garbage collector library.

Go's stop-the-world pauses are usually sub-millisecond for heaps measured in the tens or hundreds of gigabytes.


Actually, from my dabblings with Erlang I never felt the need to deal with Elixir; then again, I was a big Prolog fan during university days (yes, I do know that the resemblance is only superficial).


Not to be contrary, but I'd call it more than superficial: the inventors of Erlang were using Prolog before they created Erlang. Also, IIRC the first version of what would become Erlang was actually a Prolog variant.


Erlang lacks holes in the region of mutual exclusion of an Actor, making it very difficult to implement things like a readers/writer scheduler for a database.

See the following:

http://web.stanford.edu/class/ee380/Abstracts/190123.html


Mutual exclusion: https://en.m.wikipedia.org/wiki/Mutual_exclusion

Are you saying that the Erlang VM enforces mutual exclusion within its Actors, and that mutual exclusion should not be enforced within Actors?


Every Actor has a region of mutual exclusion.

However, the region of mutual exclusion can have holes so that

    * activities can be suspended and later resumed
    * other activities can use the region of mutual exclusion while a message is being processed by another Actor
For example, a readers/writer scheduler for a database must be processing multiple activities concurrently, which is very difficult to implement in Erlang.


It is like complaining that Prolog got logic programming right but that compared to "modern languages" (whatever your own definition of that is: Rust? JS? Haskell? Swift?) it lacks a lot. Prolog, like Erlang, was never designed to be a general-purpose language.


> Comparing to modern languages, erlang lacks a lot.

Could you elaborate on that? What exactly does erlang as a language (tooling/libraries/frameworks aside) lack in your opinion? What do those modern language have that erlang doesn't?


Pony looks like an interesting option but the ecosystem is tiny at the moment


go and rust are for a different situation. While they both have good concurrency capabilities (from using go, from reading about rust), neither has OTP. Erlang isn't just a language; it's basically a distributed operating system (nodes on the same or different computers). It's been a while since I've used go, and I haven't used rust for anything beyond small toy stuff to get a feel for it, but does either have, built in, the distribution capability of erlang?


> erlang/elixir looks like a very good tool but for a very limited pool of use cases - routing/filtering/messaging.

wasn't that the original goal for erlang?


Many of the systems I build do require routing/filtering/messaging and I have yet to find a more pleasant environment to work in than Erlang. I can agree that the ecosystem is a bit lacking if you want to build a quick web application, but out of curiosity, what are you referring to when you say that Erlang lacks a lot compared to modern languages?


How do you feel about Phoenix (and, by extension, Elixir) as something that might scratch the 'quick web application' itch?

As a web developer, I've been quite happy learning Elixir/Phoenix in the past year, and learning Erlang has been on my list, so I'm very interested in hearing from people with more Erlang experience when it comes to 'web stuff'.


I must admit that I have not (yet) played with Elixir. It looks very exciting though!

I don't do much "web stuff" with Erlang. The closest I go is probably HTTP-based control interfaces for other services (routing, validate input, do something in the system and return a response).

I usually turn to Python and Django when I want to create web stuff. Generic views, the forms API, DRF, admin, and an ORM that integrates well with all of the aforementioned are a godsend if you just want to get something online quickly.

I still have an old Django project that I wrote for a customer back in 2007. Besides upgrading Django two times a year and updating the UI a little, it has been running without problems ever since. And I still find my way around the code instantly.

I have yet to try something that comes close to being as convenient to work with.


I love Phoenix, but creating forms in that framework feels needlessly complex.

The Formex library is outdated and doesn't seem to work on Phoenix 1.4, but it makes creating forms so much easier.

With the Phoenix form library you have to add code in several files (schema, context, controller, view, and template). I can never remember all of it correctly, so I have to refer to the pragprog textbook all the time. Formex requires less.

I don't believe this is some sort of deal breaker, but it is important enough. There are tons of web apps that require forms, CRUD operations, and admin-like stuff to insert data via the web.


I don't see the point here. Elixir doesn't actually fix all that much. And other modern languages still haven't managed to get the concurrency model right. So the answer is we need new languages, not Go and Rust - with high-performance actor-model runtimes, AOT compilers, and, while we are at it, answers to the modern security problems of speculative CPUs, 3rd-party packages, etc.


I wonder what you see that Elixir doesn't fix. Syntax-wise, Elixir definitely makes writing the code a much much more pleasant and enjoyable experience. And I would say the package ecosystem has been great and really promising so far. I guess the biggest complaint against BEAM would still be its performance and lack of easy AOT compilation, as you wrote there. However, the dynamic nature of Erlang/Elixir makes the development process much easier and more rapid in many cases. Also, compared with most of the dynamic languages that I've used so far, it's amazingly easy to maintain and bug-free. Of course you can always pine for the one perfect language, but PL design is always a process of making tradeoffs, and IMO Erlang/Elixir already satisfies the vast majority of use cases while being extremely enjoyable to work with. There are also some attempts to develop statically typed languages targeting the BEAM though the message passing between actors makes it not that easy.


While I do like Rust, many of its improvements are already present in Ada, and Rust still lacks something like SPARK.

Naturally since Ada is tainted by its history, other approaches more interesting to the common crowd are needed.


I don't think Ada has anything like the Rust borrow-checker! A feature-set comparable to SPARK will need to wait until a better characterization of Rust Unsafe code is achieved, but in the long term it is absolutely a goal to be able to write Rust code that carries "contracts" for its use of unsafe features, and/or "proofs" that the code meets some specification.


Actually SPARK is going to have an ownership model as well.

https://fosdem.org/2019/schedule/event/ada_pointers/


>how many layers of abstraction can you add to an ecosystem before it is unusable?

Isn't this even more true when speaking about serverless and all the surrounding technologies?


Ecosystem and various BEAM details aside, erlang has tons of great stuff, but it is often difficult to use, which limits it hugely and makes it feel "old", etc.

Simplifying, modernizing, and exposing all of the tracing and debugging and releasing and other goodies inside of BEAM would be huge.


Do they all talk at the same time?


No, they optimize the talk by filling the other speakers' pauses.


Organizers gave each a single fork for dinner.


Concurrency is not parallelism, friend.


This guy. Fun at parties. ;)

A good point, but most would think of the two as the same. I know I had a hard time getting some ideas across on some research I was doing until I separated these two concepts out. Then people started to understand.


Aren't concurrency and parallelism orthogonal?


> Aren't concurrency and parallelism orthogonal?

As in they're independent of each other?

The answer would be no. Concurrency doesn't require parallelism, but parallelism requires concurrency.

The second answer to this quora link explains it way better than I can. https://www.quora.com/What-is-the-difference-between-concurr...


Personally, a blog post by yosefk, the greatest CPP basher of all time [0], cleared the differences up for me: https://yosefk.com/blog/parallelism-and-concurrency-need-dif...

[0] https://wiki.c2.com/?CppBashing


> parallelism requires concurrency

I don't think it does. For example in a SIMD system there is parallelism, but no concurrency. Operations are being performed in parallel, but there is only a single program (SI..) and no concurrent tasks.


I think you have to add "joint" parallelism as a definition for two separate cores that collaborate on the same data at the same time - that requires concurrent memory/disk access, which Erlang can't provide, BTW.


Or the same single threaded executable running as two processes on different cores/cpus - there is parallelism but no concurrency within any of the processes?


Still, different pieces of hardware execute the same instruction on different data. Think of this as data concurrency if you like.


Well, here is someone suggesting that they are orthogonal:

http://composition.al/blog/2014/11/24/yet-another-blog-post-...


Not if they happen at the same time.


"concurrency includes parallelism as a special case." -Carl Hewitt http://lambda-the-ultimate.org/node/5231?from=200&comments_p...


They try, but then each stops, waits a random interval, then speaks again.


Is this happening over a dinner?



