The article doesn't mention [Vert.x](http://vertx.io/), another JVM library/framework (which also works outside the JVM, with JavaScript for example) that is loosely based on the actor model. And now with Kotlin coroutine support, they've managed to abstract away a lot of the Node.js-style callbacks that clutter up your code. You still need to be aware of what's happening with regard to callbacks and such, but it makes your code a lot nicer: http://vertx.io/docs/vertx-lang-kotlin-coroutines/kotlin/#_g...
Thanks for pointing this out. I've never heard of Vert.x, but I will check it out and see if it seems worth including when I revisit this chapter in the future.
It's getting there, but it's still not as concise as the Groovy support that's been around for 5 years or more[1]. Having said that, sweet sweet (real) static typing is a huge advantage.
The docs at http://vertx.io/docs/ are available in different editions depending on which language you use with Vert.x, of which there are seven available. Besides JavaScript and Kotlin, there's also:
I have coded one significantly large system using Akka. I think conceptually, actors are quite simple and powerful. But in practice with Akka, I hated the fact that I couldn't just set a breakpoint and follow a message all the way through the system via multiple F7s. Every time I got to a "send" call, I would need to find out which of the other 200 actors in the code actually dealt with that message, set another breakpoint, and keep going. Because all you have at runtime is an ActorRef, which could really be anything, and all the sends are asynchronous. This was perhaps in large part due to the fact that there was a lot of Akka 101 learning going on during early development, so people went a bit crazy actorifying everything. If in doubt, create a new actor. Because Erlang, resilience, telcos use it, supervisor hierarchies, self-healing, and a blog I read last week. Questioning it was a case of the emperor's new clothes. Interestingly, none of the big-ticket things that Akka was sold to the dev team on ever eventuated.
A while into the project, my team and I were responsible for delivering a subcomponent within the system. We made a deliberate point of using actors only at the very edges and entry points, where a bit of asynchronicity was required. Everything within the service boundary was just plain old Java. This was the easiest part of the system to understand and maintain, as attested by another team we handed the entire codebase over to. Guess which part had the least bugs? Was most extensible?
I don't know if akka has improved since then, but the debug workflow was such a departure from the normal way of doing things on the JVM it was enough to put me off and then steer clear of it in general. There are definitely other patterns and libraries I would reach for first.
I have come to realize that breakpoint-style debugging should be avoided, at least for people working on services. Prefer logging, or other variants like event history, state-change history, etc.
Makes it easier to jump to distributed systems that span multiple processes or machines, where you can’t just set a breakpoint. Also makes it easier to debug production systems, where you may have logs but can’t jump back in time to attach a debugger.
Is this a good time to mention concurrentlua, a reimplementation of Erlang's concurrency model in Lua, using coroutines to model processes? https://github.com/lefcha/concurrentlua
It has not seen a lot of development over the past ... couple of years, but it worked quite nicely when I last tried it.
Relating Pony to some of the languages mentioned, two that seem immediately comparable are Akka and Erlang.
"Akka’s receive operation defines a global message handler which doesn’t block on the receipt of no matching messages, and is instead only triggered when a matching message can be processed. It also will not leave a message in an actor’s mailbox if there is no matching pattern to handle the message. The message will simply be discarded and an event will be published to the system."
"[In Erlang,] receive statements have some notion of defining acceptable messages, usually based on patterns, conditionals or types. If a message is matched, corresponding code is evaluated, but otherwise the actor simply blocks until it gets a message that it knows how to handle."
Pony's message passing is syntactically similar to its methods, with a "be" (for "behavior") keyword introducing a handler rather than "fun". As a result, the message receive operation is global, like Akka, but the type system ensures that an actor cannot be sent a message it does not recognize. These are both different from Erlang, which has a local receive operation. An Erlang process, upon receiving an A message, can call another receive statement and block only looking for B messages. At that point, any incoming A messages will queue up. That has always seemed to me to be a good way to write deadlocks without realizing it.
Pony is otherwise similar to a typed Erlang that compiles to machine code and doesn't (currently) have process monitoring. [:-(]
There was some discussion about distributed Pony, but it has rather strong semantics for message passing, which would require a rather complex underlying protocol. [I'd rather have simple distributed messages and handle failures at the program level.]
[I only have experience with Erlang and Pony; I'm hoping my description of Akka is true from the discussion in the post.]
* Pony has two message passing options: copy or share (by reference). Both are safe; Erlang only copies.
* Pony is non-blocking only, there are no blocking calls in any stdlib function.
* Erlang supports no native threads, only green threads, so it favors IO over CPU; distributing computation is done via distributed nodes.
* Pony supports native threads, but no distributed actors on different hosts yet.
* Pony is massively faster, uses much less memory, and has better cleanup via a clever GC protocol.
I wrote this a little over a year ago, and honestly at the time I wasn't even aware of Pony. However, I am now and will likely revisit this chapter sometime this year as part of work to get this book to some finished state.
Not sure whether or not Pony will make it in, but I will definitely think about it more. It is really interesting to see a new actor-model language being developed. I also see that there have been a couple of papers written about Pony, which is a good sign. Definitely on my radar for the future, but I still need to learn more about it.
Personally I'm not too fond of Pony's take on the actor model. I believe it is core to the model that every actor is in total control of its own behavior, and that the only interaction is via messages that the receiver can interpret as it pleases. In Pony, it is the sender that directly commands what the receiver will do next (via function calls/behaviors).
My take is that you are looking at it the wrong way. Pony uses a single message queue for each actor (which is completely invisible at the code level) and typed messages (which syntactically look like functions).
Behaviors (the message handlers) have a lot of limits on them compared to functions (like no return values), so it really is message passing.
I'm not sure I really like the syntax that makes messages look like functions, though.
> Pony uses a single message queue for each actor (which is completely invisible at the code level) and typed messages (which syntactically look like functions).
I totally agree with this definition, but I believe it leads to awkward programming. The problem stems from forcing the message queue to be FIFO, which does not always make sense. I think Erlang-style selective receive is the correct abstraction, as it avoids having to maintain a separate queue in userland.
I understand why Pony didn't go down that road though, we have a lot more shared experience in type systems for function calling compared to message passing.
When I took Pony for a test drive, the language itself felt pretty solid; the main obstacle was the tiny standard library.
When Pony's standard library grows to a useful size, it's going to be huge. The type system is gorgeous, the documentation is pretty good, and the community is extremely friendly and helpful.
I wish I could target the browser too. Like my biggest problem is writing a front-end and backend in two languages. If I could do both in one, that would be such a killer feature.
From a quick skim of it I don't see any mention of the real difficulty I have encountered with the Actor model in practice: managing imbalanced actor workloads.
If you consider a simple chain of actors that do some work and then send on a result to a next actor that then does some more work:
A -> B -> C -> D
Suppose something (anything) happens that makes messages to B take slightly longer to process than the rate at which A is sending them. The result is that B's message inbox grows indefinitely. One of two things will happen, depending on the queuing model: either A will block on send because B refuses to accept another message, or B will simply run out of memory from too many pending messages.
This of course, is just the linear scenario. Consider a complex network where there may even be cycles (something D does could somehow indirectly result in a message coming back to A). You've essentially got a completely unstable, unpredictable system. The only way to avoid it is to ensure that every actor is massively over-resourced so that it can never fall behind on processing its messages.
Of course, the second stage of this is that you start designing scalable actor "pools" where more than one actor can service messages and you can scale the size of the pool up and down. But this is at the cost of adding even more dynamic, unpredictable behaviour, and to some extent you just lost the simplicity you were trying to gain by using Actors in the first place.
I'm curious to hear any thoughts from more experienced folks about how to handle these issues!
I don’t find that kind of fully async processing pipeline very idiomatic TBH, at least in Erlang - more typically, every request would be its own actor, and do blocking requests to other actors that own shared resources.
Thank you for the insightful comments. That link looks perfect (down to the point of having a diagram with a->b->c->d just like mine!).
The idea that each request being its own actor is more idiomatic is intriguing. Naively, though, does it really solve the problem? Don't you just end up with an overflow of actors instead of an overflow of one actor's inbox?
You can put a bound on the number of actors under a given supervisor.
That being said, the main reason an Erlang program will use an actor per request is not to prevent overload, it's for fault tolerance: in your a -> b -> c -> d example, a bug (e.g. an uncaught exception) triggered by one request in process c can cause all requests in flight to fail, while with a process per request only one is affected.
It does mean you may have thousands of actors rather than four or so, but that’s not a problem since Erlang is built and optimized for that kind of workload. On a platform where actors are OS threads, it may make sense to use a different approach.
Right, but in my scenario (and I appreciate perhaps this means I'm "doing it wrong"), the messages themselves have a fairly heavy payload (around 800 bytes). Given that, whether I'm spinning up an actor or queuing a message, if it can't be processed quickly it's consuming a non-trivial chunk of memory.
I wouldn't say you are doing anything wrong (or, at least I couldn't say that for certain without knowing more about your problem domain :) ).
However, I also don't think you have to worry about the message payloads being too large. In Erlang, messages are normally copied between processes. However, large binaries (>64 bytes) are put in a shared heap, and only references to them are copied between processes[0].
It is a non-trivial problem, but one common enough that I would suggest any Actor library worth its salt would include mechanisms to tackle it. (Here's Akka talking about back-pressure support in Streams, for example. https://doc.akka.io/docs/akka/2.5.11/stream/stream-flows-and...) Akka also provides infrastructure for actor pooling and routing.
I don't see the manifestation of this problem as forfeiting the point. You could on the opposite extreme write one monolithic application that does everything: web interface, application logic, and data storage. It's readily apparent that for most use-cases, the benefits of separating, say, your web and database software will more than outweigh the added complexity cost of running two applications and routing requests between them. (And we don't anymore balk at running pools of web servers in front of our databases.) Continue to chop your application into smaller segments and of course you'll see diminishing returns. The application just has to be complex enough to raise the expectation of benefit from segmentation.
I have been wondering about this past couple of days.
Does Kafka Streams make the actor model obsolete? What can you do with actors that you can't with Kafka Streams, if you were starting a project from the ground up?
They are different worlds: each actor would be analogous to an independent stream app, or to one stream app with lots of KStream/KTable "components", which feels like an anti-pattern.
I live and breathe this stuff. Started working with Elixir more recently but have worked with Akka for years. IMO the BEAM VM is a great piece of technology and is worth looking at.
If you click on "Chapters" it takes you to a directory listing of the chapters, click on a chapter link and it takes you to another directory listing of pages. Click on a page link and it takes you to an actual chapter.
It appears that the "book" still needs a fair amount of polishing to be navigable.
Yep, the book is very much far from completion. Feel free to submit issues on GitHub if you notice specific things, or just wait until later this year for all the navigation problems to be solved. :)
I'm firmly against these "X minutes to read" estimates because the amount of time it takes to read something depends heavily on many variables (on both the reader's side and the article's side). I would much rather have a "Short, Medium, Long, Novel" kind of scale, which is simply a scaled word count of some kind.
Author here. I don't actually think so. This is likely closer to 40 minutes of reading time, and I personally don't think it is the most difficult writing to read.
I haven't given myself the opportunity to sit down and play with an actor model. But in the meantime, despite reading literature such as this, I have trouble envisioning what it actually entails day-to-day.
I think of my experience with channel-based communication in Clojure, Go, and even Rust's mpsc. And every time I feel an instant feeling of debt because I know I'm just one or two more channels away from misunderstanding the execution of my own application. Having to take out pencil and paper to reason about why messages are bottlenecking in this component or why I was wrong when I thought I knew how my application worked.
For example, we ripped out all of our channels in our Go application and replaced them with a couple mutexes.
When I read about actors, I get flashbacks to some of my worst struggles with channels. Where our application is split into disparate agents that everyone has to eventually map out on a whiteboard to understand even basic runtime behavior.
I owe it to myself to try it out, but I can't help but be discouraged by what I think they are. Can someone clarify? What does the actor model actually mean for your day-to-day code compared to futures, for example?
I would spend some time experimenting with Erlang or Elixir. The thing that really made the actor model "click" for me was how each actor in Erlang is just a process (green thread basically). The only way to communicate between processes/actors is message passing, and the way you end up constructing software is by modeling individual tasks and responsibilities of your application in terms of processes. The way processes are constructed in Erlang/Elixir allow you to pattern match on messages to selectively receive or prioritize messages, and discard others. A process can be written in such a way that it changes from one role to another just by being sent a message, or from one state to another (in the case of finite state machines and the like).
A typical example of how it gets applied might be a web server, where each HTTP request is handled by its own process, potentially delegating multiple tasks concurrently to other processes in the background which handle queuing and executing work associated with that request. You might have pipelines of processes, each responsible for some specific task/transformation before handing off work to the next stage in the pipeline. On top of that, Erlang/Elixir provide supervision, where components of the system are started/monitored by a special process type called a supervisor, which will restart its children when failure occurs; if restarts exceed certain limits, the supervisor will itself be restarted by its parent supervisor. So your application ends up structured as a tree of supervisors and worker processes, where components of the application are branches of the tree and can be isolated from the failure of other components.
Hopefully that helps give you an idea of how the model plays out in practice. I would definitely recommend playing around with either Erlang or Elixir, they are a lot of fun, and it really changes the way you think - at least it did for me.
Why is it necessary to discard messages? Isn't it a sign of bad design when resources have to be spent to create and pass a message that is ultimately discarded?
I think it’s just a bit badly worded. You don’t normally throw them away, you leave them in the queue for later.
For instance if you have a process that gets requests and writes stuff to DB, while processing a request you can send a message to the DB, then use a selective receive to match on the response from the DB while ignoring all other messages (you’ll deal with them later i.e. when you fetch the next request to process).
Erlang has one message queue per process. So if a process discards a message, that message cannot become available to other processes. Moreover, since Erlang does not have message broadcast and all messages a process receives were explicitly sent to it, a process cannot receive messages accidentally.
Actors are computers and computers are actors. The mapping from actor to physical computer is sometimes one-to-one, sometimes many-to-one. An actor is an entity capable of handling input, changing its internal state, sending output, and potentially creating new actors.
Whether we use it explicitly or not, the reality is that many of our systems can be modeled as actors in this sense. Whether you're using Erlang (very explicitly based on the actor model) or C with pthreads and mutexes, you can model your system as actors. The specific mechanism of communication is irrelevant, and this is where I find it useful. My day job is in embedded systems. We have clear actors (multiple radios communicating over a protocol) which we treat as actors from a design perspective (of that protocol). Internal to each radio we have a number of physical computers running concurrently and communicating over a bus which acts like a shared memory. For this internal part, actors are not how it's coded. However, they offer an effective model for how things work, as we can isolate the communication parts to a handful of modules (per application running). So from the internal part of our application, it is as if we are sending messages (though in reality we're writing them to a shared memory and setting some flags) and as if we are receiving messages (though in reality we're reading from a shared memory when some flag goes high).
Having worked on fairly concurrent systems in Erlang specifically, I would say one of the most useful tools to understanding the behavior of the system is the tracing facilities. Even something as limited (but user friendly) as ErlyBerly[0] helps a lot.
As you scale up the number of concurrent processes who need to engage in ad-hoc concurrency, message passing becomes (in my experience) the only feasible way to manage those interactions. Being able to insert yourself in-between the communicating processes becomes necessary to truly understand what your system is doing.
This is true even outside of Erlang itself, such as deploying independent OS processes that use HTTP for communication. Wireshark has definitely been my friend.
State charts, interaction diagrams, and sequence diagrams are also invaluable.
A lot of folks here in the comments are confusing a channel system with actors. Channels can help with actor communication, but they are not the same thing. A problem with core.async in Clojure is that the scenario you describe is very simple to create. You’re managing channels and buffers (communication mechanisms) instead of managing logical modules that do work for you (actors).
If you want to succeed with actors, they need to really be a thing and you ideally want to abstract away the communication mechanism. This is what Erlang/Elixir have done and it’s beautiful. You don’t really spend a lot of time thinking about channels and buffers... you just talk to processes that are living things.
On the other hand, you're also one or two more mutexes from misunderstanding the execution of your application. In my experience, mutexes are very hard to reason about, particularly when you push them down to a fine grain, and drawing them out on a whiteboard doesn't help.
One suggestion for actors is to look at the actors as elements of serialization, rather than as elements of concurrency.
It's worth mentioning that this isn't the only way to do distributed programming. Dataflow programming works quite well, and the newest state of the art systems are exploring ways to generate dataflow parallelism implicitly from sequential programs. See e.g. Legion from Stanford:
In my personal opinion, message passing is going to be considered an evolutionary dead-end in the long term, and these sorts of more intuitive programming models based on generating dataflow graphs automatically will become more dominant.
I checked the tutorials section on Legion's site, they don't seem to be very inviting for a stranger.
Is there a better site that would help me grasp basic ideas about Legion and show its strengths in comparison to other paradigms?
First, I'll say: if you have any suggestions for how to improve the tutorials and documentation in general, we'd be happy to hear them. Even years into the project, we're still learning how to teach Legion. We're an open source project and would be happy to accept contributions, but even a set of first impressions would help.
As for what resources exist today:
If a video format would help, the bootcamp [1] has a more gentle on-ramp (despite the name).
If you'd prefer text, there is also a language called Regent. The ideas are all the same, but the code does a much better job of expressing those core concepts. You could study Regent, but write all your code in C++ if you want. Even if you never write code in it, we've found Regent can be a better way to learn. And Regent has a tutorial [2].
I absolutely agree that message passing and actor models are the way things are done now. I just don't agree that it'll be the best we can do in the future.
As a programming languages person, message passing doesn't seem very satisfying. A lot of errors that you can get in parallel codes (e.g. deadlocks) aren't prevented by message passing. There are entirely new kinds of errors that are added (mismatched sends/receives). And there are errors that are nominally fixed, but really aren't: strictly speaking, in a shared-nothing distributed system, there can't be such a thing as a data race. And yet, two messages can absolutely race, and so you end up with the same problems just at a higher level of abstraction. Fundamentally, actor models are giant balls of mutable state interacting concurrently.... I'm sorry, yuck.
The siren's call of implicit parallelism is the idea that you can generate a parallel execution automatically from sequential code. This was the objective of e.g. the auto-parallelizing compilers of the 80s and 90s. And that of course turns out to be an intractable problem. The difference today is that people are (a) more willing to work with new languages and abstractions, and (b) are willing to use dynamic analysis to cover the gaps where the static compile-time analyses fail. And at least in HPC people are willing to consider this because the next-generation machines are getting scary-complicated and it's not obvious how mere mortals are expected to program them under the traditional approach.
I agree, we haven't found a general solution to building concurrent programs without the downsides that most of the existing models incur. The Actor model doesn't remove the burden to handle edge cases from the programmer, and depending on the problem domain it might be the wrong choice, but it does solve certain kinds of problems really well.
It's about using the right tool for the job. For example, I find Futures/Promises great for composing calls to consume various backend services. But when it comes to concurrent processes coordinating work and handling failure transparently, Actors are my first choice most of the time. And sometimes the pragmatic solution is to simply use threads and locks.
I'm just curious about what you mean with the following statement:
> And yet, two messages can absolutely race.
The queuing and dequeuing of messages in an Actor's mailbox are atomic operations, so there cannot be a race condition within an Actor's state. An actor handles messages sequentially. But yes, there could be surprises from certain orderings of incoming messages, if that's what you meant.
Consider the following example:
Imagine three actors: a Bank Account actor and two ATM actors. One of the ATM actors sends the message 'GetBalance' to the Bank Account actor, which in turn replies with a 'CurrentBalance' message. Now that the ATM knows there are enough funds, it sends a message to set the balance to a different value, but in the meantime the other ATM has already set the balance to something else. This of course is a problem, since getting and setting the balance are not atomic (two independent messages).
But this could be solved by removing the 'SetBalance' message and instead having a 'Withdraw' message which the Bank Account actor can reject and reply with the proper 'NotEnoughFunds' message.
Basically:
ATM-1 sends GetBalance to BankAccount
BankAccount replies Balance(500 EUR) to ATM-1
ATM-2 sends Withdraw(400 EUR) to BankAccount
ATM-1 sends Withdraw(200 EUR) to BankAccount
BankAccount replies NotEnoughFunds to ATM-1
ATM-1 sends GetBalance to BankAccount
BankAccount replies Balance(100 EUR) to ATM-1
ATM-1 would now display the problem to the user and show the new balance (and they should worry, since somebody withdrew their money from another ATM).
In the Actor model, each actor is expected to manage its own state, not "leaking" the control to the outside world. Allowing a 'SetBalance' message is giving control over its state to the outside world, and instead the actor should expose behaviour while retaining the possibility to decide what to do given the intention behind the message.
The nice part about this model of concurrency is that it follows quite closely how real-life processes work, so many of the patterns used can become quite intuitive to come up with and to understand.
I've worked extensively on these systems as a developer. ;)
Siemens, Thales, Ericsson, Samsung - all have shipped SIL-4 systems based around the Actor/Voting model in the past, because it is an industrial standard. These systems are still out there, in some cases 20+ years later, in track-side systems as well as operations.
The actor model may not help much with overall system comprehensibility. But it can help a lot with comprehensibility of the pieces.
An HTTP request handler may send messages (and wait for responses) to get all the data it needs, instead of taking mutexes and getting the data directly. The actors responding to the data fetch requests will generally be pretty simple -- get a request, grab the data and return it (and handle writes as necessary); if they bottleneck, it's either too many requests or the underlying storage is too slow. If your request handler bottlenecks, you probably have a slow data fetcher, the handler's data processing is too slow, or you're making a lot of data requests sequentially instead of concurrently.
You absolutely need to have good instrumentation to understand your system -- at a minimum you need to know the queue lengths for at least important actors; having queue processing rate is pretty useful too.
I just finished implementing an akka system. I was tasked to build an event-driven auto-scaling self-healing and self-balancing clustered rule processing engine, and I knew intuitively something like akka was needed.
The learning curve in the beginning was frustratingly steep, and IMHO the only way to really grok Akka is to implement something significant with it. I still feel I haven't fully grasped its nuances, but eventually it became almost as routine to model everything as actors and messages as it was to model REST APIs with models and controllers. It's a different way of thinking.
Akka actually uses futures, when you “ask” an actor (send a message and receive the promise of a reply).
I would say the primary benefit akka offers is changing the level of abstraction and forcing a proper modelling of the logic as message pipelines.
Indeed, an actor-only model will easily lead you to a mess.
At first it's simpler to think concurrently with actors, mainly because of the strict message-passing rule for communication.
But as your logic becomes more complex, it gets harder to understand, since the code flow is artificially split across different entities; that doesn't scale or compose easily.
That being said, it can be powerful to have just some top-level actor abstractions that encapsulate the more common idioms.
Actors are great for building distributed microservices: you can have a supervisor to monitor and restart services, and a registry for discovery and subscription. An actor can represent a service, of course. Very useful as high-level concurrency abstractions, not so much as low-level primitives.
I've played with this a bit, professionally and personally (Scala/Akka specifically).
The way it makes you think about your model and the boundaries it introduces seem good. I really love the way it forces you to model the communication between parts of your code as a data structure - it feels like a much less costly version of the "everything as a micro-service" dream.
Scala seems like the ideal language for it - you need really simple, powerful data types for it to feel good, otherwise those messages become painful to make or get overloaded in bad ways.
Akka was still struggling with type loss in the system the last time I used it, which really felt like a pain, and where I feel it adds the boundary that makes it harder to understand.
> Akka was still struggling with type loss in the system the last time I used it, which really felt like a pain, and where I feel it adds the boundary that makes it harder to understand.
I don't know if this is what you are referring to, but: I don't understand the mix of Akka actors and Scala. Scala is a statically typed language -- in fact, that's the whole point of it -- and using Akka actors feels a bit like programming in Smalltalk. So you turn a statically typed language into a dynamically typed language with message passing. Honest question: why would you use Scala for that?
Back when I took Coursera's "Reactive Programming in Scala" course, some years ago, I asked Roland Kuhn & Martin Odersky precisely this question, and they didn't have a convincing answer. Kuhn's answer was "good point, we're also considering Typed Actors". It's my impression Scala never did go with Typed Actors (I might be mistaken; I certainly don't use them), so I wonder what their current answer to this question is.
Yes, this is precisely the problem I had - Scala is great, Akka's model seems good, but the mix feels like you lose something along the way.
The typed actors stuff does exist, but it's experimental, and the last time I checked they had just thrown away the old version and were working on a new implementation. It seems like it would be better, but it's clearly not done yet.
I too ended up replacing all channels with a mutex or two in a small web socket relay server in Go, and things became 10x easier to reason about.
I think part of the problem is the lack of usage/support for channels in the Go stdlib itself (it seems the stdlib authors also prefer mutexes). The other issue is sending a close message on a channel to let the consumer know there will be no more messages, and that a panic occurs if you write to a closed channel.
If you can reduce everything to _one_ mutex, that's simple. As soon as you have two mutexes, you need to start worrying whether one thread grabs A then B and another grabs B then A. In larger projects with multiple people contributing, the mutex and condvar approach does not scale well. It's very easy to get difficult to reproduce race conditions.
Having been involved in some systems that make heavy use of actors I 100% agree. We call actors "concurrent gotos" as they bring all the "advantages" of gotos with the added "benefit" of concurrency. It's a very primitive model of concurrency, only slightly better than mutexes. There are much better models in the research community but few of the ideas have yet made it into usable systems.