Suppose we have a good model of atoms -- how much of this will be a good way to think about living systems? (Or should we come up with better and more useful architectural ideas that are a better fit to the scales we are trying to deal with -- hint: in biology, they are not like atomic physics or even much of chemistry ...)
A point here is that trying to make atomic physics better will help only a little, if at all. So trying to make early languages "much better" likely misses most if not all of the real progress that is needed on an Internet or larger scale.
(I personally think that much enterprise software even today needs architectural ideas and languages that are different in kind from the languages of the 60s and 70s (meaning most of the languages used today).)
(preface: Riffing wildly here -- and may have gone in a different direction than your analogy's original intent --)
So, regardless of whether or not actors are a good pattern, what we need is scale-free patterns?
I can see how getting hung up on actors as a programming language feature would impede that.
How can we make the jump to scale-free, though?
- With actors, historically, we seem to have gravitated to talking about them in terms of a programming language feature or design problem -- while in some sense it implies "message passing", we usually implement the concept at scales of small bits of an in-memory process.
- With processes in the unixish family, we've made another domain with boundaries, but the granularity and kind of communication that are well-standardized at the edges of a process aren't anywhere near what we expect from the languages we use to craft the interior of processes. And processes don't really compose, sadly.
- With linux cgroups, things finally go in a tree. Sorta. (It's rough trying to stack them in a way where someone arbitrarily deep in the tree can't decide to take an axe directly to the trunk and topple the whole thing). Like processes, we're still handling granularity of failure domains here (better than nothing), but not defining any meaningful or scalable shepherding of communication. And we still haven't left the machine.
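The first bullet's point — that "actor" usually ends up meaning a small in-memory construct rather than a scale-free pattern — is easy to see in a sketch. Below is a minimal toy actor in Python (names like `Counter` and the `("get", reply)` protocol are illustrative, not from any particular actor library): private state, a mailbox, one thread draining it. Everything here lives inside one process, which is exactly the scale limitation being described.

```python
import queue
import threading

class Counter:
    """A toy in-process actor: private state, a mailbox, one thread draining it."""
    def __init__(self):
        self.count = 0
        self.mailbox = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, msg):
        # "message passing" here is just an in-memory queue put
        self.mailbox.put(msg)

    def _run(self):
        while True:
            msg = self.mailbox.get()
            if msg == "inc":
                self.count += 1
            elif isinstance(msg, tuple) and msg[0] == "get":
                msg[1].put(self.count)  # reply on a channel the sender provided

actor = Counter()
for _ in range(3):
    actor.send("inc")
reply = queue.Queue()
actor.send(("get", reply))
print(reply.get())  # 3
```

Nothing in this shape says anything about crossing process or machine boundaries, let alone about failure, trust, or negotiation between parties that don't share code — which is where the scale-free question actually starts.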
I'm sold that we need some sort of architectural ideas that transcend these minutiae and are meaningful at the scale of the-internet-or-larger. But what patterns are actually scalable in terms of getting many systems to consensually interoperate on them?
I'm twitchy about trying to define One True Pure Form of message passing, or even intent passing, which seems to be a dreamier name that still converges at the same limits when implemented.
But I dream that there's a few true forms of concurrent coordination pattern that really simplify distributed and asynchronous systems, and perhaps are scale-free. Perhaps we haven't hit them yet. Words like "actor" and "agent" (divorced of e.g. programming language library) sometimes seem close -- are there other concepts you think are helpful here?
One of many problems with trying to use Unix as "modules" and "objects" is that it has things that aren't objects (like strings, etc.), and this makes it difficult to arrange various scales and extensions of use.
It's not so much "scale-free" but this idea I mentioned elsewhere of "find the most difficult thing you have to do really nicely" and then see how it scales down (scaling up nicely is rarely even possible). This was what worked with Smalltalk -- I came up with about 20 examples that had to be "nice", and some of them were "large" (for their day). We -- especially Dan Ingalls and Ted Kaehler -- were able to find ways to make the bigger more general things small and efficient enough to work uniformly over all the scales we had to deal with.
In other parts of this AMA I've mentioned some of the problems when extended to the whole world (but go for "galactic" to help thinking!)
Almost nothing in today's languages or OSs is at the current state of "biology".
However, one of several starts could be to relax from programming by message sending (a tough prospect in the large) to programming by message receiving, and in particular to program by intent/meaning negotiation.
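One way to make the sending/receiving flip concrete: instead of the sender invoking a fixed method, the receiver publishes the intents it understands and matches incoming messages against them, answering with its own capabilities when it doesn't understand. The sketch below is only an illustration of that inversion; all names (`Receiver`, `understands`, the `"intent"` message field) are invented for this example, not taken from any existing system.

```python
class Receiver:
    """Receiver-centric messaging: the receiver decides what it accepts."""
    def __init__(self):
        self.handlers = {}

    def understands(self, intent):
        # decorator to register a handler for a named intent
        def register(fn):
            self.handlers[intent] = fn
            return fn
        return register

    def receive(self, message):
        handler = self.handlers.get(message.get("intent"))
        if handler is None:
            # a crude form of "negotiation": reply with the intents we do
            # understand, rather than failing with method-not-found
            return {"intent": "can-you", "understands": sorted(self.handlers)}
        return handler(message)

r = Receiver()

@r.understands("convert-temperature")
def convert(msg):
    return {"intent": "result", "celsius": (msg["fahrenheit"] - 32) * 5 / 9}

print(r.receive({"intent": "convert-temperature", "fahrenheit": 212}))
# {'intent': 'result', 'celsius': 100.0}
print(r.receive({"intent": "translate"}))
# {'intent': 'can-you', 'understands': ['convert-temperature']}
```

The interesting part is what's *not* here: real intent/meaning negotiation would involve both sides iterating toward a shared vocabulary, not a single lookup — this only shows where that conversation would begin.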
And so forth.
Linda was a great idea of the 80s, what is the similar idea scaled for 40 years later? (It won't look like Linda, so don't start your thinking from there ...)
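For readers who haven't met Linda: its core idea was coordination through a shared tuple space rather than direct sends — producers `out` tuples, consumers `rd` (read) or `in` (take) by pattern, and neither side needs to know the other exists. A minimal single-process sketch of those three operations (the class and its API are a simplification for illustration, not Gelernter's actual Linda):

```python
import threading

class TupleSpace:
    """A minimal in-memory Linda-style tuple space (out / rd / in_)."""
    def __init__(self):
        self.tuples = []
        self.cond = threading.Condition()

    def out(self, tup):
        # deposit a tuple and wake any blocked readers/takers
        with self.cond:
            self.tuples.append(tup)
            self.cond.notify_all()

    def _match(self, pattern):
        # None fields in the pattern act as wildcards
        for tup in self.tuples:
            if len(tup) == len(pattern) and all(
                p is None or p == t for p, t in zip(pattern, tup)
            ):
                return tup
        return None

    def rd(self, pattern):
        # blocking read: copy a matching tuple, leave it in the space
        with self.cond:
            while (tup := self._match(pattern)) is None:
                self.cond.wait()
            return tup

    def in_(self, pattern):
        # blocking take: remove the matching tuple from the space
        with self.cond:
            while (tup := self._match(pattern)) is None:
                self.cond.wait()
            self.tuples.remove(tup)
            return tup

space = TupleSpace()
space.out(("job", 1, "resize image"))
space.out(("job", 2, "send mail"))
print(space.in_(("job", None, None)))  # ('job', 1, 'resize image')
print(space.rd(("job", 2, None)))      # ('job', 2, 'send mail')
```

The decoupling in space and time is what made Linda attractive; the hard part the question points at is what replaces this when the "space" must span mutually distrusting machines across the whole Internet — which is why the answer says not to start the thinking from Linda itself.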