Although this paper talks initially about concurrency, it becomes clear that the author is really talking about concurrency specifically for the purpose of parallelism.
Coroutines don't solve the parallelism part, because they're concurrent but exclusive.
Async/await as implemented in JavaScript doesn't solve the parallelism part either for the same reason, and async/await as implemented in C# has exactly the same problem as threads.
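To make "concurrent but exclusive" concrete, here's a minimal sketch using Python's asyncio as a stand-in for the same single-threaded event-loop model that JavaScript's async/await uses:

    import asyncio
    import threading

    async def worker(name):
        for i in range(3):
            # Both coroutines run on the same thread; they interleave at
            # await points but never execute at the same instant.
            print(name, i, "on", threading.current_thread().name)
            await asyncio.sleep(0)  # yield control back to the event loop

    async def main():
        # Concurrency: A and B make progress in interleaved order.
        # No parallelism: only one thread ever does the work.
        await asyncio.gather(worker("A"), worker("B"))

    asyncio.run(main())

Every line prints from the same thread, which is the whole point: the coroutines take turns, they never overlap.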
There are many ideas for how to solve the problem, but I think anyone who is honest will tell you that none is a perfect solution to every situation where you want parallelism or concurrency.
For example, to use zmq-communicating processes effectively you need a problem where you can divide the data or tasks cleanly a priori. We simply don't have the mathematical understanding of how to do that for some important algorithms that people really need to run in parallel today, such as triangulation or mesh refinement.
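Here's a rough sketch of that pattern with pyzmq (assumed installed): a classic PUSH/PULL ventilator pushing pre-divided tasks to worker processes. Note that the whole scheme rests on the tasks being independent up front; nothing here can discover at runtime that two tasks conflict.

    import multiprocessing

    import zmq

    def worker(endpoint):
        ctx = zmq.Context()
        pull = ctx.socket(zmq.PULL)
        pull.connect(endpoint)
        while True:
            task = pull.recv_json()
            if task["id"] < 0:  # sentinel: no more work
                break
            print("worker got task", task["id"])

    if __name__ == "__main__":
        endpoint = "tcp://127.0.0.1:5557"
        ctx = zmq.Context()
        push = ctx.socket(zmq.PUSH)
        push.bind(endpoint)
        procs = [multiprocessing.Process(target=worker, args=(endpoint,))
                 for _ in range(2)]
        for p in procs:
            p.start()
        for i in range(10):
            push.send_json({"id": i})  # tasks divided cleanly a priori
        for _ in procs:
            push.send_json({"id": -1})  # one sentinel per worker
        for p in procs:
            p.join()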
We probably need some radical new idea, or maybe not; it's looking increasingly like only a mix of ideas will work.
Let's say you have a domain X that you need to triangulate. You break it along a plane into two domains, A and B, about equally large.
Imagine that you have a magical black box system that can triangulate A and B.
Would this not help you to triangulate X? I can hardly believe it wouldn't. (Again, perhaps it wouldn't in pathological cases, or if a near-optimal solution is not good enough.)
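As a sketch of that intuition in Python (numpy and scipy assumed), with scipy's Delaunay playing the magical black box. The stitch-up along the cut is deliberately left as a comment, because that's exactly where the difficulty hides:

    import numpy as np
    from concurrent.futures import ProcessPoolExecutor
    from scipy.spatial import Delaunay

    def triangulate_region(points):
        # The "magical black box". Return the triangle index array so the
        # result pickles cleanly across the process boundary.
        return Delaunay(points).simplices

    if __name__ == "__main__":
        pts = np.random.rand(10000, 2)
        cut = np.median(pts[:, 0])  # cut X along a vertical plane
        a, b = pts[pts[:, 0] <= cut], pts[pts[:, 0] > cut]
        with ProcessPoolExecutor(max_workers=2) as pool:
            ta, tb = pool.map(triangulate_region, [a, b])
        # Stitching ta and tb back together (repairing the triangles that
        # straddle the cut) is exactly where the difficulty hides.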
IIRC, triangulation iterations can replace previous cuts. So A and B might not have any meaning in the next iteration if you triangulate properly. That's why it's okay to cut at random: it only affects the outcome indirectly.
Yes. After checking Wikipedia, I was referring to the edge-flipping operation described in the “flip algorithms” section. There's also a divide-and-conquer algorithm in the article, but it needs extra steps to fix the dividing edge. Was that what you were talking about?
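For concreteness, the flip test behind that operation is just the incircle predicate; here's a minimal sketch (my own illustration, not code from the article):

    import numpy as np

    def in_circumcircle(a, b, c, d):
        # Standard incircle determinant: positive when d lies strictly
        # inside the circumcircle of counter-clockwise triangle (a, b, c).
        m = np.array([
            [a[0] - d[0], a[1] - d[1], (a[0] - d[0])**2 + (a[1] - d[1])**2],
            [b[0] - d[0], b[1] - d[1], (b[0] - d[0])**2 + (b[1] - d[1])**2],
            [c[0] - d[0], c[1] - d[1], (c[0] - d[0])**2 + (c[1] - d[1])**2],
        ])
        return np.linalg.det(m) > 0

    # An edge shared by triangles (a, b, c) and (a, b, d) gets flipped
    # to (c, d) when this test is true:
    print(in_circumcircle((0, 0), (1, 0), (0, 1), (0.9, 0.9)))  # True -> flip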
Isn't that something you could approach with work-stealing techniques? You'd need to build the work pile as you go, but that seems appropriate. Maybe I'm missing something?
The problem is: given a list of tasks that you want to solve in parallel, you cannot know ahead of time which tasks use the same data. You have to start solving them to figure that out. The problem isn't distributing the work, it's knowing when it's safe to do two things in parallel.
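A crude sketch of what I mean (regions_touched is made up; in real mesh refinement you only learn a task's footprint by doing the work). The scheduling part is easy; the per-region locking is the part no work pile gives you for free:

    import threading
    from concurrent.futures import ThreadPoolExecutor

    region_locks = {r: threading.Lock() for r in range(8)}

    def regions_touched(task):
        # Made-up stand-in: which regions of the mesh a task touches is
        # only discovered while the task is running.
        return {task % 8, (task * 3) % 8}

    def run_task(task):
        touched = sorted(regions_touched(task))  # discovered at runtime
        for r in touched:  # acquire in a fixed order to avoid deadlock
            region_locks[r].acquire()
        try:
            pass  # ... refine the mesh elements in `touched` ...
        finally:
            for r in reversed(touched):
                region_locks[r].release()

    with ThreadPoolExecutor(max_workers=4) as pool:
        list(pool.map(run_task, range(32)))  # distributing work: the easy part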