Triangulation and mesh refinement seem suitable for divide and conquer, except perhaps in pathological cases.



They’re the canonical example everyone uses of a problem where it doesn’t work well!

In both of them there is tons of parallelism, but you can’t work out which pieces of work are independent (i.e., how to divide it) until you have already started the work.


Let's say you have a domain X that you need to triangulate. You break it along a plane into two domains, A and B, of roughly equal size.

Imagine that you have a magical black box system that can triangulate A and B.

Would this not help you to triangulate X? I can hardly believe it wouldn't. (Again, perhaps it wouldn't in pathological cases, or if a near-optimal solution is not good enough.)
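
To make the structure concrete, here's a minimal Python sketch of that recursion. base_triangulation and merge_along_seam are hypothetical stand-ins, and the merge is exactly where the extra work hides:

    def dc_triangulate(points):
        # Sketch of the proposed divide and conquer, assuming 2D points.
        # base_triangulation and merge_along_seam are hypothetical helpers.
        if len(points) <= 3:
            return base_triangulation(points)  # a point, segment, or triangle
        points = sorted(points)                # order by x: "break along a plane"
        mid = len(points) // 2
        left = dc_triangulate(points[:mid])    # the "black box" on A
        right = dc_triangulate(points[mid:])   # the "black box" on B
        # The halves do most of the work, but edges crossing the cut are
        # missing, and triangles near the seam may violate the Delaunay
        # property, so the merge step is where the difficulty lives.
        return merge_along_seam(left, right)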


IIRC, triangulation iterations can replace previous cuts. So A and B might not have any meaning in the next iteration if you triangulate properly. That's why it's okay to cut at random: it only affects the outcome indirectly.


Wait a second, are we both talking about Delaunay triangulation?


Yes. After checking Wikipedia, I was indeed referring to the edge-flipping operation described in the “flip algorithms” section. There's also a divide-and-conquer algorithm in the article, but it needs extra steps to fix the divide edge. Was that what you were talking about?
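
For concreteness, here's the predicate that drives those flips, as a plain-Python sketch (no mesh data structure; points are 2D tuples and triangle (a, b, c) is assumed counterclockwise):

    def in_circumcircle(a, b, c, d):
        # True if d lies strictly inside the circumcircle of the CCW
        # triangle (a, b, c); this is the standard incircle determinant.
        ax, ay = a[0] - d[0], a[1] - d[1]
        bx, by = b[0] - d[0], b[1] - d[1]
        cx, cy = c[0] - d[0], c[1] - d[1]
        det = ((ax * ax + ay * ay) * (bx * cy - cx * by)
             - (bx * bx + by * by) * (ax * cy - cx * ay)
             + (cx * cx + cy * cy) * (ax * by - bx * ay))
        return det > 0

    def flip(a, b, c, d):
        # Quad a-b-c-d with diagonal (a, c): if d is inside the circumcircle
        # of (a, b, c), swap the diagonal to (b, d), i.e. replace triangles
        # (a, b, c) and (a, c, d) with (a, b, d) and (b, c, d).
        return (a, b, d), (b, c, d)

Lawson's flip algorithm just repeats "find an edge that fails this test, flip it" until none remain, which is why a flip can undo structure created by an earlier cut.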


Yes, that's what I was thinking about.


Isn't that something you could approach with work-stealing techniques? You'd need to build the work pile as you go, but that seems appropriate. Maybe I'm missing something?
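
Something like this for the distribution side, as a toy sketch of the "build the work pile as you go" part (worker_fn is a hypothetical task function that may return newly discovered tasks; a single coarse lock stands in for a real lock-free deque):

    import collections
    import random
    import threading

    def run_work_stealing(initial_tasks, worker_fn, n_workers=4):
        # Toy work-stealing pool: each worker pops from its own deque and,
        # when empty, steals from a random victim. worker_fn(task) may
        # return new tasks, which go onto the local deque.
        deques = [collections.deque() for _ in range(n_workers)]
        for i, task in enumerate(initial_tasks):
            deques[i % n_workers].append(task)
        pending = len(initial_tasks)  # tasks queued or in flight
        lock = threading.Lock()

        def worker(me):
            nonlocal pending
            while True:
                task = None
                with lock:
                    if pending == 0:
                        return                   # everything is done
                    if deques[me]:
                        task = deques[me].pop()  # own work: LIFO end
                    else:
                        victim = random.randrange(n_workers)
                        if victim != me and deques[victim]:
                            task = deques[victim].popleft()  # steal: FIFO end
                if task is None:
                    continue                     # nothing to steal yet; retry
                new_tasks = worker_fn(task) or []
                with lock:
                    deques[me].extend(new_tasks)
                    pending += len(new_tasks) - 1

        threads = [threading.Thread(target=worker, args=(i,))
                   for i in range(n_workers)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

(In CPython the GIL means this only illustrates the scheduling structure, not a real speedup.)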


The problem is: given a list of tasks that you want to solve in parallel, you cannot know ahead of time which jobs use the same data. You have to start solving them to figure that out. The problem isn't distributing the work; it's knowing when it's safe to do two things in parallel.
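
The usual way out is to run tasks optimistically: start one, discover what data it touches as you go, and roll back on conflict. That is roughly what optimistic-parallelism systems like Galois do for Delaunay refinement. A sketch of the pattern, where find_cavity and refine_cavity are hypothetical stand-ins for the real mesh operations and locks maps each triangle to a threading.Lock:

    def try_refine(bad_triangle, mesh, locks):
        # The set of triangles a refinement touches (its "cavity") is only
        # known once we start the task, which is exactly the problem above.
        cavity = find_cavity(bad_triangle, mesh)
        acquired = []
        for tri in sorted(cavity, key=id):  # fixed order avoids deadlock
            if not locks[tri].acquire(blocking=False):
                for lk in acquired:         # conflict with another worker:
                    lk.release()            # roll back and retry later
                return False
            acquired.append(locks[tri])
        try:
            refine_cavity(cavity, mesh)     # safe: we hold all shared data
        finally:
            for lk in acquired:
                lk.release()
        return True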



