
Message passing has worse performance than locks.



I don't know where you heard that, and you should stop repeating it because it's almost gibberish. It's kind of like asking, "Which is faster? A car or a typewriter?" Well, a decent typist can put out 100 WPM on a typewriter, and a decent car can go 60 m/s. So it depends on where you are going. If your goal is an essay, then the typewriter is faster. If your goal is the other side of town, the car is faster.

The big disadvantage of locks is that performance decreases with contention, and the performance of a shared-memory system in general degrades as the number of nodes increases. So every supercomputer in recent history uses a hierarchical approach: a network of multi-core units, with shared memory and locks for sharing data among cores within a unit, and message passing for sharing data between units.

Just imagine trying to use system-wide locks on the IBM Sequoia. It has something like a million cores.
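
To make that hierarchy concrete, here is a minimal sketch of the hybrid model, assuming an MPI implementation and POSIX threads are available; the thread count, the dummy workload, and the final reduction are invented for the example rather than taken from any particular machine. Threads in one unit update a shared sum under a mutex, and the per-unit sums are then combined by message passing:

    /* Illustrative hybrid sketch: shared memory + a lock inside one unit,
     * message passing (MPI) between units. THREADS_PER_NODE and the dummy
     * workload are made up for the example. Build: mpicc -pthread hybrid.c */
    #include <mpi.h>
    #include <pthread.h>
    #include <stdio.h>

    #define THREADS_PER_NODE 4

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static long node_sum = 0;                 /* shared by threads on this node */

    static void *worker(void *arg)
    {
        (void)arg;
        long local = 0;
        for (int i = 0; i < 1000000; i++)
            local += 1;                       /* stand-in for real work */

        pthread_mutex_lock(&lock);            /* shared memory + lock within the unit */
        node_sum += local;
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    int main(int argc, char **argv)
    {
        int rank, provided;
        long global_sum = 0;
        pthread_t tid[THREADS_PER_NODE];

        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        for (int i = 0; i < THREADS_PER_NODE; i++)
            pthread_create(&tid[i], NULL, worker, NULL);
        for (int i = 0; i < THREADS_PER_NODE; i++)
            pthread_join(tid[i], NULL);

        /* message passing between units: combine the per-node sums */
        MPI_Reduce(&node_sum, &global_sum, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("global sum = %ld\n", global_sum);

        MPI_Finalize();
        return 0;
    }

The only message-passing traffic is the final reduction; all of the fine-grained sharing stays inside each unit, which is the division of labor described above.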


That doesn't invalidate the original point. Message passing is generally slower, even though shared-memory mechanisms cannot scale indefinitely. But for many problems you cannot afford to give up the performance gain that shared memory offers up to that scaling limit, even if you have to tie your nodes together with a message-passing architecture beyond it.

Also, advances have been made in manycore shared-memory systems. The Cray XE6 (the hardware behind, e.g., HECToR [1]) has a hardware-accelerated global address space with remote direct memory access that allows PGAS [2] to outperform MPI [3].

By the way, system-wide locks are a red herring. At these scales, you avoid global data as much as you can, regardless of your programming model.

[1] http://www.hector.ac.uk/
[2] http://en.wikipedia.org/wiki/Partitioned_global_address_spac...
[3] http://upc.lbl.gov/publications/pmbs11.pdf


Part of my point was that if you say "message passing is generally slower" and "shared memory mechanisms cannot scale indefinitely", then the logical conclusion is that problems are generally small enough to fit on a shared-memory system. You can't meaningfully compare the speed of these two techniques in the abstract any more than you can meaningfully ask whether 100 is a "big number".

The comment about global locks was intended to be silly, because comparing locks to message passing without talking about what you're doing with them is also silly.

The linked paper compares MPI message passing to an alternative hardware-accelerated message-passing mechanism, which is interesting, but the choice of micro-benchmarks is not very exciting. To be clear, while the GP was really comparing the actor model (private memory + message passing) against the shared memory + locks model, I was only responding to the parent comment, and when I think "message passing" I don't automatically think "private memory".


PGAS is not an "alternative hardware-accelerated message-passing mechanism", unless you use a definition of message passing so expansive that the statement becomes vacuously true. It's distributed shared memory, integrated into the memory hierarchy, which you can manipulate at the same granularity as other memory, which you can have pointers to, etc.

You can have, say, a 100,000 x 100,000 matrix represented as an array over thousands of processors, where each processor can read and write each array element individually.
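
For a flavor of what that looks like in code, here is a tiny hypothetical sketch in UPC (one PGAS language, and the one behind reference [3]); the array size, blocking factor, and names are invented for the example. One logical shared array spans all threads, and any thread can read or write any element with ordinary indexing, with no explicit send/receive anywhere:

    /* Hypothetical PGAS sketch in UPC. One shared array is distributed across
     * all threads; remote elements are read and written by plain indexing, and
     * the runtime (with RDMA hardware where present) handles the communication.
     * Build with a UPC compiler, e.g. Berkeley UPC: upcc pgas.c */
    #include <upc.h>
    #include <stdio.h>

    #define PER_THREAD 4

    /* blocked so each thread owns PER_THREAD consecutive elements */
    shared [PER_THREAD] double a[PER_THREAD * THREADS];

    int main(void)
    {
        int i;

        /* each thread initializes only the elements with affinity to it */
        upc_forall (i = 0; i < PER_THREAD * THREADS; i++; &a[i])
            a[i] = MYTHREAD;

        upc_barrier;

        /* thread 0 reads elements owned by every other thread directly */
        if (MYTHREAD == 0)
            for (i = 0; i < PER_THREAD * THREADS; i++)
                printf("a[%d] = %g (owned by thread %d)\n",
                       i, a[i], (int)upc_threadof(&a[i]));

        return 0;
    }

Nothing here is tied to any particular machine; the point is only that the programming model exposes a single global address space rather than explicit messages, and hardware like the XE6's can make the remote accesses cheap.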


Well, my comment was a response to the OP's comment stating that message passing eliminates the need for locks entirely. It can only be viewed within that context. I think you took my comment out of context and went in a different direction.


I assume you meant to say that locks and message passing address different problems, and I agree, but the GP asserted that message passing would solve the problems locks would have solved, generally with better results, which is just not true, at least in the performance department.
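
As a toy illustration of that substitution, here is the same tallying problem solved once with a shared counter behind a mutex and once by sending one message per event to a single consumer that owns the count. Everything here is invented for the sketch (the worker and event counts, and the use of a pipe as the message channel), and neither half is a benchmark:

    /* Toy contrast of the two models on the same problem: tallying events
     * from several worker threads. WORKERS, EVENTS_PER_WORKER, and the use
     * of a pipe as the message channel are all invented for the sketch.
     * Compile with: cc -pthread contrast.c */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    #define WORKERS 4
    #define EVENTS_PER_WORKER 1000

    /* model 1: shared memory + lock; every worker touches one counter */
    static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
    static long counter = 0;

    static void *lock_worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < EVENTS_PER_WORKER; i++) {
            pthread_mutex_lock(&mtx);
            counter++;
            pthread_mutex_unlock(&mtx);
        }
        return NULL;
    }

    /* model 2: message passing; workers send events, one consumer owns the count */
    static int pipe_fd[2];

    static void *msg_worker(void *arg)
    {
        (void)arg;
        char msg = 1;
        for (int i = 0; i < EVENTS_PER_WORKER; i++)
            if (write(pipe_fd[1], &msg, 1) != 1)   /* one tiny message per event */
                break;
        return NULL;
    }

    int main(void)
    {
        pthread_t t[WORKERS];
        long received = 0;

        for (int i = 0; i < WORKERS; i++)
            pthread_create(&t[i], NULL, lock_worker, NULL);
        for (int i = 0; i < WORKERS; i++)
            pthread_join(t[i], NULL);
        printf("lock model:    counter  = %ld\n", counter);

        if (pipe(pipe_fd) != 0)
            return 1;
        for (int i = 0; i < WORKERS; i++)
            pthread_create(&t[i], NULL, msg_worker, NULL);
        while (received < (long)WORKERS * EVENTS_PER_WORKER) {
            char msg;
            if (read(pipe_fd[0], &msg, 1) == 1)    /* only this thread touches the tally */
                received++;
        }
        for (int i = 0; i < WORKERS; i++)
            pthread_join(t[i], NULL);
        printf("message model: received = %ld\n", received);
        return 0;
    }

The contrast is only structural: the message-passing version trades each lock acquisition for a message, and whether that trade pays off depends entirely on the workload, which is what this thread is arguing about.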


Exactly. My disagreement with the parent comment is unrelated to the bogosity of its parent comment.



