
This article mostly covers how _performant_ (as in fast / resource-savvy) parallel programming is hard. But robust parallel programming can be tricky too, and the way you "[Avoid] race conditions" will have an impact on performance. So the trade-off the article evokes between speed, maintenance, and memory could also take robustness into account.



Exactly. When I see people getting themselves tangled up with parallel programming, it's because they can't accept the performance trade-offs of simple, tractable solutions. 90% of the time, the inability to accept the trade-offs has nothing to do with actual performance measurements and everything to do with groundless assumptions that certain techniques are always way too slow.
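
To make that concrete: the "simple, tractable solution" is often just one coarse lock around all the shared state. A minimal Python sketch (the Counter class is purely illustrative, not anything from the article):

    import threading

    class Counter:
        """One coarse lock around all shared state: trivially free of
        race conditions, easy to reason about, and often fast enough --
        measure before assuming the lock is your bottleneck."""
        def __init__(self):
            self._lock = threading.Lock()
            self._value = 0

        def increment(self):
            with self._lock:
                self._value += 1

        def value(self):
            with self._lock:
                return self._value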


But isn't the very reason you bother with parallelism that you need higher performance? What possible other reason could there be?


Scalability


The raw performance increase you can get from parallel processing is, at the absolute maximum, proportional to the number of processors you can throw at your program. Usually it is much less.
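
Amdahl's law makes the ceiling concrete: if a fraction p of the work parallelizes, the speedup on n processors is at most 1 / ((1 - p) + p/n). A quick back-of-the-envelope in Python (p = 0.9 is just an assumed figure):

    def amdahl_speedup(p, n):
        """Upper bound on speedup when a fraction p of the work
        is parallelizable across n processors."""
        return 1.0 / ((1.0 - p) + p / n)

    # Even with 90% parallel work, 16 cores give well under 16x:
    print(amdahl_speedup(0.9, 16))  # ~6.4
    # and as n grows the limit is only 1 / (1 - p) = 10x.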

A raw performance increase is not the most common good reason for concurrency, for using multiple threads or processes in an application, though it is a common bad reason. Reducing latency is one common good reason for concurrency on a single machine.
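
For instance, overlapping two independent network calls cuts wall-clock latency toward the slower of the two rather than their sum. A sketch using Python's thread pool (the URLs are placeholders):

    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    def fetch(url):
        with urlopen(url) as resp:
            return resp.read()

    urls = ["https://example.com/a", "https://example.com/b"]  # placeholders

    # Issued concurrently, total latency approaches max(t_a, t_b)
    # instead of t_a + t_b -- no extra throughput, just less waiting.
    with ThreadPoolExecutor(max_workers=2) as pool:
        results = list(pool.map(fetch, urls))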


Being able to do processing and I/O in parallel is another source of speedup from concurrency on a single processor.
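
A common shape for this is a reader thread that prefetches the next chunk while the current one is being processed. A minimal sketch, assuming a chunked binary file and a hypothetical process() step:

    import queue
    import threading

    def reader(path, q, chunk_size=1 << 20):
        """Prefetch file chunks so I/O overlaps with processing."""
        with open(path, "rb") as f:
            while chunk := f.read(chunk_size):
                q.put(chunk)
        q.put(None)  # sentinel: end of stream

    q = queue.Queue(maxsize=4)  # bounded queue limits memory use
    t = threading.Thread(target=reader, args=("data.bin", q))
    t.start()

    while (chunk := q.get()) is not None:
        process(chunk)  # hypothetical processing step
    t.join()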



