
The post is about concurrency, but the word parallelism is used instead. To be fair, task-level parallelism makes concurrent code run faster, but it doesn't really scale if done for its own sake.



I think in this case they're relatively interchangeable terms. Rather than a SIMD vectorization of a task, you are applying a MIMD solution to various parts of a task.

You can typically get more of an immediate boost with SIMD on current hardware (especially if you can effectively cast it to GPGPUs), but MIMD is more easily applied. Almost any application can be refactored to spawn lightweight threads for many calculations without any explicit knowledge of the π-calculus.
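(Toy Go sketch, not from the article, names and numbers made up, of the MIMD point: two goroutines run different instruction streams over the same array at the same time, whereas SIMD would be one instruction applied across many lanes.)

    package main

    import (
        "fmt"
        "math"
    )

    func main() {
        data := make([]float64, 1<<20)
        for i := range data {
            data[i] = float64(i)
        }

        sums := make(chan float64, 2)

        // Stream 1: plain sum over the data.
        go func() {
            s := 0.0
            for _, v := range data {
                s += v
            }
            sums <- s
        }()

        // Stream 2: a different computation over the same data.
        go func() {
            s := 0.0
            for _, v := range data {
                s += math.Sqrt(v)
            }
            sums <- s
        }()

        fmt.Println(<-sums + <-sums)
    }

Spawning a couple of goroutines like this requires no knowledge of the π-calculus, which is the point; whether it actually helps is a separate question.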

To your point and for a well-understood example, make -j doesn't always result in faster compilations. It may if you have the ability to leverage imbalances in CPU and storage hierarchies, but you can also kill your performance with context switches (including disk seeking).


> but MIMD is more easily applied. Almost any application can be refactored to spawn lightweight threads for many calculations without any explicit knowledge of the π-calculus.

MIMD hasn't been shown to scale, and it's not locking that is the problem, but I/O and memory.

> To your point and for a well-understood example, make -j doesn't always result in faster compilations. It may if you have the ability to leverage imbalances in CPU and storage hierarchies, but you can also kill your performance with context switches (including disk seeking).

When Martin Odersky began pushing actors as Scala's answer to multi-core, this was my immediate thought: the Scala compiler is slow, can this make it go faster? It was pretty obvious after some thought that the answer was no. Then again, we have no way yet of casting compilation as a data-parallel task (a point in favor of task parallelism, but it doesn't help us much here).


"MIMD hasn't been shown to scale, and its not locking that is the problem, but I/O and memory."

We're going to have to disagree on this one, as we have some obvious examples in favor of MIMD scaling on the TOP500. SIMD is just a tool for a subset of parallelizable problems.


I think the post is about parallelism. It's about how Erlang naturally scales to many cores by running in parallel. If Erlang had only concurrency, as Javascript does, it would not be solving the "right problem".


Again, even if Erlang supports hardware threading properly, it doesn't magically become a good platform for parallel computing; there is no guarantee it will scale at all.

It's funny, actually: a PL person thinks the key to scalable parallelism is hardware threading; a graphics or systems person thinks the key is a well planned pipeline.


http://talks.golang.org/2012/waza.slide

"Once we have the breakdown, parallelization can fall out and correctness is easy."

Joe is saying this too. And he's saying that because Erlang is a concurrent language, parallelism (he's thinking MIMD, not SIMD) is easy. He says:

> Now Erlang is (in case you missed it) a concurrent language, so Erlang programs should in principle go a lot faster when run on parallel computers, the only thing that stops this is if the Erlang programs have sequential bottlenecks.

I don't think he - or the Go chaps - conflate concurrency and parallelism.
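To make that distinction concrete, here's a toy Go sketch (my own, not from the waza slides): the program is structured concurrently - workers pulling jobs off a channel - and whether it actually executes in parallel is decided by how many OS threads the runtime gets (GOMAXPROCS), not by the shape of the code.

    package main

    import (
        "fmt"
        "runtime"
        "sync"
    )

    func main() {
        // How many goroutines may run in parallel; the concurrent
        // structure below is identical whether this is 1 or 16.
        fmt.Println("parallelism:", runtime.GOMAXPROCS(0))

        jobs := make(chan int)
        results := make(chan int)

        var wg sync.WaitGroup
        for w := 0; w < 4; w++ { // four concurrent workers
            wg.Add(1)
            go func() {
                defer wg.Done()
                for n := range jobs {
                    results <- n * n // stand-in for real work
                }
            }()
        }

        go func() {
            for i := 0; i < 100; i++ {
                jobs <- i
            }
            close(jobs)
        }()

        go func() {
            wg.Wait()
            close(results)
        }()

        sum := 0
        for r := range results {
            sum += r
        }
        fmt.Println("sum of squares:", sum)
    }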


My main issue here is that when people hear "parallelism" and "a lot faster" they automatically think "scaling." But hardware threading alone doesn't get us anywhere near that goal, even if we write our C-style multi-threaded code by hand very carefully.

The PL community is still not having honest up-to-date conversations about parallelism; they are about 20 years behind other fields.


Ok, you have to describe what you think the honest and up-to-date conversation about parallelism is then.


Well, look at how people like Google's Jeff Dean, originally a PL person, became a systems person to attack parallelism problems head on. That is, look at the problems that NEED parallel computing; don't treat parallelism as a transparent benefit that is nice to have if it happens, and if it doesn't, it's not the end of the world.

Once you accept that parallelism is needed, you realize that it is much more complex than just dividing things up onto multiple cores. Locking is never really the big problem (that is really a concurrency problem); the problem becomes all about pumping data to the right place at the right time.
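A toy Go sketch of that "pipeline" view (purely illustrative, my own example): each stage owns its data and hands it downstream over a channel, so there is nothing to lock; the whole game is keeping data flowing from stage to stage.

    package main

    import "fmt"

    // gen emits 0..n-1 on its output channel.
    func gen(n int) <-chan int {
        out := make(chan int)
        go func() {
            for i := 0; i < n; i++ {
                out <- i
            }
            close(out)
        }()
        return out
    }

    // square reads from one stage and feeds the next.
    func square(in <-chan int) <-chan int {
        out := make(chan int)
        go func() {
            for v := range in {
                out <- v * v
            }
            close(out)
        }()
        return out
    }

    func main() {
        total := 0
        for v := range square(gen(1000)) {
            total += v
        }
        fmt.Println(total)
    }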



