
I see your blog article and raise you a Wikipedia article [1]:

"In computer science, concurrency is the decomposability property of a program, algorithm, or problem into order-independent or partially-ordered components or units. This means that even if the concurrent units of the program, algorithm, or problem are executed out-of-order or in partial order, the final outcome will remain the same. This allows for parallel execution of the concurrent units, which can significantly improve overall speed of the execution in multi-processor and multi-core systems."

Note the 'decomposability' part, i.e. having the property of being decomposable.

[1] https://en.wikipedia.org/wiki/Concurrency_%28computer_scienc...



Zeroth, I'm not the owner of that blog. He's a CS researcher, I'm just a lowly programmer in the trenches. Just wanted to make that clear.

First, the Wikipedia article you quote is partially incorrect:

> This means that even if the concurrent units of the program, algorithm, or problem are executed out-of-order or in partial order, the final outcome will remain the same.

This is not true. Consider the following example: Mr. Foo and Ms. Bar are happily married, and own a shared bank account, whose initial balance is $400. Concurrently, they attempt to withdraw $200 and $300 respectively. The scheduling of the withdrawal operations will determine the final result:

(0) Foo has $200, Bar has nothing, and the final account balance is $200.

(1) Foo has nothing, Bar has $300, and the final account balance is $100.
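
To make that concrete, here's a minimal Python sketch of the example (the names and structure are mine, purely for illustration). Each withdrawal is made atomic with a lock, i.e. the concurrency control itself is correct, yet the final outcome still depends on which thread the scheduler runs first:

    import threading

    balance = 400                         # the shared account
    lock = threading.Lock()

    def withdraw(who, amount):
        global balance
        with lock:                        # each withdrawal is atomic
            if balance >= amount:
                balance -= amount
                print(f"{who} got ${amount}")
            else:
                print(f"{who} was refused (only ${balance} left)")

    t1 = threading.Thread(target=withdraw, args=("Foo", 200))
    t2 = threading.Thread(target=withdraw, args=("Bar", 300))
    t1.start(); t2.start()
    t1.join(); t2.join()
    print(f"final balance: ${balance}")

    # Depending on scheduling, this prints either outcome (0)
    # (Foo gets $200, final balance $200) or outcome (1)
    # (Bar gets $300, final balance $100).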

> Note the 'decomposability' part, i.e. having the property of being decomposable.

Both parallelism and concurrency are about decomposition. They are different kinds of decomposition, though. Parallel tasks cooperate towards a common goal. Concurrent tasks compete for access to common resources.


I think that Wikipedia claim is predicated on getting the 'same result' relative to the original partial order. Even a sequential implementation can produce different results if the requests arrive in a different order.

> Parallel tasks cooperate towards a common goal. Concurrent tasks compete for access to common resources.

This definition is interesting, but I think it lacks descriptive power: are web server tasks parallel, as they are all contributing to fulfilling the incoming client request queue? Are threads of a matrix multiplication process concurrent, as they compete for access to the CPU?

It also misses the whole reason for parallelism, which is a (constant-factor) speed-up of execution.

Anyway, to come full circle, I stand by my assertion that async can be all about parallelism.

[BTW, this has been a great conversation; sometimes one needs to put their thoughts in writing to actually fully understand them]


> Even a sequential implementation can produce different results if the requests arrive in a different order.

I never said that the withdrawal requests must be served in the same exact order they were received. This may or may not be the case.

> are web server tasks parallel, as they are all contributing to fulfilling the incoming client request queue?

At the highest level of abstraction, they're concurrent. The client request queue is a low-level implementation detail, and a high-level language could well hide it from you. (Yes, this means that people often do unnecessarily low-level programming, even when using so-called “high-level languages”.)

>> Parallel tasks cooperate towards a common goal. Concurrent tasks compete for access to common resources.

> This definition is interesting, but I think it lacks descriptive power

Those particular sentences weren't definitions. The actual definitions were what I said a few comments earlier: “Parallelism and concurrency are fundamentally different things. Parallelism is running independent computations simultaneously. Concurrency control is managing the use of shared resources by temporarily granting exclusive access to them to individual computations.”

> It also misses the whole reason for parallelism, which is a (constant-factor) speed-up of execution.

It doesn't. If there is only one goal, then the only possible point of splitting a computation into parallel subcomputations is to speed up the global computation.
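
To illustrate what I mean (a rough Python sketch; the work function and chunk size are arbitrary choices of mine): the one goal is a single sum, the subcomputations are independent, and the only thing the split buys you is wall-clock time. The result is the same with or without the split:

    from multiprocessing import Pool

    def partial_sum(bounds):
        lo, hi = bounds
        return sum(i * i for i in range(lo, hi))

    if __name__ == "__main__":
        n = 10_000_000
        step = 2_500_000
        chunks = [(i, min(i + step, n)) for i in range(0, n, step)]

        # One computation, split into four independent subcomputations
        # that run simultaneously; the split only affects speed.
        with Pool(processes=4) as pool:
            total = sum(pool.map(partial_sum, chunks))

        print(total == sum(i * i for i in range(n)))   # True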


>> Even a sequential implementation can produce different results if the requests arrive in a different order.

> I never said that the withdrawal requests must be served in the same exact order they were received. This may or may not be the case.

My point is that if there is no ordering in the first place, there is no ordering to be preserved.

> Parallelism is running independent computations simultaneously. Concurrency control is managing the use of shared resources by temporarily granting exclusive access to them to individual computations.

Concurrency control != concurrency though.


> Concurrency control != concurrency though.

Concurrency is simply the situation where two or more processes need exclusive access to a shared resource, so it's fair to say that, whenever there's concurrency, there's a need for concurrency control.
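
And to show why (same made-up bank sketch as in my earlier comment, just with the lock removed and the race window widened artificially): without concurrency control, the check and the update of the two withdrawals can interleave, and the account can go negative:

    import threading, time

    balance = 400

    def withdraw(who, amount):
        global balance
        if balance >= amount:            # check...
            time.sleep(0.01)             # widen the window between check and update
            balance -= amount            # ...then update, with nothing tying the two together
            print(f"{who} got ${amount}")

    t1 = threading.Thread(target=withdraw, args=("Foo", 200))
    t2 = threading.Thread(target=withdraw, args=("Bar", 300))
    t1.start(); t2.start()
    t1.join(); t2.join()
    print(f"final balance: ${balance}")  # typically -$100: both checks saw $400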



