
Isn't this example just like map/reduce in Python?

I guess it is more like parallel execution than async, since I associate async with low sleep/wake-up overhead and cheap context switching. How does the GIL play into this?




In asyncio and in most other asynchronous frameworks for Python, threads aren't usually involved.

Your Python interpreter is doing one thing at a time. However, a function can get to a statement that needs to wait for something outside of the interpreter -- usually I/O, but it could also be a timer.

What happens at that point isn't thread-switching. The function gets suspended and control goes elsewhere in the program, just like if you yielded from a generator. (It's the same mechanism.) The function can be woken up again by feeding it the data it was waiting for -- the asyncio main loop is responsible for this.
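A minimal sketch of that mechanism (the coroutine names here are just for illustration):

    import asyncio

    async def waiter():
        # Suspends at the await; the event loop runs other
        # tasks while the one-second timer is pending.
        print("waiting")
        await asyncio.sleep(1)
        print("woken up")

    async def other_work():
        print("running while waiter() is suspended")

    async def main():
        await asyncio.gather(waiter(), other_work())

    asyncio.run(main())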

If you actually want your processor to do more things at the same time, though, asyncio's model of asynchronous computing isn't going to do it. Python programmers are afraid of threads (I mean, they have a lot of disadvantages and not much benefit in Python, due to the GIL), so they'll tend to use multiple processes for that instead, e.g. as sketched below.
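For example, a process pool sidesteps the GIL because each worker is a separate interpreter (cpu_bound here is just a stand-in for real work):

    from concurrent.futures import ProcessPoolExecutor

    def cpu_bound(n):
        # Pure-Python CPU work; each worker process has its own GIL.
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:
            print(list(pool.map(cpu_bound, [10**6] * 4)))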

EDIT: But at this point I realize you're asking about a much more specific thing in the article. This article is asking you to not be afraid of combining threads and asyncio. It's suggesting that a useful asynchronous thing you can do is to spawn a thread, perform a computation in it, and wait for the result.

At this point you do need to worry about the GIL. C extensions can release the GIL, so the article suggests you use one of those (NumPy).
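A rough sketch of that pattern (not the article's exact code) -- hand the NumPy call to a worker thread and await the result:

    import asyncio
    from concurrent.futures import ThreadPoolExecutor

    import numpy as np

    def heavy_numpy(a, b):
        # NumPy releases the GIL inside calls like matrix multiply,
        # so this thread can run alongside the event loop.
        return a @ b

    async def main():
        loop = asyncio.get_running_loop()
        a = np.random.rand(2000, 2000)
        b = np.random.rand(2000, 2000)
        with ThreadPoolExecutor() as pool:
            result = await loop.run_in_executor(pool, heavy_numpy, a, b)
        print(result.shape)

    asyncio.run(main())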


Or you can use Numba to write GIL-free imperative code as well.
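A sketch, assuming Numba is installed -- nopython-compiled functions can drop the GIL with nogil=True (the loop here is just an example):

    import numpy as np
    from numba import njit

    @njit(nogil=True)
    def slow_sum(a):
        # Compiled to machine code; with nogil=True the GIL is
        # released while the loop runs, so it can execute in a
        # thread alongside other Python code.
        total = 0.0
        for i in range(a.shape[0]):
            total += a[i]
        return total

    print(slow_sum(np.arange(1_000_000, dtype=np.float64)))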



