
asyncio is not a competition to threads, it's complementary.

In fact, it's a perfectly viable strategy in Python to have several processes, each with several threads, each running an event loop.
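
A minimal sketch of that layout (the coroutine body and the counts are made up; real code would await actual IO):

    import asyncio
    import multiprocessing
    import threading

    async def work(worker_id):
        # Placeholder coroutine; real code would await sockets, queues, etc.
        await asyncio.sleep(0.1)
        print(f"worker {worker_id} done")

    def thread_main(worker_id):
        # Each thread owns its own event loop.
        asyncio.run(work(worker_id))

    def process_main(n_threads):
        threads = [threading.Thread(target=thread_main, args=(i,)) for i in range(n_threads)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

    if __name__ == "__main__":
        procs = [multiprocessing.Process(target=process_main, args=(2,)) for _ in range(2)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()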

And it will still be so once this comes out. You will certainly use threads more and processes less, but replacing 1,000,000 coroutines with 1,000,000 system threads is not necessarily the right strategy for your task. See nginx vs Apache.




"Viable" as in "you have no other choice sometimes". This forces you to deal with 3 libraries each with their own quirks, pitfalls and incompatibilities. Sometimes you even deal with dependencies reimplementing some parts in a 4th or 5th library to deal with shortcomings.

I really don't care that much which of them survives, I just want to rely on fewer of them.


No, it's just useful. They are techs with different trade-offs, and life is full of opportunities.


Python Zen = one obvious way to do it. Having a bunch of very different ones, each with serious disadvantages, is a bad look.


Zen of Python is an ideal, and at this point, kind of tongue-in-cheek.

This is the same language that shipped with at least three different ways to apply a function across an iterable when the Zen of Python was adopted as a PEP in 2004.
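
For instance (my own illustration, not an exhaustive list), all of these were already in the language back then:

    nums = [1, 2, 3]

    # map() from the functional tradition
    doubled_a = list(map(lambda n: n * 2, nums))

    # list comprehension
    doubled_b = [n * 2 for n in nums]

    # plain for loop
    doubled_c = []
    for n in nums:
        doubled_c.append(n * 2)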


There is at least some recognition in those cases that they introduced the new thing because they got it wrong in the old thing. That's different than saying they should co-exist on equal terms.


> That's different than saying they should co-exist on equal terms.

I'm not sure who is claiming that. Here's the OP we're replying to:

> They are techs with different trade off, and life is full of opportunities.


Yes, that says each has good and bad points and you should weigh them against each other in the context of your application, to figure out which one to use. I.e. equal terms.

Zen would be: pick one of the two approaches, keep its strengths while fixing it to get rid of its weaknesses, then declare the fixed version as the one obvious way to do it. You might still have to keep the other one around for legacy support, but that's similar to the situation with applying functions across iterables.

This is what Go did. Go has one way to do concurrency (goroutines) and they are superior to both of Python's current approaches. Erlang has of course been in the background all along, doing something similar.


It's a technical thread, not a political one. If you were so sure of your argument, you wouldn't use a throwaway.

Besides, it's a weird argument, like saying we should not have int, float and complex because there should be one way to do it.

Just because all three are numbers doesn't mean they don't each have their own specific benefits.


int, float, and complex are for different purposes. Async and threads paper over each other's weaknesses, instead of fixing those weaknesses at the start. Async itself is an antipattern (technical opinion, so there), but Python uses it because of the hazards and high costs of threads. Chuck Moore figured out 50 years ago how to keep the async stuff out of the programmer's way when he put multitasking into Polyforth, which ran on tiny machines. Python (and Node) still make the programmer deal with it.

If you look at Haskell, Erlang/Elixir, and Go, they all let you write performant sequential code by pushing the async into the runtime where the programmer doesn't have to see it. Python had an opportunity to do the same, but stayed with async and coroutines. What a pain.


Oh, you mean: why didn't Python reimplement the whole interpreter around concurrency instead of using the tools it already had to solve the problem?

Well, that question is as old as engineering itself, and it's always a matter of resources, cost, history and knowledge.


Multiple threads with one asyncio loop per thread would be absolutely pointless in Python, because of the GIL.

With that said, sure, threads and asyncio are complementary in the sense that you can run tasks on threadpool executors and treat them as if they were coroutines on an event loop. But that serves no purpose unless you're trying to do blocking IO without blocking your whole process.
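
Something like this, roughly (blocking_read is a made-up stand-in for whatever blocking call you're stuck with):

    import asyncio
    import time

    def blocking_read():
        # Stand-in for a blocking call with no async equivalent.
        time.sleep(1)
        return "data"

    async def main():
        loop = asyncio.get_running_loop()
        # Runs the blocking call on the default ThreadPoolExecutor and
        # exposes it as an awaitable, so the event loop keeps servicing
        # other coroutines in the meantime.
        result = await loop.run_in_executor(None, blocking_read)
        print(result)

    asyncio.run(main())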


It would not be pointless at all, because while one thread is stuck doing CPU work, context switching will let another one deal with IO. This can smooth out the progress of each part of your program, and can be useful for workloads where you don't want anything to block for too long.


This entire article is about removing the GIL.


I read it as each process having multiple threads and an event loop. If the threads are performing I/O or calling out to compiled code and releasing the GIL, said GIL won't block the event loop.


In Python it would be pointless, but for example it's how Seastar/ScyllaDB work: each thread is bound to a CPU on the host and has its own reactor (event loop) with coroutines on it. QEMU has a similar design.
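
The shape of that design, sketched in Python purely for illustration (as noted, it wouldn't buy you CPU parallelism under the GIL, and the affinity call is Linux-only):

    import asyncio
    import os
    import threading

    async def reactor(core_id):
        # Stand-in for the per-core reactor's coroutines (networking, storage, ...).
        await asyncio.sleep(0.1)
        print(f"reactor {core_id} ran on its own loop")

    def reactor_thread(core_id):
        # Pin the calling thread to one core (on Linux, pid 0 means "this thread"),
        # then give it a private event loop, Seastar-style.
        try:
            os.sched_setaffinity(0, {core_id})
        except (AttributeError, OSError):
            pass  # affinity not available on this platform
        asyncio.run(reactor(core_id))

    threads = [threading.Thread(target=reactor_thread, args=(i,))
               for i in range(os.cpu_count() or 1)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()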


It's also (to my knowledge) how Erlang's VMs (e.g. BEAM) work: one thread per CPU core, and a VM on each thread preemptively switching between processes.



