
> When you're dealing with external REST APIs that take multiple seconds to respond, then the async version is substantially "faster" because your process can get some other useful work done while it's waiting. Obviously the async framework introduces some overhead, but that bit of overhead is probably a lot less than the 3 billion cpu cycles you'll waste waiting 1000ms for an external service.

But threads get you the same thing with much less overhead. That's what benchmarks like this one, and my own, keep confirming.

People are often afraid of threads in Python because of "the GIL!", but the GIL is released during blocking I/O. I think programmers who reflexively reach for Tornado or whatever don't really understand the details of how this works.
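
Something like this thread-pool sketch is all it takes with just the standard library (the URLs and the worker count here are made up):

    # Thread-pool version: the GIL is released while each worker blocks on the
    # socket, so the slow external calls overlap. URLs are hypothetical.
    from concurrent.futures import ThreadPoolExecutor
    import urllib.request

    URLS = ["https://api.example.com/item/%d" % i for i in range(50)]

    def fetch(url):
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.read()

    with ThreadPoolExecutor(max_workers=10) as pool:
        bodies = list(pool.map(fetch, URLS))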



> But threads get you the same thing with much less overhead.

That is not true, at least not in general. The whole point of using continuations for async I/O is to avoid the overhead of threads: the scheduler overhead, the cost of saving and restoring processor state when switching tasks, the per-thread stack space, and so on.
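
A rough sketch of what the continuation style buys you (asyncio.sleep stands in for the slow external call, and the task count is arbitrary):

    # Each "request" is a coroutine suspended at an await point, not a kernel
    # thread, so 10,000 of them need no per-task OS stack or context switch.
    import asyncio

    async def fake_request(i):
        await asyncio.sleep(1)   # stands in for a ~1000ms external REST call
        return i

    async def main():
        return await asyncio.gather(*(fake_request(i) for i in range(10_000)))

    results = asyncio.run(main())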


The scheduler overhead and the cost of context switches are vastly overstated compared to the alternatives. The per-thread stack space has virtually no run-time cost in practice, and since a stack starts out at a single 4 KiB page, even thousands of threads waste only a minuscule amount of memory.
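
This is easy enough to check yourself: CPython lets you request a smaller stack before spawning threads (the standard library enforces a 32 KiB minimum, and on typical Linux setups untouched stack pages aren't committed anyway). A sketch:

    # Spawn a couple thousand mostly-idle threads with a small requested stack.
    # Resident memory stays low because untouched stack pages aren't committed.
    import threading
    import time

    threading.stack_size(64 * 1024)   # 64 KiB requested per thread

    def idle():
        time.sleep(5)

    threads = [threading.Thread(target=idle) for _ in range(2_000)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()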


Async implementations build a scheduler into the runtime, and that's generally slower than the OS scheduler, 10-100x slower if it isn't written in C (or the like).
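
If you want a number instead of a guess for your own setup, dispatching a pile of no-op tasks through both isolates the dispatch cost from the I/O. A rough sketch (the counts are arbitrary):

    # Rough comparison of dispatch overhead alone: N trivial tasks through a
    # thread pool (OS threads) vs. through asyncio's userspace event loop.
    import asyncio
    import time
    from concurrent.futures import ThreadPoolExecutor

    N = 50_000

    async def anoop():
        return None

    async def run_all():
        await asyncio.gather(*(anoop() for _ in range(N)))

    t0 = time.perf_counter()
    with ThreadPoolExecutor(max_workers=8) as pool:
        list(pool.map(lambda _: None, range(N)))
    threads_s = time.perf_counter() - t0

    t0 = time.perf_counter()
    asyncio.run(run_all())
    async_s = time.perf_counter() - t0

    print(f"thread pool: {threads_s:.3f}s  asyncio: {async_s:.3f}s")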


The GIL might not block on I/O, but any implementation that touches PyObjects still needs to hold the GIL, no?
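
As I understand it, pure-Python bytecode has to hold the GIL while it manipulates PyObjects; it's only the blocking C calls (sockets, time.sleep, file reads) that release it around the actual wait. A quick way to see both behaviours:

    # Two threads blocking in time.sleep overlap (the C call releases the GIL
    # while waiting); two threads running pure-Python bytecode serialize on it.
    import threading
    import time

    def io_like():
        time.sleep(1)                 # releases the GIL for the duration

    def cpu_bound():
        total = 0
        for i in range(5_000_000):    # bytecode loop holds the GIL
            total += i

    for target in (io_like, cpu_bound):
        t0 = time.perf_counter()
        workers = [threading.Thread(target=target) for _ in range(2)]
        for w in workers:
            w.start()
        for w in workers:
            w.join()
        print(target.__name__, round(time.perf_counter() - t0, 2), "seconds")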



