> but that bit of overhead is probably a lot less than the 3 billion cpu cycles you'll waste waiting 1000ms for an external service.
You are not waiting for that 1000ms, and you haven't been for 35 years, since the first OSes started featuring preemptive multitasking.
When you wait on a socket, the OS removes you from the CPU and schedules something that isn't waiting. When data is ready, you are placed back. You aren't wasting CPU cycles while waiting, only the ones the OS needs to save your state.
Actually standing there spinning on the socket is not something anyone has done in a long time.
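This is easy to verify: a thread blocked on a socket read consumes essentially no CPU, because the scheduler parks it until data arrives. A minimal sketch (the 0.5s delay is just an arbitrary stand-in for a slow peer):

```python
import socket
import threading
import time

# One end of a socketpair blocks on recv(); the other end stays silent
# for half a second. Wall-clock time passes, but almost no CPU time does.
a, b = socket.socketpair()

def blocked_reader():
    b.recv(1)  # blocks; the OS deschedules this thread until data arrives

t = threading.Thread(target=blocked_reader)
t.start()

cpu_before = time.process_time()
wall_before = time.monotonic()
time.sleep(0.5)   # the reader thread is "waiting" this entire time
a.send(b"x")      # wake it up
t.join()
cpu_used = time.process_time() - cpu_before
wall_used = time.monotonic() - wall_before

# wall_used is ~0.5s, but cpu_used is a tiny fraction of it: the blocked
# thread burned no cycles while parked by the scheduler.
print(f"wall: {wall_used:.2f}s, cpu: {cpu_used:.3f}s")
```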
> You are not waiting for that 1000ms, and you haven't been for 35 years, since the first OSes started featuring preemptive multitasking.
The point is that async IO allows your own process/thread to make progress while waiting for IO. Preemptive multitasking just assigns the CPU to something else while you wait, which is good for the box as a whole, but not necessarily productive for that one process (unless it is multithreaded).
Sync I/O does let your process (though not that thread) do something else. In other languages, async I/O is faster because it avoids context switches and amortizes kernel crossings; apparently that is not the case in practice for Python.
This doesn't surprise me at all, as I've had to deal with async Python in production, and it was a performance and reliability nightmare compared to the async Java and C++ it interacted with.