It seems like the OP is mostly talking about 'blocking' sockets. Such sockets return when they're ready or there's an error, so send returns once it has passed its data off to the network buffer (or, if the buffer is full, it waits until it can pass off SOME data). That might sound excellent, but send may not send all of the bytes you pass to it. So if you want to send out all of a given buffer with blocking sockets, you really need to write a loop that implements a send_all: keep a count of the bytes sent so far and quit on error.
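The loop looks something like this (a minimal sketch; `send_all` here is a hypothetical helper, and note that Python's built-in `socket.sendall` already does essentially this for you):

```python
import socket

def send_all(sock: socket.socket, data: bytes) -> None:
    """Keep calling send() until every byte has been handed to the kernel."""
    total_sent = 0
    while total_sent < len(data):
        sent = sock.send(data[total_sent:])  # may accept only part of the buffer
        if sent == 0:
            raise ConnectionError("socket connection broken")
        total_sent += sent
```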
Blocking sockets are kind of shitty, not gonna lie. The counterpart to send is recv. Say you send an HTTP request to a web server and you want to get a response. With a blocking socket it's quite possible that your application will simply wait forever. I'm pretty sure the default timeout for blocking sockets is None, so it just waits for success or failure. A shitty web server can make your entire client hang.
So how to solve this?
Well, you might try setting a timeout for blocking operations, but this can also screw you: any thread that calls the blocking operation is still going to hang for up to that long. Maybe that's fine for you -- if you design your program to be multi-threaded and hand sockets off to threads that can afford to wait like that -- and that is one such solution.
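To make the timeout behavior concrete, here's a toy Python demonstration (a socketpair whose peer never sends anything stands in for an unresponsive server):

```python
import socket

# The peer end (b) never sends, so recv() on a blocks until the timeout fires.
a, b = socket.socketpair()
a.settimeout(0.1)  # the default is None: block forever
timed_out = False
try:
    a.recv(1024)  # nothing will ever arrive on this end
except socket.timeout:
    timed_out = True
a.close()
b.close()
```

The thread running this still sat blocked for the full 0.1 seconds before the exception fired.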
Another solution -- and this is the one the OP uses -- is to use the 'select' call to check whether a socket operation would block. It works for both read and write readiness. But wait a minute: now you've got to implement some kind of performant loop that periodically checks your sockets. That may sound simple, but it's actually been the subject of whole research projects to build the most performant loops possible. Now we're really talking about event loops and how to build them.
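Here's a single iteration of what such a loop does, sketched in Python (a socketpair stands in for a real connection): ask the OS which sockets are ready before touching them, so recv() can't block.

```python
import select
import socket

a, b = socket.socketpair()
b.sendall(b"ping")

# Wait up to 1 s; select() returns the ready subsets of each list
# (read-ready, write-ready, error).
readable, writable, errored = select.select([a], [a], [], 1.0)
data = a.recv(1024) if a in readable else b""
a.close()
b.close()
```

A real event loop wraps this check in a `while` loop and dispatches each ready socket to whatever code is waiting on it.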
So how to solve this... today... for real-world uses?
Most people are just going to want to use asynchronous I/O. If you've never worked with async code before: it's a way of doing event-based programming where a function's execution can be suspended when an event isn't ready yet, allowing other functions to run. Note that this is a way to do 'concurrency' -- switching between multiple tasks. A good async library may or may not also be 'parallel' -- able to execute functions simultaneously (like on multiple cores).
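A toy asyncio demonstration of that suspension (here `asyncio.sleep` stands in for waiting on network I/O): while one task is suspended, the other gets to run, all on one thread.

```python
import asyncio

async def wait_for_event(name: str, delay: float, log: list) -> None:
    await asyncio.sleep(delay)  # suspends this coroutine; others run meanwhile
    log.append(name)

async def main() -> list:
    log: list = []
    # Both tasks run concurrently: total time is ~0.2 s, not 0.3 s,
    # because the waits overlap.
    await asyncio.gather(
        wait_for_event("slow", 0.2, log),
        wait_for_event("fast", 0.1, log),
    )
    return log

order = asyncio.run(main())
```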
If we go back to the idea of a loop calling 'select' on our socket descriptors: that is really a poor person's async event loop. It can easily be implemented in a single thread on a single core. But again -- for modern applications -- you're going to want to stay away from the raw socket functions and go for async I/O instead.
One last caveat to mention:
Network code needs to be FAST. Not all software we write needs to run as fast as possible -- that's just a fact, and indeed many warn against 'premature optimization.' But I'd say that advice doesn't transfer well to network code. It's simply not acceptable to write sloppy algorithms that add unnecessary milliseconds (or, in some domains, even nanoseconds) to packet delivery time if you can avoid it. That can add up, cost a lot of money, and make certain applications impossible.
The thing is, though: profiling async code can be hard, and profiling network code is even harder. A network is unreliable, and when measuring run-time you only care about how the code performs when it's successful. So you're going to want tools that let you throw away erroneous results and measure how long coroutines actually run for.
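A rough sketch of that idea (`timed` is a hypothetical helper, and it measures wall-clock time of a successful await; `asyncio.sleep` stands in for a network operation, and a real version would catch real network errors):

```python
import asyncio
import time

async def timed(coro):
    """Await a coroutine, timing it; discard the measurement on failure."""
    start = time.perf_counter()
    try:
        result = await coro
    except OSError:
        return None, None  # throw away erroneous results
    return result, time.perf_counter() - start

async def main():
    # asyncio.sleep(0.05) stands in for a successful network operation.
    return await timed(asyncio.sleep(0.05, result="ok"))

result, elapsed = asyncio.run(main())
```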
Async network code may use non-blocking sockets, select, and poll under the hood, but good async libraries are designed to be as efficient as possible. So if you have access to one, it's probably what you want to use!