I'm still reading through the article, but I have to say that I'm pleasantly surprised by QUIC and HTTP/3. I first learned socket programming around the fall of 1998 (give or take a year) in order to write a game networking layer:
Here are just a few of the immediately obvious flaws I found:
* The UDP checksum is only 16 bits, when it should have been 32 bits or of arbitrary length
* The headers are far too large: the UDP header itself is 8 bytes, and with the IPv4 header that's 28 bytes per packet, when only about 12 bytes are needed to represent source IP, source port, destination IP, and destination port
* TCP is a separate protocol from UDP, when it should have been a layer over it (this was probably done in the name of efficiency, before computers were fast enough to compress packet headers)
* Secure protocols like TLS and SSL need several round trips of handshaking before any data flows, when they should have started sending encrypted data immediately while working out the keys
* Nagle's algorithm imposed rather arbitrary delays (WAN has different load balancing requirements than LAN)
* NAT has numerous flaws and optional implementation requirements so some routers don't even handle it properly (and Microsoft's UPnP is an incomplete technique for NAT-busting because it can't handle nested networks, Apple's Bonjour has similar problems, making this an open problem)
* TCP is connection-oriented, so your stream drops when you do something as simple as changing networks (Wi-Fi broke a lot of things by the early 2000s)
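For concreteness on the first bullet: the 16-bit checksum UDP uses is the standard Internet ones'-complement sum (RFC 768, with the computation described in RFC 1071). A minimal sketch in Python (not from the original comment; real implementations also cover the pseudo-header and the all-zeros-transmitted-as-0xFFFF rule):

```python
def internet_checksum(data: bytes) -> int:
    """16-bit ones'-complement checksum used by UDP/TCP/IP (RFC 1071)."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length input with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]     # big-endian 16-bit words
        total = (total & 0xFFFF) + (total >> 16)  # fold any carry back in
    return ~total & 0xFFFF  # ones'-complement of the folded sum
```

A receiver recomputes the sum over the data plus the transmitted checksum; a result of zero means the packet verifies. At 16 bits, distinct corruptions collide with probability about 1 in 65536, which is the weakness the bullet points at.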
There's probably more I'm forgetting. But I want to stress that these were immediately obvious for me, even then. What I really needed was something like:
* State transfer (TCP would probably have been more useful as a message-oriented stream, an issue UNIX sockets share; such a stream could be used, for example, to implement software transactional memory, or STM)
* One-shot delivery (UDP is a stand-in for this; I can't remember the name of it, but basically unreliable packets carry a wrapping sequence number so newer packets flush older packets from the queue, so that latency-sensitive things like shooting in games can be implemented)
* Token address (the peers should have their own UUID or similar that remains "connected" even after network changes)
* Separately-negotiated encryption (we should be able to skip the negotiation part on any stream if we already have the keys)
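The "one-shot delivery" idea above (wrapping sequence numbers so newer packets supersede older ones) can be sketched with serial-number comparison in the style of RFC 1982. The names here are illustrative, not from any real protocol:

```python
SEQ_BITS = 16                  # width of the wrapping sequence number
MASK = (1 << SEQ_BITS) - 1     # 0xFFFF
HALF = 1 << (SEQ_BITS - 1)     # midpoint used to decide "newer"

def seq_newer(a: int, b: int) -> bool:
    """True if sequence number a is newer than b, tolerating wraparound
    (serial-number arithmetic in the style of RFC 1982)."""
    diff = (a - b) & MASK
    return 0 < diff < HALF

class LatestOnlyChannel:
    """Illustrative receiver: keeps only the newest packet, dropping
    stale or duplicate arrivals (e.g. player-position updates)."""
    def __init__(self):
        self.last_seq = None

    def accept(self, seq: int) -> bool:
        if self.last_seq is None or seq_newer(seq, self.last_seq):
            self.last_seq = seq
            return True
        return False  # older than what we already have: discard
```

Because the comparison is modular, sequence 0 correctly counts as newer than 0xFFFF after wraparound, so a long-running game session never needs the numbers to stop.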
Right now the only protocol I'm aware of that comes close to fixing even a handful of these is WebRTC. I find it really sad that more of an effort wasn't made in the beginning to do the above bullet points properly. But in fairness, TCP/IP was mostly used for business, which had different requirements like firewalls. I also find it sad that insecurities in Microsoft's (and early Linux) network stacks led to "deny all by default" firewalling, which led to NAT, relegating all of us to second-class netizens. So I applaud Google's (and others') efforts here, but it demonstrates how deeply rooted some of these flaws were that only billion-dollar corporations have the R&D budgets to repair such damage.
Yeah, it really sucks that the developers of TCP didn't foresee these issues in 1981 when they first designed it. I can't believe they were so short-sighted.
Okay, enough with the sarcasm. Is it too much to ask for historical perspective in protocol design?
Agreed, as well as the "Oh, why didn't the Berkeley people implement the OSI 7-layer model, then TCP would have been layered over UDP".
The reason that TCP beat out all the other protocols is because it didn't "layer" everything. OSI was beautiful in the abstract, but a complete cluster-fuck in the implementation.
Now we have enough processing power that the abstract layering makes more sense. But cross-layer requirements like security, where the layers have to interact, were never actually dealt with in the OSI days.
https://beej.us/guide/bgnet/