Personally I am _very_ excited by HTTP/3 (and QUIC). It feels like the building block for Internet 2.0: connection migration across different IPs, mandatory encryption, bidirectional streams, and the fact that it's a user-space library. Sure, more bloat, but from now on we won't have to wait for your kernel to support feature X, or even worse, your ISP-provided router or decade old middleware router on the Internet.
I haven't had the chance to read the actual spec yet, but it's obvious that while the current tech (HTTP/2) is an improvement over what we had before, HTTP/3 is a good base to make the web even faster and more secure.
HTTP/3 won't be IPv6: it only requires support from the two parties that benefit from it the most: browser vendors and web server vendors. We won't have to wait on the whole internet to upgrade their hardware.
I'm worried, not because of the standard itself, which seems well thought out, even if rushed.
I'm worried because you have a protocol implemented in userland for a few mainstream languages. It seems everyone now has to pay the price of a protocol implementation on top of a protocol implementation on top of a protocol implementation. Big players, either because they have thousands of open source developers or are backed by a corporation, have it easy. Smaller players? Not so much.
Also, note that the exact problem that HTTP/3 tries to solve was known in the design process of HTTP/2 and some people even noted having multiple flow control schemes at multiple layers would become a problem. We are letting the same people design the next layer, and probably too fast in the name of time to market.
This should definitely live in a way people can make use of it easily, with an API highly amenable to binding. If it gains traction, we need a new UDP interface to the kernel as well, for batching packets back and forth. This kills operating system diversity as well, or runs the risk of doing so.
OTOH, I see the lure: SCTP never caught on for a reason, and much of this is the opposite of my above worries.
> some people even noted having multiple flow control schemes at multiple layers would become a problem
It could, but it didn't in reality. HTTP/2 has two levels of flow control, stream-level and connection-level. You use one connection per site and as many streams as you want multiplexed inside that connection, so stream-level flow control is necessary to avoid stream head-of-line blocking.
The actual layering violation is connection-level flow control, which seems to duplicate TCP flow control, but it's not mandatory; as you can see, most if not all open source implementations simply set a very large connection-level window size to hand flow control at this level off to TCP.
There is a good reason for this to exist, which is to compete for bandwidth with the HTTP/1.1 domain sharding technique, which uses N connections per "site", effectively getting N times the Initial Congestion Window (IW) that HTTP/2 can have in one connection. IW was a huge issue in improving connection startup latency, and after managing to convince Linux netdev to raise it to 10, Google couldn't get them to allow applications to customize its value any further. The only solution for Google was to add some flow control information in HTTP/2 and couple it to TCP flow control to improve IW. So in reality only one flow control scheme is working at any time, instead of the common perception of a TCP-over-TCP meltdown. Anyone else can simply not do connection-level flow control in HTTP/2 and nothing of value is lost.
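To make the "very large window" trick concrete, this is roughly the single WINDOW_UPDATE frame a peer can emit on stream 0 right after the connection preface, raising the connection-level window to its maximum and effectively opting out of this layer of flow control. It's a sketch of the RFC 7540 wire format, not taken from any particular implementation:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	const (
		frameTypeWindowUpdate = 0x8       // RFC 7540 frame type
		defaultWindow         = 65_535    // initial connection-level window
		maxWindow             = 1<<31 - 1 // largest legal window size
	)

	// 9-byte frame header followed by a 4-byte payload.
	var frame [13]byte
	frame[2] = 4                              // 24-bit payload length = 4
	frame[3] = frameTypeWindowUpdate          // frame type
	frame[4] = 0                              // no flags
	binary.BigEndian.PutUint32(frame[5:9], 0) // stream 0 = the whole connection
	// Increment that lifts the default 64 KiB window to the 2 GiB maximum.
	binary.BigEndian.PutUint32(frame[9:13], maxWindow-defaultWindow)

	fmt.Printf("% x\n", frame[:])
}
```

The point is only that one 13-byte frame at connection start is enough to hand connection-level flow control back to TCP.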
The TCP state machine sucks and all of its timing parameters are outdated and unsuitable for modern networks. QUIC frees us from the tyranny of the kernel. Being in userspace is a feature.
So what if we use our experience and in-depth knowledge of a past protocol, take into account the flaws, and build something better? You say "re-implemented TCP" as if TCP were the only possible way to build a reliable packet protocol, as if it had no flaws and we couldn't make any improvements to it.
TCP isn't alien technology we don't understand. We do understand it, and its limits, and its constraints, and that means we can build a better one next time.
The problem with coming “to the realization that they have re-implemented TCP” is that it was ad-hoc. In this situation, the re-implementation was done by people very familiar with TCP, its strengths, weaknesses, and assumptions, who very deliberately set out to “re-implement” TCP to work better with how our networks are actually configured.
How does it work with a debugger? With TCP the connection didn't die just because you paused the program. But when everything is in userspace, that can no longer be true, can it?
This is a core feature in TCP/IP. Only the endpoints actually involved in the connection care about what a "connection" really is. If they share a connection, it should be nobody else's business that they do.
This is definitely not true in this world which is filled with NATs everywhere. The intermediate routers very much care and must care about what connections exist.
Not really. Or only up to a point. Then it will drop them into the bit bucket without telling either the sender or receiver. With TCP the sender will eventually "find out" that the receiver isn't getting the data.
The point is that with streams on top of UDP all that has to happen in the application layer.
Would we be better served by Google reimplementing TCP on top of UDP, or by fixing TCP in Android, and on their servers, and telling us how they did it?
If it's better for TCP to be handled in userspace, fine -- they should build the APIs for that on the OSes they control; and agitate for it in the OSes they don't.
And, maybe, just maybe, they could turn on path MTU blackhole detection, please please please please please; it's only been in the Linux kernel for all versions of Android, but not turned on.
> I'm worried because you have a protocol implemented in userland for a few mainstream languages. It seems everyone now has to pay the price of a protocol implementation on top of a protocol implementation on top of a protocol implementation. Big players, either because they have thousands of open source developers or are backed by a corporation, have it easy. Smaller players? Not so much.
This can be partially mitigated in the same way it has been worked around before: through proxies. The fact that HTTP/3 is still only HTTP makes it even easier.
E.g. on the server side it might be good enough to have an API gateway, load balancer or CDN which understands HTTP/3 and forwards things in boring HTTP/1.1 to internal services. That's not very different from terminating TLS somewhere before the actual service implementation. Actually, service implementations don't even have to speak HTTP: they can also talk via stdin/out to an HTTP/3 server in another language, which means back to CGI.
On the client side, we could deploy a client-side proxy server which translates localhost HTTP/1.1 requests into remote HTTP/3 requests. If that thing is part of the OS distribution, it's actually not that much different from a TCP/IP stack which is delivered as part of the kernel. However if it's not part of the OS it might cause some trust issues. And apart from that it might be a bit inconvenient for users, since now applications need to be changed to make use of the proxy.
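As a sketch of that client-side translator idea, here is roughly what it could look like in Go, assuming an HTTP/3 client round-tripper along the lines of quic-go's http3 package (the origin URL is a placeholder, and Host-header rewriting and error handling are omitted):

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"

	"github.com/quic-go/quic-go/http3" // assumed third-party HTTP/3 client; API may differ
)

func main() {
	// Hypothetical HTTP/3 origin the local proxy forwards to.
	origin, err := url.Parse("https://example.com")
	if err != nil {
		log.Fatal(err)
	}

	proxy := httputil.NewSingleHostReverseProxy(origin)
	// Outgoing requests leave over QUIC/HTTP3 instead of TCP.
	proxy.Transport = &http3.RoundTripper{}

	// Local applications keep speaking plain HTTP/1.1 to 127.0.0.1:8080.
	log.Fatal(http.ListenAndServe("127.0.0.1:8080", proxy))
}
```

The trust and deployment concerns above still apply; this only shows that the translation itself is cheap to build.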
> This can be partially mitigated in the same way it has been worked around before: through proxies. The fact that HTTP/3 is still only HTTP makes it even easier.
But if we’re doing that we get none of the so called benefits Google-HTTP 2.0 and Google-HTTP 3.0 bring, so what’s the point of using them in the first place?
That’s completely ignoring Google-HTTP 4.0, 5.0 and 6.0 probably coming next year, and the issue of when Google thinks it is “reasonable” to break compatibility with the real HTTP, ie HTTP 1.1.
You still get some of the benefits for the connection to the client, assuming your use case fits. Many typical small setups serve static resources through the "proxy" (i.e. nginx for static assets and distributing requests to backends), benefiting there almost automatically. Similarly CDNs, which nowadays are used even by tiny projects.
(also, if you want your concerns to be taken seriously, I'd tone it down a bit. "so called benefits", "Google HTTP", and "probably coming next year" when QUIC has been in development and testing for over 5 years all don't really give the impression you actually care about the details)
> Big players, either because they have thousands of open source developers or are backed by a corporation, have it easy. Smaller players? Not so much.
If you think that's bad, try building a browser from scratch these days!
Then, make it adhere 100% to the HTML5 and CSS3 specs! (W3C versions; I know WHATWG uses living docs.)
SCTP is a reliable transport protocol with streams, and for WebRTC there are even existing implementations using it over UDP.
This was not deemed good enough as a QUIC alternative due to several reasons, including:
- SCTP does not fix the head-of-line-blocking problem for streams
- SCTP requires the number of streams to be decided at connection setup
- SCTP does not have a solid TLS/security story
- SCTP has a 4-way handshake, QUIC offers 0-RTT
- QUIC is a bytestream like TCP, SCTP is message-based
- QUIC connections can migrate between IP addresses but SCTP cannot
I'm assuming the parent meant that NAT, as implemented in SOHO routers, does not support SCTP. They could implement it, but don't, and thus, NAT breaks it.
> But because of the second point, why should someone implement it?
I'm reading this as "SCTP has ports, why should someone implement it?" There is way more to SCTP than ports. For example, SCTP can deliver data on multiple independent streams, something HTTP/2 in many ways reinvents.
Irrespective of the protocol, my optimism for the future of the web has been curtailed by developments like extensions having less and less power over time (recent example is Google's plans to intentionally cripple ad blockers), plugins going away, hobbyist websites becoming more burdensome to set up and maintain if insecure http is deprecated, browsers planning to disable autoplay, etc. It feels like the golden age of the creative and vibrant web peaked during the brief window where all the new HTML5 stuff was around, Firefox used the old extension system, and Flash and Java applets were still common.
After that point it's been becoming more and more sterilized. My web apps that automatically played some sound aren't going to work anymore without some obnoxious "click here to begin" screen that doesn't fit in with the content. No more plugins letting us extend our browsers in new ways (what a convenient "coincidence" for Google that this gives them more control over what the user gets to do and makes tracking what goes on easier). I have to give Reddit Enhancement Suite permission every single time it tries to show a preview from a domain it hasn't previously shown one from. It's all suffocating. HTML5 makes up for some of the lost capability but it's not enough, and what parts of HTML5 are actually going to work is basically at the whim of Google now.
But at least HTTP/3 will let us load buzzfeed listicles a few milliseconds faster, so there's that.
This is a special case of production values going up, as also seen in movies, video games, and many other products. User expectations gradually rise until only large organizations of professionals can meet them.
On the other hand, we already live in this world. When was the last time you used a homemade CPU or graphics chip?
It's still possible for an indie scene to arise that values hand-crafted stuff, possibly at a different layer.
Yep. Sadly, the Web is a victim of its own success. There aren't just a few bad actors anymore. There are legions more than willing to write an endless number of malware extensions with randomly permuted uuids. There's an ad industry that long ago went off the deep end and are hoping people don't notice just a little longer. The price list for exploits is well known and buyers are easy to find.
Then again, it is still a massive cross platform content publishing and distribution system that works, despite the hostile ecosystem it inhabits. And it even includes the first truly successful cross platform programming environment.
At least Let's Encrypt has made certificates easier than ever to add and update. Installation of a self-updating certificate takes less than 10 minutes on many server setups.
The sheer fact that you need to involve a third party for encryption shows that the web is fundamentally, conceptually broken and no longer lives up to its original design goals.
1. You don't need Web PKI certificates for encryption. Indeed in TLS 1.3 this is very obvious because the encryption switches on before any certificates are even involved. You need certificates to... certify identity. And this isn't some oddity of "the web" which might show it's "broken" but simply a mathematical fact about what identity is. If you don't want certificates, you have to just magically know every identity somehow. Works for ten PCs in your office, doesn't scale for tens of millions of web sites.
2. Tim's "Original design goals" are for a system that runs at CERN in Switzerland and is modelled on an earlier system he'd worked with in the 1980s. Tim's system has no encryption, nor does it have most other features you'd expect.
You don't need a third party. You can `openssl req` a self-signed certificate, and as long as whatever device you want to talk to accepts it, you get secure communication.
The other comment sums it up: a third party is a good line between convenience and security.
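For what it's worth, you don't even need the openssl binary; here's a rough sketch of minting the same kind of self-signed certificate with Go's standard library (the hostname is a placeholder):

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		log.Fatal(err)
	}

	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "my-device.local"}, // placeholder name
		DNSNames:     []string{"my-device.local"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
	}

	// Self-signed: the template acts as both subject and issuer.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```

As with `openssl req`, the hard part isn't generating the certificate, it's getting the other side to trust it.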
>You can `openssl req` a self-signed certificate, and as long as whatever device you want to talk to accepts it
Device? We're talking about browsers. Browsers are getting increasingly hostile towards self-signed certs. Ironically, Google doesn't trust third-party root CAs, so they became one themselves. It's good to be the exception to the rules you push on others.
The public internet is not a sandbox for hobbyists any more, like it was in 1993. Now there are incentives to crack you, impersonate you, tamper with the information you're serving. The web had to adapt or perish.
I agree. It may only theoretically be a problem that a set of trusted CAs can dictate who can communicate with each other, but theoretical problems have nasty ways of eventually becoming concrete.
It's definitely worth having the encryption that prevents a lot of problems today, but I'm worried that QUIC has no unencrypted variant at all. That's almost certainly safer for the user, but it means that if a government blacklists you so you can't get a certificate, you're screwed.
I'm trying to interpret your stance in the most favorable possible manner, but... dude. If you think hobbyist websites are increasingly burdensome to set up, you haven't been paying any attention at all.
The environment became more restrictive with the loss of Flash/Java and now with things like breaking autoplay, and more burdensome in some ways, like the https issue, even if it's faster to spin up a cloud instance and JS libraries are more streamlined now.
HTTPS (cert creation and auto-renewal) is trivial thanks to LetsEncrypt.
Flash/Java (applets, presumably) were never easier to deploy than HTML...
and deploying static sites continues to get easier and easier. See eg Netlify or Zeit/Now.
Autoplay is abused by advertisers and is a terrible UX. I get that you have a particular, outdated workflow and you'd prefer that nothing change, but really that ship sailed a long time ago.
It's all well and good to opine for the days of old, but when you consider the real-world implications that led to the removal of Flash/Java from the ecosystem, I'd gladly give up the opportunity to experience your art installation without a clickthrough to keep our systems secure.
This off-topic post is akin to "we have homeless people, so no resources should be allocated to space flight".
The energy invested in developing HTTP successor protocols is not being taken away from efforts to stop Google from ruining the concept of the web browser as a _user_ agent.
It's a misconception that you have to wait with IPv6.
If you're a large organisation you can move to IPv6 "today". What you do is, internally you cease buying IPv4-only gear and using IPv4 addressing etcetera. Everything inside is purely IPv6. A lot of your networking gets simpler when you do this, and debugging is a LOT smoother because there's no more "Huh, 10.0.0.1, could be _anything_": everything has globally unique addresses because it's not crammed into this tiny 32-bit space.
At the edge, you have protocol translators to get from IPv6 (which all your internal stuff uses) to IPv4 (which some things on the Internet use), but you probably already had a bunch of gear at the edge anyway, to implement corporate policies like "No surfing for porn at work" and "Nobody from outside should be connecting to port 22 on our machines!".
This isn't really practical for "One man and an AWS account" type businesses where your "Internet access" is a Comcast account and an iPhone, but if you're big enough to actually have an IT department, suggest they look into it. It may be cheaper and simpler than they'd realised.
> What you do is, internally you cease buying IPv4-only gear and using IPv4 addressing etcetera. Everything inside is purely IPv6.
"Throw everything away and start from scratch." uh yeah, that's totally gonna work for a large organization. They'll be done in an afternoon! That includes rewriting all your legacy apps that only support ipv4, including the ones you bought from 3rd parties where you don't even have the source code.
> It's a misconception that you have to wait with IPv6.
Yes and no. As I stated at the end of my comment, the problem w/ IPv6 is that who's benefitting the most isn't clear: I am interested in it, as a power user. Average Joe doesn't care. App developer doesn't care (no killer IPv6 apps yet). Large ISPs with extensive CG-NAT deployments don't care (not worth the money, yet, see IPv6 adoption in the UK).
Who cares about HTTP/3? Average Joe — Not really. Mozilla/Google — Hell yeah they do. It'll be in Chrome before anyone else (if it isn't already). Same with nginx/Apache/any other webserver, Joe Blog with his own VPS will want to enable it. And that's all you need.
It may be my embedded developer bias but I don't actually consider moving things outside of the kernel to be necessarily a good thing. Standard kernel interfaces are (usually) a guarantee of stability and good isolation, are generally easier to probe using standard tools, easier to accelerate in hardware, etc. Not everything should be in the kernel of course, but low-level network protocols should be IMO because they're good targets for potential hardware acceleration (I'm convinced that it would make sense to handle SSL in-kernel for instance, with a userland daemon handling certificate validation, but that's a story for another day).
I mean, if you can easily update whatever userland library you're using, why can't you upgrade your OS? If the library is easy to upgrade it means that it uses a well defined and backward-compatible interface. What do you get by shifting everything one layer up? In the end it's just software, there's not really any reason why upgrading a kernel driver should be any harder than upgrading a .so/.dll.
So the logic is "kernels are too slow to update and integrate the last new standards, so let's just move everything one step up because browsers auto-update"? Except that there's no technical reason for that, on my Linux box my browser and my kernel are updated at the same time when I run "apt-get upgrade" or "pacman -Syu" or whatever applies. The kernel I'm using at the moment has been built less than a week ago.
So if the problem is that Windows sucks balls and as a result people end up effectively re-creating an operating system on top of it to work around that, then yeah, from a practical standpoint I get it but I'm definitely not "_very_ excited" about it. It's a rather ugly hack.
If, in general, the question is "who do you trust more to select and implement new internet standards, kernel developers or web developers?" then I take a side-glance at the few GBs used by my web browser to display a handful of static pages at the moment and I know the answer as far as I'm concerned...
So yeah, it might make sense, but I still think it just goes to show what a shitshow modern software development has become. Instead of fixing things we just add a new layer on top and we rationalize that it's better that way.
The problem isn't your kernel that needs updating. That's the least of your problems.
The problem is the network appliances that are sitting between you and the server, i.e. the whole internet. To support feature X, everything between you and the server will need to support it (unless it's backwards compatible, but that's not always the case, as described in the article).
Decade-long adoption will solve this problem, until one day your packet gets routed through some router running Linux v2.5 and your connection silently fails.
This isn't good enough to build a faster (and more reliable) internet on, whereas UDP is a 40-year old standard, and we can assume everybody supports it, even Linux v2.5
I agree about the use of UDP to be able to reuse network gear, my comment was prompted by this part of the parent's comment:
>from now on we won't have to wait for your kernel to support feature X
This is orthogonal to the issue you're discussing (for instance as a thought experiment you could design a new protocol on top of ethernet in userland using raw sockets and it won't be supported by anybody, or you could implement something on top of TCP in the kernel and it'll work everywhere).
I just wanted to point out that outdated kernels aren't inevitable; they're a consequence of bad industry practices (in particular, although not uniquely, by Microsoft with its Windows OS). On Linux everything is updated together and the kernel is mostly just another package, so it's a non-issue. It also means that applications don't have to package a custom updater (and all related infrastructure) by themselves.
A new protocol on top of IP using raw sockets can be supported by endpoints that are running your software. Except, of course, if you go through NAT or any of these "middleware boxes" that litter the Internet.
> On Linux everything is updated together and the kernel is mostly just another package, so it's a non-issue
Except say on my linux (ubuntu), yes the kernel is patched, but the version doesn't increase very often at all sadly. Yes I decide to run the mainline kernels since I'm on a laptop and I find that beneficial, but it's not the default of most linux installations I believe.
I just now realized that the telephony mindset has won: the "dumb network" (which slung packets to an intelligent edge) that was the Internet back in 1992 is now dead and buried. All hail the "smart network!"
Indeed. But (not sure if that's what GP meant just guessing) it's sad that tons of middleboxes that peeked too much inside packets effectively broke "end-to-end" goals, so now it's impractical to deploy any new protocols (such as SCTP) alongside existing UDP and TCP. Most practical progress is made by layering more and more on top of existing protocols :-(
Actually I'm kinda relieved QUIC succeeded at all with much less layering on top of existing stuff than usual. (Compared to, say, Websockets-over-HTTPS-over-TLS-over-TCP-over-someIPv6-over-IPv4-tunnel...). If it's feasible to deploy a major new protocol over just UDP, that's practically as good as directly over IP!
P.S. I think encryption is the main force that held back the (economically almost inevitable) desire of middleboxes to "add value" by manipulating inner layers.
> UDP ... we can assume everybody supports it, even Linux v2.5
If you can actually remember the days of Linux 2.5 (development branch which became 2.6) this is a hilarious analogy. I guess that's what the kids are calling ancient these days, eh? Linux 2.5, when dinosaurs roamed the earth! It even did UDP, can you believe it?!
I generally agree with that, but what about mobile? AFAIK android kernel updates are really slow to reach users; when they actually do (depending on support from vendors, which vary a lot).
Honestly, I'd feel much better if people were standardizing QUIC and we simply run HTTP/1.1 over it.
Instead we now have transport layers that are application-specific, and 3 completely different web protocols, none of them considered legacy and 2 of them complex enough that people aren't very willing to move.
That does not look like a good foundation for anything.
Part of the process of moving QUIC through the IETF was separating it from the HTTP layer, which is why HTTP/3 and QUIC are now different things. Given the history and the players involved, the HTTP use case was a priority, but other companies are looking to use it underneath other protocols. From my understanding, compared to HTTP/2, the stream concept has moved out of the HTTP spec and down into the underlying layer, where it can hopefully benefit other use cases too.
Indeed, and the encryption is split too. Overall it is starting to look nice to use, although I need to read the spec in more detail to understand a bunch of the details.
Googlenet is not internet 2.0 and barely anyone in the world beyond a couple of megacorps can benefit from HTTP/2, HTTP/3, HTTP/4, etc. It feels more like the web is dead, completely captured by megacorps.
So, presumably organisations like the IRS and Wikipedia are "barely anyone" and all of the big technology companies are "a couple of megacorps" but can you explain why you believe the _users_ don't benefit?
[About a third of all "popular" (ie top 10 million) web sites are HTTP/2 today]
Or did you just mean "I don't care about the facts, I'm angry and the world changes which I don't really understand, so I just make things up and call that truth because it's easier" ?
>[About a third of all "popular" (ie top 10 million) web sites are HTTP/2 today]
Don't forget that a huge chunk of them are hosted on megacorp cloud platforms.
Everything became so "simple" and "streamlined" that companies are forced to outsource all their hardware and platform management and then hire a small army of AWS certified devops.
> companies are forced to outsource all their hardware and platform management
Nothing is being forced. You can still set up a server in your basement, or rent/build a data center and run nginx to get all of the benefits of H2, TLS1.3, etc. You can even get "megacorp-quality platform management" with things like Outposts, GKE on-prem, Azure stack, etc.
The web is captured by megacorps but it's not captured because of HTTP/2.0. It's captured because of network effects or whatever. And you are wrong, there is a benefit from using HTTP/2.0 on any website that has more than 1 resource to download.
QUIC is developed by an IETF working group where anyone can participate, and there are definitely some productive participants who don't work for Google (or any of the other big companies).
Running nginx as reverse proxy on internal system. HTTP/2 happens automagically if a client requests it.
It definitely has an impact on our system which requires sub 50ms response times on 2000+ concurrent requests.
It's a PITA if you want to debug the streams because they're not plain text, but given that we're over TLS, that's not really possible anyway.
In testing, we use ye-olde HTTP/1.1 and no TLS, but even over HTTP/2 and TLS, the browser will still display a JSON request/response happily. Rare that we have to go lower in the stack.
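The same "automagic" applies to other stacks too; for example, a plain Go net/http server offers HTTP/2 via ALPN as soon as it serves TLS, with no extra configuration (the cert/key paths below are placeholders):

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// r.Proto is "HTTP/2.0" for clients that negotiated h2, else "HTTP/1.1".
		fmt.Fprintf(w, "served over %s\n", r.Proto)
	})
	// cert.pem / key.pem are placeholder paths; HTTP/2 is enabled automatically
	// for TLS servers, and HTTP/1.1-only clients keep working.
	log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", nil))
}
```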
About the mandatory encryption and the performance: this will prevent ISP caching of static content. That would be bad news for, say, Steam and Debian, who use HTTP (not HTTPS [0][1]) to distribute content. (They verify integrity with secure hashing, of course.) I presume they'll decline to adopt HTTP/3.
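For context, the integrity-without-confidentiality model those mirrors rely on boils down to "download over plain HTTP, verify a digest that arrived through a separately signed index". A rough sketch, with placeholder URL and digest:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Digest published out of band, e.g. via a signed package index (placeholder value).
	const expected = "0000000000000000000000000000000000000000000000000000000000000000"

	// Hypothetical mirror URL, fetched over plain, cacheable HTTP.
	resp, err := http.Get("http://mirror.example.org/pool/foo.deb")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	h := sha256.New()
	if _, err := io.Copy(h, resp.Body); err != nil {
		log.Fatal(err)
	}
	got := hex.EncodeToString(h.Sum(nil))
	fmt.Println("integrity ok:", got == expected)
}
```

It protects integrity but not confidentiality, which is exactly the trade-off that lets intermediaries cache the content.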
Bit late to the party. I misunderstood HTTP/3, but I am very excited for QUIC, and _not_ because of Google's illuminati spec but because I hope people will become interested in secure UDP.
I like datagrams so much more than an accept-listen-keepalive-blocking-foreverloop-callback-async-threaded-future hell that is TCP.
I could be wrong of course I need to read the spec too. Anything UDP makes me giddy.
> or even worse, your ISP-provided router or decade old middleware router on the Internet.
I can guarantee you that middleware will continue to exist. If they need to they'll force QUIC connections to terminate and switch to TLS 1.3. There's no way that companies will allow encrypted communications leaving their companies en-masse without being able to decrypt the content. Even more so for any totalitarian state governments that need to spy on their citizens..
> There's no way that companies will allow encrypted communications leaving their companies en-masse without being able to decrypt the content.
Then they'll install MITM certificates on the individual endpoints that they already control. The ability to intercept connections between endpoints is inexorably going away.
> As the packet loss rate increases, HTTP/2 performs less and less good. At 2% packet loss (which is a terrible network quality, mind you), tests have proven that HTTP/1 users are usually better off - because they typically have six TCP connections up to distribute the lost packet over so for each lost packet the other connections without loss can still continue.
> Fixing this issue is not easy, if at all possible, to do with TCP.
Are there any resources to better understand _why_ this can't be resolved? If HTTP 1.1 performs better under poor network conditions, why can't we start using more concurrent TCP connections with HTTP 2 when it makes sense?
I'm a bit wary of this use of UDP when we've essentially re-implemented some of TCP on top, though I understand it's common in game networking.
>Are there any resources to better understand _why_ this can't be resolved?
The issue is TCP's design assumption of a single stream. You never get any out-of-order packets, but that also means you can't get data out of order even when you want it. When you have multiple conceptual streams within a single TCP connection you actually just want the order maintained within those conceptual streams and not the whole TCP connection, but routers don't know that. If you can ignore this issue, http/2 is really nice because you're saving a lot of the overhead of spinning up and tearing down connections.
>If HTTP 1.1 performs better under poor network conditions, why can't we start using more concurrent TCP connections with HTTP 2 when it makes sense?
Because it performs worse under good conditions. TCP has no support for handing off what is effectively part of the connection into a new TCP connection.
> just want the order maintained within those conceptual streams and not the whole TCP connection, but routers don't know that.
seems to imply that routers inspect TCP streams and maintain order. I'm not aware of any routers that actually do anything like this, and things need to keep working just fine if different packets in the stream take different paths. Certainly in theory, IP routers don't have to inspect packets any deeper than the IP headers if they're not doing NAT / filtering / shaping. The protocols are designed to minimize the amount of state kept in the routers.
As far as I'm aware, only the kernel (or userspace) TCP stack makes much effort at all to maintain packet order (other than routers generally using FIFOs).
The other problem with TCP is the assumption that packet loss is caused by congestion. That's why a 2% loss causes more than a 2% drop in bandwidth. Unfortunately, congestion is no longer a problem on the modern internet. [1]
Fixing TCP requires a lot of coordination. First you need microsecond timestamps. Then you need an RFC to reduce RTOmin below 200ms. Then you need ATO discovery and negotiation. A lot of moving parts, and you end up with a protocol that’s still worse than QUIC. Also note that Linux maintainers have refused to accept patches for all of these things, and QUIC is to some extent a social workaround for their intransigence.
> why can't we start using more concurrent TCP connections with HTTP 2 when it makes sense
Because using 6 TCP connections per site is a hack to have larger initial congestion windows, i.e. faster page loading, ending up using more bandwidth in retransmission instead of in goodput. Instead we could have more intelligent congestion control algorithms in one TCP connection to properly fill up the available bandwidth. See https://web.archive.org/web/20131113155029/https://insoucian... for a more detailed account (esp. the figure of "Etsy’s sharding causes so much congestion").
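To put rough numbers on it: with IW10 and a typical ~1460-byte MSS, one connection can send about 14.6 kB in its first round trip, while six sharded connections can blast out roughly 87.6 kB, but with six congestion controllers fighting each other and burning more of that bandwidth on retransmissions rather than goodput.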
Several things that excite me about this protocol:
— UDP-based and different stream multiplexing such that packet loss on one stream doesn't hold up all the other streams.
— Fast handshakes, to start sending data faster.
— TLS 1.3 required, no more clear-text option.
Overall this has the potential to help with overall latency on the web, and that is something I am really looking forward to.
(Yes I'm aware that there are many steps that can be done today to reduce latency, but having this level of attention at the protocol level is also an improvement.)
The documentation says how in theory it could happen, but all actual client software just does ALPN, which is a TLS feature to let you pick a different sub-protocol after connecting. Since it's a TLS feature you are obliged to use encryption.
If we aren't using TCP anymore, does that mean all the network congestion tooling developed in the last 30 years is suddenly worthless and quality of service will degrade everywhere?
> [..] around 70% of all HTTPS requests Firefox issues get HTTP/2 [...]
Frequent use of Google probably puts this number on the higher end without revealing much information about general adoption.
Personally, I am waiting for HTTP/5, since the speed for new protocol versions seems to be set on "suddenly very fast".
That said, I think HTTP/2 was a good add-on for the protocol.
On the other hand, a lot of over-engineered protocols fail or are a giant pain to use. I think we will only see adoption if there is a real tangible benefit to upgrading infrastructure.
QUIC doesn't really convince me yet. It is certainly advantageous for some cases, but it isn't obvious to me. Yes, non-blocking parallel streaming connections are certainly great... 0-RTT? Hm, I don't think the speed advantages are worth the reduced security if used with a payload. Maybe for Google and similar services, but otherwise? QUIC needs to re-implement TCP's error checking and puts these mechanisms outside of kernel space. Let's hope we don't see other shitty proprietary protocols that are "similar" to HTTP.
0-RTT is one of those features where the decision was it's better if we build it and in the end nobody uses it (because it's so dangerous) than if we don't build it and then we all wish we had it, because now we need an entirely new protocol to get it.
Protocols that live on top of a transport (QUIC or TLS 1.3 itself) that offers 0-RTT are supposed to explicitly define whether and how it's used. HTTP is drafting such advice.
You should definitely avoid software that "magically" uses 0-RTT today without that definition being completed, particularly client software. Because of how TLS works, if you never use client software that can do 0-RTT, nothing you send can be replayed, so you're safe. The danger only sneaks in if you run client software that does 0-RTT _and_ the server has dangerous behaviour. Well, you can't tell about the server, but you can easily choose not to run that client.
No popular TLS 1.3 clients (e.g. Firefox, Chrome) do 0-RTT today. They've talked about it, and I can imagine it sneaking in for specific jobs where nobody can see how it causes problems, but I do not expect them to screw up and start doing 0-RTT GET /money-transfer?dollars=1million because they've been here before and they know what will happen when some idiot builds a server.
In client software libraries it's a bit scarier. So, if you use an HTTP library and one day it's like "Yay, now we do 0-RTT to make everything faster" that's probably going to need some stern words in a bug report.
> No popular TLS 1.3 clients (e.g. Firefox, Chrome) do 0-RTT today.
This was wrong. 0-RTT is enabled in current Firefox builds. I haven't been able to determine under what circumstances Mozilla now chooses to do 0-RTT, but you can switch it off if you're concerned, it is controlled by the pref security.tls.enable_0rtt_data
And with required authenticated encryption, one would hope an intervening switch or router couldn't accidentally forge a message that's supposed to be hard to forge when you're trying.
The important part of QUIC is that lost packets will not block delivery of all other data being delivered over the same connection, but only the data from any affected streams (for example, a single HTTP request/response will usually be one stream).
Really neat resource. Coming into this thread with next-to-no knowledge of HTTP/3, this was a great high-level overview of the motivation and resulting protocol.
I'm wondering if anyone with a little more knowledge could go deeper into what the difference is between "TLS messages" and "TLS records" as talked about in this[1] snippet:
> the working group also decided that [...] [QUIC] should only use "TLS messages" and not "TLS records" for the protocol
From my understanding quickly reading through the spec, it looks like HTTP/3 starts with a standard TLS handshake for key exchange, but then QUIC "crypto" frames are used to carry application-level data instead of TLS frames[2]. Is this accurate? If so, why define a new frame format? Just to be able to lump multiple frames into one packet[3]?
> From my understanding quickly reading through the spec, it looks like HTTP/3 starts with a standard TLS handshake for key exchange, but then QUIC "crypto" frames are used to carry application-level data instead of TLS frames[2]. Is this accurate?
Sort of, kinda, no? It's a "standard TLS handshake" from a cryptographic point of view, but the TLS standard specifies that all this data travels over TCP. QUIC doesn't use TCP, so for QUIC the same data is cut up differently and moved over QUIC's UDP channel. So, everything uses QUIC's frames, not just application data.
QUIC needs to solve a bunch of problems TCP already solved, plus the new problems, and chooses to do so in one place rather than split them and have an extra protocol layer. For example, "What do I do if some device duplicates a packet?" is solved in TCP, so TLS doesn't need to fix it. But QUIC needs to fix it. On the other hand, "What do I do if some middleman tries to close my connection to www.example.com?" is something TCP doesn't solve and neither does TLS but QUIC wants to, so again QUIC needs to fix it.
One reason to do all this in one place is that "it's encrypted" is often a very effective solution even when your problem isn't hostiles just idiots. For example maybe idiots drop all packets with the bytes that spell "CUNT" in them in some forlorn attempt to protect "the children". Ugh. Now nobody can mention the town of Scunthorpe! But wait, if we encrypt everything now the idiot filter will just drop an apparently random and vanishingly small proportion of packets, which we can live with. "I just randomly drop one entire packet for every 4 gigabytes transmitted" is still stupid, but now everything basically works again.
>The work on sending other protocols than HTTP over QUIC has been postponed to be worked on after QUIC version 1 has shipped.
I'm very interested in this bit. I'm working on a sensor network using M2M SIM cards which are billed for each 100kb. Being able to maintain an encrypted connection without having to handshake every time could have nice applications.
At first glance I don't think it's fair to say "ENet did this a decade ago". ENet simply provides multi-channel communication over UDP. It does not provide 0/1-RTT handshakes, encryption of the protocol beyond the initial handshake, or HTTP bindings. Based on some GitHub issues, it doesn't even look like there is any protocol extension/version negotiation.
QUIC is also decently old itself, the last 7 years have been spent proving it is well suited for the real world and able to be iterated upon. This is the kind of difference that matters for standards track vs ignored.
nitrix isn't referring to the other features, simply the concept of reliability over UDP to minimize overhead. The games industry has been using this concept for decades for efficient networking, and only now is the web community thinking about it.
Your comment made me wonder: Is the PR to add ipv6 support to ENet still open? Last times I checked was maybe 3 years ago. Seems it's still open: https://github.com/lsalzman/enet/pull/21
One thing I don't understand is, if it's encrypted, we'll never see hardware accelerated QUIC ?
I've read it's 2 to 3 times more CPU intensive, aren't we implicitly giving an artificial competitive advantage to the "Cloud" ? By the "Cloud" I mean big provider with like (obviously) Google, Cloudflare, Akamaï ...
That is raising the barrier of entry for newcomers, is it not ?
> One thing I don't understand is, if it's encrypted, we'll never see hardware accelerated QUIC ?
I think parts of it can still be hardware-accelerated. For example, OpenSSL et al will take advantage of available AES encryption CPU instructions, if it knows about them. So, if the TLS library supports such offloading, then the HTTP/3 library would get that benefit.
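As an illustrative sketch (not tied to any particular QUIC stack), Go's standard library picks the hardware AES path transparently; roughly this kind of AES-GCM call is what a TLS library ends up making for each record or packet:

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
	"log"
)

func main() {
	key := make([]byte, 16)
	nonce := make([]byte, 12)
	if _, err := rand.Read(key); err != nil {
		log.Fatal(err)
	}
	if _, err := rand.Read(nonce); err != nil {
		log.Fatal(err)
	}

	block, err := aes.NewCipher(key) // uses AES-NI / ARMv8 crypto instructions when available
	if err != nil {
		log.Fatal(err)
	}
	aead, err := cipher.NewGCM(block)
	if err != nil {
		log.Fatal(err)
	}

	ct := aead.Seal(nil, nonce, []byte("hello over QUIC"), nil)
	fmt.Printf("%d bytes of AEAD output\n", len(ct))
}
```

The calling code doesn't change at all between the software and hardware paths, which is why the crypto itself is rarely the bottleneck.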
> I've read it's 2 to 3 times more CPU intensive, aren't we implicitly giving an artificial competitive advantage to the "Cloud" ? By the "Cloud" I mean big provider with like (obviously) Google, Cloudflare, Akamaï ...
Happily, a number of those vendors are kernel developers, and contribute changes back upstream. So, if the bottleneck is in the kernel (for example, by a lack of UDP fast processing paths), then I expect those cloud providers would be working on contributions to make kernel UDP as performant as kernel TCP.
The next thing that would be missing is support for UDP offloading in the NIC space. But TBH I don't know much about the current state of hardware offloading, so I can't speak to it.
> Isn't TCP already versioned ?
I was curious about this, so I looked it up, and I don't think it is. IP is certainly versioned (IPv4 vs. IPv6), but looking at the list of protocol numbers[0], I only see one entry for TCP. And I don't see anything that looks obviously like 'TCPv2'.
> I was curious about this, so I looked it up, and I don't think it is. IP is certainly versioned (IPv4 vs. IPv6), but looking at the list of protocol numbers[0], I only see one entry for TCP. And I don't see anything that looks obviously like 'TCPv2'.
Currently there is only a single TCP; it hasn't needed a new version because it has an options mechanism to add additional information as needed. If it needed to be redesigned, a new protocol would be created and a new protocol number would be allocated. Kind of like what happened with ICMP and ICMPv6.
I'm not sure there's a reason large parts of it couldn't be hardware accelerated, or for that matter why specific implementations couldn't happen on a network controller. Sure, initial implementations will be software only, so that it can be implemented on top of the OS, but that doesn't negate the opportunity to move pieces down the stack.
The original creator of QUIC also explicitly named it as an acronym [0]. But of course, if the big boys at the IETF decree it's not an acronym, it isn't. Just like we've always been at war with Oceania.
> A lot of enterprises, operators and organizations block or rate-limit UDP traffic
That was my first thought, and the following comments seem to assume that companies will decide to change their policy.
But many public WiFi block UDP traffic, are they going to change their policy? Are the people in charge of it even aware about it? (Think coffee shops, restaurants, hotels, ...)
Are we going to have websites supporting legacy protocols ("virtually forever") in order to build a highly available internet?
Also, ISPs in some countries have not been UDP-friendly. I'm thinking about China mainly, where UDP traffic is being throttled and often blocked (connection shutdown) if the volume of traffic is significant; I assume they apply this policy to block fast VPNs.
Are they going to change their policy? The worst scenario here would be to see a new HTTP-like protocol coming out of China, resulting in an even larger segmentation of the internet.
Working in a school I block QUIC traffic so my web filter can (attempt to) keep kids off porn. Such filtering is required by law for schools. I haven't found a passive filter that handles QUIC. I don't want to install invasive client software or MITM connections.
There won't ever be a passive filter. The QUIC traffic is deliberately opaque.
If you control the clients you may be able to retain your status quo for some time (by just refusing to upgrade) but the direction is away from having anything filterable. So client software or MITM are your only options.
I don't block Wikipedia. I looked at some wire traffic and I can see the SNI header as normal in Firefox 67 and Chrome 72. I found a about:config flag to enable esni, toggled it, restarted the browser, and I still see the SNI. Using Cloudflare's ESNI checker it says my browser isn't using it.
Ignoring ESNI will probably work fine for a good length of time. If pornhub implements it or something I'd probably have to revisit. Or, since I control the clients I might disable it in their browsers.
If enough people bark up the filter vendor's tree I'm sure they'll add a checkbox to drop esni traffic. They added one for QUIC recently.
Disappointingly, out of all of the changes in HTTP/3, cookies are still present. It'd be nice if HTTP/4 weren't also a continuation of Google entrenching its tracking practices into the Web's structure and protocols.
JavaScript for this particular website is overkill, in my humble opinion. Just because it is part of the web, it does not necessarily mean it should be abused. I especially dislike websites that send me JavaScript-only, even when the site is almost completely static.
Those people making sites that are completely broken without javascript have very precise numbers to look at showing that approximately none of their repeating visitors disable javascript.
We, on the other hand, have no unbiased numbers to look at to discover whether it's a common behavior ;)
I reckon the key word in this comment is "approximately".
I might still be able to get what I need from a site that someone believes is "completely broken", including on repeat visits, without using Javascript.
Sometimes HN commenters debate what it means when a site "does not work" without Javascript. Some believe if an HTTP request can retrieve the content, then the site works. Others believe if the content of the site is not displayed as the author intended then the site is not "working".
I would bet that the definition of "completely broken" could vary as well.
Do the people running sites try to determine how many users are actually using Javascript to make the requests, e.g., to some endpoint that serves the content, maybe a CDN?
Browser authors could in theory include some "telemetry" in their software that reports back to Mozilla, Google, Microsoft, Apple, etc. when a user has toggled Javascript on or off. Maybe it could be voluntarily reported by the user in the form of opt-in "diagnostics".
OTOH, what can people making sites do to distinguish if a GET or POST accompanied by all the correct headers sent to a content server came from a browser with Javascript enabled or whether it was sent with Javascript off or by using some software that does not interpret Javascript?
The content server just returns content, e.g., JSON. It may distinguish a valid request from an invalid one, but how does it accurately determine whether the http client is interpreting Javascript? If a user were to use Developer Tools and make the request from a custom http client that has no JS engine, can/do they measure that?
Regardless of how easy or difficult it would be to reliably determine whether a client making a request is interpreting Javascript (i.e. more than simply looking at headers or network behaviour), the question is how many people making sites are doing that?
They can more easily just assume (correctly, no doubt) that few users are emulating favoured browsers rather than actually using them. One might imagine they could have a bias toward assuming that the number of such users is small, even if it wasn't. :)
> Why is there even a setting? How many people would ever want to turn Javascript off?
Because without JS pages load much faster and browser takes less memory. Ad and tracking often doesn't work without JS. Why would anyone want to use JS?
It's on by default because it is very useful. Just because some websites abuse the ability to run active content doesn't mean Javascript shouldn't be an integral part of the modern web. Browsers should allow the user to crack down on the abuse, but expecting sites to cater to people who disable Javascript is a bit far.
Out of all the web storage methods, cookies are still the most reliable and secure way to implement sessions. I for one am glad they are not going anywhere.
I’m not a web developer, so sorry for the dumb question, but how would you possibly do authentication (the login on this site, for instance) without cookies or something that’s functionally equivalent?
That's a bad idea because it is visible in the UI and anywhere the user copies/saves the URL, and it would also still have all the downsides that cookie-based tracking enables. Cookies are how HTTP application sessions work; it isn't possible to just get rid of them without replacing them with something functionally identical, even if you change the name.
Assuming https, the querystring is encrypted, so should be safe in transit. Could show up in server logs though, I'd think. The server can log a lot of things though, depending how it's configured.
Session ID in URL is a terrible idea because guess what, people share links with each other. Example: A school enrollment system in Finland logs you on with another person's account if they give you the link to a page they are viewing (which they often do), because the session is in the query string.
It's possible to do this on the server side. Form submissions don’t require JavaScript, and authentication post-submission can be done with cookies. Or, if you want to go way back, HTTP basic auth.
To be clear: I support JavaScript on the web, just hoping to answer your question.
Also sorry: I answered the wrong question. HTTP basic auth would still work.
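For completeness, a minimal Go sketch of that cookie-free basic-auth option (the credentials are placeholders, and basic auth should only ever be used over TLS):

```go
package main

import (
	"crypto/subtle"
	"fmt"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		user, pass, ok := r.BasicAuth()
		userOK := subtle.ConstantTimeCompare([]byte(user), []byte("alice")) == 1  // placeholder user
		passOK := subtle.ConstantTimeCompare([]byte(pass), []byte("s3cret")) == 1 // placeholder password
		if !ok || !userOK || !passOK {
			w.Header().Set("WWW-Authenticate", `Basic realm="example"`)
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}
		fmt.Fprintf(w, "hello, %s\n", user)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

The browser resends the credentials with every request, which is why cookies (a server-issued session token) ended up being the more practical equivalent.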
I remember a time when basic auth gave you a unique url and the referrer was used to validate you. This was easy to break because you can fake the referrer.
Cookies date back to the late 1990s, and it's relatively easy for browsers to implement anti-tracking features that block cookies except for their respective sites and block cross-site tracking cookies. Firefox and Safari have features like this. Don't know about Chrome, but it's likely that Chrome blocks other peoples tracking in favor of Google's.
TLS 1.3 _being required_ makes me sigh loudly. What about local development, where tools like tcpdump and wireshark are really handy? What about air gapped systems? What about devices that are power constrained?
It's not that I think an encrypted web is bad, it's a very good thing. I am just spooked by tying a text transfer protocol to a TCP system.
> What about local development, where tools like tcpdump and wireshark are really handy?
You can tell browsers to dump the session keys, which then can be read by wireshark [1].
> What about devices that are power constrained?
That's thinking from 10 years ago. 10 years ago, there were no native AES extensions in power constrained devices. But now there are, so encryption is really power efficient.
> I am just spooked by tying a text transfer protocol to a TCP system.
I guess instead of "TCP system" you meant transport layer protocol. I can actually understand your view: stuff is getting more complicated. I can fire up netcat, connect to Wikipedia, and type out an HTTP/1.0 request manually. With 1.1 this is hard and with 2.0 it's impossible due to TLS requirements. But there are reasons for this added complexity: you want to be able to re-use connections, or use something better than TCP. As long as there is a spec, and there are several implementations lying around, I think it's okay to add complexity if there is a performance reward for it. Most people care about the performance; who wants to fire up netcat to do an HTTP request?
To clarify, HTTP 1.0/1.1 were successfully transmitted over TCP, multiple versions of SSL, then several versions of TLS. Just seems a bit pretentious to be tying to TLS 1.3.
Those older SSL and TLS versions are insecure now, or at least deemed a bad idea by today's security standards. TLS 1.3 was partly about removing insecure modes from TLS 1.2. If HTTP/3.0 supported anything other than TLS 1.3, then those insecure setups would persist.
Of course there are disadvantages, like when you are on a LAN or such. But I think those cases are covered well by the HTTP/1.x family already, and if not you can always add root certificates yourself or make public DNS names you control point to your 192.168.... address.
- HTTP/1 is "1 HTTP stream over 1 TCP-ish L4 connection" (TLS-over-TCP is a TCP-ish L4 connection)
- HTTP/2 is "multiple HTTP streams multiplexed over 1 TCP-ish L4 connection"
- HTTP/3 is "HTTP over QUIC"
HTTP/3 is meant to replace HTTP/1 or HTTP/2 only to the degree that QUIC replaces TCP. In your air-gapped system, or for local development, QUIC-instead-of-TCP is less compelling.
What about them? Don't use HTTP/3 if you don't want encryption.
The whole point of HTTP/3 is that it doesn't treat TLS as a separate layer, that it tightly binds parts of the two protocols to allow more efficient use of time and data. It's not just an option, the protocol doesn't make sense without it. If doing encrypted HTTP isn't what you're after, then this protocol isn't for you.
That's a good point. I completely understand clients MUST use TLS. On the server side though, a workflow I really like is to have a pass-through proxy that terminates TLS so I don't need a TLS stack in each one of my apps. This is a pretty common pattern so I'm sure libraries will allow for http3 without TLS -- who knows though, maybe I'm a crazy eccentric heathen.
To expand on that point: load balancers will also have to maintain encrypted connections between themselves and their web servers behind the scenes. That's probably a "best practice" security-wise, but it's convenient to be able to handle the TLS stuff at a load balancer level and stick to plain HTTP behind the scenes.
I suppose this can still happen regardless, except the HTTP/3 connection would stop at the load balancer (which would have to translate to plain ol' HTTP/1 for the servers behind it).
This is often the case today for load balancers or CDNs that support HTTP/2. For connections from reverse proxies the number of round trips for connection establishment generally does not matter since these connections will be kept alive for a long time, across requests. I don't see why this would change with HTTP/3.
If your client or server has support for key log files, Wireshark can deal with TLS quite well. In fact, this is usually how I debug my QUIC implementation.
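For example, a Go client can write such a key log with just a tls.Config setting; point Wireshark's "(Pre)-Master-Secret log filename" preference at the file (URL and filename below are placeholders):

```go
package main

import (
	"crypto/tls"
	"log"
	"net/http"
	"os"
)

func main() {
	// Wireshark reads this file to decrypt the captured TLS traffic.
	keyLog, err := os.Create("keys.log")
	if err != nil {
		log.Fatal(err)
	}
	defer keyLog.Close()

	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{KeyLogWriter: keyLog},
		},
	}

	resp, err := client.Get("https://example.com") // placeholder URL
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	log.Println(resp.Status)
}
```

Browsers offer the same thing through the SSLKEYLOGFILE environment variable, so "encrypted" doesn't have to mean "undebuggable" for endpoints you control.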
Perhaps it's best if a non-web protocol caters to those use cases so it can best serve its 99% use case anyway.
This follows into the debugging conversation: web browsers and web servers have debugging tools 10x better than reading HTTP packets in Wireshark/tcpdump.
In a Diffie Hellman setup, configure the machine you aren't sat in front of (usually the server) to use a fixed secret value instead of a random one.
Now since you know this value, and the other value you need (from the client in this case) is sent over the wire, you can run the DH algorithm and decrypt everything.
You should (obviously) never do this in production, although it is what various financial institutions plan to do and they have standardised at ETSI as an "improvement" on TLS (you know, like how TSA locks are an "improvement" over actually locking your luggage so random airport staff can't steal stuff) ...
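A toy sketch with textbook-sized numbers (nothing like real TLS groups) shows why this breaks forward secrecy: once the server's secret is fixed and handed out, anyone holding it can recompute the session secret from the client's public value alone.

```go
package main

import (
	"fmt"
	"math/big"
)

func main() {
	// Tiny, illustrative public parameters only.
	p := big.NewInt(23) // modulus
	g := big.NewInt(5)  // generator

	serverSecret := big.NewInt(6)  // FIXED instead of random: the whole problem
	clientSecret := big.NewInt(15) // the client still picks its own secret

	serverPub := new(big.Int).Exp(g, serverSecret, p) // visible on the wire
	clientPub := new(big.Int).Exp(g, clientSecret, p) // visible on the wire

	// The client derives the shared secret from the server's public value...
	clientShared := new(big.Int).Exp(serverPub, clientSecret, p)

	// ...and an observer who was handed the fixed server secret derives the
	// same value using nothing but the client's public value captured off the wire.
	observerShared := new(big.Int).Exp(clientPub, serverSecret, p)

	fmt.Println(clientShared, observerShared) // identical
}
```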
Security engineers around the world have been working for decades to clean up the mess made by engineers, product designers, and business owners overlooking security because it's inconvenient.
If you are a developer or engineer then eat the complexity tax as part of your responsibility and ensure that you are shipping code and products that are secure for the end user who probably doesn't have the expertise to overcome the security gaps left by "developer inconvenience".
Or, you know, abstract the application layer and then apply the TLS layer on top of it, so that it can be secured without affecting the application code/logic.
To be fair, that's been tried a lot, and it keeps causing issues.
I'm at the point where I believe that you can't "layer on" or "abstract away" security like you can with other things, it needs to be thought about at every step.
Just look at attacks that can take advantage of content-length to pluck out which page the user is requesting of a mostly-static site, or how compression and encryption seem to almost be at odds with one another.
You can't ever just assume TLS will handle it when it's abstracted away, and while HTTP/3 may not get rid of those kinds of attacks entirely, bringing "security" closer to the application logic may enable better protections.
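A toy illustration of the content-length side channel mentioned above, with made-up page sizes and framing overhead (encryption hides the bytes, not how many of them there are):

    # Hypothetical static site: page sizes are effectively public knowledge.
    page_sizes = {"/index.html": 14_302, "/pricing.html": 9_871, "/careers.html": 22_054}

    RECORD_OVERHEAD = 29                     # assumed per-response TLS framing overhead
    observed_len = 9_871 + RECORD_OVERHEAD   # what a passive observer sees on the wire

    # Matching the observed ciphertext length against known page sizes identifies
    # the requested page without breaking the encryption at all.
    candidates = [path for path, size in page_sizes.items()
                  if abs(size + RECORD_OVERHEAD - observed_len) < 16]
    print(candidates)                        # ['/pricing.html']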
Possibly... I'm just concerned about yet another layer I have to be far more concerned about than I already am. I really enjoy building applications. I'm pretty good at systems and orchestration to an extent, but would rather not have to focus on them too much.
Going from an application framework that's more abstracted, such as ASP.NET (not MVC/API/Core), to those where you are closer to the metal (Node, Python, .NET Core/MVC/API) was a jump.
Thinking in terms of leveraging push with HTTP/2 alone has me concerned. The tooling around building web applications hasn't even caught up to the current state of things, let alone moving further. Another issue is dealing with certificates for local/internal development in smaller organizations. It may get interesting, and it may get more interesting than it's actually worth in some regards.
You are going to be using tools like tcpdump and wireshark for debugging, but can't figure out how to install a root certificate on your local machine?
Any data on HTTP/3 performance? I don't see it in the book. There's the general claim that it's faster/lower latency, but there are no numbers behind that claim -- last time I checked QUIC's performance benefits were incredibly slight.
It is really easy to observe the performance benefit of QUIC in a congested datacenter network. In the face of loss, QUIC tail latency is dramatically better than TCP tail latency. This is mainly due to TCP's 200ms minimum retransmit time; a single dropped packet will add at least 200ms to the request time (modulo tail loss probing which can lower this to ~20ms in many cases). When your request service time is 10µs this makes a huge difference.
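Rough arithmetic behind that claim, with all numbers assumed purely to show the orders of magnitude involved:

    service_us = 10          # in-datacenter request service time
    rto_min_us = 200_000     # TCP minimum retransmission timeout (~200 ms)
    tlp_us = 20_000          # tail loss probe can cut the wait to ~20 ms

    print(f"RTO_min penalty: {rto_min_us / service_us:,.0f}x the service time")  # 20,000x
    print(f"TLP penalty: {tlp_us / service_us:,.0f}x the service time")          # 2,000x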
The book says 7% of all internet traffic already uses QUIC (HTTP/3) and Chrome has long implemented Google’s version of it.
But this book isn’t about concerning yourself with using it or implementing it, it’s about understanding what the future holds, how it works, and what roadblocks lie ahead.
The lack of API support in OpenSSL for its TLS requirements and the poor optimization for heavy UDP traffic loads on Linux et al. (they say it doubles CPU vs HTTP/2 for the same traffic) sound like they’re going to be major hurdles for widespread adoption any time soon.
Yeah, a lot of the public internet is hosted by all-in-one vendors like Google Sites, Wordpress, Squarespace, etc., and object stores like S3, GCP buckets, Cloudflare, CDNs. The website owner just uploads the content; the hosting vendor does all the TLS, protocols, load balancing, whatever else. If the hosting providers update the protocols they use, a huge chunk of websites will just immediately use the new one with no interaction whatsoever required on the part of the website owner.
If they add easy support in NGINX and HTTPD, then it's easier for self-hosted endpoints to change as well, with minimal to no effort on their side.
> The book says 7% of all internet traffic already uses QUIC (HTTP/3)
The way I have understood it, the book says that what is now in use (these 7%) is a "Google-only QUIC", whereas the "standardized HTTP/3" is still used... nowhere?
The IETF QUIC remains a work in progress, perhaps to be published in 2019. HTTP/3 is an application layer on top of (IETF) QUIC, it might also be published in 2019 or later. There are implementations of current drafts, and the rough shape is settled but they're a long way from being truly set in stone and aren't in anything ordinary people use.
So unsurprisingly nobody is already doing a thing that isn't even standardised yet, but people are, as you see, writing about it.
Therefore I believe I'm right that claiming HTTP/3 is used at all is wrong, and that the 7% is not even the same QUIC that will be used with HTTP/3. So "already uses QUIC (HTTP/3)" is a wrong statement; the most that can be said is "GQUIC is used at the moment", and, as far as I understood, that accounts (according to Google) for 7% of the traffic, not 7% of the sites as claimed. HTTP/3 and the matching IETF QUIC are used nowhere yet. So, again,
> The book says 7% of all internet traffic already uses QUIC (HTTP/3)
is wrong: the book doesn't say that, and what the book is claimed to say (even though it doesn't) is false in more than one respect.
Well, Google is in control of Chrome as well as the two most visited websites in the world. They can dream up any protocol they want, implement it in Chrome and use it for its sites, at any pace they desire. This is basically what happened with SPDY (now HTTP 2.0) and the upcoming HTTP 3.0 as well.
IMO it's positive. We are getting free new stuff, and I actually prefer to have two incremental steps, where HTTP 2.0 still uses TCP, giving stuff like multiplexing and pipelining, and HTTP 3.0 uses a novel UDP based transport layer protocol, improving stuff further.
There is objectionable stuff, like the recent Manifest V3 changes that make ad blockers crappier, but this is not one of the objectionable things imo.
Happy for everybody, but since it only really delivers benefits in less than 2% of use cases (those with crappy connections), I personally can't wait to have it be as quickly implemented and supported as IPv6 was.
It's sad that the site doesn't work without javascript. We had this exact navigation working with iframes 20 years ago. And I could resize the TOC on the left back then.
Hey, javascript is fundamental to the web today, unlike 20 years ago. Even if a site like this definitely wouldn't need javascript since it's so simple, there really isn't much of a trade-off since less than one percent of all visitors are likely to have javascript disabled.
I disagree JavaScript is fundamental to the web - I'm a huge JavaScript fan (top 1% I'd say), and write it for my job... I've written a few books on it even... but I always use noscript when browsing the web: hackernews, reddit, twitter - can all operate fine without JavaScript. Dodgy third-party ad scripts/malware do need it, but I don't really want them running anyway.
Yes, 20% of sites I load are either a blank page or "You need to enable JavaScript to run this app" (it's the new "An error has occurred"). If it's a friend's site, or something that obviously needs it - like a game, or art project - then I'll temporarily whitelist it. But if not, then hey, I just got a 20% productivity boost by saving some time on whatever it is that thinks it needs JavaScript!
Most if not all ads can be blocked with an ad blocker like uBlock origin.
I used uMatrix myself in the past (I also used NoScript a much longer while ago), but it requires too much time to cherry pick the remote hosts (usually CDNs) and files to allow.
Is there a way to make an iframe in the body of the page change by clicking the navigation bar in the main page, without reloading the main page/navigation bar?
Exactly, same as with a frame... I haven't done much with resizable iframes in a while, but I do remember that at least in the v5-v6 browser era iframes wouldn't resize properly to match their containers without JS to handle the event. I'm unsure if this is still true, as I haven't really done anything with iframes in nearly a decade now.
Aside: I used to use/implement something like AJAX callbacks with hidden frames and dynamic form posts.
I'm still reading through the article, but I have to say that I'm pleasantly surprised by QUIC and HTTP/3. I first learned socket programming around the fall of 1998 (give or take a year) in order to write a game networking layer.
Here are just a few of the immediately obvious flaws I found:
* The UDP checksum is only 16 bits, when it should have been 32 bits or of arbitrary length
* The UDP header is far too large: together with the IPv4 header it uses/wastes 28 bytes, when only about 12 bytes are needed to represent source IP, source port, destination IP, and destination port
* TCP is a separate protocol from UDP, when it should have been a layer over it (this was probably done in the name of efficiency, before computers were fast enough to compress packet headers)
* Secure protocols like TLS and SSL needed several handshakes to begin sending data, when they should have started sending encrypted data immediately while working on keys
* Nagle's algorithm imposed rather arbitrary delays (WAN has different load balancing requirements than LAN)
* NAT has numerous flaws and optional implementation requirements so some routers don't even handle it properly (and Microsoft's UPnP is an incomplete technique for NAT-busting because it can't handle nested networks, Apple's Bonjour has similar problems, making this an open problem)
* TCP is connection oriented, so your stream dropped when you did something as simple as changing networks (WiFi broke a lot of things by the early 2000s)
There's probably more I'm forgetting. But I want to stress that these were immediately obvious for me, even then. What I really needed was something like:
* State transfer (TCP would probably have been more useful as a message-oriented stream; this is also an issue with UNIX sockets. Such a stream could be used, for example, to implement software transactional memory, or STM)
* One-shot delivery (UDP is a stand-in for this; I can't remember the name of it, but basically unreliable packets carry a wrapping sequence number so newer packets flush older ones from the queue, letting latency-sensitive things like shooting in games be implemented; see the sketch after this list)
* Token address (the peers should have their own UUID or similar that remains "connected" even after network changes)
* Separately-negotiated encryption (we should be able to skip the negotiation part on any stream if we already have the keys)
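For the "one-shot delivery" bullet, the trick is serial-number-style comparison over a small wrapping sequence space: apply only the newest packet and silently drop stale ones. A minimal sketch (the handle() hook is hypothetical):

    # "Newest packet wins" delivery with wrapping 16-bit sequence numbers.
    SEQ_BITS = 16
    SEQ_MASK = (1 << SEQ_BITS) - 1
    HALF = 1 << (SEQ_BITS - 1)

    def newer(a: int, b: int) -> bool:
        """True if sequence number a is more recent than b, allowing wraparound."""
        return 0 < ((a - b) & SEQ_MASK) < HALF

    class OneShotChannel:
        def __init__(self) -> None:
            self.latest = None          # newest sequence number applied so far

        def deliver(self, seq: int, payload: bytes) -> None:
            """Apply the payload only if it is newer than anything seen so far."""
            if self.latest is None or newer(seq, self.latest):
                self.latest = seq
                handle(payload)         # hypothetical game-state update hook

    def handle(payload: bytes) -> None:
        print("applied", payload.decode())

    ch = OneShotChannel()
    ch.deliver(65534, b"state A")
    ch.deliver(65535, b"state B")
    ch.deliver(0, b"state C")           # wrapped around, still treated as newer
    ch.deliver(65535, b"stale")         # older than the latest, silently dropped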
Right now the only protocol I'm aware of that comes close to fixing even a handful of these is WebRTC. I find it really sad that more of an effort wasn't made in the beginning to do the above bullet points properly. But in fairness, TCP/IP was mostly used for business, which had different requirements like firewalls. I also find it sad that insecurities in Microsoft's (and early Linux) network stacks led to the "deny all by default" firewalling which led to NAT, relegating all of us to second class netizens. So I applaud Google's (and others') efforts here, but it demonstrates how deeply rooted some of these flaws were that only billion dollar corporations have the R&D budgets to repair such damage.
Yeah, it really sucks that the developers of TCP didn't foresee these issues in 1981 when they first designed it. I can't believe they were so short-sighted.
Okay, enough with the sarcasm. Is it too much to ask for historical perspective in protocol design?
Agreed, as well as the "Oh, why didn't the Berkeley people implement the OSI 7-layer model, then TCP would have been layered over UDP".
The reason that TCP beat out all the other protocols is because it didn't "layer" everything. OSI was beautiful in the abstract, but a complete cluster-fuck in the implementation.
Now we have enough processing power that the abstract layering makes more sense. But how the layers interact with cross-layer requirements like security was never actually dealt with in the OSI days.
> The QUIC working group that was established to standardize the protocol within the IETF quickly decided that the QUIC protocol should be able to transfer other protocols than "just" HTTP.
> ...
> The working group did however soon decide that in order to get the proper focus and ability to deliver QUIC version 1 on time, it would focus on delivering HTTP, leaving non-HTTP transports to later work.