> This is called head-of-line blocking. In the diagram below, request 2 cannot be sent until response 1 arrives, considering that only one TCP connection is used.
That's not really head of line blocking, because in HTTP1.1 you'd just open up another connection. The issue with HTTP1.1 is that opening up lots of connections can introduce lots of latency, especially if you are doing encryption.
HTTP1.1 performs much, much better than http2 over high-latency or lossy links.
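A rough back-of-the-envelope on that connection-setup cost, before moving on; the RTT values and handshake round-trip counts below are illustrative assumptions, not measurements:

```python
# Sketch (not a benchmark): the extra latency paid when a request has to wait
# for a brand-new connection, versus reusing one that is already open.
# Assumed costs: 1 RTT for the TCP handshake, 2 RTTs for a full TLS 1.2
# handshake, 1 RTT for TLS 1.3.

def new_connection_cost_ms(rtt_ms: float, tls_rtts: int) -> float:
    return rtt_ms * (1 + tls_rtts)   # TCP handshake + TLS handshake

for rtt in (10, 80, 300):            # LAN-ish, mobile-ish, very high latency
    print(f"RTT {rtt:>3} ms: new conn ~{new_connection_cost_ms(rtt, 2):.0f} ms (TLS 1.2), "
          f"~{new_connection_cost_ms(rtt, 1):.0f} ms (TLS 1.3), "
          f"reused connection ~0 ms before the request can leave")
```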
> With HTTP/2, this problem is solved with streams, each stream corresponds to a message. Many streams can be interleaved in a single TCP packet. If a stream can't emit its data for some reason, other streams can take its place in the TCP packet.
This is where HTTP2 failed. It shoved everything into one single TCP connection, which works fine on LAN and LAN-like networks, and sucks balls in the real world. This is head-of-line blocking and was entirely predictable had the HTTP2 team bothered to talk to anyone who did networking.
It's part of the reason why I was greatly suspicious of QUIC, because it appeared like it was designed by the same people that thought http2 was a good idea.
However QUIC seems to be actually reasonable. I've yet to fully test it in real world scenarios, but it does offer promise for high-speed, latency-resistant data streaming. One day I'll re-write my TCP multiplexor to compare the performance.
I find it difficult to believe that the team that built the prototype for http/2 (SPDY), implemented it for Chrome, tested it on gazillions of customers around the world, didn't bother to talk to anyone who did networking.
Sure, you can argue that the tradeoffs they selected were the wrong tradeoffs and that they may have been biased by their narrow domain-specific goals (reducing latency to render an average web page). Perhaps they thought that people with really horrible packet loss would just fall back to http or use AMP or whatever.
But I cannot believe that the mistakes were caused by them just not being aware of how networks work or, worse, failing to talk with somebody that did
SPDY's header compression allowed cookies to be easily leaked. This vulnerability was well known at the time, so had they even asked an intern at Google Project Zero to look at it they would have been immediately schooled.
In their performance tests vs HTTP 1.1 the team simulated loading many top websites, but presumably by accident used a single TCP connection for SPDY across the entire test suite (this was visible in their screenshots of Chrome's network panel, no connection time for SPDY).
They also never tested SPDY against pipelining - but Microsoft did and found pipelining performed the same. SPDY's benefit was merely a cleaner, less messy equivalent of pipelining.
So I think it's fair to say these developers were not the best Google had to offer.
Another explanation: they did test it in other scenarios, but the results went against their hopes, so they 'accidentally' omitted those tests from the 'official' test suite. Very common tactic: you massage your data until you get what you want.
> But I cannot believe that the mistakes were caused by them just not being aware of how networks work or, worse, failing to talk with somebody that did
As someone who was part of the rollout of HTTP2 for a $large_website, I can confirm that "this will harm mobile performance" was outright and flatly rejected. This included people who were our reps on the W3C. I just had to sit there and wait for the real world metrics to come in.
"multiplexing will remove bottlenecks!"
"benchmarks prove that its faster!"
"you just don't understand how TCP works"
"the people at google are very smart, what do you know?"
"server push will reduce latency"
etc etc etc.
We even had the graphs of page size over time (going ever up) and average usable bandwidth (not keeping up, especially on mobile). None of that mattered until the rollout had a real world effect on our performance.
It doesn't matter. You can provide the numbers when asked by the proponents of HTTP2/3 'do you have proof of your claim??', they will just turn around and say your real world data is not valid or that they need a peer-reviewed article in Science.
> (...) they will just turn around and say your real world data is not valid or that they need a peer-reviewed article in Science.
This sounds like a bullshit conspiratorial excuse. If you have real world data and you aren't afraid of having peers looking through it, nothing prevents you from presenting it to peers.
So where is that data?
Instead, you just have vague, unsupported, unbelievable claims made by random people on the internet, as if that's any way to decide policy, and any faint doubt raised over that claim is met with conspiratorial remarks complemented by statements on how everyone around OP is incompetent except him.
I will go as far as to claim OP's assertion is unbelievable, to the point of sounding like bullshit. It's entirely unbelievable that people designing protocols for a multinational corporation whose bread and butter is stuff done over TCP connections were oblivious to how TCP works, and the most incompetent of them would bother to design the first major revision of HTTP. Unbelievable.
But hey, some random guy online said something, so it must be true!
We've banned this account for repeatedly breaking the site guidelines, and not just in this thread.
If you don't want to be banned, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future. They're here: https://news.ycombinator.com/newsguidelines.html.
Are you saying there's DDoS potential because UDP itself doesn't have a connection handshake? There's still one on top of UDP. The initial packet could be forged, but so could one from TCP (a SYN flood).
> It’s not an error in thinking to assume a good faith basis
It's nothing about faith here. I was looking at the evidence. The problem was that a lot of people I was working with had faith that SPDY was something that it wasn't.
They interpreted what I was saying either as an excuse to not change infra (the change was minimal), as "old think" because I was not a hot young fullstack node engineer, or as me trying to be more clever than Google.
Here's a third option: the five month old anonymous HN account claiming to know what the H2 designers were thinking of is wrong. How would you compare the likelihood of that to your two options?
The main problem you're talking of is head of line blocking due to packet loss. But packet loss as a congestion signal is nowhere near as common as people think, and that was already the case during the original SPDY design window. End-user networks had mostly been set up to buffer rather than drop. (I know this for an absolute fact, because I spent that same time window building and deploying TCP optimization middleboxes to mobile networks. And we had to pivot hard on the technical direction due to how rare packet loss turned out to be on the typical mobile network). The real problem with networks of the time was high and variable latency, which was a major problem for H1 (due to no pipelining) even with the typical use of six concurrent connections to the same site.
Second, what you're missing is that even a marginal improvement would have been worth massive amounts of money to the company doing the protocol design and implementation. (Google knows exactly how much revenue every millisecond of extra search latency costs). So your "marginally better" cut isn't anywhere near as incisive as you think. It also cuts the other way: if SPDY really had been making those metrics worse like one would expect from your initial claims about H2 performing worse than H1, it would not have launched. It would not have mattered one bit whether the team designing the protocol wanted it deployed for selfish reasons, they would not have gotten launch approval for something costing tens or hundreds of millions in lost revenue due to worse service.
Third, you're only concerned with the downside of H2. In particular HPACK compression of requests was a really big deal given the asymmetrically slow uplink connections of the time, and fundamentally depends on multiplexing all the requests over the same connection. So then it's a tradeoff: when deciding whether to multiplex all the traffic over a single connection, is the higher level of vulnerability to packet loss (in terms of head of line blocking, impact on the TCP congestion window) worth the benefits (HPACK, only needing to do one TLS handshake)?
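To make the HPACK-on-slow-uplinks point concrete, here's a rough sketch; every number in it (requests per page, header size, uplink speed, the assumed post-compression header size) is an illustrative assumption, not a measurement from that era:

```python
# Rough sketch of why request-header compression mattered on slow uplinks.
# All numbers are assumptions for illustration: 100 requests per page,
# ~1.5 KB of headers per request (cookie-heavy), a 250 kbit/s uplink, and
# HPACK's dynamic table shrinking repeated headers to ~50 bytes after the
# first request. The dynamic table is per-connection, which is why this
# benefit depends on multiplexing everything over one connection.

UPLINK_BITS_PER_S = 250_000
REQUESTS = 100
HEADER_BYTES = 1500
HPACK_REPEAT_BYTES = 50

uncompressed = REQUESTS * HEADER_BYTES
compressed = HEADER_BYTES + (REQUESTS - 1) * HPACK_REPEAT_BYTES

for label, total in (("uncompressed headers", uncompressed),
                     ("HPACK (assumed)", compressed)):
    print(f"{label:>20}: {total:>6} bytes up, ~{total * 8 / UPLINK_BITS_PER_S:.2f} s of uplink time")
```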
> The main problem you're talking of is head of line blocking due to packet loss. But packet loss as a congestion signal is nowhere near as common as people think
Surely packet loss due to poor signal quality is rather common over mobile networks and that packet loss still affects TCP's congestion window.
Admittedly anecdotal, but I just connected to a 5G network with low signal strength and it certainly seems to be the case.
> Surely packet loss due to poor signal quality is rather common over mobile networks and that packet loss still affects TCP's congestion window.
Two points:
1. It's not commonly realised that TCP is terrible on lossy networks, where terrible means it gets less than 10% of the potential throughput. It only becomes apparent when you try to use TCP over a lossy network of course, and most real networks we use aren't lossy. Engineers who try to use TCP over lossy networks end up replacing it with something else. FWIW, the problem is TCP uses packet loss as a congestion signal. It handles congestion pretty well by backing off. But packet loss can also mean the packet was actually lost. The right responses in that case are to reduce the packet size and/or increase error correction, but _not_ decrease your transmission rate. Thus the two responses to the same signal conflict.
2. Because of that, the layer two networks the internet uses have evolved to have really low error rates, which is why most people don't experience TCP's problems in that area. As it happens just about any sort of wireless has really high error rates, so they have to mask it. And they do, by having lots of ECC and doing their own ACK/NAKs. This might create lots of fluctuations in available bandwidth - but that is what TCP is good at handling.
By the by, another reason we have come to depend on really low error rates at layer 2 is that TCP's error detection is poor. It lets roughly one bad packet through in every 10,000. (The 16-bit TCP checksum is very weak.) You can send 100,000 packets a second at 1Gb/sec, so you need to keep the underlying error rate very low to ensure the backup you are sending to Backblaze isn't mysteriously corrupted a few times a year. <rant>IMO, we should have switched to 64 bit CRCs decades ago.</rant>
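To put a number on point 1, here is a small sketch using the well-known Mathis et al. approximation for loss-limited TCP throughput; the MSS, RTT and loss rates are assumed example values, not measurements:

```python
# Sketch of point 1 above: the Mathis et al. approximation for loss-limited
# TCP throughput, rate <= (MSS / RTT) * (1 / sqrt(p)). Because every loss is
# treated as congestion, the achievable rate collapses as loss grows, no
# matter how much raw capacity the link has. MSS and RTT are assumptions.

from math import sqrt

MSS = 1460     # bytes per segment (typical Ethernet-sized MSS)
RTT = 0.080    # seconds, a mobile-ish round trip

for loss in (0.0001, 0.001, 0.01, 0.05):
    ceiling_bytes_per_s = (MSS / RTT) * (1 / sqrt(loss))
    print(f"loss {loss * 100:5.2f}%: ~{ceiling_bytes_per_s * 8 / 1e6:6.2f} Mbit/s ceiling per connection")
```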
It isn't. Or at least wasn't back in the UMTS / early LTE era that's being discussed; I got out of that game before 5G.
The base stations and terminals are constantly monitoring the signal quality and adjusting the error correction rate. A bad signal will mean that there's more error correction overhead, and that's why the connection is slower overall.
Second, the radio protocol doesn't treat data transmissions as a one-and-done deal. The protocols are built on the assumption of a high rate of unrecoverable transmission errors. Those rates would be way too high for TCP to be able to function, so retransmissions are instead baked in at the physical protocol level. The cellular base station will basically buffer the data until it's been reliably delivered or until the client moves to a different cell.
And crucially, not only is the physical protocol reliable but it's also in-sequence. A packet that wasn't received successfully shows up just as a latency blip of one (radio) round trip during which no data at all arrives at the client, not as packet loss or reordering that would be visible to the TCP stack on either side.
Other error cases you'd get are:
- Massive amounts of queueing, with the queues being per-user rather than per-base station (we measured up to a minute of queueing in testing). The queue times would translate directly to latency, completely dominating any other component.
- A catastrophic loss of all in-flight packets at once, which we believed was generally caused by handover from one cell to another.
Everyone who is using a phone knows that what you are saying is not true. Otherwise we would not experience dropped calls, connection resets and mobile data being unavailable. Mobile networks are unreliable and you can't paper it over with some magic on TCP or HTTP2/3 level.
EDIT: better yet, anyone can just use network tools on their smartphone to see for themselves that mobile networks do drop TCP packets, UDP packets and ICMP packets very freely. Just check yourself!
Huh? I'm not talking about papering over it on the TCP or HTTP/2 level. I'm talking about the actual physical radio protocols, and my message could not be more explicit about it.
If you don't understand something, it'd be more productive to ask questions than just make shit up like that.
You made a claim that packet loss in mobile networks is not a common occurrence. This claim is patently wrong and anyone with a smartphone can see for themselves.
In reality it's quite hard for somebody to observe that themselves using just their smartphone. The only way they can do it is by getting a packet trace, which they won't have the permissions to capture on the phone, nor the skill to interpret. (Ideally they'd want to get packet traces from multiple points in the network to understand where the anomalies are happening, but that's even harder.)
In principle you could observe it from some kind of OS level counters that aren't ACLd, but in practice the counters are not reliable enough for that.
Now, the things like "calls dropping" or "connections getting reset" that you're calling out have nothing to do with packet loss. It's pretty obvious that you're not very technical and think that all error conditions are just the same thing and you can just mix them together. But what comes out is just technobabble.
Modern mobile networks use exactly the same protocol to carry voice and data, because voice is just data. When your call is fading or being intermittent, packets are being dropped. In that situation the packets of your mobile data, for instance a web page being loaded by a browser, are also being dropped. Mobiles drop packets left and right when reception deteriorates or there are too many subscribers trying to use the shared radio channel. And HTTP2 or 3 can't do much about it because it's not magic: if you lose data you need to retransmit it, which TCP and HTTP/1.1 can do just as well. BTW, UMTS, which you claim you were so professionally involved in, also uses a converged backbone and carries both data and voice the same way, so you should have known it already lol :)
But I am not saying that HTTP/2 or HTTP/3 are magic that fix packet loss in mobile networks.
I'm saying that from the point of view of either endpoint, there is very little packet loss in mobile networks, because of error correction and retransmissions being handled at the physical layer. This is the third time I've written it. Both previous times you've not answered that, and instead made up a strawman about HTTP/2 and magic. Why do you keep doing that?
Do you not believe that cellular radio protocols do error correction? Or that they do retransmissions at that level, rather than just try transmitting each packet once and then give up?
The parent is just moving goalposts. The whole idea behind multiplexing data streams inside a single TCP connection was that in case of a packet loss you don't lose all your streams. But it doesn't work in practice, which is not really surprising when you think about it. When you have multiple TCP connections it's less likely that all of them will get reset due to connectivity issues. Whereas with multiplexing, when your single TCP connection is reset, all your data flow stops and needs to be restarted.
A problem with the radio signal or with a wire would affect all TCP connections at the same time. It does not matter if it is one or many, the outcome will be the same. I believe in real life this is the majority of cases. A problem affecting just one TCP connection out of many on the same link must be related to the software on the other side, not the network itself.
Umm... Like, pretty clearly H2 wasn't meant to be the Holy Grail? Not sure where you're getting that from. (Though as an aside, it feels like you've now backtracked from "H2 is a failure that's worse than H1" through "H2 was a marginal improvement" to "H2 wasn't the holy grail".)
It didn't need to be the Holy Grail to be worth creating. It just needed to be better than H1 was, or better than what H1 could be evolved to with the same amount of effort. And likewise, it's totally possible for H2 to be better than H1 while also H3 is better than H2.
You appear to be confused by the idea that somebody would ever create something other than the absolute best possible thing. Why create H2 first, rather than just jump straight to H3?
One obvious reason is that H2 is far less complex than H3 thanks to piggybacking on TCP as the transport mechanism. The downside is that you then have to deal with all the problems of TCP as well. At the time, there would have been a lot of optimism about eliminating those problems by evolving TCP. An extension for connection migration, an extension for 0-RTT connection setup, an extension for useful out of order data delivery, etc.
It was only a bit later that it became clear just how ossified TCP was. Up until then, one could have told a story about how Microsoft controlled the primary client operating system, and were not really motivated to implement the RFCs in a timely manner, and that's why the feature rollouts were taking a decade. In the 2010s, it became clear that evolution was impossible even when all the coordination and motivation was there. See TCP Fast Open for a practical example.
So around 2015-ish you see big tech companies switch to UDP-based protocols just so that they can actually do protocol evolution.
The other plausible reason is that it's going to be far easier to get the initial work for making H2 funded, since the scope is more limited. And once you show the real-world gains (which, again, would have been there since H2 is better than H1), you then have the credibility to get funding for a more ambitious project.
> It was only a bit later that it became clear just how ossified TCP was.
It has almost _always_ been tied to the OS, and moreover to the OS of every node in between you and the webpage. That was the most frustrating thing: there were and are solutions for making TCP more latency-resistant while also getting better throughput and dealing with "buffer bloat", which was a big thing at the time.
I was working in transcontinental bulk transfer, which used to mean that things like aspera/fasp were the de facto standard for fast/latency/loss-resistant transport. So I had seen this first hand. I suspect it was probably why I was dismissed, because I wasn't a dyed-in-the-wool webdev.
That's quite the history lesson, thx for the info.
I agree that H2 is de facto better than H1, and easier to implement when compared to H3. However, I'll call out the 2 biggest time sinks of the RFC: stream prioritisation and server push. Both had narrow application and incomplete/inefficient specification. H3 seems to have ditched both. My question is, how did this ever end up in the final RFC? Both seem like the kind of thing that could have been easily disproved in SPDY deployments, or just by asking people doing HTTP for a living.
Oh, so typical: moving goalposts. Once your little project failed to deliver, you claim that it wasn't really meant to provide a revolutionary improvement.
> You have two options: -highly accomplished and highly experienced engineers were actually too stupid (...)
You're making it quite clear you are the type of person who is extremely quick to accuse everyone and anyone of being incompetent in the absence of evidence, or in spite of evidence.
You do not need to Google too hard to find tons of open-source benchmarks of real world servers showing off performance gains from switching to HTTP/2 and HTTP/3.
But here you are, claiming everyone is incompetent and that their work was bad. In spite of all the evidence.
It's clear that you have nothing relevant to say about the topic and no evidence to even suggest your beliefs have a leg to stand on.
People react to incentives. Windows got PowerShell instead of an improved cmd.exe because improving or fixing existing things does not matter in the yearly promotion package talks, but new stuff does.
That's actually a hilarious example (in the "hilariously wrong" sense), because Jeffrey Snover had to accept a demotion to make PowerShell. He was told it was a useless project, not fit for someone at his level to work on, and bumped down a level when he persisted.
Interesting, because a Windows kernel hacker told it differently [0]:
These junior developers also have a tendency to make improvements to the system by implementing brand-new features instead of improving old ones. Look at recent Microsoft releases: we don't fix old features, but accrete new ones. New features help much more at review time than improvements to old ones.
(That's literally the explanation for PowerShell. Many of us wanted to improve cmd.exe, but couldn't.)
(I'd heard the story in person from Snover years before that, so it definitely wasn't just sour grapes from somebody about to leave the company.)
I can totally believe that MS had a culture where making improvements to cmd.exe was impossible to justify, but that doesn't actually mean that a greenfield project in the same space would be considered promo-worthy either. It seems more likely that everything in that space was viewed as career suicide and you needed to be working on Internet stuff, just like at today's bigco, where anything except AI has the same problem.
Wouldn't it be more likely that maintaining backwards compatibility (an important Windows trait, after all) while also implementing all of PowerShell would be a considerably more difficult task than just making a new scripting environment?
Yes, too much to ask of a multi-trillion-dollar company, especially since this company was until recently well known for maintaining a crazy level of backward compatibility.
But… this is backwards compatibility. The command prompt is backwards compatible. Sometimes the best way to ensure that is to preserve in amber and replace.
cmd.exe is a terminal emulator and PowerShell is a fully-formed scripting language (that Windows desperately needed). This analogy doesn't work the way you think it does.
This is mixing up cmd.exe, which is the DOS-like (but not DOS) scripting interpreter, and conhost.exe, which is the actual old terminal emulator/console that the kernel would spin up whenever you ran cmd.exe.
cmd.exe can be used with a shitload of command line utilities and is a full-blown scripting language, just less powerful than Unix bash. The analogy works just fine.
> Your error in thinking is that you assume that they really wanted to build something better instead of building something new that would propel their careers.
How do you explain that some HTTP/2 server implementations handle an order of magnitude more connections than their HTTP/1.1 implementations? Is this something you downplay as an accidental benefit of doing something to propel their careers?
HTTP/1.1 (RFC2616) specified a limit of two connections per server, which most browsers initially interpreted to mean per-origin, which still led to quite a lot of unnecessary blocking. I think browsers eventually decided to increase that to 6, but as evidenced by viewing the "Network" tab of the Developer Tools on a lot of modern pages, it is in fact not very uncommon anymore to have a substantially larger number of resources per origin.
While it's true that HTTP/2 can be worse than HTTP/1.1, I don't think it usually is; it's pretty easy to demonstrate just how much better HTTP/2 is over a typical Internet connection. SPDY and HTTP/2 were clearly better and rarely worse, whereas HTTP/3 is almost never worse (maybe when interplay with TCP rate control is poor?) On very unreliable and very high latency connections it can definitely go the other way, but statistically my experience is that a large majority of cases see an improvement on plain-old HTTP/2.
That said, for all of the complexity HTTP/2 adds, it is kind of a nice protocol. I like that all of the "special" parts of the HTTP/1.1 request and response were just turned into header fields. HPACK is a minor pain in the ass, but it is pretty efficient. You get multiple concurrent bidirectional streams per connection and they can each send headers/trailers. There's even the MASQUE protocol, which enables unreliable datagrams with HTTP/3. Put together this makes HTTP/2 and HTTP/3 amazingly versatile protocols that you can really use for all kinds of shit, which makes sense given the legacy of HTTP/1.1.
There are some pitfalls even still. For example, all of this added complexity has made life a bit harder for load balancing and middleboxes. TCP level load balancing or round robin is basically defeated by using HTTP/2 multiplexing, without the client being explicitly cautious of this.
This limit is completely artificial. Let's limit HTTP/2 to a maximum of 6 multiplexed streams per connection and see how it fares against HTTP1.1 with 6 TCP connections to the server. All of a sudden HTTP1.1 wins :)
It’s also a specious argument anyway. The six connection limit isn’t purely artificial: opening and tracking TCP connection state is expensive, and happens entirely in the kernel. There’s a very real cap on how many TCP connections a machine can serve before the kernel starts barfing, and that cap is substantially lower than the number of multiplexed streams you can push over a single TCP connection.
You’re also completely ignoring the TCP slow start process, which you can bet your bottom dollar will prevent six TCP streams beating six multiplexed streams over a single TCP stream when measuring latency from the first connection.
You’re making the claims, feel free to share the data you already have.
Oh and while doing that, also feel free to respond to the rest of comment that outlines why opening an unbounded number of TCP connections to a server might be a bad idea.
I'm sure it can. However in my real world use cases, HTTP/2 and 3 work better when available for me, so I'm glad to have them. I'm sure this comes as no surprise to most people since SPDY went through a lot of testing before becoming HTTP/2, but just because it doesn't win on every point of the graph does not mean it doesn't still win overall.
Besides that, more than half my point is that I like HTTP/2 and HTTP/3 for the featureset, and you can't get that by increasing the max connection limit for HTTP/1.1.
> That's not really head of line blocking, because in HTTP1.1 you'd just open up another connection.
All browsers cap the number of connections which are opened to a single domain (I think on IE this was 4, and has increased to 10, but it's not a large number).
Each of those connections needs its own TCP and TLS handshake, making a total of 6 trips. Also, although this didn't take off, h2 had push promise, which could have been a big help.
Yeah, HTTP/2 push is so great that Chrome removed it.
Straight from the horse's mouth: "However, it was problematic as Jake Archibald wrote about previously, and the performance benefits were often difficult to realize"
https://developer.chrome.com/blog/removing-push
> This is head-of-line blocking and was entirely predictable had the HTTP2 team bothered to talk to anyone who did networking.
>
> It's part of the reason why I was greatly suspicious of QUIC, because it appeared like it was designed by the same people that thought http2 was a good idea.
Agreed - I've always found them to be living in a world of their own, which doesn't match the realities of actual networking out there.
In fact, the very same team, after touting QUIC over UDP as the revolutionary protocol, is now complaining that the real world realities aren't matching their expectations and is proposing to do QUIC over TCP. Here's that proposal: https://mailarchive.ietf.org/arch/msg/quic/N82WBOa_RJIb4cPQw...
One thing in favor of QUIC is that it had a lot of other people also working on it, not just the HTTP2 team. People like Daniel Stenberg who have worked in networking for ages and are not beholden to the incentive structures at Google.
It is impossible to meaningfully evaluate these different protocols against each other without considering the "use case". For example, if someone is requesting 100 pages from a site, all hosted on the same domain, and nothing else [1], then HTTP/1.1 is the best of the three. The requests can be sent over a single connection. HOL blocking is not an issue. I have tested this exhaustively outside the browser for over a decade. (People promoting HTTP/2 and HTTP/3, and HN commenters submitting snarky replies every time I mention the utility of HTTP/1.1 pipelining, apparently have not tested this at all.)
[1] This is text retrieval. The www is not a handful of browsers controlled by companies that seek to profit from advertising. It was and still is a facility that provides for (hyper)text retrieval.
HTTP/2 and HTTP/3 come from an advertising company and a CDN that expect ads hosted on different domains in every page. That is what _they_ want. Is that what _www users_ want? We do not know, because www users were never asked. We do know that www users do not like ads. When given the choice, they say, "No."
The advertising company keeps repeating this HOL blocking as a problem of HTTP/1.1, and now it gets parroted everywhere, and few even know what it means. HOL blocking is not a problem if the www user is not requesting ads and tracking from different domains. How many www users actually want to request ads and tracking and participate in telemetry? The so-called "tech" company might try to argue that all of them do, or more commonly that, "They do not care." Meanwhile its own employees call ad blocking a "right of passage" (direct quote from a reply I got on HN).
The truth is that when evaluating these new HTTP protocols, it matters what the www user is trying to do. Many users are simply trying to retrieve information as text. But the advertising company believes that www users only want to do what is in the interest of the advertising company: let the advertising company's browser automatically send requests for ads, tracking and telemetry purposes. (Except for employees of so-called "tech" companies profiting from the sale of online ad services. They are exempted and free to block the ads and tracking.)
> entirely predictable had the HTTP2 team bothered to talk to anyone who did networking
I think it's good that things have evolved rather than being stuck behind naysaying. Sure, there were pitfalls/hurdles that could have been avoided, but it's not clear that maintaining perfection every step of the way would have got us to QUIC.
This is not some pitfall. This is a major problem in tech and IMHO totally unacceptable. The same things are re-invented all over again and again without actually making much of an improvement.
The result of this is that we get constant upgrades and "improvements", but overall reliability, UI usability, latency and performance in general are actually worsening every year.
Reliability: Gut feeling. I'm a support engineer. The modern DevOps/Cloud stack is just garbage. People only believe they save money with it because they don't factor in what they will pay me.
UI usability: This is a discoverable, good UI: https://jbss.de/bilder/scr118d.gif . You can instantly tell what is a button, where you can enter text and what is inert. You can even read out all the hotkeys from the screenshot. Now, without hovering over it with your mouse, check the header of this post (the grey line) and tell me which strings will take you to a place (= are a link), which do an action (= are a button) or are inert. They all look the same and you need to investigate it first with the cursor. And even then you can't tell the buttons ("flag", "vouch") apart from the links ("parent", "context").
Latency and performance: I played RTS games for 15 years. Despite getting old and rusty, I easily hit 120 APM during office work. I can tell if an application can handle it. Things like the Windows Start menu were fast enough for me until about Vista, but now they aren't - I need to wait a bit or I click before it has popped up.
How about key-press latency? Why does an Apple II have practically zero latency while your modern monster desktop visibly lags in a fucking text editor? Despite the Apple II being literally millions of times slower than a modern rig.
The specific thing they’re talking about is connection migration (and resumption) through multiple disconnection events; HTTP/1 and 2 do not offer a similar feature.
You will need to back up that accusation, because now you are accusing Daniel Stenberg of being a liar. On paper HTTP/3 should perform much better than HTTP 1.1 and especially HTTP 2 on links with high packet loss, since TCP handles packet loss very poorly.
> you folks just don't even bother to learn TCP first before shitting on it.
Let me stop you right there. I promise you you're not the only person who really knows how TCP works. The people who made HTTP2 and HTTP3 are clearly smart, knowledgeable folks who have a different perspective than you do. It's OK to disagree with them, but it's a bad look for you to assume that they're ignorant on the subject.
I didn't assume they are ignorant. I assumed that they are fraudulent. They knew they couldn't really improve existing protocols because it's simply not possible, but moved forward anyway for personal gain. Just like everything from Google for the last 20 years. You make a big splash with a new and 'revolutionary' 6th version of instant messaging, get your promo and move on. And here we are: HTTP2 ran its course, time for HTTP3, because we need the promotions and the clout of 'innovators'.
They didn't fail. They got their promotions. It's you, the end user, who is left holding the bag. But fear not, HTTP3 is on the horizon and this time it's going to be glorious!
That's just plain wrong. I commented in more depth in https://news.ycombinator.com/item?id=39709591. In short, TCP treats packet loss as a congestion signal and slows down. If the packet loss was due to congestion that's absolutely the correct response and it increases TCP's "goodput". But if the packet was lost due to noise then it has the opposite effect and goodput plummets to a fraction of what the link is capable of.
That HTTP2 is the worst of the bunch is a given. But HTTP3 should on paper be able to handle packet loss better than HTTP1.1 and way better than HTTP2.
And SACK does not seem to help under my real life workloads. Maybe poor implementations. I don't know.
Just a few years ago HTTP2 was the best thing since sliced bread and any criticism was silenced. This begs the question: if HTTP2 was so great, then why did they come up with HTTP3? SACK is not a silver bullet, because when you have a high-latency, high-loss link then nothing really helps. The difference is that HTTP2/3 folks like to deny reality and claim that they can do better when in fact they can't.
I don't have high latency; I have high packet loss and high latency on some packets, but not most. And that is something TCP cannot handle without breaking down totally, but some UDP-based protocols can handle it just fine. I don't know about HTTP3 though, that might also fail under those circumstances.
I suspect your case is not that the packets are simply dropped but that the TCP connections are reset. Look up your TCP stack statistics to verify. If that's the case, try to find out if the resets are made by your side, the source or the intermediaries.
I was actually curious why SACKs don't resolve the issue, but according to https://stackoverflow.com/questions/67773211/why-do-tcp-sele...
> Even with selective ACK it is still necessary to get the missing data before forwarding the data stream to the application.
Yes, TCP provides the guarantee that your application will always receive data in the same order it was sent. Your kernel will do the necessary buffering and packet reordering to provide that guarantee.
So SACK might reduce packet resends, but it doesn't prevent the latency hit that comes from having to wait for the data that went missing. Even if your application is capable of handling out-of-order data, or is simply capable of tolerating missing data.
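A toy model of that point (not real TCP, just the in-order delivery rule): SACK lets the receiver hold on to out-of-order segments, but the application still only gets the contiguous prefix, so one missing segment stalls everything queued behind it.

```python
# Toy model of in-order delivery (not real TCP). SACK means out-of-order
# segments are kept in the receive buffer instead of being re-sent, but
# recv() still only hands the application the contiguous prefix.

def deliverable(received, next_expected):
    """Return the bytes an application could read, given buffered segments."""
    out = b""
    while next_expected in received:
        out += received[next_expected]
        next_expected += 1
    return out

buffered = {0: b"seg0 ", 2: b"seg2 ", 3: b"seg3 "}   # segment 1 was lost in transit
print(deliverable(buffered, 0))   # b'seg0 ' -- segments 2 and 3 sit buffered, undelivered
buffered[1] = b"seg1 "            # the retransmission finally arrives a round trip later
print(deliverable(buffered, 0))   # b'seg0 seg1 seg2 seg3 '
```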
It's possible to build something similar on top of TCP, see Minion [0] for an example. There are multiple reasons why this is less practical than building on top of UDP, the main two being, from my perspective: (1) this requires cooperation from the OS (either in form of giving you advanced API or having high enough privilege level to write TCP manually), and (2) this falls apart in presence of TCP middleboxes.
Yes, bytes from a *logical* stream need to be delivered in order. But in HTTP2 (3) multiple logical streams are multiplexed on top of one physical TCP (QUIC) connection. In the HTTP2 case this means that a dropped segment from logical stream A will block delivery of subsequent segments from a different logical stream B (which is bad for obvious reasons). QUIC doesn't have this problem, which is a large part of its value proposition.
Except it doesn't work in practice and real world data proves it. Multiplexing streams inside of a single TCP connection doesn't magically make your data link less prone to dropped packets or high latency.
I am curious why the kernel does not allow it; there could be an API that gives you fragments of the stream as "events" like {slice: [10000, 10100], data: <blob>} and lets the application have a peek at future data.
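For what it's worth, a sketch of what such an API could look like; nothing like this exists in mainstream kernels, and the names (SegmentEvent, handle_event) are made up purely for illustration:

```python
# Hypothetical sketch of the wished-for API: expose byte ranges that have
# already arrived beyond the in-order delivery point. Names are invented.

from dataclasses import dataclass

@dataclass
class SegmentEvent:
    start: int      # stream offset of the first byte in this fragment
    end: int        # stream offset one past the last byte
    data: bytes

def handle_event(ev: SegmentEvent) -> None:
    # An application that tolerates gaps (e.g. independent logical streams)
    # could start using bytes [ev.start, ev.end) immediately instead of
    # waiting for the kernel to fill the hole in front of them.
    print(f"preview of bytes {ev.start}..{ev.end}: {len(ev.data)} bytes usable early")

# The kind of event the imagined API would deliver:
handle_event(SegmentEvent(start=10_000, end=10_100, data=b"\x00" * 100))
```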
>In the diagram below, request 2 cannot be sent until response 1 arrives, considering that only one TCP connection is used.
That's wrong: request 2 can be sent before response 1 arrives. The blocking is the inability to receive response 2 until response 1 arrives. With HTTP/2, responses can be unordered.
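For the curious, this is roughly what HTTP/1.1 pipelining looks like at the socket level; example.com is just a placeholder host, and (as the replies below point out) plenty of servers and middleboxes mishandle pipelined requests, so treat this as an illustration rather than something to ship:

```python
# Minimal sketch of HTTP/1.1 pipelining: both requests are written before any
# response has been read; the server must answer them in order.

import socket

HOST = "example.com"   # placeholder host
req = f"GET / HTTP/1.1\r\nHost: {HOST}\r\nConnection: keep-alive\r\n\r\n".encode()

with socket.create_connection((HOST, 80), timeout=5) as s:
    s.sendall(req + req)          # request 2 goes out before response 1 arrives
    s.settimeout(2)
    chunks = []
    try:
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    except socket.timeout:
        pass                      # keep-alive connection idles once both responses are in

responses = b"".join(chunks)
print(responses.count(b"HTTP/1.1 "), "responses received, in request order")
```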
In practice it's not wrong. HTTP 1.x servers, and especially "middleboxes", get this badly enough wrong that when you ship this feature ("HTTP 1 pipelining") your users will report a low but persistent error rate. Oops, the password request and image download were kinda sorta fused together.
You can (and some very minor browsers do) just insist it's not your bug and then painstakingly reject every incident where this is implicated, or you can just accept that this was never going to work in practice, as the document you're disparaging does.
Do you get a mostly eliminated error rate if you only use pipelining over TLS, or have server operators situated their defective middleboxes behind their TLS termination?
I guess it's about things like UserGate; they generate MITM certificates on the fly and decrypt traffic. On the other hand, if they can't handle http1, why would they handle http2?
Not really, because broken proxies include anti-virus software that sits at the endpoint, as well as corporate TLS-MITMing boxes.
Browsers have spent years trying to find a way to deploy pipelining, but nothing really worked. You can't even allowlist based on known-good User-Agent or Via headers, because the broken proxies are often transparent. It's also very hard to detect pipelining errors, because you don't just get responses out of order, you may get response bodies mangled or interleaved.
The idea is truly dead. With H2 being widely supported now, and having superior pipelining in every way, there's no incentive to retry the pain of deossifying H1.
To be very clear/direct, the HTTP pipelining you're referring to never fully worked and was always hit with problems with middleboxes, problems with servers, significant security vulnerabilities around request smuggling and more besides.
Every project I've been involved with that tried to use them eventually turned them off. So the quote is true in practice.
The biggest difference is that HTTP/3 (and to a lesser extent, 2) is designed and implemented entirely and exclusively for for-profit business use cases, to the detriment of all other use cases and specifically longevity. Since there are no HTTP/3 implementations that allow the use of non-CA TLS or even just plain text, in order to host a visitable website for major browsers you have to get continued re-approval from a third party corporation every ~3 months. This means that websites using HTTP/3 will have very short unmaintained lifetimes. HTTP/3 sites will not last many years. They'll simply become unvisitable in a fraction of a year if there's ever any problem in the huge chain of complexity that is acme2 (and when acme2 is deprecated and turned off a huge portion of the web will die, much more than died when acme1 was turned off on LE).
There is one feasible change that can be made now: Firefox needs to change the flags in its HTTP/3 library build so that self-signed certs are allowed.
Nope. It should be possible to set up infrastructure to serve web content without being beholden to a certificate authority. By all means there can be a bunch of warnings whenever anyone tries to access it but it should still be possible.
But you can: you can do whatever horrors you want, privately. You want your own browser with specific compile options? Then do it and deploy it on your private perimeter.
However, if you want to expose something publicly, then your own ideas matter less than the interests of your clients (at least, this is how I see things): so exposing something to the internet without TLS, or with a self-signed / private CA certificate, is something that should be denied (those three propositions are the same, if you think about it).
These kinds of security mindsets exist because browsers have been made extremely insecure these days by encouraging and even setting the default behavior to automatically execute random programs downloaded from random places while at the same time exposing bare metal functionality for speed.
This incredibly insecure business use case has made it so using a browser for merely surfing the web is dangerous and that's why CA TLS is required. But if you just turn JS off... it's fine.
There is so much more to the web than just business web applications selling things or institutional/government websites with private information. There are human people on the web and their use cases matter too. HTTP/3 disregards them. It's fine for now but when Chrome removes HTTP/1.1 for "security reasons" it's not going to be fine.
These kinds of security mindsets exist because, as a network architect, I know at least a couple of ways to put myself between you and your destination, and from there to read and/or rewrite your unencrypted data. Of course, if I manage any network between you and your destination, things get a lot more easier.
I do not want my coworkers to do that on any of my communications, nor my family, nor anybody.
The only known way to prevent this is encryption.
And no, it has nothing to do with browsers : the same applies to my emails, ssh, IRC and whatever.
Yes. And there's almost zero risk from such MITM attacks (ARP poisoning? DNS poisoning? etc.) when you turn off JavaScript and don't blindly execute all programs sent to you as an end user.
The problem with MITM attacks is when you execute programs or exchange money or other private information. The risks when viewing public documents that don't require execution is minimal. That's my point. One use case "web app stores" ruins everything for everyone by requiring the mindset you advocate for as browser defaults. But the entire justification goes away if the end user just turns off JS auto-execute. It's not intrinsic to all use cases for the web or even most.
EDIT: Mentioning wikipedia is missing the point. Of course there are cases where CA TLS should be used. I am not denying that. I am saying there are many cases where CA TLS makes things fragile and short-lived and where it is not needed: like personal websites run by a human person. And these use cases are not invalidated by the existence of yet another corporate person (wikimedia).
> The risks when viewing public documents that don't require execution is minimal.
If you’re living in a well developed country with strong privacy laws, you might have a point. But most of the people in the world don’t, and in many places simply looking at LGBT communities can land you in jail.
Then there’s places like the U.S. with multiple states currently doing their level best to criminalise so much as thinking about an abortion. I don’t see why those states would be above scanning people’s clear text browsing habits to any signs of a possible abortion, and using it as evidence of an illegal abortion having been committed or about to be committed. They’ve certainly jailed women for less (even while pregnant).
Just because you’re among a group of people that is lucky enough to have no worries about being oppressed, or discriminated against, doesn’t mean everyone has that luxury. Encryption is good for everyone. I don’t want anyone being able to easily know what I do online, because I have no idea who those people are, or what their motives might be, and quite frankly I don’t care. I just don’t want them rummaging around in my life looking for opportunities to exploit me or others.
Let's talk again the first time a US state-based CA revokes a cert under pressure because an abortion clinic's site is against some state's law. Then you'll really want that HTTP/1.1 back. If we go CA TLS only, it just means there's a single point of failure/censorship. HTTP+HTTPS is robust against censorship in a way centralized CA-HTTPS-only can never be.
So we’re just going to ignore the fact that most websites these days are co-located on shared IP addresses, and that there are perfectly good ways of encrypting the TLS SNI header?
So we squeeze a tiny bit more performance out of existing networks, then that immediately becomes normal again. Except now we have protocols that are difficult to implement, debug and understand.
> the UDP protocol wasn't optimized by routers and operating systems over the last decades due to its low usage, making it comparatively slower than TCP
I haven't read the article yet but I think that means that UDP was used less than TCP and so routers/operating systems didn't optimize for it as much as they did for TCP. Hope this helps.
There's nothing to optimize with UDP: you put a datagram on the wire and off it goes. There's no sequence number like in TCP to re-order and construct a stream on the receiving side. There is no stream, it's UDP. You put a datagram on the wire and that's it. There's no SYN/ACK either, so no congestion control in routers, no back-off or anything.
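To illustrate the fire-and-forget nature of that: sending a UDP datagram really is a single call with no connection state; the address 127.0.0.1:9999 below is just a placeholder and nothing needs to be listening.

```python
# A UDP send is one fire-and-forget datagram: no handshake, no sequence
# numbers, no retransmission state. Nothing needs to be listening on the
# placeholder address for the sendto() call itself to succeed.

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(b"hello", ("127.0.0.1", 9999))   # one datagram on the wire, that's it
sock.close()
```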
Most of the complexity actually comes from encryption. If you don't need encryption, HTTP/1.1 is great. Especially since unencrypted HTTP/2 and 3 are usually not supported.
HTTP/2 is sometimes considered a mistake.
HTTP/3 is certainly more complex than HTTP/1.1, but that's in large part because it is actually several protocols in one. It replaces TCP with QUIC and therefore implements its features. It also has encryption built-in, so it also provides some of TLS features.
It is based on UDP, but ideally it should be QUIC/IP; the only reason UDP is in the middle is to facilitate adoption.
So if you consider HTTP/3 with builtin TLS vs HTTP/1.1+TLS+TCP, I don't think there is much of a difference in complexity.
It's more complex for server builders and client library builders (libcurl, netty, etc.). For web application developers, it's essentially zero effort if you use decent cloud based hosting.
We have our stuff in Google Cloud. I just launched a website (while SpaceX was launching a rocket) there. Simple bucket behind our load balancer. It serves http3 if your browser can handle it. If you check with curl (which can't) it falls back to http2. Our API runs there as well and a few other things. Just works. It's not even a configuration option. It's just part of the package.
Most of this stuff is either necessary complexity or useful complexity. Running without TLS is not really something you should be doing over a public network. And some people would argue even on a private network. So that's necessary complexity.
UDP vs. TCP is a no-brainer as well for mobile and roaming type use cases. Just a lot easier to deal with via UDP. With TCP you have to deal with connections timing out, connection overhead, etc. With UDP, which is connectionless, switching networks is a lot less dramatic.
And then there's the notion of not needing multiple connections to download/stream multiple things. Since UDP has no connections, HTTP3 multiplexes its own notion of "connections" on top of that. So you are not constrained by browsers limiting you to just 4 or 8 connections per website (or whatever the number is these days). A bit more complex to implement but useful.
That's a pretty silly take. You're basically asserting that implementation difficulty is always the main priority, and there can never be any worthwhile tradeoffs that could offset that complexity.
But in reality, the world probably only needs a few dozen H3 implementations, just like there are only tens of production TCP stacks. But those implementations will be used by billions of people, hundreds of billions of machines, and handle effectively all data transmission of the entire humanity.
The leverage is massive, even the most minor improvements will be able to pay off any level of engineering effort.
I don't really see how one leads to another, could you elaborate? Looking at these protocols, it seems to me that later versions of HTTP attempt to address errors resulting from the overly simplistic design of HTTP/1.1.