Certainly the web can mostly coast indefinitely. There are webpages from decades ago that still work fine, even ones that use JavaScript. The web is an incredibly stable platform, all things considered. In contrast, it's hard to get a program that links against a ten-year-old version of zlib running on a modern Linux box.
> Certainly the web can mostly coast indefinitely.
I'm not sure about that for anything besides static resources, given the rate at which vulnerabilities are found and how large automated attacks can be, unless you want an up-to-date WAF in front of everything to be a prerequisite.
Well, either that or using mTLS or other methods of only letting trusted parties access your resources (which I do for a lot of my homelab), but that's not the most scalable approach.
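As a concrete sketch of that "only let trusted parties in" approach, here is roughly what mTLS gating looks like in an nginx server block (hostname, paths, and CA file are placeholders, not from the comment above):

```nginx
server {
    listen 443 ssl;
    server_name internal.example.com;   # hypothetical homelab hostname

    ssl_certificate     /etc/nginx/certs/server.crt;
    ssl_certificate_key /etc/nginx/certs/server.key;

    # Require a client certificate signed by this private CA;
    # anyone without one is rejected during the TLS handshake.
    ssl_client_certificate /etc/nginx/certs/homelab-ca.crt;
    ssl_verify_client on;
}
```

The upside is that unauthenticated traffic never reaches the application at all; the downside, as noted, is that distributing client certificates doesn't scale well beyond a small set of trusted users.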
Backend code does tend to rot a lot, as Log4Shell showed: everything was okay one moment and then BOOM, RCEs all over the place the next. I'm all for proven solutions, but I can't exactly escape needing to do everything from OS updates to language runtime and library updates.
this advantage -- the web's great forward compatibility -- has been "taken care of" by application-layer encryption, deceitfully called "transport layer" security (tls)
The web is the calm-looking duck that is paddling frantically underneath. Would you want to be using SSL from the 90s, or to have IE vs. Netscape as your choice? Nostalgia aside!
Yeah, but you can just continue to use HTTP/1.1, which is simpler and works in more scenarios anyway (e.g. browsers will accept it without TLS).
Without HTTP/1.1, either the modern web would not have happened or we would have 100% IPv6 adoption by now. The Host header was such a small but extremely impactful change: it let many sites share a single IP address instead of each needing its own. I believe that without HTTP/3, not much would change for the majority of users.
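To make the Host header point concrete, here's a toy router showing how a server on one IP can pick a site from the header (the vhost table and domain names are made up for illustration):

```python
# Sketch: name-based virtual hosting via the HTTP/1.1 Host header.
# Hypothetical vhost table; in HTTP/1.0 there was no Host header,
# so each site effectively needed its own IP address.
VHOSTS = {
    "blog.example.com": "/srv/blog",
    "shop.example.com": "/srv/shop",
}

def route(raw_request: str) -> str:
    """Pick a document root based on the request's Host header."""
    headers = {}
    for line in raw_request.split("\r\n")[1:]:
        if not line:            # blank line ends the header section
            break
        name, _, value = line.partition(":")
        headers[name.strip().lower()] = value.strip()
    return VHOSTS.get(headers.get("host", ""), "/srv/default")

req = "GET /index.html HTTP/1.1\r\nHost: shop.example.com\r\n\r\n"
print(route(req))   # routes by Host, not by IP address
```

One header line is all it takes to multiplex arbitrarily many sites onto one address, which is why IPv4 stretched as far as it did.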
But also, in most of the organizations I've been in, the only thing using anything other than HTTP/1.1 was the internet-facing load balancer or Cloudflare, and even then not always. Sure, we might get a tiny boost from HTTP/2 or whatever, but it isn't even remotely top of mind and won't make a meaningful impact on anyone. HTTP/1.1 is fine, and if your software used only that for the next 30 years, you'd probably be fine. That was the point of the original comment: nginx is software that could be in the "done with minor maintenance" category, because it really doesn't need to change to continue being very useful.
Maybe you just haven't been in organizations that consider head-of-line blocking a problem? Just because you personally haven't encountered it doesn't mean there aren't tons of use cases out there that require HTTP/3.
>Maybe you just haven't been in organizations that consider head-of-line blocking a problem?
I have not. It is quite a niche problem, mostly because web performance is so bad across the board that saving a few milliseconds just isn't meaningful when your page load takes more than a second and is mostly stuck in JavaScript anyway. Plus everybody just uses Cloudflare, and having that CDN layer use whatever modern tech is best is very much good enough.
Sure, but there's video streaming, server-to-server bidirectional long-polling channels, IoT sensors, and all sorts of other things you probably use every day that can really benefit from HTTP/3 and QUIC.
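The head-of-line effect being debated here can be sketched with a toy model (illustrative only; the millisecond numbers are invented): on a single ordered connection, one stalled response delays everything queued behind it, while independent QUIC-style streams each finish on their own.

```python
# Toy model of head-of-line blocking; not a protocol simulation.

def ordered_delivery(times):
    """One in-order connection (TCP-like): a response can't be
    delivered until everything sent before it has arrived."""
    done, t = [], 0
    for t_i in times:
        t = max(t, t_i)     # stuck behind the slowest predecessor
        done.append(t)
    return done

def independent_streams(times):
    """Independent streams (QUIC-style): each response is delivered
    as soon as its own bytes arrive."""
    return list(times)

times = [50, 400, 60]                  # ms; the 400 ms response stalls
print(ordered_delivery(times))         # [50, 400, 400]
print(independent_streams(times))      # [50, 400, 60]
```

In the ordered case, the fast 60 ms response effectively takes 400 ms because it sits behind a slow one; with independent streams it doesn't, which is the gain HTTP/3 offers for latency-sensitive workloads like the ones listed above.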