TLS 1.3 _being required_ makes me sigh loudly. What about local development, where tools like tcpdump and wireshark are really handy? What about air-gapped systems? What about power-constrained devices?
It's not that I think an encrypted web is bad, it's a very good thing. I am just spooked by tying a text transfer protocol to a TCP system.
> What about local development, where tools like tcpdump and wireshark are really handy?
You can tell browsers to dump the session keys, which can then be read by wireshark [1].
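To illustrate outside a browser: a minimal Python sketch (function name and paths are my own placeholders) using the ssl module's `keylog_filename` (Python 3.8+), which writes the same key log format browsers produce when the `SSLKEYLOGFILE` environment variable is set:

```python
import socket
import ssl

def fetch_root(host: str, keylog: str = "/tmp/sslkeys.log") -> bytes:
    """GET / over TLS while logging session keys for wireshark."""
    ctx = ssl.create_default_context()
    # Same key log format browsers write when SSLKEYLOGFILE is set;
    # point wireshark at this file via Preferences > Protocols > TLS.
    ctx.keylog_filename = keylog
    with socket.create_connection((host, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            req = f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
            tls.sendall(req.encode())
            return tls.recv(4096)
```

After something like `fetch_root("wikipedia.org")`, a tcpdump capture of that connection decrypts cleanly once wireshark is pointed at the key log file.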
> What about devices that are power constrained?
That's thinking from 10 years ago. Back then, there were no native AES extensions in power-constrained devices. Now there are, so encryption is really power efficient.
> I am just spooked by tying a a text transfer protocol to a TCP system.
I guess instead of "TCP system" you meant transport layer protocol. I can actually understand your view: stuff is getting more complicated. I can fire up netcat, connect to wikipedia, and type out an HTTP/1.0 request manually. With 1.1 this is hard, and with 2.0 it's impossible due to TLS requirements. But there are reasons for this added complexity: you want to be able to re-use connections, or use something better than TCP. As long as there is a spec, and there are several implementations lying around, I think it's okay to add complexity if there is a performance reward for it. Most people care about performance; who wants to fire up netcat to make an HTTP request?
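The netcat exercise, sketched with plain sockets (helper names are made up) to show that an HTTP/1.0 exchange really is just a few lines of text over TCP:

```python
import socket

def build_request(host: str, path: str = "/") -> bytes:
    # An HTTP/1.0 request is nothing more than text with CRLF line endings.
    return f"GET {path} HTTP/1.0\r\nHost: {host}\r\n\r\n".encode()

def http10_get(host: str, path: str = "/") -> bytes:
    # HTTP/1.0: one request, one response, then the server closes the socket.
    with socket.create_connection((host, 80), timeout=10) as sock:
        sock.sendall(build_request(host, path))
        chunks = []
        while chunk := sock.recv(4096):
            chunks.append(chunk)
        return b"".join(chunks)
```

This is exactly what typing into netcat does; with HTTP/2 and HTTP/3 the bytes on the wire are binary frames (and, in practice, encrypted), so no such hand-typed sketch exists.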
To clarify, HTTP 1.0/1.1 were successfully transmitted over TCP, multiple versions of SSL, then several versions of TLS. It just seems a bit pretentious to tie the protocol to TLS 1.3 specifically.
Those older SSL and TLS versions are insecure now, or at least deemed a bad idea by today's security standards. TLS 1.3 was partly about removing insecure modes from TLS 1.2. If HTTP/3.0 supported anything other than TLS 1.3, those insecure setups would persist.
Of course there are disadvantages, like when you are on a LAN or such. But I think those cases are covered well by the HTTP/1.x family already, and if not, you can always add root certificates yourself or make public DNS names you control point to your 192.168.... address.
- HTTP/1 is "1 HTTP stream over 1 TCP-ish L4 connection" (TLS-over-TCP is a TCP-ish L4 connection)
- HTTP/2 is "multiple HTTP streams multiplexed over 1 TCP-ish L4 connection"
- HTTP/3 is "HTTP over QUIC"
HTTP/3 is meant to replace HTTP/1 or HTTP/2 only to the degree that QUIC replaces TCP. In your air-gapped system, or for local development, QUIC-instead-of-TCP is less compelling.
What about them? Don't use HTTP/3 if you don't want encryption.
The whole point of HTTP/3 is that it doesn't treat TLS as a separate layer, that it tightly binds parts of the two protocols to allow more efficient use of time and data. It's not just an option, the protocol doesn't make sense without it. If doing encrypted HTTP isn't what you're after, then this protocol isn't for you.
That's a good point. I completely understand clients MUST use TLS. On the server side, though, a workflow I really like is to have a proxy in front that terminates TLS so I don't need a TLS stack in each one of my apps. This is a pretty common pattern, so I'm sure libraries will allow for HTTP/3 without TLS -- who knows though, maybe I'm a crazy eccentric heathen.
To expand on that point: load balancers will also have to maintain encrypted connections between themselves and their web servers behind the scenes. That's probably a "best practice" security-wise, but it's convenient to be able to handle the TLS stuff at a load balancer level and stick to plain HTTP behind the scenes.
I suppose this can still happen regardless, except the HTTP/3 connection would stop at the load balancer (which would have to translate to plain ol' HTTP/1 for the servers behind it).
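As a concrete sketch of that pattern (server name and certificate paths are placeholders; the directives are stock nginx): TLS terminates at the proxy, and plain HTTP continues to the app server behind it.

```nginx
server {
    listen 443 ssl http2;                    # TLS (and HTTP/2) end here
    server_name app.example.com;             # placeholder name
    ssl_certificate     /etc/ssl/app.pem;    # placeholder paths
    ssl_certificate_key /etc/ssl/app.key;

    location / {
        proxy_pass http://127.0.0.1:8080;    # plain HTTP behind the scenes
        proxy_set_header Host $host;
    }
}
```

An HTTP/3 deployment would look much the same from the upstream's point of view: QUIC and TLS 1.3 stop at the proxy, and the backend never sees either.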
This is often the case today for load balancers or CDNs that support HTTP/2. For connections from reverse proxies the number of round trips for connection establishment generally does not matter since these connections will be kept alive for a long time, across requests. I don't see why this would change with HTTP/3.
If your client or server has support for key log files, Wireshark can deal with TLS quite well. In fact, this is usually how I debug my QUIC implementation.
Perhaps it's best if a non-web protocol caters to those use cases, so HTTP/3 can best serve its 99% use case anyway.
This ties into the debugging conversation: web browsers and web servers have debugging tools 10x better than reading HTTP packets in Wireshark/tcpdump.
In a Diffie Hellman setup, configure the machine you aren't sat in front of (usually the server) to use a fixed secret value instead of a random one.
Now since you know this value, and the other value you need (from the client in this case) is sent over the wire, you can run the DH algorithm and decrypt everything.
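A toy sketch of why that works, with deliberately tiny numbers (real DH uses large groups): an observer who knows the server's fixed secret only needs the client's on-wire public value to recover the shared secret.

```python
# Toy finite-field Diffie-Hellman; these parameters are far too small
# for real use and exist only to show the arithmetic.
p, g = 23, 5                    # public group parameters

a = 6                           # client's ephemeral secret
A = pow(g, a, p)                # client's public value, visible on the wire

b = 15                          # server's "fixed" secret, known to the observer
B = pow(g, b, p)                # server's public value, visible on the wire

client_shared = pow(B, a, p)    # what the client derives
server_shared = pow(A, b, p)    # what the server derives
observer_shared = pow(A, b, p)  # observer: fixed b + sniffed A is enough

assert client_shared == server_shared == observer_shared == 2
```

With a properly random, ephemeral `b`, the observer has neither secret and the sniffed `A` and `B` alone don't yield the shared key.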
You should (obviously) never do this in production, although it is what various financial institutions plan to do and they have standardised at ETSI as an "improvement" on TLS (you know, like how TSA locks are an "improvement" over actually locking your luggage so random airport staff can't steal stuff) ...
Security engineers around the world have been working for decades to clean up the mess made by engineers, product designers, and business owners overlooking security because it's inconvenient.
If you are a developer or engineer then eat the complexity tax as part of your responsibility and ensure that you are shipping code and products that are secure for the end user who probably doesn't have the expertise to overcome the security gaps left by "developer inconvenience".
Or, you know, abstract the application layer and then apply the TLS layer on top of it so that it can be secured without affecting the application code/logic.
To be fair, that's been tried a lot, and it keeps causing issues.
I'm at the point where I believe that you can't "layer on" or "abstract away" security like you can with other things, it needs to be thought about at every step.
Just look at attacks that take advantage of content-length to pluck out which page the user is requesting from a mostly-static site, or how compression and encryption seem to be almost at odds with one another.
You can't ever just assume TLS will handle it when it's abstracted away, and while HTTP/3 may not get rid of those kinds of attacks entirely, bringing "security" closer to the application logic may enable better protections.
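A sketch of that compression-vs-encryption tension (the mechanism behind CRIME/BREACH-style attacks; the names and secret here are made up): encryption hides bytes but not lengths, and compression makes the length depend on the content, so a guess that matches a secret compresses better.

```python
import zlib

SECRET = b"sessionid=d8f3a9c1b2e4"

def response_len(reflected_guess: bytes) -> int:
    # Stand-in for the observable length of a compressed-then-encrypted
    # response that echoes attacker input alongside a secret.
    return len(zlib.compress(reflected_guess + b"&" + SECRET))

wrong = response_len(b"sessionid=qwertyuiopzx")
right = response_len(b"sessionid=d8f3a9c1b2e4")
assert right < wrong   # the matching guess deduplicates against the secret
```

The TLS layer faithfully encrypted everything here; the leak lives in the interaction between layers, which is exactly why security can't simply be abstracted away.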
Possibly... I'm just concerned about yet another layer to have to be far more concerned about than I already am. I really enjoy building applications. I'm pretty good at systems and orchestration to an extent, but I'd rather not have to focus on them too much.
Going from using an application framework that's more abstracted, such as ASP.Net (not mvc/api/core), to ones where you are closer to the metal (node, python, .net core/mvc/api) was a jump.
Thinking in terms of leveraging push with HTTP/2 alone has me concerned. The tooling around building web applications hasn't even caught up with the current state of things, let alone anything newer. Another issue is dealing with certificates for local/internal development in smaller organizations. It may get interesting, and it may get more interesting than it's actually worth in some regards.
You are going to be using tools like tcpdump and wireshark for debugging, but can't figure out how to install a root certificate on your local machine?