> Internet Explorer people have expressed that they intend to also support the new protocol without TLS, but when they shipped their first test version as part of the Windows 10 tech preview, that browser also only supported HTTP/2 over TLS. As of this writing, there has been no browser released to the public that speaks clear text HTTP/2. Most existing servers only speak HTTP/2 over TLS.
I'm hoping it will stay this way. Defaults are important, so it's the platforms' responsibility to support and enforce the "safer" options.
> The fact that it didn’t get in the spec as mandatory was because quite simply there was never a consensus that it was a good idea for the protocol. A large enough part of the working group’s participants spoke up against the notion of mandatory TLS for HTTP/2. TLS was not mandatory before so the starting point was without mandatory TLS and we didn’t manage to get to another stand-point.
Which is interesting, because I remember quite clearly the "Snowden discussion" at the IETF, and there was consensus for an "encrypt everything Internet".
> There is a claimed “need” to inspect or intercept HTTP traffic for various reasons. Prisons, schools, anti-virus, IPR-protection, local law requirements, whatever are mentioned.
Right. So the IETF made it non-mandatory so that law enforcement can get their "master keys", in a way. Also, this "anti-virus" kind of protection is basically what Superfish was. I'd rather that kind of behavior were stopped.
The IETF would do better to actually become useful and come up with ways to replace the CA system over the next few years, instead of taking protocols from others and ruining them as it standardizes them. Otherwise, if the IETF remains as useless/malicious as it is right now, we should rethink standardization and find a new model.
> > Internet Explorer people have expressed that they intend to also support the new protocol without TLS, but when they shipped their first test version as part of the Windows 10 tech preview, that browser also only supported HTTP/2 over TLS.
> I'm hoping it will stay this way.
There's a good reason it will probably stay that way: middleboxes. I wouldn't be surprised if plaintext HTTP/2 is a can of compatibility worms due to broken or misbehaving transparent proxies.
HTTP/2 over TLS bypasses these middleboxes. And even in the case where the broken middleboxes MITM the connection, their TLS handshake won't offer HTTP/2, so it'll fall back to HTTP/1.1. In case newer middleboxes do offer HTTP/2, they will have been tested against MSIE's HTTP/2 implementation, reducing the chance of compatibility problems.
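To make that concrete, here's a minimal Go sketch (the host is just a placeholder) of a client offering both protocols via ALPN and checking what was actually negotiated. An intercepting middlebox that re-terminates TLS without advertising "h2" leaves you on http/1.1:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	// Offer both h2 and http/1.1 via ALPN, like a browser would.
	conf := &tls.Config{
		NextProtos: []string{"h2", "http/1.1"},
	}

	// Placeholder host; substitute any HTTPS endpoint you control.
	conn, err := tls.Dial("tcp", "example.com:443", conf)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// If a TLS-intercepting middlebox re-terminates the connection and
	// doesn't advertise "h2", this prints "http/1.1" (or nothing at all).
	fmt.Println("negotiated protocol:", conn.ConnectionState().NegotiatedProtocol)
}
```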
> Which is interesting, because I remember quite clearly the "Snowden discussion" at the IETF, and there was consensus for an "encrypt everything Internet".
I think this is still the official IETF position. RFC 7258 "Pervasive Monitoring Is an Attack" was published as a Best Current Practice and has not been retracted.
Certain corporations formed a consortium to prevent encryption and ensure that the monetisation of personal information would continue.
At the very last stage, the IETF appeared to be hijacked by very large telcos (e.g. AT&T, Verizon, Ericsson, Comcast) to remove the mandatory requirement for TLS.
To an outsider, this looks like a carefully co-ordinated attack on the IETF standards process by a small number of "serial IETF professionals" who are paid by the big carriers to sit inside the organisation and ensure that standards do the bidding of their corporate masters. (Some hyperbole there.)
Waiting until the last phase restricted discussion and exploited the existing momentum to complete the HTTP/2 standard while removing one of the fundamental reasons for HTTP/2 to exist.
It is a very sad day that consumer rights have been compromised by big money. And as the Lenovo Superfish debacle showed, it will likely backfire in the long run.
Thankfully, the browser and server vendors can do an end-run around this by simply not supporting HTTP/2 without encryption. Then, no matter what the standard says, ordinary users will be protected, and it'll be one more reason for sites to move to HTTPS everywhere. The article discusses this under "TLS mandatory in effect".
Even when only doing domain validation, CAs still usually ask for personal information. You would have to lie at least somewhat convincingly to obtain a certificate without providing personal details, which feels fraudulent and could potentially put you at risk of having the certificate invalidated. I'm assuming Let's Encrypt will address this, since it's going to be a fully automated(?) system.
How could certificates ever work in embedded applications connected to consumer LANs that provide interfaces over HTTP? Aren't certificates tied to IP addresses, which wouldn't work with e.g. DHCP? Not to mention certificate expiration and updates…
Tied to domains, but you still have a good point. Try getting a cert for "foo.myhouse.lan" (and that ignores the complications of coordinating hostnames of embedded devices ahead of time).
On a managed corporate intranet, Active Directory can push out a new CA and your computer will trust whatever certs the local IT department wants to cook up, but that breaks down with a BYOD office or a SOHO LAN.
I manage a few intranet websites, and this encryption requirement basically kills any interest I had in experimenting with HTTP/2, which was already minuscule. I'm not seeing the use case for HTTP/2 unless you're already all-HTTPS, or you're in the Top N websites and want to slim down bandwidth use.
Certificates are generally based on hostnames. You can put an IP as the common name but it's problematic (and you can't get one signed by most authorities for an IP address as far as I know).
Generally the devices just generate a self-signed certificate and you have to click through the warning.
Exactly. Worse, it trains users to ignore cert warnings.
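For illustration only, this is roughly what such a device does at first boot: generate a key and self-sign a certificate for whatever name and address it happens to have. The hostname and IP below are made up, and no CA is involved, which is exactly why browsers warn. A rough Go sketch:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Generate a fresh key for the device.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		log.Fatal(err)
	}

	// Self-signed certificate for a made-up LAN hostname and the device's
	// current (DHCP-assigned) address. No CA in the chain, so browsers warn.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "thermostat.lan"},
		DNSNames:     []string{"thermostat.lan"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.1.50")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(10, 0, 0), // long-lived; nobody rotates these
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}

	// Template signs itself: template and parent are the same certificate.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```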
I don't have any problems with the campaigns to make the public internet HTTPS-only. However, for software inside an intranet, or software that just wants to expose an interface on http://127.0.0.1:*someport*, non-SSL is the better default.
If people want to protect their intranet that's great, but it means that they have to go through the work of buying a cert, since only they know the hostname it will be exposed as. That's a poor initial-install experience.
My view is starting to change on this.. can you really trust a LAN beyond a certain size? (That size being what one person can comfortably architect and maintain.)
Nowadays, I'm a firm believer in "encrypt all the things", but that's because I'm a geek and can deal with the PITA. There needs to be either an encryption mechanism that's completely separate from authentication, or the use case of LAN encryption for regular people needs to be addressed in some other way.
I'm a big believer in a (local/p2p) transport encryption mechanism /in addition to/ one for auth, and for it to be transparent to any UX... that's very much our goal for telehash v3 :)
Tying IP addresses to certs is a certificate-vendor addition to standard SSL issuance policies. Some of the high-end certs (as denoted by more insurance) from, say, Symantec will be "locked" to an IP address, but again, this is mostly handwaving.
The only one of the counterarguments that interests me is that it defeats caching. I mean, if 100 users in a large network want to access the same video or other large resource from the Internet, it seems pretty ridiculous that the connection must use 100 times as much bandwidth as it would if they could just install a simple caching proxy, especially if it's just some cat video or online game, which is probably the common case. True, not all large resources are as innocent, and there is no real way around encrypting and not caching everything if you don't want devices on the network to tell the difference... but the result is just so pathological. The price of freedom?
[For the record, YouTube seems to use HTTPS by default for video content, so this is already the case for some large percentage of the types of large resources typically accessed from shared networks.]
Caching already happens through CDNs at the ISP level, such as the Google Global Cache (YouTube) and Netflix Open Connect. That covers roughly half of network traffic.
Plus, running a Squid proxy for 100 users isn't nearly as effective as it once was; pages contain far more dynamically generated content than they used to. Think of a Facebook news feed or a Twitter stream.
Let's say one uses HTTP/2 across microservices in a datacenter (or "cloud"), possibly with IPv6 (or IPv4), over secure (VPN or physically secure) links. Would you really want to complicate the stack by having to choose between running both 1.1 and 2, or doing double encryption?
I get that browsers demand TLS, since there's no sane UI/UX to show the user that the link is secure because of a VPN etc. Not so for other clients.
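For what it's worth, cleartext HTTP/2 ("h2c") is usable outside browsers. A rough Go sketch, assuming the golang.org/x/net/http2/h2c package, of an internal service accepting HTTP/2 over plain TCP on an already-secured link:

```go
package main

import (
	"fmt"
	"log"
	"net/http"

	"golang.org/x/net/http2"
	"golang.org/x/net/http2/h2c"
)

func main() {
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// r.Proto is "HTTP/2.0" for h2c clients, "HTTP/1.1" otherwise.
		fmt.Fprintf(w, "hello over %s\n", r.Proto)
	})

	// Wrap the handler so the plain-TCP listener accepts HTTP/2 without TLS.
	// Only sensible on links that are already protected (VPN, private network).
	srv := &http.Server{
		Addr:    "127.0.0.1:8080",
		Handler: h2c.NewHandler(handler, &http2.Server{}),
	}
	log.Fatal(srv.ListenAndServe())
}
```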
The answer is probably to have a way to sign and/or encrypt headers separately so that clients can request authentication and/or encryption on a per-resource basis. Perhaps a public and a private header section.
Checksumming and cryptographic signing of responses as an alternative to full-blown encryption might be useful as well (since response signatures could still be cached).
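Nothing like this is in the spec, but as a sketch of the idea in Go: sign the response body with the origin's key and carry the signature in a made-up header, so intermediaries can cache the bytes while clients still verify where they came from:

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"encoding/base64"
	"fmt"
	"log"
)

func main() {
	// The origin's long-term signing key (the public half would be pinned or
	// distributed out of band; everything here is illustrative).
	pub, priv, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		log.Fatal(err)
	}

	body := []byte("<html>cacheable but tamper-evident response</html>")

	// Origin: sign the body and attach the signature as a made-up header.
	sig := ed25519.Sign(priv, body)
	fmt.Println("X-Body-Signature:", base64.StdEncoding.EncodeToString(sig))

	// Client (possibly after the bytes came out of a shared cache): verify.
	fmt.Println("signature valid:", ed25519.Verify(pub, body, sig))
}
```

Since the signature covers only the body, a shared cache could serve the same signed bytes to many clients without being able to alter them undetected. Confidentiality, of course, is gone.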
As the article mentions, for better or worse, TLS-piercing proxies aren't exactly unusual anymore. An ISP may not be able to just jam one in front of its customers, but use cases where a corporate entity owns the computers, can push out a root-cert update, and wants such a cache are still unaffected.
I still don't see many arguments about advertising when it comes to TLS. I can't deploy TLS across my sites without losing a huge amount of ad inventory due to cross-site request policies (no HTTP content from HTTPS domains).
AFAIK, Google is the only network actively working on HTTPS-supported ads. The value of the ads drops significantly once the auction pressure from all the HTTP ads is gone, meaning any site that relies on ad revenue cannot afford to use TLS.
I'm surprised to see Certificate Transparency presented as a band-aid. My understanding is that, assuming it is deployed successfully, forged certificates will require very significant resources to use successfully in an attack, typically limiting such attacks to nation states.
But I would love to know where I'm wrong about this.
For how long will HTTP/1.1 and 1.0 live alongside HTTP/2? It's all nice if every web page has TLS, but if I can just not upgrade to 2.0, it won't matter at all...
That's not completely accurate about HTTP/1.0. It's true that HTTP/1.1 requires "Host:" (a compliant server MUST reject any request from a 1.1 client that lacks that header). However, HTTP/1.0 clients had been sending "Host:" headers for years before the 1.1 standard came out.
It's still possible to use a 1.0 client today if you don't want to handle other client-side requirements of 1.1 like chunked transfer-encoding. Likewise, embedded devices can speak 1.0 only without any problem.
It's perfectly normal (and allowed) for servers to send back a version string of "HTTP/1.1" even if the client sent the request as "HTTP/1.0". As long as they don't do anything in their response that assumes that the client has 1.1 features, all is fine. This basically just means:
* Don't use chunked encoding in the response. (Technically a 1.0 client could specifically indicate support for that by sending a "TE: chunked" header, but since chunked encoding arrived at the same time as 1.1 I think most servers just assume that HTTP/1.0 clients never support it)
* Don't assume that the client supports keep-alive connections. However, even before HTTP/1.1, clients often did indicate that they could do keep-alive by sending "Connection: keep-alive". The only real difference in 1.1 is that now the client must support it unless it specifically indicates otherwise by sending "Connection: close". In the absence of a "Connection:" header, a 1.1 client supports keep-alive and a 1.0 client does not.
Most well-behaved servers will only send chunked encoding if the client claimed to be HTTP/1.1. If the client is HTTP/1.0, the server will fall back to doing "Connection: close".
The only big thing chunked encoding gives you is the ability to do a keep-alive connection when the server doesn't know the Content-Length in advance. (Technically it also added "trailers" for sending headers after the reply body, but those are little used.)
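If I remember correctly, Go's net/http server implements exactly that fallback, which makes for a quick local illustration (compare a default curl request against `curl --http1.0`):

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

func main() {
	// A handler that streams a response of unknown length and never sets
	// Content-Length. How the server frames it depends on the client:
	//   HTTP/1.1 request -> Transfer-Encoding: chunked, connection kept alive
	//   HTTP/1.0 request -> no chunking, server ends the body by closing
	http.HandleFunc("/stream", func(w http.ResponseWriter, r *http.Request) {
		flusher, _ := w.(http.Flusher)
		for i := 0; i < 3; i++ {
			fmt.Fprintf(w, "part %d (request was %s)\n", i, r.Proto)
			if flusher != nil {
				flusher.Flush() // force the bytes out before the length is known
			}
			time.Sleep(100 * time.Millisecond)
		}
	})
	log.Fatal(http.ListenAndServe("127.0.0.1:8080", nil))
}
```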
> HTTP/1.0 is basically dead and has been for years
There are probably a lot more active HTTP/1.0 clients than you think. Just slap a `Host: foo.com` and `Connection: close` header onto the request and your HTTP/1.0 client can talk to just about any HTTP/1.1 server.
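Roughly like this, as a bare-bones Go sketch against a placeholder host:

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"net"
)

func main() {
	// Plain TCP connection; no HTTP library involved.
	conn, err := net.Dial("tcp", "example.com:80")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// An HTTP/1.0 request line, plus the Host header (needed for name-based
	// virtual hosting) and an explicit Connection: close.
	fmt.Fprint(conn, "GET / HTTP/1.0\r\nHost: example.com\r\nConnection: close\r\n\r\n")

	// Read until the server closes the connection, which is how an HTTP/1.0
	// response without a Content-Length is terminated anyway.
	scanner := bufio.NewScanner(conn)
	for scanner.Scan() {
		fmt.Println(scanner.Text())
	}
}
```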
So basically, while HTTP/2.0 could force TLS, the world could still be like it is today, with some sites running HTTP/1.1 web servers without TLS?