Not all information needs to be secure, pure and simple, and individual anonymity is more important for the health of internet culture than all the security in the world.
This section covers my feelings on the topic:
"TLS does not provide privacy. What it does is disable anonymous access to ensure authority. It changes access patterns away from decentralized caching to more centralized authority control. That is the opposite of privacy. TLS is desirable for access to account-based services wherein anonymity is not a concern (and usually not even allowed). TLS is NOT desirable for access to public information, except in that it provides an ephemeral form of message integrity that is a weak replacement for content integrity."
> individual anonymity is more important for the health of internet culture than all the security in the world.
TLS provides exactly as much anonymity as HTTP (ie, none), so it is not trading that away. It only wins security without losing anything.
Mr Fielding suggests a hypothetical system that would keep the same amount of privacy as HTTP while ensuring the integrity of the content. But without a proof that it works (or even a full design — how does it prevent a MITM from substituting the signature too?), we can't know that it has any potential. And it certainly doesn't have value now, since it doesn't exist yet.
Even if it did exist, the same amount of privacy as HTTP is still the same amount of privacy as TLS.
And integrity is not something we can overlook. Imagine that, say, the Chinese government altered a set of RFCs inside its borders by MITMing them, in such a way as to suggest a different but compatible implementation of cookies that allows the government to read them. Then Chinese implementors would create insecure tools, and weaken privacy tremendously!
> TLS provides exactly as much anonymity as HTTP (ie, none), so it is not trading that away. It only wins security without losing anything.
This is simply not true. If I request a resource from a server that is cached by my ISP, my request never reaches the source server and they'll never be able to measure that I requested that resource.
In that case, you just unexpectedly told a third party about your request, whereas with HTTPS, you only tell the intended party about your request. How does it improve your anonymity to unexpectedly have third parties intercept your request, read it and respond to it?
If you want to choose to use a third party mirror, then you can do so: just explicitly request the mirror, over HTTPS to avoid a MITM attack.
That's message confidentiality, not privacy. The win to privacy is that instead of telling the intended party and every third party in the middle about you accessing some resource, you only tell the subset of third parties between you and the cache.
Since it's more difficult to intercept access to all possible caches than to a centralised server, that's a win for privacy. At a cost of message confidentiality, of course, but if your message content doesn't need to be confidential (i.e. you're just GETing a resource), it's not a big loss.
> you only tell the subset of third parties between you and the cache.
That's not a privacy win at all if you want privacy from the cache, if you don't trust the cache, or if the cache is wholly owned by the origin or used by you alone.
Even if all of that doesn't apply, you don't even have a guarantee that it will hit the cache. If it is a cache miss, no “privacy win”.
How many asterisks does this claim of a privacy win need before it should no longer be considered valid?
> At a cost of message confidentiality, of course, but if your message content doesn't need to be confidential (i.e. you're just GETing a resource), it's not a big loss.
You are forgetting the loss of message integrity, as well. That is a big loss.
Hypothetically that is true, but do any ISPs do these sort of large-scale caching of resources any more?
Also, if the site wants to track you that way, couldn't they (or anyone in the middle of your connection) just send no-cache headers with everything, and a well-behaved transparent cache will retrieve from origin every time anyway?
In any case, personally I am inclined to trust the operators of the websites I visit more than Comcast, my ISP. I do not want my ISP to do anything to my traffic except to forward it on to the intended recipient.
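On that no-cache point, here's a rough Python sketch (standard library only; the handler name and port are made up) of an origin that marks every response as uncacheable, which a well-behaved transparent cache should honor:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class NoCacheHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        # Tell any well-behaved intermediary cache not to store or reuse
        # this response; it must go back to the origin every time.
        self.send_header("Cache-Control", "no-store, no-cache, must-revalidate")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("", 8080), NoCacheHandler).serve_forever()
```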
So you want your ISP to MITM-attack your requests? (i.e. transparently proxy your outgoing plaintext HTTP) Outside of unusual local/niche applications, are ISPs even doing this? (With Verizon doing their MITM to add their deplorable X-UIDH header, I suppose this is possible.) It would probably be a huge copyright violation for any ISP to cache without some sort of negotiated agreement with the website. If such negotiations did happen, that would just move the anonymity problem from the website to the ISP, acting as the website's agent.
Local ISP caching in the traditional Akamai style was a DNS trick where your browser makes an explicit request to the local cache. You would still make the same requests into the cache no matter the transport.
Let's take the argument that news sites are public information and should not be encrypted. This argument breaks down pretty quickly if someone wants to inject bad things into public WiFi but there is another, more subtle, problem.
TLS stops a passive snooper (like GCHQ or a coffee shop) from seeing exactly what content you are reading as only the host is sent in the clear and not the full path. For example, it would be difficult to see exactly what articles I am reading on HN but trivial to see what I like to look at on BBC or The Guardian.
You can build up quite a profile of someone from what they choose to read. This could be used for national "security" or advertising purposes. It can also have a chilling effect if people think twice before clicking on a controversial headline.
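To make that concrete, here's a minimal Python sketch (the article path is made up) showing where the split happens: the hostname goes into the ClientHello's SNI field in the clear, while the request line with the full path only travels inside the encrypted tunnel:

```python
import socket, ssl

HOST = "www.theguardian.com"

ctx = ssl.create_default_context()
raw = socket.create_connection((HOST, 443))
# The server_hostname below is written into the ClientHello's SNI extension,
# which a passive snooper can read before any encryption is negotiated.
tls = ctx.wrap_socket(raw, server_hostname=HOST)
# The request line, including the full article path, is only sent inside
# the encrypted tunnel, so the snooper sees the host but not the article.
tls.sendall(b"GET /politics/some-article HTTP/1.1\r\n"
            b"Host: " + HOST.encode() + b"\r\nConnection: close\r\n\r\n")
print(tls.recv(200))
tls.close()
```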
How does TLS disable anonymous access? My identity and/or location is not a single bit more exposed than it already is when I access a site either with or without TLS. There are indeed authority and centralization issues with the current CA system, but again, it has nothing to do with my anonymity.
Meanwhile, caching is such an overused and undersubstantiated argument that it's not even funny anymore. How many of the websites you recently visited were served by a cache operated by anyone other than the party who owns the website, or by a CDN that the website owner trusts enough to hand the TLS keys to? Are untrusted proxies (read: MITM as a service) such an integral part of the web that we must keep them alive at any cost?
There are decent reasons to eschew TLS for public information -- for example, PGP signatures that can be verified out-of-band are much cheaper for large blobs of static data such as .deb packages -- but anonymity and caching are not among them.
Maybe, but currently some site today can set a cookie in your browser and track you anyway -- a lot simpler than fiddling with the TLS stack.
I assume that if you browse in "Private Browsing" or "Incognito" mode, the TLS Session Resumption data is wiped once you exit that mode (similar to how cookies and local storage are wiped).
The site you visit, yes. But I am referring to a MITM. A cookie would be hidden by the secure tunnel, but the TLS resumption parameters might be visible, since resumption happens before the tunnel is established. I am not familiar enough with the protocol to know whether that is the case.
The resumption parameters might be used to uniquely identify a person... that's an interesting point.
But is that a big enough flaw to justify throwing out the baby of TLS with the bathwater of tiny details like that? I'm sure there are people who are much smarter than both of us who can fix that without giving up on TLS altogether.
Ah, I see. Yes, from a cursory glance at RFC 5077, it seems that the SessionTicket is sent as part of ClientHello, which is not encrypted (page 6).
This is still no worse than plain unencrypted HTTP at worst, and server admins or clients could well choose not to support this if they do not wish to.
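For example, a server written against Python's ssl module can opt out of RFC 5077 tickets entirely (a minimal sketch; the certificate file names are placeholders):

```python
import ssl

# Minimal sketch of a server-side TLS context that refuses to issue
# RFC 5077 session tickets, so clients have nothing to echo back in a
# later cleartext ClientHello.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("server.crt", "server.key")  # placeholder file names
ctx.options |= ssl.OP_NO_TICKET
```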
The HTTP/2 specification contains a "prioritisation" of data as a hint for the server, instead of the idea of caching data on the client. With the recurring "net neutrality" debates, let's hope this protocol cannot be misused to prioritize certain packets for parties who pay extra. I am not deep into these debates, but it would certainly be a disadvantage for startups compared to established parties. Given the many problems with SSL (Heartbleed, broken/outdated certs, hijacked cert vendors), an HTTP/2 without SSL would be a nice fallback scenario. Wildcard certs for new startups are still a bit expensive, especially if one has to replace (= costs) the certs every few months due to security concerns. Even the largest commerce website, Amazon.com, only uses HTTPS for a tiny little fraction, the payment dialog page (which requires a separate login).
It seems HTTP/2 is good for big companies that operate centralized services to save traffic, whereas the current HTTP 0.x and HTTP 1.x are proven to be good enough for everyone. There is a threat that some popular web services might become HTTP/2-only in a few years. Maybe the Firefox forks (Iceweasel/Fennec/etc.) and the Chromium forks can make HTTP/2 protocol support opt-in (and not the other way around).
> With the reoccurring "net neutrality" debates, let's hope this protocol cannot be misused/used to prioritize certain packets for parties who pay extra
FUD. There is nothing like that in HTTP/2 and indeed that can't be, because packet routing happens several layers below. If anything, the fact that it's encrypted makes deep packet inspection and content-based routing decisions more difficult.
By the way, Amazon.com fully supports HTTPS. I'm using a browser extension to enforce this whenever possible and have browsed Amazon HTTPS-only for years.
"With HTTP/2 browsers prioritize requests based on type/context, and immediately dispatch the request as soon as the resource is discovered. The priority is communicated
to the server as weights + dependencies. [...] Responsibility is on the server to deliver the bytes in the right order!"
With HTTP/2 a malicious website may send you the advertisement and tracking code first, then wait and later send you the actual content data. With HTTP/1 a browser plugin can decide not to download an advertisement-related tracking JS file; simple and effective.
Correct me if I am wrong or misunderstood HTTP/2 - but with constructive arguments and sources.
Edit: I believe in ad-sponsored websites. Ads about the topic of the website are great, though like many I dislike personalized, invasive ads that follow me around websites for things I already bought anyway or never will. I would like a "fair" ad blocker that only blocks invasive ads and allows the good ones.
Yes, you have completely misunderstood this. HTTP/2 provides you with a mechanism to optimize traffic _within your own connection_. This is not at all related to the net neutrality debate where your ISP determines a priority _between different connections_.
Furthermore, ad blockers currently work with blacklisted hostnames. Even with HTTP 1.1 the data could be sent from the main server, but usually it is sent from 3rd-party servers, which makes blocking easier. This is the same for HTTP/2 and completely unrelated.
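On the prioritization point, here's a rough sketch of how narrow the feature is (assuming the hyper-h2 package and its prioritize() API; hostname and path are made up). Everything stays inside the client's own connection, the frames never leave the encrypted channel, and the server is free to ignore the hint:

```python
import h2.config
import h2.connection

# All of this happens inside one (typically TLS-encrypted) connection;
# intermediaries never see these priority hints.
config = h2.config.H2Configuration(client_side=True)
conn = h2.connection.H2Connection(config=config)
conn.initiate_connection()

conn.send_headers(stream_id=1, headers=[
    (":method", "GET"), (":scheme", "https"),
    (":authority", "example.com"), (":path", "/app.css"),
], end_stream=True)

# Hint that the stylesheet stream deserves more of the server's attention
# than default-weight streams. It is only a hint.
conn.prioritize(stream_id=1, weight=200)

wire_bytes = conn.data_to_send()  # would be written to the TLS socket
```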
Well, any website today can have some JavaScript that verifies that ads and tracking cookies are loaded and executed before actual content loads. If the advertisement is not shown successfully, then the script can refuse to load the content too.
I am also not sure how stream prioritization will affect Ghostery, AdBlock or other browser plugins. Is there any evidence that these plug-ins are not able to do things in HTTP/2 (e.g. blocking URLs, domains, preventing DOM elements from showing up, deleting cookies, etc...) that they could in HTTP/1? I would be interested to know.
I don't see how the stream priority feature of HTTP/2 affects "net neutrality" at all, since it only affects stream prioritization behavior within one TCP connection.
If ISPs decided to slow down or speed up certain traffic, they would just modify the QoS for the entire TCP connection based on source and destination IPs -- no need for Layer 7 parsing. Also, since HTTP/2 requires TLS, ISPs can't read the stream priority (or anything else within the stream) even if they wanted to.
Also, HTTPS costs should be minimal for startups (in fact, they should be more significant if you are the size of Amazon). For instance, if you use Amazon Cloudfront, the price difference between serving one million requests over HTTPS vs one million requests over regular HTTP is at most 60 cents in any AWS region.
Binary HTTP/2 headers can be parsed a lot faster than plain text (HTTP 0.x/1.x), so the packets can be prioritized depending on the content. Sure, routing happens at another OSI layer, but nowadays a lot of packet inspection is done by certain ISPs & co.
Not an argument for or against it, but we know from the news that for example Google has to pay ISPs in France for their Youtube traffic, the same goes for Netflix.
But I don't want to start a discussion about that topic, it was just one of many arguments about HTTP/2. We should stay on topic and write about HTTPS-only or HTTP/2 which requires an SSL certificate.
> Binary HTTP/2 headers can be parsed a lot faster than plain text
Any advantage in parsing speed that binary has is going to be so small that it won't matter. On modern out-of-order, speculatively executing hardware, wasted CPU time is usually mis-predicted branches and waiting for data to be loaded into the L1 cache. The handful of extra instructions to convert an integer from ASCII to a register-sized int is minimal, and can be "free" if the data fits in the L1 cache.
> (HTTP0.x/1.x)
There's your problem - HTTP/1.x was not designed to be fast; it was designed to allow simple implementations (most headers can be ignored) and arbitrary ad-hoc extensions. It is entirely possible to make a replacement for HTTP that is much faster to parse than HTTP/1.1. A protocol being "text-based" is orthogonal to that protocol allowing annoying-to-parse arbitrary-size fields. A binary protocol can be just as slow to parse if it is badly designed.
But if HTTP/2 requires TLS, then the ISP can't read the traffic in any case, right? I don't think parsing is a main concern here.
Also, if the Google.fr deal is anything similar to the Comcast-Netflix deal here in the United States, then it has nothing to do with protocols, TLS or even (by some definitions) net neutrality.
Basically, Comcast asked (and got!) money from Netflix to install (additional?) peering connections between Comcast and Netflix. Comcast doesn't even need to throttle packets deliberately (i.e. explicitly violate net neutrality) to ensure that its customers have a bad experience, the congestion at the then-existing peering points would have guaranteed that.
At the end of the day, it doesn't matter what protocol we use, if you have many many bits to deliver to lots of people on somebody else's network, that middleman who owns the destination network might price-gouge you.
> so the packets can be prioritized depending on the content
No they can't because HTTP/2 frames are unreadable (they're encrypted). An ISP can prioritize whole streams, but not individual frames inside the stream.
> Not all information needs to be secure, pure and simple
It seems most HTTPS proponents across this thread seem to ignore this very thing. Not everything needs to be secure. Pure and simple.
Don't argue about what part of TLS does that with what cookie. We're not even listening to this, as it's not interesting to us.
If you want us to listen, you should try to argue why you think we need TLS for everything. We don't. So why are you so hellbent on making our lives more complicated than they need to be?
You may be ok with the world eavesdropping on some of your communications with third parties. That is your opinion, and you can use whatever protocols you want. What you don't get to do is force the rest of us to make the same choices.
What has become very clear in recent years is that far too much usable data can be extracted out of noisy channels. Any amount of identifying bits leaked where an eavesdropper can hear them can effectively remove the privacy you thought you had elsewhere.
Also, keep in mind that the ISP itself is often the enemy. Are you cool with Verizon editing ALL of your HTTP to add X-UIDH identifiers? What about if they do the same trick in a side channel, keying to a hash of your HTTP request and IP?
The problem here is that the idea that there exists a subset of HTTP requests that do not "need to be secure" is really just a restatement of the "but I'm not a target" fallacy. Targeting presumes a human with intent, when the threat is a computer that builds a database of all the traffic it can see. So yes, data needs to be secured, and that means all of it that we possibly can.
> If you want us to listen
If you want US to listen, you will need to wake up to the brave new world of pattern-of-life analysis and stop trying to keep the world in plaintext. I don't like a lot of things about TLS, but it is the crypto we have right now; I would love having better, less complicated options in the future.
If you find it complicated, that's your problem. What you apparently see as a technical discussion some of us see as a human rights problem, and stopping the surveillance-as-a-business-model industry (and the occasional overly-ambitious government agency) ranks quite a bit higher than complaints about a few standards becoming a bit more work to implement.
On top of that, encrypting only the data that must be secure makes it very apparent to anyone looking that "hey, this is totally important information." Having everything encrypted makes all encrypted traffic seem perfectly normal.
HTTPS brings complexity. In lots of cases I don't want that complexity. Sometimes that complexity gives rise to bugs (Heartbleed), problems and stability issues. When it doesn't, it always leads to more work.
I want to be able to setup a website, access that website instantly without having to meddle with getting SSL-certs, or creating my own self-signed certs.
Sometimes you want consumers to be software components (IoT thingies) which don't always have an up-to-date crypto stack, or any crypto stack at all. Getting them to accept the HTTPS website can result in all kinds of issues (ref. wget --no-check-certificate, even on modern Linux systems)...
In general, getting HTTPS up and running is about an order of magnitude more work than plain HTTP. And if you don't need it, why should you be forced to put in the effort?
The answer is obvious: You shouldn't. Because you don't need HTTPS everywhere. You don't. That's a purely factual statement. I cannot for the life of me see how anyone has a problem accepting that.
TLS everywhere is great for large companies with a financial stake in Internet centralization. It is even better for those providing identity services and TLS-outsourcing via CDNs. It's a shame that the IETF has been abused in this way to promote a campaign that will effectively end anonymous access, under the guise of promoting privacy.
I think he makes a very good point here: if browsers did not support plaintext HTTP at all, and only CA-verified TLS, it would be practically impossible for those who want to run a server somewhere, to anonymously serve a site containing public information. If everyone has to obtain a certificate from a CA, that is another way they can be tracked by a central authority.
> if browsers did not support plaintext HTTP at all, and only CA-verified TLS
That's a very big "if", and it reeks of FUD.
Show me a browser that has any plan to drop support for plaintext HTTP any time in the foreseeable future.
Firefox ain't one of them. Last time I checked, their plan was to reserve some of the more dangerous features (such as access to the camera) for secure websites. Hardly a plan to drop support for plaintext HTTP.
If you still aren't convinced that the current controversy is just a bunch of FUD, I'll bet $100 that 10 years from now, I'll still be able to post public information (say, the full text of RFC 2616) on a plain HTTP site and have you access it with a mainstream browser.
A very big "if" indeed, but not a completely unrealistic assumption in my opinion.
It will not happen at once but gradually.
Neither Firefox nor Chrome intends to support plain HTTP/2 without encryption. Google already favors pages with TLS; my bet is they will also favor HTTP/2 some time soon. Like you said: browsers will "reserve some of the more dangerous features (such as access to the camera) for secure websites." They might even show a red bar for HTTP instead of a green one for HTTPS.
> ... that 10 years from now, I'll still be able to post public information (say, the full text of RFC 2616) on a plain HTTP site and have you access it with a mainstream browser.
I'm sure you are right. We will be able to use HTTP in 10 years much in the sense that we are still able to use RSS today.
I don't know whether to laugh or cry about this but it is what I see coming.
When abortion was legalized, some people argued that we'd be murdering children soon. Has that happened?
If something is moving in the right direction, but if you're worried that it will go too far, the solution is to get involved and stop it at the right time, not to spread FUD about the hypothetical doom of the world.
> Firefox ain't one of them. Last time I checked, their plan was to reserve some of the more dangerous features (such as access to the camera) for secure websites. Hardly a plan to drop support for plaintext HTTP.
So basically, by your own admission, websites will only be able to offer a "full" web experience in a near-future version of Firefox if they are served via HTTPS.
HTTP-based websites will be reserved for an inferior web.
> Classic slippery slope argument.
But somehow saying that this is moving in an HTTPS-only direction is a slippery slope argument? How long until JavaScript is only allowed via HTTPS? How long until video and media APIs will only work with a "secure" DRMed connection, signed by the MPAA?
Taking HTTPS everywhere and removing support for HTTP is the slippery slope and we're already walking it.
Every feature of every part of the HTML spec has to be supported for every transport. End of discussion.
HTTPS everywhere is a misguided effort. Trying to artificially limit HTTP to further your cause is just GOT-level political bullshit. Stop playing dishonestly. If HTTPS everywhere can't win through on its own merits, you should let it die.
> How long until video and media APIs will only work with a "secure" DRMed connection, signed by the MPAA?
The slope is so slippery I think I might actually fall off my chair. I don't think you know what the fallacy actually is.
There's no arguing against facts - moving to promote HTTPS and make some features HTTPS-only does go in that direction. But that doesn't mean things will continue going in that direction.
If I keep driving north I'm sure I'll fall off a cliff eventually. The magic happens because the road isn't straight.
> Every feature of every part of the HTML spec has to be supported for every transport. End of discussion.
Where does that feeling of entitlement come from? What makes you think you have the right to access my camera or microphone via your web page, or even execute arbitrary JS on my computer in the first place? Websites have no right to do such things. They are privileges that I grant on a case-by-case basis via my user-agent and various plugins. You don't even have any guarantee that your DOM and CSS will render as you intended, because I block all sorts of things and sometimes even tweak the styles to make the content more readable. My computer, my rules.
So I see no problem with restricting websites to a known-to-be-safe subset of features by default, until and unless a website can demonstrate that they take my privacy and security seriously. Privileges must be earned, not taken for granted, and ruthlessly revoked at the first sign of misuse.
The HTML spec describes the maximum privileges that a website can hope to have, not the minimum that it can expect to have. If your website doesn't need any special privileges, feel free to use whatever transport you want.
> If everyone has to obtain a certificate from a CA, that is another way they can be tracked
They have to be tracked just as much even without that. They have to give their credentials to the hosting provider and to the DNS provider. Even if they didn't and plugged their server directly to the backbone, their IP address and traceroute would leak information about their location.
That said, your point is valid in a parallel universe where anyone could run an anonymous server without registering for DNS or contacting an ISP or a hosting provider or anything at all.
But in that universe, clients would have no way to know if that server contains valid information or data that was substituted on-the-fly. Imagine WWII-style Radio Londres, but one day the broadcast is substituted by one that looks like it comes from Charles de Gaulle, and gives the wrong instructions.
> They have to give their credentials to the hosting provider and to the DNS provider.
Back in the real world you don't need a hosting provider nor DNS to serve a website.
The following is a real address: http://123.234.34.56 (although it doesn't currently point to any real webserver)
You can probably trace the ISP it belongs to, but if that ISP is in another country than the regime you want to hide from, they need to launch a cross-country, international police investigation, and possibly defend their claims through a trial, to get the authorities in that country to force that ISP to divulge the identity of the customer at the other end of that IP.
Being able to serve websites anonymously, through plain HTTP and no hosting-partner DNS is a very real option. With today's high-speed internet connections it's a more real option than ever before.
Pretending this option doesn't exist doesn't lend your arguments any favour.
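For what it's worth, that setup really is a few lines (a sketch with Python's standard library; the port is arbitrary):

```python
# Serve the current directory over plain HTTP on a bare IP address:
# no DNS name, no CA, no hosting provider involved.
from http.server import HTTPServer, SimpleHTTPRequestHandler

HTTPServer(("0.0.0.0", 8080), SimpleHTTPRequestHandler).serve_forever()
# reachable at http://<your-ip>:8080/
```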
> You can probably trace the ISP it belongs to, but if that ISP is in another country than the regime you want to hide from, they need to launch a cross-country, international police investigation, and possibly defend their claims through a trial, to get the authorities in that country to force that ISP to divulge the identity of the customer at the other end of that IP.
Which is different from getting the identity of someone who registered a HTTPS certificate with a CA in another country how?
This is why the people advocating this are also advocating alternative CAs like letsencrypt, where the process is automated, only requires proof you control the domain (no real world info needed), and is free. Thus, the CA model is used, but only at a bare minimum.
Then again, I'd like to see browsers support an encryption model based on DNS records, using one of the many unused crypto record types, like KEY, for delivering the server's public-key fingerprint. For basic use, there would be no need for an outside party to verify its validity, just that the connection isn't being tampered with. This way, there's no tracking by a central authority unless one wants or needs further verification.
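A rough Python sketch of the verification side of that idea (the hostname is made up and the fingerprint is a placeholder that would, in this scheme, be fetched from a DNS record rather than hard-coded):

```python
import hashlib, socket, ssl

HOST = "example.com"
# Placeholder: in the scheme described above, this value would come from a
# DNS crypto record rather than being hard-coded.
EXPECTED_SHA256 = "0" * 64

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE  # skip the CA chain; we pin instead

with socket.create_connection((HOST, 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname=HOST) as tls:
        der_cert = tls.getpeercert(binary_form=True)
        if hashlib.sha256(der_cert).hexdigest() != EXPECTED_SHA256:
            raise ssl.SSLCertVerificationError("fingerprint mismatch")
```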
> Roy Thomas Fielding (born 1965) is an American computer scientist,[1] one of the principal authors of the HTTP specification, an authority on computer network architecture[2] and co-founder of the Apache HTTP Server project.
> If the IETF wants to improve privacy, it should work on protocols that provide anonymous access to signed artifacts (authentication of the content, not the connection) that is independent of the user's access mechanism.
But it seems to me that there is basically no way to request access to any kind of data without it being traceable in some manner; at the very least the ISP would still see the traffic. I guess you could argue for Tor, but that still leaves attack vectors and has its own issues to worry about.
Funny, we need the same kind of access that radio and TV used to provide, where you could just "tune in" to something and have a listen, and you were more or less untraceable; even if you were to broadcast on that frequency, you were still fairly anonymous, though triangulable. But on the internet, there is no such way to broadcast like that. Maybe that's a design flaw, maybe it's a feature.
100% agree. If you want to ensure the integrity of content, then sign the content. If you want to protect people's interactions with websites when exchanging non-public data then use encryption when that's warranted. But credential-based encryption should never be used across sites or with public data because (as the poster notes) it just becomes another way in which broader interaction with the Internet can be tracked.
It's trading yet more liberty for just a little bit more security, and haven't we all done enough of that already?
"100% agree. If you want to ensure the integrity of content, then sign the content."
That's what "subresource integrity"[1] is about. For static data, a cryptographic hash of the linked content is attached to links. This detects any modifications of the document. Now you can use a CDN without trusting it.
A big problem with "HTTPS Everywhere" is that it encourages termination of the secure connection at a CDN. The CDN can then tamper with the content. Some do, adding ads and trackers. Subresource integrity will detect such tampering, but HTTPS Everywhere will not.
If you're using a CDN, you don't know if the CDN has injected ads or spyware. If you use Cloudflare's RocketLoader, what the user gets is not what you sent.
Once a CDN is involved in "HTTPS Everywhere", it's security theater.
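Generating the integrity value from [1] is cheap; a minimal Python sketch (the file name and CDN URL are made up):

```python
import base64, hashlib

# Hash the exact bytes the CDN will serve and embed the digest in the tag
# that references them; the browser then refuses a tampered copy.
with open("vendor.js", "rb") as f:
    digest = hashlib.sha384(f.read()).digest()
integrity = "sha384-" + base64.b64encode(digest).decode()
print(f'<script src="https://cdn.example.com/vendor.js" '
      f'integrity="{integrity}" crossorigin="anonymous"></script>')
```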
I (as the website's owner) have to trust one more party, which I know and have a contract with. That's a whole lot better than having to trust any random third party between me and the user of my website.
"authentication of the content, not the connection"
Popular websites that hold themselves out as businesses, i.e. "exclusive" sources of content (often generated by users, go figure), have no reason to support this concept, because then it would not matter where the user gets the content. But they might have reasons to support TLS.
One could argue users want authentic content (signed content), not authentic "websites" (single sources for content trying to serve too many users, all at the same time).
There is another thread on the HN front page right now about a Blackhat conference talk on x86 CPUs. There is another talk on that page about how TLS relies on the trustworthiness of internet routing.
What is the point of securing connections when you have no control over routing?
Instead of relying on securing "connections", I think schemes that send out "encrypted blobs" with the hope they arrive at the proper destination make more sense.
Encrypting blobs is not something for which an "authority" is needed. This is something over which the user can retain full control without involving third parties. As it should be.
TLS might have encryption that works well enough to "secure a connection" but if I am not mistaken it still has no reliable way to verify an endpoint (recipient) is the one you want it to be. Some people call that "authentication".
I'm not even sure that TLS can reliably perform "authentication of the connection" as Fielding states.
This doesn't feel right to me. No one can touch this man's credentials, but let's suspend the argument from authority for a second and look at what he is saying critically: is TLS more private overall than plaintext HTTP?
If you want to remain private, how could TLS prevent this that plaintext would not? HTTP is not tor.
I think the argument is "HTTP is edge cached (by your ISP, etc) and so a request need not imply a connection received at the remote end. HTTPS is not subject to caching or other benign man-in-the-middle operations so knowledge of who clicked what is centrally available." This feels like a weak objection to me, since the government will just snoop at the ISP level.
Can anyone explain what he's referring to with the statement below?
> with TLS in place, it becomes easy and commonplace to send stored authentication credentials in those requests, without visibility, and without the ability to easily reset those credentials (unlike in-the-clear cookies).
Cookies are orthogonal to presence of TLS, I thought (unless they're marked as secure, in which case they are only supplied to https hosts?)
Is there some other way of identifying a particular user/browser/session[1] other than the quirks-and-features enumeration along the lines of Panopticlick?
If there is (ISTR some 'session storage' for resuming TLS in nginx), is that cross-trackable across different services (potentially all TLS-terminating in the same place, such as Cloudflare or AWS)?
One good point that I hadn't considered is that the lack of proxyability means every request which can't be filled from the browser cache must hit the actual endpoint, making it easier for them to follow along action by action when it might otherwise have been served up before reaching them by a caching middle-proxy.
My (limited) understanding is that you're potentially providing more information to the remote service, but are better secured against people snooping on your traffic as it flows between you and them.
[1] also not including client certificates, because exactly 1 site on the internet actually uses them :P
Besides Session IDs and Session Tickets[1], which already exist in the TLS protocol, he could be referring to the Token Binding Protocol draft[2] which, quoting from its summary, "allows client/server applications to create long-lived, uniquely identifiable TLS bindings spanning multiple TLS sessions and connections".
His argument is mostly based on analysing the size of the data transferred. Let's assume HTTP/2 for the moment. You have a single encrypted channel to a particular website that contains multiple interleaved opaque streams. It's not easily possible to extract the exact size of a single request from this. Furthermore, for a typical news website, for example, there will be a huge number of pages; they are dynamic and constantly changing, and they will all have a very similar size.
You do get privacy. If anyone claims otherwise, they should go and prove that it's possible and easy by providing a Firesheep-like tool. It would make for a nice research paper.
Fair enough, the HTTP request path can be hidden through TLS. I'm not sure if privacy is a goal of HTTPS, though.
On the internet layer, IP packets can still be traced from origin to client. I'm probably not involved enough to formulate an educated opinion, however.
> If the IETF wants to improve privacy, it should work on protocols that provide anonymous access to signed artifacts (authentication of the content, not the connection) that is independent of the user's access mechanism.
He basically wants a better version of Freenet. Fine. However, that's orthogonal to the effort to make all channels secure channels. IETF can do both, if people are interested. Plaintext TCP needs to die, and building the infrastructure to move everyone to HTTPS is a step in that direction.
He shouldn't obstruct HTTPS just because it's not Freenet(bis). To use his Freenet(bis) vaporware, people will need to become familiar with managing private keys. HTTPS has a similar requirement, so HTTPS Everywhere is a step in the direction he wants to go.
> TLS is NOT desirable for access to public information, except in that it provides an ephemeral form of message integrity that is a weak replacement for content integrity.
TLS both encrypts and authenticates the response. Is TLS authentication a "weak replacement" for some other, better "content integrity" system that's widely available in browsers?
Roy suggests content signatures... but is there a web mechanism to authenticate those? Or is he just wishing there were something better than TLS? (Don't we all?)
I think his point is not that TLS is not good at securing information from the host to the viewer, his point is that in doing so, it leaks information about the viewer to the host and potentially to third parties. For public information, TLS effectively asks each viewer to sign the guest register in return for seeing the page.
Contrast this with the case where you could download one giant file with hashes for millions of public sites. Once you have a copy of that file, you can now fetch a copy of any of those pages from any source you like, and still validate that your copy is authentic without losing any trace that you accessed that file.
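A toy Python sketch of that model (the mirror URL and hash are placeholders; the hash file itself would be signed and distributed out-of-band):

```python
import hashlib, urllib.request

# One entry from the hypothetical "giant file of hashes", shipped to the
# client ahead of time through some trusted channel.
KNOWN_HASHES = {"rfc2616.txt": "0" * 64}  # placeholder digest

# Fetch the page from any mirror at all; authenticity comes from the hash,
# not from who served it or how it travelled.
with urllib.request.urlopen("http://mirror.example.net/rfc2616.txt") as resp:
    body = resp.read()

if hashlib.sha256(body).hexdigest() != KNOWN_HASHES["rfc2616.txt"]:
    raise ValueError("mirror served a tampered copy")
```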
> If the IETF wants to improve privacy, it should work on protocols that provide anonymous access to signed artifacts (authentication of the content, not the connection) that is independent of the user's access mechanism.
[...]
> It would be better to provide content signatures and encourage mirroring, just to be a good example, but I don't expect eggs to show up before chickens.
If I wanted to have the eggs before the chickens, what would I do?
Sign my content with PGP? Sign the HTML file? Offer it as separate signed download? Are there examples of pages doing this?
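One low-tech way to get the egg today is to publish the page next to a detached PGP signature. A rough sketch, assuming the python-gnupg package and an existing local keyring (file names and key id are made up):

```python
import gnupg  # assumes the python-gnupg package wrapping a local gpg install

gpg = gnupg.GPG()

# Publisher: create a detached, ASCII-armored signature next to the page.
with open("index.html", "rb") as page:
    gpg.sign_file(page, keyid="me@example.org", detach=True,
                  output="index.html.asc")

# Visitor (who already trusts the key, obtained out-of-band) verifies the pair.
with open("index.html.asc", "rb") as sig:
    verified = gpg.verify_file(sig, data_filename="index.html")
print("good signature" if verified else "BAD signature")
```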
How exactly is TLS different from plaintext with regard to anonymity? There is no client cert.
Also, HTTPS everywhere does not necessarily mean "real" CAs. Self-signed certs, even without pinning, would raise the bar for snooping from passive monitoring (easy) to active traffic manipulation (hard). In this case there would not be a green lock in the address bar, of course.
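Generating such a throwaway self-signed cert is cheap; a sketch assuming the pyca/cryptography package (the hostname is made up):

```python
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

# Throwaway key + self-signed certificate: no CA, no green lock, but enough
# to force an eavesdropper into active MITM instead of passive capture.
key = ec.generate_private_key(ec.SECP256R1())
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example.com")])
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)  # self-signed: issuer == subject
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
    .sign(key, hashes.SHA256())
)
with open("selfsigned.pem", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))
```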
This whole thread feels like one propaganda attempt to sway the technical community. And yes, here is where the people to manipulate are.
I would wager that absolutely no one has ever become uniquely identifiable as a result of using TLS. People have MAC and IP addresses tied to their real identities. People have social media profiles and run scripts from dozens of places on every page load.
Can someone please describe a situation in which someone reasonably wouldn't have been trackable to Google or to the NSA, but becomes trackable as a result of HTTPS?
I can't think of one. Screw this guy and his politics.
HTTPS for everything is worthy for at least one reason: to shut up the scum operators who insert their ads. Privacy and confidentiality are of much less importance.
Roy is a wannabe industry shill who plays politics at a very amateur level. Roy couldn't care less about your privacy, as long as ads and tracking work. He fundamentally thinks that ad blocking is theft and that you have no right to privacy.