Intent to deprecate: Insecure HTTP (groups.google.com)
180 points by dochtman on April 13, 2015 | 293 comments



How about we solve the "certificate" situation first?

I've just paid lots of money for "certificates". I quote the word, because they don't actually certify or even signify anything. The whole procedure for "domain-verification" is a joke, and many outfits are incompetently run (their "verification" e-mails bounced from my servers because they ended up in RBL, which nobody seemed fit to correct).

I see this as a scam, or extortion — pay up, or you won't be "certified". And pay up significant amounts of money, if you want a wildcard cert.

If we care about encryption just for the sake of encryption, let's change our browsers to allow self-signed certificates. Label them as such, but don't label them as "unsecure", because the padlock icon really isn't any more secure than a self-signed cert.


But what kind of technology would think self-signed certificates could ever be better (let alone not infinitely worse) than no encryption at all? That sounds like hell. We should call this new, purely hypothetical protocol, SSHell, or SSH for short. Not that such a thing would ever be used by anyone, of course.

(Yes, I know that technically SSH uses key pairs; but they show you the key, you choose to trust it. Same real chance of a MITM as with a self-signed SSL certificate. But at least you (and others) can now detect when there's a change in who's sending your data, and choose who to trust yourself instead of relying on Mozilla to pick who to trust, like they did with CNNIC.)


The trust-on-first-use model makes it hard or impossible to regenerate certificates. If you don't have that problem with SSH, it's because you only connect to your own servers.

There have been several attempts at distributing certificate checks across the wider Internet, but they all come with their own set of problems.


Alternatively, you do have that problem with SSH, and you work around it by scratching your head, saying "oh, right, because I switched hosts/reinstalled the OS/ran that mysterious command," and then nuking the key entry.

Alas, this does not scale well into the HTTP realm.
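
For reference, the "nuking the key entry" step is usually just this (assuming the default known_hosts location; the hostname is a placeholder):

    # drop the stale host key so the next connection prompts for trust again
    ssh-keygen -R myserver.example.com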


What about trusting not the key itself, but the CA which signed the cert - whose cert came within the cert chain supplied by the server? Then everyone could get their own "root" CA and renew the certificates to their heart's content.


The problem is it requires knowledgeable, involved users if the private key ever needs to change. If I suddenly get an SSH key error on a company server, I can just ask the sysadmin if it's legit. That wouldn't really work on sites intended for the general public - users would just click "ok" on every error or they'd be scared off.

Although I think it could work great on sites intended for small numbers of clueful users. (Web-based admin consoles, etc.) But something else is needed to make HTTPS usable by average people.


Oh, certainly. I agree completely with everything you said.

But right now, the situation is that I go to arstechnica.com, which offers no encryption whatsoever, and Firefox just loads it up like nothing's wrong.

Yet I go to self-signed.example.com, and Firefox presents me with a giant warning screen, followed by a pop-open warning dialog, where I have to click to confirm the security exception, add the certificate in, and OK another scary prompt. It's so over the top that laymen probably think proceeding will give that site complete interception control over every site they ever go to again in the future.

That's absolutely ridiculous. Worst case, self-signed should be treated like HTTP is now. No padlock, no green address bar.

People who are knowledgeable and able to confirm self-signed certs should be able to very quickly and very easily do so. This will greatly help with developing inside the local network, or for small communities that know what they're doing and don't want to pay hundreds of dollars for wildcard SSL certs.


This.

A self-signed certificate is significantly better than no encryption whatsoever (even if you're being phished, you at least now know that no other phisher has viewed or altered the response in transit), but browsers for reasons that defy explanation treat them like they're worse.

There was even an MTA (exim maybe?) that on seeing an untrusted certificate would actually downgrade to plaintext in some circumstances. Great job, guys; you really dodged a bullet there...


Lol, took me a minute to get the sarcasm.


If all a domain-validation SSL cert means is that the certificate issuer could reach you via admin/administrator@domain.com - that's nearly all they do, besides ask for some payment information - why not automate that part of the process?

That is, the certificate presented by the site should be signed in a fashion that proves that whoever signed it owns the domain.

How do we know you own the domain? Because you control the DNS. If you control the DNS, you can control the domain in so many ways, including receiving the DV email, so it seems like a proper way to verify it.

If you control DNS, you can set a TXT record and put a public key in it.

So why not have browsers actually just ensure that a certificate is signed by the public key stored in DNS? Is there a good reason not to do this?
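
A rough sketch of what that could look like, assuming an invented "_tlskey" TXT record name and a SHA-256 of the server's public key as the published value (none of this is an actual standard; it is purely to illustrate the idea):

    # hash the server's public key...
    openssl x509 -in example.com.crt -pubkey -noout \
        | openssl pkey -pubin -outform der | openssl dgst -sha256
    # ...publish the digest as a TXT record, e.g. _tlskey.example.com,
    # which a client could fetch and compare against the cert it was served:
    dig +short TXT _tlskey.example.com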


This is called DANE, described in RFC 6698. But it requires DNSSEC. Without DNSSEC, the visitor to the site can easily be MITM'd by spoofing DNS. It is presumably more secure when a CA verifies an email (or DNS) because the CA can control their DNS servers and be reasonably sure their DNS is not spoofed. Random internet clients can't without DNSSEC.
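
For the curious, the DANE lookup is just another DNS query; a sketch assuming example.com publishes a TLSA record for HTTPS (the openssl line corresponds to the "full certificate, SHA-256" association type):

    # TLSA record for TCP port 443, per the RFC 6698 naming convention
    dig +dnssec TLSA _443._tcp.example.com

    # the association data for the "full cert, SHA-256" case is simply
    openssl x509 -in example.com.crt -outform der | openssl dgst -sha256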


Might just be because you need DNSSec for that technique to be viable (otherwise an attacker could just spoof the DNS response too) and that hasn't been widely available until recently. Offhand it sounds like a pretty good approach now.


Point is, this is what we do now. We do this without DNSSEC. The step of receiving the email is pretty much redundant and is only there because people understand email better than creating a TXT record.

DNSSEC is a terrible idea and should be abandoned for many reasons. So should this method of domain validation. You know who knows for sure that you own the domain you say you own? The registrar. That is who should issue you your cert, not some third party.


How do you check the difference between your self-signed cert for your site and my self-signed cert for your site?


I rarely care. Even the CA-signed certs are usually for "RL Media Enterprises Inc." or something equally opaque to me, rather than something more meaningful.

Bluntly, the refusal of a certain part of the security community to simply secure transport first and then worry about authentication on top of that is both frustrating and mind-boggling.


I sure care. If my browser starts to accept YOUR self-signed certificate for gmail.com, we're back at http.


> we're back at http

No. With http I can phish you, and anybody else can read or alter that phishing attempt. With self-signed certificates, I can phish you, and you know that my phishing attempt was neither altered in transit nor read by anyone else. We now have a channel over which we can negotiate authenticity.

If you went to my blog and saw that a CA had verified that I am who I claim I am, that doesn't particularly help you, because you don't know anything about me. But you might like to know that, whoever I am and claim to be, no other party is interfering with our communication. My issue is not with things like Gmail or my bank, but with the thousands of "ordinary" sites where learning the identity of the business that owns the site doesn't actually give me any useful information. That is, even if I see the name of the company in the certificate, I don't have a reason to trust them more than I would trust a phisher because I have absolutely no sideband interactions with them to begin with.


> Bluntly, the refusal of a certain part of the security community to simply secure transport first and then worry about authentication on top of that is both frustrating and mind-boggling.

This sounds a lot like the thinking that brought us the TSA. Do something, anything!


He's asking to decouple the authentication problem from the encryption problem, because at the moment the main problem is that to get encryption (without a big ugly warning), you basically also need to pay a CA for authentication.

I really don't see your point with TSA, we're not talking about security theater here.


Bingo.

I have a blog. No ycombinatorer has any idea who I am or whether I'm trustworthy, so a verification from a CA that I am who I claim I am isn't particularly helpful to either of us if I link here.

Since you don't know who I am to begin with, presumably you wouldn't trust me with any greater information than you would give to a phisher, since even with a CA-signed certificate I might have nefarious purposes. But with encryption you would at least know that whoever you are in fact communicating with actually sent the message you received and not something else.

It's genuinely puzzling to me that so many people obtusely claim there's no value there.


If I'm reading your blog, why am I going to "trust" you with any "information" at all? You shouldn't need to prove your identity to publish a blog, and if you need either positive identity, non-repudiation, or encryption, then you need something that 99.999% of your fellow bloggers don't.

So to me, the whole thing sounds like a red herring, or rather a Trojan horse for the imposed removal of anonymity from the Web. No one has articulated just what problem is being solved here, but plenty of people have articulated the downside.


Considering that's something we can't do at the moment with http I hardly think it's important. If nothing else, using https means purely passive attacks aren't possible. Active attacks are much easier for devices to spot by looking for key changes, too.


But would you use gmail.com over plain HTTP? Would you use it if it presented a self-signed cert?


No, but check out how XKCD is presented at the moment. It doesn't look like it's secure or insecure, it looks like http. I would happily read webcomics over a self-signed certificate if it was made easy for me.

We don't need identity verification on many sites; being resistant to passive eavesdroppers or transient active attackers (such as sites that present different SSL certs when accessed from a public wifi) is nice to have, as it prevents some attacks rather than none.


So you would be OK with your ISP adding popover flash video ads to XKCD for you? Or your boss calling you in, asking why you are reading comics instead of doing useful work?


No, I'm not okay with it, which is why I'm so against the near ubiquity of http which suffers from this exact problem on most mobile networks and many free wifi networks. The things you describe are not only possible but widespread.

By requiring a perfect solution to auth and ident (rather than iterative improvements) you are part of the problem.

Unidentified but authenticated connections should not be penalised compared to unauthenticated and unidentified connections. If someone MITMs a TLS connection with a forged certificate they can indeed do all the things that are trivial already with bog-standard http. If a client records TLS keys of sites they've already visited there is partial mitigation of this attack.

This doesn't have to have any effect on the CA scam business model, although obviously I would be in favour of a combination of key pinning and some sort of hand-wavey consensus determination for initial pins of arbitrary sites, but the fact that people are being forced to pay so that browsers won't prefer plain text over end-to-end encryption is absurd.
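
A minimal sketch of the kind of key recording mentioned above, assuming you are content to shell out to openssl (example.com is a placeholder):

    # fingerprint of the certificate seen on the first visit...
    openssl s_client -connect example.com:443 -servername example.com \
        </dev/null 2>/dev/null | openssl x509 -noout -fingerprint -sha256
    # ...store it and compare on later visits; a change is either a
    # legitimate re-key or a MITM, the same judgement call SSH pushes to you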


Generally you trust your ISP slightly more than strangers sitting near you.

No version of HTTPS has ever hidden what site you were visiting.


Really? http://arstechnica.com/tech-policy/2014/09/why-comcasts-java... doesn't bother you?

HTTPS sure does hide the URLs you are hitting. It may leak the domain name, and you are also resolving DNS entries in the clear, but there's a difference between the Wikipedia entry on puppies and on something more nefarious.


Feel free to stop with the strawman attacks at any point.

That said, I would prefer comcast ad injection to someone running firesheep. And hiding which comics I looked at for two hours isn't going to help my job very much.


No strawman here. I would prefer that nobody but the origin host can deliver content as the origin host.


You looked at a position saying it wasn't important - not Gmail- or bank-level important - to secure webcomics. You then argued that position was completely okay with having webcomic visits altered. That's a strawman.


I am not sure where you are getting this from, though it is very probable I wasn't completely clear. To clarify:

I believe all content (banks, GMail, XKCD) should be served over HTTPS and protected by a trusted certificate. Self-signed certs are not trusted and should not be (the problem of "should GMail give me a self-signed cert?" cannot be solved; you need some type of external trust).

Notice, I am not saying trusted and signed by a CA. We can replace the trust model and do something different to fix the currently broken CA system. However, I never argued and am not arguing that some content is OK to be served over plain HTTP.


> No version of HTTPS has ever hidden what site you were visiting.

Eh? That's why SNI was invented, because pre-SNI https did not send the hostname in plaintext.


SNI wasn't put in to leak information.

The server sends the hostname in plaintext when it sends the certificate.


You know gmail.com is one of the domains with known invalid certs signed by browser-supported CAs in the wild, right?


You're going to end up with similar issues to key-signing, maybe we could have certificate exchange parties or a friend-of-a-friend network passing on certificates from trusted parties.

Right now a certificate doesn't mean much anyway. For security purposes there is hardly any difference between accepting a self-signed certificate and one that has been issued by any one of the CAs; it's mostly a feeling. Nobody went there to check who ordered the certificate.

All it says is that someone paid someone else some money.


A self-signed cert says "I claim to be example.com", while a signed cert says "I claim to be example.com and I previously convinced someone else that I was example.com, too."

It's certainly not perfect, but it is a much bigger hurdle for a mitm attacker to get past.


But it's just wrong to conflate these two things. Security of transport and authenticity of parties are two separate questions and should be treated as such. Particularly since conflating them essentially says "anonymous secure transport will not be possible".


Anonymous secure transport is fundamentally not possible. If you don't know who you're talking to, you may be talking to a MITM.

It's possible to have pseudonymous secure transport, if you identify sites with key-based pseudonymous identities rather than some form of authenticated identity, but you have to have some notion of identity or you don't have a secure transport at all.


That remains a silly claim no matter how many times it's repeated. All that is required for secure transport is two key pairs. I need not have any idea who the other party is, but I can be certain that I have received exactly what that party transmitted and that no other party could read it in transit (remember, even the sender can't recover the original plaintext in this case if he's lost it).

It's pretty basic cryptographic theory and I'm not sure how such a fundamental misunderstanding became so widespread. A secure channel need not tell me anything about the probity of the other participant. (In basic terms, even if I'm being phished, there's an advantage in keeping some other criminal syndicate from also reading my information.)

"But you don't know with whom you are actually communicating to begin with!" you may say. Agreed, and not cared about. I'm communicating with someone. Step 1 is to make sure that whoever that person is, she and I share a secure communication channel that no other person can alter or intercept. At that point she and I can negotiate authenticity. Solve the simple problem first, and the harder problem becomes easier.


"Secure" against what threat model? If your threat model is a passive eavesdropper that simply reads the contents on the wire, but cannot actively change things or impersonate as another hostname, it will be secure.

While this attacker is too weak for most security use cases, it will cover many forms of passive mass surveillance by ISPs and governments, so it is quite valuable in that sense.


Guess it sucks when reality gets itself conflated. You cannot have "security of transport" (encryption?) without authentication.


Repeating that over and over does not make that true. Or have I just been imagining the existence of Tor, etc.?


Tor puts quite a lot of effort into authenticating part of the infrastructure, why do you think they do not? And Tor isn't providing a "secure" transport, they're trying to provide a "mixed" transport to hide you among others. If you were the only Tor user (or facing a big enough foe) and Tor did no authentication, then those random encryption hops could get hijacked easy enough since a fake directory could get published right to you, and you'd happily encrypt each hop with a MITM key.


or someone is going to stuff the signature in DNS like we seem to do for a lot of other items (e.g. SPF, DKIM)


If you're in a position to MITM a TLS connection, you can most likely also alter those signatures for your target.

You would need to use something like DNSSEC as well, relying on a government-controlled PKI [1], which isn't really any better than the current situation.

[1]: http://sockpuppet.org/blog/2015/01/15/against-dnssec/


The current situation is that everyone in the Starbucks can completely monitor all of your plaintext. Self-signed, encrypted, unauthenticated connections are better than plaintext connections.


This might help:

http://www.coniks.org/


You can get normal certificates for free these days and wildcard certs don't cost significant amounts of money.

At startssl you can get unlimited wildcard certs for $60. I know startssl isn't that popular, and partly that's justified.

Hopefully Let's Encrypt will launch in the summer and make it even easier to get certs for free.

Or in other words: People are working on solving the certificate situation - and even now it is not as bad as you make it sound.


Agreed. Enforcing SSL on the entire web will just mean a lot of very sloppy and bad SSL. Better to just make it abundantly clear when traffic is insecure, because that's a problem you'll have to solve whether you force SSL or not.


How about we solve both these problems and stop quibbling about which one we should solve first, given that they both need to be solved?


I'd rather not spend hundreds on a wildcard cert, or thousands on an EV cert, plus the costs in setting it up, until I'm sure the model is actually sound. Trusting CNNIC, no DNSSEC, etc. show that Mozilla and Google aren't taking this very seriously. It's more a dog and pony show at this point with encryption.

Plus I'm still cognizant of the bullet I dodged by not having OpenSSL on my server back when Heartbleed hit.


> I'd rather not spend hundreds on a wildcard cert, or thousands on an EV cert, plus the costs in setting it up, until I'm sure the model is actually sound.

I'm 100% sure the model isn't sound.

I'm also 100% sure that a model which includes unencrypted HTTP will never be sound. The cert problem is a fixable problem, but it's not fixable while unencrypted HTTP exists.


> The cert problem is a fixable problem, but it's not fixable while unencrypted HTTP exists.

Sure it is. Whether or not HTTP exists has zero bearing on solutions to the cert problem; the cert problem is independent of the unencrypted problem (whereas the unencrypted problem is dependent on the cert problem, since the cert problem is precisely why the unencrypted problem currently exists).


Chicken, meet egg.


It's really not a chicken/egg problem. We solve the easy problem first, then the hard one. I'm not yet sure there's actually a good solution to the cert problem.

There is one solution I can think of, but it involves equating URLs with identities via a Namecoin-like system, and that technology just isn't there yet.


No. It's only chicken and egg because we needlessly conflated two very distinct problems a few decades ago.

Problem 1: isolate the communication between myself and whatever other party is actually sending me a message. Easily solved by encryption. (You're being MITM'd? That sucks. But you have now at least isolated the communication to you and the attacker. The problem domain just shrunk quite a bit.)

Problem 2: verify that the other party is who she claims to be. Not easy to solve but a completely separate problem from Problem 1.

We could solve Problem 1 tomorrow (modulo the time it takes to upgrade every browser/mail client/etc.) by simply encrypting all traffic, period, and not doing any authentication whatsoever. We would then be exactly where we are right now in terms of having a PKI system with all of its advantages and faults, but we would then have the amazing bonus feature of preventing all passive attacks, period.


Let's Encrypt is scheduled to launch in mid-2015, so by later this year, the procedure to get a proper certificate will be something like:

    sudo apt-get install lets-encrypt

    lets-encrypt example.com
https://letsencrypt.org/howitworks/


It seems kinda wrong to require encryption for, say, a static website receiving no input from the user in the first place.

Perhaps the web browser's shouting about security issues could be delayed until the moment when something happens where security is relevant.


Even on a static page you want to be sure that what you see is what the server sent. Without any third party injecting anything.

That's what comments like this like to forget: HTTPS provides not only encryption, but also authentication.


> Even on a static page you want to be sure that what you see is what the server sent.

So have a PGP signature of the page available and the browser can check it.

> HTTPS provides not only encryption, but also authentication.

And this is the problem. Those are two separate issues and should be handled separately.


That's what subresource integrity[1] is for. Links will have a secure hash of the content they link to. So you don't have to secure page assets such as pictures, CSS, Javascript, fonts, etc - just the entry pages for a site. This cuts the need for encrypted traffic way down.

This has a big advantage over HTTPS Everywhere - neither you nor your users have to trust your CDN. Put your main pages' HTML, and special pages such as login and transaction pages, on your own HTTPS server, and the public stuff on some CDN, unencrypted. This is much more secure than letting some CDN possess your private keys.

[1] http://www.w3.org/TR/SRI/
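
For anyone wanting to try it, the integrity value is just a base64-encoded digest of the resource; a sketch using the command the SRI documentation suggests (framework.js and the CDN URL are placeholders):

    # compute the SHA-384 digest, base64-encoded
    openssl dgst -sha384 -binary framework.js | openssl base64 -A
    # then reference the asset from the (HTTPS) entry page, roughly:
    #   <script src="https://cdn.example.com/framework.js"
    #           integrity="sha384-<digest from above>" crossorigin="anonymous">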


> HTTPS provides not only encryption, but also authentication.

Comments like that are a reinforcement of an increasingly-common point: that HTTPS shouldn't be conflating those two things, since doing so is resulting in a lot of pain whenever these sorts of suggestions to encrypt all web traffic come up.

All I want is a way to set up an encrypted connection without caring about it being authenticated, thus offering at least basic protection against random passersby at Starbucks. Let me do that without having to buy a certificate from some schmuck who was arbitrarily trusted by a bunch of browsers (without any actual guarantee of trustworthiness, mind you; just an empty pinky promise that they won't be naughty).


Yessiree, Bob, those are my cat pictures, and here's the 1024-bit key to prove it.


No, that's not a good idea, there are lots of cases where the static page being sent to you contains secret information, and should yell loudly when the source can't be verified. Secret information that you don't want to be intercepted or modified by an attacker.


Sure, but there are also plenty of cases where you don't care. Why make everyone everywhere bother with HTTPS? If you want to protect the user, as a browser, go ahead and yell or block access, but only when the user's data is at risk. Blocking access to my page because someone might have misrepresented what I put there, and thus forcing me to deal with HTTPS? That's more than a little excessive IMO.


There have been cases of ISPs injecting ads into the websites customers visit.

http://arstechnica.com/tech-policy/2013/04/how-a-banner-ad-f...


This should be treated as a legal problem, not a technological one.


> the padlock icon really isn't any more secure than a self-signed cert.

I understand this comes from a frustration, and an understanding that certificate-based authentication is not perfect. But labeling it as "as secure as a self-signed cert" when a self-signed certificate provides no authentication (or an impractical one at best) is uncalled for.

So as long as no practical and better solution for server authentication has been found, this is the best we have and it is still working pretty well (you don't see a lot of rogue certificates in the wild).


> But labeling it as "as secure as a self-signed cert" when a self-signed certificate provides no authentication (or an impractical one at best) is uncalled for.

It's perfectly called for. The CA system is based on arbitrary trust. An actually-effective system should not rely on trust at all.

Even something like what Namecoin does - using a Bitcoin-style blockchain as a public ledger, but for SSL certs instead of DNS entries - would be a massive step in the right direction in comparison to the current CA system.


It's not entirely arbitrary. Some CAs may actually be worthy of trust. I imagine it would be possible to modify the browser's UI to reflect the trustworthiness of the server certificate to encourage better diligence on their part.


> Some CAs may actually be worthy of trust.

By what measure? Some empty promises of good security practices, perhaps? Or maybe some pinky swear that they'll always act in the best interests of the internet as a whole rather than in the interests of whichever government or set of shareholders happens to be in a position of power relative to them?

Basically: name one.


But self-signed certificates are more secure than CA-signed certificates, because they don't involve a third party. My various servers only use self-signed certificates among themselves because it drastically reduces the amount of initial trust required.

The huge problem there is key distribution (easy since I run all the servers), but in terms of just "security" that's much more secure than involving a third party CA.
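
In case it helps anyone reading along, generating such a cert is a single openssl invocation; a sketch with placeholder names and validity period:

    # self-signed cert plus key, no CA anywhere in the loop
    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
        -keyout server.key -out server.crt -subj "/CN=internal.example"
    # then distribute server.crt (or just its fingerprint) to the peers yourself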


Self signed is one thing in an internal environment/infrastructure role but quite another in a web server interacting with the public role. You're comparing apples to oranges.


"Internal" begs the question of key management, which is the whole issue to begin with. At any rate, my claim stands that a self-signed certificate is by nature more secure than a CA-signed one, because it does not require the additional level of trust in the CA.


It's only more secure between two parties that can reliably confirm their identities with each other out of band. A CA, however badly implemented in practice, is more secure by design because the worst case scenario (a subverted CA) is no different from a self-signed certificate from a client perspective.

In terms of what you're focusing on in this thread (verification of identity) I don't disagree. But a MitM attack on coffee shop wifi is a problem which is exacerbated by self signed certificates.


You make a good point. This might be more of a UI issue: perhaps self-signed certificates could use a different browser icon that lets people know traffic is encrypted, but the identity of the site is not verified. Use a large "E" in the icon?

I don't mind at all visiting someone's personal site that has a self-signed certificate, but a UI change to avoid the glaring warning dialog would be nice.

There are large advantages of having almost all traffic encrypted.


This is especially bad on Firefox where the dialog to allow a self-signed cert isn't even very intuitive. At least on Chrome (last I checked, anyway) there is a clear "correct" thing to do if you want to allow the cert.


I wonder if web-of-trust SSL is the right way to solve this.


On its own, probably not; web-of-trust is one of those things about PGP that was awesome on paper, but in practice ended up being a significant pain point.

A cryptographic ledger, on the other hand (think Namecoin), would probably be at least slightly more effective; it would effectively confer the same benefits (decentralized registration authority) without the logistical nightmare of a web-of-trust.


Well-stated. You are not the only one.


Hey, this is Richard, the author of the post. All the feedback here is great, but if you've got thoughts on whether we should pursue this strategy or not, please comment on the mozilla.dev.platform list.

https://lists.mozilla.org/listinfo/dev-platform

https://groups.google.com/forum/#!forum/mozilla.dev.platform


As I mentioned in the thread in question:

"Basically, the current CA system is - again, to put this as gently and politely as possible - fucking broken. Anything that forces the world to rely on it exclusively is not a solution, but is instead just going to make the problem worse."

Please keep that fact - that the CA system is, as I put it with all the gentleness and politeness it deserves, "fucking broken" beyond any repair - in mind.

(And no, "the CA system is fucking broken" is not an opinion; it is a verifiable fact, as much as the concept of gravity is a verifiable fact)


If it's not a bother can you elaborate on the way(s) it's irreparably broken?


Seriously?

Just on my Debian box there are 173 entities (along with all their employees, disgruntled employees, hackers, and probably governments) who can sign a certificate for google.com that my computer will accept. I can think of five cases in as many years off the top of my head of a fake google.com (or related) certificate being found in the wild by Google because of various levels of CA incompetence and/or fraud.

Worse yet, this bungled attempt at authenticity has been awkwardly nailed to the much simpler (and in most cases much more important) question of cryptographic security, with the result that going through the absurd charade of convincing a CA of my identity is required simply to offer a client the assurance that I, whoever I may be and however much the client does or doesn't trust me, actually sent the message the client received and that nobody in transit could read it.
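
Anyone can check their own count; on a Debian-ish box with the ca-certificates package installed it's roughly:

    # number of root certificates your system trusts out of the box
    grep -c "BEGIN CERTIFICATE" /etc/ssl/certs/ca-certificates.crt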


The idea of having to trust a central authority for verification is the root cause of the vast majority of the brokenness; it means that proper TLS-based security for the web is not only financially prohibitive for even individuals in developed nations, let alone developing (with very few exceptions in the CA space providing cheap or free certificates), but is also a single-point-of-failure in terms of security.

Nowadays, we have this magical thing called a "blockchain" that can be used for everything from currencies (Bitcoin) to domain names (Namecoin); with some further refinement, using a blockchain as a certificate authority would fix both problems right away.


I certainly agree it should be replaced, but I think it's a bit off the mark to say that self signed certificates should be trusted the same as CA signed ones (something that badrami was suggesting throughout the thread). Yes, certificates insufficiently identify a site's owner but self signed certificates are as bad as a CA's worst case scenario.


When it comes to security, you should always assume worst-case scenarios are going to occur.

In which case, the equivalence of self-signed and CA-signed is entirely on-the-mark. There's no real guarantee that the certificate authority is any more secure or trustworthy than, say, my five-year-old niece.

This is why decentralized systems (lately, that's been interpreted to mean "systems using a cryptographic ledger or blockchain" or "systems that rely on mesh topology graphs" (i.e. something similar to Namecoin or something similar to PGP, respectively), but those aren't the only models out there) are ultimately necessary for this; that way, you don't have to trust one arbitrary centralized authority, but instead can trust, say, a majority of a collection of hundreds or thousands or millions of such authorities coordinating via an agreed-upon protocol/convention/etc. My own bet would be on a cryptographic ledger (PGP-style webs-of-trust aren't nearly as end-user-friendly, whereas a "blockchain" has more potential in that area, since it's easier to abstract away from the end user), but pretty much anything at this point would be less convoluted - and more secure/trustworthy/effective - than the current system.


I disagree. There's a significant amount of security we gain from collectively using the CA system over self signed certificates. If a CA is subverted my browser or OS vendor can pull the CA or the CA, if trustworthy, can revoke the certificates.

Let's say a CA has issued certificates for example.com to someone with nefarious intent. It's discovered that the CA's security is completely compromised and my vendor pulls the plug. In our current scenario I can visit example.com while being MitM'd and my browser vendor has made sure I get a big alert when I connect.

In a scenario without CAs, I visit example.com and my browser vendor has no idea that I'm being MitM'd nor do I since I've never been to example.com and examined the certificate.

Is it perfect with CAs? No. Will some get victimized by a CA's carelessness regardless of when it's caught? Probably. But most of us remain more secure with it than without it. For most users on most sites it works albeit haphazardly. It should absolutely be replaced. But to suggest that the security benefits should be abandoned because it's possible that it could happen is short sighted. It would be open season on internet users.


> I disagree. There's a significant amount of security we gain from collectively using the CA system over self signed certificates.

You're actually losing security by trusting the CA model, though. You have no means of control or independent audit. This is the same reasoning behind free and open-source software being inherently more secure and trustworthy than its closed-source counterparts; "transparency is a dependency of trust" is just as applicable here as it is in any other security-sensitive situation.

This is why decentralization is absolutely essential, and the longer we go on sitting on our haunches and pretending that the current system is "good enough", the worse the problem becomes.

> If a CA is subverted my browser or OS vendor can pull the CA or the CA, if trustworthy, can revoke the certificates.

That trustworthiness is a very big if.

> In a scenario without CAs, I visit example.com and my browser vendor has no idea that I'm being MitM'd nor do I since I've never been to example.com and examined the certificate.

There are numerous ways to achieve certificate verification without relying on a centralized CA system. Even with self-signed, you can detect private key changes (this is how SSH is protected against MITM attacks; in practice, this rather-simple security measure has been very hard to circumvent). For more verification, there are plenty of ways to achieve that in a decentralized manner, be it web-of-trust (PGP-style) or a cryptographic ledger (Namecoin-style) or something else entirely. Hell, there are already systems like DNSChain that implement the latter approach; that would be infinitely better than the current system.

> But to suggest that the security benefits

What security benefits? All the purported "benefits" are entirely fictional, since they rely exclusively on arbitrary trust in arbitrary entities. That's not security, no more than me handing you a briefcase full of cash and you promising you'll hold onto it for me is "security".

The sense of security you feel with the current CA system is very much false. You're relying entirely on luck, and have absolutely zero assurance that your luck will continue to be good.


What is the point? If you need a secure connection then use https, if there is no value securing the connection then use http. Why deprecate a working technology that continues to have many valid use cases?

As an analogy, this strikes me as deprecating an array for a linked list because a list has certain features that arrays don't. I.e., there is a time and place for both.


It will just make it easier to create walled gardens, and charge companies and individuals for certificates. It will also pretty much get rid of privacy, since to get the cert you will need to provide someone with your detailed information. Using self-signed certs will become harder. Yes, it's a good idea to use https _almost_ everywhere, but forcing everyone to use it is a terrible, stupid idea.


> It will just make it easier to create walled gardens, and charge companies and individuals for certificates.

letsencrypt.org will solve this problem.

> It will also pretty much get rid of privacy, since to get the cert you will need to provide someone with your detailed information.

That's something that applies to buying/owning a domain as well. Domain-verified certificates require very little private information, and I suspect letsencrypt.org might be able to provide a way to get trusted certificates without giving up privacy.

> Using self-signed certs will become harder.

How so? The proper way to use self-signed certificates would be to distribute them in a safe way (e.g. deploy them through your Active Directory Domain, provide the fingerprint through a separate channel, etc.) and install them in your trust store. I couldn't find any intended changes to this in their announcement.

> Yes, it's a good idea to use https _almost_ everywhere, but forcing everyone to use it is a terrible, stupid idea.

I agree that it should be a slow process, and I think that's exactly what they're suggesting. Eventually, when the tooling and processes around SSL/TLS catch up with the new "encrypt everything" mentality, it would probably be no big deal to deprecate almost every use-case of HTTP without SSL/TLS.


Can we please stop saying "letsencrypt.org will solve this" until letsencrypt.org is actually solving this?


Why? We're discussing long-term changes to how HTTP without encryption should be handled. I think an upcoming project with major impact on the availability of certificates should be a factor in this discussion. Besides, it's not like letsencrypt will be the first CA to offer free certificates - StartSSL has been doing it for years, although letsencrypt obviously has a different approach.


> Why? We're discussing long-term changes to how HTTP without encryption should be handled. I think an upcoming project with major impact on the availability of certificates should be a factor in this discussion.

Because Let's Encrypt might solve this well, or it might not solve it well, or it might never even be released. At the moment, factual statements about what effect Let's Encrypt will have on the world are similar to "$(THIS_YEAR+1) will be the year of Linux on the desktop."


Let’s Encrypt does not even provide an RPM for Fedora or RHEL yet, or at least there's no mention of it on the front page [1].

[1]: https://letsencrypt.org/howitworks/


There is an open protocol[1] on how to retrieve the certificates, so you can implement a client in a language of your choice, or wait until someone else does it.

[1]: https://github.com/letsencrypt/acme-spec


That comment was about package management (i.e. actually installing this on servers), not about language implementations. Let's Encrypt's documentation currently assumes a Debian-based operating system, which therefore leaves out a very large portion of the web server population.


letsencrypt.org is not even working, and it seems to be Linux only, at least for now. What about other OSs?


That's why I said "will solve". For now, you can use startssl.com for free, trusted certificates (for non-commercial use).

Mozilla is one of the backers behind letsencrypt. I'm sure they won't be deprecating anything until well after it is up and running. This was an email to the Mozilla community asking for feedback, not an announcement of any major deprecations in the next version of Firefox.

Letsencrypt is based on an open protocol [1], I'm sure there will be implementations for all major platforms.

[1]: https://letsencrypt.github.io/acme-spec/


> It will also pretty much get rid of privacy, since to get the cert you will need to provide someone with your detailed information.

If you're using a self-signed certificate, you don't need to provide information to anyone.

> Using self-signed certs will become harder.

Why?

I haven't given this much thought yet, but these don't sound like convincing arguments against the proposition.


Because all of the modern browsers already make it harder for users to use them. It will only get worse.


I'm having a very hard time picturing anyone having such a blatantly wrong and FUD-filled opinion as anything else than either a joke or a NSA shill.

Care to convince me otherwise?


Please do not accuse users of being 'shills' (for the NSA or otherwise); alert the mods (hn@ycombinator.com) instead if you are fairly sure about it.


Hear that NSA? where is my check for this week?

Edit: never mind. Mailman was late.. got it now!


The point is that HTTP to HTTPS is what telnet is to ssh. There are no use cases for HTTP where HTTPS cannot fill that need. With free certificates available now, and more coming, there is no reason to not use HTTPS because of cost. The only valid reason I can see to continue supporting HTTP in some form is for legacy devices (consumer routers, printers, etc.) that don't yet do HTTPS. My 5 year old printer does HTTPS, so we should be able to get rid of these devices soon.


Did you know that there are still valid use cases for devices with less than 512 KB of RAM that can be accessed over HTTP?


I did. I work with these, time permitting. For these devices we have a range of options, including having the central station support plain HTTP, or having a special legacy mode in the browser. These devices should not hold back HTTPS adoption on the web.

Edit: I bet there are 1000x more IE6 users than users of these devices. Shall we support IE6 until every single one of its users upgrades?


We're not talking about some JS framework but about an established internet protocol. If I want I can spend time and still support IE6 users, and no browser will prevent me from doing that. That's not the case with this proposal.


I've worked on this kind of device. You can rarely use them directly anyway; you need some kind of reverse proxy/caching in front of them, otherwise they will crash if too many people connect to them directly. Anyway, these kinds of devices are rarely connected to the outside world, so in the worst case, HTTP is still not a problem on an internal network.


I work with them every day, and the point of having an HTTP server on them is not to allow many people to connect, but to let one person or script do it once in a blue moon without needing to install any software. If HTTP is removed from browsers, this will simply not be possible anymore.


Look, you are clearly a minority (developer who works with HTTP-enabled limited RAM devices). Why should we hold back the progress of the web to cater to your very specific use case? Why should my grandmother be subjected to phishing attacks and injected ads because your situation would be slightly inconvenienced by dropping plain HTTP from popular browsers?

Also, you are a developer. I am sure we can figure out some way for developers to re-enable plain HTTP in some hidden settings. That way people like you and me can continue to do what we do, while the rest of the web users are enjoying the benefits of being more secure by default.


Why should my embedded system have to support unnecessary functionality? Just so your grandmother is safer from phishing attack?

I'm a developer, but my clients are not.


Nobody is asking your embedded systems to support unnecessary functionality. At worst, you are going to have to go to about:config once and change enable_plain_http from 0 to 1. Then for you life goes on as before. Is that too much to ask for much greater security for everyone involved?


Just like I can disable javascript? Oh wait, I can't! Does having FTP support in Firefox affect anyone's security? Getting rid of HTTP will, in the best case, make people think that they are more secure, that's all. Most of the malware that is active today propagates not because HTTP is insecure, but because we made the web an Advertisement Delivery System. Malware now attacks browsers through Flash, Javascript and Java. First fix this before you start working on getting rid of an established protocol.


Once again, telnet was an established protocol, yet we got rid of it. I am sure people presented passionate arguments in favor of keeping telnet forever, saying that there are other threats on the Internet.

Getting rid of HTTP will make people more secure. It will ensure that basically all sites people visit will be protected by a trusted cert.

Note that nobody is talking about installable malware here. We are talking about protecting the web. For example, if your connection to news.ycombinator.com is protected by HTTPS, this makes it that much more difficult for the NSA to spy on you, or for me to insert a goatse into the content while you are at work and I am sitting in the next office over from you.


> Once again, telnet was an established protocol, yet we got rid of it.

No we didn't. Telnet is still pervasive as a communications protocol, partly because of its simplicity and partly because of its momentum. This is particularly true in the realm of embedded devices; even rather-modern enterprise-grade HP printers (for example) open up a Telnet port for terminal-driven configuration by default.


The analogy doesn't seem close to me. The problem with telnet is that it was used for connections intended to be authenticated, but they were authenticated very poorly, by sending a cleartext login/password. That is easy to compromise, so it needed to be replaced with a better authentication method. If telnet were normally used without any authentication at all, e.g. only for guest-type access to BBSs, then the case to replace it would have been much weaker.

The analogous case might be if you're doing plaintext password authentication over HTTP. I agree that for that use-case you should switch to HTTPS.


I wholeheartedly disagree. Any content should be delivered over an encrypted channel, because any content site can be used to launch an attack on the user or site owner if the data in transit can be tampered with. Passwords and session keys are just a small subset (though admittedly a valuable one) of the stuff you may go after. For example, do you want the FBI putting you on a tracking list because you googled something they consider suspicious just out of curiosity? Do you want a rival of yours messing with how the resume you publish on your personal site is presented to Google or potential employers? Do you want to read man pages that tell you how to run commands over an unauthenticated connection? Do you want Pillsbury to double the amount of their ingredients you need for a chicken pot pie? There is almost no use case where content should remain unprotected.

Apple.com is a great example of this. It's served over plain HTTP, and links to http://store.apple.com. Can I intercept that link and send you to http[s]?://store.appl.com or http[s]://store.apple.con? I can then grab your credit card. Heck, I could process your order so you won't even know something went wrong. Or how would it look if suddenly people started getting a goatse instead of the latest video of Jony Ive tilting his head to the side?


Then your issue is with sites that host content on http, and the solution is to address the content hosts - not deprecate a protocol. Besides, if you can MITM http then you can probably do the same for DNS, so https doesn't really solve the attack vector you mention.

Finally, read this thread for use cases for http. They may not apply to you, but they are sufficient that I think we can keep http around for a while longer.


There are no use cases where HTTP is more capable than HTTPS. There are some hurdles in dropping HTTP, but HTTPS is a strict improvement for almost all cases. The only argument I can see for plain HTTP is legacy devices that no longer receive updates. For these, a UI similar to the self-signed cert warning will work.

My problem is that we still allow plain HTTP. That should never be the case, just like we should not use telnet for remote server access.

DNS hijacking is a problem. It is a problem at a different layer of the system. Just because that layer is vulnerable, does not mean that we should leave a gaping security hole at a different layer. I don't buy the argument that we must have perfect security or no security. Besides, there are ways to mitigate the DNS hijacking issue, such as the HSTS header.
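
For reference, HSTS is just a response header the site sends over HTTPS, after which the browser refuses to talk plain HTTP to it; you can check for it with curl (the output line shown here is illustrative):

    curl -sI https://example.com | grep -i strict-transport-security
    # Strict-Transport-Security: max-age=31536000; includeSubDomains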


Can you list some of the use cases where you can do something using HTTP that you can't using HTTPS? I had always assumed one was a superset of the other, feature-wise.

As long as HTTPS offers all features of HTTP, then the reason to deprecate HTTP is to prevent accidental use of it, and websites which don't know/care from leaving their users vulnerable.


Things you can do with HTTP that you can't do with HTTPS:

* Create a website for free (you have to pay for certificates to use HTTPS)

* Report sensor data from small embedded devices with extremely limited CPU

* Your JS can talk to APIs that don't yet support HTTPS (e.g. NextBus). If you serve your JS over HTTPS, your browser will complain if you try to access an HTTP-only API.

* Transfer large files at gigabit speeds on consumer-grade hardware


> Create a website for free (you have to pay for certificates to use HTTPS)

No you don't. You can get free certificates.

But even if that was true, it's still a misleading argument. You can only make a website for free if you are OK with your website not having a domain name and running it off your personal laptop with electricity paid for by your roommates/parents. For any actual website, there are already a multitude of real costs of which a certificate is just one more.

> Report sensor data from small embedded devices with extremely limited CPU

If your CPU is really so limited then why are you using HTTP at all? Use a custom binary protocol with or without encryption as appropriate.

> Your JS can talk to APIs that don't yet support HTTPS (e.g. NextBus).

Then those APIs suck - avoid them and/or ask the providers to get with the program.

> Transfer large files at gigabit speeds on consumer-grade hardware

AES-NI can decrypt at 3.5 cycles per byte. With modern consumer grade hardware you will not find symmetric streaming crypto to be a serious bottleneck.
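
Back-of-the-envelope, assuming a ~3 GHz core: 3.5 cycles/byte works out to roughly 850 MB/s, versus about 125 MB/s for a saturated gigabit link. You can sanity-check your own hardware with:

    # rough AES throughput benchmark via the EVP (AES-NI capable) code path
    openssl speed -evp aes-128-cbc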


> You can only make a website for free if you are OK with your website not having a domain name and running it off your personal laptop with electricity paid for by your roommates/parents.

Actually, if you are willing to do without your own second-level domain name and just have a third- or fourth-level domain name, there are plenty of services where you can have a free web site (or app) running over HTTP. E.g., any number of free (or free-within-limited-quota) static site hosting services, or even Google App Engine.

Of course, in the Google App Engine case, you also get HTTPS for free (within usage quota), as long as you are willing to have an <app>.appspot.com domain name, so the "create a website for free" isn't really a "with HTTP, but not HTTPS" thing.


Sure, although GAE is blocked in China. AWS works, and has a free tier that will get by for a year, but doesn't give you free certificates.

Also, for a lot of newbies, installing SSL certificates is a PITA.

0. You realize you need a SSL certificate. You're presented with a dizzying variety of options and already lost. Are you supposed to get Positive SSL, Negative SSL, Essential SSL, Comodo SSL, Start SSL, Wildcard SSL, EV SSL, Rapid SSL, Slow SSL, or EV SSL aux Mille Truffles et Champignons? Most newbies ask, "Why isn't there a simple [click here to get HTTPS certificate] button?"

1. You get your certificates by e-mail, but you still can't install them directly. Your webserver wants a .pem file, so you Google "How do I create a PEM file". The top 10 tutorials tell you to concatenate THREE files: your_domain_name.crt, DigiCertCA.crt, and TrustedRoot.crt in that order. What you received by e-mail was FOUR files: AddTrustExternalCARoot.crt, COMODORSAAddTrustCA.crt, COMODORSADomainValidationSecureServerCA.crt, and your_domain_name.crt. You're lost, no tutorial is helping with what to do with FOUR files instead of THREE, in what order to concatenate them, and StackOverflow bans your question. You're fed up, quit, and use HTTP. (Not me; I'm describing an actual case of observing someone else's frustration trying to set up HTTPS.)

The only way HTTPS will gain popularity is if we can get rid of the certificate-issuing economy and make it easy for newcomers. The majority of content creators unfortunately do not understand the basics of security nor can we expect them to have the patience to learn it.
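
For what it's worth, the usual answer for a four-file bundle like the Comodo one described above is leaf first, then intermediates in order, root last; worth verifying against your own chain, but roughly:

    cat your_domain_name.crt \
        COMODORSADomainValidationSecureServerCA.crt \
        COMODORSAAddTrustCA.crt \
        AddTrustExternalCARoot.crt > your_domain_name.pem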


@Hello71 That was funny :) But with all due respect,

    apt-get install apache2
or even opening up Ubuntu's GUI package manager and clicking Apache and then "Install" gets you a running webserver, zero questions asked. SSL certificates are a longshot from that. You're bombarded with questions throughout the process; I still can't memorize the command to generate a CSR. Also, from the perspective of achieving the objective of sharing content with the world, a website is a necessity while SSL is optional. In general, optional things that want to succeed need to be dead zero friction. Necessities like web servers can be hard and people will still get them because there's no alternative.

When Ubuntu can get you a working SSL web server, CSRs generated, certificates all auto-signed by authorities, set up and ready to go, zero questions asked, with

   apt-get install apache2
that will be the day HTTPS will outshine HTTP. Yes, I know cryptographers are tearing their hair out at the thought of "auto-signed", but the world would be a hell of a lot more secure than it is now, because people would at least use HTTPS, rather than now, when the process is just seriously too much for most people and they end up resorting to HTTP instead. The lesser of two evils.

Alternatively, browsers should not throw huge error messages about self-signed certificates. They should just do what SSH does instead: display the fingerprint, ask yes/no, store the fingerprint, and warn the user if the fingerprint changes in the future.
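
Since memorizing the CSR command came up above, for the record it's something like this (filenames are placeholders):

    # generate a new key and a certificate signing request in one go
    openssl req -new -newkey rsa:2048 -nodes \
        -keyout example.com.key -out example.com.csr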


That. I have managed to get through the steps but there was nothing simple about it and I wouldn't be able to replicate it without looking it all up again. Unfortunately, SSL isn't even an option on my el-cheapo shared webhosting service (as I understood it, it needs a dedicated precious IPv4 address).

Contrast this with Dan J. Bernstein's wonderful CurveCP. It's so simple to set up and requires no CA involvement; you just need to be able to add a NS server entry.


> as I understood it, it needs a dedicated precious IPv4 address

A good webserver would actually be able to provide multiple SSL certs on a single IP address by using "Server-Name Indication" (SNI). This is definitely (as far as I know) supported on nginx, and probably supported by Apache's httpd.
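
You can check which certificate a host serves for a particular name with openssl's s_client (shared-host.example is a placeholder):

    # ask a shared IP for a specific hostname and see which cert comes back
    openssl s_client -connect shared-host.example:443 -servername example.com \
        </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer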


0. You realize you need an HTTP server. You're presented with a dizzying variety of options and already lost. Are you supposed to get Apache, Navajo, NGINX, IIS, Jetty, lighttpd, heavytpd, httpd, or ApacheNavajoNGINXIISJettylighttpdheavytpdhttpd? Most newbies ask, "Why isn't there a simple [click here to get HTTP server] button?"

1. You get your server by wget, but you still can't install it directly. Your OS wants an executable file, so you Google "How do I create an executable file". The top 10 tutorials tell you to use some software called Eclipse, but that needs some kind of "JVM". What you received in the archive was some ".c" crap. You're lost, no tutorial is helping with what to do with .c files, in what order to concatenate them, and StackOverflow bans your question. You are not fit to operate a computer. (I made this story up too.)

The only way HTTP will gain popularity is if we can get rid of the different servers available and make it easy for newcomers. The majority of content creators unfortunately do not understand the basics of security nor can we expect them to have the patience to learn it.


It actually is (or at least was; it's going to be significantly castrated soon) possible to "create a website for free with HTTP, but not HTTPS" if you use Heroku. An HTTP-only, single-1x-dyno setup is free, whereas if you want HTTPS, they will try to bleed you dry.


> You can only make a website for free if you are OK with your website not having a domain name and running it off your personal laptop with electricity paid for by your roommates/parents.

Many online free hosting sites that give you a subdomain are HTTP only because wildcard HTTPS certs are expensive.

Many of those services would go away or turn paid-only in an HTTPS-only world.

The children of the future have already lost Geocities and dial-in BBSes that we grew up with; how will they manage if their Internet isn't exactly like ours? /s

> If your CPU is really so limited then why are you using HTTP at all? Use a custom binary protocol with or without encryption as appropriate.

I might not want to write the client for said custom binary protocol.

Like it or not, a lot of embedded devices (Arduino, Spark Core) provide libraries for serving JSON over unencrypted HTTP. Restricting yourself to devices that are more powerful (say, Raspberry Pi 2) would require you to use a bigger form factor for the hardware, and/or would cost more.


I highly doubt a company that's running OK today will get shut down by the roughly $30/month expense that a wildcard cert amounts to.


No, but prices will go up. Free tiers will be significantly reduced or outright eliminated. Cheap tiers won't be as cheap anymore. A lot of individuals and small businesses will now suddenly be faced with more costs, and while $30/month might seem small to you, that adds up considerably over several years.

Not everyone on the internet is a multi-millionaire YCombinator alumnus.


... Please. You're talking about the difference of a single paid plan or something. There's no viable product or company out there now where $30 a month would even make anyone blink.


Maybe if the entirety of your perspective comes from massive multi-million-dollar websites. For everyone else (particularly Mom-and-Pop small businesses, individuals trying to sell things online, etc.), that cost can very well be prohibitive, particularly when it dwarfs the costs of site hosting and even the domain name.

I'm talking about the little taqueria across the street from my house. They're perfectly viable, yet are going to be scared off from investing in a cert if it's going to be the single most expensive thing for their online presence. It's these sorts of folks that your argument blissfully ignores.


> AES-NI can decrypt at 3.5 cycles per byte. With modern consumer grade hardware you will not find symmetric streaming crypto to be a serious bottleneck.

The "old style" Raspberry Pi and, iirc, also the B2, don't carry crypto accelerators, and these are the first choice for people doing embedded linux prototyping.

Same for most other mobile chipsets.


They also don't have gigabit ports.


I wouldn't exactly count on this. Chinese "pi alternative boards" already include gigabit ethernet, Android apparently also includes support for wired LAN (but somehow the manufacturer of my tablet went broke before they could sell the adapter cable)...


From where can one get free certificates?

Not everyone has modern hardware. My hosting provider doesn't have AES-NI for example.


> From where can one get free certificates?

e.g. https://www.startssl.com/?app=1

Also, if HTTPS became a requirement, then demand would increase for free HTTPS certificates in exchange for, say, advertising.


>Also, if HTTPS became a requirement, then demand would increase for free HTTPS certificates in exchange for, say, advertising.

Any plan that puts more advertising on the web is going to get a big fat thumbs down from me.


If I remember correctly there were some issues with StartSSL, but I don't have the details right now.


It's a TOS violation to use their free certificates in a commercial project.


They charge for certificate revocation, which became a bit of an issue during the Heartbleed massacre.


It is only an issue if you actually care about security. If all you want is a checkmark or the ability to show up in the browser, then that is more than good enough.


Their customer service is appalling, but you get what you pay for.

The original point was that requiring HTTPS would mean free-to-host HTTP services would go out of business as it was impossible to provide free HTTPS certificates, and in that sense, StartSSL proves them wrong.


Not if those free-to-host HTTP services are commercial entities; StartSSL's free certificates can't be used commercially per their terms of service.

And if StartSSL is the only option, then I'm sticking to HTTP. They're a nightmare to deal with even for non-commercial use, let alone commercial.


> No you don't. You can get free certificates.

The only free certificate authority that actually exists right now that I'm aware of is StartSSL, which I've at least found to be pretty damn horrendous to work with, and which only supports non-commercial uses of certificates, meaning that small businesses are still forced to pay up.

And no, Let's Encrypt doesn't count, seeing as it's not operational yet.

> For any actual website, there are already a multitude of real costs of which a certificate is just one more.

That's one more that can mean the difference between a mom-and-pop shop having a website and said shop not having a website because it's too expensive.

Just because plenty of factories dump industrial waste into rivers doesn't mean that I'm not an asshole if I dump a bottle of OxiClean down a storm drain.


> If your CPU is really so limited then why are you using HTTP at all?

Because it does not require the user to install any other software in order to have access to the device (PLCs and other embedded controllers)?


> * Report sensor data from small embedded devices with extremely limited CPU

This proposal is specifically about web browsing.

> * Your JS can talk to APIs that don't yet support HTTPS (e.g. NextBus). If your serve your JS over HTTPS, your browser will complain if you try to access a HTTP-only API.

I believe that is the point of this proposal. NextBus will probably never switch to HTTPS without external pressure to do so.


Also: set up a web site on a private network that works without modifying every browser/OS that needs to access it (e.g. installing a custom root cert), and that doesn't depend on a third-party organization (e.g. a CA).

This will still be a problem even when you can get free domain-validated certs from Let's Encrypt. That's still an outside dependency for a private system.


>* Transfer large files at gigabit speeds on consumer-grade hardware

Can't modern hardware do AES at slightly over a gigabit per second per core even without the pervasive 10x accelerators?


Beware of the Turing tar-pit in which everything is possible but nothing of interest is easy.

In theory, HTTP is a subset of HTTPS. But that subset is much simpler for common tasks. HTTPS requires you to either subject yourself to the complexities of the CA system or be accused constantly of attacking yourself and everyone else who visits.


So why not require HTTPS, and let a thousand "make HTTPS simple" startups bloom? Seems like we're saying "we should just accept insecurity because we can't think of an alternative".


> So why not require HTTPS, and let a thousand "make HTTPS simple" startups bloom?

Much the same reason we don't all cut our hands off and let a thousand prosthetics startups bloom, I think.

> Seems like we're saying "we should just accept insecurity because we can't think of an alternative".

It seems more like we're saying "We should unnecessarily break the free Web and just cross our fingers that somebody figures out how to fix it sooner or later."


The analogy doesn't hold. There's absolutely no value in all chopping off our hands, but there _is_ value in requiring all traffic to go over HTTPS.

We can deprecate HTTPS for a decade without actually disabling it, and in the unlikely event that we _don't_ figure it out, we can undeprecate it some years later.


> there _is_ value in requiring all traffic to go over HTTPS.

I'm not so sure. It seems to me that there is almost no value in requiring all traffic to go over HTTPS compared to requiring a subset of traffic to go over HTTPS.

> We can deprecate HTTPS for a decade without actually disabling it, and in the unlikely event that we _don't_ figure it out, we can undeprecate it some years later.

If you want to deprecate it in name only, I guess I can't object since we wouldn't actually be doing anything. I'd figured this would involve trying to hinder HTTP use (similar to how we make self-signed certificates a royal PITA). At any rate, I think you're overestimating the likelihood that we'll figure it out in the short term — particularly given that the proposed happy scenario is to deprecate HTTP and then have thousands of additional for-profit actors trying to put themselves between people and the ability to have websites.


You can't do ephemeral hosting with HTTPS unless you use a self-signed cert, and then browsers go absolutely mad.


How is my router admin page supposed to have an always up to date SSL certificate?


It could be a ten-year self-signed certificate. You accept and remember it the first time you connect (preferably over a cable).
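
Something like this would do, assuming OpenSSL is available and the router answers at 192.168.1.1 (address and filenames purely illustrative):

    openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
        -subj "/CN=192.168.1.1" \
        -keyout router.key -out router.crt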


  telnet www.example.com 80
  GET / HTTP/1.0


  openssl s_client -connect www.example.com:443
  GET /


> Can you list some of the use cases where you can do something using HTTP that you can't using HTTPS?

Sure: host a web server anonymously. With HTTPS+PKI (and it's the PKI half of this that people are complaining about) I have to claim an identity.


Not everything needs to be encrypted. It's not that hard to grasp.


Are you sure?

Probably Baidu thought the same thing about their Javascript. It's a public file, so why use HTTPS? Then they got turned into the Great Cannon and now they're being described as a weapon of the PRC in major newspapers.

It's obvious that governments are systematically weaponising any non-SSLd connections. If you don't use SSL, you're making it easier for them to hack your users and turn your website into a weapon. If you wouldn't let botnets run wild on your servers, you should take the same care to encrypt your website.


> Are you sure?

Yes, I'm sure that the communication between my browser and the test app I'm running on localhost doesn't need to be encrypted.

> It's obvious that governments are systematically weaponising any non-SSLd connections.

Over the public internet, sure.

OTOH, if a government controls a root CA trusted by your target population, TLS doesn't provide any protection at all against them anyway.


Yes, it absolutely does. If they actually use that CA to MITM connections, it is usually detected pretty quickly, and browsers quickly revoke that CA's trust.

Even in a scenario where this doesn't happen, it elevates the required attack from a passive eavesdropping attack (which is comparatively simple to conduct en masse, and to analyze data retroactively) to an active attack (which must typically be done in a targeted, real-time fashion).


> If they actually use that CA to MITM connections, it is usually detected pretty quickly

Really? How?


Certificate pinning [0] is a common way.

[0]: http://en.wikipedia.org/wiki/Transport_Layer_Security#Certif...


Right. Because no one ever changes a certificate legitimately.


Some people run TACK. Chrome also, IIRC, reports back on when certs have changed and Google can clearly see if there's widespread disagreement on what key is being served to visitors.


How is that going to help? If you've been MITMd then the attacker can intercept your TACK and Google traffic too.


Chrome pins the Google certs, so MITM will only work if you get the user before they've first downloaded Chrome. And then you have to ensure you only ever MITM those clients, or your attack will be detected.

And TACK literally was designed to solve this problem. If the MITM interferes with you communicating to other TACK clients, you detect their attack. If they don't, you detect their attack.


People may need to run embedded servers on many different devices, and be able to access them. Forcing them onto an outdated browser version, simply because the latest one can't connect to a device that costs $100k and will not be replaced every year like your average iDevice, will not make anything better or safer.


The issue was authentication, not encryption. Though SSL of course does both.

Authentication can be done much more cheaply than encryption (well assuming you have the clout to force a change in standards). It could be implemented as simply as <script src="baidu" hash="a84b3">. In particular such a request could be transparently cached without security concerns.
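
(Much like the Subresource Integrity proposal that's been floating around. Computing the digest on the server side is the cheap part; it's roughly — script name made up for illustration:)

    openssl dgst -sha256 -binary baidu.js | openssl base64 -A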


It seems like this would still have one of the issues SSL has where it's easy to get wrong and then it's no better than not having it, except that people think they're safe. E.g. use a weak hash function and someone can write a malicious script that hashes to the same value.


Yes, indeed, and I actually suggested this on the Mozilla thread.


Yes it does. For the umpteenth time, an unencrypted HTTP session allows an attacker to inject arbitrary content into the request/response stream. They can use this to do anything, from asking you to log in and sending them the password, to stealing credit card numbers, to presenting fraudulent content, to giving false information, to tracking you, to hijacking links from the page you are trying to view, to just showing you ads where there should be none, to defrauding the site/domain owner (via serving bogus content instead of your own).


And a MITM'd captive portal can do the same over SSL against users who don't understand the gravity of whatever warning their browser shows. For HTTPS to be everywhere, there needs to be zero reliance on certifying authorities that your uncle and grandmother have never heard of, zero dialogs/UI indicators that they'll just learn to ignore anyway, and zero effort to maintain the server-side of it.

It's a much bigger challenge, and I find it wildly cavalier for anyone to say "just use HTTPS everywhere" without directly addressing the faults pointed out by others. And by addressing, I don't mean dismissing out of hand.

Hell, I'd settle for acknowledging instead of addressing some days. There are real world problems on both sides that need to be considered.


I am acknowledging that there are issues with the CA system, and elsewhere I have proposed plans for how to eliminate them from our trust chains (tl;dr: registrars issue you a CA cert that's only good for the domain you bought from them; then you are your own mini-CA).

But these are two separate issues. Going from plain HTTP and HTTPS to HTTPS-only is a step in the right direction. It's also step 1. Step 2 is to drop CAs and work out a better trust system that relies on fewer parties being involved.

Also, let's give people some credit. Yes, some people ignore the self-signed cert warning. Some people also respond to Nigerian prince emails. We aren't talking about cutting off email because someone might get hurt. Unless you are ready to drop all untrusted certs, those dialogs need to stay in place.


This. Maybe I want all the transparent caches on the Internet to cache my stupid cat picture?

How is this not going to destroy the cachability of the internet?

Being able to operate transparent caches is important for institutions (Edit: some of which may not have access to their users' local cert stores).

There are many things, like blogs, news articles and photo galleries, that gain zero value from being encrypted. Encrypting them is going to require more hardware, more energy usage and more bandwidth, and thus again more energy usage.


The use of HTTPS doesn't destroy cacheability, it just requires that one of the legitimate endpoints authorize the cache to be there.


Right, it forces the cache to come from the content provider rather than being provided downstream by the client's organization.


But for many users, their downstream organization is their ISP. From my point of view, any caching by them is malicious. I didn't send my request to the IP of their caching server, I sent it to the IP of the site I'm trying to reach. Redirecting my request to their cache is a MITM attack.

If my employer wants to use caching, they can install a cert for their proxy on my machine (or require me to do so), so it's not a problem - although it is more technically complex.


If it's inside the organization then they can put in a local cert and run a cache.

It only prevents caches that are in the middle and trusted by neither end. I'm okay with that in almost all cases.


What about dorms, hospitals (patient wifi), libraries, hotels, cafes and small ISPs? They do not have access to their clients' cert stores.


How much of the internet is actually cached? Especially when most of the traffic is video feeds that always get resent.


When I worked for an IT consulting firm I saw medium sized institutions (colleges, hospitals, hotels) save huge amounts of bandwidth by implementing transparent HTTP caches.

I'm totally for securing POSTs, HTML, etc. with SSL, but for images, CSS, etc. it really makes little sense.


I'm not saying it's a bad idea, but the thought that once HTTP is deprecated I'll need to go through extra effort to manage the myriad of devices in my LAN is bothering me. The configuration interfaces of these devices (routers, printers, IP cameras, etc.) mostly don't support HTTPS. Most never had updates available and certainly never will. We're not just talking about 5+ year old devices (which despite their age still function as designed): none of the most recent routers I bought support it, except if flashed with some powerful and well maintained alternative firmware.

I'm already annoyed enough by firmware update tools that only work with a specific old version of Internet Explorer... I'd rather not throw dozens of pieces of equipment away just because browsers started to insistently refuse HTTP connections. Let's not talk about the fact that I often spin up simple HTTP servers on my computer just to be able to transfer files to other machines on the same network, and I'd rather not have to worry about creating certificates (only to find out that the machines in question are "obsolete" too and don't support HTTPS?).

Can we just consider HTTP to be like telnet? Old, super insecure, definitely not meant to be used by everyone daily and definitely not over the Internet, perhaps not even available/installed by default, yet super compatible and simple to implement both for the client and the server.

I really hope that should mandatory HTTPS become a thing (and I'm not saying it shouldn't), an exception is added for local area connections.


This just makes me think that what we need is not an "improved HTTPS", but to rip out TCP/IP and encrypt everything at that level.

Now we only need to convince either Google or Microsoft to implement something like that in their OS and encourage/mandate the use of it. Only these 2 matter because they are the owners of the biggest platforms.

Yes Apple has a pretty big platform, too, but it's kind of irrelevant since it's a closed Apple-only ecosystem anyway so if it adopts something it doesn't mean the others will too. On the other hand, if either Android or Windows adopts something as major, I think the other one would, too.

What I'm thinking is something like MinimaLT or perhaps Trevor Perrin's "Noise" if it ever becomes real.

http://cr.yp.to/tcpip/minimalt-20131031.pdf


Share those thoughts on the mailing list.


I find myself with some cognitive dissonance. This makes perfect sense.

But selfishly: I've worked for many corporations that operate an HTTP proxy whose traffic they scrutinize, and they also proxy HTTPS, which they cannot scrutinize (without detection), so I feel comfortable that they are not doing so. (Yes, I periodically review the certificate store for changes.)

If the vast majority of sites were https, they might decide to do MITM for those https connections and either instruct everyone to ignore the warning or install their own certs on all of the computers. Indeed I think many corporations may already do this. I would probably not use many websites if they made that change.


Wouldn't the corporation install its own root certificate in that case? You can MITM without notification; presumably they could tack it on after workstation install.

If they're already going through the trouble of monitoring everyone's traffic, a few extra steps don't seem like that big of a hassle.

Of course, you're right, some large fraction won't bother with their own root cert, and their users will learn many bad habits.


> Wouldn't the corporation install it's own root certificate in that case?

Yeah, but I can audit the certificate store (and I have). On some systems, I am granted local administrator privilege. I wouldn't take out any certs that they install, but if I found that they installed and/or used one, I'd probably stop using most public websites (at least the ones that I have a username/password with).


This ship has long since sailed. SSL HTTPS interception for corporate entities is off-the-shelf and readily available.


I don't think there's anything inherently bad about trusted MITM. If the trust exists, then you can have a local caching proxy that can drastically reduce the load on servers, WAN links, etc, and is a major part of Roy Fielding's original description of the REST architecture. [1]

IMO, I think the majority of use cases that people care about with HTTPS are about integrity (i.e. authentication) rather than confidentiality. We don't need full-on encrypted requests for those use cases, we just need secure MACs[2] (significant performance difference) as part of the standard so that endpoints can verify it hasn't been tampered with, even if it's been cached somewhere along the way.

[1]https://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arc...

[2]http://en.wikipedia.org/wiki/Message_authentication_code


>IMO, I think the majority of use cases that people care about with HTTPS are about integrity (i.e. authentication) rather than confidentiality.

I disagree, and so does the IETF: Pervasive Monitoring is an Attack[0].

[0] https://tools.ietf.org/html/rfc7258


But uncle Joe and grandma Margaret don't care about confidentiality. If they did, they wouldn't crab about that shifty Snowden guy, and they would donate to the EFF. They just don't want their Facebook login stolen, or their banking login sold on some forum, or their family photo gallery erased. Confidentiality, for the unfortunate majority of Americans anyway, just isn't a selling point.


>They just don't want their Facebook login stolen, or their banking login sold on some forum, or their family photo gallery erased.

That is confidentiality, no? If login credentials are transmitted in the clear then anybody listening can impersonate them. That's exactly what FireSheep demonstrates.


Scrutinizing traffic like that is dumb anyway because if a hacker gets in he could get control of such a system and "see everything".

As a principle, the benefits of encrypting everything always beat the benefits of "monitoring everything" in an enterprise.


> cognitive dissonance

Off-topic, but why can't people say "I'm not sure" or "I can see both sides of this argument". It's just an overused geek cliche.


Because that's not what 'cognitive dissonance' means.


Right! And what wyldfire is experiencing is not cognitive dissonance. "X is a better policy, but hurts me in Y situation" is not self-contradicting.


All I ask is please don't block non-HTTPS sites from browsers. I don't want to have to set up a valid SSL cert for every development environment I work with.


I see no reason not to blindly trust localhost - there is no way to MITM the connection anyway.


Of course you can mitm localhost. The privileges required to do so just usually happen to come packaged with ones that obviate the need.


That's interesting. I've never tried so I don't know, but can you issue an SSL cert to CN "localhost" or does it need a fully qualified domain name?


I don't think any CA is going to give you a cert for localhost, but OpenSSL is happy to make a self-signed cert for random single-word CNs, and Chrome will trust them if you tell it to.


Couldn't you make your own root authority, install it locally, and sign a wildcard cert to cover your dev websites?

For example, if they're all *.local that wouldn't be easily abusable.
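
A rough sketch of what that involves with OpenSSL (all filenames and the *.local choice are just examples):

    # 1. create the root key and self-signed root cert
    openssl genrsa -out devroot.key 2048
    openssl req -x509 -new -key devroot.key -days 3650 \
        -subj "/CN=My Dev Root CA" -out devroot.crt
    # 2. create a key and CSR for the wildcard
    openssl genrsa -out wildcard.key 2048
    openssl req -new -key wildcard.key -subj "/CN=*.local" -out wildcard.csr
    # 3. sign the wildcard cert with the root
    openssl x509 -req -in wildcard.csr -CA devroot.crt -CAkey devroot.key \
        -CAcreateserial -days 3650 -out wildcard.crt
    # 4. import devroot.crt into every browser/OS that needs to trust it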


That's a heck of a lot more complicated than

    python3 -m http.server
and then opening http://[::1]:8000/ in your web browser.

But then again, carrier-grade NAT means that anyone who is tinkering like this has to do some sort of NAT-punching to show their work off to a friend, so maybe this could be solved by an automated wrapper ala localtunnel or ngrok.


    openssl s_server -accept 4443 -WWW -cert mycert.pem
Is not much more complicated.


Especially not when you pretend mycert.pem magically already exists, costs nothing yet works on all your subdomains, and that Firefox/Chrome trust it.


That's a lot of trouble to go through that I don't today. Everything in production is fronted by a load balancer that terminates SSL, so I never have to worry about SSL certs in the course of a normal day.


It's a bit of trouble because no project has made it really easy yet.

All-in-all, this is something that could easily be made into an app and automated.


No, it's a bit of trouble because security takes time to set up. To automate this, you would need a CA that is trusted by your (and the rest of your team's) browser that also creates arbitrary certs upon request. You can see why this is a bad idea.

Secure versions of this exist - and the security requirements they introduce are part of the reason enterprise IT is so much of a pain in the ass.


Honestly, I have no problem with plain HTTP support being available but only enabled via about:config. If you are a developer, sure go for it. Having said that, the suggestion of running your own CA for your dev environments takes less time to grok and set up than Python's VirtualEnvWrapper.


It's not running your own CA that's the problem; it's remembering how to integrate with it for every project you do. Especially considering almost nobody runs SSL on their web apps - you will almost always terminate SSL at a load balancer of some sort, so it's not something web developers normally need to worry about nor is it reflective of how apps run in a production environment.


Turns out, not everybody runs an AWS-style setup with a load balancer and backend servers. In some cases that's not what you want.

Regardless, we are talking about users' browsers dropping plain HTTP. These browsers will never hit your backend servers, so you need not worry about them. In your scenario, they'll always use HTTPS. You are worried about your one in a million case as a developer. That's fine, go into about:config and enable plain HTTP. Everyone else isn't an expert in security and shouldn't be allowed to shoot themselves in the foot by default.


I would be in favor of more of an alert-based implementation. i.e. if you go to a page that is HTTP, your address bar turns red with an "insecure" icon. A setting in about:config is ok, just a minor pain in the ass because I'll have to Google it any time I need to use it.


Good. I guess we are more or less on the same page. My only qualm about permitting plain HTTP and giving a passive alert is that it allows an attacker to run arbitrary JS on your machine before you notice that it was loaded over plain HTTP and decide to disable it. However, this may be a very good transition step.


Terminate SSL at a local 'load balancer' (nginx, haproxy, vulcand) - voila, your setup is now substantially more similar to production.


I think easing the setup of personal/enterprise CAs would be hugely useful to gaining adoption. Imagine a CA hosted by Google or the like that's tied to your personal or business account, where you can authenticate certs. Trusted if it's your own (or vouched for by a trusted second party).


Why does it have to be valid?


Chrome already blocks self signed certs out of the box.


... and then you click the "Advanced" link and click on "Proceed to x.x.x.x (unsafe)"


Not always - with some "unsafe" cipher combinations it will simply refuse to load the page, with no workaround. Though you usually only see this on dodgy IoT gadgets out of China/Korea.


I've encountered that a bunch of times. I understand needing to make it look dangerous and deterring people who don't know what they are doing from continuing, but it's silly of Chrome not to offer any way of proceeding for people who do know for certain that they want to.


Unlike FF, Chrome doesn't offer an option for a permanent exception once you click "Proceed to ..."


I recently set up a self-signed cert for a development site, and Chrome remembered it until I told it to forget it (so I could install a CA-signed cert).


It does, just not on the warning page like ff does. You can install your self-signed cert in the settings panel.


if you're on a mac, you can add the cert to your keychain and then all your dev sites that operate over *.dev will be fine.
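
Something like this, assuming your self-signed cert is in mycert.pem (path and filename illustrative; Safari and Chrome read the system keychain, while Firefox keeps its own store):

    sudo security add-trusted-cert -d -r trustRoot \
        -k /Library/Keychains/System.keychain mycert.pem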


> if you're on a mac

Thanks. I know about that, but I am not on a mac.


This gets rather tedious after a few dozen times.


Nobody has brought this up yet:

If we deprecate HTTP, then certificates can be used as tools of censorship by governments.

We require an alternative to the current HTTPS scheme first.


...and the governments will then be virtually the only ones who can still inspect traffic, since they can still get CAs to issue certificates for them. Moving everything to HTTPS dependent upon CAs only makes the whole Internet more centralised, which is horrible for privacy and anonymity.

They will all say it's "for your security", and arguably having this centralised root of trust does lower risk of attacks from random groups, but at the same time it's allowing them more power. To CAs and governments, a lot of things around encryption seem to be oriented in the direction of "if we don't have control over it, we disapprove." Self-signed certs are only one example of this; see all the other issues surrounding mobile device encryption. It's only good encryption to them if they are the ones in control of it.

Quite frankly, to me it seems random hacker groups have a lower chance of cooperating and tracking you in the same way that governments can.

I believe the relevant quote in this situation is again the classic "those who give up freedom for security deserve neither."


It's unfortunate that this is necessary. I like the concept of decentralized cache systems, but in the future a CDN will be the only option for low latency.


That's what bothers me. HTTPS Everywhere means MITM Everywhere. Terminating HTTPS connections at a CDN means the CDN is the man in the middle. You don't have to attack secure servers that handle important stuff any more; just get a backdoor into Cloudflare. This makes "authorized law enforcement access" easier than ever.

On top of that, HTTP2's "it all has to go through one hole" approach means that CDNs become almost mandatory for big sites. The Web is becoming a lot more centralized. It's more secure against random attackers, but much less secure from inside jobs at CDNs, authorized or not.

The Great Firewall of China people must love this.


HTTPS has never guaranteed that you were speaking directly to a given entity. In the case of a corporation, what does that really mean, anyhow? Your web request is being handled by the CEO him- or herself?

It only means you are speaking to a device or set of devices that the certificate-holding entity has authorized to speak for them. No certificate technology can prevent a company from delegating authority to another entity's devices.

It isn't that this isn't a problem... it is that there is no way in which certificates ever solved it, nor a way in which they can, and there's no choice. Whenever you're talking to X.com, you are almost certainly also talking to a third-party web stack, for instance, which means that trust has been delegated by the certificate holder to some other party's software. There's hardly a website around that doesn't have a whackload (technical term) of third parties already in the connection anyhow.

The certificate-holding entity is ultimately responsible for what they do with your trust. But the certificate can do nothing to constrain those actions. It's just a glorified number with some other glorified numbers attached to it.

It does seem to me, though, that HTTP2 should actually make it easier to do without a CDN in the end. Initial HTTP2 support will just be "HTTP1, but on HTTP2!", which really provides minimal advantages over HTTP1, but over time, as we see web frameworks start to take direct advantage of being able to push down resources preemptively, the advantages of CDNs for all but the largest sites start to fade. (Perhaps not "eliminated", but certainly lessened.)

(Incidentally, as people will presumably start releasing HTTP2 benchmarks soon, keep an eye on the details. Embedding HTTP1 inside HTTP2 is not the interesting performance question and will never show big gains... the correct question to investigate is what gains are to be had from fully using HTTP2 natively. Many SPDY benchmarks had the same problem... of course SPDY isn't faster if it's still essentially speaking HTTP1 to the target website.)


>That's what bothers me. HTTPS Everywhere means MITM Everywhere. Terminating HTTPS connections at a CDN means the CDN is the man in the middle.

CDNs are already Men In The Middle.


> On top of that, HTTP2's "it all has to go through one hole" approach means that CDNs become almost mandatory for big sites.

No, it just makes multiple requests to the same domain more efficient. Requests to external domains still work.


CDNs are already mandatory for big sites and have been for a decade or more. There's just no other way to scale and keep a handle on costs. If you don't use a CDN, you'll get murdered by transit costs (Netflix learned this the hard way before they built OpenConnect -- an in-house CDN).


The real victim here is that institutions can no longer operate transparent caches.

For example, I have seen an 800-bed hospital in a rural area get very far with a relatively weak (but the best it could get) Internet connection by having transparent caching.

Colleges with dorms, even in non-rural areas, save huge amounts of bandwidth this way too.


I'm confused. I don't see how HTTPS makes this problem worse or better.


With HTTP, your ISP can be a benign "man in the middle" and cache resources for customers. See http://en.wikipedia.org/wiki/Proxy_server#Transparent_proxy


Generally, I think this is a bad approach. I think that configurable blocking of HTTP, with user-controlled (or, for managed environments, policy-controlled) opt-in to allow HTTP for safe domains, makes some sense. But not deploying new features for HTTP and limiting existing features on HTTP does not; true, one could set up a root CA, deploy certificates and manage TLS for internal (including local-box) testing, etc., but it doesn't make sense for the browser to require it.

Configurable blocking of HTTP provides all the benefits of HTTP deprecation without the adverse side effects for the situations where HTTPS is an unnecessary headache, so it should be preferred.


The biggest downside I see is that it would no longer be possible to host a simple website, without also having to publicly expose the vastly increased attack surface that the OpenSSL code base brings to the table.


If this is a very long-term plan, I'm all up for it. In the short term, we're just not quite where we need to be in order to enable SSL even on less-important sites or sites that don't require any credentials.

From a hardware perspective, I'd say that by now the problem is actually solved. Hardware is powerful enough to handle SSL connections.

What's currently tricky is that too big a chunk of clients still doesn't support SNI, which really doesn't go well with the increasing scarcity of IP addresses.

Needing one IP address per unique domain name, aside from the administrative overhead (multi-homing still is somewhat inconvenient), will just not be feasible as the cost of IP addresses starts to skyrocket.

This will be fixed by either IPv6 growth (you wish) or the death of non-SNI systems, but it'll be years if not decades before we can ignore XP and Android 2.3, especially as there's no good fallback path for these clients as the SSL negotiation (and subsequent hostname validation failure) happens way before the host could react.


"too big a chunk of clients still doesn't support SNI"

I'm not sure that's true today, what's your list?

XP is only a problem when using IE. Android 2.2 and 2.3 are well under 10% last numbers I saw. They are very close to where we can make a greater good argument.


Windows XP, Android 2.3 and many scripts running on various old but still supported Linux Distros (RHEL 6 for example) and, of course, the Bing Bot.


> Needing one IP address per unique domain name

I'm tired of this bullshit. This has never been true, not even 10 years ago. You only need one IP:port combination per unique domain name. You can host tens of thousands of HTTPS websites on a single IP even if none of your users support SNI, and you don't have to pay for a SAN certificate, either.

You can serve HTTPS on port 31276 as long as you redirect properly from plain HTTP. You could even redirect conditionally, i.e. modern browsers and search engines are redirected to port 443 while the remainder are told to try port 31276.

Is this ugly? Yes. Do people care? No. A client of mine who uses shared hosting is perfectly happy with HTTPS on port 44527. Most people don't even look at the URL. Who knows, they might even think that the secret number makes their website more secure. (Of course it doesn't, but their misconception doesn't make their website any less secure, so I don't care.)

Older browsers not supporting secure ciphers/protocols is a bigger problem, but you can also get around this to some extent by offering better ciphers/protocols on port 443 and lesser ciphers/protocols on an alternate port.
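
Concretely it's nothing more exotic than an ordinary redirect; checking it from the command line might look like this (domain, port and the response shown in the comment are purely illustrative):

    curl -sI http://example.com/ | grep -i '^Location'
    # Location: https://example.com:31276/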


Yeah. Sure. Non-standard ports will work really well in combination with enterprise firewalls.

I would furthermore wager a guess that the intersection between XP users and users behind firewalls is quite big, actually.


Yeah. Sure. If your service absolutely needs to be accessible to "enterprise" users, you can probably afford a dedicated IPv4 address.


Doesn't HTTPS require a dedicated IP address? With the exhaustion of the IPv4 address space, the timing couldn't be worse.


For a bit more context, the technology that allows the client to communicate to the server which hostname it is attempting to connect to is called Server Name Indication.

https://en.wikipedia.org/wiki/Server_Name_Indication


No, they fixed that. The hostname is sent before the cert is chosen.


No it isn't fixed for all devices. Windows XP is one example and even though it is no longer supported by Microsoft, at least 17% of devices on the internet are still using it.


That's not a problem with XP, that's a problem with people that use obsolete versions of IE.

Note that the subject here is talking about what to do in future browser versions, so IE8 never comes into the picture.


Meaning the hostname is also sent in cleartext (which may be problematic for some use cases).

This is the wrong level to be talking about this anyways. IPSec is the "right" thing to do, but that ship sailed I guess.


Problematic in what cases? You could always get hostname from the IP before.


You can get a PTR from an IP address, but that's not the same as "the hostname the client requested". If virtuous_activities.com and shameful_fetishes.com both resolve to the same IP address (assuming some application protocol like HTTP that can distinguish by hostname) I could certainly imagine a situation where a client would want to keep the particular hostname requested secret.

(Obviously the attacker in this case would probably also be able to sniff the requests from the resolver, but still; I'm not making this complaint up or anything, a lot of people have mentioned it before.)


No, that's not what I'm saying.

My point is that hostname has always been leaked with HTTP and HTTPS. SNI does not leak any new information.


When is hostname sent plaintext in non-SNI HTTPS? (The resolver, I suppose, but that is a separate issue.)


The certificate is sent before encryption is established.

But that's a red herring. Even if it was all kept encrypted, even if you ignored DNS and reverse DNS, you could connect to the IP yourself.

Yeah, technically there might be more than one hostname, but they're all related hostnames.


but they're all related hostnames

Huh? I used to have ~100 hosting clients per IP address, none of whom were in any way related to each other (other than in having chosen me as a hosting provider).


That's not the common case, though, and is completely awful to use as an anonymity measure.


Actually I think it's quite common; it applies to any site not busy enough to justify a dedicated server. By the long tail principle, that will be the majority of sites on the internet.


Oh, I should be clear, I'm specifically talking about sites sharing a certificate. I know a lot of sites use shared hosting, but it's awkward to get a certificate for a pile of unconnected sites. Most of them will either not support HTTPS or require paying a couple dollars for an IP. (Or, these days, try to rely on SNI.)


I agree with some of the skeptical comments here, and wanted to add one more use case they are not thinking about: http://localhost:NNN/

There are many cases where you might want a service on localhost to be reachable via HTTP. Technically this should be exempt from https rules entirely, since who would the man in the middle be? If someone can MITM 127.0.0.1, they own your box already.

It is impossible to get https certificates for localhost/127.0.0.1, making SSL with an approved cert impossible for local services.


I am wondering if this will go anywhere. It seems to me that if anyone would be willing to do this it would actually be the Chrome team, not Mozilla. Mozilla currently lacks the market share to pull this off, and lately their moves have been more targeted in a different direction.

Having said that, I am really happy others are thinking along the same lines as I am: HTTP should be relegated to a legacy protocol, and the warnings need to be very similar to what you get when accessing a site secured by a self-signed cert.


The recent disabling of TLS1.0 on so many sites means I'm finally going to have to upgrade my cell-phone, as there aren't any TLS1.1+ supported browsers for PalmOS.


What sites are disabling TLS 1.0? According to the SSL Pulse report[1] it's the most-supported version of SSL/TLS by far (99.7% support). TLS 1.1 and 1.2 are disabled by default in older versions of Internet Explorer so not supporting 1.0 would lose those viewers which is enough reason to keep it enabled for almost all HTTPS sites.

[1] https://www.trustworthyinternet.org/ssl-pulse/


Sorry, my mistake, it's SSL 3; I can't even load the Qualys SSL client test.


What about devices I access via IP address which have no DNS. Like my home router, or my printer, or my iPhone when I'm connecting to an admin web interface, or my NAS...?

Obviously I'm not going to set up a home domain and DNS, and install SSL certs on all these devices, just so they can use the latest HTTP/2.x features in their admin panels or user interfaces?


What the hell. Someone please stop this madness.


I know Google wants to do something similar with Chrome.

I have so many comments - but the first being: don't they realize that they will have to make this an option, which will then be disabled by most enterprises?


In order for this to really take off a campaign with the popular basic plug and play hosting providers to install free/cheap SSL would be needed.


Do child protection software filters work with HTTPS?


If they're installed on the computer, then they can easily use a self-signed cert to intercept (like Superfish, but hopefully generating a new one for each device).

But they generally just block domains, or use extensions, neither of which care about http/s.

A quick search shows that https://www.netnanny.com/products/netnanny/ says it works against SSL.


My question is more in the line of - wifi provided by school that blocks/filters content. But thanks for the other answers too!

(I'm concerned if a kid browses with iPad, some telephone, etc., and the concern is more on whoever provides the service that might be liable in some way).


They could block whole domains, which https doesn't change.

The only time it makes a difference is if you want to block part of a domain and not other parts. I can think of some use cases (block Google unsafe search, but not safesearch), but generally they just block the whole domain and have a separate domain for the "good" part. (See http://www.safesearchkids.com/, for example.)

And it's not like this isn't the same situation now. Whatever would be possible then would also be possible now, just by using https. This is just making http harder to use.


Schools do strange things all the time; they'll want to allow YouTube and block Google Drive and Dropbox (because, heaven forbid a student download an arbitrary .exe onto the school's Windows computers and run it, but for some reason opening notepad to create a .bat file locally is perfectly OK).


If we're talking about school computers, they can put whatever self-signed certs they want on them.

Besides, what you described is still domain specific. If you want to block only certain Youtube videos, that's where you'll have a problem.


Only domain/IP-level blocking would work for public/free WiFi-style connections. The filtering wouldn't be able to inspect the contents of the pages that are transferred without performing MITM decryption/encryption, and to do that you need a certificate installed on the user's device.


Child protection filters would work much better as browser extensions.


Yep. They can even use something like Google Chrome's malware protection scheme.

http://stackoverflow.com/questions/18447874/google-chrome-us...


Why?

I'm not familiar with how those typically work, but intuitively from an architecture standpoint, that sounds like something that should sit as a filter between the browser and the outside world. Browser as monolith of functionality seems undesirable.


Trusted filters (such as ad blockers, parental censorware, network monitoring) must be inside the trust boundary to be effective. That is the best way to ensure they are not imposed en masse against people's will.

In-path filters outside the trust boundary are, I'm afraid, the very first casualty in our efforts to mitigate the threats of nation state adversaries, as they resemble the attacks used there too much to survive. I, for one, will not mourn them.


Well, because in this HTTPS world, the web browser is the only entity that will be able to view the content being given to the user.


Reason being that with SSL you can't read the contents of the Request/Response (including page content) without MITM, which would require you to install a certificate on the user's device. That may be an option in some situations.

With a browser extension (which you would also need to get installed on the device in some manner) you can inspect the pages as they are displayed.


In my opinion:

* A big warning sign when the site is not using TLS, but not a pop-up

* No warning when the site is using TLS

That would be a correct way to force websites to use TLS.


First .google domains, now this? We should stop using this stupid thing named DNS.

https://gnunet.org/gns



