
First, congrats, this is great news! There are a lot of use cases out there that require a wildcard cert or work far better with one.

> It is our intent to transition all clients and subscribers to ACMEv2, though we have not set an end-of-life date for our ACMEv1 API yet.

Please don't do this. It will break millions of sites needlessly. Most installations of Let's Encrypt plugins aren't going to auto-update to v2. A lot of us are also using custom v1 code for various reasons that may not be easy to change.

The preferable end-of-life date for ACMEv1 (barring any existential security issues) should be never. Otherwise you will be executing a Geocities-sized web meltdown every time you phase out a version of the API.




The reason we haven't announced an EOL for ACMEv1 is that we won't announce one until we are confident we won't cause the kind of meltdown you describe.


You could block new domains (new to Lets Encrypt) from using v1.


This will break many tools which currently rely on LE. E.g. mailinabox, which uses LE to set itself up.


I doubt they would just break it. I imagine that if they do this, it will be announced sufficiently far in advance (probably around two or three years) to allow people to update their ACME clients. Then they can just keep ACMEv1 running for existing domains until no one is asking for more (and scale down the infrastructure).


The problem is that LE is being used as plumbing. I noticed MIAB was using LE because I recognised that SSL-out-of-the-box is something interesting, and I investigated. But I wager most people who use it have no idea. They just install it, and "it works", as it should. Great. What's HTTPS? That's the entire point of tools like MIAB, mind you:

> Technically, Mail-in-a-Box turns a fresh cloud computer into a working mail server. But you don’t need to be a technology expert to set it up.

https://mailinabox.email

I'm just choosing MIAB as an example here. This applies to anything that LE now enables. People don't know they're using LE, much like IoT users don't know they're using HTTP/1.1. It's part of the plumbing. What's an ACME client? What's LE? What's v1?

This is probably happening for IoT devices across the globe just the same. A two-year expiration date is an order of magnitude too short for plumbing. Imagine if we suddenly decided to phase out HTTP/1.1 within two years.

We have to recognise that we are shoving HTTPS down people's throats. Pretty soon, HTTP will get big f-off warnings. OK: fair enough. However, if we're doing that, we should also provide a viable alternative, with the same reliability. Otherwise, HTTPS is a massive step backwards for the decentralised web. LE is that alternative, but not if we start breaking backwards compatibility every 2 years.


Again, I'm not saying that the two year expiration date means "v1 stops working".

Rather, "after this point, no new domains may setup via v1", so any existing certificates and installations are grandfathered. Two years is sufficient for MIAB to update their software and distribute to users.

>LE is that alternative, but not if we start breaking backwards compatibility every 2 years.

Not what I'm saying either. They have a v2 now, we don't know if they need a v3. And they want to keep v1 running for a while.

But there will be a point where v1 will need to be switched off, similar to how modern browsers have switched off SSLv3 despite a lot of people still having servers running it.

LE will, at some point, have to decide between keeping v1 running or moving away from old protocols to be able to evolve. And that decision cannot be pushed back indefinitely.


That's good to know, thank you for the info.


You might simplify things for yourself to some extent by requiring ACMEv2 for wildcard requests, which will reduce the number of people deploying the old client and spur many to upgrade.

And your old client still works on the systems it's deployed on (by definition) so you could just stop development on that.


Wildcards are only available via ACMEv2. The post linked to here says that.


d'oh, I overlooked that (and I still had the post open in another window!). Thanks.

It's bad enough that some people comment without reading -- I apparently commented without paying attention.


If the EoL is far enough after the release of v2, then I think it is preferable that people start getting security warnings for sites that stop working: it is an indication that the sites are no longer maintained and so are potentially not receiving security updates for other matters.

Obviously a decent length of grace period would be the correct way of deprecating the older version, to give people time to update their infrastructure accordingly. I would suggest at least a full year (giving at least four renewal cycles to test changes in a QA environment before being forced to update production), probably more. Perhaps, if possible, a year for new certificates and two years for renewals?


Since the certificates need to be renewed every three months, they have exact numbers on how many people use ACMEv1. They also have, as a natural part of the process, the domain names of those users. This should allow them to watch as the number of v1 users slowly drops until there are so few left that they can try to contact any remaining users before deciding on an end-of-life date for that version.


You are supposed to provide a valid email address when you register for a Let's Encrypt certificate. In theory they should be able to contact all v1 client users.


The email address is optional, though most clients tend to hide that fact (or make it mandatory).


When you run the Let's Encrypt official client (certbot), it updates itself.


Only if you use certbot-auto, not if you use an OS package.

(I'm on the Certbot team.)

Also, some people dislike this feature quite a bit, and there are about 100 different clients.

https://letsencrypt.org/docs/client-options/


And some people take clients like acme.sh and modify them. I do that myself.

There's enough entrenched inertia against HTTPS without giving people more ammunition regarding the actual amount of work involved. Unless there's a security reason to eliminate the v1 endpoint, please don't.


They already disabled TLS-SNI-01 for new certificates because of security issues [1].

This was a major breaking change, without any advance notice, but nothing melted down.

I'm sure the other validation endpoints are used a lot more, but the effect shouldn't be any different, especially if they give a deprecation notice of a year or two.

[1]: https://community.letsencrypt.org/t/important-what-you-need-...


While there was no world-destroying core meltdown, it was still super annoying to deal with. Lots of code needed to be touched. I'd really like to see a fixed TLS-SNI challenge come back, as running a port 80 HTTP server just for LE sucks somewhat.

DNS challenges exist and are useful but have more extensive infrastructure requirements. Nothing beats the ease of use of "just put the box up and it'll retrieve its cert as needed".


That depends only on how different v2 is from v1.


> The preferable end-of-life date for ACMEv1 should be never.

As would be the preferable end-of-life date for SSLv3 and HTTP.


The SSL zealotry drives me nuts. The infosec community screams constantly about "HTTPS everywhere", but they either don't know or don't care about all the effort and pain they're creating for developers who just want their software to work. How many perfectly good sites will be marked ominously as "insecure" by Chrome in the next few months? Sites that were working just fine until someone at Big G decided they weren't.

(Related, a big thanks to Google for un-trusting that whole big Symantec security chain. Yeah, I realize they weren't competent, but I also realize that it had no practical effect on my site's security, as I don't have nation states or motivated hackers in my threat model.)

Security measures should be weighed like everything else - as cost/benefit. In many cases the cost of the security is not worth it.

Edit: I'd just like to point out the irony in some of the replies to this comment. I'm complaining about zealotry, and the vast majority of nasty replies I've received to this comment are using language that only zealots and ideologues would use. My god, you'd think I'm killing puppies based on some of these responses. Nope, just advocating for using HTTPS where it makes sense, and not having it forced down your throat.


> developers who just want their software to work.

Those devs are gonna be really surprised when they find out that unencrypted connections are routinely tampered with.

> they either don't know or don't care about all the effort and pain they're creating

You have not been paying attention to the hundreds of tools available to make HTTPS painless.

> until someone at Big G decided they weren't.

And Mozilla. And countless research papers. And real-world attacks that are reported over and over again. The fact is that the global Web has become hostile, regardless of your prejudice against Google's Web security teams.

> In many cases the cost of the security is not worth it.

The problem is that it's not YOUR security, it's other people's. If websites don't implement HTTPS, it's the users of the Web who pay the price. It's their privacy being deprived. And the website becomes easy to impersonate and manipulate, increasing the liability of having a website. HTTP is bad news all around.


What about hosting HTTP content because you verify GPG signatures upon download? This content would then be super easy to cache on the local network. HTTPS defeats this and makes it uncacheable.

I hardly ever see people talk about this use case and how to solve it with HTTPS everywhere. AND it's super widely used: e.g. Debian repositories.


HTTPS doesn't make it uncacheable - you can still mirror an HTTPS repository with another HTTPS repository (with its own domain name and certificate), and preserve the PGP signatures inside the repository. apt works fine with exactly this model: you use HTTPS for transport-layer protection and GPG for the existing things Debian's security model was already good at. The Debian repository is behind HTTPS at https://deb.debian.org - in existing Debian releases you may need to install apt-transport-https, and then just set your sources.list to

    deb https://deb.debian.org/debian stable main
HTTPS cannot be used as a replacement for PGP in this scenario, but that's the wrong way to see HTTPS. It doesn't provide purpose-built security for people who have custom threat models and need to build security infrastructure anyway (e.g., Debian verifies PGP signatures on sets of packages uploaded by developers, and then builds those packages and puts them into signed archives). HTTPS is baseline security - it's the security that every web connection should just have. It's not surprising that some specific use case like Debian repositories needs more-than-baseline security.

And because HTTPS is nothing more than baseline security, it's possible to automate it with things like Let's Encrypt and not add any more checking beyond current control of DNS or HTTP traffic to the domain.

(Another confusion along these lines is assuming HTTPS is useful as an assertion that a site isn't malware. It asserts no such thing, only that the site is who it claims to be and network attackers are not present. If I am the person who registered paypal-online-secure-totes-legit.com, I should be able to get a cert for it, because HTTPS attests to nothing else.)


I'm not talking about a mirror, which has a different domain name. I'm talking about a transparent cache like squid. This will mean I don't have to change the OS images that I might not even control in order to get traffic savings, whereas under your model I would have to, which again, may not even be feasible.


Clients for this aren’t web browsers. Making browsers warn about them doesn’t break anything.


There are a variety of attacks against GPG-signed repositories; an article [1] by Joe Damato explains them, and all of them can be trivially mitigated by serving the repositories over TLS.

[1]: https://blog.packagecloud.io/eng/2018/02/21/attacks-against-...



I'm actually surprised debian repos are still HTTP.

Don't get me wrong: GPG signatures with a pinned public key are a lot better than trusting the TLS of a random mirror.

But isn't it nice to have two layers? The two key systems are independent and orthogonal; that seems like a solid win.

Need I remind you of Heartbleed (OpenSSL) or the very Debian-specific GPG key derivation bug years ago?

There will always be bugs, we can only hope they aren't exposed concurrently :)


That, and the way gpg is used for apt provides no confidentiality at all, just authenticity & integrity. Someone who can see the traffic will still know which packages you've downloaded.


Indeed. The same is also true for repositories served via SSL.

The majority of HTTPS traffic is sniffable and largely non-confidential, unless you pad every file and web request to several gigabytes in size.

Does your website use gzip? Good, now padding won't help you either, unless it is way bigger than the original content. Oh, and make sure that you defend against timing attacks as well! Passive sniffers totally won't identify a specific webpage based on its generation time, will they?!

As for authenticity… Surely you are going to use certificate pinning (which has already been removed from Google Chrome for political reasons). And personally sue the certificate issuer when Certificate Transparency logs reveal that one of Let's Encrypt's employees sold a bunch of private keys to third parties. Of course, that won't protect authenticity, but at least you will avenge it, right?

SSL-protected HTTP is just barely ahead of unencrypted HTTP in terms of transport-level security. But it is being sold as a silver bullet, and people like you are the ones to blame.


TLS is getting better and there is a LOT of momentum to this.

I bet the SNI issues will eventually be fixed too.

And yes, with momentum behind Certificate Transparency, it could definitely hold CAs' feet to the fire :)

TLS is no silver bullet, but it's a good base layer to always add.


Having two independent systems while destroying the traffic savings from a transparent caching system seems like a bad trade-off to me.

Consider that you're a cloud provider running customer images. If everyone downloaded the same package via HTTPS over and over again, the incurred network utilization would be massive (for both you and the Debian repository in general) compared to everyone using HTTP and verifying via GPG, all served from the transparent Squid cache you set up on the local network.


I fear the trust issues with generic HTTP caching make it infeasible.

It would probably be better to use a distributed system design for this... BitTorrent, or who knows, maybe IPFS...


> What about hosting HTTP content because you verify GPG signatures upon download

If you're doing this, then you've made your own HTTP client so you can do whatever you want.

"HTTPS Everywhere" is a web browser thing.


So it's just bad naming. "Everywhere" to me implies everywhere, not just everywhere in the browser. Regardless, it looks like there are still people confused about it like me in this thread, though.


> What about hosting HTTP content because you verify GPG signatures upon download?

Because the rest of the content is not verified?????? That's the whole point of HTTPS????????


I didn't downvote this and this is a valid misunderstanding.

The whole point of having GPG is that you (as the distributor/debian repo/whatever) have already somehow distributed the public key to your clients (customers/debian installations/whatever). Having HTTPS is redundant as it is presumed that initial key distribution was done securely.


Those whom you trust with your internet browsing are usually also those whom you trust with HTTPS certificates. E.g. your browser, your operating system, your ISP, et al. are still able to spy on you unless the site uses certificate pinning, which is unfortunately not feasible with Let's Encrypt due to certs only lasting three months.


"It's their privacy being deprived."

I wonder if anyone will be surprised when they learn how HTTPS and HTTP/2 will be used to push more advertising to users and exfiltrate more user data from them than HTTP would ever allow.

Will these "advances" benefit users more than they benefit the companies serving ads, collecting user data and "overseeing the www" generally? Is there a trade-off?

To users, will protecting traffic from manipulation be viewed as a step forward if as a result they only see an increase in ads and data collection?

Even more, perhaps they will have limited ability to "see" the increase in data collection if they have effectively no control over the encryption process. (e.g., too complex, inability to monitor the data being sent, etc.)


> I wonder if anyone will be surprised when they learn how HTTPS and HTTP/2 will be used to push more advertising to users and exfiltrate more user data from them than HTTP would ever allow.

We're talking only about HTTPS. Adding HTTP/2 just muddies the conversation.

Care to give any argument as to how adding a TLS layer over the exact same protocol (HTTP/1.1) will be used to do that?


/I wonder/s/HTTPS/TLS/


> Those devs are gonna be really surprised when they find out that unencrypted connections are routinely tampered with.

Except most big orgs now employ MitM tools like BlueCoat to sniff SSL connections too.

> You have not been paying attention to the hundreds of tools available to make HTTPS painless.

I have, and they don't. They make it easier, but you know what's truly painless? Hosting an HTML file over HTTP. What happens when Let's Encrypt is down for an extended period? What happens when someone compromises them?

> And real-world attacks that are reported over and over again.

Care to link to a few?

> The problem is that it's not YOUR security, it's other people's.

Oh, so you know better than me what kind of content is on my site? So a static site with my resume needs SSL then to protect the other users?


> Care to link a few?

From Friday, in which Turkey takes advantage of HTTP downloads to install spyware on YPG computers: https://citizenlab.ca/2018/03/bad-traffic-sandvines-packetlo...

From a couple months ago, where Comcast injects JavaScript into HTTP connections: http://forums.xfinity.com/t5/Customer-Service/Are-you-aware/...


> Oh, so you know better than me what kind of content is on my site? So a static site with my resume needs SSL then to protect the other users?

Without TLS, how do YOU know that the user is receiving your static resume? Any MitM can tamper with the connection and replace your content with something malicious. With properly configured TLS, that's simply not possible (with the exception you describe in corporate settings, where BlueCoat's cert has to be added to the machine's trust store in order for that sniffing to be possible). Hopefully in the future even that won't be possible.


And web servers can even detect TLS MITM: https://caddyserver.com/docs/mitm-detection


> Oh, so you know better than me what kind of content is on my site?

The content of your site is irrelevant. We do know that your lack of concern for your users' safety is a problem, though.

I also wish that managing certs was better, but until then, passing negative externalities to your users is pretty sleazy.


> Oh, so you know better than me what kind of content is on my site? So a static site with my resume needs SSL then to protect the other users?

Absolutely yes. Without that layer of security, anyone looking at your resume could either be served something that's not your resume (to your professional detriment) or more likely, the malware-of-the-week. (Also to your professional detriment).

Do you care for the general safety of web users? Secure your shit. If not for them, for your own career.


So I've heard this argument countless times, and it completely makes sense from a theoretical perspective. Yes, it's very possible for MitM to happen, and that would cause one of the two scenarios you described.

But how likely is it to actually happen? For the former, someone would need to target both you and specifically the person who you think will view your resume, and that's, let's be honest, completely unlikely for most people. The second case I can see happening more in theory as it's less discriminating, but does it actually happen often enough in real life to the point where it's a real concern?

FWIW, I have HTTPS on all my websites (because, as everyone mentioned already, it's dead simple to add) including personal and internal, but I still question the probability of an attack on any of them actually happening.


I have been MitM'd by my ISP, Comcast, multiple times. Their injection only works on HTTP without TLS.


Sure, I've heard of the Xfinity MitMs which IIRC tracked users in some way. But would that realistically cause any "professional detriment" as expressed by the parent comment? Most users wouldn't even notice it's happening.

Basically, I see it this way:

- You can be MitMed broadly, like the Xfinity case, but the company in question can't really do anything crazy like inject viruses or do something that would cause the user to actually notice because then their ass is going to be on the line when it's exposed that Comcast installed viruses on millions of computers or stole everyone's data.

- Or you can be MitMed specifically, which will cause professional detriment, but would require someone to specifically target you and your users. And I don't see this as that likely for the average Joe.

Really, what I would like to know is: How realistic is it that I, as a site owner, will be adversely affected by the MitM that could theoretically happen to my users on HTTP?


As less and less content is served over HTTP, it becomes more and more realistic for an attacker to simply inject their garbage into every unencrypted connection that has a browser user agent in it.

Consider the websites you view every day... most of them are probably HTTPS by now.

It's the wild west, basically. Regardless of how likely it is that someone is waiting for you to hit a HTTP site right now so they can screw with it, why even take that risk when the alternative is so easy?


> As less and less content is served over HTTP, it becomes more and more realistic for an attacker to simply inject their garbage into every unencrypted connection that has a browser user agent in it.

I've already covered the general case above. Anyone in a position to intercept HTTP communications like that (into every unencrypted connection) is in a position where if they intercept and do enough to materially harm me or my users through their act, then they will likely be discovered and the world will turn against them. They have far more to lose than to gain by doing something actively malicious that can be perceived by the user. So I don't realistically see it happening.

> Regardless of how likely it is that someone is waiting for you to hit a HTTP site right now so they can screw with it, why even take that risk when the alternative is so easy?

I already said I use HTTPS, so your advice isn't really warranted. I also specifically asked how likely it is, so you can't just "regardless" it away. I get that there's a theoretical risk, and I've already addressed it. But as a thought experiment, it is helpful to know how realistic the threat actually is. So far, I haven't really been convinced it actually is anything other than a theoretical attack vector.


You are making it sound like "injecting random garbage into HTTP" is some new hotness. It has been done since forever. By the way, email still works that way. But Google and a couple of other corporations would not like you to trample their email-harvesting business, so there is disproportionately less FUD and fear-mongering being spread around email connections.

Internet providers have been injecting ads into websites for years. Hackers and governments have been doing the same to executables and other forms of unprotected payload.

Hashes, cryptographic signatures, executable signing, Content-Security-Policy, Subresource Integrity: numerous specifications have been created to address the integrity of the web. There is no indication that those specifications failed (and in fact, they remain useful even after the widespread adoption of HTTPS).

For the most part, the integrity of modern web communication is already protected even in the absence of SSL. The only missing piece is somehow verifying the integrity of the initial HTML page.


A lot of ISPs, some of them huge, like the "XfinityWifi" SSID, routinely inject their own JavaScript into HTTP pages. Some don't even take care to namespace their JavaScript and wreak havoc on your window globals, too.


This could be solved without HTTPS. People choose not to for ideological reasons.


How would you solve it without HTTPS?


By signing it.

"Injection" is the process of inserting content into the payload of a transport stream somewhere along its network path other than the origin. To prevent injection, you simply need to verify the contents of the payload are the same as they were at the origin. There are many ways to do this.

One method is a checksum. Simply provide a checksum of the payload in the header of the message. The browser would verify the checksum before rendering the page. However, if you can modify the payload, you could also modify this header.

The next method is to use a cryptographic signature. By signing the checksum, you can use a public key to verify the checksum was created by the origin. However, if the first transfer of the public key is not secure, an attacker can replace it with their own public key, making it impossible to tell if this is the origin's content.

One way to solve this is with PKI. If a client maintains a list of trusted certificate authorities, it can verify signed messages in a way that an attacker cannot circumvent by injection. Now we can verify not only that the payload has not changed, but also who signed it (which key, or certificate).

Note that this does not require a secure transport tunnel. Your payload is in the clear, and thus can be easily cached and proxied by any intermediary, but they cannot change your data. So why don't we do this?

Simple: the people who have the most influence over these technologies do not want plaintext data on the network, even if its authenticity and integrity are assured. They value privacy over all else, to the point of detriment to users and organizations who would otherwise benefit from such capability.
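
To illustrate the idea: here's a minimal sketch of that sign-then-verify flow, assuming Ed25519 keys and Python's `cryptography` library. The payload and the "ship the signature alongside it" arrangement are made up for illustration, not any real protocol:

    # Sketch of "signed plaintext" delivery: the origin signs the payload once,
    # intermediaries may cache/proxy the bytes freely, and the client verifies
    # the signature before rendering anything.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # --- at the origin (done once, offline) ---
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()   # distributed out of band / via PKI

    payload = b"<html>my resume</html>"
    signature = private_key.sign(payload)   # shipped alongside the payload,
                                            # e.g. in a hypothetical header

    # --- at the client ---
    def verify(payload: bytes, signature: bytes) -> bool:
        try:
            public_key.verify(signature, payload)
            return True
        except InvalidSignature:
            return False

    assert verify(payload, signature)                     # untouched payload passes
    assert not verify(payload + b"<script>", signature)   # injected content fails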


And what happens when the content changes? Cacheability is not always a good thing. Your solution is vulnerable to replay attacks. You could be seeing an outdated version of a resource without knowing it. This is only acceptable for truly static content, which is becoming increasingly rare on the web.


This content should not change, or should change very rarely. The bulk of the data on the web is media files and static resources. Until browsers started locking down third-party requests, handling these over HTTP was standard. Obviously it was a security problem, but it wouldn't have been with this alternate method.

However, it's not that hard to avoid replay after the cache expires. HTTP sends the Date of the response along with Cache-Control instructions. If the headers are also signed, they can be verified by the client. If the client sees that the response has clearly expired, it can discard the document. As a dirtier hack, it can also retry with a new unique query string, or provide an HTTP header with a token which must be returned in the response.
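
A rough sketch of that expiry check, assuming the Date and Cache-Control headers are covered by the origin's signature (standard-library Python; the header values below are invented examples):

    # Sketch: discard a (signed) response whose Date + max-age is in the past.
    # Assumes Date and Cache-Control were inside the signed portion, so a
    # replaying attacker can't simply rewrite them.
    import re
    from datetime import datetime, timedelta, timezone
    from email.utils import parsedate_to_datetime

    def is_expired(date_header, cache_control):
        issued = parsedate_to_datetime(date_header)         # RFC 7231 Date header
        match = re.search(r"max-age=(\d+)", cache_control)
        max_age = int(match.group(1)) if match else 0
        return datetime.now(timezone.utc) > issued + timedelta(seconds=max_age)

    # A response signed long ago with max-age=86400 would be rejected:
    print(is_expired("Wed, 14 Mar 2018 12:00:00 GMT", "public, max-age=86400"))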


Sounds like you just reinvented HTTPS with a null encryption cipher. I don't see how this makes anything easier or better.


I would love it if null-encryption ciphers actually worked in real life, but they don't (for the same reason plaintext HTTP/2 does not: everyone disabled them under political pressure).

By the way, signing is not equal to "null encryption". Signing can be done in advance, once. Signed data can be served via sendfile(). It does not incur CPU overhead on each request. Signing does not require communicating with untrusted parties using vulnerable SSL libraries (which can compromise your entire server).

As we speak, your SSL connection may be tampered with. Someone may be using a Heartbleed-like vulnerability in the server or your browser (or both). You won't know about it, because you aren't personally auditing the binary data that goes in and out over the wire… Humorously enough, one needs to actively MITM and record connections in order to audit them. Plaintext data is easier to audit and reason about.


And how do you sign these requests? How do you get browsers to trust the signature? Oh, well, we already have a similar solution that also protects the entire connection from spying... it's called HTTPS.


It's like one apt package and one cron job away. I think some ACME clients even handle the cron setup for you. So, like, one command. There is a really great ACME client written in bash which is incredibly painless to set up.

Literally in the time you've spent thinking about and composing your reply you could have implemented free, secure TLS for your users.


It's not that easy if you don't want to run a public HTTP server. I had to write an ACME client myself because I didn't find a single one simple enough. I spent weeks doing that, compared to the 5 minutes it took to get a 3-year certificate from WoSign when it was a thing. I hate that Google destroyed every free SSL certificate issuer and pushed its own child to further dominate the world.


>wosign

Are you name-dropping WoSign just to be obtuse? They were untrusted because they were untrustworthy, not because Google just doesn't like them. https://www.schrauger.com/the-story-of-how-wosign-gave-me-an...


I don't trust any US company, so it's not any more untrustworthy for me than DigiCert, for example. I'm dropping its name because they were offering free 3-year certificates and it was the best TLS experience I've ever had.


There are a lot of countries I don't trust to keep sensitive data in. But my point is that WoSign was provably untrustworthy, rather than there being mere speculation about government interference, as with other CAs. I saw from your GitHub that you live in Kazakhstan; I would remind you that the government there is less than trustworthy as well [0] in regards to digital privacy.

[0]: http://www.slate.com/blogs/future_tense/2015/12/14/kazakhsta...


I doubt that any government is inherently more trustworthy than any other.

It just coincidentally happens that the US controls 100% of root CAs and Kazakhstan (most likely) controls 0. So the latter needs more audacious measures, while the former can just issue a gag order to Symantec (or whoever is currently active in the market).

The CA system is inherently vulnerable to government intervention. There is no point in considering defense against state actors in the HTTPS threat model. It is busted by default.


Maybe not 100%. Bermuda has a root CA: QuoVadis Global.


https://github.com/Neilpang/acme.sh does exactly what you want.


What is the point in trusting third parties if you need to keep trusting them after they were obviously untrustworthy? The entire world depends on the trust chain for SSL; keeping that chain trustworthy is very important.

Marking non-HTTPS sites as not secure is a result of the network having proven itself to be unreliable. This covers both the Snowden revelations and the cases of ISPs trying to snoop.

Besides, HTTPS isn't hard to get. Worst case, you install nginx, Apache, or the like as a reverse proxy and add TLS there. Things got even simpler when Let's Encrypt came along. Anyone can get a trusted cert these days.


> in my threat model

It isn't your threat model that is important here. It is the users' threat models. Maybe you have full control of that too (the simplest case where that would be true is if you are your only user), but for most sites that isn't the case.


The nasty language in reply to your comments is righteous anger. You are advocating to hurt people; the proper response by well-adjusted people to such advocacy is anger.

You will see the same sort of anger at e.g. parents who refuse to get their kids vaccinated (they're my kids, they say; Big Pharma can't make decisions for me, if you want to get your kids vaccinated, that's fine but there's a cost-benefit analysis, I just don't want it forced down my throat). It would be incorrect to conclude that the angry people are the wrong people.


I hear you. Moving to SSL for millions of old websites is a pain in the ass. It's a degree of effort that people often skim over.

Speaking as someone who's maintained a lightweight presence on the Web for over 20 years, I've thought about the tradeoff and I think it is worth it. Our collective original thinking about protocols skipped security and we've been suffering ever since. I was sitting in the NOC at a major ISP when Canter and Siegel spammed Usenet. Ow. Insecure email has cost the world insane amounts of money in the form of spam. Etc., etc., etc.

You and I probably disagree on the cost/benefit analysis here, which is OK. It'd be helpful in discussion if advocates on both sides refrain from assuming zealotry on the other side.


Yeah, I'm not opposed to HTTPS. In fact, the reason I get frustrated is because, like you, I've dealt with it at scale for years. I agree it should be used most places, but what about static documentation sites? What about blogs? I've even used Let's Encrypt a few times, and it seems like a great service. But who wants to set up that machinery for a simple resume site?

That machinery has a cost. With every barrier we throw up on the web, it makes it harder to build a reliable site. I also realize this is an argument I've lost. It's so much easier to just say "HTTPS everywhere" than to examine the tradeoffs.

Oh well.


> It's so much easier to just say "HTTPS everywhere" than to examine the tradeoffs.

This touches on the real point of all this, which doesn't seem to have been contained in any replies to you.

There's no real choice in the matter; HTTPS is a requirement if, and that's the very big if right there, we truly acknowledge that the network is hostile. With a hostile network, the only option is to distrust all non-secure communication.

HTTPS isn't about securing the site, as you know; it's about securing the transmission of data over the transport layer, and it's needed because the network is hostile.

It doesn't matter one little iota what the data is that's traversing it, as there's no way to determine its importance ahead of time. A resume site might not be of much worth to the creator, but the ecosystem as a whole ends up having to distrust it without a secure transport layer because the hostile network could have altered it.

It doesn't matter that the effect of that alteration might be inconsequential, as there's also no way to determine that effect ahead of time. The ecosystem's 'defense' is to distrust it entirely.

And that's the situation the browsers/users/all of us are left with. There is no option but to distrust non-secured communication if the network is hostile.


Yeah, it is an argument you've lost, because it's a bad argument.

Even places like DreamHost give you a Let's Encrypt cert for free on any domain.

There is no case to be made for not securing your site, on principle or based on what's already happening out in the world, with shady providers injecting code into non-secure HTTP connections.

You see it as "a simple resume site," and I see it as a conduit for malicious providers to inject malicious code. Good on the browser folks for pushing back on you.


Yup, the DreamHost model, and the model at generic cPanel sites (sadly, some places with cPanel disable this to drive revenue to their commercial CA partner), is the Right Thing here: one of the options when setting up or modifying your web site is "Free automatic certificates", and then it's the host's job to make sure that stays working, just like if you pick "Use latest PHP" or "Strip leading www. from hostname". The guy with a blog about carpentry shouldn't need to care about the ACME protocol any more than he cares about how erbium-doped optical amplifiers work when calling his grandmother halfway around the world. It's just technology.


My favorite part of the internet has always been the small hobbyist websites. The guy who has an encyclopedic database of Grateful Dead trivia, the other guy who collects pictures of plants. Those people are independent, they're not technical, and their 90s-looking websites are going to go under because of blanket security policies that don't concern them.


You do realize you're making this complaint on a discussion about a tool that makes HTTPS easier for said small hobbyist websites? I've updated all of my hobby sites using Let's Encrypt, and I really appreciate how it was easy for me while also being good for my users.


My comment isn't against Let's Encrypt. It's against blacklisting text only sites that don't need HTTPS.


If a "text-only" website is on HTTP it can be MITM'ed and used to serve up malicious JS.


Nobody is blacklisting them. The visitors are just being informed of the risks.

The warning used to be the absence of a padlock, but who notices that?


If not SSL, then they'd go away when some other technical change landed. Or do you suggest "we" continue using broken protocols forever in order to preserve them? Do you still support telnet to accommodate people who can't handle `ssh-keygen`?

In any case, (a small subset of) the random enthusiast sites and such are close to the only reason I use a browser recreationally anymore. I absolutely agree with you.

The answer isn't to stop fixing things. The answer is to make it easier and cheaper to be secure.

Kinda like what LE is doing, no?


My point is that those sites don't need to be any more secure than they are. A hobbyist website written in HTML in Notepad, with only text and images, that can run on IE 5.0 might not require HTTPS, and yet Google and others might change that.


I don't get the notion that some sites don't "need" HTTPS. The threat model it protects against isn't only sensitive information being intercepted, it's also man-in-the-middle attacks that actually change what's delivered. Maybe a hobbyist website only has text and images sitting on its server, but the visitor might receive malware — and that can happen to literally any site served over HTTP.


> I don't get the notion that some sites don't "need" HTTPS.

Your failure to grasp this is fairly evident from the rest of your comment.


Plaintext HTTP being fine for delivering public documents might have been true 10 or 20 years ago. Sadly, attacks on, and uninvited mutation or corruption of, plaintext content have become so common (at least in some parts of the world) that you can be almost certain one or more of your users will be affected if you're not taking precautions.

It sucks badly. I'd prefer a less hostile network myself. Even back then there were bad actors, but at least you could somewhat count on well-meaning network operators and ISPs. Nowadays it's ISPs themselves that forge DNS replies and willfully corrupt your plaintext traffic to inject garbage ads and tracking crap into it. And whole nation-states do the same, but for censorship instead of ad delivery.


Yeah, and what do those hobbyists do? They go to a blogging service provider or something like a wiki provider and put their stuff there. That still happens today. And of course those users wouldn't want someone else coming along and tampering with their collection, so HTTPS everywhere is a must. And these users won't even have to know or care.


Now we have wordpress, medium, etc. It's never been easier to have a personal blog over HTTPS.


>Yeah, I realize they weren't competent, but I also realize that it had no practical effect on my site's security

Can you explain why you think Symantec demonstrating incompetence is completely isolated from your Symantec SSL-protected website?

I sense a lot of hostility coming from you. It seems like you think we do these things for fun. Do you imagine a bunch of grumpy men get together, drink beer, and pick a new SSL provider to harass and bully?


[flagged]


> You can't possibly understand...

Oh, I get it. I've worked with lots of people like you.

You're lazy.

As an infosec practitioner, I'm the one that cleans up after the people who claim good current infosec practices are "too hard" or "impractical" or "not cost-effective", which all boil down to sysadmins and developers like you creating negative externalities for people like me. I have heard all of these arguments before. "Oh, we can't risk patching our servers because something might break." "Oh, the millisecond overhead of TLS connection setup is too long and might drive users away." "Oh, this public-facing service doesn't do anything important, so it's no big deal if it gets hacked."

That's irresponsible.

I'm not at all sorry that the wider IT community has raised the standards for good (not best, just good) current infosec practices. If you're going to put stuff out there, for God's sake maintain it, especially if it's public-facing. If using the right HTTPS config is that difficult for you, move your stuff behind CloudFront or Cloudflare or something and let them deal with it. If you can't be bothered with some minimal standard of care, you need to exit the IT market.

And good luck finding a job in any industry, in any market, where anyone will think that doing less than the minimal standard, or never improving those minimums, is OK.


> If you can't be bothered with some minimal standard of care, you need to exit the IT market.

My goodness, you just nailed it.

The IT job market is so tight that complete incompetence is still rewarded. Incompetence and negligence that would get you fired immediately or even prosecuted in many if not most other professions.

If restaurant employees treated food safety the way most developers treat code safety, anyone who dined out would run about a 5-10% chance of a hospital visit per trip.

I was just arguing with a “senior developer” who left a wide open SQL injection in an app. “But it will only ever be behind the firewall, it’s not worth fixing.”

That’s like a chef saying “I know it’s old fish but we’ll only serve it to people with strong stomachs, I promise”.


I wrote that in anger, and almost right away removed it when I calmed down. Please see my current comment.


It's rather bad form to do so without noting what you edited in the comment itself, especially as your parent poster replied to it.


But why did it make you so angry? My guess is because my viewpoint is completely unfathomable to you. You can't even believe that someone would advocate for it. In situations like that, I always try and put myself in the shoes of that person. Sometimes they are wrong, and sometimes they have a point. But it's always a useful exercise.

To your parent comment -

No, I don't think it's a cabal of "grumpy old men" - I think it's a cabal of morally righteous security-minded people who have never worked for small companies or realized that most dev teams don't have the time to deal with all this forced entropy.

You care about security; I care about making valuable software. Security can be a roadblock to releasing valuable software on time and within budget. If my software doesn't transmit sensitive data, I surely do not want to pay the SSL tax if I'm on a deadline and it's cutting into my margins.


What the gently caress does encrypting an HTTP connection have to do with morals or age? You are way outside the realm of making sense, man, and offer commentary that is openly harmful to securing the Internet. Please step back and revisit your woefully misinformed opinion on this.

Most people who advocate for security, including myself, have worked on small teams and understand the resources involved. Putting a TLS certificate on your shit with LE takes minutes. Doing it through another CA is minutes, in a lot of cases. You spent more time downloading, installing, and configuring Apache, then configuring whatever backend you want to run, and writing your product or blog post or whatever it is you’re complaining about securing.

Honestly, in the time you’ve been commenting here, you could have gotten TLS working on several sites. Managing TLS for an operations person is like knowing git for a software developer. It’s a basic skill and is not difficult. If it’s truly that difficult for your team, (a) God help you when someone hacks you, they probably already have and (b) there are services available that will front you with a TLS certificate in even less time than it takes to install one. Cloudflare and done.

> Security can be a roadblock to releasing valuable software on time and within budget.

Great, you've pinpointed it. Step two is washing it off. Ignoring security directly impacts value, and I'm mystified that you don't see this.

But I guess I'm a zealot ¯\_(ツ)_/¯


> Putting a TLS certificate on your shit with LE takes minutes. Doing it through another CA is minutes

If you have one server, yes. Otherwise it's the other way around, because if you have multiple servers you need to do a lot of fancy stuff. And LE also does not work on your internal network if you do not have some things publicly accessible. And it also does not work on different ports.

Oh, and it's extremely hard to have a TLS <-> TLS proxy server that talks to TLS backends, which is useful behind NAT if you only have one IP but multiple services behind multiple domains.

IPv6 fixes a lot of these issues.


You can use Let's Encrypt certificates for non-publicly reachable hosts by using the dns-01 challenge type. That, of course, means that you need some way of properly automating your DNS infrastructure to add the necessary TXT records which, admittedly, is sadly not the case in many organizations. It's a solvable problem, though.

I don't understand your last point. Where do you see the problem with letting a reverse proxy talk to a TLS backend? You get the requested server name from the SNI extension and can use that to multiplex multiple names onto a single IP address. The big bunch of NATty failure cases apply to plaintext HTTP just as well, no?
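
For what it's worth, Certbot's --manual-auth-hook is one way to wire up that DNS automation. Here's a rough sketch of such a hook, where `my_dns_provider` is a hypothetical stand-in for whatever DNS API you actually have:

    #!/usr/bin/env python3
    # Sketch of a Certbot --manual-auth-hook for the dns-01 challenge.
    # Certbot exports CERTBOT_DOMAIN and CERTBOT_VALIDATION to the hook;
    # the hook's only job is to publish the validation token as a TXT record.
    import os
    import time

    import my_dns_provider  # hypothetical: replace with your DNS provider's client

    domain = os.environ["CERTBOT_DOMAIN"]          # e.g. internal.example.com
    validation = os.environ["CERTBOT_VALIDATION"]  # token the CA will look for

    my_dns_provider.create_record(                 # hypothetical API call
        name="_acme-challenge." + domain,
        type="TXT",
        value=validation,
    )

    time.sleep(60)  # crude: give the record time to reach the authoritative servers

You'd invoke it with something like `certbot certonly --manual --preferred-challenges dns --manual-auth-hook ./hook.py -d internal.example.com`.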


Well, the last point means that I need to roll out the cert to multiple servers (as the poster below writes).


In the most common setups, the reverse proxy usually terminates the TLS session and uses a different connection to make requests to the backend servers (e.g. nginx proxy_pass directive).

This means the backend server certificates are only ever exposed to your reverse proxy. There's no need to use publicly-trusted certificates for that. Just generate your own ones and make them known to the proxy (either by private CA cert or by explicitly trusting the public keys).


This new version issues wildcard certificates. Get one certificate. Use Puppet, Chef, Ansible, Salt, Bolt, multissh, or GNU parallel to put it on multiple servers for that domain.

If you need lots of different domains, use one of the auto certificate tools.

If you can't use one of those yourself, consider hosting on a platform that can automatically do this for you for all your sites, like cPanel (disclaimer: I work for cPanel, Inc).

If your stuff is never publicly accessible because you're in a fully private network, just run your own CA and add it to the trust root of your clients.

If you need an SNI proxy, search for 'sniproxy' which does exist.

If you're so small that you can't afford an infrastructure person, a consultant, or a few hours to set such things up yourself, then maybe you should shorten the HN thread bemoaning doing it and use the time to learn how.


> offer commentary that is openly harmful to securing the Internet

Funny you mention this.

With this new functionality, I can register valid certs for any domain in the world if their DNS is insecure, or if I can spoof it.

Have we made any headway yet on that whole "anyone can hijack BGP from a mom-and-pop ISP" thing?

How many CAs are still trusted by browsers, again? How many of those run in countries run by dictators?

HTTPS doesn't secure the Internet. It's security theater for e-commerce.


> I think it's a cabal of morally righteous security-minded people who have never worked for small companies or realize that most dev teams don't have the time to deal with all this forced entropy.

This is just one anecdote, but I worked at a company small enough that I was the only developer/ops person. Time spent managing HTTPS infrastructure couldn't have been more than a handful of hours a year.

What is so painful to you about running your website(s) on HTTPS?


Honestly, going from never having used nginx to having an auto-renewing "A+" HTTPS site took no longer than 3 hours.


Would you be open to having a phone call, or some other more direct way of discussing this?

It may be easier to be more empathetic.


> marked ominously as "insecure"

It's not that ominous. It's not even red!

I think it's pretty obvious to most users that "Insecure" doesn't matter as much on some random blog, but does matter a lot on something that looks like a bank or a store.


That has to be balanced against the potential pain for users who will be accessing that software whilst vulnerable to having that information snooped or modified. Perhaps for social engineering purposes, perhaps to serve up the latest zero-day, perhaps just for the lulz... who knows?

SSL has a history of being a pain in the ass. There are a lot of pain in the ass implementations out there. Everyone gets that.

At the same time, it's never been easier, and basic care for what you're serving your users demands taking that extra step. What Google is doing amounts to disclosing something that's an absolute fact. Plain HTTP is insecure (in the most objective and unarguable way possible), and it is unsuitable for most traffic given the hostile nature of the modern web.

Do you want your users to be intercepted, socially engineered, or served malware? If the answer is no, secure it. The equation is that simple. Any person or group of people who in 2018 declines to secure their traffic is answering that question in the affirmative and should be treated accordingly!

That's not "zealotry" friend, that's infosec 101.


Your closing argument is essentially "if you're not with us, you're against us," which sounds like quite the zealot's argument to me.


Only because having your stuff SSL'ed (not snoopable) is a binary state. And while you might have business reasons for not doing it, putting those above your users' safety is just plain negligent. In the same way that storing plaintext passwords, sending them around via email, or using SMS as a two-factor authentication method is negligent.

So in a way, you're right. I'm not sure why that's a negative.


> just want their software to work

Your software does not work if it is not secure. Security is a correctness problem.


If a given piece of software can't handle TLS, that's a fundamental problem of the software / development process and not the fault of the infosec community. Update or change the libraries in use and everything will be fine. I switched a whole distributed system from plaintext communication to TLS-secured connections just yesterday.

Yes, sometimes it's a pain to solve TLS-based errors, and I also miss being able to debug each transmitted packet with tcpdump, but I appreciate that the continuous focus on TLS improves the tooling and libraries, and each day it gets a little bit easier to set up a secure encrypted connection.


>developers who just want their software to work.

Do they keep their servers up to date? Why is it so much easier to do that than getting an SSL cert four times a year?

I hope they update their servers more often than that.


You're getting strident comments and downvotes primarily because of the unnecessarily harsh and condescending tone of your post.


Did you file a bug report on the Mozilla site about forcing HTTPS?



