I personally have a cron script set up on my domain gateway that updates certificates once a month and reloads nginx at the end. Total unavailability is about 0.5 seconds once a month.
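A sketch of such a job, assuming certbot and a standard cron setup (the actual script may well differ):

```
# /etc/cron.d/certs -- hypothetical monthly renewal at 03:30 on the 1st.
# "nginx -s reload" is a graceful reload: new workers pick up the new
# certs while in-flight connections finish on the old ones.
30 3 1 * * root certbot renew --quiet --deploy-hook "nginx -s reload"
```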
It is. I hadn't even looked into it, because I set the job to off hours and the domains have low enough volume that even a non-graceful reload wouldn't affect anything.
Thanks for asking, because now I know. I was just assuming the same lag I see in the CLI.
Huh, that's a rather interesting limitation. I guess internally mod_md must be changing the configuration of the server with every renewal? Otherwise I'm not sure why a restart would be needed; the server should just start using the new cert for new connections.
About the elephant in the room: Let's Encrypt is becoming too big to fail. Wasn't the point of open-sourcing the whole protocol so that we could have multiple CAs like Let's Encrypt?
It seems that running a free CA doesn't really have a business model, so capitalism isn't going to produce viable competitors.
Additionally, other impact-focused people (non-profits, etc.), who would otherwise be willing to make a free CA, probably think Let's Encrypt is doing good enough, so why waste valuable time making the same thing when you could focus on having an impact elsewhere?
I suspect this is a pretty common end result in public-good tasks that don't have business models. They naturally grow to be too big to fail. Some governments try to solve this by just absorbing the task and making it a part of the government's responsibilities. I doubt this would happen for Let's Encrypt, so I guess we'll be stuck with a too-big-to-fail non-profit until it fails, starts to suck too much, gets absorbed by the government, or someone figures out a business model.
I would think (hope) that big ad companies like Google and Facebook would find that the proliferation of HTTPS is good for business and provide a free CA.
Can anyone offer insight into why there isn't another CA that implemented ACME?
There's no reason they can't charge money for certificates issued through it, or in fact use it for EV certificates (after a separate initial verification process).
Is the market size of people who would like to automate certificate renewal but want to use a commercial CA zero?
A number of CAs are working on ACME server implementations, based on posts on the working group mailing list.
There are a couple of reasons why none are available today. First, ACME isn't quite standardized yet. Let's Encrypt currently implements an older draft and will start offering an ACME server running what will become the RFC version of ACME early next year. It makes sense for other CAs to wait for the standardization process to finish (hopefully while working on their own implementation and giving feedback!) rather than implement the draft currently supported by Let's Encrypt and all clients, and then migrating to an incompatible new version soon after. Going live with a server supporting the latest draft wouldn't make much sense either since clients wouldn't work with it.
Another reason is that the later drafts saw a couple of additions and changes that are relevant for commercial CAs, for example the ability to bind ACME accounts to existing CA accounts, and changes to how out-of-band processes are handled (payments, validation steps that cannot or must not be automated). This was done based on feedback from the small number of commercial CAs that were active in the working group, and should hopefully make ACME a viable option for many others.
ACME is an open protocol. The LE servers are open source. You can run your own ACME server, even the same code base as LE if you wish, and it'll work fine with Caddy and Apache.
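For example, most ACME clients let you point at an alternative directory endpoint; with certbot that's the `--server` flag (the URL below is a placeholder for a self-hosted CA):

```shell
# Request a certificate from your own ACME server instead of Let's Encrypt.
certbot certonly --webroot -w /var/www/html \
  --server https://acme.internal.example.com/directory \
  -d www.example.com
```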
And if you think it's too big to fail, I encourage you to start your own ACME-compatible sister CA to Let's Encrypt. And if you're now realizing you don't have the resources to do that, you just answered your own questions (but you can still donate to the EFF or to Let's Encrypt directly: https://letsencrypt.org/donate/)
There are companies that run ACME internally so all their internal resources are secure. They can then push the company CA via group policies, or make them available for Linux devs to install.
But you're right about trying to run another public Let's Encrypt-style CA. Getting your root cert signed and into browsers is a major undertaking, and I'm really glad we have Let's Encrypt. We might be tied to one big provider, but everything they've done is open if other people want to attempt it.
That is indeed a problem, but there are several factors which I think make it less of a problem than it would be with other "too big to fail" CAs.
First, because certificate issuance is automated, if Let's Encrypt ever had to migrate to a new root cert they could do so in a way that's almost completely transparent to its clients. Aside from sites that have the old roots pinned, Let's Encrypt could simply start signing new certs with a different root and everyone could just carry on like nothing happened.
Second, because of the low validity period on Let's Encrypt certs, if such a transition did become necessary the entire process could be completed in ~90 days. This is in contrast to other "too big to fail" CAs (e.g. Symantec), where the process of distrusting the old roots takes years.
These factors don't eliminate the concern entirely, of course, but they do make the overall situation much better.
I thought the point of the process was to automate certificate issuance so that HTTPS could be deployed everywhere. I didn't think distributing CAs was part of that, or even desirable.
If they take the Apache approach, and it's a loadable module, sure.
I personally don't want Apache loading mod_ssl (or this module either) when I'm not using it. With things like nginx, caddy, etc., you have to recompile to remove that part, if it's even possible.
This is why I think projects like caddy/traefik shouldn't get too comfortable thinking Let's Encrypt / HTTPS support by default alone is going to differentiate them much. They're one PR away from having their major selling point become irrelevant in the face of the competition.
(I don't use caddy, but I always saw the "HTTPS by default" thing more as a nice thing to have than as hugely important, given that you can get the same with external scripts in apache or nginx. But being memory-safe is the real differentiator, and one that certainly isn't reachable with a pull request in apache or nginx.)
Now, you’ll notice that https://caddyserver.com/ works, but https://caddyserver.com./ doesn’t. Caddy, the server, doesn’t support it unless you enter every domain twice manually. And caddy, the website, doesn’t support it either.
This was closed as a WONTFIX, even though every webserver implementation except traefik and caddy handles it the same way.
> Same with every major site, and every major webserver.
I last tried this a few years back (probably around 2011). I found that a substantial fraction of major sites did not support it, and a substantial fraction of those that seemed to support it produced web pages that were at least partially broken.
IIS might support it, but Microsoft doesn't (universally): social.technet.microsoft.com, live.com, bing.com, office.com, skype.com all fail to properly load or redirect. As does instagram.com and linkedin.com.
It sounds like the situation has improved (if you consider it an improvement!) since then.
But did all of them function correctly? Assertions about the host are very common. Many things operate by domain whitelists, and so things like font loaders and analytics will commonly not work. Cross-origin resource loading will often break, if `*` is not used.
(Most of the things that I expect to break are unimportant, but there will still be a non-trivial number of important breakages.)
So, it's to be able to indicate that we wrote an FQDN; otherwise the DNS client, if it has a local search path, will check whether it's a relative domain first.
I have to admit it's technically a benefit, but if you have a search path that resolves FQDNs as relative domains, isn't half of your software broken anyway? I can't say I've ever seen an FQDN with a dot at the end in any hardcoded or default value.
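For context, the search-path behavior comes from the resolver configuration; with something like the following (contents are an illustrative assumption), an unqualified name gets the suffix tried first, while a trailing dot forces absolute resolution:

```
# /etc/resolv.conf
search corp.example.com
nameserver 192.0.2.53
# "db" may be tried as db.corp.example.com first;
# "db.example.org." (trailing dot) is always treated as absolute.
```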
> but if you have a search path that resolves FQDNs as relative domains, isn't half of your software broken anyway
That’s correct, but it shouldn’t be that way.
I should be able to have google.com resolve to google.com.local.kuschku.de in my resolver, without issue, and the actual website should use google.com.
The fact that we don’t do that today breaks many parts of the original DNS and URL RFCs.
DNS software has absolute domain names in config files. In BIND zone files you have entries like "IN NS ns1.example.com." specifying the nameserver for the domain.
I bet some software implicitly uses absolute domains. URLs are just specified not to work like that.
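The zone-file convention mentioned above, as an illustrative snippet: names with the trailing dot are absolute, while names without it get $ORIGIN appended:

```
$ORIGIN example.com.
@      IN  NS  ns1.example.com.   ; absolute: trailing dot
www    IN  A   192.0.2.10         ; relative: expands to www.example.com.
```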
What? They're not even comparable. Here are distinct advantages of all three as I see them:
- Traefik has cross-platform, highly dynamic proxying
- Apache has such widespread use and market saturation
- Caddy is the only server, even in the face of mod_md, to have fully automatic HTTPS by default
The thread you linked to has nothing to do with any of this, except that it links to this comment by myself, which preempts your claim: https://news.ycombinator.com/item?id=15433788
They are absolutely comparable, and the advantages each one has don't exclude the others from attaining the same features.
Traefik is cross-platform? So are all the others. Highly dynamic? What does that even mean? They're all "dynamic".
Apache has widespread use and market saturation... how's that the single advantage it has? It's been evolving a lot.
Caddy is the only server to have fully automatic HTTPS? How much longer until mod_md gets that?
I think you've missed my single point, and you kind of confirm my fears.
The link which I posted has everything to do with this discussion. It's about Caddy thinking a bad business plan will work because "caddy is the only server to have fully automatic HTTPS by default".
Last question, is Caddy thinking of hiring a CEO or sales person? I think it should.
> mod_md builds are not yet available for all platforms.
Do you have reason to believe they won't be? Are you betting your business on the failure of Apache to do basic release engineering?
> Where did I say it was the "single advantage"?
That's fair. Because you listed it then I think that's the "major" advantage. Is that right?
> You forgot "by default" -- and probably never, not on Apache's main release tree. Or at least not for a long time.
Why? Given that Let's Encrypt and HTTPS by default are something a lot of people want, why do you think Apache will ignore that and not include mod_md in Apache "for a long time"?
Competition is good. I don't have major reasons to be afraid but I would like Caddy/Traefik and others to succeed. From the very basic mistakes they're making in coming up with a business plan, I don't think they will. And no, being open source alone is not reason enough to ensure project survival.
If you re-read your own comment, I think you're the one spreading FUD about those other projects (and their implied inability to outpace Caddy).
Because those projects are very conservative about making things default. Apache famously has (or had, by now?) bad defaults that no one should use, kept purely for compatibility reasons.
Keep in mind that caddy is not only https by default, it's HTTP/2 by default as well. How long until that is by default in Apache?
And I don't think those are even the killer features of Caddy. They are the things that drive people in, but the real killer feature is how easy it is to configure.
Where do you see evidence of projects like Caddy getting too comfortable thinking Let's Encrypt/HTTPS support by default alone is going to differentiate them?
The author (mholt) replied above, and the 'distinct advantage' he identified for Caddy, his own product, was:
> Caddy is the only server, even in the face of mod_md, to have fully automatic HTTPS by default
In every discussion about Caddy I've seen, the same argument is made. Even when caddy would refuse to start (with valid certificates cached!) during the LE outage, the response was "but we do LE + TLS automatically".
I still don't understand the concept of Caddy. The project seems inherently aimed at hobbyists at best, based on the idea that "it's too hard to enable TLS in $Competition", yet they provide literally zero support for actually running Caddy - no sysvinit script, no systemd unit file, NOTHING.
So tell me again who their target market is? People who can't enable TLS in <Apache/HAProxy/Hitch/Nginx> but can write a fucking unit file for systemd?
Don't know where you get that idea from. The reference implementation for Let's Encrypt has always been a Python-based collection of scripts (with auto-config, auto-update, etc.) for Apache httpd. A native Apache module for ACME has been proposed for some time now, and is great because the reference implementation is quite a bit too rich to run as root (and is Python 2 only, I believe).
certbot, the reference ACME implementation, should work with Python 2 and 3 (it definitely works with 3; I haven't verified 2 with recent versions), and it does not require root (though the default configuration will want it).
IIRC, the last time I set it up, I stuck HAProxy in front so I could still send ACME requests to certbot, but didn't have to have it running as root. If you put its user in the HAProxy group, it can write the certs as 640. If you want to be really secure, you create SELinux or Apparmor policies as well.
I use HAProxy + Certbot too (with a certbot "hook" script that builds the .pem for HAProxy AND downloads the OCSP staples from LE).
As a bonus, you can have zero downtime renewals and use the TLS-SNI challenge, rather than relying on the "it's probably safe but it still feels wrong" http challenge.
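The core of such a hook might look like this; a sketch assuming HAProxy's convention of a single .pem containing chain plus key (the OCSP-staple step is omitted, and the paths and reload command are placeholders):

```shell
#!/bin/sh
# Build the single .pem file HAProxy expects from certbot's output files.
build_haproxy_pem() {
    live_dir="$1"    # e.g. /etc/letsencrypt/live/example.com
    out_file="$2"    # e.g. /etc/haproxy/certs/example.com.pem
    cat "$live_dir/fullchain.pem" "$live_dir/privkey.pem" > "$out_file"
}

# As a certbot deploy hook, certbot exports $RENEWED_LINEAGE pointing at
# the renewed lineage's directory:
# build_haproxy_pem "$RENEWED_LINEAGE" /etc/haproxy/certs/site.pem
# systemctl reload haproxy
```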
[It uses the value of the global ServerAdmin setting][1]:
> There are 2 additional settings that are necessary for a Managed Domain: ServerAdmin and MDCertificateAgreement. The mail address of ServerAdmin is used to register at the CA (Let's Encrypt by default). The CA may use it to notify you about changes in its service or status of your certificates.
... but if you don't supply one of course Let's Encrypt won't notify you about anything.
So if you aren't paying attention, you may get blindsided by any future change, particularly if your use case is weird, e.g. you can only pass http-01 by HTTP 301 redirecting to a machine with a completely different hostname. That works today, but it could get outlawed as dangerous one day, and they'd have the records to show you're going to be affected, but no way to automatically warn you.
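For reference, a minimal Managed Domain setup along those lines might look like this (directive names from the mod_md docs; the domain and address are placeholders, and some mod_md versions want the ToS URL rather than `accepted`):

```
ServerAdmin webmaster@example.com
MDCertificateAgreement accepted
MDomain example.com www.example.com
```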
As a good first step, it's easy to configure the Prometheus blackbox exporter (or your TLS-supporting blackbox scraper of choice) to report the TLS cert expiry date; I have an alert which pages me if a TLS cert will expire in a week or sooner based on this.
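A sketch of such an alerting rule, assuming the blackbox exporter's `probe_ssl_earliest_cert_expiry` metric is already being scraped:

```yaml
groups:
  - name: tls
    rules:
      - alert: TLSCertExpiringSoon
        # Fires when the earliest cert in the chain expires within 7 days.
        expr: probe_ssl_earliest_cert_expiry - time() < 7 * 24 * 3600
        for: 1h
```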
Expiration dates on your TLS certs are usually something you want to monitor and alert on anyway. I'd actually build the monitoring separately from the renewal process, just in case the renewal process doesn't notice that it has failed.
I doubt it does in general, but mod_md does have a pretty chatty log if you enable it. Haven't tested this specifically, but I assume it prints something around renewal time.
Gosh I can't believe how embarrassing this post is for the LE team. All that time, effort and hard work let down by using the "nano" editor in the Youtube video.
(This is of course, fantastic news and great to see it'll be even easier for non-technical people to use HTTPS with little effort)
Other CAs have made interested noises. Big ones have indicated to m.d.s.policy or CA/B that they are, at least, paying attention to the RFC process and some are participating in standardisation.
ACME is at Working Group Last Call. Which means the IETF Working Group (people who thought this was interesting/ important) thinks it's finished but await feedback from outsiders who might not have realised this was coming or are too busy to look at in-progress designs. It will be published as a Standards Track RFC making it an "Internet Standard" in due course.
A monoculture is at least an improvement over the Wild West we had prior to the Ten Blessed Methods. As recently as last year any CA could decide (on its own recognizance) that any method it chose was adequate to verify Domain Control, under a heading "Any Other Method" in the Baseline Requirements. If your CA was happy with a method so dumb nobody should possibly have used it, we'd have to find out about that, explain why it's dumb, and then you'd get told to stop doing it, often taking several weeks to achieve. A list of just ten explicit methods was written, the Ten Blessed Methods, and now CAs must use one or more of those. ACME implements three today, and is designed to be extensible. Some methods involve things like human lawyers writing physical letters, it is unlikely ACME will embrace that sort of manual process directly, but methods involving email or the WHOIS system could end up in there.
As far as I know there's only one serious server-side implementation right now, and that's Let's Encrypt's open-source Boulder project: https://github.com/letsencrypt/boulder
"...you have to manually restart httpd for any certificate changes to take effect."
It's easy enough to have a daily cronjob that just reloads Apache unconditionally, but that feels dirty.
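A slightly less dirty variant reloads only when the certificate file has actually changed, by comparing its mtime against a stamp file (a sketch; the paths and reload command are assumptions):

```shell
#!/bin/sh
# Run the given reload command only if the cert is newer than the stamp.
reload_if_renewed() {
    cert="$1"; stamp="$2"; shift 2
    if [ ! -e "$stamp" ] || [ "$cert" -nt "$stamp" ]; then
        "$@"                       # e.g. apachectl -k graceful
        touch "$stamp"
        return 0
    fi
    return 1                       # cert unchanged, nothing to do
}

# Called from a daily cron job, e.g.:
# reload_if_renewed /etc/letsencrypt/live/example.com/fullchain.pem \
#     /var/run/cert-reload.stamp apachectl -k graceful
```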