When I ran it, the first thing it did was ask for a sudo password. Um, no. I ran it on a test machine and it installed a bunch of dependencies without so much as asking. Again, no. For something that is supposed to be securing systems, it sure seems insecure.
Totally agree that this is not optimal, but bear in mind that the goal is to get letsencrypt shipped through your package manager, where it can just specify dependencies and the package manager will resolve them. With that, this will essentially become a non-issue.
I've been thinking a bit about Python and packaging, and I think there are few good reasons to wrap the bootstrap process of packages like this in a shell script. Since Python is already a hard dependency, it should be easier to write a cross-platform bootstrap in Python than in POSIX shell (hello, Windows?). It is unfortunate that expressing "needs Python, headers, and a C compiler" is as hard as it is. But if dependencies are beyond the (reasonable) reach of pip/setuptools, it might make sense to simply defer to some other package that already deals with this stuff, such as Ansible.
Nothing against automating systems with shell (or Makefiles) -- but it seems to result in much duplication of effort (eg: duplicating functionality from the "os" module in Python's standard library) -- with the added burden of trying to support a variety of Linux and *BSD distros.
Writing a POSIX shell installer that works "idiomatically" on Debian is easy. Writing one that works across old-stable, stable and testing is a little more complex. Adding support for Ubuntu LTS, along with the slew of Ubuntu releases newer than the previous LTS, can already get messy. Add in SuSE, Red Hat, Arch, Slackware and GNU Guix... and that's not even thinking of OpenSolaris or the *BSDs.
It's not that it can't be done, but I think the only sane choice is to get upstream to do the hard work (eg: port/package Python with "os", port/package Ansible -- or use something else, like pkg-config or... autotools...).
I think the best way is probably to distribute a Python script that sets up a virtualenv and runs pip from within that to pull down and set up a package. I'm not yet quite sure if this is viable for Python 2.x though... bringing venv into the standard library is one great (and I think often overlooked) feature of Python 3.
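A sketch of what that bootstrap looks like from the shell, using Python 3's stdlib venv module (the "letsencrypt" package name and paths here are illustrative, not the project's actual bootstrap):

```shell
# Create an isolated environment with the stdlib venv module.
ENV_DIR="$(mktemp -d)/le-env"
python3 -m venv "$ENV_DIR"

# The actual install step (requires network access), shown for
# illustration only:
#   "$ENV_DIR/bin/pip" install letsencrypt

test -x "$ENV_DIR/bin/pip" && echo "environment ready"
```

Everything the client pulls in then lives under the one directory, and removing it removes the lot.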
I suspect you may be missing my point. I don't care about the use of the package manager. The problem is that it shouldn't be run when I asked for documentation.
edit:
Understood, and yah, --dry-run support would be great, but I'll settle for working documentation.
Oh no, I agree with your point, too. But I think they are related -- even though there's no reason not to put the full help text in a here-document (or an actual file, as long as the expected usage is to git clone first; the Python code and the shell script could even share the same help text) -- I see few people/projects providing real help texts in their shell scripts.
Generally, people are very sloppy about handling arguments to shell scripts (probably because it requires a bit of boilerplate). But a nice '-h|--help' and maybe '-n|--dry-run' would be good to have.
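The boilerplate really is small; a minimal POSIX sh sketch of exactly those two flags (the usage text and dry-run action are placeholders):

```shell
#!/bin/sh
# Minimal argument handling for -h/--help and -n/--dry-run.
DRY_RUN=0

usage() {
    echo "usage: myscript [-h|--help] [-n|--dry-run]"
}

parse_args() {
    while [ $# -gt 0 ]; do
        case "$1" in
            -h|--help)    usage; exit 0 ;;
            -n|--dry-run) DRY_RUN=1 ;;
            *)            usage >&2; exit 1 ;;
        esac
        shift
    done
}

parse_args "$@"
if [ "$DRY_RUN" -eq 1 ]; then
    echo "dry run: no changes will be made"
fi
```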
That looks like a lot more work than the official client. I don't get why so many people immediately on the first day of public beta turn to alternative clients. I used the official one, and by looking at the --help options I easily found how to automatically generate a certificate without taking down the server, and without using root privileges:
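The invocation was along these lines (a sketch of the official client's webroot mode; the domain and directories are placeholders, and the per-user --config/--work/--logs dirs are what let it run unprivileged):

```shell
# Prove control of the domain via a file under the webroot, so the
# running web server never has to stop. The per-user directories
# avoid the default root-owned /etc/letsencrypt locations.
letsencrypt certonly -a webroot \
    -w /var/www/example -d example.com \
    --config-dir ~/le/etc --work-dir ~/le/lib --logs-dir ~/le/log
```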
Those paths are examples obviously. But I did that, issued a 'service nginx reload', and it was done. I can reissue that command in a cron job to renew whenever I want.
There are some Python dependencies for the official client, but my package manager installed them for me, so I don't get why people avoid it, especially when most of these alternative clients require more work.
The official client handles your registered account, and I don't trust the alternatives not to mess with that or be incompatible in some way; if you ever want to seamlessly switch back to the official client, you might end up with a different account identity or something. I don't know, I didn't look too deeply into it beyond those steps, and the official one was easiest for me.
>That looks like a lot more work than the official client.
Maybe some people have different experience levels, time constraints, or patience for the non-obvious than you do?
I found acme-tiny by far the simpler client to understand; there was zero worrying about what modifications were being made to configuration files, and no concern about how best to get everything into a nice cron job.
If this is a case of the official client's documentation being too dense for the likes of me, then so be it.
Edit:
>There are some Python dependencies for the official client, but my package manager installed them for me
I just saw the top post regarding this, and then re-read about it in your comment.
For me - again, possibly a different use case to your own - this is not cool.
If you don't trust the official client, why do you trust them to be your certificate authority? Also, note that I ran this command as a normal user, not root.
The purposes for which you trust a CA and the purposes for which you trust a software developer are different, the ways of verifying what they do are different, and the ways their work can go wrong are different.
I work on the official Python client for Let's Encrypt, but I support people's decision to use a different client if they prefer.
It's frustrating to see the occasional conspiracy theory suggesting that the Let's Encrypt project somehow wants to backdoor the client in order to compromise people's servers (an idea that occasionally gets brought up on our forums!). But it makes sense that some people want a client that doesn't modify their server configurations. The official client tries to modify server configurations because we believe that many people don't have the expertise or inclination to do it on their own. That goal does make the official client more complex, and there are still plenty of integration bugs to find and fix. If people want a simpler and more hands-off client without the integration features, they should definitely use one, and it's a valuable service that this option is available.
> because we believe that many people don't have the expertise or inclination to do it on their own.
So they've set up a server or VPS, installed a web server and possibly a CMS, configured virtual hosts and appropriate users, hooked up an RDBMS, but... SSL is too hard?
Don't kid yourself, most hobbyists are keen on Let's Encrypt because it's free.
People who really don't have knowledge or inclination for SSL configuration will be waiting for an option on their hosting provider's control panel. They don't have root shell access.
> It's frustrating to see the occasional conspiracy theory suggesting that the Let's Encrypt project somehow wants to backdoor the client in order to compromise people's servers
That's not the only reason someone might not want to use the LE client. Here are a few others:
1. The LE client might have a bug that causes problems on my server.
2. I may have done something non-standard to my config files that the LE client undoes.
3. Someone may have compromised the LE client or one of its dependencies without your knowledge.
I agree with those concerns, and I definitely don't mean to suggest that people who chose not to use the client all believe in conspiracy theories about the developers' intentions.
Wow, awesome to hear you chime in, and thanks for your great work on getting Let's Encrypt up and running!
Please don't take my post as a direct criticism of the official client; I think it's just a question of use cases, and my case doesn't have much overlap with the "I want a client that takes care of everything" use case.
If you can't trust a CA to prevent their official client from becoming malware, then I don't see how you can trust them to maintain their position as a CA. There is no realistic scenario where the official client is discovered to be backdoored and people go on using Let's Encrypt certificates.
Malware isn't the problem - their automagic client screwing up my webserver is the problem. This is a justified concern given that the official client already demonstrates bad behavior by causing side effects on --help (see my top level post).
It is not a concern at all for me, because I can run that command from a user that does not have privileges to mess anything up. Those options make it not even attempt to read or write any web server configuration. All it does is create the certificate.
I trust them to maintain a secure CA (and I trust that there are adequate checks and audit mechanisms in place).
I do not necessarily trust them to not muck up my webserver configs, or to accidentally expose my private key due to some bug. While I have no reason to doubt the code quality of the official client (and I'm certainly not suggesting I suspect any malicious activity), I suspect the checks in place for the client are probably less rigorous than those for the CA itself.
acme-tiny is less than 200 lines. Short enough that reviewing the code took a trivial amount of time, and now I'm using a client that I can trust completely.
I could certainly do the same thing with the official client, but it would take me several hours (at least) to get to the same level of comfort.
That's an argument I can never win. There is always another layer down the stack that I haven't/can't audit. That doesn't mean it's not better to audit what I can.
Everyone's recommending their favourite alternative. Mine is simp_le[1], a very simple ACME python client. It only supports the http challenge. You can run it without root (I have a dedicated user generally), without stopping your webserver. You only need to allow write access to domain/.well-known/acme-challenge.
Write a simple wrapper script and run it from cron. You get precisely the certificates needed for Apache or nginx. All that's left is to symlink them to the right spot.
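A sketch of such a wrapper (simp_le's flags and exit-code convention here are from memory -- double-check against simp_le --help before relying on them; the user, domain, and paths are placeholders):

```shell
#!/bin/sh
# Run from cron as a dedicated non-root user that owns this directory
# and has write access to the webroot's .well-known/acme-challenge/.
cd /home/acme/certs || exit 1

simp_le -f account_key.json -f fullchain.pem -f key.pem \
        -d example.com --default_root /var/www/example

case $? in
    0) sudo service nginx reload ;;  # cert was issued or renewed
    1) : ;;                          # cert still valid; nothing to do
    *) echo "simp_le failed" >&2; exit 1 ;;
esac
```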
Let's Encrypt follows all redirects and ignores expired certificates, so if you have multiple domains redirecting to one, you can get a certificate for all of them in one go, even if your certificate has expired. You can create a SAN certificate or multiple certificates in whatever combination you like.
I recently switched to letsencrypt for some of my side projects. It works, but there are pitfalls.
The official client didn't work for me at all (it crashed with an error message every time), and there is little documentation to help you out. However, the Ruby client works, so I'm using that. For some reason, though, the Ruby client was stuck in development mode, leading to invalid certificates being generated without warning; I had to manually edit the gem file to circumvent that.
Once it's running though, it's very cool. I'm going to start using it for client projects in the future.
Last Monday, I downloaded the client and started reading docs. It boggles my mind that it needs to install so much just to run --help (see pdkl95's post). After an hour of that, running the client took a minute or two. I spent another hour or two figuring out how to set up the certs and key properly with a Java keystore. Hint:
>> The issue with https is that it’s really bloody hard.
How we handle it: we terminate all incoming connections at haproxy and negotiate SSL there. The haproxy instance is packaged into a Docker container that pulls the right certs from a private repo at build time. By building this container with different flags we can enable/disable SSL termination. The cert is a wildcard, so we can drop this container anywhere in our stack, point DNS at it, and we get solid SSL termination. The certs themselves are obtained from GoDaddy, and all the prep required is to copy them to the correct repository. The haproxy container cats them into the correct combined format at build time. This took substantially less than a morning to set up.
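The "combined format" step is essentially a concatenation; a self-contained sketch (the file contents here are stand-ins -- the real cert and key come from the private repo at build time):

```shell
# Stand-ins for the real cert chain and private key.
echo "(certificate chain)" > example.com.crt
echo "(private key)"       > example.com.key

# haproxy expects the chain and key concatenated into a single PEM:
cat example.com.crt example.com.key > example.com.pem

# haproxy.cfg then binds with the combined file, e.g.:
#   frontend https-in
#       bind *:443 ssl crt /etc/haproxy/certs/example.com.pem
```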
I had never installed an SSL cert before and was super excited to try Let's Encrypt. I had nginx and it was a bit difficult for a first-timer to set up. This[0] post really helped me get up and running and was at a level that was understandable. To people writing guides: be cognizant that the people reading them don't have your level of understanding. Nginx has a very extensible set of .conf files, so you have to be more specific than:
In the config file, put the path to a symlinked Diffie-Hellman key.
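Concretely, that instruction amounts to something like this (paths are placeholders, and the generation step can take a while):

```shell
# Generate the Diffie-Hellman parameters once:
openssl dhparam -out /etc/nginx/dhparam.pem 2048

# Then, inside the server { } block of the site's .conf file:
#   ssl_dhparam /etc/nginx/dhparam.pem;
```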
The above guide looks decent and I'll bookmark it for later.
One minor beef with the Let's Encrypt process as showcased here: it lists redirecting HTTP traffic to HTTPS as a 'secure' option.
While it's better than leaving the traffic as HTTP, it's obviously not ideal from a security perspective, because it allows people to bookmark the HTTP version (or not change their bookmarks that were set up for the HTTP version), and risk being MITMed on every visit.
It would arguably be better to display a notice to say something along the lines of 'this site is now accessible only over HTTPS, so please update your bookmarks'.
A feature that we were considering is enabling HSTS (in the Secure mode) and gradually increasing the max-age over time, so that if something went wrong at the outset the administrator would be able to reverse the process without affecting clients for very long (perhaps starting with a one hour max-age, then doubling the max-age on every subsequent day?).
Correct, HSTS is a header that is passed by nginx/Apache/whatever combined with a hard coded list shipped with your browser. You need SSL for HSTS to work, but the SSL vendor you choose doesn't have any bearing on being able to implement HSTS.
A 301 redirect would solve this, but having an automatic script insert a 301 redirect or HSTS headers into the server configuration is just too much.
That's why I'm planning to move my sites to Let's Encrypt's certs, but I have no intention at all of using their client. I can place those settings there myself, but I'd be pissed off if it placed them for me.
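For reference, the settings in question are only a few standard nginx directives (the server_name and max-age values are placeholders, and HSTS max-age is usually ramped up gradually, as discussed above):

```nginx
# Redirect every plain-HTTP request permanently to HTTPS:
server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}

# In the HTTPS server block, send the HSTS header
# (short max-age first; raise it once everything works):
add_header Strict-Transport-Security "max-age=3600" always;
```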
Quite off topic, but I'm building a service that has many clients, and I want to automate TLS setup when registering these clients with the main service. It's not necessary to use Let's Encrypt for this, as it's between services I control.
Anyone know the easiest way to (semi) automate this? I'd be happy to generate someone a key that they have to copy onto the client when it's installed...
I haven't found any that I was happy with, last time I needed something similar (years ago now). But I've had some similar worries, and one of the projects on my short-list to look into more is:
Load balancers - sure. It just provides a regular, signed x509 certificate that you can use for anything that supports TLS. The auto-configuration client might not support a specific load balancer, but it's not too hard to request one manually.
Many subdomains - well, they don't support wildcard certs, so that makes it harder. You can generate one cert with many subdomains, but that still requires extra effort whenever you add one. And it pretty much rules out setups requiring wildcard subdomains, for instance sites like DeviantArt or Tumblr where each user gets their own subdomain.
https://www.gnu.org/prep/standards/html_node/Command_002dLin...
https://www.gnu.org/prep/standards/html_node/_002d_002dhelp....
edit:
Ok, this is a little disappointing:
https://github.com/letsencrypt/letsencrypt/issues/1286
That's fine. Then --help pre-install should simply produce a message explaining this. Fortunately, it seems I'm not the only person who thinks running an install when someone asked for help is not good behavior:
https://github.com/letsencrypt/letsencrypt/issues/1903