See https://shellcheck.net to fix your script and follow good guidelines. Top of my head:
* /bin/bash -> /usr/bin/env bash
* You probably don't need bash anyway, so switch to /bin/sh
* errors go to stderr (>&2), not stdout
* exits because of errors should return non-zero codes. (e.g.: `exit 1`)
* Full caps variables are bad practice (they might conflict with real, global environment variables)
* rather than running everything as root (using sudo), I'd call sudo for the only few commands that actually require root privileges (I found none, so I suppose "security" is the only command that needs root perms).
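Putting those points together, a minimal sketch of a script header (the directory path and the `die` helper are illustrative, not taken from the original script):

```shell
#!/usr/bin/env sh
# Fail on errors and on unset variables.
set -eu

# Lower-case names avoid clashing with real environment variables.
cert_dir="${TMPDIR:-/tmp}/demo-certs"

# Errors go to stderr and exit non-zero.
die() {
    echo "error: $*" >&2
    exit 1
}

mkdir -p "$cert_dir" || die "cannot create $cert_dir"
echo "using $cert_dir"
```

Only the individual commands that genuinely need root would then be prefixed with sudo, rather than running the whole script as root.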
A lot of people use full caps variables in shell scripts, I wouldn't worry about that. However, I'd like to add:
* Indentation is mad. "exit" seems to cause preceding lines to be indented. This is code, treat it as such.
* Only split a command in two where it increases readability. Type `rm *.txt *.doc` if that's what you mean, not `rm *.txt ; rm *.doc`. The asterisks already expand to multiple filenames.
* When rm-ing files in shell scripts, using "-f" is likely a good idea. Interactive aliases or unexpected permissions might trip you up otherwise.
* But removing everything in your-certs is probably a surprise for the user. You would expect the script to generate a new certificate, not erase old ones!
* Don't do an if-construct every time you mkdir something. Just do "mkdir -p" instead, that makes sure directories exist and creates them if necessary.
* The config file needs to be specified with a full path, or at least checked for existence. If you place it in the same directory as the script, use `dirname "$0"` to figure out the path.
* That config file is so small you might as well store it in the script and cat << EOF it directly through sed to disk. Or even use variable substitution directly.
* If you need temporary files for some reason, it's good practice to use mktemp to allocate them which gives you uniqueness and a suitable tmp folder for free.
* That awk-construct is perhaps not obvious to everyone. Just do "for fprint in $(... | grep)" instead. Or "security .. | while read" if there are more than a handful.
* Not setting umask could potentially render key material readable to other users. Don't do this.
* Don't generate a new CA every run. Keep it around in a directory (with proper permissions) and only generate a new one if it's not already present in the trust store.
Sometimes you may trip on applications not willing to accept a new certificate with the same serial number as an old one. If this is something you need to take into account, just store used serial numbers in the same directory as your CA keys. It should also be noted that openssl ships with a script that does all this, except installing the trusted certificate in the trust store.
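Several of the list items above (umask, `dirname "$0"`, `mkdir -p`, mktemp, inlining the config) can be sketched together; the paths, the domain, and the config contents are made-up examples:

```shell
#!/usr/bin/env sh
set -eu

# Keep generated key material private to the current user.
umask 077

# Resolve paths relative to the script, not the caller's cwd.
script_dir=$(dirname "$0")
echo "script lives in: $script_dir"

# mkdir -p replaces the whole if-not-exists dance and is idempotent.
ca_dir="${TMPDIR:-/tmp}/demo-ca"
mkdir -p "$ca_dir"

# mktemp gives a unique file in a suitable tmp directory for free.
tmp_cnf=$(mktemp)
trap 'rm -f "$tmp_cnf"' EXIT

# The config is small enough to inline; the shell substitutes $domain.
domain="localhost"
cat > "$tmp_cnf" <<EOF
[req]
distinguished_name = dn
[dn]
CN = $domain
EOF

grep "CN" "$tmp_cnf"
```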
Thank you for sharing this; I infrequently have to write standalone shell scripts but every time I do I spend a while looking around for best practices and often just end up with some contradicting opinions.
Just for the record, the Google guide focuses on Google infrastructure and systems, not portability. Using /bin/bash for instance is not a good idea if you want to port your work to BSD systems.
1) supporting both macOS and Linux in bash is trivial. Just stick to Bash 3 functionality.
2) Bash 4 is trivial to install on macOS via Homebrew, so saying "available" is a misnomer. "Default" would be a bit better, but many Linux distros still in use don't ship Bash 4.
Although these will "work" today in popular browsers and with most tools, this is NOT the right way to scribble a DNS name into a certificate this century.
Write SANs: Subject Alternative Names. These aren't aliases; "alternative" is meant in the sense that this is an alternative to writing human-readable X.500-series Common Names. Unlike those human names, SANs are defined in a machine-readable way, e.g. the dNSName SAN spells out exactly DNS A-labels, and the iPAddress SAN is just an IPv4 or IPv6 address written out as raw bytes, not dotted decimal or whatever else someone thought might be fun today.
You should also write one of the SANs you choose as the Common Name in some plausible text format, but by having SANs all vaguely modern tools can just match those rather than trying to make sense of the Common Name.
In a very new OpenSSL you can actually do this from the command line sort-of sensibly. In most installs you will need to modify that configuration file instead, you're already using a configuration file so that's no big deal.
Chrome finally stopped looking at the Common Name field, and I'm hoping to fade out support in the next few versions of Go. You can already test your systems in 1.11 with GODEBUG=x509ignoreCN=1. Use SANs.
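The config-file route is only a few lines; a sketch (the names and addresses are examples, and the commented `-addext` form assumes OpenSSL 1.1.1 or newer):

```shell
#!/usr/bin/env sh
set -eu

# Minimal request config declaring SANs; most OpenSSL installs read this
# via -config rather than taking SANs on the command line.
cat > san.cnf <<'EOF'
[req]
distinguished_name = dn
req_extensions = v3_req
[dn]
CN = localhost
[v3_req]
subjectAltName = DNS:localhost, IP:127.0.0.1
EOF

# With a new enough OpenSSL (1.1.1+) the same thing works inline:
#   openssl req -new -x509 -nodes -keyout key.pem -out cert.pem \
#     -subj "/CN=localhost" -addext "subjectAltName=DNS:localhost,IP:127.0.0.1"
echo "wrote san.cnf"
```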
Yours is a much more robust solution that I wish I had found before writing my own. But at least I got a better understanding with how all of this stuff works.
I've added a link to your project in my README.md file.
Thanks for creating such an easy-to-use tool for solving an annoying "problem" with local certs. I never really tried to solve this myself, so I just "put up" with the browser warning. I managed to add a key following the README on your repo with no problems whatsoever on macOS. :)
Fascinating that Golang reimplemented all the crypto/ssl/tls in Golang itself rather than linking to a C library. I'd probably trust the Go code (a safer language) over OpenSSL so that's a pretty interesting use case.
Why? Surely OpenSSL has a bigger team auditing the codebase, especially recently since they've gotten so much attention for previous security failures.
Also, Go's TLS library is missing some important features, like decrypting most varieties of private keys.
I use Go's TLS library, but I don't think it's necessarily "better" than OpenSSL, though it's certainly more convenient.
I'm making no definitive statements just more interested that they did it. I would personally trust a new Go codebase over an old C codebase but I could definitely be wrong in this case.
Your solution generates a certificate and leaves it up to the user to set up HTTPS.
There are other steps involved, like adding the cert to the trust store (so you don't get invalid SSL warnings). And also changing your application code to use these certificates.
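For the trust-store step, the invocation differs per OS; a sketch of the usual commands (the cert path and the Debian-style store location are assumptions, and the commands are only printed here, not executed):

```shell
#!/usr/bin/env sh
cert="cert.pem"  # illustrative path

# macOS: add to the system keychain as a trusted root (needs sudo).
mac_install="sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain $cert"

# Debian/Ubuntu: drop the cert into the shared store and rebuild it.
linux_install="sudo cp $cert /usr/local/share/ca-certificates/dev-ca.crt && sudo update-ca-certificates"

printf '%s\n%s\n' "$mac_install" "$linux_install"
```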
Even if you do that, you are still exposed to a serious security threat: if a bad actor gets hold of your certificate file, they can pose as a legitimate website and steal sensitive data. This security flaw is present with all other script solutions mentioned in this thread.
To overcome these issues, I have built a mac application called HTTPSLocalhost (https://httpslocalhost.com).
- It offers a user interface to add/remove local HTTPS domains
- Has an inbuilt proxy so you don't need to change your application code
- Is much safer because it deletes the certificate and private keys as soon as the proxy server starts
- It creates a new certificate each time you start the app, to enhance security.
- And of course, like all good things, is free (there is a video demo on the website, the app will be ready soon).
Wanted to do a proper Show HN next week, but I guess it's the right time to bring it up :)
>Even if you do that, you are still exposed to a serious security threat: if a bad actor gets hold of your certificate file, they can pose as a legitimate website and steal sensitive data. This security flaw is present with all other script solutions mentioned in this thread.
Sorry, but, what? Who is using self-signed certs for public production websites?
But if an attacker has a private key that is trusted by your local trust store, they can pose as a legitimate website (man in the middle) and decrypt your traffic.
The title tells me that this thread is about local https. Nothing to do with prod.
I thought there were varying amounts of trust with certificate stores?
Local dev certs should go into a personal store, or something that is less trusted than something like VeriSign. You shouldn't be able to mint a legit-looking Google certificate with a private key that's only trusted via a local self-signed certificate.
I built something similar (though probably a lot less sophisticated) as an alpine based docker image. I had some issues with openssl on a Mac in the past, and this approach circumvents those.
I would use it if I could do so with PHP's internal webserver.
I often hack together quick experiments using PHP's internal webserver. It only serves via http though, not https. Is there a way to make it serve over https?
I'm using a wildcard certificate too for several reasons. The main reason being: if you want to test your code on a real device (iOS and Android simulators included), a simple self-signed certificate won't do the job. But with a Let's Encrypt wildcard certificate and a dnsmasq wildcard domain mapping on the local router DNS server, it all works like a charm.
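The dnsmasq side of that setup is a single wildcard mapping; a sketch (the domain and IP are made up):

```shell
#!/usr/bin/env sh
set -eu

# address=/domain/ip answers the domain and every subdomain with that IP,
# which is what makes the wildcard certificate usable on local devices.
cat > dnsmasq-dev.conf <<'EOF'
address=/dev.example.test/192.168.1.10
EOF

echo "drop dnsmasq-dev.conf into /etc/dnsmasq.d/ and restart dnsmasq"
```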