Spoiler: the default SSH RSA key format uses straight MD5 to derive the AES key used to encrypt your RSA private key, which means it's lightning fast to crack (it's "salted", if you want to use that term, with a random IV).
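To make the "lightning fast" part concrete: the traditional PEM encryption this refers to derives the AES key via OpenSSL's EVP_BytesToKey with MD5 and a single iteration, salted with the first 8 bytes of the IV. A minimal Python sketch of that construction (the passphrase and IV bytes are purely illustrative):

```python
import hashlib

def openssl_kdf_md5(passphrase: bytes, iv_salt: bytes, key_len: int = 16) -> bytes:
    """Sketch of EVP_BytesToKey with MD5, count=1 -- roughly the KDF behind
    traditional PEM-encrypted keys ('DEK-Info: AES-128-CBC,<iv>' headers).
    The salt is the first 8 bytes of the IV."""
    derived = b""
    block = b""
    while len(derived) < key_len:
        # Each round is a single MD5 call -- no iteration count, no memory cost
        block = hashlib.md5(block + passphrase + iv_salt).digest()
        derived += block
    return derived[:key_len]

# One MD5 invocation per guess means GPUs can test billions of
# candidate passphrases per second against a stolen key file.
key = openssl_kdf_md5(b"hunter2", bytes.fromhex("0011223344556677"))
assert len(key) == 16
```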
The argument LVH makes here ("worse than plaintext") is that because you have to type that password regularly, it's apt to be one of those important passwords you keep in your brain's resident set and derive variants of for different applications. And SSH is basically doing something close to storing it in plaintext. His argument is that the password is probably more important than what it protects. Maybe that's not the case for you.
I just think it's batshit that OpenSSH's default is so bad. At the very least: you might as well just not use passwords if you're going to accept that default. If you use curve keys, you get a better (bcrypt) format.
While I have you here...
Before you contemplate any elaborate new plan to improve the protection of your SSH keys, consider that long-lived SSH credentials are an anti-pattern. If you set up an SSH CA, you can issue time-limited short-term credentials that won't sit on your filesystems and backups for all time waiting to leak access to your servers.
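For anyone who hasn't seen it: OpenSSH's built-in CA support is just ssh-keygen. A rough sketch of the workflow (filenames, the identity/principal names, and the 8-hour validity window are arbitrary choices):

```shell
# Create the CA keypair (ideally kept offline or in an HSM)
ssh-keygen -t ed25519 -f ssh_ca -C "user CA"

# Sign a user's public key: identity "alice@example", principal "alice",
# valid for only 8 hours from now
ssh-keygen -s ssh_ca -I alice@example -n alice -V +8h id_ed25519.pub

# Servers then trust the CA rather than individual keys (in sshd_config):
#   TrustedUserCAKeys /etc/ssh/ssh_ca.pub
```

The signed certificate lands next to the public key as `id_ed25519-cert.pub`; after it expires, whatever is left on disks and backups no longer grants access.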
> Spoiler: the default SSH RSA key format uses straight MD5 to derive the AES key used to encrypt your RSA private key, which means it's lightning fast to crack (it's "salted", if you want to use that term, with a random IV).
And a mere four years later they added a format that uses bcrypt... in a PBKDF2 construction? (I haven't found a stated motivation for why, though Ted Unangst's post on it just says "I know about scrypt".)
It has a time space difficulty curve that's more complex, but some people like that. I will stipulate scrypt is "better" if I don't have to argue about it. :)
As an aside about scrypt since I saw it mentioned here:
How does scrypt fare against the recent TLBleed etc.? IIRC Intel's claim was that TLBleed only affected poorly implemented crypto. But isn't scrypt's memory access pattern inherently vulnerable to TLBleed, and hard to make constant-time?
OT: This is why I still come to HN. At some point the top comment chain is by tptacek (Matasano), cperciva (Tarsnap founder), lvh (Latacora), tedunangst (OpenBSD dev), willvarfar (Mill CPU). And that's a great thread!
My first guess would be that it has "too many parameters/knobs". I guess that could implicitly mean it's hard to use/easy to mess up if you don't know what each parameter means and what different values have.
I guess the same. Not sure why the downvote. The latest crypto functions expect the developer to pick parameters for the memory usage, the time to run and god knows what.
Too low and it's worse than MD5, too high and your login prompt takes a whole minute to check the password.
> consider that long-lived SSH credentials are an anti-pattern.
Exactly. Consider switching to auto-expiring SSH certificates. You can build your own certificate management using a few open tools or switch to Teleport [1] which is 100% certificate based and doesn't even support keys. Disclaimer: I am one of the contributors.
Oh! Teleport does look pretty decent now that I've forced myself to look at it after going through this thread.
I mention this mostly for the sort of people like me who read 'SSH CA' and their eyes roll into the backs of their heads and they start rocking and making saliva bubbles at thought of PAM modules and LDAP servers and so on. But this doesn't look nearly as bad. Go ssh implementation sounds like a nice ancillary bonus.
I just went to the top 20 search results for "how do I generate ssh keys". Almost none of them use the defaults. Almost all of them suggest to use "-t rsa", and some with "-b 4096". The ones using defaults are from Joyent, SSH.Com, and git-scm.com. Since probably nobody is creating SSH keys without using a guide first, we should be able to get websites to update their guides with better default arguments, which will improve things going forward.
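If those guides do get updated, the suggested commands are short. A sketch of what "better defaults" might look like (the `-a 100` round count is a common community suggestion, not an official recommendation):

```shell
# Preferred: Ed25519, which is always stored in the newer bcrypt-KDF format
ssh-keygen -t ed25519 -a 100 -f ~/.ssh/id_ed25519

# If a server only speaks RSA: force the new format with -o, since older
# ssh-keygen versions otherwise default to the weak MD5-based PEM format
ssh-keygen -t rsa -b 4096 -o -a 100 -f ~/.ssh/id_rsa
```

Recent OpenSSH releases use the new format regardless, so `-o` is harmless there and only matters for older installs.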
Almost all the Windows guides suggest using PuTTYgen with defaults, which gives you a 1024 bit RSA key, and that might be worse in the long run than these password shenanigans.
I created a gist[1] with the letter I'm sending with the suggested changes. It has all the pages I'm sending letters about and how I contacted them. If anyone else would like to use this script to also send letters, please do!
A lot of places make it difficult to contact them about their docs. I had to create accounts and file support tickets for some, and others only had a generic feedback form. So far Oracle and "w3docs" have been the most difficult; the latter only has a Facebook and Twitter for contact.
This whole process is really annoying. All these sites are giving the same advice. Why isn't there one Creative Commons wiki just for technical writing that people could link to?
Thanks for doing that. Half the time when I spot a mistake or flaw in some docs, I just leave it be if the way to contact them seems too cumbersome. I should try harder.
Look, 2048 is fine. Assuming no algorithmic speedups, you get 112 bits of security, and that’s plenty. But SSH keys are only used to sign: they don’t affect bulk encryption. 4096’s performance is not what’s holding you back.
"Slows down" can also mean connection times. On a computationally weak device doing 4096-bit RSA is far from instant. This is, after all, one of the reasons people are enthusiastic about Elliptic Curve options in this space.
Some people need SSH to move a lot of data, e.g. for SFTP but some people just want their connection to a nearby machine to feel "snappy" and not take a beat to do the key exchange and authentication steps.
I'm not thinking about myself, the desktop I do Social Media from can do almost three hundred, and indeed RSA 4096 seems fine on that PC, but lots of people have crappy under-powered devices like Raspberry Pis. How many can those do? Four? Ten?
We're in the weeds here, we're agreed that if your weakest point is a 2048-bit RSA key you're in unexpectedly good shape, definitely anyone who feels 4096 even "might be" too slow should just use RSA 2048 (or get an elliptic curve algorithm that's nice and fast on their CPU). I was just pointing out that "too slow" doesn't necessarily mean "Not as much peak throughput as I would like". Station wagons full of tapes remain sub-optimal for video conferencing :D
Do you have a guide for home users you could point to that provides clear guidelines for managing your SSH keys?
I have a few home servers, but if one of my devices were compromised I don't think it would take much longer for the whole network to fall.
I'd love an end-to-end example that shows how you're storing everything, in both meatspace and your devices. Do you use hardware authentication devices? How do you handle backups?
Why would you do this? Isn't it almost just as simple to generate a second software key and paper-key that? The difference between propagating one key to servers or two is a single newline character.
I think the explanation for most bad defaults in ssh is "redhat", which is kinda insane, but I very much stay out of that sandbox. The default can't be changed until existing systems can read the new keys. So, like, 2028 or something.
For the key storage format, not the key type, wouldn't that only be a problem where you copied a key to a redhat system? Requiring conversion there doesn't sound too bad?
You're basically asking a "you broke my workflow" kind of question. https://xkcd.com/1172/
In high school, I definitely kept password-protected private keys on a USB key that got plugged into whatever machine happened to be available. (Now I am affluent enough to carry around a real Security Key and a trusted laptop.)
If you were the maintainer, would you really want to change the defaults and deal with the backlash of complaints from users who do copy keys around?
Even the OpenSSH people will tell you not to copy private keys around. Non-ephemeral private keys should be generated where they will be used. You can copy the public key...
Yep, there is no point in generating a new key if you can't get the other end to trust it. You could generate a new trusted key every time you log on (and encrypt it with a password), invalidate the old key, and make sure to copy the new one to your USB stick. That would roll the keys, and you'd get locked out with that key if anyone else had used it, which would tip you off. But it's a bit involved.
It doesn't take that long to arrive at the conclusion that you should not expose any secret to untrusted hardware.
So creating a one-time-use key for that computer is probably a good idea; you can revoke it once you are done using it, and then it won't cause you any problems in the future.
> consider that long-lived SSH credentials are an anti-pattern.
I don't think that's necessarily true, provided that your keys are:
- Properly encrypted
- Protected by a decent password
- You use ssh agent to avoid 1) copying the key everywhere, and 2) typing your password all the time.
Of course it depends how critical security is. Access to a few dev servers inside the company firewall is not the same as managing your client-facing production infrastructure.
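For the ssh-agent point, a minimal sketch of the workflow (the key path is illustrative):

```shell
# Start an agent for this shell session and load the key once;
# the passphrase prompt happens a single time, not per connection
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519

# Optionally have the agent forget the key after an hour,
# so an unattended machine isn't a standing credential
ssh-add -t 3600 ~/.ssh/id_ed25519
```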
The user experience of doing this is very bad. Every time I look into doing this I end up with blog posts that describe punching numbers into the GPG CLI, master keys, subkeys, PIN's. I don't want to be a GPG enthusiast, I just want to use my SSH key safely. (No offense to GPG enthusiasts!)
# Generate a key on the card
$ gpg2 --card-edit
> admin
> passwd
  (change both the user and admin PIN to a secure password; it's called a PIN, but you can use a regular alphanumeric password, and the two can be the same)
> key-attr
  (choose RSA, 4096, or whatever you consider sufficient)
> generate

# Add this to your .bash_profile (use gpg-agent instead of ssh-agent)
export SSH_AUTH_SOCK=$(gpgconf --list-dirs agent-ssh-socket)

# Export your SSH public key
$ ssh-add -L
I dislike GPG as well, but I have the OpenPGP smartcard. I've just loaded it with x509 certs. There's OpenSC so I can use it in a browser/VPN/SSH/Apple Mail etc.
That's a side effect of GPG being the back-end technology that Yubikey-based SSH keys are built on.
If you don't want to have to learn gpg (because why should you?) the master/sub keys, PINs, keyservers, and all that can be dumped, just like ssh-keygen is able to create keys without passphrases - not exactly recommended, but still better than the alternative.
FWIW: if you really, really don't want to learn GPG: Yubikeys will also speak PKCS11, it's a separate applet, and they ship PKCS11 libs for every major platform. We've used it for OpenVPN in the past (before we had wireguard).
This is what I meant by "elaborate new plans". I like Y4's as much as the next nerd, but if you're going to put that kind of energy in, spend the energy on setting up an SSH CA.
I recently rolled out YubiKey 4s to my whole organization and it was a painless experience. There's not a single file-based key left. Provision the keys, replace the old file-based keys via our configuration management tooling, done.
Wouldn't an SSH CA just introduce a whole different kind of complexity?
- How to protect the SSH CA and its key and make it highly available? I don't want to be locked out of my bastion host after something the CA depended on broke and my certificate has just expired.
- How to authenticate users against the CA? Most solutions I've seen use a longer-lived client-side secret, which is just as susceptible to theft as a regular SSH key, or some sort of OAuth or SAML SSO. A malicious Chrome extension can now compromise the SSO process, you still need a U2F token (like a YubiKey 4) to properly secure the SSO account, etc.
- How to make it work with our SCM and random things like a storage appliance and various JunOS devices, which support regular SSH keys, but don't know about SSH certificates?
I would assume that Latacora is using a SSH CA, and I'm legitimately curious how you approached these challenges.
You raise a bunch of valid points. Just to answer the one about authing against the CA (I'm in the LV airport for BlackHat and my laptop battery is about to die): yes, still get U2F tokens. You're right that malicious Chrome extensions will mess you up, but that's true for everything else you run too: you need to enforce Chrome extensions via MDM regardless of what your SSH key story looks like. I consider having to SSO in a good thing: it means onboarding/offboarding/audit logging is easier.
The context for SSH CA/Teleport is SSHing into a box. When you do actually need an SSH key, Yubikeys are the best answer. (I like using gpg-agent's ssh-agent emulation mode because I find it works better on Macs, but that's irrelevant to the security analysis.)
I agree that for a large organization - which has the necessary pieces in place - it makes a lot of sense to use SSO for SSH access. The SSO is mission-critical anyway, there might not even be direct SSH access except via an authenticated proxy, there's centralized audit logging and intrusion detection, ...
However, I would argue that unless this is the case, operating an SSH CA is riskier (from both a security and an availability point of view).
We like Y4's just fine. For the very limited set of machines we maintain that we ever need to SSH into, we use Y4 SSH keys. But we don't promote them to clients; we're working on rolling out short-lived SSH credentials with them.
I think most (if not all) SSO solutions can do push-based MFA these days. It's not perfect by any means and I'm sure plenty of people will blindly approve any requests to the app but it's a lot more secure than without it and it's pretty convenient.
One way we reduce the risk of CA key compromise is to use intermediaries for most of our signing. Our implementation has Vault using intermediate certs to sign users' SSH keys; the certs are short-lived and we use them for signing in to ephemeral app hosts.
One day we had a bit of clock skew because of a bad ntpd and, lo, we couldn't log in because of our short-lived certs :-)
If I use an ssh key to connect to computers owned by multiple organizations and I can't control how those servers are configured can I still use an SSH CA? For instance I use my key to connect to my servers but also computers at work, super short lived sessions when I'm debugging some embedded device through SSH (those tend to be wiped/replaced a lot so I keep having to ssh-copy-id my key to them), github and other "cloud" hosts etc... Does SSH CA make sense in such a configuration?
I'm currently using a yubikey to hold my key (in the GnuPG smartcard applet) so I felt pretty confident security-wise but now you're making me doubt.
> If I use an ssh key to connect to computers owned by multiple organizations and I can't control how those servers are configured can I still use an SSH CA?
Not really. In general you need the same level of access for setting up the sshd host key, for setting up/enabling sshca: the server must trust the ca pub cert, and there's some configuration needed wrt principals (although AFAIK you can/should embed most of that in the certificate).
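Concretely, the server-side trust is just a couple of sshd_config directives (the paths here are illustrative):

```
# /etc/ssh/sshd_config
# Accept any user certificate signed by this CA
TrustedUserCAKeys /etc/ssh/user_ca.pub

# Optional: restrict which cert principals may log in as which local user
AuthorizedPrincipalsFile /etc/ssh/principals/%u
```

Without that level of access to the server, you're stuck with plain authorized_keys.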
If only there were a certificate authority management tool that was convenient to use from the command line and through an API, so it could be made into a company-wide service.

There is this old tinyCA that comes with OpenVPN, but it's awful and can't do much (I don't even remember if it could revoke a certificate). There are a few instances of WWW-only CAs, and there are desktop/GUI applications. But command line? /usr/bin/openssl only, and it's unwieldy. The situation for CA libraries is even worse.
People like to fetishize OpenSSH's CA (for both client keys and server keys), but there's still a lot to do before it becomes usable. (Though the same stands for the traditional trust-on-first-use method, honestly.) You're basically proposing to deploy software that maybe will be usable in a few years, with a big "maybe", because so far it hasn't materialized.
How does it "easily beat" a CA? It's still a long-held credential that you now have to manage across a fleet instead of a single SSH CA. You can get that short-term credential via a long-held key; that long-held key can even live on a Yubikey if you use U2F/WebAuthn. You get all of the security and usability benefits of U2F/WebAuthn as well as the off-boarding/on-boarding/compliance benefits of tying everything to SSO as well as the audit benefits of an SSH CA.
You can work around not having a CA, by distributing keys. I'm exaggerating when I say "just write a script", but it's not hard. You cannot work around not having hardware keys.
SSH CAs improve efficiency and convenience.
A hardware key that requires a touch per login is a game-changer. When you go to lunch you know that your key did nothing, no matter how compromised your workstation is. When your machine is turned off you know that there's no copy of the key somewhere. That key cannot be used.
A software cert-based key may be valid for only hours (if you set it up that way), but that means that there are 7 billion possible attackers who could use your key. They could break into your workstation and wait for the screensaver to kick in, and then log in to every single host you have access to, and do their naughty business.
For a hardware key someone has to take a plane from China and break into your house to use your key.
> It's still a long-held credential
Doesn't have to be. But if it is, so what? Given physical locks that are unpickable and keys uncopyable, would you rather instead change locks every day, where the keys are copyable? (even if cost of changing locks scales O(1) with price)
> that long-held key can even live on a Yubikey if you use U2F/WebAuthn
Like I said, one does not exclude the other. You can't prove that A is better than B by saying A+B is better than B.
There are also devices that don't support SSH certificates (e.g. embedded devices), whereas supporting pubkeys is vastly more common.
Technically, supporting public keys is mandatory. Of course, not only can real-world implementations ignore a MUST in the RFC; they can also, more conveniently, just reject all proposed public keys, leaving public-key auth as a stub.
One of the servers I've had the misfortune of using responds to even proposed public key auth by failing all subsequent authentication on that connection. So you need to immediately do password auth if you want to get in. Brilliant.
I presume the WG specifically wanted to see SSH with public keys deployed widely rather than a world where most places upgrade from telnet to SSH with passwords and think that's the job done.
I agree. Hardware token makes a huge difference here because it ruins attack momentum.
The rest of the attack is very technical, very network applicable - copies of key files, guessing passwords - your adversary may be the far side of the world, and they may have done all this in seconds.
But suddenly a hardware token means ground assets. Different skill set. Some adversaries may be able to buy all the Cloud Compute and Network Bandwidth they can ask for (especially if it's all with somebody else's credit cards...), but putting even one black bag job together in a foreign country is beyond them. And even for adversaries that are able to do this you can't just spin up ground assets instantly.
Yes, in "Rainbows End" Rabbit actually does (if you pay attention) build a ground team to execute the lab infiltration plan despite apparently not having any corporeal existence. But that's science fiction. Here and now that's not how it works.
Yikes that statement has a lot of moral overtones. Is it a good idea to use a Yubi? Arguably yes. Does one need to find "excuses" for not doing so? The vast majority of the time, no.
I think the intent was more that security keys greatly reduce the friction associated with more secure practices, reducing the reasons to use less secure ones.
No moral overtones intended, the point being that the low cost and the form factor significantly lowers the barrier to entry.
I've been using a GPG smart card for a long time, and it required a separate card reader, and both card and reader were easy to break. A YubiKey 4 fits on a keychain, is hard to break (though some of my colleagues succeeded) and you just plug it in.
I like my smartcards. Bummed they didn't catch on. How do you feel about losing the PIN and using a password? I like how, if I'm at my desk using a hardware PIN pad, it's much less likely I'll have problems.
I've not been able to find a good tutorial on how to store keys, plural, on a Yubikey 4 or any other smartcards. They're all limited to storing one, maybe two "authentication" GPG keys. Would you have some pointers?
That’s right: the number of keys you can store on it is limited. That’s one of the reasons we think you should just use them for identity, not temporary authorization.
The KDF used for password-based symmetric encryption (gpg -c, private keys on disk) in GnuPG is also terrible - but at least it is an iterated KDF of a fast function, as opposed to one-shot. I would happily put up funds for a bounty to fix the default in GnuPG but I don’t know where to post/fund such a thing.
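Until the default changes, the S2K parameters can at least be cranked up by hand. A sketch of gpg.conf overrides (the values are illustrative; note this is still just iterated hashing of a fast function, not a memory-hard KDF):

```
# ~/.gnupg/gpg.conf
s2k-mode 3              # iterated and salted S2K
s2k-digest-algo SHA512
s2k-cipher-algo AES256
s2k-count 65011712      # maximum iteration count
```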
Out of curiosity, how can you have individual user logins on a host while delegating authn/authz to a CA? All the examples I've seen thus far involve a shared login, whereas I find it's a lot easier to audit hosts when a unique user ID is in the logs, last/w/who output, etc.
I think I need some more information. What I'd like to do is to have a signed certificate that only lets me into the "otterley" account on the remote host, while not letting "jsmith" into my account (only hers) or vice versa.
My understanding of CA principals is that they identify the user or role that requested the signing, but not necessarily the login ID on the server that is allowed to be logged into. Ideally there'd be a 1:1 mapping between the principal and the login ID on the server. I think there's some sshd configuration that needs to be done, but I haven't seen any clear instructions for doing so.
Thanks! So it looks like AuthorizedPrincipalsFile/AuthorizedPrincipalsCommand gives us a method for doing this. This would have to be combined with some sort of user ID management system still, like distribution of /etc/{passwd,group} files, LDAP/AD, etc.
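A sketch of the 1:1 mapping, assuming the local accounts already exist (paths and names are illustrative):

```
# /etc/ssh/sshd_config
AuthorizedPrincipalsFile /etc/ssh/principals/%u

# /etc/ssh/principals/otterley -- only certs bearing this principal
# may log in as the local "otterley" account
otterley
```

With this in place, a cert signed for principal "jsmith" gets you into the jsmith account and nothing else.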
And for the ssh ca part, bless and teleport (as others have mentioned).
There's the option of putting stuff in ad/ldap - but if you're already using ad, kerberized ssh (and sudo etc) might be the way to go.
I like the idea of a system that's simpler than ad/ldap+kerberos - and ssh certs fits most of the bill.
The challenge becomes auth/authz beyond just login - ldap basically requires ssl ca anyway - and at that point, especially with kerberos set up - I think one might be better off sticking with one complex auth/authz system rather than two...
> consider that long-lived SSH credentials are an anti-pattern
With due respect: have you considered the myriad systems where you need to upload your SSH key to a UI? If my key is short-term then I need to do that all the time. I can't set up an SSH CA on GitHub, for example.
GitHub has APIs for automation (e.g. via Terraform). Granted, not every web-based service does, but if you were sufficiently determined to use an SSH CA then I'm sure you'd find a way (or an alternative service that did support your workflow).
But it’s important to remember that SSH is a lot less fucked overall than the VPN situation was. People mostly grok SSH. SSH mostly doesn’t negotiate BF-CBC.
I disagree! SSH is not great! Because it doesn't need to support every browser on the Internet, new crypto percolates through the ecosystem much faster than it does in TLS. But it's still a janky protocol with dumb options, and a Noise-based no-negotiation alternative that fulfilled the same interface would indeed be super useful.
I don’t think we actually disagree; I said “a lot less fucked”, not “great” :) you can get real AEADs with Ed25519 in SSH easily; and some of the dumb-options problems may still apply with Oxy, depending on what you’re talking about.
I have been using tinysshd for a number of years and I am hooked. Keen to experiment, I have also been using ed25519 keys instead of rsa since this option was added to openssh. No one told me to use tinysshd or ed25519 keys. As someone else pointed out, it seems like most "guides" on ssh, even ones written after ed25519 was added, still advocate rsa keys.
I put my SSH keys inside KeepassXC (regular Keepass supports this via plugin). Way better encryption and it automatically manages the adding of keys to the ssh-agent.
All my other SSH keys I don't have in there are plaintext on the disk. The ssh-askpass prompt runs in userspace and is easily spoofed; any local attacker could fish the keys out anyway. Full-disk encryption at rest ought to be enough for most people.