Ask HN: How does your company manage its encryption keys?
488 points by 2mol on June 2, 2020 | 238 comments
We just had an interesting data loss at work that was due to data being encrypted at rest. We somehow managed to delete the encryption keys (we're still figuring out how), which became an obvious problem once our main database instance was rebooted.

Luckily we were able to restore the data, but now I (we) really want to learn what a proper setup would look like.

If you have any clear overview reading on the topic, I'd be very interested to know about it.

In particular I'm wondering: how do you back up your encryption keys, or even put them in escrow somewhere? Assuming we don't rotate the keys constantly, I would love to just save them in something like a password manager that's secured with 2FA/FIDO.

Would love to hear your thoughts!




Someone at my company generated the keys. They then put them on a network share without any security restrictions. They've been there for 5 years with no rotation. At least 2 are checked into source control.


We have a very similar issue. All our databases have the password Qwerty1234. The Android keystore is checked into the repository, with the access key in scripts. Security keys for external services are also checked into the repository. Some external services for production are still managed by devs who stopped working at our company a long time ago.


Hehe. Less than 8 years ago I asked for help adding a column to a database at a company I was helping. This was a few days after they met me for the first time.

The company solved this by giving me a root username and password that worked on every single important database in the company, at least every customer database.

I had to beg them to create a somewhat restricted account.

The same company was, however, deeply sceptical of all kinds of remote work. The security equivalent of penny wise, pound foolish, I guess :-]


At a previous job they refused to give me database access and instead insisted I ask them whenever I needed any columns added or altered. However, I did have access to the code to do my job, and... the MySQL root credentials were committed to the repo.

To keep the charade up, I sent them one out of every 20 requests to do for me.


With root access I suppose you could have created a restricted account yourself!


I end up doing something like that whenever I’m granted root. I create a limited account, then delete the root or, if it’s shared, ask that the password be changed.


I laughed so much at this


It's more than an absurdity: you have to do that to protect yourself from embarrassing mistakes. It prevents you from accidentally deleting or modifying the system in a difficult-to-restore way. If that happens, any security issue that existed becomes purely abstract and academic.


At one of my past jobs there was a fingerprint reader system at the office entrance. Almost 6 years later, I was still able to enter the office with my fingerprint.


At one of my past jobs there was a fingerprint reader system to enter the office. It didn't work reliably to recognise fingerprints of employees, so after a while people settled on the solution of having a large brick next to the door which was used to wedge the door open during the daytime after the first person managed to get the door open in the morning.


I get this same thing with being invited to an Azure instance. Years later I still have full access.


Skepticism regarding remote work often comes from the fact that a company is not sure whether the employees are working as they should (especially at larger companies).

If they slack off, at least they do it in the office and not freely at home (imagine the possibilities!)


This feels awfully familiar. My current company uses the same generic username-password combination for every server.

But we aren't allowed internet access on our workstations because "security"


This is an amazing answer, thank you for sharing :)

I wish you the energy to keep trying to change this, don't let it get to you too much, I know this stuff can be frustrating as hell.


If this is a public repo then you are already hacked.

There are multiple automated systems that scan public repos for credentials. 5 minutes later you are mining bitcoins for them.


In a boneheaded move I accidentally committed my SendGrid creds to GitHub. Pretty quickly after, GitHub alerted me. However, by then my SG account was already sending thousands of automated spam messages. Those automated scammer systems are FAST.

Not particularly germane to the discussion, but I was really disappointed in how SendGrid handled things. I notified them immediately and rotated all API tokens, but they could not turn it off, so the spammer sent messages for days and eventually my SG account got suspended.


So this has happened at more than one company that I've been at, only difference is that these were AWS keys and used to mine bitcoin. AWS was actually pretty good about it, we rotated the keys as quickly as we could and they dropped all the charges.


I once worked for a co-founder who, despite all the warnings I gave about not committing infra credentials to source control, still went ahead and explicitly committed credentials to public source control because "the developer experience was better".

The CEO was not pleased with the $30K (or maybe it was $60K) bill... and I just pointed at the CTO and was like, "I fought this battle and was overruled".


The most frustrating part of this is that the developer experience isn't better when you check credentials into source control.

It seems convenient until the credentials change (which they ought to now and then). Then when you check out an old revision of the project, it is broken. You end up having to copy and paste the new creds back in time and it's finicky as hell.


My SG API keys, for an account that we terminated, still send out emails if I happen to use an old config for a service.

IP rules are super helpful in this case; you still need to rotate when exposed, but they can limit the exposure.


GitHub monitors public commits for service secrets. Not an excuse to commit secrets, but there is a bit of a safety net.

> When you push to a public repository, GitHub scans the content of the commits for secrets. If you switch a private repository to public, GitHub scans the entire repository for secrets.

> When secret scanning detects a set of credentials, we notify the service provider who issued the secret. The service provider validates the credential and then decides whether they should revoke the secret, issue a new secret, or reach out to you directly, which will depend on the associated risks to you or the service provider.

https://help.github.com/en/github/administering-a-repository...


Luckily I was in charge of setting up our remote repositories and made sure that by default they are all private repos, otherwise this would have happened without a doubt.


This would be hilarious if it were not so telling about the state of security in general (not at this company in particular; I'm certain many if not most companies do the same...).


It's still hilarious; it just takes a certain level of bitterness and cynicism to appreciate.


I posted elsewhere about https://github.com/IronCoreLabs/ironhide, a tool purpose-built for sharing developer/CI secrets.


like every startup ever

in the last startup I worked at, all JWT tokens were created from a 10-letter shared "secret" stored in JSON config files all over the place :p

even dev environments had the same key lol


like enterprise companies too!


That's smart! At least the secret is now versioned, cool!


If those are the keys used in production, then I'm horrified.

If they're dev-keys, I think this is pretty common.


We make no distinction between dev keys and production. Consider them production.

Since it's of interest to HN, I am working on educating our very small team on how keys should be protected and used. I am the youngest developer by about 15 years. It's a very rural company and it often feels like all learning and passion for development stalled around 2005. It's a company that gave me a chance to grow into a development role with no previous experience so I feel indebted to try my best to keep the lights on.


aha, sounds fair.

I don't judge too harshly; anyone who has black-and-white principles on these matters has most likely never worked in any other industry... all you can do is your best to steer the ship and convey the downsides.

I think it's important too because it helps us understand how much friction people will tolerate.

In many cases, even a small amount of friction will cause people to stop functioning completely; I recently tried setting up Vault and it was a nightmare, so I understand why people avoid picking it up.

That doesn't mean we should not try; we have to become the advocates, arbiters and helpers for those systems.

Good luck, you're not alone.


One thing I have noticed is that “security conscious” people are very good at criticizing things and pointing out flaws. But they are not as good at proposing clear and workable solutions that don’t add a huge burden to users.

It should be no surprise that people do insecure stuff under deadline pressure.


> are very good at criticizing things [but] not as good at proposing clear and workable solutions

This is definitely true, but not actually surprising. It's much easier to notice that, say, a violin performance or plumbing repair is done very badly, than it is to actually do it correctly yourself.

Which also leads to a great deal of exasperation when people (either apparently or actually) don't even notice that what they're doing is insecure. There's a big difference between "yeah, it's broken, but it'd be a huge pain to fix and we'd probably get it wrong anyway, so we'd rather take our chances" versus "there is no problem".


Thanks for the kind words. Good luck to you as well.


Using different ones for dev and prod might still be a good idea. If either one is compromised, there’s a chance the other is safe. You can still rotate them regularly, and/or when either one is compromised.


Sounds very convenient to use.


Our company is like that too.


https://www.vaultproject.io/

We use Hashicorp's Vault product to manage SSH credentials, TLS certificates, and application secrets across thousands of users, tens of thousands of virtual machines, and hundreds of applications.

We pay for the enterprise version, but the free version is more than capable for most needs. Avoid a password manager if you can; it leads to poor security practices and availability issues depending on how and where the data is stored (IMHO, YMMV). Use single sign-on wherever possible.

Disclaimer: No affiliation other than a satisfied customer and frequent end user of the product.


This is the best answer I know of. A secret management system is what you want, for several reasons:

1) Secrets checked into code mean that when the code gets stolen, it is an unimaginably major breach. Code tends to get stolen eventually, and most tech shops will never know, or will only know years later, because they don't have access to the channels who will sell your code.

2) You can track secrets you have in storage, who has access and who is using them

3) When a secret does get stolen, it can be rotated with ease. You do not want to be spending on the order of man-months manually auditing a huge code base and manually indexing and rotating every secret. With a good setup you can do this without taking any systems down.

4) Most support HSM/KMS systems, which encrypt the data using a key held in a hardware security module that, in theory, the key cannot physically leave; the HSM re-encrypts all your other keys. The HSM, if used properly, can be a continuous bottleneck to decrypting any information, meaning that if your encrypted objects are stolen, the attacker needs active access to your (e.g. AWS) KMS system to decrypt them.

Password managers are good for individual users, but provide bad security characteristics for development and deployment.

AWS Secrets Manager, AWS's competitor to Vault, is similar to a system I worked on that was developed within Twitch for this purpose.
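
To make point 4 concrete, envelope encryption with a KMS looks roughly like this from the AWS CLI (a sketch; the `alias/my-app` key alias and the `data-key.enc` file are placeholders):

    # Ask KMS for a fresh data key. The response contains the plaintext
    # key (use it in memory to encrypt your data, then discard it) and
    # the same key encrypted under the KMS master key (store that one).
    $ aws kms generate-data-key --key-id alias/my-app --key-spec AES_256

    # Recovering the plaintext data key later requires a live,
    # authenticated KMS call; that's the "continuous bottleneck".
    $ aws kms decrypt --ciphertext-blob fileb://data-key.enc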


I'm always a bit antsy about Vault. You do end up having all your secrets in one place.


'In one place' is, I think, using a model of security that's too simple for the way Vault works. Those secrets can only be properly decrypted by the right services. To everyone else, they're just noise.


Security and convenience are generally opposites. You have to find a compromise that suits your organisation's risk profile. For example, my BIL works as an engineer on a mass rapid transit system and they have air-gapped all their systems.


This just pushes the problem further down the stack. You still need keys to unlock Vault when it is restarted. How do you secure those keys?


You can use GCP KMS (probably AWS KMS too) to unseal Vault automatically.

https://github.com/sethvargo/vault-on-gke/blob/master/terraf...

The KMS ring itself is only accessible by Vault. People with high enough privileges for our Vault GCP project could technically grant themselves access to it, but on day-to-day business, nobody can view the project.

At some other place, where we were using AWS, I wrote a script that would store encrypted unseal keys (you need multiple due to Shamir), encrypted via PGP using each holder's Keybase public key. IIRC you can store encrypted keys in Vault that can be accessed only for unsealing purposes (please correct me on this). When an unseal was needed, the script on the user's side would decrypt the key and submit it to Vault for unsealing. It worked well enough, and it felt like being in the movie/game GoldenEye, but it's nowhere near as slick as auto unseal.

Regardless of the setup, yes, at the end of the day any solution is really just pushing the problem further down the stack.
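
For what it's worth, Vault can also do the PGP/Keybase part natively at init time, so each unseal share is only ever emitted already encrypted to its holder's key (usernames below are placeholders):

    $ vault operator init -key-shares=3 -key-threshold=2 \
        -pgp-keys="keybase:alice,keybase:bob,keybase:carol"
    # each printed share is encrypted to one operator's public key;
    # operators decrypt their own share locally before submitting it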


You could do a lot worse than to have the operators store their shares in their separate password managers or on paper in safe places.

It does admit the possibility that an operator's share could be copied. To work around that you can get a proper HSM that needs a quorum of smart cards presented to unlock. (The offline, low QPS, root of trust-oriented ones are not exactly cheap, but much cheaper than the network-attached ones targeting high QPS transactions). Vault Enterprise has PKCS#11 integration.

With the Thales nShield stuff, you can replicate key material from one to another for redundancy while allegedly still preserving the "can't ever get keys out in plaintext" property. Not sure about others.


There are actually two answers to that:

* You can split the key between different persons, and you can even implement "n of k" schemes, where you specify (at key creation time) that you need any 4 out of 9 shards to unseal the vault. You can then keep those shards on separate operators' laptops, in separate backup systems, etc.

* You can use a hardware security module to unseal the vault (support for that is not included in the free version, IIRC).

But even if the vault wasn't stored encrypted, it'd still be a huge improvement over "keys on NFS", because only machine administrators get access to the whole DB, and you can limit and audit the access of everybody else in a sane manner.
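
As a sketch, the 4-of-9 scheme above maps directly onto the Vault CLI:

    $ vault operator init -key-shares=9 -key-threshold=4
    # prints 9 unseal shards; hand one to each operator

    $ vault operator unseal
    # prompts for a shard; after 4 distinct shards, the vault unseals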


If you use Vault, you should use it as an RBAC system as well.

That means that each application gets a ServiceAccount (SA) and each user gets a username/password. Based on your identity, you get access to specific secrets from Vault.


We also use an HSM.

Smart cards are held by two separate people, each card set is different. Both have to be present to restart the HSM.

Our HSM can require up to five different cards for certain operations.

For us, we only require two for normal operations.

For HSM management (key generation, card authentication, etc..) we require a third card set member to be present. This protects against accidental key erasure, or fraud.


Store them in a separate instance of Vault.


I can't seem to understand how a "secrets manager" helps things. Could someone who does ELI5 why it's better than a config file with permissions locked down?


In a monolithic application they're pretty close. In a microservices architecture, suppose you have 100 services and schedule 6 service containers per host. That is 94 services worth of secrets that don't need to be there, expanding the blast radius of single host compromise. There may be a credential on that host that can be exchanged for more secrets, but audit logs can tell you whether it actually was, and you can revoke it.

Secrets may still need to go into config files for third-party stuff, but you can write your in-house applications to fetch secrets at runtime and hold them only in memory. That's less opportunity for compromise versus both memory and disk. I've also heard that Linux's process address space isolation is less prone to vulnerabilities than user account separation or filesystem permissions. Not really sure how true that is.


It makes sense at scale. If you are a company of two there are probably better solutions.

At scale, you can very granularly define policies for each secret. When a secret is accessed, it is done so through a user or application identity. Each access is also logged.


So then how do you manage the secret that authenticates an application's identity? And what good is the logging if after an application has the secret it can do whatever it wants with it?


If it is an instance in the cloud, GCP and AWS let you define ServiceAccounts that get populated on the instance at boot time.

You should only let the instance access the secrets it requires.


And how do you manage the secrets that let you define those ServiceAccounts?

As the OP wrote, you did not solve the problem, just moved it to a different level.


The real difference over a file is that you have some intelligence (the manager) running that can offer all kinds of additional security functionality.

A typical deployment might involve placing the manager on a secure host that has access to generate and rotate keys, for example.

The manager can then be configured to re-generate keys and vend them on demand to instances that require them. You can configure these keys with very limited access, and also make them expiring.

The manager then becomes effectively a keystore that can never export master keys, but only vends out some limited-scope keys to other instances.

Other instances would have to authenticate using some pre-configured host keys or even be authenticated directly though the cloud provider.

If your instances are compromised, the worst someone can do is to get access to a limited-scope key that will expire.

Hopefully you have other measures in place that would prevent and detect someone from just sitting on an instance and sucking up all your data and exporting them.


Thanks, this makes a lot more sense and seems much more useful than the justifications I've heard others give. I appreciate it.


We put them in a password manager (1Password) to which multiple accounts have access. Each account is secured with a key, passphrase, and 2FA.


Looks cool. One small nitpick regarding their cookie banner: you can click "preferences" and it says "To opt out of a category of data collection, set the toggle to “Off” and save your preferences".

Not only is it opt-out, but the mentioned "Off" toggle doesn't even exist.


They also have a link to their Privacy policy, which is a 404: https://www.vaultproject.io/privacy


I bet it's because it's a relative link. I don't get the cookie banner because I'm from Canada, but the links at the bottom redirect to Hashicorp.

The privacy policy is here: https://www.hashicorp.com/privacy


[flagged]


I think it's a bit disingenuous to claim that they do "not care about privacy". Hashicorp has demonstrated they do, on many occasions.


Well, this was my first contact with this company and this was my experience. Probably they pay a lot more attention to their products than they do to their websites.


Especially convenient with KMS auto-unsealing (https://learn.hashicorp.com/vault/operations/ops-autounseal-...)


The KMS autounseal is especially convenient, but you have to know that there is no silver bullet in crypto. You are trading off the convenience of the auto-unseal (and frankly, the fact that this can happen automatically in the middle of the night when your server reboots) against the security of your root unseal key itself.

The only thing protecting the unseal key is access to your KMS. So one rogue SRE can unseal the vault, rather than requiring the collusion of multiple SRE members.

Again, this comes down to your risk tolerances and what you are protecting. I think for most workloads, the value KMS autounseal brings is worth the risk, but if you want to have tightest control, then the Shamir Split (M of N) is the best option.


And for the rogue dev you have CloudTrail/AuditLogs.

I find it hard to build initial trust in the system without involving the trust of an administrator plus subsequent automation.


It's also very helpful to be able to audit access. When people leave, and you have tons of static secrets, it's very helpful to be able to see which secrets they accessed and now need to be rotated.


+1 for Vault, where I work we are in the process of switching to it.

Automatic password rotation for AD credentials[1] is especially useful to us.

Meanwhile we use GitLab's project-level variables API[2].

Secrets are kept in GitLab and requested by applications through a token that can easily be revoked.

[1] https://www.vaultproject.io/docs/secrets/ad#password-rotatio...

[2] https://docs.gitlab.com/ee/api/project_level_variables.html


+1 for Vault. You can set up your own instance if you like. IMHO, the initial ramp-up time will be worth the long-term benefits.


"Disclosure", not "disclaimer".


Was in a rush between meetings, mea culpa.


Same at our company; we use Vault via the CLI.


How do you store the vault seal keys?


Here's what works for small and medium organizations for data which needs to be encrypted at rest, but is not often accessed (so, backups):

1. Buy a bunch of Yubikeys, minimum of 2.

2. Create GPG keys and store them on YubiKeys. Follow this guide: https://github.com/drduh/YubiKey-Guide (if you want to, keep the secret keys, but in case of multiple YubiKeys I would not keep them anywhere). Remember to set the keys to always require touch.

3. Use GPG to encrypt your backups to multiple recipients (all of the YubiKeys); see the command sketch after this list.

4. Take care of the physical keys with proper storage and procedures. Do not store the keys together, have at least one in a really secure location, check if you have all the keys regularly, etc.

5. Test restores at least once per quarter, with a randomly selected key.
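
A minimal sketch of steps 3 and 5 (the recipient IDs are placeholders for the keys living on your YubiKeys; any single one can decrypt):

    $ gpg --encrypt --recipient key-a@example.com \
          --recipient key-b@example.com backups.tar

    # restore test: needs one YubiKey plugged in, plus a touch
    # if the key is configured to require it
    $ gpg --decrypt backups.tar.gpg > restored.tar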

The advantages of this solution are that it is simple, works pretty well, and gets you a lot of mileage with relatively little inconvenience. You don't have the risk of keys being copied, and guarding physical keys is easier than guarding digital ones.

You still have the problem of guarding the passphrases to the Yubikeys (if you use them), but that is much less of a problem than guarding the encryption keys. A passphrase without the physical key is useless.

This setup works for organization from size 1 up to fairly large ones.

Note that some recently fashionable security consultants crap on GPG from a great height, but do not provide an alternative. It's a tool that, while having multiple flaws, does many jobs better than anything else out there.


This is all generally good advice, but I think there's huge potential complexity lurking here:

> 4. Take care of the physical keys with proper storage and procedures. Do not store the keys together, have at least one in a really secure location, check if you have all the keys regularly, etc.

Would be great to see what folks think this concretely looks like for joe random startup in Capital City, Somewhere.

e.g. Does "really secure" mean "find a bank that still offers safety deposit boxes"? Does it mean paying for something like Iron Mountain (http://ironmountain.com/) or one of its competitors?


> Does "really secure" mean "find a bank that still offers safety deposit boxes"?

Realistically? Yes. This is what several of the companies I've done contract work for have done. You can still find at least one bank or self-storage place (look for the ones that don't have a nationally-advertised brand and don't look like they're made entirely out of corrugated metal) that do regular safety deposit boxes in pretty much any city. They may only be offered at a couple of locations and I've noticed credit unions bailing the hell out of this market as fast as they can decommission the vaults but boxes still exist.

Let's assume all variables work in the other way, though. If you can't find a safety deposit box and don't have somewhere that's not your office you can drill into a floor or wall and you're storing a small device like Yubikeys or USB sticks, buy the heftiest portable gun safe you can find, one with a steel cable that loops back into the device, and stick it under your bathroom sink with the cable wrapped firmly around the water supply or drain pipe.


> stick it under your bathroom sink with the cable wrapped firmly around the water supply or drain pipe

What's the idea behind this? So that when someone tries to break in, they flood your bathroom? Or do you just mean secure it reasonably?


One problem with a safe deposit box is what happens if things go to hell at 2am? The bank isn't going to open for another 6 or 7 hours, and meanwhile you're sitting there with possibly business-destroying downtime.


Well, the idea is that the safety deposit box key is for when everything falls apart, all the employees go missing, and you don't have any other options. Ideally multiple employees would have keys as well and could respond in those crucial hours.

But if for instance, your whole security team got in a car accident and all the keys burned up, you'd have a way to recover the creds and save the business.


All of those are excellent.

There's also the ability to just leave it at a lawyer/notary (they already handle deposits, they might even have a secure box at a bank, so you can piggy-back on them for this).

Directors/founders of Random Co. should just make a few copies of a few pieces of paper that contain the passphrase and store them at their own homes, and ask a few relatives to do the same for them. Depending on their recoverability/safety/accountability trade-off they can increase the number of copies, increase the separation between the parts (e.g. keys and passphrases), and so on.

The big-big-big advantage of the YubiKey approach is that it's an HSM, and you can't accidentally copy the key and leave it somewhere.


This depends on your company size and security requirements. I'd say use common sense for small companies, where the idea is mostly not to lose access to all of your decryption keys at once. As the company grows and you need to worry about trusting people, you are solving two problems: having an always-available fallback decryption key (that's the easy part, and indeed deposit boxes work just fine, but so does your parents' home in many cases :)), and restricting access to keys to those people who need it. The second problem is more difficult to solve.

My main point was that the use of hardware keys makes many things much easier, and you do not have to worry about your keys being copied and used without your knowledge. That's a big thing. Also, the often-ridiculed GnuPG is amazingly useful with Yubikeys (using the setup I linked to), because you can use the same keys for SSH, thus ensuring access to all resources as needed.


I think it's more useful to discuss the goal rather than the means; if the goal is resilience to theft and natural disaster, the means might range from "stick it in a fireproof safe in the boss's office" to "outsource to Iron Mountain", depending on the threat model.



I'm very glad it's being developed, but it is in no way a GnuPG alternative.

GnuPG is a command-line tool that is omnipresent; keys can be stored on YubiKeys, it can be used with ssh-agent, and I can use it to encrypt files in an automated fashion, using both symmetric and asymmetric crypto.


This is great advice, and depending on your organization and stack, it's best when coupled with Vault.

Vault stores all secrets needed by running services (ACL tokens, access keys, credentials for databases, PKI for certificates, what have you).

For the rest (Vault unseal keys/cert keys/operator token, other operator secrets), secure those with the GPG keys mentioned above and store them in some way that suits you (GNU Pass/git-secret/there are several alternatives).

Where you draw the line between what's stored in Vault and not will depend on your org and its needs.


This is similar to a system I have seen, other than the inclusion of a dense QR code as the backup, stored in a secure safe. And you need to test the entire process from scan to key resurrection. We saw that our offline signing laptop's camera was low enough quality that it was very hard (but possible) to read the key, because it was so dense.


Yes, I do that, too. Data Matrix, not QR, but my keys do have paper backups.


The more complex your system grows, the more often it will fail and shoot you in the foot. I'd advise against systems like Hashicorp Vault - they just increase the complexity and while they have their merits in complex setups, you seem to be too small to be able to operate such a system.

Have an offline backup printed along with the disaster recovery checklist and documentation and put them in a safe in your company - the checklist should be dumb enough that your drunk self can use it at four in the morning, because you were the nearest employee when everything went down.

Ensure that you have stupid manual processes in place on rotation of the safe's PIN and encryption keys in general, including a sanity check if the newly generated keys actually work (e.g. if they are used for your backup storage, actually back something up and restore it). Ensure that the safe's PIN is available to at least another person and used regularly (e.g. because you store your backup tapes there).

If you feel that you need to change from this very simple system to a more complex one, ask yourself why. What does your change actually add in terms of security and what risks does it add.

In the end, you want your system available to customers and the security you add is to not only secure the data, but actually to know who can access it (the auditing part).


Great answer, thank you. I agree with your point about testing the restore process, right now I'm trying to think of a way to automate it.

As a side note: for example, we had some backups that are probably useless because they are way too small. Catching this would mean more regular manual checks, or some automated rules, at which point it quickly becomes more complex again.


The backup part is quite easy: generate a well-known file with 1 kB of data and include it in your backups. After the backup completes, validate that you can restore the well-known file and compare the content of the original and the restored copy. Easy to automate if you run a well-customizable backup system.

Storytime: I did not check the restoration of my backups some time ago and had a faulty hard drive, so I needed to restore the backup. The backup was also as dumb as possible: essentially a tar, encrypted with openssl. So I reinstalled the server and tried to decrypt it, and got an error that the key was wrong. It took me a good weekend to find out that openssl changed the default hash algorithm between openssl 1.0 and 1.1. This would not have been caught with the proposed system, but now I really pin all default options in my scripts as well.
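
The general lesson is to spell out everything that was ever a default. For `openssl enc` that might look like this (a sketch, not an endorsement of this particular cipher setup; -pbkdf2 needs OpenSSL 1.1.1+):

    # pin cipher, digest, and KDF explicitly so an OpenSSL upgrade
    # can't silently change how the key is derived from the passphrase
    $ openssl enc -aes-256-cbc -md sha256 -pbkdf2 -iter 200000 -salt \
          -in backup.tar -out backup.tar.enc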


> an offline backup printed

Have you ever tried typing a private key from a piece of paper? Once I was in a similar situation; I gave up and just drove to the colo.


An elliptic curve key (NaCl etc) is 32 bytes. Here's two such keys for you, as a demo:

    $ entropy 32|zbase32-encode 
    pu3zrux6t6cqrmmyesdxtppxiudxjcndrx3bomjuyaupa61493no
    $ entropy 32|phrase-encode 
    afar-pimple-unwind-imagine-buckets-today-duke-sober-dehydrate-rebel-online-nudged-bamboo-saxophone-eluded-tattoo-pause-bays-ungainly-tasked-jingle-topic-null-enraged


While you're right, I'd recommend against both for the specific use-case. You just added another layer. The extra software needs to be available, maybe it's not developed anymore and won't compile on your system, maybe they changed the alphabet from which the words are generated, ...

OpenSSH private keys are armoured by default, gpg keys can be exported and imported in an armored format, and everything else can just be printed as a hex representation with whatever tool (e.g. `od -A x -t x1 <file>` or any other).


z-base-32 isn't going to magically disappear off the face of the earth. Anyway, here's a 32-byte secret key as hex. Still easier to type in than to drive to a data center. GPG is just horribly verbose, and the old school RSA keys are huge in comparison.

fd3223ec 20f55ae7 6fddc979 d41e2276 25255516 b08f5cd4 3d66d676 a054d2bb


My Google fu has failed me. What is that "phrase-encode" tool and where can I find it?


It's a 50-line CLI I wrote (just like `entropy` on the other line). It's just a simple interface for https://gitlab.com/NebulousLabs/entropy-mnemonics which is one of the many different "encode binary as words" things out there. It's the idea that matters more.


For a PKI, you can also afford relatively more complexity with respect to the root CA key if you use intermediate CAs for most things and are careful with their expiration.


Can't speak to my current employer as it's above my pay-grade to know, but at Job-1 we did the following:

- All "hot" keys were stored in an offline credential manager in specific vaults depending on who needed access to them. Only staff with actual clearance could request temporary access to a vault (fully background checked, 1 year employment, etc).

- Copies of each vault and our master CA cert were written to 4 encrypted USB sticks: two stored on-site in the fire safe and two off-site in our safety deposit box that only C-level staff could access. (We had the same process for our tokens and master logins for AWS.)

- Any work using those keys was on a pair-up basis, so at least two people, one doing the work and the other observing.

- We had a detailed policy around this that covered each step in the process and who needs to approve them; everyone who could feasibly need to access the keys was briefed annually as part of our security awareness training.

We handled a LOT of sensitive financial data, so this was the most appropriate way that we could find that maintained both sensible availability and key control.

So in order to get to the keys you needed:

- Access to the fire safe (Senior Ops, Senior Security and C-Level only).

- The LUKS passphrase for the USB sticks (Senior Security and some C-Level only).

- The passphrase for the specific vault (Senior Security and some C-Level only).

I don't know how the passphrases were managed by our sec team, but I know that the C-Level staff had physical envelopes in their home safes.


An expanded version of this would make a valuable book (or blog post at least).


How did you handle new secrets / rotations? Seems like a lot to keep in sync.

Seems like we hear more frequently about the actual secrets being stored encrypted (potentially with hardware protection) in a central place, and only the keys to unlock them being distributed like this.


Not OP, but generally you wouldn't distribute the actual passphrases to the people who keep hard-copy backups. You'd distribute the key that unlocks the key. That way you could rotate the actual key, and you just re-encrypt it with the secrets you already distributed.


YubiHSM 2 has worked fantastically well for us as a root of trust in a variety of applications, at a very reasonable price ($650) (I am unaffiliated with Yubico other than as a very satisfied customer).

Accessible over USB or HTTP, it supports every major crypto algorithm [1], and keys can be backed up onto another HSM via a wrap key (if they are marked as exportable -- you can also control what can and cannot be exported -- in fact, every operation may be allowed or disallowed per key).

Every operation is logged for audit, of course, and the device may be setup to require logs to be read before they are overwritten. In combination with configuring a special authentication key to access the logs, you can ensure that every operation on the HSM is logged to a remote store before additional operations may be completed.

It does depend on your existing physical security, so that has to be taken into account when designing architectures including it. The micro form factor at least makes it trivial to put into an internal USB port.

And of course, if you require a more enterprise-grade tool, you may want to use an HSM in combination with a tool like Hashicorp Vault to manage your keys throughout your organization.

[1] https://developers.yubico.com/YubiHSM2/Product_Overview/


At my prev company we generated keys and split them using `ssss-split` and handed out shards to specific individuals via a keybase exploding message. Our system required at least 3 shards (combined via `ssss-combine`) to reboot.

FYI: Hashicorp vault just uses Shamir's Secret Sharing scheme under the hood: https://github.com/hashicorp/vault/blob/45b9f7c1a53506dc9722...
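
For anyone who hasn't used the tools, the flow is tiny (-t is the threshold, -n the number of shards):

    $ ssss-split -t 3 -n 5
    # prompts for the secret, prints 5 shards to hand out

    $ ssss-combine -t 3
    # prompts for any 3 shards, prints the reconstructed secret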


I highly second the people saying KMS (AWS KMS, Google KMS, or KeyVault).

* The pricing for just storing keys is incredibly cheap.

* At least with Google KMS you can't delete the keys without a 24 hour waiting period (and you can alert on the deletion attempt), so that's a huge safeguard.

* You get key access auditing out of the box.


AWS KMS also enforces a waiting period of between 7 and 30 days before it will let you delete a key.

There’s also a feature you can enable that automatically rotates your key once a year. KMS is great!
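
The relevant commands, for reference (the key ID is a placeholder):

    # deletion is only ever scheduled, never immediate,
    # and can be cancelled within the 7-30 day window
    $ aws kms schedule-key-deletion --key-id <key-id> --pending-window-in-days 30
    $ aws kms cancel-key-deletion --key-id <key-id>

    # opt in to yearly automatic rotation
    $ aws kms enable-key-rotation --key-id <key-id>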


Given the keys never leave the KMS hardware encryption module, are you at all concerned that all your data will be destroyed if you lose access to KMS for any reason? That's what has always given me pause when I consider KMS. Or do KMS users just take on faith that AWS will always be there?

Why do you like their auto-rotation? The keys that are rotated out are never disabled, so I don't really understand the benefit. In what scenario would their auto-rotation improve security?


> Or do KMS users just take on faith that AWS will always be there?

I don't take on faith that they'll always be there, but I do believe that if for whatever (extremely highly unlikely) reason they did go away that they'd make it possible to get my keys, or give me enough notice so I could re-encrypt with other keys.

Face it, when running a business there is a ton of trust you have to put in 3rd parties (banks, insurers, your employees, the government, etc.) Yes, you should always evaluate the trustworthiness of 3rd parties, but AWS going away and deleting my keys is probably #6327 of things I worry about.


> if for whatever (extremely highly unlikely) reason they did go away that they'd make it possible to get my keys,

It is a very common design criteria for a HSM to not be able to do that, no matter how willing.


This is fine if you're committed to using (say) AWS KMS for your encryption needs as a service with its per-API-call pricing.

The costs of that obviously scale in a completely different way from the per-key storage costs (which are actually zero, I think).


Google's KMS pricing structure is $0.06/key/month + $0.03/10,000 encrypts/decrypts [0].

At $0.06/key/month, that's practically free for most reasonable use cases. For example, if there's 10k secrets that's $7,200/year.

If you encrypt/decrypt your secrets 1 million times per day (~11.6 times/s), the access charges would be $1,095/year (1 million operations/day * 365 days/year * $0.03 / 10,000 operations).

[0]: https://cloud.google.com/kms


If you use the AWS Encryption SDK, you can cache your data keys and reduce your calls to KMS: https://docs.aws.amazon.com/encryption-sdk/latest/developer-...


Definitely! And it depends on what key you're storing, but if you use AWS Secrets Manager, you can setup automatic key rotation to run periodically.


Can you easily set up policies that prevent deletion of keys?



We use Bitwarden[0] for our secrets. It's open-source with a hosted option. Makes sharing passwords and keys across the team pretty straightforward.

In addition to it, we use envwarden[1], which is a simple open-source wrapper around the Bitwarden CLI to manage our server secrets. It's super simple, but does the job for us well. We can then manage both passwords and keys in one place.

Disclaimer: I created envwarden. I'm not affiliated with Bitwarden in any way however. Just a happy customer.

[0] https://bitwarden.com/

[1] https://github.com/envwarden/envwarden


I didn't know about envwarden till you mentioned it here.

We migrated from Bitwarden to an offline client machine because we didn't want a copy of our encryption key to be available anywhere else online.


envwarden looks awesome! Is there any potential for the Bitwarden project to incorporate it/make it officially supported?


We commit encryption keys, themselves encrypted, to git alongside the code and everything else. They’re fully versioned and therefore protected against data loss, and we don’t treat dev keys as any different from production keys (just stored in a separate file).

I first wrote about it back in 2017 (1) and we released an open framework for multiple languages/frameworks (2).

1: https://neosmart.net/blog/2017/securestore-a-net-secrets-man...

2: https://neosmart.net/blog/2020/securestore-open-secrets-form...


We use shh for secrets (https://egt.run/shh).

It's designed to integrate really well with your existing CLI tools like vim, xargs, and diff. It offers user-based permissions, and secrets are encrypted into a single file that's safe to commit into your git repo. We can stream secrets out of it directly to our remote servers during deploys.

Unlike Vault you don't need to manage infra to run it -- it's just a file. Unlike cloud secret managers, there's no lock-in.


A couple of questions..

1. Does every client have a copy of shh to interact with the secrets? Or are the secrets in the file served from a single centralized node?

2. What is your process of exchanging user keys with shh?

3. If someone leaves the company, what is the process you go through to change the secrets and rotate keys?


1. Every client (i.e. developer) has their own copy of shh but interacts with a shared .shh file in the project root. Secrets can be decrypted locally in memory then streamed to the remote servers over ssh. You could in theory run this on a server to help address your third point.

2. Exchanging public user keys is done via email when an employee starts, then added to the .shh file and committed in the repo.

3. You'd need to write a script for this, which probably involves `shh rm-user {email}`. There is no silver bullet here for changing secrets; since developers have access to secrets, any secret they had would need to be regenerated. `shh` makes no assumptions about how you deploy, what secrets you keep, or their formats.


We use Bitwarden https://bitwarden.com/#organizations to store passwords as well as encryption secrets as a backup.

Application repos have their encrypted secrets (meta) stored in the repo using ansible-vault, and the .vault_key files are stored in Bitwarden.
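
Roughly, the day-to-day looks like this (file paths are made up; the password file holds the .vault_key fetched from Bitwarden):

    $ ansible-vault encrypt group_vars/all/secrets.yml --vault-password-file .vault_key
    # edit decrypts to a temp file, opens your $EDITOR, re-encrypts on save
    $ ansible-vault edit group_vars/all/secrets.yml --vault-password-file .vault_key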


We do exactly the same.


We store all keys that do not require automated access on YubiKeys, with the option enabled that requires a physical touch per use.

Usage includes SSH authentication, file encryption (backups and exchanges), git commit signatures and password/secret storage using `pass`.

Copies of the offline master keys are stored on flash drives in safes on-site, and off-site in bank vaults; sub-keys are valid for one year.

We use Hashicorp's Vault for secrets that require automated access.


Disclosure: Founder

https://dev.ionic.com

Utilized globally by individual developers, large enterprises such as JP Morgan & Chase[1], and integrated into the KMS services such as Google Cloud[2].

1. https://venturebeat.com/2019/02/27/ionic-security-raises-40-...

2. https://cloud.google.com/blog/products/identity-security/clo...


dev.ionic.com is in particular an answer to, "how do you back up your encryption keys, or even put them in escrow somewhere?"


In particular, this solution implements a key vault in which you can safely store keys using the security model provided by your OS (Windows, macOS, or Linux). The tool provides a CLI that lets you easily create and store keys from the command line.


At one job I was afraid of losing a set of keys, so as an extra backup we had a physical copy of the key printed out as a QR code. This was then physically locked away (safe deposit box or similar). I like this as an option for a "failure-resistant" offline backup: the main benefit is that it doesn't assume a stored-away USB stick will still be readable, while still having a somewhat machine-readable format. Yes, the QR code was very large, but it could be split into multiple files if needed.


I worked in a place that had a very nice in-house blade server setup with an attached NAS, which was encrypted (at rest, on storage).

This was early in the “cloud” epoch and at any rate they preferred in-house iron for entirely understandable reasons. Also, they needed to do some pretty interesting things, so they had a NAS populated with enterprise-grade SSDs and a native AES-based encryption scheme. This kept the key on what was basically a glorified USB key.

(For those of you who wish to know these details, the network between the blades and between the blades and the NAS was a nicely spec’d fibre channel network, and there was some iSCSI involved. The blades featured Itanium processors, which kind of gives the manufacturer away, and the firm had invested quite heavily in producing very high-performance code for those ill-fated microprocessors, but I digress.)

So... it happened that somebody lost the USB key. Well, not quite. Somebody took it home during a weekend (whilst the system was shut down) and their kid used it for school work.

This proved to be a “significant problem”. There was a backup, and it was encrypted and stored on an adjacent SAN. It wasn’t exactly stale, but it wasn’t entirely pristine either.

There was much woe and gnashing of teeth.

Nobody was fired because the dolt who maintained custody of the only USB key was the founder/CEO, so he couldn’t exactly blame himself.

But, yeah. That happened, sadly.


You need to have a threat model, then work out a solution from that. Ask yourself: why are you encrypting the hard drives?

One threat model might be: burglars sneaking in during the night and stealing the hard drives. Then you would store the keys in a different location than the disks.

Then you make it a routine to reboot parts of the fleet, like scheduled simulation/training so that everyone knows what to do when you actually need them.


How paranoid do you want your security to be?

In general I would suggest using a key vault. AWS, GCP, and Azure all have cloud versions that are backed by virtual HSM's built on top of actual physical HSM's. For the vast majority of usages they're good enough. Use admin account management to enforce 2FA/FIDO for all AWS/Azure/GCP logins. (You should be enforcing 2FA with phone/FIDO auth anyway.)

If you need truly paranoid backups, you can back the key up onto a portable hard drive that you lock in a safe in the closet, with a few key people who know the code.

I recommend against using a (cloud-synced) password manager. Cloud key vaults do the same thing but offer specific features relevant to server stuff. And if you want more paranoia, a physical safe is probably safer than extending your attack surface to a cloud-synced password manager.

Also: make sure that you set up a ~quarterly ritual of opening and verifying the backup. For crucial backup fallback systems, you want to make sure you actually use the system so that you know if it fails.


> How does your company manage its encryption keys?

Poorly seems to be the answer industry wide. Both encryption and disaster recovery are hard tasks. Combine them and you have recipe for a total mess.


We use Knox: http://github.com/pinterest/knox

Think of it as a lighter Vault, with ACLs for people and/or machines to access keys, and versioning to rotate keys.

In our case, Knox depends upon AWS KMS to "lock/unlock" its storage.


This looks way easier than getting Vault setup! Unfortunately, googling for it returns a ton of info about Apache Knox which is an unfortunate name clash. I would love to see this catch on. Thanks for the share!


Not sure the "right" way to do it. But this is what we did:

For context: we run a centralised salt master, which decrypts content using GPG filters as part of variable generation (salt "pillars"). So it's encrypted at rest and encrypted in our git repositories.

What we do/did, is:

* grab a pair of differently branded USB sticks.

* LUKS encrypt the USB sticks; we used a keyfile which is encrypted on our machines with our GPG key.

* encrypt salt's GPG private key with all of our keys.

* encrypt some of the "irreplaceable" private keys (IE: CA roots) with all of our GPG keys.

* store it all on the pair of USB drives

* put the USB drives in a real-life vault, give the keys to the office team.

We haven't needed to recover, but it's clearly documented how to recover if anything went wrong.
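
A rough sketch of the keyfile part (the device name and GPG recipient are placeholders):

    # generate a keyfile; keep only a GPG-encrypted copy on our machines
    $ dd if=/dev/urandom of=usb.key bs=512 count=1
    $ gpg --encrypt --recipient ops@example.com usb.key

    # format the stick with the keyfile, then open it for mounting
    $ cryptsetup luksFormat /dev/sdX1 usb.key
    $ cryptsetup open --key-file usb.key /dev/sdX1 backup-usb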


Bit rot is a thing.

It'd be better to take a page out of the cryptocurrency handbook and print the keys out as QR codes (maybe with a simple password so it can only be restored if you know the pw).


How do you handle the LUKS key file? Do you encrypt it to the whole team? Just yourself? How do you circulate that LUKS key file?


Why differently branded?


AWS KMS, and equivalent for other cloud providers. They are in every sense better qualified to handle encryption secrets than my lone ass is.


Lots of decent advice here already... be wary of a complicated (fancy product) approach and security theater... frankly, when it comes to secret management the KISS principle should apply, with the caveat that it should be secure, of course.



My humble security advice:

1) If you don't need it - don't store it. If you need it - but it needs to be encrypted - probably! don't store it.

2) Think one way hashes with salts, think deletion policies, rotations, small disk drives.

3) You will get owned!!


I agree with only store what you need. But I can't think of many things in a professional setting that a company should store but not encrypt.


The dream for me has always been to get LDAP/Kerberos tied into container/VM orchestration, and do everything that way with per-instance accounts.

LDAP is ubiquitous enough as an auth method (how do you auth to Vault? You auth to LDAP with it...) that any service you run or use is likely to speak it.

Why this isn't done more often is a mystery to me and probably the number 1 source of credentials being baked into things accidentally: oh we need a service account into the <enterprise system> which uses Active Directory.

Might have answered my own question there though.


I think it really matters how sensitive these keys are.

It's been quite a few years since I interacted with them, but for some keys there is a server somewhere with an HSM installed, and two people have credentials for it. If you need something signed you send it to them, with a justification for why it needs to be signed with the real keys, and they will send you a path to get the signed file, and remind you to delete the signed file when you no longer need it.

This is overkill for some things, and probably would be considered sloppy for others.


Perhaps a dumb question, but how did you restore the data without the keys? Was a prior backup unencrypted?


Put it in a password manager. Nothing can be any simpler than that.


I like this approach because 1Password provides a nice CLI that I can use in scripts:

  op get document UNIQUE_DOCUMENT_ID > pem
Anyone in my department (or who can access the shared vault) can run this script by simply logging into the 1Password CLI (`op login`) before running it.


Online HSMs where you can exchange keys. Databases connect to the HSMs for crypto operations, plus offline HSMs with PCI cards in a safe for root keys.

For other parts, HashiCorp Vault / AWS KMS.


Hi, this issue brought me back to the days when I was just an IT handyman in a small company. The priority of that company was not to share keys, or anything related to them, with anyone. By "anyone" I mean third-party software, or anywhere a password or an encryption key would be seen by someone. At the time I thought this "obsession" was clearly a sign of mental illness, because the company was very small and we were in the middle of nowhere (maybe nowadays I still think it). Our method was based on a self-made MD5 encryption script using Ruby on Rails: put your password into it, and it prints it onto a datacoin blockchain that generates a unique MD5 hash. This hash went around 5 (later 6) local servers that collect the encrypted key. Obviously these servers had no internet connection, running only for internal company purposes (such as this). A definite weakness of this procedure was the slowness of obtaining a new password or changing it. I think that the most secure place is one without an internet connection. Thanks for bringing back memories :D


When managing/deploying code we use SOPS (Secrets OPerationS) https://github.com/mozilla/sops

For standard password-style secrets used by Ops, we use Team Password Manager, which we chose about 5 years ago because it was self-hosted, the database was encrypted, and it had fantastic audit capabilities.
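
For those who haven't seen SOPS: it encrypts only the values in a YAML/JSON file, so diffs stay reviewable (the KMS ARN is a placeholder):

    $ sops --encrypt --kms "$KMS_KEY_ARN" secrets.yaml > secrets.enc.yaml
    $ sops --decrypt secrets.enc.yaml    # needs access to the KMS key
    $ sops secrets.enc.yaml              # decrypts, opens $EDITOR, re-encrypts on save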


In my previous job, I was working for an HSM and data protection vendor. If you want to be reasonably sure you won't lose keys and will keep them safe, just use an HSM. If you need to encrypt a filesystem, you can use a DP product (most of them are not that expensive). If you want to protect database content, you can use tokenization services.


I try to push as much to certificate auth as possible. Internal CA keys are stored on an offline USB HSM, which is locked in a cabinet. Access to the key requires 2/10 individuals to be physically present.

There are different CAs for different purposes. There's an intermediate for device management, and another for user or service auth purposes.


So are your secrets served from a central node that authenticates the certificates? What does your process look like for changing secrets when someone leaves the company?


A couple of our customers shared this thread with me so I thought I’d chime in. For transparency I am the CEO of Doppler (YC W19), a hosted secrets manager service. I know secrets management isn’t directly related to key management, but it’s a cool security topic we think about often. A one-liner about Doppler: a lovable secrets manager built for the everyday developer that works across all stages, from local development to production, on any stack and infra.

For anyone who needs a super simple place to store their encryption keys that works with Heroku and has versioning, I think Doppler could help. It doesn’t have all the fancy (and really cool) features of KMS as it’s designed to be a kv store for secrets, but it could be helpful. We have a free tier for anyone who wants to try it out. https://doppler.com


> I (we) really want to learn what a proper setup would look like

Signing and authentication keys are expendable but encryption keys are worth keeping even after they've been rotated since decryption of existing data may be necessary.

The key can be printed on paper and stored in a physical safe. Paper isn't a high density storage medium but it is remarkably durable and perfect for small amounts of data such as encryption keys. It also counts as an offline backup.

Keys can also be printed as QR codes. They support error correction and enable automatic data restoration. Even 4096 bit RSA keys fit in a binary mode QR code and the smaller ECC keys allow use of high error correction modes, making the data even more durable.

I wrote a binary decoding feature for ZBar in order to support this exact use case:

  zbarcam --raw --oneshot -Sbinary > key
It's available in version 0.23.1 and above.
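For the encode side, a tool like qrencode can produce the printable image; a sketch, assuming a raw binary key file named `key`:

  # 8-bit (binary) mode, highest error correction level
  qrencode -8 -l H -o key.png < key

The `-l H` level tolerates roughly 30% damage to the printed code, which is what makes paper backups of keys so durable.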


Anyone using any KMIP based solutions? (looks like Hashicorp's Vault product implements a KMIP server).

http://docs.oasis-open.org/kmip/spec/v1.4/kmip-spec-v1.4.htm...


We keep the encryption key stored in plain text files on an offline client machine.

Catch: the client machine is encrypted with VeraCrypt, using hidden volumes; one password is kept by the product owner and the other by the head of security.

The offline client machine is key for us.

We rotate encryption keys after quarterly security audit.


We use Vault, but sometimes I just `openssl`/gpg-encrypt the secrets with the keys of all the members of my team and commit the .gpg files to git. We all use YubiKeys and use them to SSH.
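The gpg side of that is straightforward; a sketch with hypothetical recipient addresses:

  # encrypt to every team member's public key; commit the .gpg output
  gpg --encrypt -r alice@example.com -r bob@example.com --output secrets.env.gpg secrets.env

Anyone listed as a recipient can later run `gpg --decrypt secrets.env.gpg`.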

Not ideal, but it works... At least until one of us resign (but turnover is quite low here, so crossed fingers).


You should be able to revoke the leaver's shared key from the keyring and reencrypt the secrets?


What is your process for handling key exchange with team members?


When a new person joins the company, we generate the keys on an air-gapped computer (with a ChaosKey et al.) and upload them to the YubiKey (they keep a separate encrypted USB key with the private keys). Then some employees verify the new employee and sign their keys. The keys are then uploaded to the keyservers and an internal mail is sent.
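For anyone curious, the key-to-card step looks roughly like this in GnuPG (a sketch; the email is a placeholder, and `keytocard` is run once per subkey):

  gpg --expert --full-generate-key     # on the air-gapped machine
  gpg --edit-key alice@example.com
  gpg> key 1                           # select a subkey
  gpg> keytocard                       # move it onto the YubiKey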

Quite old school but it works quite well, alas we're small though (120).


We check many of our passwords into a git repository ... but they are all encrypted with Unix `pass`, with the public key of each individual on the team authorised for access. Our services require the gpg-agent to be running in order to launch, which allows them to read the secrets.
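For reference, the setup is just a few commands (a sketch with placeholder identities):

  # initialise a store encrypted to every authorised team member's GPG key
  pass init alice@example.com bob@example.com
  pass git init

  # add a secret; it's stored as a GPG-encrypted file in the repo
  pass insert services/postgres/admin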


Vault for big stuff, git-crypt for file-level encryption of secrets (in Terraform files etc.)
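A git-crypt sketch, assuming GPG-based access and a `secrets/` directory (the names are placeholders):

  git-crypt init
  git-crypt add-gpg-user alice@example.com

  # .gitattributes controls which paths are transparently encrypted
  echo 'secrets/** filter=git-crypt diff=git-crypt' >> .gitattributes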


What happens if vault gets destroyed or corrupted?


This is why you have geographically separated disaster recovery replicas and backups.


https://github.com/IronCoreLabs/ironhide is a tool we built for managing our own developer secrets. It allows encrypted files to be checked in to git or stored elsewhere. You can think of it similar to gpg, with the upside that ironhide has the ability to change who can decrypt the secret without re-encrypting the data.

Check it out and if you have any questions, feel free to ask here or open an issue in github. We also have a Rust version in the works for those interested in something native.


GitHub Actions, Terraform Cloud, and strict RBAC for everyone on our team so they can't see/change these values on GitHub/Terraform/our cloud environments. This doesn't need to be hard.


The new Google secret manager is a pretty nice way of having an easy to use web interface protected by IAM, with a REST API for applications to pull. You could easily prevent users from deleting keys. It's not as hardcore as Vault but it's a much simpler way of getting keys out of source control IMHO. You can have 2FA and audit logs easily too. Simpler than Google KMS too.

https://cloud.google.com/secret-manager
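A quick sketch of the workflow with the gcloud CLI (the secret name and file are placeholders):

  # create a secret and add a version
  gcloud secrets create db-encryption-key --replication-policy=automatic
  gcloud secrets versions add db-encryption-key --data-file=key.bin

  # applications read it back at runtime via IAM-controlled access
  gcloud secrets versions access latest --secret=db-encryption-key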


I've had these variations at work: checked into the code encrypted, a custom secret vault, and stored encrypted in an S3 bucket then downloaded/decrypted at startup.

The main observation I'd make is that if you put your keys somewhere only accessible in production, you've made it impossible to test anywhere except production. If you do that, you need to create a process where people can ship some small bit of code to test whether the production key setup genuinely works (hint: it won't).


AWS KMS for most things.

AWS ACM for SSL certs.

AWS SSM for SSH; it eliminates the need for SSH keys.

Not everyone loves AWS (me included), but this stack works nicely in removing the need to ever touch raw encryption key files locally.
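For illustration, roughly what that looks like day to day (the instance ID and key alias are placeholders):

  # shell access via SSM Session Manager instead of SSH keys
  aws ssm start-session --target i-0123456789abcdef0

  # envelope encryption: get a data key protected by a KMS master key
  aws kms generate-data-key --key-id alias/app-data --key-spec AES_256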


I am curious how you managed to restore it if you lost the keys.


"accidental" backups


ha ha, cheers mate. I asked because I had encrypted backups in the past for which I lost the key and wasn't so lucky :(

I don't know your company size, but we use StackExchange's blackbox. It's very nice because you can check the secrets into version control. You can add and remove users on the fly as well.


Piecing together a key management solution from open source is possible, but if you're trying to encrypt data and are open to commercial products, we do a lot of support of Vormetric.

All of the key management is "built-in" and managed, so there really isn't much overhead. All software-based, with FIPS certified key management. It's very easy to encrypt data this way. It is expensive though.

Disclaimer: while I used to work for Vormetric / Thales, I no longer do.


Hardware key control (HSM) with smart card controlled access.

Locked in a vault.

Keys are an actual secret key (within the HSM), unknown to any human.

Two people with two different access cards have to be present to enable key operations.

So, you don't have to worry about changing keys as employees come and go, because no one knows the actual keys.

There is a whole structure of spare cards stored in offsite secure storage in case a card is damaged, lost, or stolen.

Card set one is stored separately from card set two.


We have a main config/setup git repository which contains keys (and passwords) in git-crypt. 4 people have GPG keys able to decrypt it, and this is used in normal operation for deployment (i.e. for ansible to feed the keys into systems.) The repo is cloned in quite a few places.

None of our secret material is particularly hard to revoke or replace; we don't run an internal CA or anything like that.


The simplest option is to store your keys in AWS Secrets Manager (if you use AWS), and then write some tooling around it.
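For example (a sketch; the names and files are placeholders):

  # store a key, then let apps fetch it at startup via IAM-scoped access
  aws secretsmanager create-secret --name prod/db-key --secret-string file://key.json
  aws secretsmanager get-secret-value --secret-id prod/db-key --query SecretString --output text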

Self-promotion (you did ask how people do it :)): this is my way. I've written my own service, which has been in production for more than 3 years: http://pkhub.io (if you would like to try it, send me an email at admin@pkhub.io). This was before AWS Secrets Manager. The tooling is useful because of what I wrote around it: running your app with the secrets it needs in dev/stage/prod, accessing DBs, downloading and installing SSH keys into the ssh-agent, and other utilities.

Of course, you could write all of this yourself with AWS Secrets Manager.

There is HashiCorp's Vault, but tbh it has always seemed way too complicated to set up.

My advice in general would be: get something secure but simple enough that your engineers can do their work and access the resources they need, without the "oh, only Bob has the keys on his laptop" situation.


I have used EJSON, putting everyone's public key in a single keyring. It works, but the secrets and keys end up taking up a lot of space.


If you run on kubernetes it has first class support for secrets [0]. You can reference secrets in environment variables, or mount them as files in your containers.

[0] https://kubernetes.io/docs/concepts/configuration/secret/
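A minimal sketch of creating one (the name and file are placeholders):

  # create a secret from a local key file
  kubectl create secret generic db-key --from-file=key=./key.bin

The pod spec then references it through `secretKeyRef` (for env vars) or a secret volume (for files). Note that by default Kubernetes secrets are only base64-encoded in etcd, so you likely also want encryption at rest enabled on the cluster.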



Shamir's Secret Sharing algorithm [1]. 4 people have shares, with 2 shares required to reconstruct the key.

How the 4 shareholders store their shares is up to them. Mine is in a secure note in 1Password.

[1] https://en.wikipedia.org/wiki/Shamir%27s_Secret_Sharing
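For anyone wanting to try this, the classic `ssss` tool implements it (a sketch; it prompts for the secret interactively):

  # split into 4 shares, any 2 of which reconstruct the secret
  ssss-split -t 2 -n 4

  # later: prompts for any 2 shares and prints the secret
  ssss-combine -t 2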


We have so many secret values it made sense to build our own internal product, an audited system which holds secrets in a backed up, locked down database. Apps pull from that system at runtime (or deploy time) but they can only access their own secrets using access control. We also use AWS KMS for AWS related resources.


Before building your own, did your company consider using Vault? If so, what factors led you to go down the build your own path?


Some on a USB hardware dongle under lock and key. We also run HSMs and the key cards to those require quorum. So the CTO, Founder and head of devops each have multiple cards at home and work such that any two of them can make a quorum with all their cards, or all three can make a quorum with some of their cards.


I put them into a password manager which is backed up both on Dropbox as part of sync and Backblaze for long term.


We save the keys in a cloud vault. For the most important keys I also print them on paper, in text and in a QR-Code (that I generate with an offline tool). It is then placed into a physical safe. This is in case we lose access to our cloud vault, or if the keys are deleted from the cloud vault.


Very badly and inconsistently at my company! We have a lot of people raising concerns about whatever you want to do, but not many actually contribute ideas on how to do it right, so every team rolls its own little solution.

I will be very interested in hearing how others do it.


We use Doppler. They're a YC company. Like an easier-to-use HashiCorp Vault. It has been great so far.



We use StackExchange's blackbox to check in GPG encrypted secrets into our monorepo: https://github.com/StackExchange/blackbox
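Day-to-day usage is a handful of commands; a sketch with placeholder names:

  # encrypt a file into the repo
  blackbox_register_new_file secrets/prod.env

  # grant a new teammate access and re-encrypt everything
  blackbox_addadmin dave@example.com
  blackbox_update_all_files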


Depending on what the keys are for, we use either 1Password or EnvKey for storing them.


Plugging my project Certera as a means to manage keys used for Let's Encrypt certificates: https://docs.certera.io

You can rotate keys and facilitate key pinning scenarios.

Cheers!


They are kept on Keybase >_<

Sadly, we're now forced to figure out what to move to. We're considering 1Password with its CLI as a short-term option, but we'll want to move to HashiCorp Vault or similar in the mid-term.


AWS Secrets Manager or SSM or KMS for any kind of secrets, keys, etc. Works well because our entire stack is on AWS. Otherwise HashiCorp's Vault would do, I guess, but that's yet another service to keep on life support.


Related question: How do big companies (FAANG) store their root private key?


We used Doppler to store encryption keys and other secrets, and it worked well. It was pretty easy to set up and to use their secrets store across our dev machines, CI, and production environments.


We use keybase


We also use Keybase. We wrote an extension to our existing lightweight management script, encpass.sh(https://github.com/plyint/encpass.sh), to be able to access/manage our secrets from the CLI and in our infrastructure scripts. The extension is encpass-keybase.sh (https://github.com/plyint/encpass.sh/blob/master/extensions/...) kept in the same repo.

The extension makes use of the per-user keys of Keybase. This allows you to manage access permissions using the Keybase GUI client. Once a user is added to the appropriate Keybase team, they will immediately have access to that team's secrets stored in the encrypted Git repos.

To use it you just need to download the 2 shell scripts to a directory in your path (e.g. /usr/local/bin/) or you can clone the git repo.


Nice try, NSA.


My company is too big for me to know how everyone manages them (my guess would be SharePoint shared folders, with varying degrees of access).

My team uses lastpass


We use Azure KeyVault



+1 for Az KeyVault. I use it in my Docker deployment scripts via the Azure CLI. The example here is a secret, but it's a similar concept for certs using the CLI:

  STORAGE_ACCOUNT_KEY=`az keyvault secret show --vault-name=<your keyvault> --name=<secret name> --query=value | tr -d "\""`

This populates a .env file, referenced in docker-compose.yml.


I also use this for my passwords https://news.ycombinator.com/item?id=22316520


+1, the integration to supplant K8s Secrets is very nice.


+1


On clean hardware, under a four-eyes policy:

https://imgur.com/a/FE8sslv


We use Doppler for our encryption keys; it's a great way to store dev/staging/production keys securely and easily.


We use lastpass for sharing keys internally, and we use AWS SSM Parameter Store; that section of AWS is only accessible to a few engineers.


We wrote our own Kerberos-aaS clone with fewer features, vulnerable to more internal attacks than plain Kerberos, and more reliant on a central cert (not a cert authority, a single cert). It's only used sporadically for cross-service auth, not users (there's something else from a major vendor for that).

And that team now keeps growing, while the feature never improves :)


There's a reason gpg supports --armor; print a backup copy of any important keys and stick it in a safe deposit box.
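Something like the following produces a printable backup (the key ID is a placeholder); restoring is a `gpg --import` away:

  gpg --armor --export-secret-keys alice@example.com > secret-key-backup.asc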



Lots of answers, and I still don't see anything valid for a 5 person startup--2 of whom are executive types.


On a USB drive in a locked cabinet. It's not a hardened cabinet. A hard yank would probably open it.


At a prior company they were printed off and stored in a typical file folder.


Generated by an automated process via ServiceNow and stored in CyberArk.


AWS KMS. Access is added individually using IAM roles. That's it.



We have a shared Dropbox account with hardware 2FA and no OTP.


We email them around lol. Wish we used istio.


We check them into github! (sadly...)


Honeypot post, don't get doxxed


Seriously. Who would be dumb enough to answer this question when they know their key security is non-existent?

Most people have their employer or work email on their HN profile.

Really people?


Lol, I guess this should be the first answer of the thread.

But also, at that point you're fully committed to security through obscurity, so folks should get at least the obscurity part right :)


Wouldn't YOU like to know?


We use Azure Key Vault


Lastpass?


Vault.


Notepad


AWS KMS


we use lastpass


I've asked myself this question many times. Or "Who holds the keys to the keys?".

Here is an air-gapped solution:

https://www.arcanus55.com/?trusted55=A55PV2


We use cloud-provider managed encryption because we're not paranoid and don't have legal requirements to manage our own keys.

We don't have SSH keys because it's not the 90's and we don't have servers.


> We don't have SSH keys because it's not the 90's and we don't have servers.

This seems unnecessarily snarky.

There are lots of businesses in 2020 that still maintain their own servers, use ssh keys, and are staffed by admins and developers who very much know what they are doing (and are not at all "behind the times", as this comment seems to imply such businesses are).

If that's not what you meant, well, OK, but I find it difficult to interpret it any other way.


He'll learn...

probably the hard way.


Or the expensive way


My serverless infra costs me $0 per month.



