
https://www.vaultproject.io/

We use Hashicorp's Vault product to manage SSH credentials, TLS certificates, and application secrets across thousands of users, tens of thousands of virtual machines, and hundreds of applications.

We pay for the enterprise version, but the free version is more than capable for most needs. Avoid a password manager if you can: it leads to poor security practices and availability issues, depending on how and where the data is stored (IMHO, YMMV). Use single sign-on wherever possible.

Disclaimer: No affiliation other than a satisfied customer and frequent end user of the product.




This is the best answer I know of. A secret management system is what you want, for several reasons:

1) Secrets checked into code mean that when the code gets stolen, it is a major breach. Code tends to get stolen eventually, and most tech shops will never know, or will only find out years later, because they don't have access to the channels that sell stolen code.

2) You can track secrets you have in storage, who has access and who is using them

3) When a secret does get stolen, it can be rotated with ease. You do not want to spend man-months manually auditing a huge code base, indexing and rotating every secret. With a good setup you can do this without taking any systems down.

4) Most support HSM / KMS systems, which encrypt the data using a key held in a hardware security module that, in theory, the key cannot physically leave; the HSM wraps (re-encrypts) all your other keys. Used properly, the HSM is a continuous chokepoint for decrypting any information: if your encrypted objects are stolen, the attacker still needs active access to your (e.g. AWS) KMS system to decrypt them.
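The wrap/unwrap pattern in (4) is often called envelope encryption, and can be sketched in a few lines. This is a toy illustration only: the XOR "cipher", the ToyHSM class, and every name here are invented for the sketch; a real deployment uses AWS KMS or an actual HSM.

```python
import hashlib
import os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher (NOT secure): SHA-256 in counter mode, XORed with data."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

class ToyHSM:
    """Stands in for a KMS/HSM: the master key never leaves this object."""
    def __init__(self) -> None:
        self._master = os.urandom(32)  # never exported

    def wrap(self, data_key: bytes) -> bytes:
        return keystream_xor(self._master, data_key)

    def unwrap(self, wrapped: bytes) -> bytes:
        return keystream_xor(self._master, wrapped)

hsm = ToyHSM()
data_key = os.urandom(32)                             # per-secret data key
ciphertext = keystream_xor(data_key, b"db-password")  # encrypt the secret
wrapped_key = hsm.wrap(data_key)                      # store these two at rest
del data_key                                          # plaintext key is discarded

# An attacker who steals ciphertext + wrapped_key still needs live
# access to the HSM/KMS to recover anything:
recovered = keystream_xor(hsm.unwrap(wrapped_key), ciphertext)
assert recovered == b"db-password"
```

The point is the shape of the flow, not the crypto: only wrapped keys and ciphertext sit at rest, so every decryption round-trips through the module holding the master key.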

Password managers are good for individual users, but provide bad security characteristics for development and deployment.

AWS Secrets Manager, AWS's competitor to Vault, uses a system similar to one I worked on that was developed within Twitch for this purpose.


I'm always a bit antsy about Vault. You do end up having all your secrets in one place.


'In one place' is, I think, a model of security that's too simple for the way Vault works. Those secrets can only be properly decrypted by the right services. To everyone else, they're just noise.


Security and convenience are generally opposites. You have to find a compromise that suits your organisation's risk profile. For example, my BIL works as an engineer on a mass rapid transit system, and they have air-gapped all their systems.


This just pushes the problem further down the stack. You still need keys to unlock Vault when it is restarted. How do you secure those keys?


You can use GCPKMS (probably AWS KMS too) to unseal Vault automatically.

https://github.com/sethvargo/vault-on-gke/blob/master/terraf...

The KMS ring itself is only accessible by Vault. People with high enough privileges in our Vault GCP project could technically grant themselves access to it, but in day-to-day business, nobody can view the project.

At another place, where we were using AWS, I wrote a script that would encrypt the unseal keys (you need multiple due to Shamir's secret sharing) via PGP using each operator's Keybase public key. IIRC you can store encrypted keys in Vault that can be accessed only for unsealing purposes (please correct me on this). When an unseal was needed, the script on the user side would decrypt the key and submit it to Vault for unsealing. It worked well enough, and it felt like being in the movie/game GoldenEye, but nowhere near as slick as auto-unseal.

Regardless of the setup, yes, at the end of the day any solution is really just pushing the problem further down the stack.


You could do a lot worse than to have the operators store their shares in their separate password managers or on paper in safe places.

It does admit the possibility that an operator's share could be copied. To work around that you can get a proper HSM that needs a quorum of smart cards presented to unlock. (The offline, low QPS, root of trust-oriented ones are not exactly cheap, but much cheaper than the network-attached ones targeting high QPS transactions). Vault Enterprise has PKCS#11 integration.

With the Thales nShield stuff, you can replicate key material from one to another for redundancy while allegedly still preserving the "can't ever get keys out in plaintext" property. Not sure about others.


There are actually two answers to that:

* You can split the key between different people, and you can even implement "n of k" schemes: you specify (at key creation time) that you need any 4 out of 9 shards to unseal the vault. You can then keep those shards on separate operators' laptops, in separate backup systems, etc.

* You can use a hardware security module to unseal the vault (support for that is not included in the free version, IIRC).
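The first option is Shamir's secret sharing, which is what Vault's default seal uses. A minimal sketch of the "any 4 of 9" scheme over a prime field, purely for illustration (the constants and function names are made up; Vault's actual implementation differs):

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime comfortably larger than the secret

def make_shares(secret: int, k: int, n: int):
    # Random polynomial of degree k-1 with the secret as constant term;
    # each share is one point (x, f(x)) on that polynomial.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0 recovers the constant term (the secret);
    # any k distinct shares suffice, fewer reveal nothing.
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
    return total

shares = make_shares(123456789, k=4, n=9)
assert recover(shares[:4]) == 123456789   # the first 4 shards work
assert recover(shares[-4:]) == 123456789  # so do the last 4
```

Each operator holds one (x, y) point; no subset smaller than the threshold learns anything about the key.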

But even if the vault wasn't stored encrypted, it'd still be a huge improvement over "keys on NFS", because only machine administrators get access to the whole DB, and you can limit and audit the access of everybody else in a sane manner.


If you use Vault, you should use it as an RBAC system as well.

That means each application gets a ServiceAccount (SA) and each user gets a username/password. Based on your identity, you get access to specific secrets from Vault.
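The identity-to-secrets mapping boils down to path-scoped policies. A toy sketch of the shape of it (all identities and paths here are hypothetical; real Vault policies are HCL documents with per-path capabilities, not a Python dict):

```python
import fnmatch

# Hypothetical identity -> allowed-path policies.
POLICIES = {
    "sa:billing-app": ["secret/billing/*"],
    "user:alice":     ["secret/billing/*", "secret/ops/*"],
}

def can_read(identity: str, path: str) -> bool:
    # Access is granted only if the caller's identity carries a policy
    # whose path pattern matches the requested secret.
    return any(fnmatch.fnmatch(path, pat) for pat in POLICIES.get(identity, []))

assert can_read("sa:billing-app", "secret/billing/db")
assert not can_read("sa:billing-app", "secret/ops/ssh-ca-key")
```

The win over a shared config file is that the check happens per request, against an identity, and every decision can be logged.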


We also use an HSM.

Smart cards are held by two separate people, each card set is different. Both have to be present to restart the HSM.

Our HSM can require up to five different cards for certain operations.

For us, we only require two for normal operations.

For HSM management (key generation, card authentication, etc.) we require a third card-set member to be present. This protects against accidental key erasure or fraud.


Store them in a separate instance of Vault.


I can't seem to understand how a "secrets manager" helps things. Could someone who does ELI5 why it's better than a config file with permissions locked down?


In a monolithic application they're pretty close. In a microservices architecture, suppose you have 100 services and schedule 6 service containers per host. That is 94 services' worth of secrets that don't need to be there, expanding the blast radius of a single host compromise. There may be a credential on that host that can be exchanged for more secrets, but audit logs can tell you whether it actually was, and you can revoke it.

Secrets may still need to go into config files for third-party stuff, but you can write your in-house applications to fetch secrets at runtime and hold them only in memory. That's less opportunity for compromise than having them in both memory and on disk. I've also heard that Linux's process address-space isolation is less prone to vulnerabilities than user-account separation or filesystem permissions. Not really sure how true that is.


It makes sense at scale. If you are a company of two there are probably better solutions.

At scale, you can very granularly define policies for each secret. When a secret is accessed, it is done so through a user or application identity. Each access is also logged.


So then how do you manage the secret that authenticates an application's identity? And what good is the logging if after an application has the secret it can do whatever it wants with it?


If it is an instance in the cloud, GCP and AWS let you define ServiceAccounts that get populated on the instance at boot time.

You should only let the instance access the secrets it requires.


And how do you manage the secrets that let you define those ServiceAccounts?

As OP wrote, you did not solve it, just moved it to a different level.


The real difference over a file is that you have some intelligence (the manager) running that can offer all kinds of additional security functionality.

A typical deployment might involve placing the manager on a secure host that has access to generate and rotate keys, for example.

The manager can then be configured to re-generate keys and vend them on demand to instances that require them. You can configure these keys with very limited access, and also make them expire.

The manager then becomes effectively a keystore that can never export master keys, but only vends out some limited-scope keys to other instances.

Other instances would have to authenticate using some pre-configured host keys, or even be authenticated directly through the cloud provider.

If your instances are compromised, the worst someone can do is to get access to a limited-scope key that will expire.

Hopefully you have other measures in place that would prevent and detect someone from just sitting on an instance and sucking up all your data and exporting them.
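The "limited-scope key that will expire" idea can be sketched as a signed token whose scope and expiry are checked on every use. The names and token format below are invented for the sketch; Vault tokens and cloud-provider temporary credentials work on the same principle with far more machinery:

```python
import hashlib
import hmac
import os
import time

SIGNING_KEY = os.urandom(32)  # held only by the manager

def vend_token(scope: str, ttl_seconds: int) -> str:
    """Vend a limited-scope credential that expires after ttl_seconds."""
    expiry = int(time.time()) + ttl_seconds
    payload = f"{scope}:{expiry}".encode()
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return f"{scope}:{expiry}:{sig}"

def check_token(token: str, required_scope: str) -> bool:
    """Valid only if untampered, unexpired, and scoped to what's requested."""
    scope, expiry, sig = token.rsplit(":", 2)
    payload = f"{scope}:{expiry}".encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(sig, expected)
            and scope == required_scope
            and time.time() < int(expiry))

token = vend_token("read:db-creds", ttl_seconds=300)
assert check_token(token, "read:db-creds")       # right scope, not expired
assert not check_token(token, "write:db-creds")  # wrong scope is refused
```

A stolen token is only good for one narrow scope and a short window, which is exactly the property that limits the blast radius of a compromised instance.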


Thanks, this makes a lot more sense and seems much more useful than the justifications I've heard others give. I appreciate it.


We put them in a password manager (1Password) to which multiple accounts have access. Each account is secured with a key, passphrase, and 2FA.


Looks cool. One small nitpick regarding their cookie banner: you can click "Preferences" and it says "To opt out of a category of data collection, set the toggle to “Off” and save your preferences".

Not only is it opt-out, but the mentioned "Off" toggle doesn't even exist.


They also have a link to their Privacy policy, which is a 404: https://www.vaultproject.io/privacy


I bet it's because it's a relative link. I don't get the cookie banner because I'm from Canada, but the links at the bottom redirect to Hashicorp.

The privacy policy is here: https://www.hashicorp.com/privacy


[flagged]


I think it's a bit disingenuous to claim that they do "not care about privacy". Hashicorp has demonstrated they do, on many occasions.


Well, this was my first contact with this company and this was my experience. Probably they pay a lot more attention to their products than they do to their websites.


Especially convenient with KMS auto-unsealing (https://learn.hashicorp.com/vault/operations/ops-autounseal-...)


The KMS auto-unseal is especially convenient, but you have to know that there is no silver bullet in crypto. You are trading the convenience of auto-unseal (and frankly, the fact that it can happen automatically in the middle of the night when your server reboots) against the security of your root unseal key itself.

The only thing protecting the unseal key is access to your KMS, so one rogue SRE can unseal the vault, rather than requiring the collusion of multiple SREs.

Again, this comes down to your risk tolerance and what you are protecting. I think for most workloads the value KMS auto-unseal brings is worth the risk, but if you want the tightest control, then the Shamir split (M of N) is the best option.


And for the rogue dev, you have CloudTrail/audit logs.

I find it hard to build initial trust in the system without involving trust in an administrator plus subsequent automation.


It's also very helpful to be able to audit access. When people leave, and you have tons of static secrets, it's very helpful to be able to see what secrets they have accessed and that now needs to be rotated.


+1 for Vault; where I work, we are in the process of switching to it.

Automatic password rotation for AD credentials[1] is especially useful to us.

In the meantime, we use the GitLab project-level variables API[2]: secrets are kept in GitLab and requested by applications through a token that can easily be revoked.

[1] https://www.vaultproject.io/docs/secrets/ad#password-rotatio...

[2] https://docs.gitlab.com/ee/api/project_level_variables.html


+1 for Vault. You can set up your own instance if you like. IMHO, the initial ramp-up time will be worth the long-term benefits.


"Disclosure", not "disclaimer".


Was in a rush between meetings, mea culpa.


Same at our company; we use Vault via the CLI.


How do you store the vault seal keys?



