
Great post. Two points:

1. If you don’t inspect the wget script, you might as well pipe it into bash.

2. How do you distribute secrets if not by env? (Which I agree with! Honest question)




Disclaimer: I work for Red Hat as an OpenShift consultant so I'm biased

There are competing pieces of advice for secure distribution of secrets, but my current preference comes down to one of these ways, depending on the organization:

1. OpenShift/Kubernetes Secrets mounted into the Pod at runtime.

2. HashiCorp Vault (it has a really well-designed API; it's very usable with just curl, which makes working with it a joy)

3. Sealed Secrets (less experience here but it's looking positive right now) - https://github.com/bitnami-labs/sealed-secrets

If you're using a different PaaS besides OpenShift, it may also offer options worth considering (although do think about portability. These days apps move platforms every few years on average, though I think that may be changing now that K8s is becoming the standard).
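A minimal sketch of option 1, mounting a Secret into a Pod as files (the names, image, and value here are all placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-creds            # hypothetical Secret name
type: Opaque
stringData:
  password: s3cr3t          # placeholder value
---
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: app
    image: myapp:latest     # placeholder image
    volumeMounts:
    - name: creds
      mountPath: /etc/secrets   # password appears as /etc/secrets/password
      readOnly: true
  volumes:
  - name: creds
    secret:
      secretName: db-creds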


> 1. OpenShift/Kubernetes Secrets mounted into the Pod at runtime.

Do you recommend mounting secrets as environment variables to the kubernetes pods instead of files?


Yes, that is by far my preference. It's much more 12-factor-app-ish and framework-independent. A lot of Java apps will want files though, so sometimes it isn't possible.
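Concretely, pulling a Secret value into an env var looks like this snippet (the Secret name and key are placeholders):

```yaml
containers:
- name: app
  image: myapp:latest        # placeholder image
  env:
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: db-creds       # hypothetical Secret name
        key: password
```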


Files should be used over environment variables. The file system at least has some form of RBAC through file permissions.


Thank you for the pointers! I’ll have a look!


I think they meant to not ship secrets inside the container using the Dockerfile keyword ENV, because they're retrievable. If you must ship an ENV value in an image to the public (it's quite useful for config values that need a default), then know that it isn't secret anymore.

If you need to provide a secret value to an image and it needs to remain secret (like a database password), you most commonly would set the env values at runtime or volume mount a config file at runtime.

On a different side of this, if you need a secret at image build time (like an SSH key to access a private repo), you can use build arguments with the ARG keyword and they won't persist into the final image. Multi-stage Dockerfiles are also a great way to keep your final image lean and clean.
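Roughly, that pattern looks like this sketch, assuming a private Go dependency fetched over SSH (the key handling is illustrative; BuildKit's `--mount=type=secret` is the safer modern option). Note that ARG values still show up in the build stage's history, so it's the multi-stage split that actually keeps them out of the shipped image:

```dockerfile
# --- build stage: the ARG and key exist only here ---
FROM golang:1.21 AS builder
ARG SSH_PRIVATE_KEY                # passed via --build-arg at build time
RUN mkdir -p /root/.ssh \
 && echo "$SSH_PRIVATE_KEY" > /root/.ssh/id_rsa \
 && chmod 600 /root/.ssh/id_rsa
WORKDIR /src
COPY . .
RUN go build -o /app ./...

# --- final stage: no key, no build tooling ---
FROM gcr.io/distroless/static
COPY --from=builder /app /app
ENTRYPOINT ["/app"]
```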


Thank you for your pointers and the clarification about ENV! I actually misunderstood.


Secrets are an afterthought in Docker. When I first started using Docker I was surprised at how _rubbish_ it was.

I've found it's best to use the secrets provider that comes with your cloud provider.

For AWS, using SSM's get_parameter seems the best option. But it means you need a custom shim in your container that fetches the secrets and puts them wherever they're needed.
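A minimal shim along those lines might look like this, assuming the AWS CLI is in the image and credentials come from the instance/task role (the parameter name and variable are illustrative):

```shell
#!/bin/sh
# Entrypoint shim: fetch a secret from SSM Parameter Store at container
# start, export it, then exec the real process so it never hits the image.
DB_PASSWORD="$(aws ssm get-parameter \
  --name /myapp/db-password \
  --with-decryption \
  --query Parameter.Value \
  --output text)"
export DB_PASSWORD
exec "$@"
```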


There’s also Secrets Manager which integrates with other services and has hooks for custom secret-fetching and rotation, so your application doesn’t need to.


Keep them away from the container and use one or more of the following:

- A vault (Conjur, HCV, something else)

- A built-in credential service that comes with your cloud

- A sidecar that injects credentials or authenticates connections to the backend directly (Secretless Broker, service meshes, etc)

If you are doing a poor man's solution, mounted tmpfs volumes that contain secrets are not terrible (but they're not really that much safer than env vars).
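For the tmpfs route on Kubernetes, a memory-backed `emptyDir` is the usual sketch (volume name, image, and path are placeholders):

```yaml
spec:
  containers:
  - name: app
    image: myapp:latest     # placeholder image
    volumeMounts:
    - name: secrets
      mountPath: /run/secrets
  volumes:
  - name: secrets
    emptyDir:
      medium: Memory        # tmpfs: kept in RAM, not written to the node's disk
```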


Keep them away from the container image


Keep them away from both the image and the container! Getting env var values dumped for a process is trivial outside of the process and even easier within the container process space.


It astounds me how many developers don't realize just how many places environment variables end up, even on a properly functioning server.

Common info pages (e.g. phpinfo), core dumps, and debug errors and logs are notorious for containing them. And that's not even counting the ways a malicious actor can persuade a program to reveal them.
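For instance, on Linux any process you can read can have its environment dumped straight out of procfs — no debugger needed (the variable name here is made up):

```shell
# Start a process with a "secret" in its environment...
MY_SECRET=hunter2 sleep 10 &
pid=$!

# ...and read it right back out of /proc.
# On Linux this prints: MY_SECRET=hunter2
tr '\0' '\n' < "/proc/$pid/environ" | grep '^MY_SECRET='

kill "$pid"
```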


We use `sops`[1] to do this and it works really well.

There is a Google Cloud KMS keyring (for typical usage) and a GPG key (for emergency/offline usage) set up to handle the encryption/decryption of files that store secrets for each application's deployment. I have some bash scripts that run on CI which are essentially just glorified wrappers to `sops` CLI to generate the appropriate `.env` file for the application, which is put into the container by the `Dockerfile`.

Applications are already configured to read configuration/secrets from a `.env` file (or YAML/JSON, depending on context), so this works pretty easily and avoids depending on secrets being set in the `ENV` at build time.

You can also, of course, pass any decrypted values from `sops` as arguments to your container deployment tool of choice (e.g. `helm deploy foo --set myapp.db.username=${decrypted_value_from_sops}`) and not bundle any secrets at build time at all.

[1] https://github.com/mozilla/sops
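For reference, the encrypt/decrypt steps in such a CI wrapper are essentially these two commands (the file paths are illustrative; key selection comes from the `.sops.yaml` config):

```shell
# Encrypt once with the configured KMS keyring / GPG fallback:
sops --encrypt --in-place secrets/myapp.env

# On CI, decrypt into the .env file the application expects:
sops --decrypt secrets/myapp.env > .env
```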


I did not know sops, thx for the pointer!


> How do you distribute secrets if not by env? (Which I agree with! Honest question)

You'll want to use BuildKit (`docker buildx`), see https://docs.docker.com/develop/develop-images/build_enhance...

[edit] My bad, that works for secrets needed at build time, not at runtime of course.
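For that build-time case, the BuildKit feature looks roughly like this (the secret id and paths are illustrative):

```dockerfile
# syntax=docker/dockerfile:1
FROM alpine
# The secret is mounted only for this RUN step and is
# never stored in an image layer:
RUN --mount=type=secret,id=npm_token \
    cat /run/secrets/npm_token > /dev/null   # use it here, don't persist it
```

Built with something like `DOCKER_BUILDKIT=1 docker build --secret id=npm_token,src=.npmtoken .`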


I use docker secrets[0] and a script like this[1] to inject them into the ENV hashmap in my app.

[0]: https://www.docker.com/blog/docker-secrets-management/

[1]: https://gitlab.com/-/snippets/2029832
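The injection approach boils down to something like this sketch (the `/run/secrets` directory is where Docker mounts secrets; the naming convention — file name upper-cased, dashes to underscores — is an assumption):

```shell
#!/bin/sh
# load_secrets: export every file under a secrets dir as an env var.
# The file name becomes the variable name, the file contents the value.
load_secrets() {
  dir="${1:-/run/secrets}"
  [ -d "$dir" ] || return 0
  for f in "$dir"/*; do
    [ -f "$f" ] || continue
    name=$(basename "$f" | tr 'a-z-' 'A-Z_')
    export "$name=$(cat "$f")"
  done
}

load_secrets /run/secrets
exec "$@"   # hand off to the real app process
```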



