
Unfortunately Docker prevents hosting environments from employing some of the most potent security mitigations added to Linux recently.

You cannot treat a docker container like a virtual machine – code running in the container has almost unfettered access to the parent kernel, and the millions of lines of often-buggy C that involves. For example, with the right kernel configuration, this approach leaves the parent machine vulnerable to the recent x86_32 vulnerability (http://seclists.org/oss-sec/2014/q1/187) and many similar bugs in its class.

The algorithms in the running kernel are far more exposed too - instead of managing a single process+virtual network+memory area, all the child's resources are represented concretely in the host kernel, including its filesystem. For example, this vastly increases the likelihood that a child could trigger an unpatched DoS in the host, e.g. the directory hashing attacks that have affected nearly every filesystem implementation at some point (including btrfs as recently as 2012).
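
To make the directory-hashing class of attack concrete, here is a toy sketch. The hash function and table are deliberate stand-ins, NOT the hash any real filesystem uses; the point is only that an attacker who can choose keys (filenames, in the kernel case) can force every insert into one bucket, degrading lookups from O(1) to O(n).

```python
# Toy illustration of a hash-flooding (algorithmic complexity) DoS.
# The additive hash below is trivially invertible, which is exactly
# the property real attacks exploit in weak directory-entry hashes.
import itertools

def weak_hash(name, buckets=64):
    """A weak additive hash: anagrams always collide."""
    return sum(ord(c) for c in name) % buckets

def max_bucket_load(names, buckets=64):
    """Size of the fullest bucket after inserting all names."""
    table = [0] * buckets
    for n in names:
        table[weak_hash(n, buckets)] += 1
    return max(table)

benign = ["file%d" % i for i in range(120)]                        # spread out
colliding = ["".join(p) for p in itertools.permutations("abcde")]  # 120 anagrams

# Anagrams have identical character sums, so all 120 land in one bucket.
print(max_bucket_load(benign))      # small
print(max_bucket_load(colliding))   # 120
```

With containers, the attacker's filenames hash into the *host* kernel's structures directly; with a VM, they only hash into the guest's.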

The containers code in Linux is also so new that trivial security bugs are being found in it all the time – particularly in sysfs and procfs. I don't have a link right now, though LWN wrote about one a few weeks back.

While virtual machines are no security panacea, they diverge in what classes of bugs they can be affected by. Recent Qemu/libvirt supports running under seccomp, ensuring that even if the VM emulator is compromised, the host kernel's exposure remains drastically limited. Unlike with qemu, you simply can't apply seccomp to a container without massively reducing its usefulness, or without using a seccomp policy so liberal that it becomes impotent.

You could use seccomp with Docker by nesting it within a VM, but at that point Docker loses most of its value (and could be trivially replaced by a shell script with a cute UI).

Finally when a bug is discovered and needs to be patched, or a machine needs to be taken out of service, there is presently no easy way to live-migrate a container to another machine. The most recent attempt (out of I think 3 or 4 now) to add this ability to Linux appears to have stalled completely.

As a neat system for managing dev environments locally, it sounds great. As a boundary between mutually untrusted pieces of code, there are far better solutions, especially when the material difference in approaches amounts to a few seconds of your life at best, and somewhere south of 100 MB of RAM.




This is all true.

If a web application has a vulnerability that allows arbitrary code execution then Docker is only a mild help.

BUT, it can help mitigate a certain set of security problems. It is a very simple way to provide pretty good protection against file-traversal type vulnerabilities, even when combined with privilege escalation.
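
A quick sketch of why a per-container filesystem root blunts traversal bugs: the kernel resolves every path relative to the container's own root (via pivot_root/chroot), so "../" sequences cannot climb out. The function and paths below are purely illustrative, not a real Docker API.

```python
# Model of path resolution as seen by a chrooted/containerized process.
import posixpath

CONTAINER_ROOT = "/var/lib/containers/web1/rootfs"  # hypothetical path

def resolve_in_container(user_path):
    """Resolve an untrusted path the way a chrooted process would."""
    # Normalize against an absolute root first: "/.." is still "/",
    # so ".." components can never climb above the container root.
    inside = posixpath.normpath(posixpath.join("/", user_path))
    return CONTAINER_ROOT + inside

# A classic traversal payload stays confined:
print(resolve_in_container("../../../../etc/passwd"))
# -> /var/lib/containers/web1/rootfs/etc/passwd
```

Even if the app inside the container is fully compromised, the host's real /etc/passwd is simply not reachable by name.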

People shouldn't view Docker as a security "silver bullet". But at the same time it does provide an additional layer of security, and that layer can be useful.

The Docker people have a good post[1] about the Docker security model, and they list two future improvements they see as important:

"map the root user of a container to a non-root user of the Docker host, to mitigate the effects of a container-to-host privilege escalation;"

and

"allow the Docker daemon to run without root privileges, and delegate operations requiring those privileges to well-audited sub-processes, each with its own (very limited) scope: virtual network setup, filesystem management, etc."

I think most people would agree these are important goals.
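
For the second goal, a hypothetical sketch of the privilege-separation shape they describe: an unprivileged daemon that only routes requests to dedicated, narrowly-scoped helpers. The names and operation table here are invented for illustration; this is not Docker code.

```python
# Each helper would run with only the privileges its one job needs.

def setup_veth(args):
    # In the real design this helper would hold only CAP_NET_ADMIN.
    return "veth configured for %s" % args["container"]

def mount_rootfs(args):
    # This helper would hold only mount-related privileges.
    return "rootfs mounted at %s" % args["path"]

# The delegation table: each helper has its own (very limited) scope.
HELPERS = {
    "network.setup": setup_veth,
    "filesystem.mount": mount_rootfs,
}

def daemon_dispatch(op, args):
    """The daemon itself runs unprivileged and only routes requests."""
    if op not in HELPERS:
        raise PermissionError("operation not delegated: %r" % op)
    return HELPERS[op](args)

print(daemon_dispatch("network.setup", {"container": "web1"}))
```

The payoff is that compromising the daemon no longer hands over root: an attacker gets only the small, auditable set of delegated operations.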

[1] http://blog.docker.io/2013/08/containers-docker-how-secure-a...


If this is an argument against trusting Linux's specific current implementation of OS-level containerization/resource-limiting, I can buy that. But I don't think full virtualization is the only or best answer. Solaris Zones (and their derivatives via OpenSolaris/Illumos) are a pretty solid implementation of the concept that has a good security record with quite a bit of production use. FreeBSD jails are also pretty solid nowadays, though they used to be somewhat buggy.


Another example is OpenVZ, which offers an alternative to lxc for Linux. It requires patching the kernel, but is considered solid enough that multiple VPS providers actually use OpenVZ under the hood.

Note: there will probably be an OpenVZ backend available for Docker at some point :)


> Unfortunately Docker prevents hosting environments from employing some of the most potent security mitigations added to Linux recently.

You list various facts that are mostly correct, but your conclusion is wrong. Docker absolutely does not reduce the range of security mitigations available to you.

Your mistake is to present docker as an alternative to those security mitigations. It's not an alternative - it presents you with a sane default which can get you pretty far (definitely further than you are implying). When the default does not fit your needs, you can fit Docker into a security apparatus that does.

The current default used by docker is basically pivot_root + namespaces + cgroups + capdrop, via the lxc scripts and a sane locked down configuration. Combined with a few extra measures like, say, apparmor confinement, dropping privileges inside the container with `docker run -u`, and healthy monitoring, you get an environment that is production-worthy for a large class of payloads out there. It's basically how Dotcloud, Heroku and almost every public "paas" service out there works. It's definitely not a good environment for all payloads - but like I said, it is definitely more robust than you imply.

So your first mistake is to dismiss the fact that linux containers are in fact an acceptable sandboxing mechanism for many payloads out there.

Your second mistake is to assume that if your payloads need something other than linux containers, you can't use Docker. Specifically:

> You cannot treat a docker container like a virtual machine – code running in the container has almost unfettered access to the parent kernel, and the millions of lines of often-buggy C that involves. For example with the right kernel configuration, this approach leaves the parent machine vulnerable to the recent x86_32 vulnerability (http://seclists.org/oss-sec/2014/q1/187) and many similar bugs in its class.

> The containers code in Linux is also so new that trivial security bugs are being found in it all the time – particularly in sysfs and procfs. I don't have a link right now, though LWN wrote about one a few weeks back.

> While virtual machines are no security panacea, they diverge in what classes of bugs they can be affected by. Recent Qemu/libvirt supports running under seccomp.. ensuring even if the VM emulator is compromised, the host kernel's exposure remains drastically limited. Unlike qemu, you simply can't apply seccomp to a container without massively reducing its usefulness, or using a seccomp policy so liberal that it becomes impotent.

Of course you're right, sometimes a container is not enough for sandboxing and you need a VM. Sometimes even a VM is not enough and you need physical machines. That's fine. Just install docker on all of the above, and map containers to the underlying machines in a way that is consistent with your security policy. Problem solved.
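
The "map containers to machines in a way that is consistent with your security policy" step can be sketched as a toy scheduler that never co-locates containers from mutually untrusting tenants on the same host. Everything here is hypothetical; Docker itself does no such scheduling.

```python
# Toy placement policy: a host may only run containers from one tenant.

def place(containers, hosts):
    """Greedily assign each (name, tenant) container to a host that
    holds no container belonging to a different tenant."""
    placement = {h: [] for h in hosts}
    for name, tenant in containers:
        for h in hosts:
            tenants = {t for _, t in placement[h]}
            if not tenants or tenants == {tenant}:
                placement[h].append((name, tenant))
                break
        else:
            raise RuntimeError("no host satisfies the policy for %s" % name)
    return placement

hosts = ["host-a", "host-b"]
containers = [("web", "acme"), ("db", "acme"), ("web", "globex")]
result = place(containers, hosts)
print(result)
```

The "hosts" can be VMs or physical machines; the container boundary then only separates payloads that already trust each other.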

> You could use seccomp with Docker by nesting it within a VM, but at that point Docker loses most of its value (and could be trivially replaced by a shell script with a cute UI).

That's your judgement to make, but I'm going to go out on a limb and say that you haven't actually used Docker that much :) Docker is commonly used in combination with VMs for security, so at least some people find it useful.

> Finally when a bug is discovered and needs to be patched, or a machine needs to be taken out of service, there is presently no easy way to live-migrate a container to another machine. The most recent attempt (out of I think 3 or 4 now) to add this ability to Linux appears to have stalled completely.

In my opinion live migration is a nice-to-have. Sure, for some payloads it is critically needed, and no doubt the day linux containers support full migration those payloads will become more portable. But in practice a very large number of payloads don't need it, because they have built-in redundancy and failover at the service level, so an individual node can be brought down for maintenance without affecting the service as a whole. Live migration also has other issues; for example, it doesn't work well beyond the boundaries of your shared storage infrastructure. Good luck implementing live migration across multiple geographical regions! Service-level redundancy has been established as an ops best practice, so over time the number of payloads which depend on live migration will diminish.

> As a neat system for managing dev environments locally, it sounds great. As a boundary between mutually untrusted pieces of code, there are far better solutions, especially when the material difference in approaches amounts to a few seconds of your life at best, and somewhere south of 100mb in RAM.

To summarize: docker is primarily a system for managing and distributing repeatable execution environments, from development to production (and not just for development as you imply). It does not implement any security features by itself, but allows you to use your preferred isolation method (namespaces, hypervisor or good old physical separation) without losing the benefits of repeatable execution environments and a unified management API.


Look, I'm really glad that you're excited for docker, but name-dropping companies that are running the risk of exposing their machines does not magically invalidate the specific examples I gave. In fact, I've really no idea what purpose your reply was hoping to serve.

In the default configuration (and according to all docs I've seen), regardless of some imagined rosy future, today docker is a wrapper around Linux containers, and Linux containers today are a very poor general purpose security solution, especially for the kind of person who needs to ask the question in the first place (see also: the comment I was originally replying to).


> name dropping companies running the risk of exposing their machines does not magically invalidate the specific examples I gave

You're right. But what it does is provide anecdotal evidence that your views are not shared by a large and growing number of experienced engineers.

> In fact I've really no idea what purpose your reply was hoping to serve.

It's pretty simple: you made an incorrect statement, I'm offering a detailed argument explaining why.

> In the default configuration (and according to all docs I've seen) [...] today docker is a wrapper around Linux containers

Yes.

> [...] regardless of some imagined rosy future [...]

I only described things that are possible today, with current versions of Docker. No imagined rosy future involved :)

> and Linux containers today are a very poor general purpose security solution

I guess it really depends on your definition of "general purpose", so you could make a compelling argument either way.

But it doesn't matter because if you don't trust containers for security, you can just install Docker on a bunch of machines and make sure to deploy mutually untrusted containers on separate machines. Lots of people do this today and it works just fine.

In other words, Docker can be used for deployment and distribution without reducing your options for security. Respectfully, this directly contradicts your original comment.


> But it doesn't matter because if you don't trust containers for security, you can just install Docker on a bunch of machines and make sure to deploy mutually untrusted containers on separate machines. Lots of people do this today and it works just fine.

> In other words, Docker can be used for deployment and distribution without reducing your options for security. Respectfully, this directly contradicts your original comment.

If I understand services like Heroku correctly, they give customers access to run arbitrary code inside a container as a standard user. It would therefore be standard and unavoidable to have many different customers' applications running on the same machine, leading to the ability to exploit vulnerabilities similar to the recent x32 one. If they instead used a VM for each application, attackers would have to pierce the VM implementation, potentially plus seccomp in some cases, which is the mitigation the parent was referring to. The choice to use Docker instead of VMs limits the security options available.


>> In other words, Docker can be used for deployment and distribution without reducing your options for security. Respectfully, this directly contradicts your original comment.

>If they instead used a VM for each application, they would have to pierce the VM implementation, potentially plus seccomp in some cases, which is the mitigation the parent was referring to. The choice to use Docker instead of VMs limits the security options available.

The parent is suggesting you can use Docker as a supplement to any additional security measure one might choose (to quote: "Docker is commonly used in combination with VMs for security, so at least some people find it useful").

In your example, a person would run Docker on top of the VM, and gain "a system for managing and distributing repeatable execution environments".


You keep saying I'm wrong because, in the future, docker/containers might work in a different way to how they work today, and will be used in a way entirely different from how people use them today (and utterly contrary to how docker has been marketed to date).

AFAICT through the wall of text, the only problem you have with what I said is that Docker loses its value when combined with a VM. That's fair enough, but that was 1% of my comment.

If you're replying, please don't quote yet another wall of text, it's almost impossible to read.


I think the problem the parent has with your statement is that you are saying "Docker is a crap security alternative"... but docker isn't marketed as a security solution; it's "logical" process isolation.

It solves the "I want to run two Apaches, how do I stop them conflicting?" problem, not the "I don't trust what is being run here" problem. It has never been marketed as the latter, and you are presenting it as if it were.

Docker is a great way to build up a machine, and logically define a machine's capabilities.

Your gripe about security is just completely irrelevant; it'd be like complaining that iPhoto doesn't improve OS X security.

Edit: So the answer to the original "Is docker good for security?" I would say "maybe, but that's not its intention or focus".



