Docker 0.8: Quality, new builder features, btrfs, OSX support (docker.io)
291 points by asb on Feb 5, 2014 | 121 comments



Just to clarify on the OSX support: obviously we did not magically get Darwin to support linux containers. But we put together the easiest possible way to run Linux containers on a Mac without depending on another machine.

We do this by combining (1) docker in "client mode", which connects to (2) a super-lightweight linux VM using boot2docker.

The details are on http://docs.docker.io/en/latest/installation/mac/


It is especially important to set DOCKER_HOST=tcp:/// before you run "boot2docker init" -- I forgot to do this initially and things failed mysteriously. I had to "boot2docker delete" and re-init to get things running.
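
For anyone else hitting this, the sequence that worked for me was roughly the following (a sketch; use whatever DOCKER_HOST value the boot2docker README currently recommends):

  export DOCKER_HOST=tcp:///   # set this first, per the boot2docker README
  boot2docker delete           # throw away the broken VM if you already ran init
  boot2docker init
  boot2docker up
  docker version               # should now report both client and server versions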

Once I got that ironed out, everything is running very smoothly, and I don't have to ssh into the VM to do things. Nicely done.

My wish for 0.9 is a more streamlined installation process, possibly by simply incorporating these steps into a Homebrew formula.


Note that this is documented in the boot2docker README ;)


I've followed and refollowed those steps on OS X 10.9.1, but this is what happens:

  » docker version
  Client version: 0.8.0
  Go version (client): go1.2
  Git commit (client): cc3a8c8
  2014/02/05 23:10:55 unexpected EOF
Yet the docker server is definitely up:

  docker@boot2docker:~$ docker version
  Client version: 0.8.0
  Go version (client): go1.2
  Git commit (client): cc3a8c8
  Server version: 0.8.0
  Git commit (server): cc3a8c8
  Go version (server): go1.2
  Last stable version: 0.8.0
Tried both `export DOCKER_HOST=tcp://` and `export DOCKER_HOST=localhost` (as per boot2docker README), before re-init.


Which version of boot2docker are you using? Also, which version of VirtualBox?

See also https://github.com/steeve/boot2docker/issues/48 which has some more information about this specific issue.


Hey, thanks for the report! We're tracking that issue on https://github.com/dotcloud/docker/issues/3952 and are working with the boot2docker folks to get a fix out asap.


Thanks! I solved the issue—it was due to CrashPlan using port 4243 (doh!)

https://github.com/steeve/boot2docker/issues/48
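
If anyone else hits the same symptom, a quick way to check whether something has already grabbed the forwarded port (4243 in my case) is:

  lsof -i :4243    # shows which process is bound to the port (CrashPlan, here)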


I'm having the same issue, though running the commands provided in the GitHub issue at the bottom just results in the Crashplan service not starting... Docker still isn't functional.


Too bad it depends on VirtualBox - had a lot of kernel panics when using it, so I decided to stick with VMware fusion.


Really? When was the last time you tried it? I've been using VirtualBox for years and it's been pretty solid for me.


Are you sure it isn't caused by the mid-2010 MacBook video card bug? I had it re-exposed when I upgraded past 10.6, and that laptop now reboots 8 times a day, mostly from Mac Mail or when the power drops and it switches video cards.


FWIW, I had my 2010 mbp motherboard warranty-replaced and the crashes were gone.


Mine started failing a few months outside the replacement period (3 years). I was due for a new work laptop, so I just replaced the machine.


and I'm always told that macs are superior because they "just work," hmm


That's marketing. They have the same number of problems as all other OSes and computers. In my experience, a higher percentage of Windows users know where to look for system messages than Mac users do. Perhaps that's because I have met more Windows users, or perhaps they need to look these things up more often...? But in any case, you meet more "hardcore" Windows users who know their way around compared to Mac users, perhaps because the end-user expectation of a Mac is that it "just works", so they don't need to poke around as much.


Not sure why the downvotes here! It isn't an anti-Apple rant, it's true! My day job involves writing OSX software on a Mac Pro and at home my only machine is a MacBook Pro, where I funnily enough write OSX software!


> That's marketing. They have the same number of problems as all other OSes and computers.

Absolutely not! I have to support my parents using a computer, and I bought them a Mac mini 6-7 years ago because they would always get their Windows machine into an unusable state where I couldn't even remotely connect to help. Using a Mac, they can do the things they need to do with almost no problems: store photos, backup, email, web browsing, FaceTime, iChat. That would be impossible for them on a Windows machine.


I won't use Virtualbox either. Too many problems and I don't need yet another hypervisor when VMware Fusion works fine.


You should be able to install the same iso into a VMware image. You'll need to expose the networking ports, but the information should be in the boot2docker script.


Or even bare metal. Personally, I extract the kernel and initrd and use it directly via PXE.


Is that to say that you.. wait, what are you doing?


I have my router serving up TFTP with the following pxelinux.cfg (and the appropriate vmlinuz64 and initrd.img extracted from the b2d ISO):

  LABEL boot2docker
          MENU LABEL boot2docker v0.5.2
          KERNEL boot2docker/v0.5.2/vmlinuz64
          APPEND initrd=boot2docker/v0.5.2/initrd.img loglevel=3 user=docker


I guess s/he owns a second machine and boots boot2docker over the network (PXE = https://en.wikipedia.org/wiki/Preboot_Execution_Environment ) without installing anything on the second machine.



Note that boot2docker will expose dockerd on the network when booted on VMware Fusion too. So if you boot it there, it works all the same.


Was the OS X binary built without cgo? I can't seem to access containers in private https registries:

    $ docker login https://registry.example.com
    2014/02/05 14:36:20 Invalid Registry endpoint: Get https://registry.example.com/v1/_ping: x509: failed to load system roots and no roots provided
The hostname in question has a valid SSL certificate. I encountered a similar problem in the past with Go built from homebrew[1][2]. Has anyone else seen this?

[1] https://github.com/Homebrew/homebrew/pull/17758 [2] https://code.google.com/p/go/issues/detail?id=4791

Update: Filed a bug against docker, others are having the same issue. https://github.com/dotcloud/docker/issues/3946


Thanks for the report. We are working on a fix ASAP and will post updates on https://github.com/dotcloud/docker/issues/3683.


In the meantime, if you need to get it working right now, you can build your own binary as outlined here: http://blog.devtable.com/2014/01/using-docker-on-osx-with-pr...

We've confirmed the instructions still work with Docker 0.8 (make sure to change the checkout branch though :))


Glad to hear OS X has official support. I jumped into Docker for the first time last week and have a burning unresolved question for those using boot2docker.

What is your development workflow? I am working on a Rails app, so my instinct is to have a shared folder between OS X and boot2docker, but afaik this is not supported as boot2docker doesn't support VirtualBox guest extensions.


Hi Matt, you are not alone :)

It turns out that shared folders are not a sustainable solution (regardless of whether boot2docker supports them), so best practices are converging towards this:

1) While developing, your dev environment (including the source code and method for fetching it) should live in a container. This container could be as simple as a shell box with git and ssh installed, where you keep a terminal open and run your unit tests etc.

2) To access your source code on your host machine (e.g. for editing on your Mac), export it from your container over a network filesystem: Samba, NFS or 9p are popular examples. Then mount that from your Mac. Samba can be natively mounted with Command-K; NFS and 9p require macfuse. (A rough sketch follows after this list.)

3) When building the final container for integration tests, staging and production, go through the full Dockerfile + 'docker build' process. 'docker build' on your mac will transparently upload the source over the docker remote API as needed.
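
As a rough sketch of step 2 (the image name, share and VM address below are made up; adapt to your setup):

  # run a dev container that exports /src over Samba (hypothetical image)
  docker run -d --name devbox -p 139:139 -p 445:445 example/devbox-samba
  # on the Mac, mount it (replace 192.168.59.103 with your boot2docker VM's address)
  mkdir -p ~/devbox-src
  mount_smbfs //docker@192.168.59.103/src ~/devbox-src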

There are several advantages to exporting the source from the container to the host, instead of the other way around:

- It's less infrastructure-specific. If you move from virtualbox to vmware, or get a Linux laptop and run docker straight on the metal, your storage/shared folders configuration doesn't change: all you need is a network connection to the container.

- Network filesystems are more reliable than shared folders + bind-mount. For example they can handle different permissions and ownership on both ends - a very common problem with shared folders is "oops the container creates files as root but I don't have root on my mac", or "apache complains that the permissions are all wrong because virtualbox shared folders threw up on me".

That said, we need to take that design insight and turn it into a polished user experience - hopefully in Docker 0.9 this will all be much more seamless!


Thanks for taking the time to write this. I've hit a major wall in figuring out the best workflow for this exact scenario. Good to finally hear an official suggestion on the matter. I've been depending on shared directories, so I'll definitely be experimenting with network filesystems.

As Docker evolves, it would be great to have some kind of official resource to get suggestions for optimal workflows as new features become available (the weekly docker email is my best resource right now). Searching the internet for info has been a huge chore as most of the resources (including the ones hosted by docker.io) are woefully out of date.


> As Docker evolves, it would be great to have some kind of official resource to get suggestions for optimal workflows as new features become available

Yes! We are trying to figure this out. Our current avenue for this is to dedicate a new section in the docs to use cases and best practices.

As you point out, our docs (and written content in general) are often inaccurate. We need to fix this. Hopefully in the coming weeks you will start seeing notable improvements in these areas.

Thanks for bearing with us!


"- It's less infrastructure-specific...." - "a very common problem with shared folders is "oops the container creates files as root but I don't have root on my mac", or "apache complains that the permissions are all wrong because virtualbox shared folders threw up on me"."

Thank you for taking the time to write this, just to emphasize these two pain points. I've been using Docker since 0.5 and my current setup is still based around sharing from host to guest. The problems you mention obviously aren't deal breakers (at least for me), but the accumulated effort of dealing with these issues (especially having to modify permissions) adds up over time.

Here's a concern and a hypothetical, though, and I'd like some insight (or a facepalm) from others if I'm wrong...

Say I'm collaborating with a few people on a Rails app, and we all work within a Docker container we build from a Dockerfile located in our source control, using the guest-to-host setup you outline. What happens if one of my developers accidentally pushes that container to Docker's public registry? Is my billion dollar ( ;) ) Rails app stored in that container and suddenly available to anyone who wants to pull it?

I would hope the above is a far-fetched example, but with host-to-guest sharing I at least have some safeguard in knowing that my data is decoupled from my configuration. Is such decoupling worthwhile in your opinion?


I push my local containers to my ($5/month) quay.io account to address the problem you are describing. If all of your containers start with quay.io/name then you don't have to worry about exposing your Docker images.
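
In practice that just means tagging and pushing with the registry prefix, e.g. (names are placeholders):

  docker tag myapp quay.io/myuser/myapp
  docker push quay.io/myuser/myapp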


This should be a blog post, tutorial, or guide on the documentation page. It would be really helpful to have guidance on how to structure a development workflow using Docker while avoiding bad practices like shared volumes.


Seconded. My biggest stumbling block with Docker at the moment is “best practices”, as I work on coming up with a Docker Dev Ecosystem (for myself and for a team).


Same here. I've cooked something built around bash scripts and guest-to-host sharing of source code, and I can't help but have a nagging feeling that it isn't as good or correct as it should be...or perhaps it's just totally wrong altogether.

In the absence of "best practices", even a discussion thread somewhere that allows Docker users to pick apart and discuss configurations would be helpful. Pretty much all I've been able to find is a smattering of blog posts.


In my experience, Samba and NFS are awfully slow when working with big projects. When you use an IDE or editor that indexes all files for fast search and IntelliSense, using NFS/Samba is problematic imo. That's why I like the Vagrant approach: I can edit the code with all the speed of my local tools, and just the VM accesses them through NFS, which is fast enough for serving requests in 1-2s.


In your experience, are NFS and Samba really slow on a virtual network? I have trouble imagining that it would slow things down enough to be a problem.


I would really be interested in details on how to connect the filesystem between OSX and docker/linux via 9p.

What would be the recommended way? How to install the required software on either system. Host on OSX or on Linux?

On a recent Linux, it seems like modprobe 9p activates the required module, and then a mount -t 9p serverIP /mountpoint does the trick.
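
For reference, the Linux side usually looks roughly like this (standard v9fs options; the server address is a placeholder):

  modprobe 9p
  modprobe 9pnet_tcp   # TCP transport, if it isn't pulled in automatically
  mount -t 9p -o trans=tcp,port=564,version=9p2000.L 192.168.1.10 /mnt/src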

But what about the OSX side?


Thank you for this. I found it extremely helpful.


You can use Docker + Makefiles (test, binary, etc) to really help you with this.


Is Docker a good way to bring more security to a server with a few different websites? Separating the sites from each other and running nginx as a proxy in front of them?

What's the overhead?


Unfortunately Docker prevents hosting environments from employing some of the most potent security mitigations added to Linux recently.

You cannot treat a docker container like a virtual machine – code running in the container has almost unfettered access to the parent kernel, and the millions of lines of often-buggy C that involves. For example with the right kernel configuration, this approach leaves the parent machine vulnerable to the recent x86_32 vulnerability (http://seclists.org/oss-sec/2014/q1/187) and many similar bugs in its class.

The algorithms in the running kernel are far more exposed too - instead of managing a single process+virtual network+memory area, all the child's resources are represented concretely in the host kernel, including its filesystem. For example, this vastly increases the likelihood that a child could trigger an unpatched DoS in the host, e.g. the directory hashing attacks that have affected nearly every filesystem implementation at some point (including btrfs as recently as 2012).

The containers code in Linux is also so new that trivial security bugs are being found in it all the time – particularly in sysfs and procfs. I don't have a link right now, though LWN wrote about one a few weeks back.

While virtual machines are no security panacea, they diverge in what classes of bugs they can be affected by. Recent Qemu/libvirt supports running under seccomp, ensuring that even if the VM emulator is compromised, the host kernel's exposure remains drastically limited. Unlike qemu, you simply can't apply seccomp to a container without massively reducing its usefulness, or using a seccomp policy so liberal that it becomes impotent.

You could use seccomp with Docker by nesting it within a VM, but at that point Docker loses most of its value (and could be trivially replaced by a shell script with a cute UI).

Finally when a bug is discovered and needs to be patched, or a machine needs to be taken out of service, there is presently no easy way to live-migrate a container to another machine. The most recent attempt (out of I think 3 or 4 now) to add this ability to Linux appears to have stalled completely.

As a neat system for managing dev environments locally, it sounds great. As a boundary between mutually untrusted pieces of code, there are far better solutions, especially when the material difference in approaches amounts to a few seconds of your life at best, and somewhere south of 100mb in RAM.


This is all true.

If a web application has a vulnerability that allows arbitrary code execution then Docker is only a mild help.

BUT, it can help mitigate a certain set of security problems. It is a very simple way to provide pretty good protection against file-traversal type vulnerabilities, even when combined with privilege escalation.

People shouldn't view Docker as a security "silver bullet". But at the same time it does provide an additional layer of security, and that layer can be useful.

The Docker people have a good post[1] about the Docker security model, and they list two future improvements they see as important:

"map the root user of a container to a non-root user of the Docker host, to mitigate the effects of a container-to-host privilege escalation;"

and

"allow the Docker daemon to run without root privileges, and delegate operations requiring those privileges to well-audited sub-processes, each with its own (very limited) scope: virtual network setup, filesystem management, etc."

I think most people would agree these are important goals.

[1] http://blog.docker.io/2013/08/containers-docker-how-secure-a...


If this is an argument against trusting Linux's specific current implementation of OS-level containerization/resource-limiting, I can buy that. But I don't think full virtualization is the only or best answer. Solaris Zones (and their derivatives via OpenSolaris/Illumos) are a pretty solid implementation of the concept that has a good security record with quite a bit of production use. FreeBSD jails are also pretty solid nowadays, though they used to be somewhat buggy.


Another example is OpenVZ, which offers an alternative to lxc for Linux. It requires patching the kernel, but is considered solid enough that multiple VPS providers actually use OpenVZ under the hood.

Note: there will probably be an OpenVZ backend available for Docker at some point :)


> Unfortunately Docker prevents hosting environments from employing some of the most potent security mitigations added to Linux recently.

You list various facts that are mostly correct, but your conclusion is wrong. Docker absolutely does not reduce the range of security mitigations available to you.

Your mistake is to present docker as an alternative to those security mitigations. It's not an alternative - it presents you with a sane default which can get you pretty far (definitely further than you are implying). When the default does not fit your needs, you can fit Docker into a security apparatus that does.

The current default used by docker is basically pivot_root + namespaces + cgroups + capdrop, via the lxc scripts and a sane locked down configuration. Combined with a few extra measures like, say, apparmor confinement, dropping privileges inside the container with `docker run -u`, and healthy monitoring, you get an environment that is production-worthy for a large class of payloads out there. It's basically how Dotcloud, Heroku and almost every public "paas" service out there works. It's definitely not a good environment for all payloads - but like I said, it is definitely more robust than you imply.
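
For example, dropping root inside the container is a one-flag affair (image and user names here are placeholders):

  docker run -u www-data -d example/webapp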

So your first mistake is to dismiss the fact that linux containers are in fact an acceptable sandboxing mechanism for many payloads out there.

Your second mistake is to assume that if your payloads need something other than linux containers, you can't use Docker. Specifically:

> You cannot treat a docker container like a virtual machine – code running in the container has almost unfettered access to the parent kernel, and the millions of lines of often-buggy C that involves. For example with the right kernel configuration, this approach leaves the parent machine vulnerable to the recent x86_32 vulnerability (http://seclists.org/oss-sec/2014/q1/187) and many similar bugs in its class.

> The containers code in Linux is also so new that trivial security bugs are being found in it all the time – particularly in sysfs and procfs. I don't have a link right now, though LWN wrote about one a few weeks back.

> While virtual machines are no security panacea, they diverge in what classes of bugs they can be affected by. Recent Qemu/libvirt supports running under seccomp.. ensuring even if the VM emulator is compromised, the host kernel's exposure remains drastically limited. Unlike qemu, you simply can't apply seccomp to a container without massively reducing its usefulness, or using a seccomp policy so liberal that it becomes impotent.

Of course you're right, sometimes a container is not enough for sandboxing and you need a VM. Sometimes even a VM is not enough and you need physical machines. That's fine. Just install docker on all of the above, and map containers to the underlying machines in a way that is consistent with your security policy. Problem solved.

> You could use seccomp with Docker by nesting it within a VM, but at that point Docker loses most of its value (and could be trivially replaced by a shell script with a cute UI).

That's your judgement to make, but I'm going to go out on a limb and say that you haven't actually used Docker that much :) Docker is commonly used in combination with VMs for security, so at least some people find it useful.

> Finally when a bug is discovered and needs to be patched, or a machine needs to be taken out of service, there is presently no easy way to live-migrate a container to another machine. The most recent attempt (out of I think 3 or 4 now) to add this ability to Linux appears to have stalled completely.

In my opinion live migration is a nice-to-have. Sure, for some payloads it is critically needed, and no doubt the day linux containers support full migration those payloads will become more portable. But in practice a very large number of payloads don't need it, because they have built-in redundancy and failover at the service level. So an individual node can be brought down for maintenance without affecting the service as a whole. Live migration also has other issues, for example it doesn't work well beyond the boundaries of your shared storage infrastructure. Good luck implementing live migration across multiple geographical regions! This has been established as ops best practice, so over time the number of payloads which depend on live migration will diminish.

> As a neat system for managing dev environments locally, it sounds great. As a boundary between mutually untrusted pieces of code, there are far better solutions, especially when the material difference in approaches amounts to a few seconds of your life at best, and somewhere south of 100mb in RAM.

To summarize: docker is primarily a system for managing and distributing repeatable execution environments, from development to production (and not just for development as you imply). It does not implement any security features by itself, but allows you to use your preferred isolation method (namespaces, hypervisor or good old physical separation) without losing the benefits of repeatable execution environments and a unified management API.


Look, I'm really glad that you're excited for docker, but name dropping companies running the risk of exposing their machines does not magically invalidate the specific examples I gave. In fact I've really no idea what purpose your reply was hoping to serve.

In the default configuration (and according to all docs I've seen), regardless of some imagined rosy future, today docker is a wrapper around Linux containers, and Linux containers today are a very poor general purpose security solution, especially for the kind of person who needs to ask the question in the first place (see also: the comment I was originally replying to)


> name dropping companies running the risk of exposing their machines does not magically invalidate the specific examples I gave

You're right. But what it does is provide anecdotal evidence that your views are not shared by a large and growing number of experienced engineers.

> In fact I've really no idea what purpose your reply was hoping to serve.

It's pretty simple: you made an incorrect statement, I'm offering a detailed argument explaining why.

> In the default configuration (and according to all docs I've seen) [...] today docker is a wrapper around Linux containers

Yes.

> [...] regardless of some imagined rosy future [...]

I only described things that are possible today, with current versions of Docker. No imagined rosy future involved :)

> and Linux containers today are a very poor general purpose security solution

I guess it really depends on your definition of "general purpose", so you could make a compelling argument either way.

But it doesn't matter because if you don't trust containers for security, you can just install Docker on a bunch of machines and make sure to deploy mutually untrusted containers on separate machines. Lots of people do this today and it works just fine.

In other words, Docker can be used for deployment and distribution without reducing your options for security. Respectfully, this directly contradicts your original comment.


> But it doesn't matter because if you don't trust containers for security, you can just install Docker on a bunch of machines and make sure to deploy mutually untrusted containers on separate machines. Lots of people do this today and it works just fine.

> In other words, Docker can be used for deployment and distribution without reducing your options for security. Respectfully, this directly contradicts your original comment.

If I understand services like Heroku correctly, they give customers standard access to run arbitrary code inside a container as a standard user. Therefore, I expect it would be standard and unavoidable to have many different customers' applications running on the same machine, leading to the ability to exploit vulnerabilities similar to the recent x32 one. If they instead used a VM for each application, they would have to pierce the VM implementation, potentially plus seccomp in some cases, which is the mitigation the parent was referring to. The choice to use Docker instead of VMs limits the security options available.


>> In other words, Docker can be used for deployment and distribution without reducing your options for security. Respectfully, this directly contradicts your original comment.

>If they instead used a VM for each application, they would have to pierce the VM implementation, potentially plus seccomp in some cases, which is the mitigation the parent was referring to. The choice to use Docker instead of VMs limits the security options available.

The parent is suggesting you can use Docker as a supplement to any additional security measure one might choose (to quote: "Docker is commonly used in combination with VMs for security, so at least some people find it useful").

In your example, a person would run Docker on top of the VM, and gain "a system for managing and distributing repeatable execution environments".


You keep saying I'm wrong, because in the future docker/containers might work in a different way to how they work today, and will be used in a way entirely different to how people use them today (and utterly contrary to how docker has been marketed to date).

AFAICT through the wall of text, the only problem you have with what I said is that Docker loses its value when combined with a VM. That's fair enough, but that was 1% of my comment.

If you're replying, please don't quote yet another wall of text, it's almost impossible to read.


I think the problem the parent has with your statement is that you are saying "Docker is a crap security alternative"... but docker isn't marketed as a security solution; it's "logical" process isolation.

It solves the "I want to run two apache's how do I stop them conflicting" problem. Not the "I don't trust what is being run here" problem, it has never been marketed as that, and you are presenting it as if it was.

Docker is a great way to build up a machine, and to logically define a machine's capabilities.

Your gripe about security is just completely irrelevant; it'd be like complaining that iPhoto doesn't increase OS X security.

Edit: So the answer to the original "Is docker good for security?" I would say "maybe, but that's not its intention or focus".


It depends. If the goal is to consolidate several boxes onto a single VM, it does that. Be sure host-based (on the Linux box) firewall rules are set and documented. If possible, set network-based firewall rules also (AWS security groups).
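
A minimal host-firewall sketch for that nginx-proxy setup might look like this (adjust ports and policy to your environment; note that ports published with docker -p are handled in the FORWARD/nat chains that docker manages, so audit those too):

  iptables -A INPUT -i lo -j ACCEPT
  iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
  iptables -A INPUT -p tcp --dport 22 -j ACCEPT                    # ssh
  iptables -A INPUT -p tcp -m multiport --dports 80,443 -j ACCEPT  # nginx proxy
  iptables -P INPUT DROP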


Yes, it is. It’s atop LXC (Linux Containers), which - oversimplified - is process namespacing, jails, etc.

There’s near zero overhead, because there’s no virtualization.


It's not as good as using separate VMs, but it has arguably lower overhead.

Also, it's not really Docker doing that, it's LXC. Docker is an API around it.


Docker is not an API around LXC. Any more than any application using a framework is a wrapper around said framework.

However, the analogy I just made is only at a point in time. LXC is not the only Linux provider to manage cgroups/namespaces, it was just the most convenient at the time for the target audience. That is a fleeting position soon remedied.

If you'd like to know more, I'd encourage you to get involved with Docker development.


Yeah, Docker is an API around namespaces... and thus LXC, right now.

thanks for nothing really.


So what's the solution for 'root inside a docker container is root on the host'?

We'd like to ship a set of utilities as a docker container, but unless the sysadmin gives everyone 'sudo' privileges on the server (unlikely and insecure), they can't run the container and its utilities.

Any advice?


Look for an update on this in 0.9 :)

Future versions of the Docker API will natively support scoping. This means that each API client will see a different subset of the Docker engine depending on the credentials and origin of the connection. This will be implemented in combination with introspection, which allows any container to open a connection to the Docker engine which started it.

When you combine scoping and introspection, you get really cool scenarios. For example, let's say your utility is called "dockermeister". Each individual user could deploy his own copy of dockermeister, in a separate container. Each dockermeister container would in turn connect to Docker (via introspection), destroy all existing containers, and create 10 fresh redis containers (for reasons unknown). Because each dockermeister container is scoped, it can only remove containers that are its children (ie that were created from the same container at an earlier time). So they cannot affect each other. Likewise, the 10 new redis containers will only be visible to that particular user, and not pollute the namespace of the other users.

Of course scoping works at arbitrary depth levels... so you could have containers starting containers starting containers. Containers all the way down :)


User namespaces will solve this. LXC in Ubuntu 14.04 already supports Docker in user (unprivileged) LXC, i.e. no root needed.

Add the silly disclaimer: yes, Docker has some notion of pluggable container backends, but it uses LXC at the moment, and the feature is in LXC upstream.


Awesome that OSX support is now official, but is there any benefit to using this process as opposed to using docker-osx? https://github.com/noplay/docker-osx

The official installation process seems more complicated, and I don't really see an advantage.


I'm curious about how the focus on multiple, ABI-incompatible platforms will affect the pace and momentum of Docker development. So far, Docker has benefitted a lot from the focus on amd64 userland on Linux.


Docker on OSX is actually running on a lightweight Linux VM, so it's not really multi-platform support they've added.


Personally, when I read "OSX support", I thought that meant that there would now be containers with Darwin-ABI binaries inside them. So on Linux, you'd use cgroups for Linux-ABI binaries and a VM for Darwin-ABI, just as on OSX you use a VM for Linux-ABI (and presumably would use the OSX sandbox API for Darwin-ABI containers.)

This "native sandboxing for own-ABI if available, VM if not, and VM for everything else" approach would extend to any other platform as well, I'd think (Windows, for example.) I'm surprised that this isn't where Docker is going, at least for development and testing of containers.

(Though another alternative, probably more performant for production, would be something like having versions of CoreOS for each platform--CoreOS/Linux, CoreOS/Darwin, CoreOS/NT, and so on--so you'd have a cluster of machines with various ABIs, where any container you want to run gets shipped off to a machine in the cluster with the right ABI for it.)


Going that way would dilute Docker's value. Docker's promise is that you can build a container and it will always work; it won't mysteriously break in production or give you installation headaches. To do that, your development environment has to be as close to the production environment as possible. Having a totally different ABI doesn't help with that goal.


Our priority in the short term is definitely to focus on the Linux ABI and making it available on as many physical machines as possible. This is the reasoning behind our current OSX support, and support for more platforms coming soon.

Longer term we do need to support multiple ABIs, if only because a lot of people want to use Docker on x86-32 and ARM. Having ELF binaries built on Linux isn't of much help if they're built for another arch :) So at the very least we will need to support 3 ABIs in the near future.

The good news is that it can be done in a way which doesn't hurt the repeatability of Docker's execution environment. Think of it this way: every container has certain requirements to run. At the very least it needs a certain range of kernels and archs (and yes it's possible, although uncommon for a binary to support multiple archs). It may also require a network interface to bind a socket on a certain TCP port. It may require certain syscalls to be authorized. It may require the ability to create a tun/tap device. And so on.

Docker's job is to offer a portable abstraction for these requirements, so that the container can list what it needs on the one hand, the host can list what it offers on the other, and docker can match them in the middle. If the requirements listed by a given container aren't met ("I need CAP_SYSADMIN on a 3.8 linux kernel and an ARM processor!") then docker returns an error and a clear error message. If they are met, the container is executed and must always be repeatable.

TLDR: ABI requirements are just one kind of requirements. Docker can handle multiple requirements without breaking the repeatability of its execution environment.


Your reply primarily addresses architecture support and the implications for pre-built payloads. But I think a more important concern is the fragmentation that would result if Docker attempted to natively support other operating systems. Consider that practically every Dockerfile starts with a Linux distro, and includes commands specific to that distro (e.g. installing packages with apt). Everybody assumes that the payload is Linux-based, and it all just works. How would it work if Docker also supported FreeBSD jails, Illumos zones, or whatever other options are up for consideration? Would the public registry of Docker images now be fragmented along OS lines? Or would Docker try to automagically smooth over the fragmentation by firing up VMs when the host and container operating systems don't match? In the latter case, would every Docker installation then require a working hypervisor?

Considering that the overwhelming majority of Unix servers are running Linux, I think it's better to say that Docker is Linux-based, end of discussion.


I think what he's saying is that although Docker will support other Linux ABIs, it will stay with Linux.


Note that you need to be running boot2docker 0.5.2 [1] (with docker 0.8) for docker to work properly on OS X.

[1] https://github.com/steeve/boot2docker/releases/tag/v0.5.2


IMO, it would be fantastic if there was something like Docker for Windows. Imagine being able to bundle up games in individual containers and easily being able to move them from machine to machine as you upgrade. Same thing applies for other Windows apps.


Docker is to Unix almost as Microsoft App-V/VMware ThinApp/Symantec Workspace Virtualization products are to Windows.

Not exactly the same but closer.


Precisely, it's not the same. There's still a whole bunch of fiddling going on.

I'll stick to my statement. I want something Docker-like for Windows so that I can easily move things from one machine to another.


It seems like they are working on it: "Microsoft Corporation : Patent Issued for Extensible Application Virtualization Subsystems" http://www.4-traders.com/MICROSOFT-CORPORATION-4835/news/Mic...


The developers of DRM for Windows games would probably make sure that doesn't work.


Not if it's an option for installation. Or if they start shipping them like that. Steam springs to mind. :)


I just tried the Docker interactive tutorial. It was fun, but I still don't get the point of using Docker. I've just been hearing a lot about it, and it's getting too much buzz.


I am interested in the BTRFS support in particular; it is clear that filesystem performance is key. However, what I like the most about Docker is the ability to use layers and diff them. In effect, I want version control for images, because it allows me to not run extra provisioning tools for the images (and just rely on simple ZooKeeper stuff for app config). Whatever gives me 'vcs' for images in the most performant way wins in my book.


It's not explicitly said, but are they following Semantic Versioning? http://semver.org/


Not exactly. The article says the first number is for major lifecycle events, ie 1.0 means "production ready". They'll be releasing monthly and the second number will be the release increment. The third number will be for patches and fixes.

So to me that doesn't fit in with the Semantic Versioning contract. I think the product is too young yet to use a version scheme that assumes relative API stability.


Well, to be fair, this is why even with semver 0.x means "anything goes". It's only from 1.x onwards that major version increments should be used for backwards-incompatible changes.


Well, I would assume 0.x.x is by nature unstable, so I'm not sure how it doesn't fit. I read the article, but I'm still left wondering for clarification.


It's confusing why btrfs support was prioritized ahead of zfs considering zfs' superior architecture and ops capabilities. Is docker (formerly dotcloud) going to start withholding capabilities as licensed features?

Edit: prelim zfs driver work is here https://github.com/gurjeet/docker/tree/zfs_driver


We've tried to be plainly open that going the 'open core' route is in no one's best interest.

Swappable storage engines will be easier to create over time, not harder. There's also a ZFS branch, but the reality is people spent time and resources on getting BTRFS (which has been experimental for >6mos) instead of ZFS.

Docker development works a lot like Linux development (just on a much, much smaller scale.) If there's an area where you're comfortable committing, the barrier to entry is minimal. All you need is 2 maintainers to agree to your addition and it's merged in. So get on it!


Why isn't open core in Docker Inc.'s best interest? I wouldn't rule it out just yet. One nice thing about Docker's choice of license is that anyone can do an enhanced commercial version, not just Docker Inc.


Gurjeet is on it.

Edit - Docker's great. If I were an investor, how would you guys monetize it? Prof svcs, support? I could see folks paying for a dashboard, cloud controller w/ API and an easy-to-deploy OpenStack-like setup.


We were pretty open with plans in our Series B announcement a couple of weeks ago.

Basically - investing more in open source, investing in the docker platform, investing in commercial capabilities

So as you can imagine, we're meeting with a ton of companies and hiring really good people who want to be part of something pretty amazing.


Probably just due to the licensing issues surrounding zfs and linux.


Check out ZFS on Linux: kmod binary distribution is likely okay, and the DKMS method shifts compiling to the end user, which avoids the issue entirely.


Docker doesn't prevent use of ZFS.

It's common and easy to mount your host FS into the container, putting mutating data where you can take full advantage of the superior architecture and ops capabilities of whichever FS you prefer.
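
For example (paths and image name are placeholders):

  # keep mutating data on a ZFS-backed host path, bind-mounted into the container
  docker run -v /tank/appdata:/var/lib/app -d example/app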

The images' internal AUFS/BTRFS layers are then only for keeping your binaries-at-rest and static configuration straight. They may as well be in highly indexed ZIP files, for all you care.


ZFS cannot legally be distributed with the Linux kernel and BTRFS already is supported in-kernel so would presumably be easier to support...


What superior architecture does zfs have?


Online scrubbing, so no downtime waiting for fsck, for one. If you'd used it, you'd know how many hard-won production lessons Solaris devs poured into making ZFS better from the ground up. btrfs is Oracle's NIH syndrome: reinventing the wheel instead of developing one that already had, pun intended, traction.


btrfs has online scrubbing.

btrfs is the response to Sun picking an incompatible license. When that is removed, ZFS might get more interesting for a lot of people.


Oracle acquired Sun, so they could have solved it by just choosing another license moving forward. One gotcha is that the ZFS (Solaris core) team vehemently resisted anything GPL-compatible. Something like a BSD license would make the most commercial sense. Instead, Oracle has a consistent pattern of losing community goodwill, which loses customer interest and pushes developers to fork.


IIRC their own developers were so concerned about Oracle's attitude that they went out to get third parties to add core components so that it could not be Oracle'd :\


My understanding is that btrfs is still in catchup mode for the foreseeable future, but might eventually cover the distance.

Has btrfs jumped ahead of zfs in ways I haven't heard about?

Edit - this is my first search result:

http://rudd-o.com/linux-and-free-software/ways-in-which-zfs-...


I'd be biased to agree since that's Manuel's blog, someone I used to work with. I've supported 24x7 and 9x5 ops where downtime was unacceptable. zfs makes it a whole lot easier to perform upgrades, know data and metadata are solid and send snapshots around.


> ZFS uses atomic writes and barriers

This about settles the question for me. Assuming that the implication that btrfs performs otherwise holds true.


Barriers are also used in btrfs.

And from what I can tell, clone operations are also handled atomically.

Though I kinda wonder what exactly is meant by atomic writes.


Thanks for clearing that up.

I might have to dig into it a bit more.


Yeah, it depends on the use case. For home directories and high-risk items like financial stuff, testing that barrier writes are happening is a good thing. You don't want a storm to knock out a DC only to learn that the hw/sw filesystem stack was lying to you at some level.


The first parts are right (though I don't know how accurate) and could certainly be improved with tooling.

Testing will come with time.

With raidz, sure, it's great, but the vdevs being immutable is really rather annoying. The way btrfs handles multi-device stuff is significantly better: replication is not between 2 devices but closer to the file (it allocates a chunk of space and decides where to put the other replica in the pool).

Though I wish the erasure coding stuff would land faster.

btrfs has had send/receive for a while.

I haven't needed to dig into the btrfs man pages yet, so I can't comment on how accurate this is.

btrfs also uses barriers. Log devices and cache devices are awesome; I hope btrfs adds them.

The block device thing is a limitation of btrfs and annoys me, though I've slowly moved to just having files (though in anything largish I would probably be moving to a distributed fs anyway).

The sharing stuff is good, but I think that's a tooling issue, not an fs issue.

btrfs has an out-of-band dedup, allowing you to run periodic dedup without the memory penalty of live dedup (though it costs disk).


Does btrfs have anything comparable to raidz3?


That would be the erasure coding that they are working on. It is not yet available.


* ZFS is a volume manager.

* ZFS is a RAID manager.

* ZFS is also a filesystem.

* Writes are handled in transaction groups (TXGs).

* Every transaction group is written atomically.

* ZFS keeps a revision history of the past 128 transactions written to disk.

* ZFS is a Copy on Write filesystem.

* As such, due to the previous 2 features, snapshots are free.

* Snapshots are first class, read-only filesystems.

* Snapshots can be upgraded to read-write clones.

* Snapshots can be sent and received to other locations.

* ZFS uses block-level deduplication.

* ZFS supports transparent compression.

* Every metadata and block data is checksummed with SHA256 by default.

* Other checksum algorithms are supported.

* ZFS uses a "slab allocator" to minimize fragmentation.

* ZFS implements an "intent log" for synchronous writes.

* The intent log can be migrated to a fast SSD or NVRAM drive.

* ZFS uses advanced caching, implementing MRU/LRU and MFU/LFU caches.

* A secondary cache (outside of RAM) can be installed on fast SSDs.

* ZFS uses dynamic striping with its RAID arrays.

* ZFS supports triple parity RAID.

* ZFS autoheals bit rot when a block does not match its checksum, if the pool is redundant.

* ZFS fully supports advanced format disks (4k blocks and beyond).

* In fact, block sizes are dynamic from 512 bytes to 128K (or 1M in the proprietary ZFS).

* In the proprietary release of ZFS, native encryption is supported.

* In the Free Software release of ZFS, "feature flags" have been introduced to add on "plugins" without changing the core of the filesystem.

* ZFS supports native NFS, allowing the mount to be available before the export.

* ZFS supports native SMB for the same reason.

* ZFS supports native iSCSI, also for the same reason.

* ZFS can create static sized block devices called "ZVOLS".

* ZFS pools can be exported and imported.

* ZFS "scrubs" data to find blocks that do not match their checksum.

* The Free Software release of ZFS is supported on GNU/Linux, OpenIndiana, SmartOS, FreeBSD, and many other operating systems.

* Administration of ZFS is done via 3 commands: zpool(8), zfs(8) and zdb(8).
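
A small taste of that admin surface (device, pool and dataset names below are placeholders):

  zpool create tank raidz2 sda sdb sdc sdd        # pool + RAID in one step
  zfs create -o compression=on tank/data          # filesystem with transparent compression
  zfs snapshot tank/data@before-upgrade           # instant, free snapshot
  zfs send tank/data@before-upgrade | ssh backup zfs receive backuppool/data
  zpool scrub tank                                # online scrub; check progress with zpool status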


Great list.

L2ARC and ZIL can each have their own volume configuration (mirror, etc.), for example using different types of SSDs for each. http://forums.freenas.org/index.php?threads/zfs-and-ssd-cach...

zfs send & receive ... Send snapshots around like a fancy SAN.

raidz (N+1 - like raid5) raidz2 (N+2 - like raid6) raidz3 (N+3)

It's also way faster and cheaper to put together boxes from commodity enterprise server hardware, making hardware raid cards basically expensive shelf dust catchers along with overpriced SANs and NASes.

(Extra shout out for iXsystems, not because they use lots of Python, but because of massive awesomeness supporting FreeBSD and FreeNAS. Also their parties put Defcon afterparties to shame.)

Conclusion: Full ZFS is often better than a SAN, NAS and/or hardware solutions. Also protip: Direct attached is way, way faster than 10 GbE, FC or IB, especially if images are directly available to compute nodes.


Awesome release, especially since I can now use Docker on OS X without having to boot up a full Ubuntu VM through Vagrant.

If anyone else is using Boxen, I packaged up a quick Puppet module to get up and running with Docker on OS X: https://github.com/morgante/puppet-docker


Why is OSX support necessary on the path to 1.0? I'd rather have a simple, small 1.0 release I can trust than all these "bells and whistles".


It's not necessary, and we didn't go out of our way to get it. It just happened "for free" as a result of writing portable code, a clean client-server architecture, and the appearance of the boot2docker project in the community.


Anyone using Docker on 32-bits?


Docker does not (yet) support 32-bit architectures.


I think I read that you just have to disable a condition somewhere and build your own base image or something like that.


Can someone explain what this is? And what's the purpose?



For even more fun, check out the interactive tutorial: http://www.docker.io/gettingstarted/


nope, I still don't get it.

> Docker is an open-source engine that automates the deployment of any application as a lightweight, portable, self-sufficient container that will run virtually anywhere.

Like an internet browser for executables? I don't understand how this can be useful...


Containers are like virtual machines, only without the hardware emulation.

See http://en.wikipedia.org/wiki/LXC

See http://docs.docker.io/en/latest/faq/#how-do-containers-compa...

Docker is a mechanism to bundle an application together inside a container (think VM instance) in a way that makes it easier to distribute.


Think of it like this:

Say you have a Python/Rails/Node.js/C++/whatever app. Sometimes getting all the dependencies on the system is cumbersome and hard to manage. This is true for both developers and the people deploying these apps. I can't count how many times I've had a Python app fail to build on a new box because of some C extension and forgetting to install a package (on CentOS, Ubuntu, Debian, etc.).

Docker lets me do all of this once for my app, with a Docker image, and now when I want to deploy it, all the system needs is Docker installed. This means on my laptop, on our staging server, and on our production server, all they need is Docker, and I will have the exact same environment in each place; deploying the app is exactly the same everywhere.

There are a ton of other bonuses too, like each container's dependencies and processes being isolated from the others. I also get a ton of Docker features that allow containers to communicate with each other and set up service discovery between them (e.g. your database container can expose information to the app container using environment variables).
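
A rough sketch of that workflow (image and container names are made up):

  docker build -t example/myapp .                        # build the image once, from a Dockerfile
  docker run -d --name db example/postgres               # database container
  docker run -d --name web --link db:db example/myapp    # app sees the db via DB_* environment variables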

Tons of other good reasons too though, you should check it out.


It's like a VM, but it's much lighter (runs faster / uses less CPU, memory and disk). It does that by re-using the host kernel, so you are forced to have the same kernel in your client as in your host, and that has to be 64-bit Linux.

An example of where it's useful, for me, is in system integration tests. Unit tests are designed to run without changing the machine they run on, but for system integration you need to build, install, configure and run a system, so you really need something like a VM. Docker gives you that isolation, but with lower overheads.
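
E.g. something like this (image name and test command are placeholders):

  docker run --rm -v $(pwd):/src example/build-env sh -c 'cd /src && make integration-test'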



