
> * Coordination. I'm used to using something like supervisord to control my processes - what's the equivalent in docker land?

The question doesn't really make sense: The equivalent is supervisord running inside the lxc container.

> * Routing. How do you tell your reporting/web app containers "this is where your message bus and database live?"

How do you tell them in a cluster? This is a problem anyone who's ever needed to scale a system beyond a single server has already had to deal with, whether or not the application is packaged up in a container, and there's a plethora of solutions, ranging from hardcoding it in config files, to LDAP or DNS based approaches, to things like Zookeeper or Doozer, to keeping all the config data in a database server (and hardcoding the dsn for that), to rsyncing files with all the config data to all the servers, and lots more.
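As a trivial example of the DNS-based approach: give the bus and the database well-known names, and have each service resolve them at startup. A minimal sketch in Python - the hostnames are placeholders, and the mechanism behind them (a DNS server, an /etc/hosts file pushed out by config management, etc.) is up to you:

    import socket

    def lookup(name):
        # Resolve a conventional service name to whatever address the
        # cluster's DNS (or a pushed-out hosts file) currently points at.
        return socket.gethostbyname(name)

    DATABASE_HOST = lookup("db.internal")      # placeholder name
    MESSAGE_BUS_HOST = lookup("mq.internal")   # placeholder name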




I'm not sure there's going to be a bullet-proof solution to this. I believe that Docker is the right step forward, especially for the application containers. What I have been tinkering with is the idea of having smaller, configurable pieces of infrastructure and then providing a simple tool on top of that (e.g. 'heroku' CLI).

Once you are past the procurement and provisioning steps you really need a way to describe how to wire everything together. I definitely haven't solved it yet but I sure hope to! :)


Take a look at "juju". Canonical is doing a bunch of stuff in this area. Juju does service orchestration across containers. I don't particularly like how they've done it, but it shows a fairly typical approach: scripts that expose a standard interface for associating one service with another, coupled with a tool that lets you mutate a description of your entire environment and translates that into calls to the scripts of the various containers/VMs that should be "connected" in some way, to add or remove the relations between them.
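To give a flavour of the pattern, a hook script is roughly something like this (a sketch in Python, not juju's actual charm code; it only assumes juju's relation-get hook tool is on the PATH when the hook runs, and the hook name, config path and keys are made up):

    #!/usr/bin/env python
    # e.g. hooks/db-relation-changed: called when the related service's
    # settings change; rewrites our config to point at it.
    import subprocess

    def relation_get(key):
        # relation-get prints a single setting published by the remote unit
        return subprocess.check_output(["relation-get", key]).decode().strip()

    host = relation_get("host")
    port = relation_get("port")

    # Illustrative config path/format - substitute whatever your app reads.
    with open("/etc/myapp/database.conf", "w") as f:
        f.write("host=%s\nport=%s\n" % (host, port))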


What don't you like about Juju's approach, and what do you think the right way would be?


To be fair, it's been a while since I've looked at it, so it could have matured quite a bit. I should give it more of a chance. My impression was probably also coloured by not liking Python... Other than that, the main objection I had was that writing charms seemed over-complicated (which might have been coloured by the Python examples...), and that there seemed to be too much "magic" in the examples. But I looked right after it was released, so it's definitely time for another look.

(EDIT: It actually does look vastly improved over last time; not least the documentation)

Specifically, I run a private cloud at work across a few dozen servers and a bit over a hundred VMs currently, and we very much need control over which physical servers we deploy to because the age and capabilities vary a lot - ranging from ancient single-CPU servers to 32-core monstrosities stuffed full of SSDs. They're also in two data centres.

When I last looked at juju it seemed to lack a simple way to specify machines or data centres. I just looked at the site again and it has a "set-constraint" command now that seems to do that.

The second issue is/was deployment. OpenStack or EC2 used to be the only options for deploying other than locally. Local deployment was possible via LXC. EC2 is ludicrously expensive, and OpenStack is ridiculous overkill for us compared to our current system (which is OpenVZ - our stack predates LXC - managed via a few hundred lines of Ruby script).

I don't know if that has changed (will look, though, when I get time away from an annoying IP renumbering exercise...), but we'd need either a lighter option ("bare" LXC or something like Docker on each host would both be ok) or an easy way to hook in our own provisioning script.

(EDIT: I see they've added support for deployment via MAAS at least, which is great)


Docker will soon expose a simple API for service discovery and wiring. This will standardize 1) how containers look each other up and 2) how sysadmins specify the relationship. The actual wiring is done by a plugin, so you are free to choose the best mechanism - dns, juju, mesos/zookeeper, or just manual configuration.
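In the meantime, the "manual configuration" end of that spectrum can be as simple as injecting the addresses when you start the container (e.g. with docker run -e) and reading them inside it. A rough sketch; the variable names are made up for illustration, not part of any Docker API:

    # Inside the container, e.g. started with:
    #   docker run -e DATABASE_URL=... -e MESSAGE_BUS_URL=... myapp-image
    import os

    DATABASE_URL = os.environ["DATABASE_URL"]          # illustrative name
    MESSAGE_BUS_URL = os.environ["MESSAGE_BUS_URL"]    # illustrative name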


> The question doesn't really make sense: The equivalent is supervisord running inside the lxc container.

So the solution is to "batch up" a load of apps into one container and run with supervisor or something as per my last bullet? I had pretty much envisaged a one-docker-container-per-application type of model...


You can do that too. But that is a very different setup. If you build "single application" containers, then the container will stop if the application stops, and you can run supervisord from the host, configured to bring up the container.
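Something along these lines in the host's supervisord config (the program and image names are placeholders; add whatever ports/volumes your setup needs):

    [program:myapp-container]
    ; Host-side supervisord keeps the single-application container running:
    ; docker run stays in the foreground, and autorestart brings it back up.
    command=docker run --rm myapp-image
    autorestart=true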

If you build full containers, even with just a single application in them, you probably still want supervisord inside each container. EDIT: This is because the container will remain up as long as whatever is specified as the "init" of the container stays up, so in this case your app can die without bringing down the container, and something needs to be able to detect that and restart it.
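For the in-container variant, a minimal supervisord config looks roughly like this (names and paths are placeholders) - nodaemon keeps supervisord in the foreground so it can act as the container's "init", and it restarts the app if it dies:

    [supervisord]
    ; Stay in the foreground so supervisord itself is the container's "init".
    nodaemon=true

    [program:myapp]
    ; Placeholder command; supervisord restarts it if it exits, so the
    ; container stays up even when the application crashes.
    command=/usr/local/bin/myapp
    autorestart=true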



