What’s New in Docker 1.13? (docker.com)
149 points by hackerpt on Jan 20, 2017 | 102 comments



Docker for Mac is a pretty terrible experience. https://github.com/docker/for-mac/issues/77 has eaten a huge chunk of time for my team, and the issue has been open and known about since the beta. Yes, there are hacks to make things better, but as of a month or two ago the Docker team seems to have gone radio silent after previously providing periodic updates.


For years, I advocated for developing inside of VMs or containers. Even wrote a very handy shell tool for managing the process. But a few years ago, I stopped, and just switched to developing on my local machine.

Working in VMs or containers adds a ton of complexity, for very little benefit.

Installing databases is trivial, with any of `brew`, `yum`, or `apt-get`.

Your `bin/setup` script can take care of automating that for onboarding new developers. The same script gets used in your Dockerfile.

And your CI system is there to replicate production as perfectly as possible, to run your comprehensive test suite (you do practice TDD, right?) and catch things like "forgot to add a library dependency to the setup script" and "app broke because of a library version difference".

Since switching to local-only development, plus containers and CI/CD, my life has gotten a lot nicer.


At GitLab all team members develop without virtualization. Disk IO is just much faster that way. We recommend the approach in our development kit https://gitlab.com/gitlab-org/gitlab-development-kit

We would love to see better virtualization disk performance so it is easier for non-team members to contribute. We're happy that despite that people are still able to contribute https://gitlab.com/gitlab-org/gitlab-ce/merge_requests?scope...


That's interesting, but shouldn't containers (e.g.: docker/lxc) be almost as fast as native at disk I/O?

I mean that's basically why I moved my self hosted virtual servers from KVM, which was abysmal on regular desktop spinning disks, to LXC/Docker which is comparably fast to native -- plus I can fit a lot more containers than full VMs.


The problem is that on macOS you have to run containers in a virtual machine since you're not running on Linux. Crossing the VM boundary with IO can be slow, especially on VirtualBox.


Oh, I'm not using MacOS so I didn't know that.

It makes a lot more sense now, thanks!


If you are developing within a docker container, how do you get as far as CI before you have tested it working locally??

The whole point of using docker containers is to have the same development experience locally, in CI and in prod, not to have some unknown system libraries on your box which don't match other environments.

Want a database?

  docker run mysql/mysql-server

Your argument for switching away from containers seems to actually be arguing for using containers.


You'll never get the same environment locally as you do in prod. That's why you have staging (you do have staging, right?).

It's better to make the environment as close as possible to prod without adding massive inconvenience. But Docker adds inconvenience and forces you to use it in production (which entails a whole other set of headaches) if you want close environmental parity. And even then, since it's lightweight virtualization, there are hundreds of things which can behave differently between dev and prod.

IMHO you either want very close environmental parity (in which case full virtualization is the way to go) or you don't, in which case running locally is fine.


> That's why you have staging (you do have staging, right?).

Plenty of people don't. The main reason being that staging is never quite the same as production...


We are working in a remote team, so replicating shared envs locally is important. Our system contains multiple databases, application servers, SMTP, etc etc. -- you can use all of them in your local dev env. It's not the same as prod because of different latencies, etc., but all containers are present locally. It is _very_ useful.


>We are working in a remote team, so replicating shared envs locally is important.

It is, but I'm not particularly keen on docker as a solution to this problem. It provides a low level of isolation/realism for these services and the tooling surrounding it is relatively poor.


What about LXC system containers? Differences between hardware and OS virtualization still exist, e.g. immutable system clock, but it's otherwise like running a regular Linux box. An init process that can simulate rebooting, mature system package management, no process isolation that's often gratuitous, etc.


> Installing databases is trivial, with any of `brew`, `yum`, or `apt-get`.

Yes, for one database version and one single data set. If you handle multiple customers, or multiple versions of a software suite that target specific versions of a DB, containers or VMs are handy.

I don't do my dev in a container. That always seemed stupid to me. But my dependencies are in containers.
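For example, a minimal sketch of running two MySQL versions side by side with the official images (container names, host ports, and the password here are just illustrative):

  docker run -d --name mysql56 -p 3307:3306 -e MYSQL_ROOT_PASSWORD=dev mysql:5.6
  docker run -d --name mysql57 -p 3308:3306 -e MYSQL_ROOT_PASSWORD=dev mysql:5.7

Each customer's target DB version gets its own container, and nothing extra is installed on the host itself.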


This makes Docker for Mac unusable for Rails development (without 3rd party workarounds). The issue and the radio silence have burned a lot of the goodwill I, and I'm sure others, had for the Docker project. It's made worse by the constant stream of "production ready" announcements, while the product is crippled for most common use cases.

Here's the related thread on the docker forums. https://forums.docker.com/t/file-access-in-mounted-volumes-e...


I have given up on using docker on macOS. I now use Ubuntu at work, and it's interesting to watch a lot of my co-workers give up on macOS too. I totally agree it is a horrible experience.


I use Docker on Ubuntu on my own machine, but am forced to use Docker on Windows at work. Using Docker on Linux is an absolute joy. I totally advocate it. Dockerhub is my first port of call when I'm installing something new on my machine, because I know if there's an official repo it will work.

Docker outside of Linux is another story. I've run into config issues, VM memory problems, proxy issues, etc. Painful.


Same here. I was struggling using docker at work on my Mac, but when I was working at home on my Linux box it was amazing. Docker is awesome on Linux and I have never been more productive.


This is the case where I work too. I'm guessing that at least 75% of our engineering team now uses Linux, even though everyone was given an MBP when hired.


Working with Docker at a previous gig, after wrestling with docker-for-mac and boot2docker for months I just stopped with all the hacky work-arounds and used Vagrant or Ubuntu AWS nodes for Docker development. Also, I have to say it's amazing that Docker ONLY got a 'prune' option with 1.13 - I had aliases to do that years ago and it was always a huge pain point! Why wasn't this rolled in last year?


Same with our team. Linux, for the most part, is a much better development experience. The only con is our front-end guy can't use Photoshop.

I don't even miss iterm2 anymore. i3 tiling WM is iterm2 for the whole desktop on steroids.


Docker for Mac mounts your whole FS to the VM, which is so fucking scary and dangerous. I posted about this somewhere in the github issues, but they just shrugged and carried on. I also asked if they could compile the kernel with btrfs support, and the same happened.

They should just open source the whole thing so people could help fix issues or understand how and why it works.


It's likely impossible to make volumes shared from the host to the container fast for all use cases.

It's also likely impossible to make them work as expected on Windows where the host volume is NTFS and the container is Linux.

So don't use shared volumes for code.

At Convox we offer a Docker development environment in the "convox start" command.

We manage syncing code into the container by watching the host file system and periodically doing a "docker cp" to get source changes into the container.
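A minimal sketch of that watch-and-copy idea in shell (not our actual implementation; it assumes fswatch is installed, a running container named "app", and code under ./src):

  # re-copy the source tree whenever fswatch reports a batch of changes
  fswatch -o ./src | while read _; do
    docker cp ./src/. app:/app/src
  done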

It works great and shows the power of the Docker API for taking control of your containers.

A bit more info is available here: https://convox.com/guide/reloading/


I've been using https://github.com/codekitchen/dinghy for a while...it transparently uses NFS for sharing and disk access is really fast


It's no coincidence that some projects such as Nomad and Confluent include statements such as "Docker For Mac Caveats", "Don’t use Docker for Mac" in their docs. https://www.nomadproject.io/docs/drivers/docker.html

http://docs.confluent.io/3.0.1/cp-docker-images/docs/intro.h...


While the installation experience of Docker for Mac is very good, the unfortunate downfall of the project is that they try to hide what is really going on. You are running a VM, and through a series of very clever tricks and fancy technology they attempt to keep the VM hidden, as though you are running Docker locally (like on Linux). This has the side effect of odd behaviors. As a result, in my own projects we struggle to support Docker for Mac as well.


Yeah, this is my favourite:

  screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty

Oh, you can just log in as passwordless root to the VM, then, you know, access the mounted fs which is your root fs. Insane.


Are you referring to this?

  / # uname -a
  Linux moby 4.9.4-moby #1 SMP Wed Jan 18 17:04:43 UTC 2017 x86_64 Linux

  / # mount | grep osxfs
  osxfs on /Users type fuse.osxfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other,max_read=1048576)
  osxfs on /Volumes type fuse.osxfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other,max_read=1048576)
  osxfs on /tmp type fuse.osxfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other,max_read=1048576)
  osxfs on /private type fuse.osxfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other,max_read=1048576)
  osxfs on /host_docker_app type fuse.osxfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other,max_read=1048576)


Yes, it mounts your HD in /Volumes.


As far as I remember, doesn't Docker for Mac create a VM and then run the docker containers in the VM? That'd explain the terrible filesystem performance.


But that is not the issue. I think the speed of docker under the OS hypervisor is fine. The bug is in osxfs: https://github.com/docker/for-mac/issues/77


It does; it's a lighter VM than the VirtualBox one that docker-machine used, but still a VM nonetheless.


You're talking about docker-toolbox. The new version (Docker for Mac), which has this problem, doesn't use a VM anymore, but native virtualization on OS X. The old version doesn't have this problem.


Native virtualization is a VM...


xhyve spins up an Alpine Linux VM inside which it runs Docker Engine. So basically it doesn't deviate too much from how Docker ran previously with docker-toolbox for macOS.


Yes, and especially for tools like PostgreSQL, which use features of the Linux filesystems that Docker simply didn't implement or implemented in a buggy way. My only solution was to use data volumes for PGSQL mounts, which at least provide a native ext4 experience and compatibility, but then you lose the directory sharing part.


What was your use case where you needed pgdata in both the HOST and a container (or multiple)?


Simply backing it up. There is no way to back up Docker volumes, except for making another container only for this purpose which runs rsync or something similar.
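For reference, the throwaway-container trick usually looks something like this (volume and archive names illustrative); it works, but it's a lot of ceremony for a simple backup:

  # tar up the contents of the "pgdata" volume into the current directory
  docker run --rm -v pgdata:/data -v "$PWD":/backup alpine \
    tar czf /backup/pgdata.tar.gz -C /data .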


Docker for Windows works great if you want to try it.


Except that you can't use VirtualBox anymore when Hyper-V is active... That's why I switched back to docker-machine on my Windows PC.


boot2docker in VirtualBox actually seems to have much faster file system access. Slightly more awkward experience overall, but perf seems better.


And then you didn't mention the whole concept of storing everything in one huge, always-expanding blob, ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2, which contains all Docker-related storage: images, data volumes, etc.

Now that file just loves growing to 40 GB+ and there is no supported way to reclaim that space.

You can delete the file (or click Reset to factory defaults in the GUI), but then you lose all your Docker storage, like data volumes.


> and there is no supported way to reclaim that space

That's one of the things addressed in the latest stable release. It now reclaims space on startup.


This has been one of the main reasons why I have not adopted Docker yet.

I develop on a Macbook and push things to Linux Servers.


Hopefully native squashing will allow some people to avoid writing Dockerfiles that look like

  RUN this && that && thistoo && thattoo && rm this && rm that
in order to avoid generating extraneous layers.

There have been some third party tools and registries that implement this feature, but it's nice to finally have it upstream.
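With 1.13 it becomes a flag on the build itself, though it's still experimental, so the daemon has to run with experimental features enabled (image name illustrative):

  docker build --squash -t myapp .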


I don't like that they're doing it as a flag to docker build, though. I would have liked something like

  RUN echo "this gets its own layer"
  BEGIN
  RUN echo "this will be squashed"
  RUN echo "and this, too"
  SQUASH
  RUN echo "this gets its own layer again"
Similar to how BEGIN..COMMIT works in SQL.


Yeah, I like that. Because even a chmod of a directory causes another layer with all the data duplicated, it would be better if I could just squash the selected lines.


Honestly, I see this as an antipattern, not because it generates extra layers, but because a Dockerfile isn't the right place to write a provisioning script.

When building apps, I keep a small number of shell scripts that live in a `bin` directory, that perform essential operational tasks.

One script installs all the dependencies required to either run or develop the app; another runs all tests and exits with either success or failure; and the final script just runs the app.

Docker then just runs those scripts, which are written deliberately to be easily read, and thus function as living documentation for the project.

These same scripts get used, both by developers on a daily basis, as well as by our CI/CD system when prepping containers.

This also makes onboarding a snap: you run `bin/setup`, and your Mac or Linux box is good to go. And, because that script gets used every time the CI system spins up a build, it will never go out of sync with reality.
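The resulting Dockerfile stays tiny; a sketch of the shape (the script names other than bin/setup are illustrative):

  COPY . /app
  WORKDIR /app
  RUN bin/setup
  CMD ["bin/run"]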


The problem is that if you're building images a lot, Docker doesn't cache COPY calls, because it doesn't know whether the file has changed when running the COPY command. So if you constantly build images, and there are a lot of computationally intensive tasks within those scripts but you've only changed one small item near the end of the script, your build time may be significantly longer using the script than putting the steps directly in the Dockerfile with RUN commands.

I build a lot of modules within one of my Docker images for work, and while it doesn't change often, I would really not like to wait the 20 minutes if I really needed to push to production when I only change some certificate population segment at the end of the Dockerfile or something after a particularly intensive module build RUN step.


Docker does cache COPY calls AFAIK.

If you're copying your whole directory though the cache breaks if any file changes.

The normal way of doing it is to copy stuff like requirements.txt, package.json, Gemfile etc. first and install dependencies, then copy everything else at the end.
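A sketch of that pattern for a Python app (paths illustrative); the expensive install layer stays cached until requirements.txt itself changes:

  COPY requirements.txt /app/
  RUN pip install -r /app/requirements.txt
  COPY . /app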


Ah, maybe that's true; I always figured it didn't cache it since it wasn't text. In this case it doesn't matter since the script would change, but if the scripts were broken apart and run using RUN commands, that would be a better middle ground than wholly putting it all in a script with a single RUN command.


It never gets out of sync, until somebody gets tripped up by some idiosyncrasy of running the server directly on their machine instead of through Docker.

One of the original benefits of containers is their potential to reduce the disparity between dev and prod. If you're running containers based on the exact same image in dev and prod, then there are fewer reasons why something would only work on a dev machine.

If your devs are using containers correctly, and building candidate images on their own machines, then there's no need for separate bin scripts. If your devs need separate bin scripts so that they can avoid installing and using Docker on their own workstations, then you're throwing out a lot of the benefit which containers give you.


Finally they added this! I submitted a PR for this over two years ago (https://github.com/docker/docker/pull/4232).


I would prefer the ability to squash a set of adjacent layers. Squashing the entire history doesn't seem like a great idea, you lose the benefit of common layers shared by different images. As for that multiple "&&" it's not a big deal, you can escape a new line and keep the file readable.
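i.e. something like (package names illustrative):

  RUN apt-get update \
   && apt-get install -y build-essential \
   && rm -rf /var/lib/apt/lists/*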


It doesn't squash the whole history, just up to the `FROM` command. But yes, it defeats one of the optimizations that content addressable storage introduced.


Yes, but bear in mind that layers are what enable differential push/pulls from the Hub or a registry. So you may now want to have separate dependent Dockerfiles - a base for sys deps and app deps squashing it and then an application image deriving from it - probably also squashed.

If you squash a single image I guess you will always push/pull the entire thing even if the system dependencies haven't changed.


Squashing is done up to the `FROM` statement in the Dockerfile. Any layers from the parent image are preserved.

Where this does mess things up is with content-addressable storage: if you have two layers from completely separate images that produced the same content, the layers would be shared, but not if you squash (because there is no layer to share).


I'm very excited about the clean-up commands!

Docker desperately needs this, it's so frustrating constantly having full disk space due to untagged containers and unbounded volumes.
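For anyone who hasn't tried them yet, the new family of commands:

  docker container prune   # remove all stopped containers
  docker image prune       # remove dangling images
  docker volume prune      # remove unused volumes
  docker network prune     # remove unused networks
  docker system prune      # combined cleanup (-a also removes unused images)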


The downside is the prune command is a giant hammer. In a lot of cases you don't want to clean everything, just things you know you won't be using. For instance, in a CI environment there's going to be some images that will always be there, but if you run a prune in-between jobs you've now made it necessary to re-pull all images.


If you're on Mac, it's still a bit of a problem; hopefully the SSD-gobbling Docker.qcow2 file problem[1] is fixed soon!

[1] https://github.com/docker/for-mac/issues/371


Looks like it is fixed in https://twitter.com/kelseyhightower/status/82223709949555097.... I'm looking for the official changelog for confirmation though.


Judging by the github issue list there are a lot more issues with Docker.qcow2 now than before. Mostly related to Docker.qcow2 preventing docker from starting.

Looks like a bad release. Hopefully 1.13.1 is soon.


(I work on Docker for Mac.)

Apologies for the inconvenience this has caused.

There was a race condition in a previous release which could allow multiple hypervisor instances to open the Docker.qcow2 simultaneously. Unfortunately this can corrupt the file by allocating the same physical block (cluster) twice, resulting in bad things happening. When this file-locking bug was fixed we also added an integrity check which checks the structure of the Docker.qcow2 on every application launch. For safety the app refuses to start if corruption is detected.

I believe that in these cases, the corruption happened in the past and is now being detected since the upgrade. Unfortunately if the app refuses to start it makes it difficult to reach the "Reset to Factory defaults" menu option. The workaround described here https://github.com/docker/for-mac/issues/1159#issuecomment-2... is to remove the qcow2 and restart the app. Unfortunately containers and images will need to be rebuilt.

For what it's worth after the integrity check and the locking fix went in, I've not seen any recurrence of this error. Please open an issue if you see any other problems!


This is now fixed on the stable release, but space is only released when you restart docker at present. Online freeing is being worked on too.


I have a little one-liner I keep around on all my docker hosts that does this for me. I don't think you needed to wait for them to add anything.

    #!/usr/bin/env bash
    # remove all dangling (untagged) images
    docker rmi $(docker images --filter "dangling=true" -q --no-trunc)


Yes and isn't it fun to type that in over and over - especially on a new machine where you don't have your .dotfiles yet :-/ The new command is going to really save hassle here.


The new command is definitely a nice addition, but I can't say that I've manually run the other clean-up commands in a long time. They've been sitting in a daily cron for the last 9 months or so.


The "Use compose-files to deploy swarm mode services" is really intriguing to me, especially when they quote this one liner:

> docker stack deploy --compose-file=docker-compose.yml my_stack

But there are no links to further documentation or anything. Can this be used to deploy easily to a cluster of Droplets, for instance?

I feel like the low end/longtail deployment of Docker is really underserved. I want to use Docker for its devops merits, but I have yet to find a clear, concise guide for deploying a simple web app to one or a cluster of VPS instances for a modest traffic project.


Have you looked at Kubernetes? It's pretty dead simple to get a cluster up these days and the general abstractions they chose are really great. Service discovery, config and secret storage, control loops, autoscaling, even stateful containers these days.

100% worth giving it a shot, even just to say you've tried it.

I've been using Docker for over three years now, and Kubernetes really is the realization of what I imagined containers would be like when I was just starting to use them in development.


> It's pretty dead simple to get a cluster up these days

The standard Kubernetes setup instructions are horrible. You need to run a bunch of shell scripts including cluster/kube-up.sh, which are documented to work on AWS, but which are completely untested before release, and which were recently broken for almost a month.

I'm currently running Kubernetes under Rancher. It's still a pretty steep learning curve, with some very well-hidden but essential configuration parameters, but at least it actually works if you follow the instructions.


The situation has been improved quite a bit with kubeadm. Getting a basic cluster set up is a lot easier with that.

https://kubernetes.io/docs/getting-started-guides/kubeadm/


I've found Rancher to be the best way to set up a k8s cluster as well.

Even the "Kubernetes of Children" book made my head spin.


I haven't yet. Any good getting started guides/tutorials you could recommend?


I suggest jumping right into the documentation on kubernetes.io. There's decent documentation with a set of tutorials, and you can check out this playground with a walkthrough: https://www.katacoda.com/courses/kubernetes/playground


Service discovery is not yet in Kubernetes. The proposal is here - https://github.com/kubernetes/community/blob/master/contribu... and https://github.com/kubernetes/community/blob/a1d8453e184dfd8...

Secret storage is not done - this is the bug tracking the Hashicorp Vault proposal: https://github.com/kubernetes/kubernetes/issues/10439

The configuration scheme is not fixed yet - https://github.com/kubernetes/kubernetes/issues/10439

The game is still up in the air. While I'm bullish on Kubernetes, it's still not the superior choice.


Your response is a massive mischaracterization of the status of Kubernetes. It's FUD.

1. Service discovery.

When people say service discovery, they usually mean the ability for a given application to discover other copies of itself and other services & applications within a cluster.

Kubernetes has this down. It has it down better than almost any other system.

The "Service" object in the API can be used for in-cluster service discovery, the service-account tokens can be used as a powerful means of introspection and discovery, etc.

The things you link to are not relevant either. The first is about making api-servers more HA and federation better, I think (unrelated to users' services), and the second is a proposal which is implemented, done, and didn't really catch on tbh. It's implemented because, well, it didn't propose any code changes on top of all the features services provide now, just some standard metadata users can opt in to adding if they want that sort of thing.

2. Secret storage

The Secret API object works great. It's done. Adding Vault support is an ongoing feature, but it is in no way a bug; it's just a feature request/enhancement. It's good that Kubernetes is evolving, but that doesn't mean the feature isn't already working.

3. Configuration scheme

You linked the same issue as before, typo I assume. I have no clue what you're talking about though. ConfigMaps are basically done. Downward API is nice. No clue what you think isn't "fixed".

Please quit it with your FUD. You obviously have no clue wtf you're talking about.


Not only are you being very liberal with your use of profanity, you are also not very accurate. I am not sure what your personal peeve on this is that made you use words which have no place in a technical conversation.

I linked to bugs and proposals that we are discussing on various SIGs on Slack. Even aspects of load balancers and the Service abstraction are insufficient, and I'm part of some of those discussions. Secrets management being insufficient is the reason why there are a number of distributions with their own flavors of these.

Your terming of the Service Discovery proposal as HA is laughable. HA in apiservers has been production ready for quite some time and is well integrated in kops and kargo. Etcd recovery is still a hassle. You are mistaking documentation being marked as available for it being actually usable in a lifecycle.

Are you aware that full TLS in Kubernetes is hard because some values are hardcoded? This causes etcd breakage in lifecycles.

Kubernetes is not complete. I'm very bullish on its future, but I would recommend you stick to technical rebuttal rather than personal ire.


Indeed, I'm wrong. The second link (https://github.com/kubernetes/community/blob/a1d8453e184dfd8...) I referred to is something like "Third party resources" (TPRs) which again has nothing to do with services.

The first proposal you link is not anything that needs implementation.

Please, respond to my technical answers. You're not. Listen to your own words.


I would try out Rancher. It's kinda like Kubernetes but uses Docker lingo. Kubernetes has a somewhat high learning curve (what's a replication controller? Why is that different from a pod?). Rancher will even spin up new hosts for you on DO/AWS/anything and put them in the cluster.


They even have a demo site now: https://try.rancher.com


Since no one has actually provided links to more info about Compose V3, I'm happy to step in and help.

> But there's no links to further documentation or anything. Can this be used to deploy easily to a cluster of Droplets for instance?

Yes! This is exactly what the Compose V3 format (now native to Docker CLI, as noted in the article) is intended for. It creates/manages native Docker swarm services (and networks, etc.).

The Docker official docs might take a minute to catch up in terms of Google indexing, etc., but you can always get the latest on GitHub to get a feel for what has changed from previous versions and/or how to learn Compose V3.

https://github.com/docker/docker.github.io/blob/master/compo... is a Markdown document outlining various Compose options. The main thing to keep in mind is that 'build' will not work as it has traditionally (use 'image' instead), and you gain additional access to a 'deploy' key to specify, e.g., the number of "replicas" (identical copies for scale) of a given container.

https://github.com/nathanleclaire/composeexample/blob/master... is a V3 version of the canonical Compose Redis + Python "page counter" app that may help to get the feel for things.
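To give a feel for the shape of it, a minimal V3 file might look like this (service and image names illustrative), which you'd then deploy with the one-liner quoted above:

  version: "3"
  services:
    web:
      image: myapp:latest
      deploy:
        replicas: 3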

In general:

- 'docker deploy' will create or update a 'docker stack' (group of services) ('docker deploy' is shorthand/porcelain for 'docker stack deploy')

- 'docker stack' will allow you to manage these stacks (list, remove, etc.), created from Compose files

- 'docker service' will allow you to manage the individual services created (a service is a group of homogeneous containers -- think 3 copies of a webapp intended to sit behind a load balancer)

Check out the '--help' text for each.


There is also Docker Cloud (and Docker Datacenter for bigger fish). You can bring your own nodes of whatever, e.g. AWS EC2, DO, bare metal somewhere. The documentation for Cloud tackles this kind of thing.


And my understanding is that all the capabilities and security options that were in 2.1 version are not supported yet... :(


If you want to deploy to AWS and ECS check out:

Convox - https://github.com/convox/rack

ECS CLI - http://docs.aws.amazon.com/AmazonECS/latest/developerguide/c...

Disclaimer: I work at Convox


In addition to crappy mounted volumes, Docker for Mac eats up CPU like crazy: https://forums.docker.com/t/com-docker-hyperkit-up-cpu-to-34...

And, also like the mounted volumes issue, despite this being one of the top things on the forum, it's still been a problem for like 8 months with no attention.


This release broke our Azure Container Service Kubernetes deployment. Had to downgrade back to 1.12 for now, but not without suffering a day of downtime trying to trace the issue.


What was the issue?



Guess I'm not upgrading soon! Thanks for mentioning


Use Terraform to provision some virtual machines, load balancers, bastion host, vpc, etc. https://news.ycombinator.com/item?id=13436415

Use Docker to setup development and production environments for a sample Flask application with CI/CD. https://news.ycombinator.com/item?id=13436452


Huge news, I really hope this is gonna help the workflow and give another alternative for scheduling. I have been waiting for the update since last November (iirc it was scheduled for end of 2016).

BTW - Am I the only one finding the quality of the linked video very bad?


> docker system df will show used space, similar to the unix tool df

I thought 'df' is for free space and 'du' is for used space. Can anyone explain why they went for 'df', not 'du'?


df shows both used and available space on a whole disk/device.

du is for space used by particular files/directories.
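In other words:

  df -h /      # whole-filesystem view: total, used, and available space
  du -sh /var  # space consumed by one particular directory tree

Presumably the analogy is that 'docker system df' reports usage across all of Docker's storage at once, rather than for one particular item.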


I can't find anything about secrets in compose files... Any ideas about this?


Ok, it still seems to be in development: https://github.com/docker/docker/pull/30144


Why does the squash flag squash everything? Wouldn't it be possible to only squash the layers introduced in the local build? Then you could share the other layers again.


It only squashes the layers produced by the build.


I moved on to LXC.

Never looked back.

Congrats to the docker team on the new release anyhow.


What do you use instead of "docker build" and "docker pull"?


Could use the distro's package manager?


What do you mean? My Dockerfiles already use the distro's package manager to install packages.

Do you mean "create a long-lived LXC container and use the distro's package manager to install packages as needed"? The problem with that approach is you have no idea how to duplicate the server, nor how to roll back changes reliably. For example:

- Your distro upgraded a library, and the upgrade introduced a bug. For extra fun, this was not a security upgrade, so the unattended-upgrade process did not install it, and only half of your servers have the bug.

- Your package list is incorrect. Somehow, your old server ended up with an extra package (a previous software version? installed while trying to troubleshoot problems?), and your new server does not have it.

- You have leftover files -- either from the previous version of your software, or from the package you have had installed before.

These are definitely fixable in LXC with enough effort -- after all, Docker is not magic, and you can achieve a lot with LXC + shell scripts, and even more with LXC + shell scripts + chef. However, Docker is just so much easier and more reliable than writing these scripts by hand.


LXC is sane to me. Upon first glance it is understandable, and it works every time.

I went all in with Docker; it was all fun and games up until I tried it in production and started relying heavily on it.

Docker introduces container-linking voodoo which has burnt me badly. Sure enough, voodoo comes with its upsides: we now have Kubernetes, which obviously I think is very cool. Just far outside of my use case.

Serenity now!



