Ask HN: Do you use Vagrant or Docker for active development?
50 points by nepger21 on July 10, 2015 | 68 comments
I think I understand the use case for Docker in deployment, but does Docker hold its own at the moment for active development? Data persistence support is not out-of-the-box. Vagrant, with its isolation and shared folder support, seems like something that would be very good for active development of software artifacts. What is the view of the HN crowd, and what would you recommend?

Apart from these two, are there any others that look promising?




On a Mac or Windows machine you're running Docker inside a VM via Boot2Docker anyway.

A lot of people use both: http://docs.vagrantup.com/v2/provisioning/docker.html
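
The provisioner side is only a few lines of Vagrantfile; something roughly like this (the box and image names here are just placeholders, not from the docs):

    Vagrant.configure("2") do |config|
      config.vm.box = "ubuntu/trusty64"
      # pull and run a container as part of "vagrant up"
      config.vm.provision "docker" do |d|
        d.pull_images "redis"
        d.run "redis", args: "-p 6379:6379"
      end
    end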

That's actually the only thing that got me to hold off on Docker the last 2 times I've evaluated it. I was able to get everything running for a 1 monolith + 7 microservice system that I work with but the local developer workflow felt very clunky even with Fig. That was 6 months ago and it's my understanding there have been a lot of improvements.

That project was for a Ruby team and there are so many Ruby based tools that make the local development workflow a smooth operation that shoehorning Docker in locally would have been a step back, so we held off on it.

It's an area that I think will see major improvement though. Heroku's even gotten in on it.

https://devcenter.heroku.com/articles/introduction-local-dev...

Which is really impressive to me. If anybody in the space can polish out the user experience, it's Heroku.


I recently switched to using both, with Docker running in a Vagrant VM. I've had several frustrating issues with boot2docker on OS X; it's generally just been less stable for me than Vagrant.

In terms of using docker, IMO it's the best development experience I've come across once you get everything set up. It can be confusing to get your workflow set up at first, and it seems like everyone does it a little differently; I'm hoping that best practices will standardize a bit as docker continues to mature.

I love having every part of an app (app code, split into a few microservices if you wish, postgres, redis, rabbitmq, etc.) completely isolated, and docker-compose is a great system for linking things together. I also currently don't have any puppet/chef/etc code and love not having to maintain that. In my mind a large part of the need for configuration management tools is dealing with the complexity of diffing two arbitrary states of infrastructure, and with the immutable approach of docker containers all that complexity disappears.
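
For a stack like that the docker-compose.yml stays pretty small; a rough sketch (the service and image names are just examples, not our actual setup):

    web:
      build: .
      ports:
        - "8000:8000"
      links:
        - db
        - redis
    db:
      image: postgres
    redis:
      image: redis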


Did you manage to get automatic code reload set up with Vagrant/Docker? Last time I tried the following config: host folders shared with the Vagrant VM, and Vagrant folders mounted inside docker containers. Unfortunately, file change events on the host didn't propagate to the docker containers. As far as I remember, this was a limitation of the Vagrant shared filesystem.


I'm using the same setup and it works great. Vagrant shares a host folder into the guest VM, and then the docker containers mount subfolders of that shared folder into the container.

So you change a file on the host computer, and it reflects automatically inside the guest VM which is the docker host, and automatically inside the containers since that same location is mounted by the containers.

I just checked the Vagrantfile and the Ansible playbook we use to launch the docker containers, and I don't see any special magic required to get the syncing working with the Vagrant filesystem; it just works out of the box.
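
Concretely it's just the two mounts chained together; roughly (the paths and image name are made up for illustration):

    # Vagrantfile: share a host folder into the guest VM
    #   config.vm.synced_folder "./app", "/vagrant/app"
    # then on the docker host, mount that same path into the container:
    docker run -d -v /vagrant/app:/usr/src/app myimage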


Yup, this is what I do as well. No special configuration was required as far as I know, either.


Don't you lose inotify this way? Most autoreload solutions trigger on inotify events.


No, it works fine for me. The node.js livereload code running inside the container can detect the file changes just fine, and it just alerts the browser running on my host computer via a port that has been forwarded from inside the container to the docker host, and from the docker host to the VM host.
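
Port-wise it's only two hops; roughly (35729 is the usual livereload default, everything else here is a placeholder):

    # container -> docker host (the VM): publish the livereload port
    docker run -d -p 35729:35729 -v /vagrant/app:/usr/src/app mynodeimage
    # docker host (the VM) -> your machine, in the Vagrantfile:
    #   config.vm.network "forwarded_port", guest: 35729, host: 35729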

And by the way this setup is bidirectional. If code inside my container, or an SSH session running inside my container makes a disk change, that change is also reflected on the version of the folder that runs on the host.

So if you don't want to go through the effort of doing the port forwarding, you can run live reload on your host machine, and reload the browser automatically in response to changes that were made by the container.


Filesystem events don't propagate across any networked filesystem (at least none that I've seen). FYI, vagrant is probably just using NFS. The default boot2docker setup uses vbox shared folders.

Events should work just fine when they are on the same host (although currently overlayfs does not support inotify, so...)


re: boot2docker, I experienced the same issues as you, and found it even more annoying due to the little incompatibilities, etc.

I actually did something simple with both Vagrant and Docker; I'm hoping to bundle it into a self-executable and have it as an alternative to boot2docker.

Right now it works only on OS X & Linux, but what it does is use Vagrant to build a "docker-host" VM that you spin up manually (or you can just use a launchd script to spin it up on startup). Since Vagrant has great support for setting up a network bridge, I basically created a virtual private network between the host VM and my OS. I set up the docker client locally (brew install docker), and through the network bridge it creates a pretty seamless experience.

The beauty of Vagrant here is that I can customize the mem, cpu, etc. Vagrant also has rsync support (and rsync-auto!) and of course, great shared folder support with the docker image. I have experimental support to also create this experience on AWS, including spot instance support.

You can check it out here: https://github.com/mahmoudimus/docker-host-osx.


I have been trying to set up docker, but the tutorials on the web seem not to be focused on active development (at the moment). Can you link to some documentation where I can learn about the workflow and the transition from the old way of doing things to the Docker way?


I do the same, with the exception of docker-compose. What I have found to work for me is running mysql/memcached on the vagrant box and running docker with --net=host in development (sketched below). I deploy to Elastic Beanstalk with RDS and ElastiCache though, which contributes to my reluctance to completely set up multi-container configurations in docker-compose. My experience there has not been so smooth; would love to hear more about how people are doing that.
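
With --net=host the container just sees the box's mysql/memcached on localhost; roughly (the image name is a placeholder):

    # no -p mappings needed; the container shares the VM's network stack
    docker run -d --net=host my-app-image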


> Data persistence support is not out-of-the-box.

This is actually not the case. Although containers do not share any persistent volumes with the host by default, you can use the --volume option[0] to do so.

To answer your question, I've used Docker for local development to run MySQL, Postgres, and Redis inside of containers. Using the aforementioned --volume option, you can share the unix socket opened by any of these services from the container to the host. Otherwise, you can use the -p/--publish option[1] to share ports between the container and the host.
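
For example, for Postgres either of these works (the paths and port are the Postgres defaults):

    # share the unix socket with the host via a bind-mounted directory...
    docker run -d -v /var/run/postgresql:/var/run/postgresql postgres
    # ...or just publish the TCP port instead
    docker run -d -p 5432:5432 postgres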

I've had a generally pleasant experience using Docker for this use case and would recommend it. It's nice being able to start using a new service by pulling and running an image. Similarly, it's nice to have the ability to clear the state by removing the container, assuming you choose not to mount volumes between the container and the host.

The only frustration I've run into is running out of disk because I have too many images, but it takes a while to get to that point and those can easily be deleted.

[0] https://docs.docker.com/reference/run/#volume-shared-filesys... [1] https://docs.docker.com/reference/run/#expose-incoming-ports


Thank you for clearing that up. So, does this work like Vagrant's synced folders? Does the container then get a path to the host's development artifacts?


You have to manually specify the folders you want shared in the arguments to the command. For example, running this:

   docker run -v /tmp/:/host-tmp/ ubuntu:14.04 bash -lc "echo 'TEST123' > /host-tmp/test"
should create a file on your host's filesystem called "/tmp/test" containing the text "TEST123".


I use Vagrant for everything; even tiny projects go in a Vagrant VM (isolation is the primary win, along with the ability to do a git clone, vagrant up, and be away).

I don't use any kind of provisioner with Vagrant, just a straight bootstrap.sh, as honestly I don't like them.
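
The whole Vagrantfile ends up being roughly this (the box name is just an example):

    Vagrant.configure("2") do |config|
      config.vm.box = "ubuntu/trusty64"
      # one plain shell script instead of a config management tool
      config.vm.provision "shell", path: "bootstrap.sh"
    end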


I don't use Docker, though that's mainly because I don't have a need for it.

As a solo coder, I love vagrant - the fact that you can use a configuration file with a script or two to build out an entire VM has so many benefits: less time to build the VM, easily destroying and rebuilding the entire VM, saving drive space by destroying the VM when you don't need it, keeping the VM configuration in a git repo, distributing the configuration to someone else to use, and, best of all, having all the steps used to configure the VM documented in the config file and scripts.


Everything you said can be done with Docker, just faster.


If he's not on a Linux host that can run Docker to start with, it's not going to be faster, and it's adding complexity.


I was able to get a somewhat serviceable environment up on my Mac. However, after all the effort involved I ended up sticking with vagrant. There is no way I could explain the process to someone I would potentially work with.

I check back in about once every two months to see if there have been any breakthroughs.


I have typically created a Vagrantfile to spin up an Ubuntu host with a static IP, configure its Docker daemon to listen on a network port, and 'brew install docker' to get the client on the Mac. Set DOCKER_HOST to the vagrant VM IP+docker port, and it's easy to distribute to a team.
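
The client side is basically just this (the IP and port are whatever you give the VM; 2375 is the conventional unencrypted Docker port):

    brew install docker
    # point the local client at the daemon inside the Vagrant VM
    export DOCKER_HOST=tcp://192.168.50.4:2375
    docker ps    # now talks to the VM's docker daemon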

I wish the Docker team hadn't more or less given up on boot2docker, since it completely abandons the vagrant advantage of a folder shared with the OS. People do weird shit like trying to mount their OS X home dir over NFS and, well, Vagrant solved this a long time ago.


Interesting. I haven't looked into it much because I like the concept of having a completely separate system virtualized for the sake of being as similar to the production environment as possible (Ubuntu), and I typically have been coding on Arch.

If what you say is true, Docker would be a great way to do development through SSH on a VPS with minimal RAM and resources. I'll definitely take a look at it.


Is "just use docker" the new universal HN comment like "can somebody port this to javascript" or "blockchain solves all the world's problems because {mumble, mumble, unfounded optimistic ideology, mumble}"

VMs are great when you want to test cross-platform or cross-architecture or legacy compatibility all on one machine (there's more to the world than Ubuntu).


Not really. Docker is a fundamentally different approach to software execution and packaging, which has a number of significant effects on how we deploy and run software. Basically, it allows deployed systems to be packaged as a very complete redistributable artifact, covering not only dependency management, packaging, and distribution, but also resource isolation, scaling, and security. It's far more universal than Puppet or Vagrant, far easier than Ansible, far more efficient than a VM, and far more robust than any of the previous Linux container efforts.

While right now it's a Linux-only thing, with the incoming Windows support things will start to change dramatically.


That's quite a bit of koolaid someone has been drinking.


How good is Docker, really, at isolating the environment from the host machine? With a VM we have complete isolation, while Docker (i.e. containers) shares resources with the host machine; what about having different tools, and especially different versions, installed on the two?


Docker gives you this level of isolation. Basically the Kernel is shared in Docker, but not a whole heck of a lot more. Docker is very good at giving you what appears to be a completely isolated system.

It isn't a VM though, and there are some isolation issues, but primarily from a security standpoint, not something that results in different versions or tools being an issue.


So what would be the advantage of using Docker for deployment, as opposed to simply firing up a whole server based on a Vagrant image?


Much better resource utilization, much better control of the deployment environment, much easier packaging/release constructs.


Vagrant with Docker provisioner:

https://github.com/czettnersandor/vagrant-docker-lamp

Much faster than provisioning everything directly in the VirtualBox VM, so it's not an "or" decision; the two things work well together :)


Docker is really great for developing things IMO. I use it in a few ways actually. One thing I've found it really useful for is isolating build slaves in Jenkins (using the docker-cloud plugin in Jenkins).

I also like to use it to create test deployments for debugging or evaluating things, for example it's a lot easier to run Hadoop in pseudo-distributed mode inside a Docker container with host networking, than it is to fiddle with running it in a VM and either getting NAT or DNS working just right, or installing it locally. With the Docker container, if anything goes awry, it's just so easy to get back to initial state by killing the container and starting again.

As for Vagrant, I like it a lot too, but for different reasons. You can define a set of actions that is a lot closer to installing whatever it is you are developing, instead of baking everything together like you do with Docker, which can be desirable. I have used it in the past for creating virtualized cluster environments for integration testing of distributed systems. So far I've only used the VirtualBox provider, but I'm thinking of re-working some of my past uses of it that don't strictly require a VM to use the Docker provider.


I use docker for the development of FEniCS, an open source scientific computing package written in a mix of Python and C++. FEniCS requires a lot of dependencies which can be hard to compile (like PETSc) or need to be held at a specific version (like Boost). Docker helps keep the environment constant. We currently plan to have build bots based on docker as well to streamline build testing.

When I write code inside docker, I always push to a git repo like Bitbucket, so data persistence is easy. Besides, you can always use --volume, which works out of the box on Linux.

Vagrant requires some basic shared environment, which is not realistic in my case. For example, I use Arch Linux myself and am forced to use old Scientific Linux at work, while many other FEniCS developers use Ubuntu, Fedora, or Mac stuff. It is too painful to write and maintain a Vagrant script for all of these (different compiler, Boost, BLAS, LAPACK, and 10+ other numerics-specific packages). I even tried Vagrant+docker, but in the end, with docker maturing, I switched to docker plus bash scripts instead. It is just more convenient and has fewer dependencies.

So I'd endorse a docker-only approach if you mostly use Linux and your project has a diverse group of people.


Working in a consulting capacity, mainly doing LAMP development with a small team. We use a standardized Vagrant image (https://github.com/readysetrocket/vagrant-lamp) which has cut down on a lot of local environment issues for our dev team.

Previously all devs had their own environment (some MAMP/WAMP, some homebrew, some remote, etc) which led to onboarding and support issues. Setting up a standardized recommended dev environment has helped with that a lot - both in terms of reducing project onboarding time and getting junior developers up and running.

Would love a day where we can build projects as Docker containers and hand them off to our clients' IT teams, but that seems to be a way off.

SO thread where the authors of Vagrant and Docker weigh in: http://stackoverflow.com/questions/16647069/should-i-use-vag...


I use chef and test-kitchen to bootstrap my dev VM (vagrant is used by test-kitchen). So I have written my cookbooks, and depending on the project, I converge (spin up) VMs using only the cookbooks that I need, e.g. no Java will be installed if the project only requires node.js. The main gain with that is my dev VMs are totally disposable and hold nothing of their own. Everything is synced into the VM from my host machine, data included.

Lately I am trying to "dockerize" my backends, so in the case where my workspace project needs a MongoDB or another backend from my architecture, I should be pulling those containers up on converging. That will make my life easier when writing cookbooks for the backend dependencies.

I believe you can achieve the same using Ansible; Chef was a personal taste.


Vagrant with Ansible to set it up: 1 build VM, 6 'deployed' VMs and 1 'deployer'.

Needs a minimum of 42G RAM, 150G of disk space, and fills its logs at 2G/h. Not great when you are running on a 256G SSD.

Building takes 2h+ with a ~10% random failure rate due to dependency mirrors and timeouts.

The python code is deployed as gzipped virtualenvs to the hosts. This actually works pretty nicely as it means you can't just import stuff and have to build things in a 12-factor style (we don't use ENV_VARS/stdout logging though).

TBH I still don't really see the point of docker. I'm sure it will 'just click' at some point but it hasn't happened yet.


At this point, I too am trying to find the 'just click' moment for docker, though I have only been looking at the two for a few days.


Why not both? We use vagrant to create our docker environment - a 3 machine CoreOS cluster. This is so we accurately represent our production environment.

We then use our production docker image(s) with some more development appropriate configuration options. Vagrant mounts the user's home directory at /Users/<username>/ inside the CoreOS machines. Then we mount the appropriate folder inside the docker container at where the container would normally expect to find the app's code. This way the developers have live updates without having to rebuild the docker image or anything.


I have been reading that Vagrant provides out-of-the-box support for Docker after version 1.7. So it seems this will be easier, with an obvious case for Windows or Mac.

I am already on Linux, so my question is: why create an extra abstraction on top of it? Isn't the point of Docker to provide isolation like Vagrant does, but without the extra overhead of a VM? As far as I could gather, one prominent selling point of Docker is its cheapness. Wouldn't running Docker on top of Vagrant defeat that purpose?


We do something similar (though Ubuntu not CoreOS). The obvious advantage is you get to leverage all the work you did setting up docker for deployment and you have a development env that much more closely matches production.

The downside is it feels like a lot of layers of abstraction. And I haven't quite figured out the right way to hook up e.g. the PyCharm debugger to a python interpreter running in a container that's running inside a VM.


I would put it the other way around. Docker+Vagrant is best used for deployment and hopefully it will be stable and battle-tested enough so I can use it in production.

I love the fact that once I've configured the dev environment on my PC and hit the road the next day, I can have exactly the same environment on my laptop by running a single command - "vagrant up". Not to mention that any dev working on the same project saves a ton of time by not having to configure everything from scratch.

I have not taken the leap of faith yet and I am not using Docker in production, but hopefully this will happen soon.


I think you wanted to say "Docker+Vagrant is best used for development" ;)


Yes! Thanks for the correction!


How would you provision the Vagrant box? I would think you'd want to avoid having some Dockerfiles for setting up production servers and some completely different provisioner for setting up development in Vagrant.


The idea is that you build a Dockerfile while you are developing, and then you can push an identical environment to production - one of the common causes for production issues being differences from the developers' environments.
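
i.e. the same Dockerfile drives both; a trivial sketch (the base image and commands are placeholders):

    FROM python:2.7
    WORKDIR /app
    # identical dependency install in dev and in prod
    COPY requirements.txt .
    RUN pip install -r requirements.txt
    COPY . .
    CMD ["python", "app.py"]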


Yes, sorry, that was my point actually. That once you commit to Docker on production that it probably doesn't make sense to use anything else for development environments.


I've used Vagrant for a few projects over the past few years - mostly small things like hackathons and such. Haven't used it much in the past year or so though.

At my last job we used Docker extensively for developing our main software product, based on a Django + PostgreSQL + RabbitMQ + Celery stack. It's definitely a bit tricky to get your head around at first, but after that, it's very nice being able to just type "docker-compose start" and have a working application with consistent configuration ten seconds later.


Did you use a separate docker container for each part of the stack? From what I can gather, the docker way is to isolate each piece of complexity in its own container.

And I agree that the docker way of doing things is a bit tricky at first, since we are not used to doing things that way.


Did you keep the postgresql data inside the container?


That's a tasty sounding software stack


Vagrant + Ansible for most things. I have used Docker to rebuild some of my environments, and there's a lot of promise, but there are some hard issues (especially w/r/t more complicated applications with multiple dependencies) that I'm still hoping to make less hard before switching more to a container-based workflow.

One of my main day-to-day Vagrant configs is encapsulated in Drupal VM (http://www.drupalvm.com/).


I've used vagrant for a big rails 3 app with a lot of dependencies and services, e.g. solr, a redis-backed delayed_job queue, etc. - stuff that would have been difficult or impossible to manage on a Mac.

The vm environment was also as close as possible to the production env, with the same os version, etc.

It also greatly streamlined onboarding of new devs. The dev environment setup was a couple of hours instead of a day or two.


My team is currently using the gradle cargo plugin (https://github.com/bmuschko/gradle-cargo-plugin) to deploy to our remote docker machines for testing. This is my first time hearing about Vagrant. What are its advantages for our use case of active development?


The only use case I had for docker so far was to set up a cross compiler toolchain to produce binaries for an armv7 igep board.

It was significantly easier to tell my co-workers to install docker and type `make local` for local binaries and `make igep` to produce an igep armv7 binary by running a docker container.
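
The make targets are just thin wrappers; `make igep` boils down to roughly this (the image and compiler names are placeholders, not our actual toolchain):

    # run the cross toolchain inside a throwaway container
    docker run --rm -v "$(pwd)":/src -w /src my-armv7-toolchain \
        arm-linux-gnueabihf-gcc -o app main.c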


I use docker extensively for both development and production, using it to mirror the production environment as much as possible. Write code on the host, run it in the container. It took me some time to adjust to the concept, but once I did, it was pretty cool.


How much time did it take for you to adjust? I'm still finding it difficult to wrap my head around the concept of dockerization.


I use Ansible and Vagrant for active development of client projects. It's a great combo because it ensures my local environment matches production as closely as possible, and I can go from nothing to a running environment with a vagrant up.


I use vagrant for all my development. It was easy for me to set up and play around with some new tools, such as saltstack (configuring master and minions), and I reused the same bash scripts to set up the dev env.


Not to hijack the thread, just wondering if anyone has experience with zero-downtime deployment of a multi-container app with cross-container communication?


I would definitely check out a number of docker related technologies. To answer your question, we don't have our software out in Docker yet, but it's coming, and it will make a splash when it lands.

Vagrant is a great technology, but I recommend taking a look at Docker Compose (https://docs.docker.com/compose/), previously known as Fig. One of the great advantages of Compose is that if you combine it with Swarm (https://github.com/docker/swarm) you have a very robust distributed deployment system. Docker Machine is the direct competitor to Vagrant, but to be honest, I don't use it. I spin up my docker containers via some built-in service APIs we already built for proprietary reasons.

If you want to get really robust, you can also look at Kubernetes (and its zero-downtime deployments) and Mesos. These both add a huge amount of complexity to the deployment, but also grant a robust distributed system for managing downtime and deployment. Red Hat also has OpenShift.


Yep, machine/compose/swarm are great tools from Docker and I'm already using them, but compose is more of a "dev" tool, so it restarts all containers with every new deploy.

What I'm looking for is a robust and systematic zero-downtime approach to updating some of the containers (say in a load balancer → web servers → db architecture).


I think this is the golden question regarding this stuff... would love to see a blog post about this.


Challenge accepted - I'll put something together this weekend. What exactly are you looking for?


Yes, Vagrant only (it is awesome); still not sure what docker does. After setting up a Vagrant VM, I run fabric scripts to build the box for its role.


I've been using Vagrant with docker inside for development for some time now and it's been the biggest productivity boost ever; give it a try.


Vagrant and puppet. And it's the same puppet we use for production, so we're as close as we can get.


I use vagrant to run a multi-vm hadoop cluster for testing.


systemd-nspawn and linux-vserver environments with dependencies (library versions, compilers, even python virtualenv) guaranteed by cfengine promises.


Isn't it more like Docker with Vagrant?


vagrant is okay, but it's kind of a PITA.



