
Hi Matt, you are not alone :)

It turns out that shared folders are not a sustainable solution (regardless of whether boot2docker supports them), so best practices are converging towards this:

1) While developing, your dev environment (including the source code and method for fetching it) should live in a container. This container could be as simple as a shell box with git and ssh installed, where you keep a terminal open and run your unit tests etc.
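
For example, a throwaway dev box might look like this (purely illustrative: the image, packages, and clone URL are placeholders for whatever your project uses):

  # Start a minimal, disposable shell box
  docker run -t -i ubuntu /bin/bash
  # Then, inside the container: install tooling and fetch the source
  apt-get update && apt-get install -y git openssh-client
  git clone https://github.com/you/yourapp.git /src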

2) To access your source code on your host machine (e.g. for editing on your Mac), export it from your container over a network filesystem: Samba, NFS, or 9p are popular examples. Then mount that from your Mac. Samba can be natively mounted with Command-K; NFS and 9p require MacFUSE.
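
For example, assuming the container exposes a Samba share named 'src' and is reachable at 172.17.0.2 (both hypothetical), the command-line equivalent of Command-K looks roughly like this:

  # On the Mac: mount the container's share ("Connect to Server")
  mkdir -p ~/container-src
  mount_smbfs //user@172.17.0.2/src ~/container-src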

3) When building the final container for integration tests, staging, and production, go through the full Dockerfile + 'docker build' process. 'docker build' on your Mac will transparently upload the source over the Docker remote API as needed.
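
A minimal sketch of that flow, with a placeholder image name, build step, and Dockerfile contents:

  # Dockerfile, at the root of your project
  FROM ubuntu
  ADD . /src
  RUN cd /src && ./build.sh

  # On your Mac, in the project root; the build context
  # (your source tree) is uploaded to the daemon automatically:
  docker build -t myapp .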

There are several advantages to exporting the source from the container to the host, instead of the other way around:

- It's less infrastructure-specific. If you move from VirtualBox to VMware, or get a Linux laptop and run Docker straight on the metal, your storage/shared-folder configuration doesn't change: all you need is a network connection to the container.

- Network filesystems are more reliable than shared folders + bind-mounts. For example, they can handle different permissions and ownership on both ends. A very common problem with shared folders is "oops, the container creates files as root, but I don't have root on my Mac", or "Apache complains that the permissions are all wrong because VirtualBox shared folders threw up on me".

That said, we need to take that design insight and turn it into a polished user experience - hopefully in Docker 0.9 this will all be much more seamless!




Thanks for taking the time to write this. I've hit a major wall in figuring out the best workflow for this exact scenario. Good to finally hear an official suggestion on the matter. I've been depending on shared directories, so I'll definitely be experimenting with network filesystems.

As Docker evolves, it would be great to have some kind of official resource to get suggestions for optimal workflows as new features become available (the weekly docker email is my best resource right now). Searching the internet for info has been a huge chore as most of the resources (including the ones hosted by docker.io) are woefully out of date.


> As Docker evolves, it would be great to have some kind of official resource to get suggestions for optimal workflows as new features become available

Yes! We are trying to figure this out. Our current avenue for this is to dedicate a new section in the docs to use cases and best practices.

As you point out, our docs (and written content in general) are often inaccurate. We need to fix this. Hopefully in the coming weeks you will start seeing notable improvements in these areas.

Thanks for bearing with us!


"- It's less infrastructure-specific...." - "a very common problem with shared folders is "oops the container creates files as root but I don't have root on my mac", or "apache complains that the permissions are all wrong because virtualbox shared folders threw up on me"."

Thank you for taking the time to write this; I quote the above just to emphasize those two pain points. I've been using Docker since 0.5, and my current setup is still based around sharing from host to guest. The problems you mention obviously aren't deal-breakers (at least for me), but the accumulated effort of dealing with these issues (especially having to fix permissions) adds up over time.

Here's a concern and a hypothetical, though, and I'd like some insight (or a facepalm) from others if I'm wrong...

Say I'm collaborating with a few people on a Rails app, we all work within a Docker container we build from a Dockerfile kept in our source control, and we use the guest-to-host setup you outline. What happens if one of my developers accidentally pushes that container to Docker's public registry? Is my billion-dollar ( ;) ) Rails app stored in that container and suddenly available to anyone who wants to pull it?

I would hope the above is a far-fetched example, but with host-to-guest sharing I at least have some safeguard in knowing that my data is decoupled from my configuration. Is such decoupling worthwhile in your opinion?
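
(For what it's worth, a quick sanity check, with hypothetical names, would be something like:

  # List what actually got baked into the pushed image
  docker run ourteam/railsapp ls -la /src

to see whether the source tree ships with the image.)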


I tag all my local containers against my ($5/month) quay.io account to address the problem you are describing. If all of your image names start with quay.io/name, then you don't have to worry about accidentally exposing your Docker images.
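
For example, with a hypothetical username and image name:

  docker tag railsapp quay.io/yourname/railsapp
  docker push quay.io/yourname/railsapp

An accidental push then lands in the private quay.io repository instead of the public index.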


This should be a blog post, tutorial, or guide on the documentation page. It would be really helpful to have guidance on how to structure a development workflow using Docker while avoiding bad practices like shared volumes.


Seconded. My biggest stumbling block with Docker at the moment is “best practices”, as I work on coming up with a Docker Dev Ecosystem (for myself and for a team).


Same here. I've cooked up something built around bash scripts and guest-to-host sharing of source code, and I can't help but have a nagging feeling that it isn't as good or correct as it should be... or perhaps it's just totally wrong altogether.

In the absence of "best practices", even a discussion thread somewhere that allows Docker users to pick apart and discuss configurations would be helpful. Pretty much all I've been able to find is a smattering of blog posts.


In my experience, Samba and NFS are awfully slow when working with big projects. When you use an IDE or editor that indexes all files for fast search and IntelliSense, working over NFS/Samba is problematic, IMO. That's why I like the Vagrant approach: I can edit the code with all the speed of my local tools, and only the VM accesses it through NFS, which is fast enough for serving requests in 1-2s.
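
Roughly this, in Vagrant terms (the box name and IP are placeholders; NFS synced folders need a host-only private network):

  # Vagrantfile
  Vagrant.configure("2") do |config|
    config.vm.box = "precise64"
    config.vm.network "private_network", ip: "192.168.50.4"
    config.vm.synced_folder ".", "/vagrant", nfs: true
  end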


In your experience, are NFS and Samba really slow on a virtual network? I have trouble imagining that they would slow things down enough to be a problem.


I would really be interested in details on how to connect the filesystems between OS X and Docker/Linux via 9p.

What would be the recommended way? How do you install the required software on each system? Should the server be hosted on OS X or on Linux?

On a recent Linux it seems like a 'modprobe 9p' loads the required module, and then a 'mount -t 9p serverIP /mountpoint' does the trick.
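
Concretely, something like this (serverIP and the mountpoint are placeholders; the transport option may need to be set explicitly):

  # On the Linux client
  modprobe 9p
  mkdir -p /mnt/code
  mount -t 9p -o trans=tcp serverIP /mnt/code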

But what about the OS X side?


Thank you for this. I found it extremely helpful.



