I don't have anything invested in either project, but I just want to point out that dockerlite's last commit was two months ago and the Docker project has had twelve releases in that same time frame[1]. Docker has moved quite a bit in the last two months and is getting close to production-ready.
I don't know anything about Docker being production ready, but both LXC (used by Docker) and OpenVZ are production ready. OpenVZ is a mostly equivalent technology that has been around for a number of years, but IMHO hasn't gotten the management features and updates that Docker is getting.
You can use OpenVZ right now for similar things, but it isn't as easy to create small single-use containers.
The nice thing OpenVZ has that Docker doesn't currently support is mounting a host directory (read-only or read-write) into the OpenVZ container, so you can easily share lots of data with many containers from one copy. Right now, Docker supports sharing volumes between containers, but not with the host system.
For example, you can make very small OpenVZ containers by using common /usr, /lib, and /lib64 directories and mounting them read-only in all of your containers. It's easy to bring up OpenVZ on a CentOS 6 machine, and you can run Ubuntu containers on it if you like.
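For reference, the host-directory sharing described above is typically done in OpenVZ with a per-container mount action script that vzctl runs before the container starts. A minimal sketch (the container ID 101 and the shared path are illustrative, and this must run as root on an OpenVZ host):

```shell
#!/bin/bash
# /etc/vz/conf/101.mount -- vzctl executes this on the host each time
# container 101 is started (container ID and paths are examples).
. /etc/vz/vz.conf            # global OpenVZ settings
. "${VE_CONFFILE}"           # per-container settings; provides ${VE_ROOT}
# Bind-mount a shared host directory into the container's root, then
# remount it read-only so containers can't modify the single shared copy:
mount -n --bind /srv/shared "${VE_ROOT}/srv/shared"
mount -n -o remount,ro,bind "${VE_ROOT}/srv/shared"
```

The same script can bind-mount /usr, /lib, and /lib64 the same way to get the tiny shared-base containers mentioned above.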
I have nothing against Docker and hope they keep adding features, but my current experience is with OpenVZ.
Great! Thanks for pointing that out. I had read through the tremendous number of comments around that feature a month ago, and didn't expect it to make it through committee already.
This is a good point. There are two definitions of "production ready":
1) Works for my application.
2) Won't get my head chopped off.
The second is what most developers working inside a hierarchy really want. I am always torn in trying to keep up with new things, because reliability is generally not the focus of new products.
Docker is great. For those who don't know, it's basically sandboxing infrastructure for Linux, for running many apps on one physical or virtual server. The future is one app per container. This means mounting a read-only, opaque deployment image with a writable overlay on top, so it has a live-CD-inspired filesystem (AUFS or a dm-setup overlay). Install just one app per container, do that one thing, and do it well. Combined with a just-enough-operating-system (JEOS) approach, it's possible to have lean-and-mean server instances minus all the default crapware (e.g., Bluetooth and PPTP).
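The one-app-per-container idea can be sketched with Docker's CLI (image and program names here are illustrative, not from the comment above, and these commands need a running Docker daemon):

```shell
# Each service gets its own container on a minimal base image,
# rather than all of them sharing one full OS install:
docker run -d ubuntu /usr/bin/redis-server
docker run -d ubuntu /usr/sbin/nginx -g 'daemon off;'
# Each container sees only the read-only base image plus its own
# copy-on-write overlay, so its writes stay private to it.
```

Because the base image is shared read-only, dozens of such containers cost little more disk than one.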
The only feedback is that BTRFS == fsck, and is also sponsored by Oracle, an organization that is killing off the superior, but N.I.H., ZFS. Let's not forget all of their other community fails (Berkeley DB, Java, MySQL, OpenOffice).
> The only feedback is that BTRFS == fsck, and is also sponsored by Oracle, an organization that is killing off the superior, but N.I.H., ZFS. Let's not forget all of their other community fails (Berkeley DB, Java, MySQL, OpenOffice).
Personally I always read the situation as Linux not supporting ZFS because it was born under the Solaris banner, while Btrfs, though being developed "in cooperation with" third parties, is fundamentally a shiny open generic FOSS technology (the name itself eschews branding to make it sound like "look, it's all just B-trees, you can understand this.") Heck, until this conversation, I thought Btrfs was something Linus himself had a stake in, like Git. What wonders a different brand-image can do to people who have incomplete information :)
The ZFS sources are licensed under the CDDL, which is considered incompatible with the kernel's GPLv2, making it impossible to merge them into the Linux kernel. You could run ZFS as a user-space daemon communicating with the kernel over the FUSE interface, but it's never going to be part of the kernel itself.
My personal vain hope is that tux3 will become an awesome filesystem and will become the successor to ext4. tux3 promises to support snapshotting.
So far tux3 looks very promising when it comes to the underlying design and the performance. But we shall see about features and stability when it gets closer to completion. And also if any of the added features will hurt performance.
Having worked directly with Daniel Phillips, its lead creator... I am highly skeptical. His track record at finishing complex projects is not so great. Look at Zumastor for an idea.
I believe this was created and presented as a simple (as opposed to easy) demo to demonstrate BTRFS as a possible backend for images. Docker currently uses AUFS and was designed to support more filesystem backends over time. It also demoed a potential path for Docker to become more extensible.
Yes, dockerlite is mainly a proof-of-concept, to experiment with a couple of important things.
1. Use BTRFS instead of AUFS, and see if any specific problems arise, or if we hit any corner cases when doing that.
2. Set up the network without using LXC's default userland tools, in a race-condition-free way. This is not as obvious as it sounds.
dockerlite was a success on both counts. It paved the way for BTRFS support, and gave us some insights into how to make the network setup more flexible.
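The race-free network setup from point 2 can be sketched with plain iproute2: the host creates and fully configures both ends of a veth pair before the container ever sees its interface, so there is no window in which the container observes a half-configured network. This is a generic sketch (namespace, interface names, and the address are illustrative, not dockerlite's actual code; requires root):

```shell
# Create a named network namespace standing in for the container:
ip netns add c1
# Create both ends of the veth pair on the host side first:
ip link add veth-host type veth peer name veth-c1
# Move one end into the container's namespace...
ip link set veth-c1 netns c1
# ...and configure it entirely from the host, before anything runs
# inside the namespace, so nothing can race the configuration:
ip netns exec c1 ip addr add 10.0.3.2/24 dev veth-c1
ip netns exec c1 ip link set veth-c1 up
```

Doing this from the host, rather than letting the container's own init bring the interface up, is what removes the race.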
I keep seeing that sort of arrangement in headings like "How it works?" where I expect it to say "How does it work?" and I'm starting to wonder if that's actually valid English or just a common error.
So Docker is mainly a JSON parser to command LXC, I suppose. The bash JSON parser is the main part of this project; the rest is just wrapping LXC/btrfs functions.
The JSON parser is not "the main part of this project"; it was included just in case we needed to interact with the registry (but it wasn't deemed necessary).
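For readers curious what "JSON parsing in shell" tends to look like, here is a generic sed-based field extraction, not dockerlite's actual parser; it only handles flat, unescaped string values, which is about all a registry-style response needs:

```shell
#!/bin/sh
# Extract a single string field from a flat JSON object with sed.
# Only unescaped, unnested values are supported.
json='{"id":"abc123","image":"ubuntu","tag":"latest"}'

extract_field() {
    # $1 = field name; prints the field's string value
    printf '%s\n' "$json" | sed -n "s/.*\"$1\":\"\([^\"]*\)\".*/\1/p"
}

extract_field id      # prints: abc123
extract_field image   # prints: ubuntu
```

Anything nested or escaped breaks this approach quickly, which is a good argument for keeping such a parser out of the critical path.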
[1]: https://github.com/dotcloud/docker/releases