I don't think "X has more users than Y" is a complete argument on its own; otherwise nothing would ever change. Of course it matters in the grand scheme of things, but the causal relationship is more complex: it involves many degrees of freedom and runs in both directions.
Before Docker came along, Linux already had LXC, which was never as popular as Docker is now but was certainly known and used. So when Docker appeared, LXC had more users than Docker, and yet Docker overtook it in a matter of weeks. The "X has more users than Y" state can be changed by all sorts of factors; by itself it is not enough to keep the system in equilibrium.
So yes, the fact that Linux is far more widely used than FreeBSD and illumos in the developer community is certainly an important factor. But I never see anyone actually saying "FreeBSD is great, but we want something supported by a larger community" or "illumos is great, but we don't have expertise with it", even though those are perfectly reasonable arguments to weigh when making a decision.
In fact, I hardly see anyone making these arguments, or any others really. It's as if these systems don't even exist. At first I attributed this to the "X has more users than Y" factor, but then I see people running into concrete problems with Linux container technology, in areas such as security and virtual networking, and these are problems FreeBSD and illumos have already solved. Surely when you have a problem you look for alternative solutions that don't have it?
Yet I don't see people looking at the alternatives at all. As I said, there are plenty of valid reasons not to use these other technologies, but it perplexes me that people refuse to even acknowledge that they exist.
And now that illumos can run Linux binaries inside a zone (FreeBSD did this 15 years ago too, and still does for 32-bit binaries; I believe work to extend it to 64-bit is well underway), the "I only know Linux" argument loses some of its potency: you can still run Linux, after all...
LXC and Docker are operations technologies: they take the same codebase and simply generate a different product from it.
Switching to a different OS, however, requires different development-time technologies. Some third-party dependencies, such as libraries you're used to relying on, might not even exist on the other platform.
Effectively, developers are locked into a pretty tiny development-time ecosystem: all the devs I know develop on Ubuntu (or on OSX with testing on Ubuntu, if they can get away with it). They depend on the apt package graph, including PPAs.
Half of the renewed enthusiasm behind containers isn't about security; it's about the fact that a lot of operations people prefer RPM-based distros, and it was always really annoying to try to keep a given piece of Linux software portable between deb-based and RPM-based distros. You needed to figure out how to specify operation-time dependencies against at least two package graphs, and also compile-time dependencies using autotools or similar.
In contrast, Docker and similar ops software are interesting (from one perspective) precisely because they let devs learn fewer things: you develop your software on your Ubuntu machine, create a container that basically replicates your development environment, "install" your software in there, and then distribute that. Now your software can be run on some other dev's (Ubuntu, OSX) machine, or deployed to a production (RHEL) machine. The other deployment scenarios are pretty minor in comparison.
Or, in short: devs and ops are separate jobs. Containers make ops people do more work and learn more things, but let devs do less work and learn fewer things. That's why devs are enthusiastic about them: containers push the work of packaging their software (or writing autoconf scripts) for various platforms off their plate.
Devs are happy to learn one new thing (e.g. how to write a Dockerfile) if it lets them drop an entire stream of continuous work, learning, and keeping-up, such as tracking changes across the multiple platforms their software supports.
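To make that concrete, here is a minimal sketch of what that "one thing" might look like. The base image, paths, and application name are hypothetical placeholders; the point is only that the image reproduces the dev's Ubuntu environment and its apt dependencies:

    # Hypothetical Dockerfile: freeze an Ubuntu-based dev environment into a shippable image.
    FROM ubuntu:22.04

    # Install run-time dependencies from the same apt package graph the dev already uses.
    RUN apt-get update && \
        apt-get install -y --no-install-recommends python3 python3-pip && \
        rm -rf /var/lib/apt/lists/*

    # "Install" the app as it sits in the dev tree; /opt/myapp and requirements.txt
    # belong to the hypothetical app.
    WORKDIR /opt/myapp
    COPY . .
    RUN pip3 install --no-cache-dir -r requirements.txt

    # The image runs just this one process.
    CMD ["python3", "-m", "myapp"]

Built once, the same image runs on another dev's machine or on a production RHEL host, which is exactly the "learn one thing, drop a whole stream of work" trade described above.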
> Now your software can be run on some other dev's (Ubuntu, OSX) machine, or deployed to a production (RHEL) machine.
What about dependencies between user-space and the host kernel? Aren't all containers forced to use the same kernel as the host?
The packaging scenario that you describe has existed for years with VMs, where the VM can have a kernel, or even an OS version, that differs from the host's.
The main difference (besides overhead) is that a VM contains a running collection of OS services along with your app. Because of this, the ops team needs to be involved in keeping the VM up to date, the VM usually needs to be "managed" with orchestration much like a physical machine, and all in all it's a big interdependent mess in which the devs can't just forget about deploy-time concerns.
Idiomatic usage of containers forces one particular solution for this: a container contains one service; multiple services means multiple containers, and the orchestration of those containers is up to the ops people and their software.
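As a purely hypothetical illustration (the service names and images are made up), that idiom means a two-service app becomes two containers, with the wiring between them left to whatever orchestration layer ops chooses, e.g. a Compose-style definition:

    # Hypothetical docker-compose.yml: one service per container.
    services:
      web:
        image: example/web:1.0          # the dev-built app image
        ports:
          - "8080:8080"
        depends_on:
          - db                          # a second service means a second container
      db:
        image: postgres:16
        volumes:
          - dbdata:/var/lib/postgresql/data
    volumes:
      dbdata:

Ops can swap this layer for whatever scheduler they actually run; the containers themselves don't change.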
VMs can also be used this way. EC2 ephemeral instances work great for a CoreOS-like "upgrade by starting instances from new AMIs and killing the old ones" strategy.
However, since ops people have no guarantee that a random VM they're handed isn't running arbitrarily old services with possible security vulnerabilities, they have to be conservative about deploying VMs created by devs. So VMs don't tend to get created by devs, and the devs still have to solve the deploy-time problem some other way in order to hand the ops people something they can build into a VM. This isn't as much of a problem with containers.
Unikernel VMs, on the other hand, are effectively equivalent to containers: both provide "just one service in a sandbox" granularity that ops can then manage as they please. If unikernels had come around 10 years ago (if Linux had been factored into a rump kernel, for example), I don't think we'd be nearly as interested in containers today.