> Enter containers and container orchestration. Suddenly many of the problems that configuration managers were trying to solve aren't there anymore.
I profoundly disagree with this statement. I think containers are, potentially, a huge step backward in the configuration management domain. They are very useful for dynamic scaling and orchestration tasks, but awful as a configuration management tool. And they are mostly used as a configuration management tool.
Ansible/Puppet/Chef describe how to install your complete server pool. It's hugely reassuring to know that this knowledge is written down somewhere, in a formal form, that it is version-controlled, and that it is debuggable.
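As a minimal sketch of what that written-down knowledge looks like (the host group, package, and template path here are hypothetical), an Ansible playbook is a named, diffable, re-runnable record of every step:

    - hosts: webservers
      become: yes
      tasks:
        - name: Install nginx
          apt:
            name: nginx
            state: present

        - name: Deploy the site config from a version-controlled template
          template:
            src: templates/site.conf.j2   # hypothetical template tracked in the repo
            dest: /etc/nginx/conf.d/site.conf
          notify: Reload nginx

      handlers:
        - name: Reload nginx
          service:
            name: nginx
            state: reloaded

You can read it top to bottom and know exactly what the machine is supposed to look like, and you can run it again tomorrow against a fresh host.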
Containers as configuration management are the modern-day equivalent of the mythical server that no one touches and that no one knows how to reinstall. (A bit exaggerated, but correct nonetheless.)
Yes. This is one of the biggest myths about containerization. Docker containers are opaque because:
* "Dockerfile" or whatever that language is called is horrible, terrible, and very bad, with extremely minimal support for anything useful and braindead implementation for the little it supports (every new RUN line cements a new layer in the image, meaning you need lots of commands on a single RUN-line continuation to keep size down).
Essentially, this means that people do all their real work before Dockerization, through some undefined mechanism that is probably not very reproducible. Most often this means copying over static configurations, making changes, and so on, before running docker build.
* The FROM semantics encourage passing the buck and building on opaque binary bases that are not necessarily well understood. Since Dockerfiles are very hard to do anything useful in, people get something minimal working and then cement it behind a FROM tag to iterate further, because they don't want to risk breaking that base. We have short Dockerfiles where I work, but they are still stacked 2-3 layers deep, and this is just slapping binaries on top of each other; unlike an Ansible playbook, it provides no information about how or why these things are being combined (see the second sketch after this list).
Distributing a Docker image does not require distributing its source Dockerfile. The build is not necessarily reproducible, and the result is a big, fragile dependency stack that breaks when someone somewhere in the FROM chain falls over. It also creates an attack surface: inject something into an early, commonly used FROM base and everything downstream inherits it. People just trust that FROM images are great and good and reliable and trustworthy. We all should know how that will end up.
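To make the RUN-layer complaint from the first bullet concrete, here is a minimal sketch (the base image and packages are just placeholders):

    FROM ubuntu:22.04

    # Naive version: three RUN lines create three layers. The apt cache
    # removed in the third layer still ships inside the second one.
    RUN apt-get update
    RUN apt-get install -y build-essential
    RUN rm -rf /var/lib/apt/lists/*

    # What people actually write to keep the image small: one RUN line,
    # one layer, everything chained with && and backslash continuations.
    RUN apt-get update \
        && apt-get install -y build-essential \
        && rm -rf /var/lib/apt/lists/*

The cleanup only shrinks the image when it happens in the same layer as the mess, which is why real Dockerfiles degenerate into these long && chains.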
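And the FROM-stacking complaint from the second bullet, with hypothetical image names:

    # app/Dockerfile: the application sits several opaque layers deep.
    # internal/base-with-tools was itself built FROM company/base, which
    # was built FROM ubuntu:20.04. Rebuilding from scratch means finding,
    # and trusting, every Dockerfile in that chain, if they were ever
    # published at all.
    FROM internal/base-with-tools:1.4
    COPY app /opt/app
    CMD ["/opt/app/run"]

Nothing in those three lines tells you how the base was assembled or why; that knowledge lives, at best, in someone else's repository.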
Making a Docker image is much closer to the old "zip up this folder, label it 'todays-good-code', and save it on your desktop, Bob, we don't want to lose it." There is no information about what is in that copy. It is a raw binary blob.
This is the antithesis of configuration management.