
How is this better than Ansible, Salt, Chef, Puppet, etc.? Once it's automated, it's automated, no?



I wouldn't use the term "better" - I still rely heavily on Puppet and Ansible.

But the containerization aspect gives you the option to spin up and destroy services without worrying about polluting the host you're on.
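
For instance, a rough sketch of that workflow (the image and container names here are hypothetical):

    # start a throwaway instance of the service
    docker run -d --name report-svc report-svc:latest
    # ...use it, then tear it down; nothing is left behind on the host
    docker rm -f report-svc
    docker rmi report-svc:latest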


But couldn't you get the same thing using a Python virtual environment? Unless you're using a lot of modules with C extensions, you wouldn't need to install anything outside the virtual environment.
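
i.e. something along these lines (requirements.txt being whatever the project already pins):

    python -m venv .venv
    . .venv/bin/activate
    pip install -r requirements.txt   # everything lands inside .venv, not on the host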


> Unless you're using a lot of modules with C extensions

In my experience, only the simplest applications actually need nothing outside of vanilla Python and pip. Suddenly the client wants PDF reports and you're setting up e.g. some TeX-based printing system with a bunch of host packages. Only containers give you the peace of mind that all dependencies, current and future, can be described as part of the codebase.
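
A rough sketch of what that looks like once it's in the codebase - the package and module names here are only illustrative (a TeX toolchain for the PDF reports, a hypothetical "myapp" entry point):

    FROM python:3.12-slim
    WORKDIR /app
    # host-level packages live in the image, not on the developer's machine
    RUN apt-get update && apt-get install -y --no-install-recommends \
          texlive-latex-base \
        && rm -rf /var/lib/apt/lists/*
    # Python dependencies
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt
    COPY . .
    CMD ["python", "-m", "myapp"]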


> Only containers give you the peace of mind that all dependencies, current and future, can be described as part of the codebase.

Couldn't the same thing be done via the package manager and an RPM spec or deb file where all the necessary dependencies are listed and installed as part of the package? It could be done on a VM, or on a physical machine, by keeping track of which dependencies get installed with the application and removing those newly installed dependencies along with it on uninstall.
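
For the RPM route, the dependency list would be the Requires: lines in the spec - a minimal, hypothetical sketch:

    Name:      myapp
    Version:   1.0
    Release:   1%{?dist}
    Summary:   Example service
    License:   MIT
    BuildArch: noarch
    # the full runtime dependency list, installed and tracked by the package manager
    Requires:  python3
    Requires:  texlive-latex

    %description
    Example application with its runtime dependencies declared.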


My understanding is yes, and I believe you are describing NixOS (https://nixos.org/).
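
Or, short of a full NixOS system, a per-project shell.nix along these lines (the package names are just examples):

    { pkgs ? import <nixpkgs> {} }:
    pkgs.mkShell {
      # everything the app needs, declared in the repo; nothing installed globally
      buildInputs = [
        pkgs.python3
        pkgs.texlive.combined.scheme-small
      ];
    }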


But why pollute the host, or go back to VM overhead per application, if we already have a solution that is easier to handle? Even just in terms of build and launch time, containers are quite a bit more performant than VMs.


> But why pollute the host

The package manager can handle removing pretty much any file it installs when uninstalling, so the host really doesn't get "polluted".
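
e.g. with RPM (the same idea works with dpkg; "myapp" is hypothetical):

    rpm -ql myapp      # list every file the package installed
    sudo rpm -e myapp  # remove the package and those files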

> or go back to VM overhead per application

In a development environment hosted on a VM, several applications can be installed on the same VM (rather than one per VM) to reduce overhead. Testing and code changes can then be done either with a pip install -e (editable) install, modifying code in the working directory, or by making the change, repackaging, reinstalling, and restarting the daemon.
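
The editable-install loop is roughly this (paths and service names hypothetical):

    cd ~/src/myapp
    pip install -e .              # install once, pointing at the working tree
    # edit code in place, then just restart the service to pick it up
    sudo systemctl restart myapp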

With a container, at least in my experience, you need to rebuild the image each time you change the code, which actually takes longer than modifying an editable install or rebuilding the wheel/RPM and reinstalling it.


Typically on a dev PC you'll have multiple apps rolled out for development or feature testing. If this is done directly on the host, it's always possible for these environments to influence each other. It's also very common to forget to script a dependency, which may only get noticed once someone tries to deploy on a fresh installation.


Yes, but dependencies are handled by the package manager. So if one is missing, it's a simple matter of adding it to the list of dependencies.

In any case, the point I was trying to make is that the development cycle with containers, in my experience, is slower because you have to go through the build step every time you make a change. For an interpreted language like Python, that shouldn't be necessary until close to the end, when you test a fresh build before submitting the changes for review.


The setup is a bit more involved, but this can be mitigated in several ways. One is Docker or SSH plug-ins for editors like VS Code, coupled with an SSH server included in the development build stage, so you work against the container much as you would against a remote server. Another approach is mounting the source tree as a volume. You can also do a mixed approach where development is done on the host, but testing and deployment are done in Docker.
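
The volume approach is the simplest to sketch: with the code mounted over what's baked into the image, edits on the host are picked up without a rebuild (image name and module are hypothetical):

    # mount the working tree over the code inside the image
    docker run --rm -it \
      -v "$(pwd)":/app \
      myapp:dev \
      python -m myapp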



