
Containers are a convenient workaround for two problems: programs having incompatible dependencies, and OS-level isolation not being as strong as it should be.

For instance, you want to run one program that was written for Python 3.y, but also another program written for Python 3.z. You might be able to just install 3.z and have them both work, but it's not guaranteed. Worse, your OS only comes with version 3.x and upgrading is painful. With Docker containers, you can containerize each application with its own Python version and get a consistent environment that you can run on lots of different machines (even on different OSes).
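The per-app pinning described above can be sketched as a Dockerfile; the image tag, file names, and entrypoint here are hypothetical placeholders, not a definitive recipe:

```dockerfile
# Pin the exact interpreter this app was written for,
# independent of whatever Python the host OS ships.
FROM python:3.11-slim

WORKDIR /app

# Install the app's own dependency versions inside the image.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
CMD ["python", "app.py"]
```

A second app that needs a different interpreter just uses its own Dockerfile with a different `FROM` line; the two images never see each other's Python.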

They're also a lot more convenient than having to go through the arcane and non-standard installation procedures that a lot of software applications (esp. proprietary ones) have.

Yeah, honestly it kinda sucks that we're adding this layer of inefficiency and bloat to things, but these tools were invented for a reason.




> For instance, you want to run one program that was written for Python 3.y, but also another program written for Python 3.z. You might be able to just install 3.z and have them both work, but it's not guaranteed. Worse, your OS version only comes with version 3.x and upgrading is painful.

This is because the Linux model of global, system-wide shared dependencies is stupid, bad, and wrong. Docker and friends are a roundabout way of having each program ship its own dependencies.


The Linux model works fine (very well, in fact, because it uses less disk space and, much more importantly, less memory, since shared libraries are loaded once) for programs that are normally included in the Linux distribution, since the whole thing is built together by the same organization as a cohesive whole. If every random little 20kB utility program were packaged with all its dependencies, the bloat would be massive.

It doesn't work very well for 3rd-party software distributed separately from the OS distro and installed by end-users.

The problem I've seen is that, while pre-Docker there was really nothing preventing ISVs from packaging their own versions of dependencies, they still only targeted specific Linux distros and versions, because they still had dependencies on things included in that distro, instead of just packaging their own. The big thing is probably glibc.
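To see why glibc in particular is the sticking point: a dynamically linked binary is tied to the glibc on the machine where it runs, and a binary built against a newer glibc than the target distro provides will refuse to start. A minimal sketch of querying the host's glibc version at runtime (assumes a glibc-based Linux; `gnu_get_libc_version` does not exist on musl distros like Alpine):

```python
import ctypes

# Load the system C library and ask which glibc version it provides.
libc = ctypes.CDLL("libc.so.6")
libc.gnu_get_libc_version.restype = ctypes.c_char_p

print(libc.gnu_get_libc_version().decode())  # e.g. "2.35"
```

If an ISV's binary needs symbols newer than this (the classic "version `GLIBC_2.xx' not found" error), it simply won't run on that distro, which is why vendors end up certifying specific distro releases, or shipping a container.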

As I recall, Windows went through a lot of similar problems, and had to go to great lengths to deal with it.


> because of less HD space and much more importantly, less memory used for shared libraries

Literally not in the Top 1000 problems for modern software.

> Windows went through a lot of similar problems, and had to go to great lengths to deal with it.

Not really. A 20 year old piece of Windows software pretty much “just works”. Meanwhile it’s nigh impossible to compile a piece of Linux software that runs across every major distro in active use.


> A 20 year old piece of Windows software pretty much “just works”

No, it only works because Windows basically included something much like WINE (they call it WoW) in Windows, so old pieces of software aren't running on the modern libraries.

> it’s nigh impossible to compile a piece of Linux software that runs across every major distro in active use.

Sure you can, with Docker. It's effectively doing the same thing Windows does with WoW. Windows just makes it a lot more invisible to the user.


Nope, that's not how WoW works. It might feel close enough to you, but if so, you aren't being careful enough with the analogies you make. Think harder, aim for more clarity.



