Hacker News

Dynamic linking and containers aren’t necessarily incompatible, though nobody has combined them well yet.

Of course, half the point of containers is to “vendor” your dependencies — a container-image is the output of a release-management process. So the symbolic reference part of dynamic linking is an undesired goal here: the container is supposed to reference a specific version of its libraries.

But that reference can be just a reference. There’s nothing stopping container-images from being just the app, plus a deterministic formula for hard-linking together the rest of the environment that the app is going to run in, from a content-addressable store/cache that lives on the host.

With a design like this, you’d still only have one libimagemagick.so.6.0.1 on your system (or whatever), just hard-linked under a bunch of different chroots; and so all the containers that wanted to load that library at runtime, would be sharing their mmap(2) regions from the single on-disk copy of the file.
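The scheme above can be sketched in a few lines. This is a minimal illustration, not any real container runtime's API: the `store_blob` and `assemble_root` helpers and the manifest format are hypothetical, standing in for the "deterministic formula" plus content-addressable store. Because every rootfs entry is a hard link to the same inode in the store, the kernel's page cache (and any `mmap(2)` of the library) is shared across all the chroots.

```python
import hashlib
import os

def store_blob(store: str, path: str) -> str:
    """Add a file to a content-addressable store, keyed by its SHA-256.

    Hypothetical helper: assumes store and path are on the same filesystem,
    so a hard link suffices (otherwise you'd copy once, then link).
    """
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    dest = os.path.join(store, digest)
    if not os.path.exists(dest):
        os.link(path, dest)
    return digest

def assemble_root(store: str, manifest: dict, root: str) -> None:
    """Materialize a container rootfs by hard-linking blobs out of the store.

    `manifest` is the deterministic formula: a mapping of
    path-inside-the-container -> content hash.
    """
    for rel_path, digest in manifest.items():
        dest = os.path.join(root, rel_path)
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        os.link(os.path.join(store, digest), dest)
```

Two containers assembled from manifests that name the same hash for `usr/lib/libimagemagick.so.6.0.1` end up with directory entries pointing at one inode, which is exactly the sharing described above.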




Hey, you've invented WinSxS.

The primary issue with this approach is that if every program only sees its own version of the library anyway, there's no incentive to coordinate around library versions. You end up with many versions of everything: maybe not one per application, but close to it.


> Hey, you’ve invented WinSxS.

Oh, I know :)

> there’s no incentive to coordinate

True, but it potentially works out anyway, for several reasons that end up covering most libraries:

• libraries that just don't change very often are going to be "coordinated on" by default.

• the people building these container-images tend to be the same people running them in production, so they (unlike distro authors) actually feel the constraint of memory pressure. That gives them an incentive, at development time, to push back on library authors to factor their libraries into fast-changing business layers wrapping slower-changing cores, where the business-layer library in turn dynamically links the core library. This is how huge libraries like browser runtimes tend to work: one glue layer that gets updated all the time, dynamically linking slower-moving targets like the media codec libraries, the JavaScript runtime, etc. Those slower-moving libs can end up shared at runtime, even if the top-level library isn't.

• on large container hosts, the most common libs are not app-layer libs, but rather base-layer libs, e.g. libc, libm, libresolv, ncurses, libpam, etc. These are going to be common to anything that uses the same base image (e.g. Ubuntu 20.04). Although these do receive bug-fix updates, those updates will end up as updates to the base-layer image, which will in turn cause the downstream container-images to be rebuilt under many container hosts.

• Homogeneous workloads! Right now, due to software-design choices, many container orchestrators won't ensure library-sharing even between multiple running instances of the same container-image. That specific issue could be fixed on its own; but designing a container-orchestrator architecture around DLL-sharing in general would also, coincidentally, solve this instance of it.


Unless you coordinate the different apps to be compiled against a specific version of a library at build time, effectively creating a distribution.


Apple does something similar with their dylib cache but they sadly don't use content-addressable storage.



