
The issue here is not only the depth limit, but also:

* performance overhead of each layer, however small

* disk space for files removed in intermediate steps (scenario: ADD a huge source tarball, commit, RUN compile+install+remove, commit - the user still has to download the huge tarball to use the final image, even though the image no longer contains it; see the sketch after this list)

* there's often just no need to publish intermediate layers; there may even be a good reason to not publish them (say, I distribute a program compiled with a proprietary compiler as a step of the build, but can't distribute the compiler itself)

* simplicity of having just one image for the user to download and for the publisher to distribute, rather than a whole chain (this will matter more once we can use something other than the registry to distribute images)
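
To make the second point concrete, here is a minimal sketch of that scenario (the base image, tarball name, and build commands are placeholders, not anything from the thread). The tarball is committed into its own layer before the RUN step deletes it, so every pull of the final image still downloads it:

  FROM debian:stable
  # layer 1: the tarball is extracted and committed into this layer
  ADD myapp-1.0.tar.gz /build/
  # layer 2: compile, install, and delete the sources...
  RUN cd /build/myapp-1.0 && ./configure && make && make install \
      && rm -rf /build
  # ...but the rm only masks the files in layer 2; layer 1 still
  # carries the full tarball and ships with the image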




All valid points.

I guess a lot depends on other aspects of your project. For example, if you are distributing frequently and rsync is an option, then the bandwidth concerns are effectively nullified. Likewise, the disk space diffs for a few installs on a base filesystem are small and thus not really expensive to keep. But I agree with you.

One aspect is crypto: signing a tarball is easier than signing a bunch of files.
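
For example (a sketch of the idea, not anyone's actual workflow; the image name is made up): export the image as a single tarball and sign that one file, rather than signing every file in the tree:

  # export the image as one tarball and produce a detached signature
  docker save myimage:latest > myimage.tar
  gpg --detach-sign myimage.tar        # writes myimage.tar.sig
  # the recipient then verifies the single artifact:
  gpg --verify myimage.tar.sig myimage.tar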





