I build container images using Nix (strictly speaking OCI images; I don't use Docker at all, although I believe it can run OCI images). Going through these points one at a time:
> Use a smaller base image or libc
By default, Nix will build container images from scratch. No base image (although I guess you can add one as a layer to your manifest JSON, if you like).
> Make the image context smaller
Nix tracks dependencies very precisely. Our container images will only contain the things we asked for, and their dependencies (transitively).
> Fully minimize and tidy up image layers
This only seems relevant for images built imperatively, "from the inside" (e.g. the talk of contents being "overwritten").
> Splitting up processes and services for breaking up images
The helper functions in Nixpkgs contain tricks to automatically figure out which content is better kept in separate layers, based on how many times it's referenced (as a proxy for how often it'll be shared across containers). Docker only supports 128 layers, so the remaining content gets combined into the final layer; e.g. see https://grahamc.com/blog/nix-and-layered-docker-images
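A minimal sketch of what such a build looks like (the package and command here are placeholders for whatever you actually want to ship):

```nix
# default.nix -- build with `nix-build`, then `docker load < result`
{ pkgs ? import <nixpkgs> { } }:

pkgs.dockerTools.buildLayeredImage {
  name = "hello";
  tag = "latest";
  # Only `hello` and its transitive runtime closure end up in the image.
  contents = [ pkgs.hello ];
  # Frequently-shared store paths get their own layers, up to this limit.
  maxLayers = 100;
  config.Cmd = [ "${pkgs.hello}/bin/hello" ];
}
```

The result is a tarball that `docker load` (or a tool like skopeo) can import and push to a registry.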
I recently tried to use Nix to build images (Docker images though; I'm not sure if k8s can run OCI images, or if I can push OCI images to a Docker registry (Artifactory)).
What I found cumbersome, though, is "cross-compiling" images: I'm on mac/arm but the images need to run on linux/amd64. OK, my code is JavaScript (Node.js), so I can compile the sources into plain JS on any machine and then pack them into an image together with Node.js from `pkgsLinux = import <nixpkgs> { system = "x86_64-linux"; };`, and everything works. I can even install node_modules on my build machine, since the node_modules folder is (often) just a bunch of .js files. But some dependencies have native code, so I'd have to cross-compile those. And that's where I gave up.
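Roughly that pattern, sketched (the `./dist` path and app name are illustrative, building the x86_64-linux packages still needs a Linux builder or a binary cache, and native node_modules remain the hard part):

```nix
let
  # Package set targeting the system the image will run on.
  pkgsLinux = import <nixpkgs> { system = "x86_64-linux"; };

  # Pre-built, system-independent JS bundle copied into /app.
  app = pkgsLinux.runCommand "app" { } ''
    mkdir -p $out/app
    cp -r ${./dist}/. $out/app/
  '';
in
pkgsLinux.dockerTools.buildLayeredImage {
  name = "my-node-app";
  tag = "latest";
  contents = [ pkgsLinux.nodejs app ];
  config.Cmd = [ "${pkgsLinux.nodejs}/bin/node" "/app/index.js" ];
}
```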
> not sure if k8s can run OCI, or if I can push OCI to a docker registry (artifactory)
I push OCI images to AWS ECR, so it would probably work.
> I'm on mac/arm but the images need to run on linux/amd64
I've never used the cross-compilation support in Nixpkgs. My work machine runs macOS (x86_64), so I have a Linux VM as a remote builder. I happen to use LimaVM ( https://github.com/lima-vm/lima ), but anything would work (Qemu, VirtualBox, the VM bundled with Docker Desktop, etc.).
PS: Since containers only work on Linux, and I'm often working on macOS, I use the following assertion to check that I'm not accidentally including Mac builds in a layer:
```nix
assert all (d: ((d.type or null) == "derivation") -> (d.system == "x86_64-linux")) drvs;
```
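As for the remote builder mentioned above, it can be as little as one line of Nix configuration (hostname, user, key path and job count here are placeholders):

```
# ~/.config/nix/nix.conf (or /etc/nix/nix.conf)
# <uri> <system types> <ssh identity file> <max parallel builds>
builders = ssh-ng://builder@linux-builder x86_64-linux /Users/me/.ssh/id_ed25519 4
builders-use-substitutes = true
```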
I haven't attempted this yet, but I have wondered about the feasibility of using the nixos/nix Docker image just to build a linux/amd64 Docker image with Nix, or setting up a nixos/nix container as a remote builder for Nix.
I get why Nix is attractive, really, and it's cool that you can build Docker images with it.
But for people who don't know Nix, it's a bit strange to say that Dockerfiles are tedious and then present a much more arcane and verbose way of building them as the solution to that tediousness.
The one thing that makes Dockerfiles attractive (and Docker in general) is that they are so simple and intuitive. You can immediately leverage your basic Linux knowledge. That is why they're liked by a lot of people whose main job is not packaging.
I think the keyword for that phrase is "reliable".
Over time, I've had to go back and fix "working" base images that had become stale, and when dealing with layers upon layers of base images this gets really tedious.
BTW, I haven't dealt with Nix-generated images in production, and I'm not aware of their possible drawbacks (other than people not being familiar with Nix), so I'm not really endorsing it, although I would like to try.
> What has this got to do with making Docker images smaller?
"Docker images" are .tar.gz files containing some binary executable.
The article starts by putting an entire Linux distribution into the .tar.gz (Ubuntu, Alpine, etc.), then goes through several ways to try and make the result smaller.
Nix does things the other way: the .tar.gz will only contain the specified binary, plus its dependencies. It's minimal by default (although there are still tricks to making things smaller, e.g. taking dependencies from `pkgsMusl` instead of `pkgs`, to avoid glibc)
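For example (a sketch; not every package builds cleanly against musl):

```nix
{ pkgs ? import <nixpkgs> { } }:

pkgs.dockerTools.buildLayeredImage {
  name = "hello-musl";
  # Same package, taken from the musl-based package set: the closure
  # carries musl instead of glibc, which is considerably smaller.
  contents = [ pkgs.pkgsMusl.hello ];
  config.Cmd = [ "${pkgs.pkgsMusl.hello}/bin/hello" ];
}
```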
I've started building my personal projects like this but the end result is an image running as root, which is not best practice. Have you been able to build images with Nix that run as an arbitrary user?
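One way to do it, sketched with buildLayeredImage's fakeRootCommands and config options (the package, UID/GID and paths are illustrative):

```nix
{ pkgs ? import <nixpkgs> { } }:

pkgs.dockerTools.buildLayeredImage {
  name = "hello-nonroot";
  contents = [ pkgs.hello ];
  # Runs under fakeroot while assembling the image, so ownership sticks.
  fakeRootCommands = ''
    mkdir -p ./home/app
    chown 1000:1000 ./home/app
  '';
  config = {
    # A numeric UID:GID needs no /etc/passwd entry at all.
    User = "1000:1000";
    Cmd = [ "${pkgs.hello}/bin/hello" ];
  };
}
```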
I'd say the common cases (especially if someone has already done something similar that you can copy-paste from) are reasonably straightforward.
Generally, getting a good understanding of Nix takes some time. It's hard because you need to understand what Nix does, understand what you're trying to do, and understand how the former constrains the latter.
I used Nix at work but eventually realized that managing dependencies, compiling cross-platform, and deploying weren't actually bad enough to warrant that level of complexity.
The goal isn't smaller images, but smaller downloads. A smaller image doesn't necessarily mean smaller downloads. The very first docker pull isn't usually the problem; it's all the subsequent ones.
The goal is layers. Cache the ones that don't change often, put the ones that change frequently at the end.
Then you download as little as possible, and reuse layers.
Some techniques for smaller containers, such as squashing, actually make downloads worse.
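The classic Dockerfile shape for that caching strategy, sketched for a Node.js app (the same idea applies to any stack):

```dockerfile
FROM node:20-slim

WORKDIR /app

# Rarely-changing layer first: dependency manifests and the install step.
# This layer is served from cache as long as the lockfile doesn't change.
COPY package.json package-lock.json ./
RUN npm ci --omit=dev

# Frequently-changing layer last: the application source.
COPY . .

CMD ["node", "index.js"]
```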
That entirely depends on your execution environment! If you are running on one fixed host and only use Docker for the benefit of reproducibility or easy software upgrades, you should definitely go for a caching strategy.
If you are running on your own (bare-metal) Kubernetes or other orchestrators, go for smaller image sizes instead.
If you decouple your build from your packaging, you can also use from-scratch containers. You can also use distroless's static image [0] to get a posix-y environment in as little space as possible. Total size of the static image is ~2.4 MB, and it comes with a `nonroot` user configured to lock down your perms, plus packages like ca-certs that are often forgotten.
You can make use of this with multistage builds or with build systems like Bazel and Please.build.
Using build stages is usually the biggest contributor to reduction in image size.
Usually, an image has build steps (which require source code, dependencies, etc.) and the entry point at the end. Splitting build time from run time means you're shipping an image that only includes what's needed to run the application, not to build it.
You can resolve this by installing, building, and uninstalling in the same RUN command, right? I once worked at a place that used some insane setup with images building other images to avoid this, but I'm pretty sure the first option works as well.
You could, but the biggest benefit of staged builds (IMHO) is that you can use a heavy base image for the build process (classic Ubuntu, for example, where it's easy to add repositories and install dependencies) and a very small image for runtime (e.g. Alpine, busybox).
I find that it's easier to debug the build process with a "standard" distribution.
Technically, yes, but it's much easier to use Dockerfiles as normal (the build stage then doubles as a good environment for development) and copy exactly the files you need into the release stage.
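A typical staged build along those lines (a sketch for a Go service; the module path and image tags are illustrative, and it also uses the distroless static image mentioned earlier):

```dockerfile
# Build stage: full toolchain, source code, build caches.
FROM golang:1.22 AS build
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o /out/server ./cmd/server

# Runtime stage: just the binary, plus what distroless static provides
# (ca-certificates, tzdata, a nonroot user).
FROM gcr.io/distroless/static-debian12:nonroot
COPY --from=build /out/server /server
ENTRYPOINT ["/server"]
```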
One more: copy files with the correct mode (COPY --chmod); don't chmod copied files in a separate RUN command, because that RUN command will create a new layer.
This surfaces when adding larger files.
Similarly, don't wget in a RUN command and then chmod. Use ADD with a URL where possible, combined with the --chmod flag. When that's not easy (e.g. a file from authentication-protected storage), you can use a named stage and copy from that stage into your target image. You can also copy from another image!
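For example (BuildKit syntax; the URLs, file names and modes are illustrative):

```dockerfile
# syntax=docker/dockerfile:1

# Named fetch stage: handy when the download needs auth or post-processing.
FROM alpine:3.20 AS fetch
RUN wget -O /tool https://example.com/tool && chmod 0755 /tool

FROM debian:bookworm-slim
# Correct mode at copy time -- no separate RUN chmod layer.
COPY --chmod=0755 entrypoint.sh /usr/local/bin/entrypoint.sh
# ADD can fetch a URL directly, also with --chmod.
ADD --chmod=0755 https://example.com/other-tool /usr/local/bin/other-tool
# Copy from the named stage above...
COPY --from=fetch /tool /usr/local/bin/tool
# ...or straight out of another image.
COPY --from=busybox:stable /bin/busybox /usr/local/bin/busybox
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
```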
Because why learn how to make RPMs and build, test and deliver software in the OS's native format with just `rpmbuild --clean -ba yourpackage.spec`, when so much precious time could be wasted perpetuating this fragile, order-of-magnitude more complex and shoddily implemented Docker/Kubernetes machinery?
Has anyone had success building smaller Python images? I have always found it challenging with Alpine as the base image; the build just never finishes. I end up pulling the Python base image that uses Debian. After installing pip packages the image pretty much comes to 250-300 MB at a minimum.
EDIT: I should add that I already have multi-stage builds in place. My main dependencies are fastapi, gunicorn, redis, sqlalchemy, snowflake-sqlalchemy, snowflake-connector-python, fastapi-pagination, pymysql, fastapi-redis-cache, pandas. I'm willing to forgo pandas if it reduces the size of the image.
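For reference, the slim-Debian, venv-copy variant of a multi-stage build is usually about as small as this kind of stack gets without dropping dependencies (a sketch; versions, module path and worker class are illustrative, and the worker assumes uvicorn is in requirements.txt):

```dockerfile
# Build stage: compilers and the pip cache stay here.
FROM python:3.12-slim AS build
WORKDIR /app
COPY requirements.txt .
RUN python -m venv /opt/venv \
 && /opt/venv/bin/pip install --no-cache-dir -r requirements.txt

# Runtime stage: only the virtualenv and the application code.
FROM python:3.12-slim
COPY --from=build /opt/venv /opt/venv
WORKDIR /app
COPY . .
ENV PATH="/opt/venv/bin:$PATH"
CMD ["gunicorn", "-k", "uvicorn.workers.UvicornWorker", "app.main:app"]
```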
Off-topic, but has anyone actually used their product? It seems really cool, but I'm trying to decide whether it's smoke and mirrors.
Even if it generates a lot of really stupid test cases, including them in a separate package would be incredibly powerful for handling stupid mistakes.
You should optimise for performance and security. For example, the Alpine-based Python image is slower for Python apps than a container built on Ubuntu; this has been shown by several tests. Alpine has also been shown to have longer build times and obscure bugs.
So a smaller container image is not always better. Instead optimise for performance and security.
> So a smaller container image is not always better.
No one said it is _always_ better. Obviously you can make a smaller but worse image.
Making the image smaller, without affecting its performance or security, can reduce costs (storage + network) and can make deployments much faster (and so improve reliability, and Developer Experience).
One example from my recent experience: embedded systems and industrial facilities, which don't necessarily have a reliable or fast internet / network connection.
When updating images on multiple devices over a 10 Mbit connection, the difference between, say, a 500 MB image and a functionally equivalent 50 MB image can be quite significant (roughly 6-7 minutes versus 40 seconds of transfer time per device, before any overhead).
I have heard of extreme-scale cases where you are looking at terabytes of Docker image transfers to deploy to all machines, and that happens every single time a new build goes out, which is multiple times a day.