We've been using Nix to deploy a Rails app for an enterprise customer for quite a few years now. One area where it shines for us is the ability to build on a relatively recent version of Ubuntu and deploy to an (almost EOL) RHEL 6 box. Bundling, asset compilation, and various other tasks take just a few minutes. We also have ~20 Go services that are deployed via Nix, and building them takes seconds.
However, it can be quite cumbersome to get a Nix expression to the point where it builds reliably for something with multiple build steps like a Rails app, especially if you're building on macOS and deploying to Linux. It's come a _long_ way in recent years, but with enterprise customers now embracing containerization we migrated everything to that and haven't looked back.
I'm having a hard time parsing this -- did you migrate everything from Nix to containers (containers removed the need for Nix), or to Nix+containers (containers solved the "having to build for multiple platforms" issue)?
Ah, my apologies -- we use Bazel to build our services, and the output artifacts were then pulled into Nix and deployed as Nix packages. Bazel has _excellent_ support for taking the same application code and creating Docker images from it (https://github.com/bazelbuild/rules_docker#language-rules), and the tools available for deploying containers are orders of magnitude more featureful and higher quality than what you get with Nix today. So we no longer have Nix anywhere in our pipeline, and all of our artifacts are now deployed inside containers.
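For context, the old pattern was roughly this (a sketch only, since the real derivations were internal; the service name and bazel-bin path are made up):

```nix
# Hypothetical: wrap a prebuilt Bazel output as a Nix package.
{ stdenv }:

stdenv.mkDerivation {
  pname = "example-service";
  version = "1.0.0";
  # Binary produced by `bazel build //services/example` on the build host
  src = ./bazel-bin/services/example/example;
  dontUnpack = true; # src is a single binary, nothing to extract
  installPhase = ''
    mkdir -p $out/bin
    cp $src $out/bin/example-service
  '';
}
```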
It is; perhaps they didn't use a cache to store previously built packages and built everything from scratch. With no cache, Nix will start by building compilers and glibc, until it has everything needed to build the actual application.
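To make that concrete, here's a minimal NixOS config sketch that points Nix at the public binary cache so it substitutes prebuilt packages instead of compiling everything (older releases spell these options `nix.binaryCaches` and `nix.binaryCachePublicKeys`):

```nix
{
  nix.settings = {
    # Fetch prebuilt store paths from the official cache when available
    substituters = [ "https://cache.nixos.org" ];
    # Only trust store paths signed with this key
    trusted-public-keys = [
      "cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY="
    ];
  };
}
```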
While this is true if you are building new versions of dependencies on an existing system, in many high-security environments you want to start with a clean build environment every time; cached dependencies might be considered a security risk. Bootstrapping an entire Nix environment from scratch can take a lot of time.
Sure, but in most systems you wouldn’t have much of a choice; if you’re using Debian or Ubuntu, you’re probably using binary packages built by someone else. It takes a lot more work to build everything you want from source in a traditional distro.
It seems like a pretty straightforward tradeoff: do you want to rely on prebuilt packages for speed, or do you want to rebuild it all (and accept the extra time that takes)? Nix and Guix at least make the latter pretty straightforward to do.
Nix guarantees reproducibility, which means anything can be rebuilt from scratch, but that's a very abnormal use case. If it doesn't work out of the box, it's a problem with the package scripts (the "derivations"). That said, all of our software tends to bottom out in a bunch of shitty C libraries that are delicately cobbled together with autotools and CMake, so anything that aspires to reproduce these things is going to have issues. This tends to make Nix difficult to use, because it doesn't (yet) have the investment/manpower of other package ecosystems, which is what's needed to wrangle these dependencies into a stable foundation that doesn't leak its underlying havoc to higher levels of the stack.
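For anyone unfamiliar with the term: a derivation is just a build recipe whose inputs are all pinned, so a failure really does point at the recipe rather than the machine. A minimal sketch (hash elided):

```nix
{ stdenv, fetchurl }:

stdenv.mkDerivation {
  pname = "hello";
  version = "2.12";
  src = fetchurl {
    url = "mirror://gnu/hello/hello-2.12.tar.gz";
    sha256 = "..."; # content hash pins the exact source
  };
  # stdenv supplies a pinned compiler and the standard
  # configure/make/install phases for autotools projects.
}
```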
This comment is really, really confusing. If it builds once with Nix, it will build again with Nix... If you can find a derivation that builds on one machine and not on another, there's usually a fundamental difference: either the CPU arch, or your nixpkgs config differs (overlays), etc. Or I guess you could write a build script that non-deterministically fails, but that has nothing to do with Nix's maturity.
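To make the overlay point concrete, a sketch (the curl/libressl override is hypothetical): two machines importing nixpkgs like this, but with different overlay lists, will evaluate different derivations for the "same" package.

```nix
import <nixpkgs> {
  overlays = [
    (final: prev: {
      # Hypothetical override: build curl against libressl instead
      curl = prev.curl.override { openssl = prev.libressl; };
    })
  ];
}
```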
Nix ensures the tooling is invoked the same way. I can understand a non-autotools project being more difficult to write a derivation for, but "Nix can't wrangle messy libraries" makes absolutely no sense, either for making compilation reliable or for anything at use time.
Do you have a specific example in mind that is less hand-wavy?
> If it builds once with Nix, it will build again with Nix...
That's the aspiration, but it doesn't always pan out. The rate at which you run into problems depends a lot on the packages you use, how much attention is given to them, and how hard it is to reproducibly package them. I've noticed in particular that the Python ecosystem is really fragile.
> Do you have a specific example in mind that is less hand-wavy?
A specific example that comes to mind was the psycopg2 Python package, which would build on some developers' machines but not on others. This sort of thing happens all the time, usually on macOS, and usually with C packages (sadly, so much of the Python ecosystem is built on C and its shoddy build tooling). I've also found quite a few packages in nixpkgs that simply don't build on macOS, but which presumably build on Linux; however, I forget which ones specifically.
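For what it's worth, a common mitigation for the "builds here but not there" class of failures is pinning the exact nixpkgs revision, so every machine evaluates the identical derivation (a sketch; the rev and hash are placeholders). It doesn't fix a package that's genuinely broken on macOS, but it at least makes the failure consistent:

```nix
let
  pkgs = import (fetchTarball {
    # Pin nixpkgs so all developers get the same package set
    url = "https://github.com/NixOS/nixpkgs/archive/<rev>.tar.gz";
    sha256 = "...";
  }) { };
in
pkgs.python3.withPackages (ps: [ ps.psycopg2 ])
```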