
This article is a good explanation of how nix works at a high level, and I'm excited to see nix getting some really prominent support, but for some reason it never tells you what the point of all of this is, so I think many folks might feel turned off by it. In other words, I don't believe it ever compellingly answers the question that constitutes its title. The word "package" doesn't even appear until near the end of the article! This is probably a second or third article to read about nix rather than the very first thing that exposes you to it.

The point of nix is just to create completely reproducible builds and package management, including support for multiple versions of packages side-by-side with no issues. It's sort of a next-generation package management system that tries to avoid most of the pitfalls that OS package managers have fumbled with up to this point. It's really that simple.

"nix" as a term refers to a system of multiple components that work together to achieve that goal. There's the package manager itself (called Nix), the language that build instructions are written in (also called Nix), an existing ecosystem of predefined packages (called nixpkgs), and an optional Linux distro that uses Nix the package manager as its package manager (called NixOS).
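For a flavor of what those build instructions look like, here's a minimal, hedged sketch of a Nix expression (the file name is hypothetical; `stdenv.mkDerivation`, `fetchurl`, and `lib.fakeSha256` are real nixpkgs helpers):

```nix
# hello.nix -- a sketch of a package built with stdenv.mkDerivation.
{ pkgs ? import <nixpkgs> {} }:

pkgs.stdenv.mkDerivation {
  pname = "hello";
  version = "2.12";
  src = pkgs.fetchurl {
    url = "mirror://gnu/hello/hello-2.12.tar.gz";
    # The hash pins the exact source; lib.fakeSha256 is a placeholder
    # that Nix will reject, printing the real hash to substitute in.
    sha256 = pkgs.lib.fakeSha256;
  };
}
```

Because every input (sources, compilers, flags) is hashed into the output path under /nix/store, two builds with the same inputs land in the same path, which is what makes side-by-side versions unproblematic.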




The PhD thesis that goes with Nix is a really great intro to it. It's very accessible and well-structured.

It motivates the need for a new take on package management by analogy to memory management in programming languages.

It contrasts how software deployment works today (“Fuck it, I’m sure /usr/local/bin/python is a reference to a python 2.7.12 install with PIL available”) with how we used to write software (“Fuck it, I’m sure 0x7FEA6072 is the address of the array of files; so clearly, assuming a 32-bit address space, the seventh file is at 0x7FEA608A.”)

In both cases, as long as your assumption was correct, things go swimmingly. But it could be easier and rely less on hoping your assumptions are correct, and more on things that are verifiably true. And that's what Nix offers: a way to build software that is insulated against assumptions and "it works on my machine".

Thesis: https://edolstra.github.io/pubs/phd-thesis.pdf


The downside is that this works best when packages are written with Nix in mind. Most existing packages weren't, so some per-package work is needed to adapt them.


Nixpkgs has already done much of that work for you, thankfully.


It's a true wonder but much like the AUR (the Arch User Repository), it's great 80% of the time, when projects have reasonable popularity or committed (even 'niche') support.

It's honestly a treat for personal use if you have the mind of a tinkerer, but it's a difficult proposition to sustain in business (you basically end up vendoring yourself if big enough).

Nix has a very different proposition that solves from the "inside" a problem that is currently usually solved from the "outside" using e.g. Ansible in ops/infra, or Vagrant in dev.

From my perspective, the outside/black-box solution is ultimately brute-force, plain and simple. It works because it scales well, and because copying data is cheap enough (deduplication, delta sync, etc.). But you hit limits when systems grow older and complexity creeps in no matter what; it's an approach that requires a blank slate every now and then.

But the inside package approach is elegant in that it's absolute: it's not conjuring a black box that "should just work" if Ubuntu18.04-387lts-32-1.989-2a and my_fav_package.1.1.1.9.3.5-f happen to work fine together, this time around. Sure, there is testing, but we're back to the popularity/support limit.

In the end, in a world where some of the stack is basically nailed down and can be modularized with clear, forever-true expectations of I/O (the content, not the hardware), Nix eventually prevails. But we really have to evolve what "LTS" means (like we do in construction, electricity, plumbing...), not treat it as a half-trendy windmill / rat race. We have to think of systems that could transfer almost as-is from now to 2050 or beyond, not just 2025-2030. It's not impossible; it's what COBOL did, and still does as we speak.

I think something like Nix could help shift perception in the right direction, but I expect mindsets to take a good part of the decade to change deeply, if it happens.


Yes that's true. But some things remain difficult, for example installing the latest version of CUDA.

https://discourse.nixos.org/t/cuda-setup-on-nixos/1118/9


Yes, absolutely. Although to be fair, CUDA is a royal pain even on more standard distributions (though admittedly easier there): supporting multiple versions seamlessly is hours and hours of fun and breaks all too easily, and it's behind the great majority of time "wasted". NVIDIA deserves a lot more flak for this than they receive.


Very true.

Part of the reason is probably that you need the proprietary drivers to use CUDA efficiently and as they are not in the (upstream) kernel you have to use some other tools, like unofficial repositories or the native installer (which doesn't always play well with the OS package manager, system upgrades, etc.). It's a real PITA.


Funny, I was able to install it (for use with pytorch) just a few days ago without a hitch.


Did you use conda? It does a lot of the leg work for you.


No, I tried conda once, but it downloaded gigs and gigs of data, and I have enough trouble trying to keep nix under control.


That's really a flaw with how those packages have been designed rather than a flaw with Nix. We've known that it is a terrible idea to put everything into global directories, edit `PATH` and so on for literally decades.


I gotta use pointers? Ugh, sounds hard, I think I'll just stick to DOS. /s


What I'd really like to see is a realistic, end-to-end tutorial for either 1) deploying a relatively straightforward web application (like Dokuwiki or ZNC), or 2) setting up a basic desktop for day-to-day use. I feel like I've seen a lot of "snippets", I feel like I understand how Nix works and what it's supposed to be good for, but I don't have a coherent sense of the steps involved in actually using it for mundane things.


Nix contributor here. You are completely right, that is missing. Unfortunately the documentation is somewhat fragmented and its structure makes it quite hard to find relevant information, especially to newcomers.

We started to work on making official guides for common Nix tasks, about how to get a development environment set up, how to build a Docker image… focus is on the DevOps side at the moment, not so much on the desktop user, as we see that as the most valuable use case. This is part of the work of the NixOS marketing team to facilitate adoption of Nix into the mainstream.

Have a look at https://nix.dev/ for the first guides being worked on – pretty barebones so far, but we are aware and working on it.


For usage of "setting up a basic desktop for day-to-day use", to use nix to just mundanely install a list of packages:

What I have is a "myPackages.nix" that I symlink into ~/.config/nixpkgs/overlays https://github.com/rgoulter/dotfiles/blob/22ebe20a820a1adf64...

After installing Nix, I can then run `nix-env --install --attr nixpkgs.myPackages` to install that.
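For anyone curious what such an overlay might contain, a hedged sketch (package names are just examples; `buildEnv` is the nixpkgs helper that merges a list of packages into one installable environment):

```nix
# ~/.config/nixpkgs/overlays/myPackages.nix (sketch)
self: super: {
  myPackages = super.buildEnv {
    name = "my-packages";
    # The mundane list of day-to-day packages, installed in one go.
    paths = with self; [ emacs git ripgrep tmux ];
  };
}
```

Editing the list and re-running the install command then updates everything atomically.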


Yeah I was trying to do Linux things on my Chromebook and some blog or other recommended Nix as package manager. I was like cool this is like apt-get but trendy I'll try it.

I then proceeded for like an hour to try and figure out how you say apt-get install in Nix. There was all this documentation but none of it said "here's how you install emacs and stop thinking about Nix"


`nix-env -iA nixos.emacs`

But I do agree with you, it's not as straightforward as it should be yet. Still, I absolutely love being certain that no garbage is accumulating on my computer, as in: I needed this program one time, I don't even know what it does, yet I have its complete dependency graph installed and the package manager can barely uninstall it. In nix I just create a `nix-shell -p package` for one-time use, do my work and then forget about it. At the next `nix-collect-garbage` it will be removed from my computer completely.


It's a bit old now, but not that much has changed. All my commercial projects use essentially this method of deployment.

https://jezenthomas.com/deploying-a-haskell-web-service-with...


Look for nix users' "this is not a dotfiles repo" repo on GitHub.


Care to clarify and/or add a link? I'm not sure what this means. (Edit: I did not downvote you)


https://github.com/ihebchagra/dotfiles I guess sometimes they are called dotfiles haha.

https://www.google.com/search?q=personal+config+github+nix seems to be a good search

Perhaps I misremembered how many of them throw shade on dotfiles repos, or perhaps Google just isn't good at finding such things ("not dotfiles" won't work).


> for some reason it never tells you what the point of all of this is, so I think many folks might feel turned off by it

I’m attempting to write some hands-on pragmatic no-bullshit articles tackling that.

Why? I’ve been hearing a lot about Nix over the years, how it’s an experiment yet so good it’s actually usable. But every article I found obscures the good parts, either with too much detail and theory, or by mapping apt to nix-env without the whys.

And the good parts are really good. So good that as soon as I was enlightened I simply dropped my lonesome 13 years ArchMac project on the floor.

But it needs a pragmatic approach to explaining what it is and how it can be of value to you day to day. Everything is actually in the manual, but it lacks some ties with existing knowledge for people to make the jump, unless they’re really really curious enough to piece things together.

That’s what I’m writing.


That sounds great. I remember when I started, it was very difficult; I don't think I would have been able to start without the nix-pills articles.

Nix needs more tutorials, especially in areas that could showcase it (which you're planning to do).

Nix is often referred to as a package manager, but I believe that characterization does it a bit of a disservice, because it can do much more than that. IMO the area where it excels most for me is as a build system. The ability to have a language define the exact environment a developer has is awesome.


Isn’t that the same goal as Docker? I’m surprised there’s still no Docker base image for NixOS...


"Same execution environment everywhere" is one way in which developers use Docker. Docker gets this by copying the layers of a built image. Unlike nix, the image build itself doesn't need to be reproducible, so you can have a Dockerfile which works now but will fail to build in however many months.

"Reproducible builds" do get you "same execution environment everywhere". But they have the stronger guarantee that for the same inputs, outputs will be the same.

IMO/IME, I don't think that aspect of nix is a strong selling point for use of nix on developer workstations, probably thanks to less-elegant solutions like "<language> Version Manager" etc.

But I think the nix language makes for a nicer way of describing a package of software you're developing in terms of dependencies and outputs than Dockerfile.


I do have to commend Docker for providing and managing an agreed-upon VM for non-Linux users to host all their containers. It's the "killer feature" that has made it as successful as it is. But underneath it requires a VM (libcontainer, LXC, VirtualBox, hyperkit, etc.) on non-Linux machines.

This helps developers work together and quickly get small projects up and running. I'd contend that after a while, a mess of containers/sidecars ends up becoming just as difficult to manage as a mess of native binaries. Hence the growth of so many container management systems. Now, because they are re-inventions of service managers, we get the benefit of designing them from scratch for modern needs, but also lose many of the benefits of the well-understood semantics of native processes.

Looking for feedback: I've been playing with an idea (and have a system in production using it to try out the concept) where the Dockerfile only contains busybox+nix, and when you run it you specify an environment as a Nix path. You specify a binary cache via env vars. Using "nix run", this will download all deps and run your program, and with bind mounts all containers can share the host cache. Put a RUN into the Dockerfile and you can prefetch all the deps. Basically it's a Docker container that uses Nix at build or run time for all the heavy lifting, instead of the docker layers mechanism.


How much overlap is there between your idea and Nixery?


Have you checked out the official container system in NixOS?


Yes, but it requires NixOS. This “docker compatibility layer” is about being able to use nix style packaging in environments that expect Docker. Eg: ECS. https://github.com/tomberek/nix-runner


There's some conceptual overlap but I don't think the two tools are redundant.

Using nix for development is sort of like having dedicated handcrafted development Docker containers for every single project... without having to ever use Docker or containers. You just get the sandboxing and safe reproducibility for free. It's kind of like having a build tool like cargo or stack, but for everything, all the time. You can fire up nix-shell for a project and just magically have the dependencies for that project available. There are tools like direnv and lorri that make this even easier and more powerful. Then, if you want to package up your project into a Docker image for deployment, you get that for free too.
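As a concrete sketch of that per-project workflow (package names are just examples; `mkShell` is the standard nixpkgs helper for development shells):

```nix
# shell.nix -- running `nix-shell` in this directory gives a shell
# with exactly these dependencies available, and nothing lingering after.
{ pkgs ? import <nixpkgs> {} }:

pkgs.mkShell {
  buildInputs = with pkgs; [ nodejs postgresql ];
  # Optional: environment setup for the dev shell.
  shellHook = ''
    export PGDATA=$PWD/.pgdata
  '';
}
```

Tools like direnv then just run this automatically on `cd`.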

With all of that said, the magic is blunted a bit by some rough edges, missing packages here and there, etc. I wouldn't jump into nix expecting to have a completely polished and flawless experience like you can get with Docker, which is a much more mainstream project at this point. But I do think this will rapidly improve with nix, especially with large and well-known companies like Shopify using it.


Using nix, you typically build from scratch and only include binaries that are needed in the Docker container. It's quite elegant, and uses the nix cache too, so you aren't dependent on the order of layers.

https://nixos.org/nixpkgs/manual/#sec-pkgs-dockerTools
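A minimal, hedged sketch of that (image name and contents are hypothetical; `dockerTools.buildImage` is the helper documented in the nixpkgs manual):

```nix
# image.nix -- build a Docker image containing only `hello` and its closure.
{ pkgs ? import <nixpkgs> {} }:

pkgs.dockerTools.buildImage {
  name = "hello-image";
  tag = "latest";
  # No base OS layer: the image holds just the package's runtime closure.
  config = {
    Cmd = [ "${pkgs.hello}/bin/hello" ];
  };
}
```

`nix-build image.nix` produces a tarball you can feed straight to `docker load`.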


A Docker base image with NixOS doesn't really make sense, since with Nix you wouldn't use Docker for building Docker images, but let Nix make images from scratch.

That's the approach my team is taking, anyways.

https://nixos.org/nixpkgs/manual/#sec-pkgs-dockerTools

(and as others have noted, you don't need an OS in your Docker image)


https://hub.docker.com/r/nixos/nix/ seems to be a thing. It's apparently not a nixos image, but you probably don't want nixos with all the service configuration and so on, just nix, for most docker use-cases?


So... statically compiled executables?

Wasn't that tried long ago and it was determined that the user should be able to choose when to upgrade dependencies, such as if a dependency needs an out-of-band update to work on the localhost OS?


Your criticism misses the mark because nix users have the ability to update a dependency and rebuild all of the dependees. With nix, I can update openssl in one place and be sure that everything that depends on it gets re-evaluated. How can I be confident that everything is linking the patched openssl I want when I'm using aptitude, pip, npm, docker, etc?


But what if they're using different versions of OpenSSL?


It'll only rebuild the packages that depended on that particular openssl. This is an area where Nix shines: because all packages are explicitly bound to their dependencies, it no longer matters what file happens to be occupying `/usr/lib/libssl.so`, or even `/lib/x86_64-linux-gnu/libc.so.6`, so you can run apps that rely on totally different glibc versions alongside each other with no problems.


I mean if there's a bug, it's not enough to patch one particular OpenSSL. You have to audit manually, so Nix won't make much of a difference.


Well, by default it uses the same library everywhere, but it gives you the option to have two apps that rely on e.g. different versions of openssl. If you do that, it is on you to make sure both dependencies are updated. There is a bit of a benefit as well: instead of creating a new derivation, you can override an existing one (kind of like extending a class in OO). If, for example, you make another version of openssl based on the existing one but with changed compile flags, then nix will be smart enough to recompile both.
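A hedged sketch of what that override looks like as an overlay (the flag is just an example; `overrideAttrs` is the standard nixpkgs mechanism):

```nix
# Overlay: swap in a tweaked openssl. Everything built against it
# gets a new store path and is rebuilt automatically.
self: super: {
  openssl = super.openssl.overrideAttrs (old: {
    configureFlags = (old.configureFlags or []) ++ [ "no-ssl3" ];
  });
}
```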


Except the auditing doesn't have to be manual: https://github.com/flyingcircusio/vulnix


Oh, that sounds like it could solve my issue. E.g. I compiled my program against my glibc, which is 2.24, but the target was running 2.23. I didn't even use any "new" 2.24 features, but it was missing a symbol, so it wouldn't run. But then to compile against 2.23 locally, I had to get an older version of GCC (not sure why), and everything needed to build the older GCC, and so on. That left some old versions of libraries on my system, which I had to delete. I ended up compiling it all inside a docker container because I didn't want to pollute my env with older versions.


Maybe, but don't underestimate the effort. I needed to do something similar and tried to use Nix but I failed: the docs weren't very good, there weren't 32-bit packages...

So I just built gcc myself which I found very easy.


Good point: statically compiled binaries are a big problem for security updates.

This applies to Linux distributions as well as large organizations that have their internal distribution (like Amazon).


The user should use nix to do that.


Thank you for the explanation! I'd never heard of nix and I was confused by the article.



