Developing V8 with Guix (wingolog.org)
73 points by apaprocki on Aug 4, 2015 | hide | past | favorite | 46 comments



I wonder if wingo knows about the 'guix environment' tool or not. It can create dev environments with the correct environment variables configured on the fly, without creating a new profile or polluting an existing one. Profiles are still useful for persistence, but I use 'guix environment' for all of my day-to-day hacking so I don't have to periodically delete generations of a profile in order for the GC to collect them.
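A rough sketch of the workflow described above (assuming a working Guix install; the package names are illustrative):

```shell
# Spawn a shell whose environment variables point at the build inputs
# of a package, without creating or touching any profile:
guix environment gcc-toolchain

# Or ask for an ad-hoc set of tools directly:
guix environment --ad-hoc gcc-toolchain make python

# Exiting the shell discards the environment; there are no profile
# generations to delete before the GC can collect the packages.
```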


I've tried using Nix a few times. The idea is in principle not bad, but the actual deployment for, say... an API endpoint? Horrendous.

Problems include:

1. Deployed scripts almost never go through /usr/bin/env; they tend to hardwire interpreter paths like /bin/sh. That flat out doesn't work on NixOS, and the Nix team is ideologically opposed to maintaining a compat layer for specific cases like this.

2. Binaries are compiled against this SPECIFIC version of Nix. That may not seem like a bad thing, but in practice it's very painful to work with. Your binaries cannot even be invoked because they cannot find their linked libraries (including, you know, the C bootstrapping lib). This means you can build an executable on your dev box, try to push it to prod, and because prod doesn't have something like a dtrace library, the entire system is incompatible with your executable.

3. There are tools to assist with these problems, because they're so common. However, they're themselves full of edge cases they don't handle, so they're very hit-or-miss. While this isn't bad in and of itself, it tends to make the established userbase very unsympathetic and often very formulaic when you approach them with more subtle linking issues. This can be quite frustrating.

I found the entire system somewhat unworkable. I also feel like Docker or Rocket solves a lot of the practical problems I have here, despite protestations. Container images are perfectly reproducible, but also easy to update. They also tend not to rip themselves apart if you, in a moment of intellectual weakness, give a slightly confusing order to the NixOS package manager.


I don't know what software you were using, but it doesn't sound like Nix.

1) Your scripts should be part of a package. I don't know about Nix, but in Guix we have a build step that patches shebangs to remove the nasty /usr/bin things and replace them with references to the right executables in the store.
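A minimal sketch of what such a shebang-patching step does, using plain sed on a toy script (the store path is invented):

```shell
# A throwaway script with a hardcoded interpreter path:
cat > /tmp/demo.sh <<'EOF'
#!/bin/sh
echo hello
EOF

# Rewrite the shebang to point into the store (the hash and version
# here are made up for illustration):
sed -i '1s|^#!/bin/sh$|#!/gnu/store/abcd1234-bash-4.3.39/bin/sh|' /tmp/demo.sh
head -n1 /tmp/demo.sh
```

The real build step walks every script in a package and does this automatically, substituting the interpreter that is actually in the store.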

2) You can copy the closure of a build, that is, the build you are interested in plus all of its recursive dependencies, to another machine and it will work just fine. In fact, this feature is a big part of how we produce the Guix binary tarball. Yes, the output of a build is a function of the version of Nix/Guix used to build it, but that has no bearing on whether or not you can run the resulting binaries on another machine.
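On the Nix side the tool for this is `nix-copy-closure` (a sketch; the store path and host are placeholders):

```shell
# Copy a store path plus every one of its runtime dependencies
# (its closure) to another machine over ssh; nothing needs to be
# installed there beyond Nix itself:
nix-copy-closure --to user@prod.example.com /nix/store/<hash>-myapp
```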

3) Given that problems 1 and 2 are invalid, I don't know what else to say to this.

Docker and Rocket do nothing to solve the problems that Guix and Nix solve. Container images are not reproducible. You and I can build an image from the same Dockerfile and I guarantee that the checksums would be different. Reproducibility is being able to build the same thing N times on M different machines and get the same exact result, bit for bit. Guix and Nix are getting us to this goal, Docker and Rocket do nothing. Also, you cannot "rip your system in half" with Nix or Guix. They provide roll back facilities to undo bad changes to user profiles or the system configuration.
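"Bit for bit" is meant literally here; the check is just a digest comparison, along these lines:

```shell
# Two builds are reproducible iff their outputs hash identically.
# Simulate two build outputs (stand-ins for real artifacts):
printf 'build output\n' > /tmp/out-machine-a
printf 'build output\n' > /tmp/out-machine-b

# Count distinct digests; 1 means bit-for-bit identical:
sha256sum /tmp/out-machine-a /tmp/out-machine-b | awk '{print $1}' | sort -u | wc -l
```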


> I don't know what software you were using, but it doesn't sound like Nix.

I love how the initial reaction by every Nix user is to imply I'm a liar or just titanically stupid. Like it's not possible that a relatively new linux distribution that has just undergone a major overhaul of its package management system might still have bugs or issues.

No. NixOS is perfect for everyone today. Just completely rewrite every deploy script you have from the ground up. Having trouble? Don't worry! The documentation is "getting better."

> Your scripts should be part of a package. I don't know about Nix, but in Guix we have a build step that patches shebangs to remove the nasty /usr/bin things and replace them with references to the right executables in the store.

Sorry, but I will not be writing an OS-integration package for every piece of software I deploy during development. It's unreasonable to even suggest that I should be doing this during development just to get off the ground. I tried to use NixOS as a development environment for Golang and Java and found it frustratingly at odds with all existing tooling. The only languages I found that seemed well supported were Haskell and C, and I don't ship code on either of those platforms anymore.

It was doubly unworkable early this year. The documentation was in utter disrepair, and more than two Nix developers told me on Twitter, "It is very difficult to do unless you are a contributor." It was also the case that many pre-existing packages were done incorrectly and still suffered these problems. Golang's package was one of them.

> but that has no bearing on whether or not you can run the resulting binaries on another machine.

Copying a closure to deploy software projects currently in development is an extremely heavyweight step, and many packages had outstanding bugs that broke it.

Furthermore, existing deploy tools like Puppet, Salt and Ansible all have extreme difficulties managing this model, but Nix (at least at the time I examined it) didn't offer very good tools for actually distributing said closures in a cloud services environment.

> Container images are not reproducible. You and I can build an image from the same Dockerfile and I guarantee that the checksums would be different.

It's not the responsibility of the dockerfile to maintain this immutability. It's the responsibility of the environment the dockerfile is executed in.

Which is probably why Golang specifically was such a problem: it has a really miserable story for actually making reproducible builds with locked deps.

Given a specific git SHA for a project with locked dependencies, it is absolutely reproducible and will produce the same image SHAs every time.

P.S., at the time NixOS was going through a major update, and it was possible to install a version of Nix onto NixOS that was not entirely compatible. This could ruin the install even though you were only using Nix commands. If that bug has been fixed, great, but it was a known issue and I was not the only person in IRC asking for help with it. You lost your ability to roll back.


the moment your Dockerfile has apt-get update and apt-get install is the moment your docker images stop being reproducible unless you take extreme measures like mirroring all deb repos you have in your sources.list.
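To make that concrete (a Debian 8 era sketch; the pinned version string is invented):

```dockerfile
FROM debian:8
# Not reproducible: this resolves to whatever the mirrors serve today.
RUN apt-get update && apt-get install -y curl

# Pinning helps, but old .debs eventually vanish from the mirrors,
# so even this can stop building identically:
# RUN apt-get install -y curl=7.38.0-4+deb8u2
```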

disclaimer: happy docker user here.


Where is everyone getting the idea that Docker is for creating reproducible environments? I know when I first heard about Docker, it was mentioned, but Docker does not claim to provide that. Seems like a confusion between distributable and reproducible.


The execution environment for a product is certainly reproducible with Docker, assuming you actually make your base image Dockerfile work in a reproducible environment as well.

You can also use Dockerfiles as units of deployment for individual software projects, which offers that.

While it is true that NixOS offers a different vision for how to solve these problems, it does so in a way I find very destructive for actual use. It seems inferior for deploying software compared to containerization (which offers immutable and composable build artifacts), and very cumbersome to work with during software development (writing nix packages during development is tedious, and for many language environments you have to put the source code in the package and compile it within the closure you expect to deploy onto before you can make any headway).

Y'all can pretend https://github.com/NixOS/patchelf doesn't exist because these problems are incredibly irritating, I guess.


The "Cattle vs. Pets" argument is basically that. You want all the cattle to look the same and you can't do that if they are modifying themselves.


Yeah, the point of docker is you build one image, test that, and share it around and that image should always work. The build is a different story.


> the moment your Dockerfile has apt-get update and apt-get install is the moment your docker images stop being reproducible unless you take extreme measures like mirroring all deb repos you have in your sources.list.

So don't do that. I can put mutable software into Nix packages too, because I can force arbitrary shell commands with non-deterministic output. That power exists; it just breaks the guarantees we'd like to enforce.
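For example, nothing stops you from writing a derivation whose builder produces non-deterministic output (a sketch; Nix will happily build it, but two builds won't match):

```nix
# default.nix -- a deliberately non-reproducible derivation
derivation {
  name = "nondeterministic";
  system = builtins.currentSystem;
  builder = "/bin/sh";
  # The output depends on the build time, so every rebuild produces
  # a different result -- exactly the property Nix tries to avoid:
  args = [ "-c" "date > $out" ];
}
```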


> They use "deco" as an init system, for example, which is kinda like systemd but not.

The init system is actually GNU dmd. deco is the name of the control utility (like systemctl is to systemd).

It also has absolutely nothing in common with systemd, so I don't know why the author made that comparison.


It's hard to talk about an alternative init system without comparing it to systemd these days. I wish wingo had mentioned why we use dmd in the first place. It's not out of systemd hatred, but rather for better system integration. dmd is written in Guile Scheme, just like Guix, and that gives the two programs really excellent integration with each other. dmd needs more love to approach the feature set of systemd, but I think using it is the right choice for the long term. In addition to using it on my GuixSD systems, I also use it as a user service manager at work and it works great. Also, there's a proposed project out there for someone to implement a Guile reader that reads in systemd unit files and produces the equivalent dmd service objects. If this is done, it should assist adoption, or at the very least experimentation.


> but it is just unacceptable and revolting that software development in 2015 is exposed to an upgrade process which (1) can break your system (2) by default and (3) can't be rolled back.

I would like to add that building a system that makes it difficult to reproduce the exact same state on another machine, even if the central/distributed repository it originated from is not available, should be considered malpractice.


Just an anecdote: Since I started using Slackware Linux, I have never had any problem with my system.

I like the idea of functional package managers, Guix especially. But for me personally the problem they are trying to solve does not exist.


Functional package managers (FPM) to me are a viable solution to (cloud) deployment: what is the difference between a FPM controlling the totality of inputs and outputs of an application, and modern containerization solutions such as Docker, really?

I've played with Docker (and rkt) and there's something off-putting about using this stuff in production, to me at least, while I'm itching to see how Nix and Guix grow; I'm waiting for someone with more free time than me to really cultivate this potential.

Nix is more mature, but I dislike the Haskell syntax. I love Scheme, but I hate the free-software-only religion around GNU Guix: I appreciate what GNU does, and I'm a regular GNU/Linux user, but at this stage of development religious beliefs are just a hindrance to adoption, and it makes me sad.

EDIT: and let's not forget about node provisioning, something you can already do with NixOS (no idea about GuixSD): forget Ansible and its declarative configuration, but define the totality of the system state and configuration in a single file. Tell the machine "OK configure yourself like this", and voilà, you have a node which is (formally?) proven to be exactly like you've described. That's the future.
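On NixOS that single file is /etc/nixos/configuration.nix; a minimal sketch (the services, hostname, and user are illustrative):

```nix
{ config, pkgs, ... }:
{
  # The whole node's state, declared in one place:
  networking.hostName = "node1";
  services.openssh.enable = true;
  environment.systemPackages = with pkgs; [ git vim ];
  users.extraUsers.deploy = { isNormalUser = true; };
}
```

Running `nixos-rebuild switch` then realizes exactly that state on the machine.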


Among functional languages, I wouldn't call nix anywhere close to haskell. As for the free software, sounds like a typical baseless meme I've heard many times.


> sounds like a typical baseless meme I've heard many times

Is it though? I'd probably install GuixSD on a virtual machine if I knew most of the non-free software I use was packaged, or at least accepted [1]. That means more users, which means more contributions, which makes it realistic for the project to go somewhere.

I can give you a baseless opinion I don't have any facts to back up, but I don't think it's so hard to see: many GNU projects are very cool, but for some reason they fail to reach critical mass and just seem to grow very slowly (or die a slow death, depending on the point of view.)

1:

> [...] the GNU distribution follow the free software distribution guidelines. Among other things, these guidelines reject non-free firmware, recommendations of non-free software, and discuss ways to deal with trademarks and patents.

http://www.gnu.org/software/guix/manual/html_node/Software-F...


There is adoption & attraction of development because software has freedom.

There is adoption & attraction of development because software does not have freedom.

They are not mutually exclusive. gnu, debian, etc are fundamentally driven by the goal of making a completely free operating system, and they would have much much less adoption and development if they weren't. We now have and are gaining more hardware which is completely free. If we don't make software distributions which are completely free, we will never have completely free operating systems, and we won't be encouraging people to use completely free operating systems.


I too am a diehard Slackware user (I would say "Slack" like J.R. Dobbs intended it, but people would confuse it for the chat) and I'll echo your sentiment.

Guix and Nix are the only ones in the field which deeply identify and solve issues in package management, and are the solutions I'd be most open to using.

For personal usage, the Slackware tarball format without dependency resolution has never failed me.

(I also have an affinity for Portage, but it's a delicate thing to use.)


Bob be with you.


Huh... I had no idea Nix and Guix could be used this way. I viewed them as replacements to apt, but what I'm hearing is I could just as easily use it as a souped up stow... interesting...


IMHO, there is only a little bit of separation between Docker and Nix/Guix.

I wonder if the Linux desktops of tomorrow will be a thin hypervisor (or something like CoreOS), running an immutable VM on top.


There's quite a lot of separation, IMO. Docker uses an imperative, stateful system for image creation that doesn't actually help with reproducibility and specifically only creates container environments. Guix and Nix use a functional, declarative system for building anything (software builds, VM image creation, etc.) and can create any type of environment, including containers. Guix and Nix are much more general-purpose and are built upon a stronger foundation.

So, rather than using Docker running Debian/Ubuntu/whatever images and another OS entirely managing the host, you can use Guix or Nix to manage every layer and take advantage of their very advanced features everywhere.


I'm a big fan of Guix/Nix and would love to see them in greater use.

Whilst I agree that they provide many of the reproducibility benefits of containers, isn't the other big advantage of containers the (theoretical) security benefits they bring? i.e. a process can't break out of its container and access other data.

Perhaps I'm missing something but can such isolation be provided by Guix/Nix?


Yes it can. The build daemon already uses containers to create the isolated environments to perform builds in so that we can view builds as pure functions. Nix uses systemd-nspawn for making NixOS containers. Guix uses a Scheme interface that I wrote called call-with-container to build GuixSD containers or simple containers without an init system via 'guix environment'. call-with-container is a work in progress, but some of the code has already made it to master, and the rest of it should land in master in time for the next release.


That's great to hear, thanks for the work, I look forward to trying it out. Using Guix with systemd-nspawn sounds like it could be a formidable combination.


Guix doesn't use systemd, but maybe you could still use systemd-nspawn somehow? call-with-container will be the preferred mechanism on Guix, since it will integrate tightly with the rest of the system.


Apologies for the confusion - I assume call-with-container talks to the required kernel functionality directly instead.


Actually I would argue that Docker is more flexible than Nix/Guix. The use case for package managers is installing packages, whereas Docker combines package management with configuration management.

Think about Nix+Ansible - that's what you really need if you are going to have a true stateless system. Take your base packages to create an immutable snapshot and then layer your configuration changes on top of them.


Actually, Guix and Nix aren't just package managers, they also provide a fully system configuration management tool complete with the ability to roll-back to earlier versions of the system. You don't need Ansible or Docker when you use Guix or Nix. The same framework for providing reproducible software builds is also used to provide reproducible system configuration.

See: http://www.gnu.org/software/guix/manual/html_node/Using-the-...


Very interesting - thanks for pointing that out. Hence my statement that Docker and Nix/Guix will converge at some point. There is already an overlap: I believe Docker has the mindshare but is unusable on desktops, whereas Nix/Guix were designed for desktops but are losing out because most people use Macs!!

It would really be something to build a ESXi for Docker/Nix/Guix and be able to provision fully functional VMs on top of it.


I am currently using the Nix packager on a Mac and it works fairly well. It's not as smooth as Homebrew but it's getting there. See http://nixos.org/nix/ -> Get Nix


We already have a cloud-orchestration solution, built on NixOS, and in a lot of ways it's already considerably nicer than some alternatives: http://nixos.org/nixops/

Docker doesn't really offer us much beyond a stable engine for actually using containers, which could probably be accomplished with its `libcontainer` anyway.

Docker's imperative scripting language for building systems isn't appropriate for NixOS, because we declare our entire system with a single configuration file, which can be transactionally updated/rolled back. That means you never really want to run 'apt-get install ...'; you want to add a package to your system by modifying your configuration file and 'rebuilding' your system. So the imperative 'run commands to update OS' model that something like Ansible uses is obsoleted.
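The declarative equivalent of 'apt-get install' is then roughly (a sketch; the package is illustrative):

```shell
# 1. Declare the package in /etc/nixos/configuration.nix:
#      environment.systemPackages = with pkgs; [ htop ];
# 2. Realize the new system generation:
sudo nixos-rebuild switch
# 3. If anything went wrong, transactionally undo it:
sudo nixos-rebuild switch --rollback
```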

Because of this, if I have my laptop with a configuration file, and a backup of my data, I can basically reproduce my laptop on a brand new machine on the spot by A) copying config, B) 'realizing' the configuration and rebuilding my OS based on it, C) restore data. Naturally, you normally just version control all these files, because your configuration is your specification of how to 'create a system from scratch'. All I do is 'git clone' onto a new device, run 'nixos-rebuild' and my system is ready to go. I can deploy a live server after making sure the configuration is reboot/clean-start safe by testing it in a VM, and scp'ing it to a new server, etc.

Note that the same language, Nix, is used to A) describe how to build packages in a reproducible way, B) used to describe your OS configuration (NixOS), and C) is used to describe whole configuration networks, including things like EC2 configs (NixOps). Nix is also a programming language, not like YAML or a simple configuration language - so you also get a significant amount of reuse in code, and a drop in tool diversity/complexity from this. My NixOS configurations are very abstracted and reusable for multiple situations, they share configs where it makes sense, etc.

NixOS, at least, also has a concept of Linux containers that are specified declaratively in the same configuration file. This is why I mentioned libcontainer earlier - currently, NixOS spawns NixOS-based containers using systemd-nspawn. In theory we could probably replace this with Docker, but it's not really a detail that the user is aware of - the actual underlying mechanics of the container engine are abstracted. Docker is useful as a development tool even on NixOS, but it's not really what we need. We could maybe write something more sane by reusing some code from elsewhere.

Of course not everything is perfect. Docker has of course progressed very quickly for users since I last used it (very early releases that were promising but ultimately lacking in a lot of ways), but since using NixOS I have never looked back, because while it's a tool that requires me to do a lot of work (which is not an exaggeration), it is one that actually allows me to move mountains, so to speak, and get my work done.


True, but Docker adds true separation. It enhances security and decreases random damage area.


Docker has major security issues. Guix and Nix allow for much more security via build reproducibility, extensive checksumming, and of course, containers. I don't know what "random damage area" means, but you can't shoot yourself in the foot with Guix or Nix. If you screw up, you just roll back and try again.


What if an application removes all your files in the filesystem via "rm -rf /", because it has remote execution bug?


Nix(OS) does configuration management too, Guix(SD) does it too but only for the base system as far as I know. There's no need for ansible on a NixOS system.


A NixOps-like tool for Guix is in the works.


What's the significance of "V8"? He means the javascript engine, right? But that never enters into the story.... It's really just "Developing with Guix".


Towards the end of the post he shows how he creates his reproducible v8 dev environment.


My bad, I didn't make it that far. Dude spent the first half of the article bitching about his Debian box, and I skimmed the rest (apparently not very well).


He explains how Nix/Guix technology, in the realm of personal computing, can improve day-to-day work, in this case the work of a developer. Wingo's setup is kind of complicated: NixOS + Guix. (I use only GuixSD for Django dev.) In my case, Guix is overkill most of the time, since most dependencies are Python. Guix can also replace virtualenv, taking binary dependencies into account.

Binary dependencies are the case Wingo hit.

I think he does not stress enough (in the first part) the fact that upgrading is painless in NixOS and GuixSD (almost; it's still alpha) compared to Debian-like distros.

Guix is a good idea. It needs more packages, that is all.


The "bitching" about his Debian install was what I found most interesting. My view of Debian is of a slow-to-change, highly stable distribution. Now, bearing in mind he had switched to the unstable branch but still found it reliable over a lengthy period of time, it was somewhat disturbing to see he could end up in a pretty bad spot after a reboot.


wingo likes/studies v8? https://wingolog.org/tags/v8


He is knowledgeable about VM implementation, and develops features in both V8 and the GNU Guile VM.


I'm a little surprised that a veteran Debian user would fall into this kind of trap (again :). I think guix looks very interesting, but a couple of tips if "all" you need is to "code on the edge" in Debian:

1) Run Debian stable. Possibly add stuff from backports if you need to (e.g. a newer kernel for drivers, or newer Xorg. Hopefully this shouldn't ever be needed for mainstream workstations - modulo closed-source graphics drivers).

2) Don't mix'n'match [packages from testing/unstable with stable]. Don't pin. Just do not do it. [Don't run testing/unstable... unless you are testing testing].

3) For utilities not in stable - some can go in ~/opt/{bin,lib,man} -- living in ~/opt/xstow/$package-$version/ -- see "man xstow" / "apt-get install xstow" -- and set your PATH, LD_LIBRARY_PATH, MANPATH ("man man") and friends to point to ~/opt/bin etc.

I also have a ~/opt/venv/util/bin in my path so I can go "pip install mercurial" without worrying about system python packages etc. If you go down this path, be aware of "apt-get build-dep mercurial|python-$foo" as a reasonably sane way to get system dev headers for the C libraries behind things you pip install in your venv(s).

Think of the venvs as disposable! Might have to trash them on a dist-upgrade and recreate. Ditto for ~/xstow.
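The venv part of that, roughly (paths follow the conventions above; mercurial is just the example package):

```shell
# A user-level venv for command-line tools, kept out of system python:
python -m venv ~/opt/venv/util        # or virtualenv on older pythons
~/opt/venv/util/bin/pip install mercurial

# Pull in system C headers for anything pip needs to compile:
sudo apt-get build-dep mercurial

# Disposable: on a dist-upgrade, just delete and recreate the venv.
```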

But none of that is much better than the mess Wingo found himself in, therefore:

4) Embrace the glorious trinity formed by lvm, schroot and debootstrap! With lvm-backed schroots you can have a source-chroot for each of testing, sid/unstable and experimental. Complete with rollback, named snapshots and automagic binding of $home (with among other obvious benefits - your x session cookie).

Schroot documentation/wiki/howto do need a facelift, though.
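A rough sketch of such a setup (directory-backed rather than lvm-backed for brevity; the paths, user, and mirror are assumptions):

```shell
# Bootstrap an unstable (sid) tree:
sudo debootstrap sid /srv/chroot/sid http://httpredir.debian.org/debian

# Register it with schroot:
sudo tee /etc/schroot/chroot.d/sid <<'EOF'
[sid]
type=directory
directory=/srv/chroot/sid
users=youruser
EOF

# Enter it ($HOME can be bind-mounted via schroot.conf):
schroot -c sid -- uname -a
```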

See also:

https://www.pseudorandom.co.uk/2007/sbuild/

https://wiki.debian.org/Schroot/e

https://wiki.debian.org/CrossCompiling

[Note that auto-bind-mounting home can now be set in schroot.conf IIRC -- see man schroot]

PS: If you have $home on nfs and mount it on both 32bit/64bit linux as well as Solaris... you can put stuff in ~/opt/$arch/.. and dance around in .xsession and/or .bashrc... but I don't recommend it unless you have to...



