Guix: An advanced operating system (ambrevar.xyz)
313 points by Ambrevar on Jan 15, 2019 | 168 comments



I started using NixOS over Christmas and I have no trouble whatsoever with the language. It seems pretty lazy for the author to shoot Nix down on the grounds that DSLs are "always a bad idea." He even has the nerve to mention Mozilla as failing with XUL, while ignoring the success of Rust!

Nix is lightweight, pure and maximally lazy. It's a simple DSL that evaluates targets -> derivations (build specs). Considering it can build an entire OS with so few features, I'm not sure adding a numeric tower and Turing-completeness is really desirable.

Also, I'm surprised this article doesn't mention Hydra. Reproducible binary caches and automatic testing are one of the key advantages of these systems -- does Guix use it?


(Author here) You are right that the tone is unnecessarily harsh towards Nix. I actually admire Nix a lot; it's simply that the article suffers from too little editing and would certainly deserve a rewrite.

I'd like to emphasize how much Nix and Guix share, and that for the better part Guix should be thankful to Nix!

With regard to Mozilla, Rust might precisely _not_ be such a mistake, but you are right, I should praise Mozilla for moving on then (although I know nothing about Rust, so I can only guess here).

> I'm not sure adding a numeric tower and Turing-completeness is really desirable.

That's what I meant: Turing-completeness comes for free, it does not harm the project and you'll need it sooner or later. Even if the devs don't need Turing-completeness, not adding it to a project is effectively limiting what the user can do with a program.

I fell for it myself: at the time, I had the feeling that "a Turing-complete language might be too much for the project." Today, I believe that this thought is mostly an ungrounded gut feeling.

> Hydra

Thanks for pointing it out; you are right, I should have mentioned it when I wrote about continuous integration. Will do.

Guix itself uses Hydra on the old server and Cuirass (a Scheme-based CI system :p) for the new one.


I know that Nix users respect Guix (well, the ones who talk on IRC, at the very least), and I'm sure it's mutual ... and I think it's very helpful to make this mutual respect public!


Use Nix, respect Guix, can confirm

The only reason I'm not using/learning Guix (or contributing meaningfully to Nixpkgs these days, for that matter) is time


And I use Guix, and respect Nix :)


Can I somehow have the NixOS package universe with the Guix language?


You cannot use Nix package definitions with Guix. You can, however, easily extend or modify Guix packages. New package definitions can inherit from existing ones and override fields that you may want to change.
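For instance (a minimal sketch, assuming the stock hello package from (gnu packages base) is in scope):

    (define-public hello-without-tests
      (package
        (inherit hello)                ; start from the existing definition
        (name "hello-without-tests")
        (arguments '(#:tests? #f))))   ; override only the fields you care about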

There is also an API for rewriting dependencies recursively:

https://www.gnu.org/software/guix/manual/en/html_node/Defini...
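It looks roughly like this (a sketch adapted from that manual section):

    ;; A procedure that replaces openssl by libressl, recursively,
    ;; throughout a package's dependency graph.
    (define libressl-instead-of-openssl
      (package-input-rewriting `((,openssl . ,libressl))))

    (define git-with-libressl
      (libressl-instead-of-openssl git))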

You can also use Nix as a package manager on top of a Guix system (and vice versa).


Most Nix packages, yes, but the NixOS packages specifically (the ones that build Linux), probably not. NixOS and Guix have slightly different philosophies, e.g. about nonfree software (Guix is generally opposed to including it at all, although I'm not an expert on the details, while NixOS disallows it by default, but only by default), and this affects e.g. device driver packages.
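(For what it's worth, flipping that NixOS default is a one-liner in configuration.nix; allowUnfree is the actual option name:)

    nixpkgs.config.allowUnfree = true;  # permit nonfree packages system-wide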


XUL is a DSL, Rust is not.

And to be fair, so many bad DSLs have been created in the past, especially behind corporate walls, that I understand the aversion to them. And anyone who has had to work on Lisp or Ruby code created by a DSL-inspired hacker knows the pain.

DSLs have many problems, but the most obvious one is that you get a language without the testing, documentation, tooling and community of a battle-tested general-purpose one. And because it targets a niche by nature, the miracle needed for it to get real traction requires an even bigger divine intervention.

Even if Nix were amazing (I couldn't tell, I haven't tried it), and the resulting product not only fantastic but worth switching to (which is not the same thing, and harder to achieve, or even to recognize with honesty), the scale it would need to become practical may not even fit in the niche it targets.


XUL and Rust are also absolutely not used in the same place. AFAIK there's no Rust code in the Mozilla codebase that actually lays out widgets in random UI forms.


> He even has the nerve to mention Mozilla as failing with XUL, while ignoring the success of Rust!

Um. Rust is not a DSL; if XUL failed and Rust succeeded, then that's (weak) evidence in favor of "DSLs suck".


I also started using NixOS around Christmas time. I really love NixOS, but I'm at best lukewarm on Nix as a language. Honestly, though, the language is not my current concern. My current concern is actually more about compartmentalization.

Being able to run `nix-shell -p` and get a shell with a program installed is super cool. Also, being able to add a package to the system declaratively is super cool. However, one thing I wonder about is workflows.

For example, I have a lot of JetBrains IDEs installed for various languages, but I don't want to pollute my global environment with tools, especially command line tools that pollute the PATH. Let's say I want to use JetBrains Rider. I can easily start a shell with the dotnet SDK with `nix-shell -p dotnet`, then start Rider under that shell. This is OK, but it feels wrong to me. It feels weird that launching Rider outside of nix-shell won't ever work, yet it's installed globally. And yet, I also kind of want it to be globally installed...

Another solution would be to have Nix expressions for every project I work on. This actually seems better and I've adopted it for a couple projects, but it also irks me because it feels half-baked. I don't know where to put these expressions other than next to the project, but I don't think it belongs in version control. Also, I'm very mixed for what belongs in these environments. Should it be personally suited, with my IDE and developer tools? If I do that, I can't share the same code I'd use for my tooling as I would for my Nixpkg (assuming there is a nixpkg) which leads to a divergence I don't like. (Though maybe that can be solved?)
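(For concreteness, the per-project expressions I mean are tiny. Something like this sketch of a shell.nix, where the exact attribute names are guesses on my part:)

    # shell.nix: `nix-shell` in the project directory drops me into this environment.
    with import <nixpkgs> {};
    mkShell {
      buildInputs = [
        dotnet-sdk        # the SDK the project needs
        jetbrains.rider   # my IDE, scoped to this environment
      ];
    }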

I almost want a robust system of working environments that encapsulates this. So, I could think about my Nix workflows as separate environments. Declarative or imperative, and ideally with more encapsulation than a typical nix-shell. Maybe instead of orienting my workflow around opening an IDE and opening a project, I can orient it around opening a project environment and then opening an IDE inside of it.

At this point I'm just rambling. I still prefer NixOS for practical reasons, since it makes my complex IOMMU setup a cinch and keeps all of the configuration for it surfaced and documented. But dang, the potential here to shift workflow paradigms is enormous, it feels.


> I also started using NixOS around Christmas time.

Hah, me too. :)

> I really love NixOS, but I'm at best lukewarm on Nix as a language.

My biggest issue as of now is that the documentation focuses on which individual files you need to touch, but doesn't really explain the big picture. For example, /etc/nixos/configuration.nix contains a function that takes { config, pkgs, ... }. Where do these come from? Where can I find documentation for them? What is in the ellipsis? More generally, where can I find a high-level overview of how the different components interact with each other during, for instance, nixos-rebuild?


All the quirks are little design patterns that look pretty weird at first, not language features themselves. The resource you're looking for is the Nix Pills series [0]. The LearnXInYMinutes page on Nix is also a good overview of the syntax [1]. nix repl is also essential for feeling it out.

To answer your question, { config, pkgs, ... }: declares a function that takes an attribute set containing values for "config" and "pkgs". The ellipsis indicates that it's okay to pass extra attributes.
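In miniature, a configuration.nix is just such a function (a sketch):

    # A function from an attribute set to an attribute set of option values;
    # the "..." means it's fine to receive extra attributes (like lib).
    { config, pkgs, ... }:
    {
      environment.systemPackages = [ pkgs.vim ];
    }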

nixos-rebuild calls your function by:

    with (import <nixpkgs> {}); callPackage ./configuration.nix {}

<nixpkgs> is a file containing the set of all symbols (packages are under pkgs, Nix config lives in config, etc.) like a root namespace. callPackage inherits all those symbols and passes them to the function, which is how they got bound.

0. https://nixos.org/nixos/nix-pills/index.html

1. https://learnxinyminutes.com/docs/nix/

EDIT: I guess it's also like Git where it's really confusing until a moment of enlightenment where it all makes sense.


NixOS doesn't actually use callPackage; it uses lib.evalModules. The fact that there is a difference between how NixOS is evaluated and how Nixpkgs is evaluated is something I actually find quite annoying. Whilst Nixpkgs is all about functions and dependency injection, NixOS is all about merging dictionaries.

Here is some context, with corresponding criticism by the person behind Guix, which might give some interesting insights:

https://discourse.nixos.org/t/best-resources-for-learning-ab...


> I don't know where to put these expressions other than next to the project, but I don't think it belongs in version control.

What about using .gitignore or a git submodule for them? Ideally you could have one central repo for nix workflows, and then just have the git submodule point to the repo's subfolder, so when you do submodule init, it pulls down that specific folder. Otherwise, you could use .gitignore and a symlink, as you might do with your dotfiles if you manage those through version control.


Dotfiles are another story. I would prefer my dotfiles to be in source control, but they're not yet, as I haven't gotten to checking out the Nix community's tooling for that, and I prefer to not actually version my whole home folder (at least on machines where my actual work is done in the home folder.)

I think so far my preference might actually be to just have a separate folder, separately versioned, containing my nix expressions for doing work, centrally. But then I kinda want a system for launching into them so I'm not constantly calling nix-shell manually, and that's where I wonder if I actually just want some more tooling for this here.


I use `git --separate-git-dir=~/.dotfiles --work-tree=~/` and have a .gitignore set to whitelist only things I want to track. This keeps git from colliding with things I have checked out below. I also have configuration.nix import an expression I checked into my $HOME, so I can share common config between all my machines.

For shells, I have a custom `env-python [-p <packageNames>] [-f <extraPaths>]` which brings a fully-configured VS Code (with per-environment config), debuggers and neovim with all the trimmings. Everything inherits in layers so I can add different kinds of environments quite easily.

If you'd like, I can share my dotfiles with you. It's pretty custom but the beauty of Nix is you can clone and run!


> I would prefer my dotfiles to be in source control, but they're not yet, as I haven't gotten to checking out the Nix community's tooling for that

Have a look at home manager[0], I use it and have been very happy so far.

[0]: https://github.com/rycee/home-manager


Maybe you'd like `makeWrapper`? It's a little utility which replaces executables with small shell scripts, which set up env vars and PATH components, etc.

I'm not familiar with JetBrains or dotnet, but you could probably use makeWrapper to make a "jetbrains-with-dotnet" package, something like this:

    with import <nixpkgs> {};
    runCommand "jetbrains-with-dotnet"
      {
        inherit dotnet jetbrains;
        buildInputs = [ makeWrapper ];
      }
      ''
        mkdir -p "$out/bin"
        makeWrapper "$jetbrains/bin/jetbrains" "$out/bin/jetbrains-with-dotnet" --prefix PATH : "$dotnet/bin"
      ''
This package provides a `jetbrains-with-dotnet` script, which adds the `bin` directory of the `dotnet` package to the start of $PATH then runs the `jetbrains` executable from the `jetbrains` package.

Installing such a package globally will put the `jetbrains-with-dotnet` script in your PATH, but it won't pollute your system with anything from dotnet or jetbrains: those will be downloaded/built and cached in the Nix store, but are only accessible via that script.


> For example, I have a lot of JetBrains IDEs installed for various languages, but I don't want to pollute my global environment with tools, especially command line tools that pollute the PATH. Let's say I want to use JetBrains Rider. I can easily start a shell with the dotnet SDK with `nix-shell -p dotnet`, then start Rider under that shell. This is OK, but it feels wrong to me. It feels weird that launching Rider outside of nix-shell won't ever work, yet it's installed globally. And yet, I also kind of want it to be globally installed...

I use nix-buffer[0], which lets Emacs pick up the environment automatically when I open any file in the project. And as long as you only use setq-local it'll stay scoped to those files, "normal" files still open in the regular user environment. It's even seamless to keep files from multiple such projects open at once.

[0]: https://github.com/shlevy/nix-buffer


> This is OK, but it feels wrong to me. It feels weird that launching Rider outside of nix-shell won't ever work, yet it's installed globally.

What do you mean it won't work? I think Rider will launch without dotnet; if not, then Rider's dependencies should be fixed to include dotnet. Otherwise, it's the same case as for any other distro.

> but I don't think it belongs in version control.

Why not? In fact you should put them in VC, since anyone with Nix can easily build the program exactly the same as on your machine.

> Should it be personally suited, with my IDE and developer tools?

You can create different nix files: e.g. in ide/default.nix you can inherit from your default shell.nix and add extra dependencies. You may want to skip checking this into VC, though.

For a good organization of nix files, look at the reflex-frp Haskell project.


Rider launches without dotnet, but is not very useful. It becomes effectively a text editor.

Of course, if you wanted to use Rider without dotnet, you could. But it doesn't fit my workflow.

And yes, like I said, this is still better than the situation on other distros, but it feels like it's close to being even better than that.

>Why not? In fact you should put them in VC, since anyone with Nix can easily build the program exactly the same as on your machine.

For starters, because I work on a lot of projects that aren't mine. But also, because it's not really standardized.

Like there's a fairly standard practice for putting a package.json in a repository but not really so for default.nix. And it's also not exactly orthogonal to Nixpkgs; if you wanted your project to be in Nixpkgs, your expression would be (slightly) different. That is probably solved by importing the Nixpkg into default.nix, but I haven't tried yet.

Finally, this doesn't capture everything. If I want an environment where I have VSCode with the Go extension installed, I would prefer that to be isolated from other VSCode instances.

That's why I think I want more than just a Nix expression. The tools to do what I want are all here, but the tool to automate the workflow is not.


> If I want an environment where I have VSCode with the Go extension installed, I would prefer that to be isolated from other VSCode instances.

You can absolutely do this by overriding the derivations. The tool to automate your workflow is there (just create a bunch of nix files and a makefile to call nix-shell/nix-build), but it's not as convenient as you might like.
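For the VSCode-with-Go example, nixpkgs has vscode-with-extensions, which you can override per environment (a sketch; the extension attribute name is from memory and may differ):

    with import <nixpkgs> {};
    vscode-with-extensions.override {
      # Only this instance of VSCode sees the Go extension.
      vscodeExtensions = [ vscode-extensions.ms-vscode.Go ];
    }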


Making working per-project definitions in NixOS is what ultimately got me to drop NixOS :(

The specific reason is Ruby + RoR leading to nokogiri... which is something people have been dealing with for several years with no solution, and I lost the one file where I managed to get it working well enough to launch RubyMine in a way that sees everything necessary.

Similar issues crop up in other places, due to how Nix dependencies are handled like unflattened node_modules at times :/


Check out `users.users.<name?>.packages` for a declarative way of installing user-specific (non-global) packages.

https://nixos.org/nixos/options.html#users.users+packages


That is cool, but I actually kind of like keeping most things system scoped. What I'd like to do is move everything that is not system scoped into a more granular environment.


I think the Nix expression language is Turing complete. At least judging by its constructs, it seems so [1].
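(For instance, let-bindings in Nix are recursive, so unbounded recursion is a one-liner; a quick sketch you can feed to nix-instantiate --eval:)

    # Recursion is enough for Turing-completeness.
    let fib = n: if n < 2 then n else fib (n - 1) + fib (n - 2);
    in fib 10   # => 55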

And while I think the Nix language is kind of strange, there is at least one reason I prefer it over a Lisp (this coming from a person who uses Clojure every day at work): no parentheses.

It is not that I think Lisp is a bad language because of the parentheses: I love the consistency and simplicity of the language. However, Lisp is one of the few languages in which I don't feel productive at all without a proper editor setup (rainbow parentheses, slurp, barf, etc.), and I think this is bad for configuration files, since sometimes you simply don't have the proper setup.

I can easily edit /etc/nixos/configuration.nix in NixOS using only vi to bootstrap my system. I don't think it would be as easy to do the same using GuixSD.

[1]: https://medium.com/@MrJamesFisher/nix-by-example-a0063a1a4c5...


What if there were a way to replace parentheses with syntax? For instance, if you could declare your operating system as:

  operating-system:
    ...
    packages:
      - vim
      - %base-packages
instead of:

  (operating-system
     ...
     (packages (cons* vim %base-packages)))
would that help you? No more parentheses, the complexity of cons* is hidden behind the "-", and it's pretty trivial to convert between one syntax and the other. If it were possible, how would that affect your opinion of Guix? What if the installer came with a pre-configured vim for Scheme?

btw, I'm also a Guix and a vim user, and I don't use anything special for parentheses. I just close them manually like a grown-up (which is a very useful skill to have when you want to impress an emacs user :p).


Wisp provides a Python-like syntax for Scheme that gives you something very close to the above: http://draketo.de/english/wisp . (Though my editor and I don't view parentheses as an obstacle, quite the opposite...)
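For instance, the packages field above would come out in Wisp roughly like this (a sketch from my reading of the Wisp docs, so details may be off):

    operating-system
      ;; ...
      packages
        cons* vim %base-packages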


There you're replacing a configuration language that is executable Scheme, with what seems to be a declarative language, and converting to indentation syntax. That's a two-variable change.

The original S-exp syntax doesn't have to be an executable program in which you use Scheme operators like cons* .

The indentation-based notation, likewise, can retain the property that it's executable Scheme and reveals operations like cons* to the user.


Yeah, this seems way more clean.


The whole problem with that notation is that it isn't declarative. It's meant to execute. So to read it you have to know Guile functions like cons* . vim and %base-packages are actually variable bindings; vim undoubtedly stands for some package object defined elsewhere.

A Scheme-based configuration language that is executable Scheme is going to end up using the entire language, and be daunting to some IT person who just wants to reconfigure the distro and isn't a Scheme programmer. (Counterargument: a clean, dedicated configuration template language will likely sprout a way to run arbitrary Scheme code anyway.)

It's tempting to make things like this because they are quick and dirty. For instance, we could make an assembly language which is like this. (mov eax ebx) could be a call to a function called mov, and eax and ebx could be constants in Scheme itself. The function has the side effect of emitting the assembled code into some stream. We don't have to write any code which would walk, validate and translate the syntax of a Lisp-ified assembly language; we just let the programmer write code using the API we have provided and throw it into the path of the evaluator.

And we can have fun like (dotimes 13 (roll %ecx)) to emit 13 copies of an instruction.

So it has some up-sides, and some down-sides.


"Too many parentheses" isn't a valid argument IMHO. It comes up so often that I wrote a standard response http://chriswarbo.net/blog/2017-08-29-s_expressions.html

tl;dr: s-expressions are (by design) trivial to both parse and generate, and there are already a bunch of tools to convert to/from other formats (such as indented text). There's no need to create/use a whole different language+toolchain+ecosystem just to avoid parentheses.


This post shows exactly what I mean: editing Lisp with a good editor is nice; however, when you're bootstrapping a system, sometimes the only tool available is vi (not even vim, just vi).

Try to find a closing parenthesis after 8 function calls without syntax highlighting, or without your "hide parentheses".

Nix syntax is quite simple to read even without any editor support. I did the whole bootstrapping of my current NixOS system using only vim without special support for the Nix language, and still got it right on my first try.


> This post shows exactly what I mean: editing Lisp with a good editor is nice; however, when you're bootstrapping a system, sometimes the only tool available is vi (not even vim, just vi).

Then I think you misunderstood. My point is that if you don't like parentheses, then don't use parentheses. Lisp doesn't care. Denote structure in whichever way you prefer, and convert that to/from s-expressions programmatically.

Guix runs on Guile Scheme. I-expressions (which use indentation instead of parentheses, and which I linked to in the post: https://srfi.schemers.org/srfi-49/srfi-49.html ) contain an implementation for Guile right there in the spec. Likewise the code repos for Sweet expressions and Wisp (which also allow mixfix, braces, etc., and which I also linked to in the post) seem to provide explicit Guile support too, alongside other systems (e.g. Racket) and standalone conversion scripts for compatibility with any other s-expression system. Guile also has a built-in implementation of "curly-infix expressions" ( https://www.gnu.org/software/guile/manual/guile.html#SRFI_00... ).

That's 4 ways to use Guix without reading or writing any parentheses; one of which comes built-in. It's also pretty trivial (thanks to the simplicity of s-expressions) to make up your own alternative if you don't like any of these.


Vi has % to jump to a matching parenthesis as a pretty ancient feature. It's in POSIX:

http://pubs.opengroup.org/onlinepubs/9699919799/utilities/vi...

Of course % is a movement that combines with commands like deletion: d%.

The original vi also has a Lisp mode (not described in POSIX) for indentation; that is of course in Vim also.

All the code you see here was written using little more than indentation via Lisp mode and % for paren matching:

http://www.kylheku.com/cgit/txr/tree/share/txr/stdlib

Though I work with syntax coloring and sometimes use Vim features like visual selection.


> Vi has % to jump to a matching parenthesis as pretty ancient feature.

I know. It helps, but it is far from good enough. Doing things like slurp is still a pain without proper editor support, and that is very common in Lisp.

> All the code you see here was written using little more than indentation via Lisp mode and % for paren matching:

Yeah, and K&R wrote Unix with only ed. My point is still valid: Nix is more readable than Lisp without proper editor support.


Also, any half decent Lisp dialect can tell you where an unbalanced parenthesis lies; you shouldn't have any difficulties hunting for this in a large file.

  $ txr unbal.tl 
  unbal.tl:8: syntax error
  unbal.tl:8: unterminated expression
  unbal.tl:8: while parsing form starting at line 5
  $ pr -Tn unbal.tl
      1   (defun foo()
      2   
      3     )
      4   
      5   (defun bar()
      6     (let (x)
      7       )


I don't think this is a great error message. I use Clojure every day and I still had to look twice to find the error.

If it were something like this, it would be much better:

  Unmatched parenthesis starting at line 5.
  Maybe you wanted to do something like this?
      1   (defun foo()
      2   
      3     )
      4   
      5   (defun bar()
      6     (let (x)
      7       )
               ^
This being a very basic example. In real code you will have multiple function calls inside more complicated functions (like a package definition in Guix), so you will end up closing 10+ parentheses, and catching them can be hell.


I agree, but in fairness Nix is also Turing complete.

I prefer to think that Nix "out-schemes" Scheme.


Is Guix not?


Guile Scheme and Nix both are.


Guix does use Hydra, yes: https://hydra.gnu.org/


That's true, but it is no longer the default substitute server. For the default we use a custom Guix Continuous Integration tool.


That's awesome! Congrats to the Guix community. :-D

I hope that as Guix and Nix drift further apart, both projects can continue to learn from one another. :-)


I only skimmed, but parts of this article read as just another pro-Lisp piece... A rant against other languages (which the author terms "DSLs" simply because they're not Turing complete) like LaTeX, HTML, regex, SQL, etc., and a list of alternatives, all of which are S-expression-based. Well, the superiority of S-expressions is just, like, your opinion, man. Personally, I consider them inferior to pretty much any other syntax (joke languages like Brainfuck, Whitespace, JavaScript excluded). And in absolute terms, the whole of civilisation was built on syntax-full DSLs (law, math, engineering, ...).

I mean, how is this (Skribilo):

     (define (freedom . body)
       (ref :url "http://www.gnu.org/philosophy/free-sw.html" :text body))

     (p [Skribilo is a ,(freedom [free]) document production tool that takes a structured document ...])
better than this (LaTeX):

    Skribilo is a \href{http://www.gnu.org/philosophy/free-sw.html}{free} document production tool that takes a structured document ...


Seems like the top defines a reusable construct. Couldn't this be

     (p [Skribilo is a ,
     (ref :url "http://www.gnu.org/philosophy/free-sw.html" :text "free")
     document production tool that takes a structured document ...])
vs

    \par Skribilo is a \href{http://www.gnu.org/philosophy/free-sw.html}{free} document production tool that takes a structured document ...

If we further examine

     (ref :url "http://www.gnu.org/philosophy/free-sw.html" :text "free")

     \href{http://www.gnu.org/philosophy/free-sw.html}{free}
The first example defines a URL with (ref :url :text) and the LaTeX one is \href{}{}; this is about the same amount of syntax, both being very similar to a function with 2 arguments.

The enclosing scope is (p []), so that's basically 5 extra characters to define structure.


The article is for the most part about operating systems; only a minor part is about programming languages (which is important for the matter at hand). It's not pro-Lisp in particular, it's pro-"general-purpose, well-tested and well-designed programming languages". I don't exclude other programming languages from the picture. The alternatives I gave as examples are the ones that I know of, but surely there are others out there in Python, Lua, Ruby, you name it.

I personally like S-expressions because they are very general and have deep semantic implications (eval-apply, a.k.a. code is data) that go beyond mere syntactic considerations.

Regarding your examples, if I understand correctly they are not equivalent, so they don't exactly compare.

Here LaTeX would require the \begin{document}...\end{document} bits.

But regardless, TeX is very painful to program with (it lacks proper data structures and control structures). That's very important, in my opinion.


TeX itself is Turing-complete, if not exactly an ideal language for computing things not directly related to typography.

But Lua(La)TeX exists, which allows for more user-friendly scripting in Lua (not my favourite language, but...), and I think that is the only real 'competitor' of (La)TeX.

I love Lisp, but the majority of what I write is in (La)TeX, and I don't know of anything comparable for dealing with creating and typesetting academic-type work. Most of its 'competitors' use TeX on the back-ends, and though I've tried writing more in Org mode and exporting to LaTeX, in the end for anything beyond trivial text, it's easier just to write directly in LaTeX.


All this is definitely true, but in my opinion not ideal. LuaTeX adds a significant layer of complexity to an already way-too-complex system.

At this point we might want to step back and wonder why this has become so hard.

TeX is a poor foundation to build upon, so I don't think LuaTeX, despite being the best solution currently, is the right direction to explore.


> TeX is a poor foundation to build upon

I still disagree with this characterisation. TeX is a really beautiful solution to a non-trivial problem, and its implementation in WEB is really quite interesting. LaTeX adds quite a lot of complexity and messiness, but does also add very useful functionality and abstraction away from low-level typesetting. Org-mode and similar 'mark-up' languages are very nice in more rigidly-defined domains, but they don't have the power of (La)TeX for academic writing. I don't see any real 'competitors' to (La)TeX that aren't themselves just adding layers on top of TeX (for better or worse). TeX is nearly as old as (original) Emacs and has similarly become complex in perhaps not-entirely-ideal ways, but - like Emacs - there's no substitute.


I'm not criticizing the power of TeX in terms of typesetting, I'm only referring to the _language_ here. (And you are absolutely right, Org and other markup languages are not competing with TeX.)

Imagine the capabilities of TeX with a good programming language.

I don't find WEB beautiful. In my opinion, literate programming (at least in this case) is a pretext to hide a terrible, unexpressive programming language.

As you're saying, Emacs had the good taste to use a Lisp back in those days, so it really is a pity that TeX didn't.


Well, I agree a Lisp, or at least a more Lispy interface to TeX, would be much preferable. But I am dubious that anything comparable to TeX and the TeX ecosystem could be developed at this point.


Additional: I did come across https://github.com/mbattyani/cl-typesetting , though its homepage seems to have disappeared.


Nice! Thanks for sharing!


This would be an extremely demanding project indeed, which is probably one reason no such thing has really taken off in many decades.


Instead of a pro-Scheme rant, one could also interpret it as a NixOS pitch using a Scheme API instead of a DSL.


Guix and GuixSD are definitely interesting. I am currently running Guix on top of Fedora for my desktop and just recently started running GuixSD on my laptop. Probably my favorite feature, and the one I use the most, is `guix environment`, which can help me set up all the necessary packages needed to build and work on projects.

I think there is a lot of hidden potential in Guix, especially with the choice of Guile as the language. The article demonstrates a bit of what you can do from Guile with some small script examples. A lot of the functionality of Guix is exposed and can be used to create tools and scripts. I sort of wonder what interesting tools and scripts will be made using this.


Nice. What languages do you build environments for? Are they ready for a typical Python & JS (Vue.js) project?


Python: sure. JS: ... it's complicated.

See this article for an introduction to the problems of attempting to match npm with principled package management:

https://dustycloud.org/blog/javascript-packaging-dystopia/


I have used guix environments for Python, C++, and Guile projects.


Any pointers to an opinionated install guide that does not require me to configure every aspect of the system? Something that yields a working gnome/kde desktop with less than 10 minutes of effort would be ideal.


A more user-friendly installer is in the works and should hit Guix with 1.0.

Currently, Guix comes with pre-filled operating system declarations, so that besides filling out the hostname and a few other options, there is little to do but run `guix system init`.

While the Guix install is completely different from any other system (except Nix), it is surprisingly easy, with nothing else but 2-3 steps (if you have Linux-libre compatible hardware, that is).
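For reference, a pre-filled declaration boils down to something like this (a sketch; see the manual for the real templates):

    (operating-system
      (host-name "my-laptop")   ; one of the few fields to edit
      (timezone "Europe/Paris")
      (bootloader (bootloader-configuration
                    (bootloader grub-bootloader)
                    (target "/dev/sda")))
      (file-systems (cons (file-system
                            (device "/dev/sda1")
                            (mount-point "/")
                            (type "ext4"))
                          %base-file-systems)))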


Thanks - really looking forward to moving my workstations to something like Guix; the productivity gains will be immense.


> TeX, LaTeX (and all the derivatives), Asymptote (better ideas: scribble, skribilo)

It’s hard to take screeds like this seriously when their suggested alternatives use LaTeX for output!


Skribilo can use Lout for output.


This is sadly a good point... Scribble and Skribilo are not finished; there is some (very unfinished) work to implement a new PDF generator (at least for Skribilo).

I only said it was a "better idea", not a "better finished product", but I should have been more explicit. Will fix.


> GuixSD strimes at being the “fully-programmable OS."

What does this mean? Google is suggesting it has something to do with weed-whackers, and I'm guessing that's not right.


probably s/strimes/strives


Indeed, thanks for catching the typo!


>GNU/Linux distributions in general lack the aforementioned perks of Guix. Some of the most critical points include:

>Lack of support for multiple package versions, or “dependency hell”. Say, when the latest mpv requires a new ffmpeg, but upgrading ffmpeg breaks most other programs relying on it, we are stuck with a dilemma: either we break some packages or we stick to older versions of other packages. Worse, a program could end up not being packaged or supported by the OS at all. This issue is inherent to most distributions, which simply cannot provide the guarantee to fulfill their primary objective: package any program.

I disagree. Having multiple versions of a package means having old versions of a package, and that's something most users should not be messing with because they are not in a position to track and backport security fixes.

It's always undesirable to have a dependency on an old, probably unsupported version of a library. The distributions that do it have a correspondingly huge maintenance burden. One of the advantages (from my point of view) of a lighter rolling distribution like Arch is that because all the software on the system is expected to be compatible with a library version that is upstream-supported, there is significantly less maintenance work to be done. Note that this is compatible with, say, packaging both GTK2 and GTK3 programs, since both of those are supported upstream. The OS is perfectly compatible with shipping multiple versions of a library, but Arch sensibly (IMO) chooses not to do so by default.

One way to quickly experience the pain points of the OP's way of doing things is to try to use or distribute software written in a language with a "modern" build tool like npm or cargo. These systems implicitly encourage hardcoding a single version (or a small set of versions) as dependencies, and the convenience of this means that many developers rarely update the list even if the new library versions are fully compatible with their code. So as the user building the software, or someone who wants to distribute it, you can either run with libraries that have known vulnerabilities, or you can update the libraries yourself and run an unsupported configuration, hoping you don't break anything in the process. A more reasonable position is that it is a bug if your program crashes with the latest version of a library.


> A more reasonable position is that it is a bug if your program crashes with the latest version of a library

This makes sense for actively maintained code.

There are a lot of pieces of software that aren't actively maintained but that don't exhibit any breaking bugs, and people continue to rely on them. If the libraries that these pieces of software use ever make breaking changes, either (a) someone has to step up and start maintaining the software again, (b) an older version of the library must be installed, or (c) the software is dropped, annoying a lot of people who still depend on the tool.

What macOS has is .frameworks, which let you version shared dynamic libraries. When building a project against a framework, one can either choose to build against a specific version or (more commonly) leave the version unspecified and use the latest. No fuss, no muss.


> When building a project against a framework...

That still implies active maintenance, i.e. doing new builds.

If it's assumed to work with the "latest version", but then breaks in the future, it is a decision outside of the build itself as to which version of the library it should link up with.

The only thing that should be baked into the software is which specific version of the dynamic lib it was compatible with, then externally the system can determine which most-recent version of the library is still compatible.


> then externally the system can determine which most-recent version of the library is still compatible

This is great until a more recent version of a library still has the same interface but completely different behaviour or a seriously breaking bug in the implementation.


That's exactly what I'm talking about. That compatibility information is what needs to be captured, and is outside any future preference that the program was originally built to.

Instead, most automatic systems just blindly say "this version number is newer, so use that" and break everything. Programs were built against a specific version which worked, and only versions compatible with that specific version should be used.


Counterargument: constant updating contributes to "software bit rot" - i.e. perfectly good hardware and software suddenly stop working simply because they were not updated to keep up with the changes in the environment around them. Put another way: updates are risky, and can randomly and unexpectedly break your setup. The software you're depending on may no longer be developed, or the new version (or alternatives) may be worse than the old one - which is something increasingly frequent in this industry.

Not every update is important. Hell, I'd say most aren't, since developers are freely mixing security patches with feature updates. Until the culture changes to cleanly separate the two, all updates will carry the risk of degrading or breaking things.

> These systems [npm or cargo] implicitly encourage hardcoding a single version (or a small set of versions) as dependencies, and the convenience of this means that many developers rarely update the list even if the new library versions are fully compatible with their code.

Counterpoint: version pinning prevents your project from suddenly breaking apart at unexpected times, because some dependency had a breaking change. It also prevents you from automatically pulling in vulnerabilities introduced - sometimes intentionally - in the newer version. NodeJS in particular has a "malicious package" drama roughly every 6 months. In all honesty, you should be auditing your dependencies anyway, and it's harder to keep up if they change faster than you can review them.


The package manager should have regression tests that check for linking breakage etc. before installing; pinning is just a lazy solution.


Works > Secure.

Don't believe me? You're using a computer right now. Doubtless you are running both hardware and software that have had vulnerabilities exposed in the past and almost certainly have vulnerabilities right now that haven't been exposed yet. You use them anyway because at the end of the day shit needs to get done.

That's why being able to run out of date things is important to people in the real world.


Exactly. If additional security is needed, it can be applied in layers, e.g. container sandboxing. If updates are forced, old/infrequently used content will simply be broken/not accessible.


Distributions (package sets) can and do enforce that as few versions of something as possible be supported. But it's just stupid to make this a hard requirement of the package manager itself---both old-school upgrades and new-school docker-style workflows become so much harder.


Guix co-maintainer here.

The article says "Guix is a fork of Nix", but this is not correct. Guix uses the same format for derivations (~ a low-level representation of builds), so that it can reuse the Nix daemon, but that doesn't make it a fork.

Guix builds upon the very same idea of functional package management that was pioneered with Nix, so they are very close and they do have shared roots. But that's not what "fork" is usually understood to mean.

Here's something I wrote a few months ago (with minor changes) to clarify the relationship between these two projects. I hope this helps.

-------------------

As one of the co-maintainers of GNU Guix I'm obviously biased, but here's what I consider some important unique features of Guix:

- Guix is all written in Guile Scheme (with the exception of parts of the inherited daemon, which hasn't yet been completely implemented in Guile); this extends to development tools like importers, updaters, to user tools like "guix environment", and even bleeds into other projects that are used by GuixSD (the GNU system distribution built around Guix), such as the shepherd init system. There is a lot of code reuse across the stack, which makes hacking on Guix really fun and smooth.

- Packages are first class citizens in Guix. In Nix the idea of functional package management is very obvious in the way that packages are defined, namely as functions. These functions take their concrete inputs from an enormous mapping. In Guix you define first-class package values as Scheme variables. These package values reference other package values, which leads to a lazily constructed graph of packages. This emergent graph can be used as a library to trivially build other tools like "guix graph" (for visualising the graph in various ways) or "guix web" (for a web interface to installing and searching packages), "guix refresh" (for updating package definitions), a lovely feature-rich Emacs interface etc.

- Embedded DSL. Since Guix is written in Scheme---a language for writing languages---it was an obvious choice to embed the package DSL in the host language Scheme instead of implementing a separate language that needs a custom interpreter. This is great for hacking on Guix, because you can use all the tools you'd use for Scheme hacking. There's a REPL, great Emacs support, a debugger, etc. With its support for hygienic macros, Scheme is also a perfect vehicle to implement features like monads (we use a monadic interface for talking to the daemon) and to implement other convenient abstractions.

- Graph rewriting. Having everything defined as regular Scheme values means that you can almost trivially go through the package graph and rewrite things, e.g. to replace one variant of a package with a different one. Your software environment is just a Scheme value and can be inspected or precisely modified with a simple Scheme API.

- Code staging. Thanks to different ways of quoting code (plain S-expressions and package-aware G-expressions), we use Scheme at all stages: on the "host side" as well as on the "build side". Instead of gluing together shell snippets to be run by the daemon we work with the AST of Scheme code at all stages. If you're interested in code staging I recommend reading this paper: https://hal.inria.fr/hal-01580582/en

- Bootstrapping. Some of us are very active in the "bootstrappable builds" community (see http://bootstrappable.org) and are working towards full bootstrap paths for self-hosting compilers and build systems. One result is a working bootstrap path of the JDK from C (using jikes, GNU classpath, jamvm, icedtea, etc). In Guix we take bootstrapping problems seriously and prefer to take the longer way to build things fully from source instead of just adding more binary blobs. This means that we cannot always package as many things as quickly as others (e.g. Java libraries are hard to build recursively from source). I'm currently working on bootstrapping GHC without GHC and without the generated C code, but via interpreting a variant of GHC with Hugs. Others are working on bootstrapping GCC via Scheme.

- GuixSD, the GNU system distribution built around Guix. GuixSD has many features that are very different from NixOS. The declarative configuration in Scheme includes system facilities, which also form a graph that can be inspected and extended; this allows for the definition of complex system facilities that abstract over co-dependent services and service configurations. GuixSD provides more Scheme APIs that apply to the whole system, turning your operating system into a Scheme library.

- I like the UI of Guix a lot more than that of Nix. With Nix 2.0 many perceived problems with the UI have been addressed, of course, but hey, I still prefer the Guix way. I also really like the Emacs interface, which is absolutely gorgeous. (What can I say, I live in Emacs and prefer rich 2D buffers over 1D command line strings.)

- It's what I want GNU to be. I'm a GNU hacker and to me Guix is a representative of a modern and innovative GNU. It's great to see more GNU projects acting as one within the context of Guix and GuixSD to provide an experience that is greater than the sum of its parts. Work on Guix affected other GNU packages such as the Hurd, Guile, Mes, cool Guile libraries, and led to a bunch of new GNU packages such as a workflow language for scientific computing.

On the other hand, although Guix has a lot of regular contributors and is very active, Nix currently has more contributors than Guix. Guix is a younger project. The tendency to take bootstrapping problems very seriously means that sometimes difficult packages require more work. Oddly, Guix seems to attract more Lispers than Haskellers (I'm a recovering Haskeller who fell in love with Scheme after watching the SICP lecture videos); it seems to be the other way around with Nix.

Having said all that: Nix and Guix are both implementations of functional package management. Both projects solve similar problems and both are active in the reproducible builds efforts. Solutions that were found by Nix devs sometimes make their way into Guix and vice versa. The projects are not competing with one another (there are orders of magnitude more people out there who use neither Guix nor Nix than there are users of functional package managers, so there's no point in trying to get people who use Nix to switch to Guix). At our recent Guix fringe event before FOSDEM Eelco Dolstra (who invented functional package management and Nix) gave a talk on the future of Nix surrounded by Guix hackers --- there is no rivalry between these two projects.


> - Graph rewriting. Having everything defined as regular Scheme values means that you can almost trivially go through the package graph and rewrite things, e.g. to replace one variant of a package with a different one. Your software environment is just a Scheme value and can be inspected or precisely modified with a simple Scheme API.

Tell us about graph rebuild times.

When there's a security update to glibc and you do the equivalent of `apt-get upgrade` in Guix, how long does that generally take to complete?


In cases like that we use a mechanism called "grafting". See http://www.gnu.org/software/guix/manual/en/html_node/Securit... for a description.

We usually put changes that cause the world to be rebuilt on a separate branch and have the build farm build binaries for it. This can take quite a while, but it's not something a user generally needs to worry about.
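In a nutshell, a graft is declared via the package's replacement field (the mechanism described in the manual section above; package names here are illustrative):

    (define-public bash
      (package
        (name "bash")
        ;; ...other fields as usual...
        ;; References to this bash get rewritten to point at bash/fixed,
        ;; without rebuilding any of the dependents.
        (replacement bash/fixed)))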


The issue with all package managers is that they are basically an abstraction over build systems. Ideally a package manager would just be a "meta-build" language plus, say, git (with per-option branches).


I believe this is called Portage. Turns out it's complicated to manage and comes with costs.


No, Portage isn't a meta-build language; something like Premake would be, but the Premake implementation is pretty broken. Portage also does a mess of shell (`use foo && FLAGS+='--foo'`), which is a mess of state that needs to be tested for every combination, which per-branch options could fix.


Indeed, I'll fix this, sorry for the poor choice of words. What phrasing would you suggest then? "Guix was influenced by Nix"?


I think the licenses alone will split the whole functional package management community in two.


Nix and Guix both excite me a lot. Guix as a Guile Scheme API is very exciting, but what I like more is that, as a result, Guix & its derivations are all part of the same repo. Because they are one and the same, I can be sure I'm not going to build a package that hasn't been tested against the version of Guix I am running. It simplifies integration testing if I decide to create a new branch of Guix packages.


I really want the things Guix provides, but I'm not a computer scientist. It feels like you need to be one to make it work somewhat like the Linux desktop I enjoy computing on (text editor, graphical web browser, sky, virtualbox, postman, osb, keepassxc, etc.)


If you look at https://guix-hpc.bordeaux.inria.fr/browse? you'll find most of your wishes there, and many more to come hopefully.

Guix is also working on an installer to help reduce the friction. I'd personally like to work on a fully graphical install + package management.

While Guix sounds like some heavy scientific stuff, it really is a powerful base that allows anything to grow on top, including user-friendly interfaces.


I'm one of the Guix co-maintainers and a Guix user, but I'm not a computer scientist.

The biggest thing that might be annoying for someone who is not comfortable with Lisp is probably the operating system configuration, which is done with Scheme. For installing packages, however, you can use the command line interface, or the convenient Emacs user interface --- neither of those requires any programming knowledge.


Sadly, the innovation in Guix and Nix is to bring even more complication, not less, to what should be the straightforward task of getting software onto your machine.


I have a question for GuixSD users here: I have a laptop with NVidia CUDA, TensorFlow, mxnet, Python, Julia, etc. installed.

I experimented with NixOS last year, liked it, and being a long-time Lisp user I would like to also try GuixSD. Would I have problems with CUDA, etc.?


CUDA is problematic. There is no package for CUDA in the official Guix channels, because it is non-free software.

There is a patch for a Tensorflow package, which is built without GPU support (because that requires CUDA), but it hasn't been merged yet, because it isn't really pretty. (I didn't use Bazel to build Tensorflow, because Bazel depends on dozens of Java archives that cannot easily be built exclusively from source.)

You can use Nix as an additional package manager on top of a Guix system, of course, but this might not be as satisfying as using one package manager exclusively.


Exciting stuff. There's been a bit of a Lisp renaissance over the last 5-10 years (thanks to the Emacs packaging system and Clojure), so this could be the next logical step for someone who writes parens for a living.


I don't understand why these config systems can't be less 'programmy' (less technical).

All of these config systems (Nix/Guix/NginX Lua, etc.) are so complex, with layers of interesting syntax and punctuation.

Why can't these use simpler (INI/YAML/TOML/CSON or even JSON) configuration schemes? It would reduce the barrier to entry, simplify parsing and automation, and a whole lot of other things. Why do I need to learn yet another language just to configure and manage my system?


> Why can't these use simpler [...] configuration schemes?

Because you inevitably need something less simple at some point. It's a hard balance to strike. Want only a config format? Get angry advanced users. Want only a programming language? Get angry beginners. Want both? Get a hodge-podge of half-and-half configs calling out to built extensions and in general a bunch of beginners finding poor code-based solutions on the internet.

In general, you need a simple programming language to manage this stuff or it becomes unwieldy once it leaves hobbyist size. And mixing config formats and full-blown languages (or worse, template languages) just adds ambiguity on the best way to do something coupled with loose synchronization between the two formats, "extension hell", and half-developed code solutions with an exponential number of poorly tested combinations of exported features that beginners try to use. To see examples of all these problems, one simply has to look at the historical landscape of orchestration engines and build tools.


Some food for thought: https://dhall-lang.org/ looks like an interesting balance between config file format and programmability. Would it be suitable for describing packages?


I don't think that would be enough. At the end of the article I linked the paper "Code staging in GNU Guix" (https://arxiv.org/abs/1709.00833) which explains why 2 stages of code execution are so central in Guix. In practice, this means that a Turing-complete language (Scheme) is used both at the declaration level and the build level. Lots of packages make use of this feature, it brings tremendous flexibility to packaging.
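To give a flavor of the two stages, here is roughly what build-side code looks like in a G-expression (a sketch along the lines of the manual; #$output is substituted at build time):

    (gexp->derivation "example"
      #~(begin
          ;; This code runs on the build side, inside the build environment.
          (mkdir #$output)
          (call-with-output-file (string-append #$output "/greeting")
            (lambda (port)
              (display "hello from the build side" port)))))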


Others gave good answers I think, so I'll just address your points directly.

> [Nix is] so complex with layers of interesting syntax and punctuation

Precisely. Guix uses Scheme all over the place. It's just _one_ language for everything, not multiple languages. As a result, it's easier.

Scheme yields very easy-to-read declarations (see https://ambrevar.xyz/guix-packaging for some examples), so JSON & friends don't add much here.

> Parsing and automation

Parsing couldn't be simpler than using the language itself! There is no need to write an interpreter here.

> Why do I need to learn yet another language just to configure and manage my system?

That's precisely part of my thesis: with Guix, you only need to know Lisp/Scheme (which users might already know since it's a general-purpose language). Programs going for a DSL are effectively forcing everyone to learn a language for the sole sake of using the programs.


Look at MSBuild. It tries to use XML as the syntax for its configuration. But because you absolutely need a programming language for this task, you end up programming in a very odd language that is unlike anything else. It combines the clarity and simplicity of XML with the intuitiveness of a DSL.

Now think of a Lisp that uses YAML as its syntax instead of sexprs. That might be a much better choice than MSBuild, but do you really want that? Would that make things better?


Past a given point, and quite soon, you'll be using advanced and peculiar YAML syntax. Want to factor out rules?

    .deploy: &deploy
    
and

    foo:
        <<: *deploy
etc. This doesn't simplify parsing. This is already a DSL. And the format changes slightly depending on the implementation (official, GitLab, …).


I sometimes like to say that the Nix language is like JSON with functions and multiline interpolated strings, both of which happen to be extremely useful for system configuration and package definition.

In fact packages in Nix are functions. Dependencies are just function arguments. It makes a lot of sense.
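A typical Nix package definition really is just a function of its dependencies (a sketch; the URL and hash are placeholders):

    { stdenv, fetchurl, zlib }:   # dependencies arrive as function arguments

    stdenv.mkDerivation {
      name = "example-1.0";
      src = fetchurl {
        url = "https://example.org/example-1.0.tar.gz";  # placeholder
        sha256 = "...";                                   # placeholder hash
      };
      buildInputs = [ zlib ];
    }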


Others have hinted at this, but functions are fundamental to Guix and Nix. They are "functional package managers", whose packages are written in "functional programming languages". In particular, it's rare to directly define a package; they're almost always defined using functions, which take dependencies as arguments. This allows reuse, overriding, modularity, etc. Another killer feature of these systems is their extensive libraries, which provide all sorts of functions to make packaging new stuff easier.

Here's a Nix example (I'm less familiar with Guix) which defines a Haskell package, where the definition is fetched from hackage.haskell.org, automatically converted to a Nix definition (using `cabal2nix` behind the scenes), with its test suite disabled and profiling symbols compiled in:

    with import <nixpkgs> {};  # The Nix standard library
    with haskell.lib;          # Haskell-specific Nix functions
    with haskellPackages;      # Haskell packages, for use as dependencies
    enableLibraryProfiling (dontCheck (callHackage "vector" "0.9" {}))
I'd hate to think how to define such packages if Nix didn't have the ability to write functions like `enableLibraryProfiling`, `dontCheck` and `callHackage`.

None of the formats you mention (JSON, YAML, etc.) support functions, either defining them or calling them. Adding functions to such formats would be problematic in two ways: figuring out how to make them work (representations, syntax, implementation, etc.) would be a challenge in itself, especially if we don't want to break existing tools for those formats; and actually using them presents another problem, since functions are used so extensively that it would be pretty much like learning a new language anyway, which defeats the main reason for going down this route.

Note that Nix and Guix can offload some configuration to those more limited formats if desired, e.g. by using things like `readFile` and `fromJSON` in Nix. Personally, whenever I've done this I've inevitably wanted something more powerful, and ended up using Nix to generate the JSON; hence I don't bother these days!
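
The same escape hatch exists on the Scheme side. A rough sketch using the third-party guile-json library (an assumption on my part: its exact API has varied across versions, so treat the names below as illustrative):

    (use-modules (json))  ; third-party guile-json module, assumed installed

    ;; Offload part of the configuration to a plain JSON file...
    (define settings
      (call-with-input-file "settings.json" json->scm))

    ;; ...or generate the JSON from Scheme data instead, keeping one language:
    (scm->json-string '(("host" . "localhost") ("port" . 8080)))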


Of course they can. Check out Ansible. It's quite comparable to these systems, even if less pure and lacking some of the features.


> It's quite comparable to these systems, even if less pure and lacking some of the features.

So it's not really comparable then.


I'd really like to see Shepherd used outside of GuixSD. In theory, since Guix can be installed on top of other distros, this could be leveraged to make swapping in Shepherd easier. Of course, systemd is much more complex, so replacing the init in a systemd-based distro would probably be trickier, but it should be more manageable in places like Void Linux.
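
For reference, a user-level Shepherd service is itself just Scheme. A minimal sketch of a ~/.config/shepherd/init.scm, in the GOOPS style of the Shepherd manual (the mydaemon binary is hypothetical):

    ;; Wrap a (hypothetical) daemon as a Shepherd service.
    (define mydaemon
      (make <service>
        #:provides '(mydaemon)
        #:requires '()
        #:start (make-forkexec-constructor '("/usr/local/bin/mydaemon"))
        #:stop  (make-kill-destructor)))

    (register-services mydaemon)

You'd then start it with something like `herd start mydaemon` (the exact invocation may vary by Shepherd version).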


Shepherd is quite scary outside of Guix. It basically runs package-supplied code in PID 1, with no protection. A badly written init script will halt startup completely and render the system unbootable.

Guix (I believe) has this "choose previous version from GRUB" dialog, so you can recover from a bad init script by rolling back the whole thing. Any other OS will require manual fixes, and that is going to be quite a pain.


Ah, thanks for the additional information. That's too bad though, I like the model of a Scheme-based init. I suppose I'll stick with runit for now then.


I tried NixOS a few years ago when they talked about better UX for the CLI. Anyone know if this happened?


It is slowly happening. There's a new `nix` command (a.k.a. Nix 2.0), which has a better CLI, though it's not yet 100% complete, I believe. But it's already useful (e.g. `nix search`).

https://nixos.org/nix/manual/#ssec-relnotes-2.0


Nice.

Does Guix have a better CLI?


Compared to Nix 1.x (not sure about 2.x), I think it's simpler to use (Guix co-maintainer here, and before that a Nix contributor). This is the reference for 'guix package' (analogous to 'nix-env'):

https://gnu.org/s/guix/manual/en/html_node/Invoking-guix-pac...

Because it's all Scheme, Guix has well defined APIs for all sorts of things. For instance, there's an API for search paths (environment variables you need to define when you install packages), one to create application bundles, one to create VMs containing GuixSD, etc.

That, in turn, leads to clean CLIs in my opinion: 'guix package --search-paths', 'guix pack', 'guix system vm', etc.


The Guix CLI is quite different, and I've heard people compare it favorably to Nix's a few times. I haven't used Nix enough to compare them myself, though.


Guix seems interesting, but I almost stopped reading when I saw "Nix". For all the benefits the Nix ecosystem provides, it hasn't crossed the usability gap for me. First there was the confusing naming: Nix the language vs. NixOS vs. Nix the package manager. Then there's the whole promise of it being usable from multiple platforms ("you don't even have to use NixOS, just use the package manager for awesome builds") -- it was supposedly amazing for building some software packages, but the reality was always so sticky and never quite panned out.

To be fair, I have seen/heard many people praise it for delivering stability to the servers they manage, but in this day and age I'm not sure it's even worth learning something new to manage servers when the cattle approach is so much better (just shoot it). Even when stateful applications get involved, distributed storage is springing up to make it easier than ever to take down a server but still have the data it held be accessible, granted it was replicated, and sync if/when the machine ever comes back up. If the world evolves in that direction, it seems like people will only care about packages at VM/container/whatever base-layer build time.

Don't want to be the downer here, but I don't think it's likely that Guix is going to cross the usability chasm. I've invested a lot of time/interest/effort in projects that were better but never crossed the usability/mindshare chasm, and while I hope I'm wrong, this feels like some of the other ones.

Completely unrelated to that though, this quote rang true to me:

> Guix is a fork of Nix and addresses the main issue that Nix didn’t get right: instead of coming up with a homebrewed domain-specific language (DSL) with its own set of limitations and idiosyncrasies, Guix uses a full-fledged programming language. (And a good one at that, since it’s Guile Scheme, a Lisp-based language.)

I think HashiCorp has this problem with HCL (see the recent updates[0] announced at the last HashiConf; the `for` keyword is now in HCL) -- though I'm not recommending they fix the problem with lisp, people need to stop making DSLs and then bridging the gap between their DSL and a full programming language. I much prefer Pulumi's approach[1] -- though it leaves devs lots of rope to hang themselves with ("recursive infrastructure-building functions... why not?").

[0]: https://www.hashicorp.com/resources/keynote-terraform-free-s...

[1]: https://pulumi.io/reference/how.html


NixOS fangirl here... you're totally right about usability!

Git was self-hosting from day 3, but consisted only of a few pieces of exposed plumbing. As more porcelain was bolted on, it got to the point where flashy UIs could exist. Porcelain isn't all trivial either - consider "git rebase"!

NixOS today is like early Git.. usable and powerful, but not user-friendly. It needs a good UI, a graphical installer, and high-level porcelain to hide the plumbing unless you need to drop into it.

As for the cattle question, I think Nix's real power will come from efficient, declarative "Dockerfiles" and serving as a Bazel-like build system. There was a project for this, Hocker, but it seems inactive.


> NixOS today is like early Git.. usable and powerful, but not user-friendly. It needs a good UI, a graphical installer, and high-level porcelain to hide the plumbing unless you need to drop into it.

I'm not sure even that's enough. Nix is trying to win the lottery three times -- pushing an (essentially universal) package manager, a programming language, and a Linux distribution into the mainstream at the same time. What you noted might solve package management, but I'm convinced no one actually knows how to make programming languages/OSes popular enough to hit and stay in the mainstream, outside of lots of money for marketing or serendipity.

All this said... Nix (OS + package manager) is definitely on my list of things to take for a spin -- I watch and read stuff about it but haven't taken the plunge just yet. It's very popular in the Haskell Ecosystem [0].

[0]: http://www.tpflug.me/2019/01/14/haskell-nix-vim/


Interestingly, historically, at first there was only Nix the package manager + language. The NixOS distribution came to be purely as a subsequent experiment, a bright idea of a single other person (AFAIK), kinda "what if?..." It then became a runaway success (relatively speaking -- not mainstream yet), with more and more people realizing the potential and the benefits. Moreover, the original Nix was itself a one-person experiment, a PhD thesis by Eelco Dolstra.

I believe there's not that much "pushing" of anything; rather, the ideas are just inherently awesome breakthroughs, and many people quickly realize this once exposed to them :)


NixOS seems in some way to have more momentum for curious hackers than any other GNU/Linux distro; people who get into it really get into it. I myself consciously try not to be evangelical about it, still many of my friends and acquaintances end up installing it because it’s just so damn cool.


Lightweight NixOS fanboy here. I'm really rooting for both Nix and Guix to get more attention. They suffer from usability problems because of their lack of attention so far. But the power they grant to make major system changes with low risk is so damn good... It seems a tragedy that market inertia is putting billions into portable snapshots while ignoring provably bit-accurate, experimentable configuration.


> There was a project for this, Hocker, but it seems inactive.

Nixpkgs' dockerTools.buildLayeredImage can build Docker images from your Nix packages.

There's also pullImage and buildImage which combine to let you use regular Docker base images, but frankly that seems pretty pointless.


These are all good points. To be honest, I'm skeptical about the future of NixOS: it's wonderful as a user distribution, but for production servers it's a hard sell to migrate off a stable, enterprise-grade OS like CentOS, Debian, or openSUSE. And as you say there's a long-term trend toward cattle servers and lightweight throwaway containers, which do not need features like atomic rollback and deterministic versioning, because we can simply snapshot the image and do a blue-green deployment with easy rollback at the orchestration layer.

That said, Nix (the package manager) is incredibly powerful, because it can build containers more quickly than Dockerfiles [1] or other specifications -- and it can do so in an entirely reproducible manner, with efficient caches for CI servers and developers. This becomes particularly powerful if your team works in a compiled language with long build times, like C++, or if your team works with a variety of languages. Especially in scientific computing, it's not uncommon to have a lot of Python {2,3} with C/C++ dependencies, and perhaps a cmdline utility in another language or two. At this point something like 'pip install' no longer really cuts it, and Nix can step in to produce app images and development/CI environments in a snap. These images can then be loaded into your favorite container runtime with the confidence that you could rebuild the entire container with a 1-line patch and actually have the same contents everywhere, and the new 'nix path-info' and 'nix log' commands can give you deep introspection into the binaries: showing the full transitive closure of build-time or runtime dependencies, their sizes on disk (important for containers and cloud deployments!), their configure/compilation arguments and output, and so on.

[1]: https://grahamc.com/blog/nix-and-layered-docker-images


To provide some contrasting experience: I'm not a sysadmin nor a coder, but when I had to launch a local server with multitude of services running, including libvirt with a couple of VMs, I had to consider two possibilities:

1. Learn all the required tools and their requirements, the syntax of their respective configs, and read a ton of documentation just to install all of them.

2. Learn the basics of Nix and spin up a server with a centralized configuration in a unified syntax, which is also resistant to my sloppiness (the "I don't know what I'm doing" situation).

No need to specify which path was taken. It wasn't easy, but no regrets yet. I would love to switch to Guix since Guile seems much saner (or at least easier to quickly understand) for a non-CS guy like myself, but the support and package availability are nowhere near Nix's, unfortunately.


Have you written about this anywhere? I'd love to read about what you went through.

As a side note, how is it possible that you were neither a sysadmin nor a coder, yet were in a position to manage a server with VMs managed by libvirt? And how did you know Nix existed, or that you wanted to manage the machines with a central configuration server/repository to start with?


No, I haven't, although I've been considering it for a while. Thanks for your interest.

There's no fleet of machines nor a central server/repository, just a single machine with a local config.

In a nutshell, I work at a very lean firm as a design lead, but since I'm fairly proficient with tech and there's no IT department, I'm in a position to make some decisions. The situation arose when it was necessary to launch an always-on service which is strictly Windows, with RDP access from the local network. An outside specialist was brought in to set it up. However, since the service is very light on resources, I suggested launching it inside a VM and using the Linux host for other needs, such as back-ups, file-sharing, etc. I had been interested in NixOS for a while (I just try to stay aware of promising tech innovations) and it looked like a good platform for the problem at hand.

The interesting bit is that I had a bit of an edge case with libvirt, since I wanted to cut the Windows guest off from the outside world, leaving only RDP and SMB access. The default forwarding options in libvirt couldn't provide that. Someone helpful on IRC mentioned there's some network bridge configuration that's not fully described (or at least not clearly enough) in the docs, so I had to edit the network part of the VM's XML and write a bunch of NixOS firewall rules (and rewrite them more than a few times). This sounds easy, but it was a bit outside my skill set and I had to sweat over it for a while; the satisfaction was absolutely worth it, though.

All of this felt like a personal achievement for me, but it went invisible to the management, which is both a good and a bad thing.

TLDR: A lot of complications for no particular reason except self-education and self-amusement with no monetary reward whatsoever.


Thanks for sharing so much of your experience!

> The interesting bit is that I had a bit of an edge case with libvirt, since I wanted to cut the Windows guest off from the outside world, leaving only RDP and SMB access. The default forwarding options in libvirt couldn't provide that. Someone helpful on IRC mentioned there's some network bridge configuration that's not fully described (or at least not clearly enough) in the docs, so I had to edit the network part of the VM's XML and write a bunch of NixOS firewall rules (and rewrite them more than a few times). This sounds easy, but it was a bit outside my skill set and I had to sweat over it for a while; the satisfaction was absolutely worth it, though.

Yeah, that doesn't sound easy -- "someone helpful on IRC" and "network bridge configuration that's not fully described" convey the difficulty quite accurately for me. Sysadmin life is death by a thousand cuts with stuff like that, which is why everyone grows a gray beard so quickly.

I bet someone out there has already run into this and gave up, because they didn't find that helpful person on IRC and how to solve it wasn't written down anywhere else.

> All of this felt like an achievement for me personally, but went invisible for the management, which is both a good and a bad thing.

Uhhhhhh yuuuuup? I don't have an MBA, but I'm fairly sure you should get them to compensate you more, or make you CTO or at least Director of Technology or some better title if resources are constrained. Of course, that might come with being the go-to person for more of these sorts of issues, but if you don't mind, and you want a chance to build tech with real stakes, then it seems fair.

> TLDR: A lot of complications for no particular reason except self-education and self-amusement with no monetary reward whatsoever.

It might be a little late now, but when things like this come up, you need to go out and get that monetary/other reward! I don't know what the outside specialist was going to charge, but you essentially did their job... If there's no IT department, then it should be pretty easy to just make one and be the head of it :)


Thank you for your encouragement. Two things to consider:

1. It's hard to have an IT department when the whole firm is about 10 people.

2. I'm afraid it wouldn't be as much fun if all I did the whole time was something like this.

This is a plot that has hindered my career my whole life: I'm too interested in too many things to fully commit to any one of them. And the field I feel enough passion for (music/studio work) has no money to compete with the other jobs I can do. I really wish there were a position that exercised more of my wide but not-excitingly-deep skill set.

In regards to writing it down: I have a residual sense of guilt for not expanding the libvirt wiki right after I was finished, but, honestly, I was low on energy and had a bunch of my regular work built up. And, as usual with memory, I'm already not clear enough on the details to write a coherent guide.


If you don't mind, I'll push a little more.

> 1. It's hard to have an IT department when it's about 10 people in a whole firm.

I think that's both a blessing and a curse. If it were a 10,000-person firm it might be a really big step, but right now it's a small step! Nothing wrong with getting into a role early and growing with it (and the company).

> 2. I'm afraid it wouldn't be as much fun if all I did the whole time was something like this.

That's true, and there are very real downsides like burnout, and there's no problem with staying where you are if you're happy and getting what you want out of your job. But if what you're doing now isn't your dream job, and you don't absolutely hate the upper tier of the company, why not do a job you like slightly less for much-improved future career prospects?

> This is a plot that hinders my career my whole life: too interested in too many things to fully commit to one of them. And the field for which I feel enough passion (music/studio work) has no money to compete with other jobs I can do. I really wish there was some position to exercise more of my wide but not-excitingly-deep skill set.

Yeah, that's difficult -- imagine how much time you'd have to be a renaissance man/woman with the money/freedom a good C-level gig might afford. Of course responsibility would also increase, but it might be possible to keep the same usual 9-5 schedule, which might not be too different from what you're doing now, except with a bit more stress at work. And maybe in a year or two you could take a whole year off and just do music/studio things, or fund a "startup" that's really just you having fun in the music/studio space, maybe finding a way to make it your living (if that's what you want).

Definitely don't decide your future off of a random HN comment... But also maybe it's worth some thought.

> In regards to writing it down: I have a residual sense of guilt for not expanding the libvirt wiki right after I was finished, but, honestly, I was low on energy and had a bunch of my regular work built up. And, as usual with memory, I'm already not clear enough on the details to write a coherent guide.

Well, if you've got a list of TODOs, why not write it down? Even if you share something as vague as what you wrote here, I'm sure it'd give someone out there enough of a clue to move forward.


Thanks, this is a lot to ponder, and I actually agree with you on much of it. Your feedback is much appreciated and really made me feel better!

I actually have an almost-empty blog, but having ideas and writing them down coherently (especially in a non-native language) are two different things. I should try to write more.

Thank you again.


I think the throw-away-server approach you mention cannot address enough issues or fulfill enough needs; in particular, I'm thinking of reproducible builds: https://reproducible-builds.org/.

Recent years have shown how badly we need to trust our software, and Nix/Guix are two prime examples of projects trying to tackle this issue.

"Just shoot it" servers lack trust and reliability (assuming you are not shooting a reproducible distro of course :p).


Yeah, that's definitely true -- throwing your server away is no good if the machine image/setup flow you're using is insecure. Nix/Guix + the long list of distros working on it are all going in the right direction to fix this problem -- but I find it hard to believe that people will all switch to Nix/Guix instead of just waiting for Debian's reproducible builds (or some other vendor-provided option).

Above the VM layer (at the sandboxed-process, i.e. container, layer), TUF[0] + Notary[1] + compliant image registries + attestation tools are helping to solve this problem, which is nice.

[0]: https://theupdateframework.github.io/

[1]: https://github.com/theupdateframework/notary


Indeed, many distros tackle the reproducibility issue. But for the better part (and because of history), it can only be an afterthought: extra layers that try to fix the existing system.

The fundamental difference with Nix and Guix is that it's "reproducibility by design". A much more sustainable approach in my opinion.


> Then there's the whole promise of it being usable from multiple platforms ("you don't even have to use NixOS, just use the package manager for awesome builds") -- It was supposedly amazing for building some software packages, but the reality was always so sticky and never quite panned out.

The vast majority of Guix users are on other (so-called "foreign") distributions (i.e. not GuixSD) -- I think the opposite is true for Nix/NixOS.

I'm not sure what's "sticky" about it, can you elaborate?


I meant "sticky" in that when I tried to evaluate NixOS vs CoreOS Container Linux + Ignition (this was a while ago), nix was much harder to install and use than suggested. I ended up going with Container Linux for that very reason and didn't look back.

There was also an issue with Nix and Docker compatibility that I ran into w/ nix-docker[0], but I don't remember what it was.

Here are some videos I found in my history back when I was exploring Nix (watched in end of 2017/early 2018 I believe):

https://www.youtube.com/watch?v=YbUPdv03ciI

https://www.youtube.com/watch?v=mIxtBVKo7JE

It's entirely possible my anecdata is too old to be useful. Maybe Nix is much easier to use (and use correctly) these days. One example of where Nix should have been able to pick up mindshare is with the same people who value linuxkit[1]. Maybe this is another marketing/branding/hype/money disparity thing, but `nix-build '<something>' -A vm` looks like exactly what linuxkit is being lauded/promoted as a good tool for, and it's been around for so much longer.

[0]: https://github.com/zefhemel/nix-docker

[1]: https://github.com/linuxkit/linuxkit


I can't speak for Nix, but getting Guix up and running on your favourite distribution can be achieved with this two-liner:

  wget https://git.savannah.gnu.org/cgit/guix.git/plain/etc/guix-install.sh
  sudo bash guix-install.sh


> Even if the devs don't need Turing-completeness, not adding it to a project is effectively limitating what the user can do with a program.

But limiting what people can do with build systems is generally a good thing.


Why?

Also one core feature of Guix is that it cannot break: you can always roll back.


Because then tools can reason about the build system. Also it limits the crazy things people try to do.


I don't get it. Yeah, Guix is good. But AFAIK GuixSD has no support for a lot of modern hardware due to its policy toward firmware blobs. So how can an OS be advanced without good hardware support?


Looks to me like Guix is Nix with a Scheme-based language (Guile) instead of the Nix-specific DSL.


I do really wish that Guix used Lisp rather than Scheme.


You mean Common Lisp? Because Scheme is a Lisp.

Guix is from GNU, and Scheme is their official extension language (with Guile as the implementation).


I do mean Common Lisp. I think that it'd be preferable for Guix to use it because Lisp is more production-ready than Scheme: it standardises things that Scheme — even R7RS-large — doesn't; it's a pragmatic production language into which a lot of experience went. And of course there's CLOS, which is absolutely awesome. Even better, a well-written Guix-in-Lisp would be portable between multiple Common Lisp implementations; it's extraordinarily difficult to write a real system in Scheme, precisely because Scheme is so under-specified: really, Guix isn't written in Scheme; it's written in Guile Scheme.

Scheme is neat, and pretty, and very elegant, but it's not (even in its largest, most recent version) really meant for serious use on large systems. And I don't agree that it's a Lisp: it has only a single namespace, it breaks (car nil) → nil and (if nil "true" "false") → "false".
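
To make that concrete, here is roughly how Guile Scheme treats those two expressions (in Common Lisp they evaluate to NIL and "false" respectively):

    (if '() "true" "false")  ; => "true"   -- the empty list is not false
    (car '())                ; => error: wrong type argument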

I'm aware that rms dislikes Lisp and wants Scheme to be GNU's extension language; IMHO he's wrong, and his attitude has really held back GNU, Emacs and the Lisp world at large. Had it not been for that attitude, we might have a Common Lisp Emacs by now, we might have a Common Lisp GNU userland by now, we might have a GNU which really is Not Unix, instead of a mess of C, security bugs & hacks.


Note that McCarthy's Lisp didn't have (car nil); it was an error. I'm reluctant to insist that a language in the Lisp family must have (car nil), since that would mean that Lisp 1 and Lisp 1.5 aren't Lisp!

See this text in Steele and Gabriel's The Evolution of Lisp:

In the end only a trivial exchange of features resulted from "the great MacLisp/Interlisp summit": MacLisp adopted from Interlisp the behavior (CAR NIL) -> NIL and (CDR NIL) -> NIL, and Interlisp adopted the concept of a read table.

However, whether or not (car nil) works, it should express "the first element of the empty list, which is also false", rather than "the first element of a symbol that isn't related to lists and isn't Boolean false".


It's a tedious fruitless argument that goes back years. Some people in the "Common Lisp camp" believe that Common Lisp has the "rights" to the 'Lisp' "trademark."


As someone who programs in Clojure--and therefore has no justification for weighing in--I _believe_ that it's a cultural or generational difference. That is, for certain programmers (working in the 1980s? 1990s?) "Lisp" (capital-L, lower-case isp) meant Common Lisp, and many agreed.

I am sure that people tediously, fruitlessly argue about it. But it's also true that for a wide swath of programmers, Lisp is Common Lisp.

I wonder if hn's lisper or lispm could weigh in...


For a wide swath of programmers, a Lisp is something with lists made of mutable cons cells, terminated by a nil symbol which is both false and the empty list. I don't think you will encounter too many Common Lisp programmers who don't think that Emacs Lisp is a "Lisp".

Scheme doesn't use "Lisp" in its name. It has its own Usenet newsgroup; Scheme programming is rarely discussed in comp.lang.lisp. Likewise it has its own subreddit; r/lisp isn't used much for discussing Scheme.

"I would rather this were implemented in Scheme rather than Lisp" (or vice versa) is a perfectly understandable statement and sentiment that is not simply about semantic labels.


And now Racket, which used to be a Scheme, doesn't call itself a Scheme anymore.

In fact it describes itself on its web site as "The best of Scheme and Lisp."


That is correct; you can't be Scheme if you just have the best of it, and not the rest of it.

E.g. Racket doesn't have set-car! and set-cdr! so RnRS-conforming Scheme programs which rely on these won't work.



That's nice, but in Scheme, all pairs constructed by any code anywhere are mutable, and structure made out of mutable pairs can be passed into any library function. An object notation like (a b c) read from a stream gives you a mutable object.

It's a significant language difference that can't be papered over with a data type and handful of functions.


That's a different concern.

Racket supports both immutable and mutable cons cells. Convenient or not - you decide.

Oh, been watching too much YouTube.

In both Racket and Scheme it is usually a good idea to program in a functional style - i.e. not to mutate pairs.

The compiler and runtime system can in some situations handle immutable pairs more efficiently than mutable pairs. So if mutable pairs are seldom used, it makes sense to make immutable pairs the default.

The rationale why Racket made the shift is described here:

http://blog.racket-lang.org/2007/11/getting-rid-of-set-car-a...

In short: the Racket community sees the immutable default as a plus.


On the other hand some weirdos try to claim that Python is “a Lisp”...


Arguments about labels are always tedious and fruitless. They don't advance anybody's understanding of anything important, like the substance of whatever those labels might apply to. Arguing about whether or not "Lisp" means "Common Lisp" doesn't advance anybody's understanding of Common Lisp.


The sentence at the root of this subthread, namely "I do really wish that Guix used Lisp rather than Scheme" isn't an argument about labels.


No, but the terminology of the root was plainly coming from the camp of "Scheme isn't Lisp, Common Lisp is Lisp." The child comment was confused by that peculiar terminology, so I provided context for that terminology in a way that I hoped would head off yet another argument about labels. It was my intent to stop the argument before it began.


The terminology is troublesome because "Scheme" more specifically determines the language than "Lisp", so "I'd prefer to use Lisp over Scheme" or vice versa is somewhat "apples versus vegetables".


> Arguing about whether or not "Lisp" means "Common Lisp" doesn't advance anybody's understanding of Common Lisp.

But not expecting Common Lisp to behave like Scheme helps. The design of the Scheme language is simply different from that of Common Lisp, and thus code looks and behaves differently.


I agree with everything you said. The argument is tedious and fruitless. My point is an empirical one: Programmers of a Certain Age were more likely to use "Lisp" than programmers of other ages.


What do those campers prefer for what others would have called "Lisp"?


In my experience they often prefer that scheme/etc ceased to exist at all. I've had conversations with CL fanatics who accuse Scheme/etc of diluting Lisp by merely existing, and thereby harming CL.

Others aren't so extreme and just want people to stop saying "Scheme is a type of Lisp."


I don't mind that Scheme exists, but I do mind that enthusiasts naively use it to build production systems, because they end up wasting time & energy re-implementing things the Common Lisp standard already specifies rather than spending that time & energy on the problems that they're actually trying to solve.

Scheme is another Blub: a Scheme user looks at Common Lisp and thinks, 'hey, I don't need generic functions; I don't need packages; I don't want separate namespaces for separate things; I don't want to extend my reader.' Meanwhile someone who's built large systems in both Lisp & Scheme realises how glad he is to have those things.

Meanwhile, the ecosystems of non-Lisp-family languages grow ever-larger, and hence ever-more-appealing, which is a problem because while Scheme-the-language is better than Python-the-language or Go-the-language, Scheme-the-fractured-ecosystem is worse than Python-the-healthy-ecosystem or Go-the-astoundingly-robust-ecosystem, which means more people use Python & Go, which means fewer people use Lisp-family languages … it's a vicious cycle.

Which means the world is a worse place, stuck in a local maximum of C-family languages and unable to break out to the Lispy global maximum.


> Scheme is another Blub: a Scheme user looks at Common Lisp and thinks, 'hey, I don't need generic functions; I don't need packages; I don't want separate namespaces for separate things; I don't want to extend my reader.' Meanwhile someone who's built large systems in both Lisp & Scheme realises how glad he is to have those things.

I find this list a little funny, because we do have generic functions, packages, and the ability to extend the reader. In Guix we use reader extensions for G-expressions, for example.
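
For instance, Guile's GOOPS gives us generic functions out of the box (a tiny sketch; the describe methods are invented for illustration):

    (use-modules (oop goops))

    ;; A generic function with methods dispatched on the argument's class.
    (define-generic describe)
    (define-method (describe (n <number>)) (format #t "a number: ~a~%" n))
    (define-method (describe (s <string>)) (format #t "a string: ~s~%" s))

    (describe 42)      ; prints: a number: 42
    (describe "guix")  ; prints: a string: "guix"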

I grant you the separate namespaces thing, because, well, we really don't want to have a separate namespace for variables and procedures :)


> I find this list a little funny, because we do have generic functions, packages, and the ability to extend the reader.

But those are all non-standard, right? At least, I quickly skimmed through R7RS & don't see them in there. So they're either Guile-specific extensions, or your project implemented them, or they are a third-party library which has not undergone the same scrutiny as the standard itself.

FWIW, I find that having many namespaces (not just two!) rather than one is conducive to good program design.


"I understand and do want all these things, but don't need them coming from an ANSI/ISO standard" is substantially different from the Blub Paradox.


> In my experience they often prefer that scheme/etc ceased to exist at all.

Alright. Well that doesn't really work when choosing words and definitions.



