The NixOS way of doing things (a layer on top of plain config files) is inevitable once configuration management is taken seriously: one needs to be able to merge configuration values from multiple source files; for example, a generic role plus a more machine-specific one, working together to produce a single systemd unit or file in /etc. The modules in nixpkgs effectively make it possible to combine configuration values to produce most of the config files that Linux applications want.
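The merge semantics can be illustrated with a toy sketch. This is not the real Nix module system (which is written in the Nix language and has priorities, types, and `mkMerge`/`mkForce`); it only models the intuition that list-valued options concatenate, attribute sets merge recursively, and conflicting scalars are an error. All names below are made up.

```python
# Toy model of NixOS-style option merging (NOT the actual module system):
# lists concatenate, nested dicts merge recursively, equal scalars pass
# through, and conflicting scalars raise an error.
def merge(a, b):
    if isinstance(a, dict) and isinstance(b, dict):
        out = dict(a)
        for k, v in b.items():
            out[k] = merge(out[k], v) if k in out else v
        return out
    if isinstance(a, list) and isinstance(b, list):
        return a + b
    if a == b:
        return a
    raise ValueError(f"conflicting definitions: {a!r} vs {b!r}")

# A generic role and a machine-specific role contribute to one config.
generic_role = {"services": {"nginx": {"enable": True}},
                "packages": ["git", "vim"]}
machine_role = {"services": {"nginx": {"virtualHosts": ["example.org"]}},
                "packages": ["postgresql"]}

print(merge(generic_role, machine_role))
```

In the real module system the merged option values are then rendered into the final systemd units and /etc files by the modules in nixpkgs.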
If one weren't using NixOS for this, one would do it with one's own templates and configuration management, and probably without the huge advantage of being able to rebuild a whole immutable system with no leftovers from prior configurations.
Doing programmatic configuration with a traditional distribution isn't difficult. I've got a Python script that starts with a tree of config files and a top-level file that says which files to consider per machine. Each config file is run through Jinja2. Some templating even pulls data from a LibreOffice spreadsheet [0]. By default the script only pushes files that haven't been modified on the remote machine, so if you do deviate and edit a config file on a machine, you can diff it to merge those changes back into the centralized config before overwriting.
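The core of that approach can be sketched in a few lines. To keep it dependency-free this uses the stdlib's `string.Template` in place of Jinja2 (which would add loops and conditionals); the machine table, option names, and template content are all hypothetical.

```python
# Minimal sketch of per-machine config templating, assuming a made-up
# machine table. string.Template stands in for Jinja2 here.
from string import Template

# Per-machine variables; in the setup described above these could come
# from a top-level manifest or even a spreadsheet.
machines = {
    "web1": {"hostname": "web1", "listen_port": "8080"},
    "db1":  {"hostname": "db1",  "listen_port": "5432"},
}

# One config file as a template (Jinja2 syntax would differ slightly).
nginx_tmpl = Template("server_name $hostname;\nlisten $listen_port;\n")

def render_for(machine: str) -> str:
    """Render the template with one machine's variables."""
    return nginx_tmpl.substitute(machines[machine])

print(render_for("web1"))
```

The push step would then compare the rendered output against what is currently on the remote host and only overwrite files that haven't drifted.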
Having said that, I'm moving towards NixOS for reasons detailed in another comment.
[0] It's ugly but it works. I'd love to find a lightweight curses-based data structure editor.
Where is the evidence for this? Where is the evidence that you can and should mix config management all the way down the OS stack? How many containerization concepts do we need? Docker, LXC, VMs, and now NixOS? If it were a legitimate abstraction layer, then wouldn't it have caused fewer problems in implementation? And wouldn't it have seemed more intuitive to Unix experts? Yes... Reinventing the wheel again. I'm open to a nicer restructuring of the Linux filesystem, but this is really reinventing the wheel, trying to polish over ugly parts that are ugly for a reason. Keep useful abstractions separate!
One problem that NixOS solves is being able to use different versions of tools on the same host easily (1) and in a reproducible way (2). Docker containers can be used to solve (1), but they are overkill for quite a few use cases. If you are a developer who only wants to use different compilers, Docker containers are not really ideal, because they also enforce process isolation. There is also a popular belief that Docker containers solve (2), but that is not entirely true. Yes, you can run a Docker image in a reproducible way on different machines, but you cannot necessarily reproduce the Docker image itself, at least not with the standard Docker tools. You can use Nix to create Docker images in a reproducible way.
> If it were a legitimate abstraction layer, then wouldn't it have caused fewer problems in implementation? And wouldn't it have seemed more intuitive to Unix experts? Yes... Reinventing the wheel again.
The basic concept of Nix is (a) use the `--prefix` argument of ./configure scripts to keep things apart, and (b) use the `PATH` env var to choose what we want to run.
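That idea can be demonstrated without Nix at all: give each package its own prefix, then let `PATH` ordering decide which version wins. The sketch below simulates two "store paths" in a temp directory; the names and layout are illustrative only, not the real /nix/store scheme.

```python
# Toy demonstration of prefix isolation + PATH selection, the core idea
# behind Nix. Each "package" gets its own prefix directory; choosing a
# version is just choosing a PATH ordering. Names are illustrative.
import os
import shutil
import stat
import tempfile

store = tempfile.mkdtemp()

def install(name: str, version: str) -> str:
    """Create <store>/<name>-<version>/bin/<name> (a stub that echoes its version)."""
    bindir = os.path.join(store, f"{name}-{version}", "bin")
    os.makedirs(bindir)
    exe = os.path.join(bindir, name)
    with open(exe, "w") as f:
        f.write(f"#!/bin/sh\necho {version}\n")
    os.chmod(exe, os.stat(exe).st_mode | stat.S_IEXEC)
    return bindir

gcc12 = install("gcc", "12")
gcc13 = install("gcc", "13")

# Both versions coexist; PATH order selects which one `gcc` resolves to.
path_v12 = os.pathsep.join([gcc12, gcc13])
print(shutil.which("gcc", path=path_v12))  # resolves inside the gcc-12 prefix
```

Nix automates exactly this bookkeeping: hashed per-package prefixes plus generated `PATH`-style environments, with no kernel support required.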
In comparison, containers are much more recent, require more invasive changes (i.e. support from the kernel), etc.
Personally, I like using containers to run binaries. Putting a whole Linux/Busybox installation inside one seems to defeat the point though...