For people -- admittedly, like me -- who have a strong knee-jerk reaction to GoboLinux's design, the twenty-year-old "I am not clueless"¹ document contains a fair amount of interesting background and reasoning on the concepts. I don't think I've overcome the reaction entirely, but it isn't so strong anymore ;)
The writing has some of the same feel as the writing for HTMX or Tailwind.
Paraphrasing: "Yes we know this is different. Yes it is very simple. You may not be used to it, but it is easier to understand and work with. You don't have to use this. We like this and we are happy."
I can't help but wonder how much of the kneejerk reaction is due to "cosmetic" parts of the design rather than the functional parts. For example, I notice that a lot of my initial reaction is based on the capitalization; "Programs" with a capital "P" (probably unfairly) evokes an emotional response due to reminding me of the Windows "Program Files" naming, and (perhaps slightly more fairly) I'd probably find it mildly infuriating to type "LibX11" rather than "libx11". Even though Linux filesystems are generally case sensitive, I imagine that package names would still be unique across casing, and it seems pretty likely that a Linux distribution focused on making the filesystem hierarchy more user-friendly wouldn't end up putting a second directory on the root that differs only by case. As silly as it is to verbalize, I genuinely think my initial reaction would be less strong if the naming convention examples were "/packages/libx11/1.6.9" and "/packages/gcc/9.2.0", and I don't think the benefits would be diminished at all by naming things like this.
I agree, but I'm happy to see that Nix, Guix, and the one I do the most work on, Spack, are starting to gain some traction, since they all do basically this. Making it work is not always trivial though, and making it work efficiently is harder. Only in the last few years have I felt like we're finally getting to the point where this is actually a maintainable model for the majority of software distribution. Here's hoping it gets the rest of the way!
They are, yes, but there is a very important core difference here that I think is not mentioned.
The big cost about Nix/Guix that puts people off is that it eliminates a human-readable filesystem. The traditional layout is gone or empty, replaced by a tonne of folders under `/nix/` called stuff like `240-572-9837wfgjh234098672-_bash` and you have to just... *trust* that your path and so on will work and stuff will just get found somehow.
You have to let go of navigating your own filesystem.
That's just too much for a lot of people. The filesystem layout is one of Unix's defining characteristics.
Whereas Gobo's goal is the opposite: yes, let us discard the traditional filesystem layout, but let's do it by making it more human readable instead.
You can work out where things are without having to be told. There's better isolation. It's like semantic versioning applied to the filesystem: a semantic filesystem layout, where folder names encapsulate versioning info and are more meaningful than the old 1970s reduce-typing-effort-at-all-costs approach.
Nix says ignore paths, ignore directories, you don't need them, we'll manage that for you.
Gobo says forget traditional paths, here are some better ones that you will find easier and more useful.
For me, that's an attractive proposition.
In real life, it seems that both were too much for most people.
My suggestion would be: why not merge them? Try to bring the advantages of Gobo -- readable, meaningful directory paths -- to a version of Nix.
Instead of a flat directory tree with hashes, a consistent algorithm that categorises apps and puts them in a tree:
/gonix/apps/gui/productivity/images/krita/5.1/
/gonix/apps/console/shell/bash/5.2/
/gonix/apps/programming/compilers/fpc/3.1/
/gonix/libraries/c/glibc/2.39
/gonix/apps/console/editors/vim/9.1
I am totally making these up, you understand, they're merely illustrative examples.
> Instead of a flat directory tree with hashes, a consistent algorithm that categorises apps and puts them in a tree:
> /gonix/apps/gui/productivity/images/krita/5.1/
> /gonix/apps/console/shell/bash/5.2/
> /gonix/apps/programming/compilers/fpc/3.1/
> /gonix/libraries/c/glibc/2.39
> /gonix/apps/console/editors/vim/9.1
This is a nice illustrative example that works well when the only difference between software packages is semantic versions. The reality is that there can be many variants of packages, and using something like a hash of package inputs is very practical. Take, for example, a library called GDAL, which has an insane number of configure flags. It is quite common for scientific software to be tuned, and package maintainers cannot supply a one-size-fits-all solution.
1. Put the human-readable part first because humans see that first.
2. Retain hierarchical layouts. For humans this makes things more manageable and accessible, while for programs, as long as it's computationally-parseable the impact is negligible.
3. Lean in to the hierarchy. If it's necessary to duplicate slightly different versions of dependencies for different programs, then put them inside that program's hierarchy. That's one of its reasons for existing.
4. Embed the hash anywhere a simple regex can find it, no problem; tab-completion can sort that out (see the sketch just after this list).
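A purely hypothetical sketch of what point 4 could look like (package names, versions, and hashes all made up):

    /packages/gdal/3.8.4-xq7r2m9k/     # one build variant
    /packages/gdal/3.8.4-p0c4ls8d/     # same version, different configure flags
    /packages/bash/5.2-ab12cd34/bin/bash

    # a glob (or a Tab press) gets you there without ever typing the hash
    ls -d /packages/gdal/3.8.4-*

The human-readable part does the navigating; the hash just disambiguates variants.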
All I am trying to suggest is that a Nix/Guix/successor/replacement type tool could focus more on ordinary joes who find it hard to remember the difference between `/bin` and `/usr/bin` as it is, and make stuff human-readable first and machine-readable second.
@tgamblin beat me to it in a sibling here, but this is exactly why I like spack’s way, it mostly gives you the best of both. Now I need to go finish that PR to make it usable as an actual distro…
The only one that doesn’t quite work out is 3. Dependencies are vendored like that when it’s a patched version that can only ever be used with one package, but in general it helps more to have the regular structure to let them be shared. Otherwise we would have to move them when a new dependent appears and some other kinda funky things. I have played with having each package get a “view” of links to all its deps in its prefix, but it’s a high cost in inodes for an only moderate increase in observability. Pretty easy to generate your own (there’s literally a command for it) so not sure if it’s worth it.
FWIW this is what Spack does, and it uses a store layout like Nix/Guix's. Here are some chunks of the Spack install tree path. All you really need is the hash, but we provide a bit more.
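For illustration (made-up hash, and the exact projection is configurable), a default Spack install prefix reads something like:

    opt/spack/linux-ubuntu22.04-x86_64/gcc-12.3.0/zlib-1.3.1-4pb7yvxq...

i.e. architecture, then compiler and its version, then name-version-hash, so the human-readable parts come before the hash.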
One reason `nix` uses `/nix/HASH` is that it results in shorter paths and avoids issues with long paths. We use sbang[2] to avoid the most common one -- that long shebangs don't work on some OSes (notably, most Linux distros).
>The big cost about Nix/Guix that puts people off is that it eliminates a human-readable filesystem. The traditional layout is gone or empty, replaced by a tonne of folders under `/nix/` called stuff like `240-572-9837wfgjh234098672-_bash` and you have to just... trust that your path and so on will work and stuff will just get found somehow.
Not really trust. Nix and Guix are not magic.
You can still use ldd to figure out what (now exact!) library anything is gonna use. Guix and Nix use the rpath feature.
And your profile directory in Guix is a normal directory called $HOME/.guix-profile . Currently, it has "bin" and "share" and "sbin" and "libexec" and "lib" and so on in there. That is what the end user uses. Entries in those do link to /gnu/store/xyz/bin/xyz (using regular symlinks), yes.
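To make that concrete, a hypothetical session might look roughly like this (hashes elided; versions and exact paths will differ on a real system):

    $ ls -l ~/.guix-profile/bin/bash
    ... .guix-profile/bin/bash -> /gnu/store/...-bash-5.1.16/bin/bash
    $ ldd ~/.guix-profile/bin/bash | grep libc.so
            libc.so.6 => /gnu/store/...-glibc-2.35/lib/libc.so.6 (0x00007f...)

The profile gives you the familiar bin/ and lib/ entry points, and ldd (via the RUNPATH entries baked into the binary) shows exactly which store item will be loaded.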
>Instead of a flat directory tree with hashes, a consistent algorithm that categorises apps and puts them in a tree:
One hash represents all the dependencies of that package, too.
One directory name in Guix's /gnu/store encodes both the direct sources and all the dependencies, all the way up to a bootstrap root (the latter being about 250 bytes of binary). If any of the dependencies of bash changes, the hash will change, too. That also means that you will have multiple /gnu/store/*-bash-5.1.16 directories in there on a normal system.
But in your proposed case, you'd have to introduce some magical things which figure out and STORE the dependencies somehow differently in the background. Guix doesn't do that. It stores that in the file system, the end (as in directly in the file system, not as in "the system database blob is in file /foo"). No magical extra store.
That said, it would be possible to only change the guix profile layout to be what you suggest, and then still have symlinks to /gnu/store/<hash>-xyz in there. Probably like 5 h of work, and end user programs most likely will work on the first try. Patches welcome.
See guix/profiles.scm for what creates the profile. %user-profile-directory is $HOME/.guix-profile . profile-derivation is the function you want to change.
You can also use guix on top of whatever other Linux distribution and try it out like that. The package manager (which makes the profiles) also supports containers--so you can safely try out whether your change works.
The reader comments I get are that the cost of a non-human-readable filesystem (or one that's only marginally readable, by ignoring the hashes and looking in the new single-level hierarchy for recognisable fragments of names) is just too high.
They don't have enough of a problem with how things are to choose a whole new method with this very high price.
(Which is the same reason Plan 9 flopped, as I have both written about and spoken about publicly at FOSDEM this month.)
People on the whole have no idea how this stuff works, and they just copy magic incantations from StackOverflow to get stuff to happen. If that doesn't work, then this OS is broken. The end.
He "fixed" it. Systemd now works in WSL2. All those guides for noobs now work. Everyone is happy.
In a world where tools like Flatpak and Snap are proliferating and it's driving deep divisions between Linux distros, if you think the average person struggling with Linux is going to use `ldd` to work out where the dependencies for something live, I'm afraid you are a deep guru who lives on a different plane of existence.
We now have widely-used packaging systems which simply embed an app's entire dependency tree into a package to avoid people having to work out the difference between `apt` and `rpm`. Thousands of terabytes of disk are being burned to make this stuff go away.
> I'm afraid you are a deep guru who lives on a different plane of existence.
I'll take that as an amusing tip of the cap, I hope that is how it was intended. I don't know if I consider myself a deep guru, but if something doesn't run when I expect it to, I do tend to reach for ldd pretty quickly.
Incidentally, lack of systemd in WSL2 was also a showstopper for NixOS, so before the Poettering hire and WSL2's systemd support, some NixOS hackers worked out a chroot solution to get systemd running under WSL2. :)
(I used to use it at work when I still ran Windows, even after the official systemd support, because Microsoft's implementation had some problems)
> People on the whole have no idea how this stuff works, and they just copy magic incantations from StackOverflow to get stuff to happen. If that doesn't work, then this OS is broken. The end.
I don't think small, weird Linux distros like NixOS or GuixSD or GoboLinux really have the resources to support careless users of that kind, or stand to gain much from them. Projects like that are seeking contributors more than they are seeking users at all!
Is this a tragic flaw? Or is it just a mundane fact about a project that can still thrive in its niche?
PS: Any idea why Microsoft 'needed' a special init system for WSL2 anyway? Full fat systemd systems boot in like 8 seconds anyway! WSL's imitators on macOS (e.g., Lima, Orbstack) work with regular Linux distros and just leave the init system intact.
> PS: Any idea why Microsoft 'needed' a special init system for WSL2 anyway? Full fat systemd systems boot in like 8 seconds anyway! WSL's imitators on macOS (e.g., Lima, Orbstack) work with regular Linux distros and just leave the init system intact.
My slightly educated guess is that WSL2 distros all share the same running kernel, the same network stack, and probably a couple of other things under the hood (something about binfmt?).
From the product perspective, booting another instance (say I have Ubuntu 22.04 and 18.04 both available) in ~ 1 second, can be seen, and even proved with some A/B testing, as a huge advantage.
PS C:\Users\coolcold> wsl -t 'Ubuntu-18.04'
The operation completed successfully.
PS C:\Users\coolcold> wsl -l -v
  NAME            STATE           VERSION
* Ubuntu-22.04    Running         2
  Ubuntu-18.04    Stopped         2
PS C:\Users\coolcold> netsh wlan show interfaces|wsl -d 'Ubuntu-18.04' -- fgrep Mbps
Receive rate (Mbps) : 390
Transmit rate (Mbps) : 390
The call to `wsl` is almost instant even for a stopped instance.
> From the product perspective, booting another instance (say I have Ubuntu 22.04 and 18.04 both available) in ~ 1 second, can be seen, and even proved with some A/B testing, as a huge advantage.
For me at least, they went to more trouble than it's worth for that ~6-7 second gain, given that a normal systemd distro will boot in 7 or 8 seconds anyway. Maybe the difference is bigger on spinning rust. But when I used WSL regularly I was in one VM all day every day. It was always running, so I really wasn't worried about the 10 seconds it took to start up or whatever.
At the same time, their custom init setup caused compatibility issues with my distro of choice and some applications (somehow). So it didn't come for free.
> I don't think small, weird Linux distros like NixOS or GuixSD or GoboLinux really have the resources to support careless users of that kind, or stand to gain much from them.
You're missing the point. It's not what the distros need from the people. It's what the people need from the distros.
Linux software packaging is junk. It's vastly over-complex, it's horribly fragile, there are multiple rival systems (apt, rpm/dnf, pacman, apk, etc. etc.) and none work with each other and none is a complete answer, and as a result, there are now 2nd level schemes on top of that (appimage, snap, flatpak, docker, etc.) and those reproduce the problems -- they are incomplete, incompatible, etc. -- and they reproduce it with a level of bloat that makes their packages 100x bigger and 100x slower to download and update.
Android just makes this work and has several _billion_ active daily users, who rarely have big issues. I've never heard of anyone "bricking" their phone installing an app. It's more reliable than Windows with 100x as many users.
That is what Linux needs, and it needs it yesterday, lest it become a weird way of packaging apps for Windows boxes. ("Oh, yeah, that? You need to install a wrapper thingie first, then it will work. Install this wsl thing and it'll run.")
We need a better answer. Nix is better in many important ways: it solves these problems, but it does so at a cost, and the sort of people who like Nix don't realise, or don't care, that the cost is terrifying for ordinary mortals.
Gobo survived 20y of neglect by being better in different ways. It makes the existing filesystem less cryptic.
Nix throws it away and tells you that you don't need it.
Better to say "hey, look, we give you a better filesystem" than to be that bearded mystic on a mountain saying "for true enlightenment, you do not need a filesystem".
> Any idea why Microsoft 'needed' a special init system for WSL2 anyway?
Um. Have you used it? This is _not_ just a VM.
It seamlessly extends Windows so that Windows can run Linux binaries.
It is not another OS in a box.
I'm not saying it's perfect -- it is not -- but it's about 20 years ahead of simple VM solutions.
And it does that by integrating with the guest OS via a special init system.
Yes, I used it every day at work for most of a year! Its showstopping bugs (a killer memory leak, freezes after suspend, data loss issues (!!), native systemd support making Emacs freeze somehow (???)) are a big part of why I finally took the leap to macOS at work.
How much do you use WSL? Because its bugs go well beyond rough edges and those reliably show up if you actually use it for most of your work every day.
> This is _not_ just a VM.
Sure it is! It's a VM with the following features/integrations that start working without any manual configuration:
- nice management interface that can download distros for people
- filesystem sharing via 9pfs, which is cool but unfortunately too slow for serious use
- a pair of command-line interfaces for invoking commands on each side from the other
- this doesn't involve the init system, btw; it just uses good old binfmt on the Linux side with the Linux-side wsl command, just like some users do with Wine for Windows binaries or QEMU for RISC-V Linux binaries
- automatically forwards network traffic between the Windows host and Linux guests so that users don't have to manually configure port forwarding
- a shared display server (Wayland implementation) on the host side and some plumbing to automatically forward its sockets to guest VMs
I'm not sure I'd wanna count this, but some third-party applications also do some socket forwarding tricks (and did so before Microsoft's Wayland implementation was a thing) so that guests can share one Docker implementation, like Docker Desktop, Rancher Desktop, Podman Desktop, etc.
That's it. There's no other magic.
> It seamlessly extends Windows so that Windows can run Linux binaries.
I wouldn't characterize that feature as seamless -- in fact it's so bad that you can't reliably pipe the output of WSL commands invoked from the Windows side into a Windows-side pager or clip.exe (sometimes wsl.exe would just hang or eat the output). And then there are the PATH translation issues, which often have to be handled quite manually with subshells calling cygpath or wslpath or whatever.
That particular feature is so bad that I hardly ever used it! I'm astonished to hear you call it 'seamless'. Have you ever tried to share a Yubikey for SSH auth and GPG encryption between the Windows host and Linux guests on WSL2? How 'seamless' was that for ya?
> I'm not saying it's perfect -- it is not -- but it's about 20 years ahead of simple VM solutions.
The things I compared it to (Lima, Orbstack) have the exact same features without ever having replaced systemd. Lima uses cloud-init as the interface for injecting its startup hooks. Idk what Orbstack does. Lima is definitely jankier than WSL2 (which is saying a lot, unfortunately), but Orbstack seems solid.
Anyway, both of them reproduce the port forwarding, filesystem sharing, and command forwarding that WSL2 has without replacing systemd, and Lima is older than the systemd integration for WSL2. Have you used either tool? There's no way WSL2 is even one decade ahead of either, though it might be a couple years ahead of Lima.
Me, no, I have only played around with it. From a technical level, I personally preferred WSL1, which I thought was more elegant. I am happy to concede your points as it certainly sounds like you have more experience with it than me.
WSL2 is just a VM, yes, I totally agree -- but a much better-integrated one than most non-techies could ever hope to achieve, and still better than most techies could achieve unless they really knew their stuff and they were competent in both of 2 different OSes.
From what I've seen over my working life, I'd say about 0.1% of Windows users would have the level of knowledge needed, and possibly more Linux ones -- but they mostly wouldn't want it or care.
My personal impression is that it's an attempt at the classic MICROS~1 "embrace and extend" manoeuvre on Linux.
No, I've never tried the Mac tools you mention, and I don't own a Yubikey or anything like it. I try new distros almost daily on my Mac but I have no need of any integration -- I am not a developer. I review distros sometimes, though, and occasionally things like hypervisors:
> From a technical level, I personally preferred WSL1
Same. WSL1 was ambitious, wild, and incredibly impressive despite the flaws that eventually convinced Microsoft to start over with WSL2.
> WSL2 is just a VM, yes, I totally agree -- but a much better-integrated one than most non-techies could ever hope to achieve, and still better than most techies could achieve unless they really knew their stuff and they were competent in both of 2 different OSes.
Agreed! WSL2 has a very impressive OOTB experience in terms of getting you from nothing to a VM that you can use. It would be a lot of work to set up comparable integrations yourself.
> My personal impression is that it's an attempt at the classic MICROS~1 "embrace and extend" manoeuvre on Linux.
Unfortunately I can't disagree. People seem to think that MICROS~1 is dead, but as far as I can tell they're in a very similar monopoly position with more or less the same interests now as they've ever had.
> No, I've never tried the Mac tools you mention, and I don't own a Yubikey or anything like it. I try new distros almost daily on my Mac but I have no need of any integration -- I am not a developer. I review distros sometimes, though, and occasionally things like hypervisors:
In that case, Orbstack and Lima might be tools worth writing about, the same way hypervisors and virtualization apps are! They're attempting to be WSL2-alikes, but they don't use their own hypervisors (Lima supports qemu and Apple's virtualization framework; idk about Orbstack).
> The big cost about Nix/Guix that puts people off is that it eliminates a human-readable filesystem. The traditional layout is gone or empty, replaced by a tonne of folders under `/nix/` called stuff like `240-572-9837wfgjh234098672-_bash` and you have to just... trust that your path and so on will work and stuff will just get found somehow.
> You have to let go of navigating your own filesystem.
Your broader point makes sense in that I get how a user-navigable filesystem is attractive, especially in the context of store paths being exposed to users as they are embedded in binaries and scripts. But I think the above is an overstatement.
Stuff on NixOS gets on your path the exact same way as it does on any distro: a bit of shell! Users can read through the `set-environment` script and the config NixOS generates for their shell in /etc and all that. They can check what binaries are on their PATH with `which -a` as usual. They can ask their shell where a given variable was defined using the usual tools it offers.
And the human-readable part of the filesystem does exist, just on top of the Nix store. The symlinks under /nix/var/nix/profiles/... or /run/current-system/sw are about as human-friendly as Unix ever is, imo.
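A minimal sketch of poking at this on a NixOS box (store hashes elided; exact versions and PATH entries will differ):

    $ which -a bash
    /run/current-system/sw/bin/bash
    $ readlink -f /run/current-system/sw/bin/bash
    /nix/store/...-bash-5.2p26/bin/bash
    $ echo $PATH
    /home/me/.nix-profile/bin:...:/run/current-system/sw/bin

Nothing magic: PATH points at human-friendly symlink trees, and those resolve into the store.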
Perhaps NixOS could do more to make those 'official', foreground them and document them, and even provide more loosely versioned paths to them like you suggest (making it easier to impurely manually build against them).
Secondly, from my position as a tech writer (for about 25 years) overlapping for well over a decade with some 20-odd years as a software tech support person and trainer, I think you are dramatically over-estimating the level of understanding of the average Linux user.
A Linux user can be expected to be a bit more clueful than a Windows user, but not much. They probably understand the concept of a "directory tree" and know that there are "files", some "executable", in "directories", one of which is "current", and stuff like that, but you can not rely on them grasping advanced concepts such as "search paths" and "links" (of different kinds, no less!)
The idea that there's a second, "more standard" tree, and it's optionally overlayed on top of the flat software-managed Nix tree, made up of symbolic links, and that you don't need to know where things are because the software magically makes sure that the OS can find them... I'm sorry but after coming up on 40 years in tech, my reaction to that might be captured by the crying-laughing emoticon. You could no more explain that to the average distro-hopper than you could explain it to a Labrador dog.
How it gets set? Don't care. A script is what I'd assume, sure, but that's no more important than the colour of the script's author's hair.
I'm saying that, professionally, as a writer, I have now presented smart, interested users with the concept of letting go of the filesystem hierarchy and just letting the OS manage it, and I'd say about 9 out of 10 of my readers responded "oh to h3ll with that! No WAY!"
It is a step _much_ too far. To these folks it's comparable to self-driving cars, but worse.
To refer to memes again, you need the god-level intellect step of the intelligence meme:
https://imgflip.com/i/45rmf2
... to grasp the concept that you don't need to know where files are so long as the OS knows.
This is not one step too far. It is a two-year backpacking trip across all of Asia too far.
> Secondly, from my position as a tech writer (for about 25 years) overlapping for well over a decade with some 20-odd years as a software tech support person and trainer, I think you are dramatically over-estimating the level of understanding of the average Linux user.
This is easy for me to believe :)
> you can not rely on [Linux users] grasping advanced concepts such as "search paths" and "links" (of different kinds, no less!)
Those are required for understanding how conventional Linux distros work, too, though. So the users who won't understand $PATH on NixOS are also users who don't understand how $PATH works on Ubuntu.
I will admit that there's a difference here in that you can have a very half-assed understanding of Ubuntu, and even a half-assed understanding of NixOS probably has a higher skill/knowledge floor.
And maybe I've unthinkingly assumed that half-assed understandings are worth less than they really are, at least psychologically. I have a very half-assed understanding of how my car works, but having it not be a complete wonder and mystery to me does give me more confidence in driving the thing -- even if most of my knowledge of how it works is too incomplete to be practically useful and I'll always take it to a specialist if it has problems.
> You could no more explain [the Nix symlink forest] to the average distro-hopper than you could explain it to a Labrador dog.
Maybe the average distro-hopper matters because the average Linux user is a distro-hopper. But I think distro-hoppers are kind of a pathological case here, because distro-hopping is precisely the habit of throwing your hands up, blasting away your whole setup, and clicking Next Next Next instead of learning how your system works. Of course distro-hoppers are going to struggle to understand how their distros work! To distro-hop is to enact a commitment to not understanding your distro!!
I also wonder if this is really true, and this is probably the part where I am just naive. But a forest of symlinks would be super easy to diagram perfectly faithfully, and I think a visual aid like that could go a really long way for the average person.
> the concept that you don't need to know where files are so long as the OS knows.
Have you seen this article, posted to HN to some infamy, about how zoomers don't expect to know/care where their files are, and are in fact somewhat baffled at the very question of where they've saved something?
I what that generational gap says for the viability (in terms of user acceptance) of the Nix store in all its ugly glory in the future.
I want to be clear that I do think some of your suggestions for laying out the files differently (like Spack apparently does) are perfectly reasonable. There are technical and historical reasons that the store is hash-first, but maybe they're worth re-evaluating. Sometimes I think the real winner, some day, in terms of executing this idea beautifully and in a user-friendly way, will be on a 'brand new' OS where a different executable format and/or a different kind of filesystem can mean the Nix store or its equivalent doesn't need to be exposed as bare files at all. One Nix hacker has actually proposed doing something like that on top of the Linux kernel, but I think RedoxOS already uses some ideas like this.
> Of course distro-hoppers are going to struggle to understand how their distros work! To distro-hop is to enact a commitment to not understanding your distro!!
Fair point. :-)
> I what that generational gap says for the viability (in terms of user acceptance) of the Nix store in all its ugly glory in the future.
I think there's a word, or several, or maybe just a letter or two, missing here.
But yes, I thought of that, too.
What irks me a little is that the Nix folks I've spoken to seem to feel that they have simply fixed this issue now, and nothing more need be done. While in fact I think there is a lot more work to be done here, and maybe GoboLinux and Spack and things show the way.
The (to me) madness of things like Btrfs volumes, layering another filesystem namespace on top of the Unix one, allowing insanity like multiple distros in one partition:
Hints at other approaches. As in: why not both? Maybe it's possible to have multiple _views_ of a filesystem, so via one API you "see" a flat namespace with hashed directories, and with another a conventional hierarchical one -- just as a Gmail account can be viewed as flat and hierarchical at the same time using an IMAP client.
> Maybe it's possible to have multiple _views_ of a filesystem, so via one API you "see" a flat namespace with hashed directories, and with another a conventional hierarchical one
Spack supports the notion of arbitrary "views" of the store, which can be defined declaratively [1]. Apparently we need to write more highlight blog posts and submit them here b/c people don't seem to know about this stuff :)
For example, if you want to make a view that included only things that depend on MPI (openmpi, mpich, etc.), but not things built with the PGI compiler, in directories by name, version, and what compiler was used to build, you could write:
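(The snippet itself seems to have been lost in formatting; roughly, and treating the exact keys and projection tokens as approximate, the view stanza of the spack.yaml would look something like this:)

    view:
      mpi-view:
        root: /path/to/view
        select: [^mpi]
        exclude: ['%pgi']
        projections:
          ^mpi: '{name}-{version}/{compiler.name}-{compiler.version}/{^mpi.name}-{^mpi.version}'
          zlib: '{name}-{version}'
          all: '{name}-{version}/{compiler.name}-{compiler.version}'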
That puts all your zlib installs in a short name-version directory, most things in directories named by the package and version and a subdirectory for what compiler it was built with, and for packages that depend on MPI it also adds the MPI implementation (openmpi, mpich, etc.) and its version to that.
You can choose to map everything into a single prefix too:
view:
  default:
    root: ~/myprefix
That is useful for what we call a "unified" environment where there is only one version of any particular package -- it basically converts a subset of a store into an FHS layout.
There are options for choosing what types of links to use and what types of dependencies to link in, e.g., you could exclude build dependencies if you didn't want the cmake you built with to be linked in.
> Apparently we need to write more highlight blog posts and submit them here b/c people don't seem to know about this stuff
100% you do. I find the term a bit odd, but I've been a "Linux professional" for coming up on 30 years now and I've never even heard of this stuff before.
There is _so much_ in the Linux world that is just badly explained, or not explained, and it's very hard to find cogent explanations. Trying to find them, summarise them, and in some cases, just write them myself is a large part of my job...
And it's a huge field.
One of the reasons some things dominate the headlines is that the creators spend a significant amount of time and effort on outreach. On just talking about what they are doing, why.
That's why there are tonnes of stories about GNOME, KDE, systemd, Flatpak, Snap, Btrfs, ZFS, etc. They talk. They explain.
It's also why there are next to none about Nix, Guix, Spack, and legions of others. They don't.
To pick a trivially small example: Nix talks about being declarative, but it doesn't explain what "declarative" means. And it mentions "purely functional" but it doesn't explain that either. Those two things right there are each worth about 1000-2000 words of explanation. Omit that, and the original message becomes worthless, because it becomes insiders-only.
As it happens through decades of reading about Lisp and things, I more or less get it, but not well, and I struggle to explain it.
> I think there's a word, or several, or maybe just a letter or two, missing here.
Oops, I certainly did accidentally a whole word there ;)
> What irks me a little is that the Nix folks I've spoken to seem to feel that they have simply fixed this issue now, and nothing more need be done. While in fact I think there is a lot more work to be done here, and maybe GoboLinux and Spack and things show the way.
I think you're probably right. I'm sure there are others in the Nix community who agree, but presenting a facsimile of a normal filesystem, or reversing the decision about the order in which to put hashes and the human-readable parts of filenames (a change with compatibility and performance implications that others would fight against), these things are pretty far outside what most people who love and contribute to Nix care about. And I don't mean that to say that Nix contributors are aloof, but to emphasize that the people on the Nix core team and other contributors to the ecosystem all have long laundry lists of features that solve problems that are deeply interesting to them or professionally vital for them/their company. And those laundry lists don't include backwards-incompatible, far-reaching changes for the sake of what it feels like for newbies and casual users to look at the filesystem on NixOS for the first time.
I kinda suspect that if this changes in the Nix world it will be because Spack's approach becomes popular and widely praised for a long time, so that there is a long period in which the change is seen as an obvious choice despite it seeming like a 'minor' issue to technical stakeholders. But I can picture it happening.
> Hints at other approaches. As in: why not both? Maybe it's possible to have multiple _views_ of a filesystem, so via one API you "see" a flat namespace with hashed directories, and with another a conventional hierarchical one -- just as a Gmail account can be viewed as flat and hierarchical at the same time using an IMAP client.
GoboLinux does have a mechanism like this, IIRC, called 'GoboHide'. So if you run ./configure && make && make install on GoboLinux, autoconf and automake or whatever will find a /usr/lib, and if you run ls against that dir, it will succeed. But you won't see it with `ls /`. NixOS could add this kind of feature, but I'm sure store paths will still leak out in the settings screens of applications and the outputs of various commands, so it's not quite what we're looking for. That latter problem seems impossible to fix without patching applications, because hard-coding store paths in them is an important part of how Nix works. That suggests to me that maybe the Spack way is better -- at least then leakage of store paths isn't as shocking to users.
> madness of things like Btrfs volumes, layering another filesystem namespace on top of the Unix one, allowing insanity like multiple distros in one partition
Have you played with Bedrock Linux at all? That might be a fun one. That distro layers other distros together into a single functional whole. So in addition to living on a single partition, you also might have X11 from one distro, systemd from another, and command line utilities from a smattering of 3 others, all running as one system!
> Or installing Windows on Btrfs
I have also tried once to install macOS on ZFS, which apparently can be done. I might try again soon, since I have real Apple hardware to do it on. Last time I tried that on a Hackintosh but quickly ran up against my lack of macOS knowledge.
> these things are pretty far outside what most people who love and contribute to Nix care about.
I suspect you're right. :-(
I have sympathy for the "servers as cattle" approach, but then again, the Linux world is in danger of forgetting where it comes from: from enthusiasts having fun. From people hacking around, learning, exploring, for a laugh, partly because it's free.
And as a vegetarian since my teens, but a sysadmin since my 20s, I have more sympathy for cattle than sysadmins. Sysadmins have choices, and can learn from their forerunners' mistakes. If they don't: tough. Sympathy gone.
I know about GoboHide, yes.
> Have you played with Bedrock Linux at all?
I know of it, but I haven't. I should.
> I have also tried once to install macOS on ZFS, which apparently can be done.
Well, could once. Probably not any more: Apple backed away from ZFS about 15 years ago or so.
It installs and creates filesystems just fine. The only part I haven't successfully tested myself within the last year or two is actually booting from it.
> I have sympathy for the "servers as cattle" approach, but then again, the Linux world is in danger of forgetting where it comes from: from enthusiasts having fun. From people hacking around, learning, exploring, for a laugh, partly because it's free.
One thing I believe deeply is that no one ever really masters the Unix environment without living in it. Without a solid foundation of interactive use, programming with the shell as your REPL, puttering around your filesystem with `cd`, you'll never properly learn your way around. 'Learning Linux' in a 'cloud-native' context tends to mean being alienated from all of that, and I think it profoundly hampers the development of fluency and comfort with the system.
To get that from a box (server or desktop), it has to be a home to you, and homes aren't disposable (the Impermanence framework for NixOS aside).
I do think NixOS has a place there, in keeping computing fun and exciting. For me, it was the first truly interesting distro (at least among those I'd consider usable/practical for me) in a long time. (This was back in 2015 or so.) I was a desktop Linux hobbyist before I took up any devops-related work, and for several years, NixOS singlehandedly breathed new life into that hobby for me. As far as I can tell, this is not an uncommon reaction to NixOS among enthusiasts, if not among casual Linux users.
I knew it was around -- I have written about ZFS several times -- but if it's not in the kernel I didn't think you could boot from it any more, and in recent versions of macOS you can't readily add new kernel extensions any more.
(It is possible, by disabling some of the security settings, but it's very much not trivial.)
> no one ever really masters the Unix environment without living in it.
An interesting take. I can't really argue with that.
> NixOS has a place there, in keeping computing fun and exciting
With Nix /run/current-system/sw/ is basically the /usr of mutable FHS distros, with the difference that it's all read-only symlinks. So you can still look at the layout of your active system.
Homebrew has been doing it for a long time. It's not known for being fast, but without profiling the application I'm hesitant to blame the file layout.
Nix is becoming increasingly popular though. It's an improvement over the GoboLinux model, which requires manual maintenance of version directories (e.g., /Programs/Xorg/7.0). Nix does away with that by choosing the install path of packages based on the hash of their build recipes.
The automatic naming of paths is a critical feature of Nix. Without it, Nix expressions can’t exist. One of the most remarkable features of Nix is that package build recipes, or derivations, in Nix speak, can be treated like any other value in a programming language. It’s possible to reuse and modify existing derivations in Nix expressions without causing conflicts. That’s only possible because different derivations are assigned different paths. If Nix copied GoboLinux and treated packages with differing build recipes as the same, it would lose its declarative and reproducible nature.
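A tiny illustration of the point, assuming a configured nixpkgs channel (hashes elided; `hello` is just an example package):

    # build the stock package, then the same package with one attribute changed;
    # the two land at different store paths because the recipes differ
    $ nix-build '<nixpkgs>' -A hello
    /nix/store/...-hello-2.12.1
    $ nix-build -E 'with import <nixpkgs> {}; hello.overrideAttrs (old: { doCheck = false; })'
    /nix/store/...-hello-2.12.1

Same name and version at the end, different hashes up front, and nothing clobbers anything.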
Also, there's no actual use case for Nix store paths with the hashes stripped out. The actual path can easily be derived from whatever is at the root of the dependency graph: things like Nix expressions, the result symlink, the system environment in /run/current-system/sw, and PATH. Those are "human readable." From there, you can get the actual store paths. It doesn't matter that they're prefixed with a hash because they still have human readable names at the end, allowing you to tell what they contain.
It's similar to inspecting memory. You don't want to start by examining raw bytes in memory dumps. It's far easier and more meaningful to start by chasing variables instead.
What I am saying is that choosing this flat directory store, one that is very hard for humans to read, parse and navigate, is on its own a major disincentive for many people.
The traditional and lavishly-documented Unix filesystem layout is comforting and familiar to hundreds of thousands of people (at a very conservative estimate!) and throwing it away was a poorly-considered move that on its own will make many people refuse to even consider Nix and its kin.
If a project decides to replace it then it needs multiple compelling reasons, and just "search paths will sort it" and "this keeps pathnames short" is not good enough.
What I am suggesting is that if automatic directory name management (via hashes or whatever) were combined with a more human-readable tree, this type of tool would have a much better chance of success.
Even if it were less efficient.
Even if it were more work for the developers.
The benefits of the Nix approach are only compelling to a relatively small number of people and the price is too high for many. That's why after over 20 years, adoption is minimal. That's why work continues on Debian and all the other traditional distros. That's why since Nix was invented in 2003 (AFAIK) we have had AppImage and Snap and Flatpak.
Because this stuff is too hard for ordinary folks.
This is why functional languages as a whole have made so little impact on mainstream computing. They bring great benefits for those smart enough to understand them, but for the rest of us, for ordinary dumb jocks like me, it's too hard.
The same goes for Lisp's prefix notation, and for Forth's and Postscript's postfix notation and Reverse Polish Notation in general.
After coming up on 40 years in the industry, I am convinced that some of this stuff really does have compelling advantages, but it's not compelling enough which is why tens of $millions are being spent on inferior tools such as Flatpak.
And under the hood, Flatpak uses OSTree, and OSTree is modelled on Git, and TBH I reckon about 1% of 1% of the thousands of people using Git and anything related to it actually understand how it works.
This kind of thing is just too complicated for ordinary minds.
Computers are not for computer scientists. Computers should be for everyone. The goal of making this stuff easier to understand is more important than the goal of making it better in some theoretical way.
I think /home and /tmp are the only ones that consistently make any sense currently. The rest seems a grab bag of putting whatever you want wherever you want.
Do you think this reflects the reality of Linux distros, or does that reflect your understanding of them? Would you say that `/usr/bin` or `/usr/lib` are completely random and no two distros use those paths?
I've been using Linux for a loooong time. Sure, you notice trends, but it's still not clear cut what goes where. On my current system, /lib and /usr/lib both link to /lib64. So I guess the distinction between the three wasn't clear to someone else, either. Some configs are in /etc, some are in /var. Some third party stuff puts bins in /opt.
Only if you want somebody else to do it for you. Everything is open and hackable on Linux so you can make it work without systemd. Thus far this hasn't been a priority for others, but that doesn't mean it can't be done
Void Linux, which opted for runit over systemd, has Bluetooth configurable via bluetoothctl or blueman-applet, so this seems to be a thing. That said, Bluetooth is kind of flaky.
You may get a better result with a device that has a USB dongle. Logitech models reportedly work. The Corsair unit I just bought works well, for instance.
Not really. There is nothing about systemd that makes bluetooth or audio its exclusive purview. You just have to put in the effort to make it happen. Gotta figure out how they did it and do it too.
It's not gonna happen without somebody making it happen. Be somebody. Systemd is influential because the people making it often are those somebodies but the fact is anyone can be that somebody, you don't need their blessing or that of some standards body to make anything.
I went as far as getting rid of libc. Made a lisp that runs directly on top of Linux using nothing but system calls. It can do anything other software can do: I run strace on a program, my lisp can do what it does if it just replicates those system calls, doesn't matter if it's simple I/O or mounting disks. I want to eventually boot Linux directly into it.
/opt and /srv (and/or /usr/local/opt if that's your preference) grant many of the niceties of managing your stuff piece by piece, the way GoboLinux does.
From a distro's perspective, everything makes sense. `dpkg -S this-file` will tell me quickly why a /usr file is there. The distro is supposed to paint the picture for you.
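For instance (package and path chosen arbitrarily; output format from memory):

    $ dpkg -S /usr/bin/ssh
    openssh-client: /usr/bin/ssh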
What are the parts you think are the worst messes?
> Linux's filesystem structure is a complete mess.
I don't think that's a valid claim. Something being organized on principles that are different from yours is not the same as it being disorganized in itself.
I'd prefer not to call it that. In fact, it's almost the exact opposite of technical debt, which is generally the result of trying to change things too much, too quickly.
Having stable conventions that account for edge cases that are addressed sufficiently to the point that we don't have to think about them is an asset, not debt, and the principle of Chesterton's Fence ought to always be applied when proposing changes that drastically alter long-established conventions.
I'd say that MacOS is even worse which is impressive given that Apple has the absolute control of the platform and seemingly infinite resources and could have refactored the mess at any point.
> I'd say that MacOS is even worse which is impressive given that Apple has the absolute control of the platform and seemingly infinite resources and could have refactored the mess at any point.
That's true, but also irrelevant. The file system can be a mess on MacOS because most MacOS users will never actually interact with the file system in any meaningful way. Apple obscures the file system away from its users by design. If you ask the average Mac user where on the computer their files are stored, they'll likely tell you something like, "They're all in the Photos app". On the other hand, even a fairly novice linux/unix user will almost certainly end up interacting with filesystem directly at some point, probably a lot.
I think the basic file system exposed to users in the Finder is pretty simple. /Applications, app bundles, etc.
The Unix under the hood is just standard stuff, and Apple hides away detritus like .plist files. And sandboxed apps have their own little stub filesystems too.
Anyway, I like the idea of GoboLinux a lot, but I kind of think the Guix approach of just abstracting it all away might be the right one.
I like that userland is kept separate from base as much as possible, but agree it could be even better. I do think the BSDs have more coherency than Linux on the whole. Would be nice if we could simplify things down even further, but I suspect it just becomes one giant bikeshedding exercise, which is probably why it's easier to just cling on to hier https://man.freebsd.org/cgi/man.cgi?hier
You can choose to complain that it so doesn't support it natively, or you can just buy the aftermarket hardware that does support it and get on with your dual-head, non-pro life. Or upgrade your monitor to a single curved 5k ultra wide monitor, which is better than dual-head, imo.
DisplayLink is a standard for connecting GPUs over USB. It's slow but it kinda sorta works -- I have an ancient DisplayLink 7" USB display. Works great on Windows, and I could have a tiny second screen with a console prompt on it, or Task Manager or something, when working on the move on a laptop.
Yeah it's pretty silly. I guess Apple looked at their market and figured something out that met the majority of use cases, but not having that freedom is very frustrating.
I think it's instructive to consider the history of the M1 Macs here.
In many ways they are essentially a sort of iPad with a keyboard and a USB port or two instead of a Lightning connector.
The GPU is very closely coupled to the CPU inside the SoC, as is the RAM and the soldered-in flash nonvolatile storage.
You can't just boot them off USB. You can't just plug in extra drives. You can't add more memory or bigger SSDs without extremely elaborate desoldering of BGA parts.
These are not "PCs with an Arm ISA chip". They're a whole computer, disks and all, in a single closed package, without all the buses and connectors of COTS x86 kit -- which is _why_ they are so much faster.
The Windows Program Files style directories might make sense to you on a single-user workstation. But on a multi-user server, it's a security nightmare.
1. /bin (and /sbin) were intended for programs that needed to be on a small / partition before the larger /usr, etc. partitions were mounted. These days, it mostly serves as a standard location for key programs like /bin/sh, although the original intent may still be relevant for e.g. installations on small embedded devices.
2. /sbin, as distinct from /bin, is for system management programs (not normally used by ordinary users) needed before /usr is mounted.
3. /usr/bin is for distribution-managed normal user programs.
4. There is a /usr/sbin with the same relationship to /usr/bin as /sbin has to /bin.
5. /usr/local/bin is for normal user programs not managed by the distribution package manager, e.g. locally compiled packages. You should not install them into /usr/bin because future distribution upgrades may modify or delete them without warning.
6. /usr/local/sbin, as you can probably guess at this point, is to /usr/local/bin as /usr/sbin to /usr/bin.
The main historical reason why some root level directories were moved to /usr (which was originally the user directory like today's /home) is that Thompson and Ritchie's first hard disk for the operating system was full. The rest seems to be mostly retcons and backronyms.
At one point I stumbled onto a document from circa 1990 laying out a fairly coherent rationale - / was always a filesystem on the local machine while /usr could be a site-wide NFS share across many machines. Thus you got things like splitting off architecture-independent /usr/share from binary /usr/lib (because if you have a few different kinds of workstations, you'd want one /usr per architecture but /usr/share could be site-wide) and creating /sbin from binaries that had been in /etc from day one. Oh and they came up with /var (also possibly a network share) so /usr could be mounted read-only.
An interesting idea, but I have to figure the majority of Unix sites kept /usr on the local machine like always.
You said Program Files was a security nightmare compared to the Linux mess, that is what he was asking about. How is /bin safer than Program Files? None of the things you listed have any relevance to that question.
I'm familiar with the intent behind the structure. I am mostly asking about the security implications you mentioned.
One downside to Program Files type structure is that there's probably going to be some duplicate libraries lying around. However, as someone who's battled version skews due to clashing maintainer mindsets: that's been a much bigger pain in my ass than a few wasted kbs!
Those distinctions are more of a relic these days. If you take a look at the inodes themselves you'll see that they're just symlinked back to /bin in a lot of Linux distros.
BSDs still make the distinction but Linux moved on a while ago.
More precisely, /bin, /sbin, and /usr/sbin are all links pointing to /usr/bin, yes, but /usr/local is still of relevance, as is /opt -- although /opt (which doesn't have a great semantic meaning) really feels like it would be better suited to something under /usr/local.
Similarly, /lib and /lib64 are pointers to /usr/lib.
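Easy to check on a merged-usr system (output trimmed; the exact targets vary a bit between distros):

    $ ls -ld /bin /sbin /lib /lib64
    lrwxrwxrwx 1 root root ... /bin -> usr/bin
    lrwxrwxrwx 1 root root ... /sbin -> usr/bin
    lrwxrwxrwx 1 root root ... /lib -> usr/lib
    lrwxrwxrwx 1 root root ... /lib64 -> usr/lib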
It might be interesting in a vm to remove those symlinks and see what breaks. Presumably a lot.
> Through a mapping of traditional paths into their GoboLinux counterparts, we transparently retain compatibility with the Unix legacy. [...] There is no rocket science to this: /bin is a link to /System/Index/bin. And as a matter of fact, so is /usr/bin. And /usr/sbin... all "binaries" directories map to the same place. Amusingly, this makes us even more compatible than some more standard-looking distributions. In GoboLinux, all standard paths work for all files, while other distros may struggle with incompatibilities such as scripts breaking when they refer to /usr/bin/foo when the file is actually in /usr/local/bin/foo.
Ignorantly: is this how MacOS kind of works? It always felt incredibly good how applications were always just this one “file” you’d drag and drop and manage.
Nothing makes me feel dumber than trying to figure out where the heck anything is or gets placed in my Ubuntu machine.
> Nothing makes me feel dumber than trying to figure out where the heck anything is or gets placed in my Ubuntu machine.
And this is made somewhat worse by Linux file managers trying to hide any part of the filesystem that’s not your home folder or mounted drives.
I get why it’s done, the directory structure of your average Linux install is an absolute maze that’s sometimes confusing even for the technically inclined, let alone the home users targeted by major distributions. It’s ultimately treating the symptom and not the cause, though, and I think more Linux distributions should give serious thought to modernizations of their filesystem structures (as Gobo has).
It’s tempting to play with this myself. My goals would be to come up with a structure that’s reasonably self-explanatory, guides novices away from the dragons, and allows file managers to hide very little (maybe nothing) without consequence.
Why would they not? We all have to use computers. Secretaries used to be able to code; such a thing is not possible nowadays, as programmers create more and more obscurity.
Corporate jargon is how companies hide away inefficiencies and illegalities. Now, jargon isn't always that, but most of the time it is. The same way programmers said a kilobyte is 1024 bytes (seriously?)
To add to your point, yes, do ask a random person what notepad.exe does vs what vi or emacs does.
A text editor is user-facing; the underlying filesystem hierarchy mostly isn't. I'll grant some of it is; it would make sense for a user to know (and perhaps even care) about /home, and in a system that isn't centered around a package manager I could even see them caring about whatever the directory is for applications, but beyond that it doesn't matter how /usr is laid out, or how the CPU is microcoded, or whether the kernel is monolithic, or whether their applications are written in Rust or C#. It's a black box, and for less-technical users that's fine. I won't say there aren't ways to improve, even to simplify things, and I do wish that normal people still programmed, but the reality is that computers really are just that complicated, and once we get over trying to explain technical things to less-technical people, the FHS is fine for what it is.
The trick of shoving everything inside the bundle (including writable things) no longer works if you're codesigned -- there was always a note in the docs (buried, as usual) that while it might work, it wasn't guaranteed to always work. Sure enough, as of a recent release it'll invalidate your signature if you've distributed it codesigned and write into the bundle.
If I wrote programs for Mac, this is one of the things that would really piss me off. Then, I'd write to ~/Library because I'm not gonna spend the time to figure out how to make it work with codesigning, if at all possible.
But like... why? If you wrote for Mac, you'd also probably follow the news and updates from Apple, which didn't exactly hide the direction they've been pushing things in.
You also _should_ write to a third party folder, preferably `~/Library/Application\ Support`. It's what it's for, after all.
> It always felt incredibly good how applications were always just this one “file”
While I agree with that in principle, in practice that's not true for a lot of applications that put a lot of stuff into ~/Library and /Library, and it's not unusual to see multi-step manual instructions on how to uninstall some apps on macOS. (Especially important if those apps add Login items) Windows' Add/Remove Programs at least gives a central space to trigger an uninstall, and most apps stick to it properly.
Why are you downvoted? That is the solution. Every package manager has this feature. One can list the files owned by any package on any Linux system, doesn't matter if it's Arch Linux or Termux.
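For anyone who hasn't used that feature, a quick sketch of what it looks like (the package and file names here are just examples):

    # Debian/Ubuntu: list every file a package installed
    dpkg -L coreutils
    # ...or ask which package owns a given file
    dpkg -S /bin/ls

    # Arch/Termux (pacman): the same two queries
    pacman -Ql coreutils
    pacman -Qo /usr/bin/ls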
I respect your curious preference, but my theory of mind is failing me on this one. You like dependencies, or moving files by typing, or perhaps even both?
Heh. Yes and yes and in general I like to see the guts and be free to easily fiddle with them. The Mac way is the software equivalent of how you can't open their hardware easily, which I also loathe. To me a love of Apple is one of the surest signs that a person is not a real hacker/programmer but is just in this industry for the $. "I just want to get shit done" ...no you just want to make money. There are exceptions but they are rare.
Hey, just a heads up: capitalizing the first directory's name? Not cool. It feels like extra work when navigating paths, you know? Pressing shift along with a letter or number every time? Total drag, especially for everyday command line use.
> To that, I can only respond that, in a properly configured shell like the one that comes by default with GoboLinux, typing /Programs takes the exact same number of keystrokes as typing /usr: slash, lowercase p, Tab.
> Not cool. It feels like extra work when navigating paths, you know? Pressing shift along with a letter or number every time?
You know why it never bothers me when I navigate in PowerShell (or cmd.exe for that matter)?
Because the shell is helpful to me and doesn't try to force me to conserve the precious ticks and memory of a PDP-11 at the expense of any usability.
It's always amusing to hear the 'case sensitivity rUl3z!' fans who totally miss that it's just the result of the constraints of the original systems. It's not a feature.
More so, it's explicitly addressed:
>> To that, I can only respond that, in a properly configured shell like the one that comes by default with GoboLinux, typing /Programs takes the exact same number of keystrokes as typing /usr: slash, lowercase p, Tab.
Just 'set completion-ignore-case On' and make your life immensely better. Why? Because you'll now need one less Shift keystroke for every filename starting with a capitalised letter, even if you don't use GoboLinux.
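For bash that's a readline setting, so you can try it in the current session or make it permanent; a minimal sketch:

    # just for this bash session
    bind 'set completion-ignore-case on'

    # permanently, for every readline-based program
    echo 'set completion-ignore-case on' >> ~/.inputrc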
Or continue like it's still 1977, why not, your VT100, your rules.
EDIT: oh, noticed this later:
> Why bother installing another shell instead of just sticking with bash for normal navigation?
Yes, you don't even bother to know your own shell.
> Just 'set completion-ignore-case On' and make your life immensely better.
Just saying... macOS does this by default, and a few months ago, I tried to delete a folder called `~/ladybird` and accidentally nuked `~/Library` instead. I wasn't expecting it to autocomplete a filename with a different case.
For non-Mac users, `~/Library` holds more or less all the state for your user account. With it gone, not only do you lose all settings and things, you can't even log in. I had a machine in a very unstable condition and no time to fix it.
I put it to sleep, left it for a month or so until I had a couple of hours spare, created a new user account with admin rights, copied all my files to that account, removed my damaged account, rebooted, created a new user account for myself, copied all my files back in, reset permissions as appropriate, and then logged in to it.
I then had to re-configure all my apps, which took most of the time. Now the machine was stable again, I cleared out the stuff in the admin account.
(No, it's not connected to my network TimeMachine, because it's my work laptop and I want to keep that stuff separate from my personal stuff on my home Macs. All the data is backed up to my company's cloud servers, but that doesn't store any macOS config stuff.)
So in the end, I just had to recreate my browser profiles, reimport my bookmarks and stuff, recreate all my email accounts in Thunderbird – because Mozilla still hasn't got Mozilla Sync for T'bird, dammit – and it was an inconvenience. No actual data lost.
But it has left me strongly conflicted on the subject of case-insensitive tab-completion.
I like it as a convention that indicates at a glance whether something is a directory or just a regular file. Also, any shell can do case-insensitive completion for you. Fish does it out of the box.
I mean, fish isn't the only shell out there. Why bother installing another shell instead of just sticking with bash for normal navigation? Ah, sorry, classic distro discussion. Everyone has the freedom to choose whatever distros they prefer.
It's so sensible it makes me weep. Deduplicating multiple copies of a library can be done by the filesystem if you consider it really necessary; it is, after all, file-level redundancy and should be solved at that level.
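On a copy-on-write filesystem that can be as simple as the following sketch (assuming btrfs or XFS with reflink support, plus the duperemove tool; the paths are made up):

    # reflink copy: two directory entries, one set of shared extents on disk
    cp --reflink=always /Programs/LibFoo/1.0/lib/libfoo.so \
                        /Programs/Bar/2.3/lib/libfoo.so

    # or deduplicate identical files after the fact
    duperemove -dr /Programs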
Some very bad closed-source applications expect way too much from the user's system: the Steam client, for instance, where you get bash scripts instead of sh scripts, a hard-coded Debian/Ubuntu file layout with the Linux mount userland container requirement (Windows 10?), and GNU-only niche options of many commands. Not to mention many 32-bit libraries. It seems they are still in control of their ABI though (probably using the .symver directive of binutils gas).
That is to say, the current mess is being forced down our throats by Steam, unless your distro doesn't play video games.
If Steam were technically less horrible, more distros would be able to run games far more easily, and mechanically more ELF/Linux users would play games. Because fulfilling all the nasty tantrums of Steam is really a pain. Most native games have lesser requirements than Steam.
The idea of Gobo was to replace the cryptic Unix filesystem hierarchy with something clearer and more human-readable, and in so doing, replace a whole bunch of things that package managers do, making them almost unnecessary.
Your point doesn't address this at all. You're talking about building from source instead of using a package manager, as far as I can see?
But compiling from source still puts the resultant binaries somewhere in the existing Unix filesystem layout. It doesn't matter where the binaries come from. This isn't about where they come from -- it's about where they go.
Do you mean just putting each app in its own folder? That just isn't enough for isolation, and it could be massively wasteful, depending on how far you take it.
Flatpak and NixOS are more complex because it's worth it. For example, they don't duplicate exact versions of dependencies on disk.
Flatpak's complexity is also a product of trying to function as a single solution to two different problems. Runtime isolation and packaging/distribution are different things, and there is no compelling reason for them to be conflated together. How you sandbox the execution of your applications should not be dependent on any particular filesystem layout or package format, nor vice versa.
FreeBSD does a good job with this: applications are installed into a standard Unix filesystem with conventional packages and the ports collection. Isolation is then implemented via jails.
The Linux equivalent of using standalone isolation layers, like firejail, on top of standard distro packaging, is far superior to Flatpak.
You are aware that Windows has each app in its own folder in Program Files, and it's completely fine. ext4 supports an insane number of files, so much so that unless you're doing something really out there, you're never going to hit the limit.
Snaps and Flatpaks are focused more on distribution and are "sandboxed" (quotes because there are security issues with them). NixOS breaks compatibility with existing programs, unlike Gobo, but NixOS's package repository is exponentially better maintained, where Gobo's is incomplete and years out of date.
All of these are still behind how Android stores apps. On Android, each application can be distributed as a single file that is easy to pass around. Each application is properly sandboxed. Each application gets its own directory. And, something all of the other options mentioned here are missing: each application gets its own subdirectories in the app's directory for storing its state (with separate ones for each user).
This makes sense. If there's one thing Windows can do better than Linux, it's portable applications. The vast majority of my daily drivers on Windows can run from my memory stick on any system without installation. Though this is not the same as what you're talking about.
What you're looking for sounds like AppImages (https://appimage.org/). I have only used them when downloading games from itch.io, etc. (since I prefer package managers), but they seem to work out of the box on popular distros.
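If you haven't used one, the whole workflow is roughly this (the filename is just an example):

    # download it, mark it executable, run it - no install step
    chmod +x Some-Game-x86_64.AppImage
    ./Some-Game-x86_64.AppImage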
The GoboLinux guys really did "intelligently" come up with a filesystem layout that's humanly understandable. I personally find the old-school UNIX conventions we use fairly arcane given we no longer have 8.3 type limits imposed by lack of storage space, >1GB file-size issues, etc.
I ran GoboLinux 012-015 for a few years on a server that I used for hosting version control software and for the most part it kicked ass.
What was a bit of a stumbling point is that if you needed a package that didn't readily exist, you'd have to create the recipe. The language for creating GoboLinux recipes is perfectly understandable, the issue was often one package would depend on a dozen or a few dozen libraries and you'd spend a lot of time chasing that down, getting the versioning for those libraries right, THEN finding a URL for those libraries/packages, and THEN creating the recipe.
I eventually moved over to Debian, and I still cringe at config files in "/etc" while the binary may have been put in "/usr/bin" or "/usr/local/bin".
I'm in the camp that finds systemd annoying and kind of like an octopus (you'll often resort to using a find command to locate the associated .service file, you can't rely on them being in one place, the CLI isn't really intuitive); in contrast, Gobo had a very simple set of scripts to manage services and it was very easy to work with.
But the convenience of just being able to 'apt-get' what you need or 'dpkg -i <blah.deb>' outweighs the much saner and more intelligent design behind GoboLinux. These days the number of Linux distributions people support has dropped from the myriad of options it used to be to now almost always including Debian (or Ubuntu) by default, so it's unlikely you can't get a package or some program installed, even if it's outside the repositories.
macOS definitely does something similar to GoboLinux and it makes working with macOS on the CLI (at least prior to the current versions which are fairly user-hostile) pretty easy -- your pen drive will be located in /Volumes, configuration files for a program under ~/Library, etc.
> you'll often resort to using a find command to locate the associated .service file, you can't rely on them being in one place, the CLI isn't really intuitive
FWIW, if the service is loaded you can just `systemctl status foo` and it'll tell you starting on the second line of output exactly what file(s) are in play, ex.
    # systemctl status getty@tty1.service
    ● getty@tty1.service - Getty on tty1
         Loaded: loaded (/lib/systemd/system/getty@.service; enabled; preset: enabled)
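And if you want the unit file itself rather than the status summary, systemd can hand it to you directly, e.g.:

    # print the unit file(s) in effect, each preceded by its full path
    systemctl cat getty@tty1.service

    # or just ask where the unit file lives
    systemctl show -p FragmentPath getty@tty1.service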
What makes folks think landing pages like this should require JavaScript? Nothing here would need the dynamic capabilities of a scripting language, yet accessibility & SEO are hurt in the process.
You can read the 20 lines of JS that they use. It's for drawing the parallax lines in the background and for the fade-in effect, which can't be accomplished with plain CSS. There's no detriment to SEO or to accessibility, unless the specific disability you have is that you're allergic to JavaScript.
I am honestly kind of shocked every time I see Gobo come up on here that it's still going. I played with it back in high school 20+ years ago and thought it was an interesting oddity.
It's frankly kind of inspiring that they've stuck with it.
This reminds me a little of the original idea of a new WinFS for Longhorn from Microsoft, except this seems like something that may become useful someday. Some interesting ideas... though shared object dependencies may be a problem... though I didn't take a very close look at how they handle this. Perhaps all shared resources go into a specific location? Or perhaps everything is statically compiled? Dunno... anyway, there is a reason behind our lovely file system mess. Kind of reminds me of https://www.joelonsoftware.com/2000/04/06/things-you-should-... though in open source land you definitely can take a risk like this.
This is such an obvious solution. Kinda crazy that the whole world uses Debian and such, while this exists since 2002 (it's older than Ubuntu). I mean, yeah, I didn't switch to NixOS despite it being "obviously better" too, but it feels like there must have been much less friction 22 years ago to adopt better design. (Maybe capitalized directory names are to blame, they are disgusting.)
The real issue is case sensitive file systems. When do you actually want two files with the same name in different cases? That's not a human-centric design choice, Mac and Windows got this one right.
The real issue is somebody popularized this absolutely nonsensical idea that filesystems could be case-insensitive. They cannot. "A", "a" and "а" (Cyrillic) are different characters with different codepoints, whether you like it or not. Deal with it.
You can get around case sensitivity in user-facing applications by using fuzzy search. Even command line tools can use Tab for autocompletion. The case for case sensitivity is not the ability to have two files which differ only in case, but the simplicity of the underlying code that has to deal with paths.
The dependency map of the program you're running states which one takes precedence and gets mapped (overlay-mounted, to be more precise) to /System/Index/bin.
So, a dependency map that contains "Foo = 1.0" would cause Foo/1.0/bin/uninstall.sh to be the one mapped to /System/Index/bin.
You've fundamentally misunderstood how GoboLinux works. The whole point of these kinds of distros is to avoid mixing packages into a single common directory. Each version of each piece of software is installed in its own separate directory under /Programs so that they won't cause conflicts with each other. Then, for convenience, a single version of each is symlinked into /System/Index.
This is way more meaningful than the /bin & /usr/bin split that distros are increasingly moving away from because multiple versions of the same software or forks can coexist on the same system. This model also makes /usr/local unnecessary.
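Concretely it ends up looking something like this (the version numbers and symlink targets below are illustrative, not copied from a real install):

    # each version of a program gets its own tree...
    $ ls /Programs/Bash
    5.2  Current  Settings
    # ...and one chosen version is linked into the shared index
    $ readlink /System/Index/bin/bash
    /Programs/Bash/5.2/bin/bash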
That way users can get all the package-manager goodness for stuff they compiled locally. And also because Gobolinux lets people install more than one version of the same package.
I ran Gobo for a while and really enjoyed some of their ideas. It was started by Hisham Muhammad, the author of htop.
It’s a shame that it didn’t get more traction. We probably wouldn’t have needed Docker or any of the gazillion package managers that are proliferating today if it had.
> We probably wouldn’t have needed Docker or any of the gazillion package managers that are proliferating today if it had.
100% feel this way too. So much of the reason why we use containers is just not spreading crap all over the filesystem. It's why single binary distributables made popular by Golang are such a winner, ex: Caddy.
You might be interested in Fedora's Atomic desktops. They are immutable OS's where you don't spread crap all over the filesystem. Instead you rely on flatpaks, containers (toolbox or distrobox), and single binaries (if you got them).
Neither is all that reproducible; I keep running into sha256 hash nondeterminism when upgrading packages on NixOS from source after not updating for a while. Guix may be better at this.
Are you using channels or flakes? Channels are non-deterministic, as your nixpkgs essentially changes completely. Flakes are always deterministic as long as your binary cache is valid, the network hasn't bitrotted, and you haven't managed to escape the nix sandbox somehow (or used --impure). Guix is much newer than Nix, so it has fewer pain points, but IIRC Guix has no flakes equivalent, which is kind of a deal breaker for me personally.
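A rough way to see that difference from the command line (assuming a Nix with flakes enabled; the commands are illustrative, not a full workflow):

    # channels: "nixpkgs" is whatever the channel happens to point at today
    nix-channel --list
    nix-channel --update

    # flakes: every input's exact revision is pinned in flake.lock
    nix flake metadata .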
Of course nix also has one quite big issue, which is that it is input-addressed rather than content-addressed, but it's moving towards a content-addressed model, which will reduce rebuilds and resource use: https://nixos.wiki/wiki/Ca-derivations
I am not aware of any Linux distribution that has no package manager (except for LFS, which isn't a distribution per se). Source-based systems typically have some kind of package manager. Gentoo: portage, GoboLinux: Compile, Slackware: pkgtools + sbo, Sourcer/SourceMage: sorcery, Lunar: lin, CRUX: pkgutils, and so on.
Edit, additionally, yes. Such distributions are suitable for essentially any task for which any other distribution is suitable. The primary caution that I would offer anyone when venturing away from RHEL/Debian is how much you trust whatever organization to still be around in five years. If you're comfortable managing everything yourself, pick whatever; otherwise, it's "safest" to choose something that either contractually will still be around (RHEL), or is governed in a way that ensures some continuity (Debian).
Neither is a re-invention of the other; they happen to do things that are partially similar, but also totally different. NixOS uses the /nix/store structure to allow it to pin everything to everything in service of its goal of absolute reproducibility/determinism. GoboLinux restructured the filesystem in order to be "nicer" (very loosely speaking). The result is that (IMO) nix is more technically elegant, and Gobo is a lot friendlier to work with. (I type this from a NixOS machine and with love in my heart for both)
Linux is what made Unix into a mainstream consumer OS.
It came along 22 years after Unix itself.
It's never too late.
Remember that Windows itself was a flop until version 3, and that -- Windows 3.0, in 1990 -- was the little snowball rolling down a mountain that brought multitasking multimedia networked GUI computers to the mainstream.
And thereby created the mainstream marketplace of x86-32 computers: the substrate that allowed Linux to grow.
(DOS made so little use of the 80386 that an 80286 PC was adequate, and after 5 years the 386 made little inroads into the mass market -- they were too expensive, even after the 1989 budget-model 80386, the 386SX. But Windows 3 ran much better on a 386 than a 286, and soon after Windows came along, the 286 was dead.)
Yes, they cite that among other UNIX-like OSes while completely ignoring the most successful implementation of their idea (by several orders of magnitude): Windows. I don't find that cute; I find it ignorant.
I've tried GoboLinux; I think it's better to just # make install and, to uninstall, # make -n install | xargs rm -rf, or to use Solus Linux's package manager. (Currently using CachyOS on a comfy HP Envy Move with a comfy Lenovo Wireless Keyboard and Mouse combo, my shameless plug.)
Someone decided to shoehorn Windows Program Files into Linux? Yikes.
Jokes aside, this completely ignores the fact that there is a reason for /usr/bin and /usr/local/bin and /etc and the sbin directories. A lot of it has to do with permissions. If you've ever been a member of a multi-user server you might understand.
I will grant you tho; a lot of people never bother learning and just shove binaries wherever they can get them to work first.
In case anyone wants to know, here is a good explanation:
> Jokes aside, this completely ignores the fact that there is a reason for /usr/bin and /usr/local/bin and /etc and the sbin directories. A lot of it has to do with permissions. If you've ever been a member of a multi-user server you might understand.
I've administered multi-user servers and I can only guess at what you mean. Like... I've seen distros that default non-admin users to not having sbin directories in their PATH, but even then it's not like that's a security thing, it just makes your path a little cleaner. What security benefit do you see to the different directories?
(In fairness, I can see non-security reasons to split things up - the traditional reason is that /bin is local, /usr is on NFS and shared between every machine in the lab, and /usr/local is non-packaged software built from source. But none of that is a security argument.)
Sorry, I'm not sure if I get your point. Separating /usr and /usr/local doesn't have to do with permissions; it's just that the package manager wouldn't be able to deal with users installing files into /usr and overwriting the package manager's stuff. The whole point of GoboLinux is to avoid this sort of problem. Because each package gets its own directory instead of everything being stored in a central location, there's no risk of one package stepping on another package's toes, and it's also possible to install multiple versions of the same package.
Yes, /usr/bin vs /usr/local/bin is package-manager-installed vs user-compiled. The link I posted mentioned it. But separating bin from sbin is more of a permissions thing. I suppose you could set permissions individually on each Program folder, but that seems like a pain. The multiple versions thing is pretty nice tho.
I thought it was implied that bin vs sbin is a permissions thing. What is outdated in that link? I use Fedora 39 Atomic and, as far as I can tell, even it follows these conventions (and even added /var/usrlocal for themselves).
Strictly, the difference between /bin and /sbin versus their siblings in /usr has to do with what you can fit on the RK05 disk pack that the system boots from. If it comes up too broken to mount the DECtape (or if you're lucky and your institution's not cheap, the second RK05) where /usr lives, you'd better be able to fix it with whatever's in /, because until you do that's all you've got. Hence keeping some binaries there, but not too many; your root volume is only a couple megabytes large.
Too, if /usr is on tape, that's a lot slower in random access than even a contemporary disk, which matters because you also don't have enough core memory to avoid paging binaries - so even things you might not need to fix a broken system still may be worth putting in /bin if you can afford the space, if they're frequently enough used to be worth the speedup.
I believe the difference between /bin and /sbin per se had to do with dynamic versus static linking, with /sbin reserved for statically linked versions of binaries critical to bring up or repair a system - after all, if something in /bin is linked against libraries in /usr/lib, then you still won't be using it if you can't mount /usr.
I'm not so sure about that part, though; it's been a very long time since any of this mattered at all to how Unix is operated in practice, and it is really only of interest to those curious about how the constraints of early hardware informed the evolution of historical filesystem layout conventions. Even I'm not old enough to have actually worked with such systems, although I don't miss it by much, and am certainly old enough to have studied a great deal more about them than some.
edit: If you're really interested in the topic, then you should certainly spend some time with Rob Landley's collection of historical documents [1], which Google is apparently no longer competent to find based on their content - some of this I was actually looking for in the course of composing this reply, and only found it when I happened to search the name on a mailing list message linked in another reply here. So much for "information wants to be free" - apparently on today's Internet there's no money left in making information able to be found.
¹ https://gobolinux.org/doc/articles/clueless.html