When I first started packaging software for $employer, we adhered to the FHS. Then we started shipping software to corporate customers, who have Policies and Rules and Expectations that are diametrically opposed to the FHS. They don't want logs in /var/log, nor binaries in /usr/[local]/bin, because then someone with root access has to be involved in the installation process. They don't even want an RPM if they can help it, for the same reason. In the end we revamped the whole shebang: everything now lives in our own folder structure, whose root can be anywhere and whose contents can be owned by arbitrary users, because customers are afraid of root.
They don't care that this makes their installation less secure, because they are convinced that it makes things more secure. shrug
You mean because that's obviously way more sane than spewing the installation of every piece of software all over the system.
There's a reason it's basically only Unix that works like that. Slightly disappointing that you were directly confronted with the problems of the FHS but still think it's better... because customers are idiots or something.
> You mean because that's obviously way more sane than spewing the installation of every piece of software all over the system.
> There's a reason it's basically only Unix that works like that.
I think you're underestimating how common it is for software on both Windows and macOS (the former more than the latter) to spew its bits all over the system. In the case of Windows especially, this includes things like the Registry and events sent to Event Viewer.
As long as you don't move them around without notifying the proper package manager: which one, by the way? There is dpkg, and apt, and npm, and pip3, and...
If you install into the root hierarchy then you must use the one that your OS is using. Creating such a package for your application can be a nuisance but it really is not rocket science.
You can only use language-specific package managers like npm and pip3 if you install into a specific subdirectory outside of the OS file hierarchy conventions.
Disappointing? Sorry to read I've disappointed you, dear internet user. I've administered a lot of systems over the years, and they often contain tools I don't really know and didn't install. It's always much easier to look for things in the usual places, like logs in /var/log, instead of having to figure out where each random application decided to dump its stuff.
I'm quite fond of the path of least surprises.
Splitting an installation across multiple locations has additional benefits. It's a good thing that the application user can't modify the binaries (which is pretty much enforced when the binaries live in /usr). You can achieve the same thing when dumping everything in /opt/$foo, but that's pretty much the worst of both worlds imo.
Not a *nix fan but a computer science professional here; my limited understanding is that the Debian people adopted their own methods of breaking things up and re-ordering them. There is no *nix standard despite multiple efforts at standards, only de-facto practices; powerful and arrogant people, along with hurried and clueless people, repeatedly build setups that do not follow standards, as detailed in the post.
To prevent damage from malware running with user privileges. If the user really needs to modify the binaries, they can reach out for sudo or ask the sysadmin. It's such a rare and risky use case that it warrants a barrier normally being in place.
None of those are like what Unix does though. I'm not saying you shouldn't store configuration data in some other location to the app itself. That's quite reasonable.
What Unix does is mix up the contents of all the apps themselves. All app binaries are in `/usr/bin`. All libraries are in `/usr/lib`. All their data is in `/usr/share`. No other OS does that as far as I am aware.
When I built my own packages, back in the old days, I would never adhere to this standard. I found spreading different pieces and parts of the same app all over the filesystem to be a bit bizarre. If I built Apache, it was built with a prefix of /usr/local/apache_VERSION, with a link from /usr/local/apache. This made it easy to upgrade: I'd build a new version (with a new prefix), migrate the configs (if needed), change the symlink (which was really just for ease of tracking the current version), then blow away the old version. This was back in the "servers as pets" days.
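For what it's worth, the "change the symlink" step is just the usual atomic pointer flip. A minimal Python sketch, assuming a hypothetical, already-built /usr/local/apache_2.4.62 prefix (the version number is made up):

```python
# Minimal sketch of the symlink-swap upgrade described above.
# Assumes the new version has already been built and installed
# into its own prefix (hypothetical path).
import os

PREFIX_ROOT = "/usr/local"
NEW_VERSION_DIR = os.path.join(PREFIX_ROOT, "apache_2.4.62")  # hypothetical
CURRENT_LINK = os.path.join(PREFIX_ROOT, "apache")

# Create the new link under a temporary name, then rename it over the old
# one; rename(2) is atomic on the same filesystem, so /usr/local/apache
# always points at a complete installation.
tmp_link = CURRENT_LINK + ".new"
if os.path.lexists(tmp_link):
    os.remove(tmp_link)
os.symlink(NEW_VERSION_DIR, tmp_link)
os.replace(tmp_link, CURRENT_LINK)  # the atomic `ln -sfn` equivalent
```

Once the link points at the new prefix, the old /usr/local/apache_VERSION directory can be blown away at leisure.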
The XDG specification is more sensible, in my opinion - configurable user-specific directories for things like executables and libraries, with a sane default prefix of "$HOME/.local".
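The lookup logic the spec describes mostly boils down to "use the environment variable if it is set and non-empty, otherwise a fixed default under $HOME". A small sketch of those defaults (not a complete implementation of the spec):

```python
# Rough illustration of the XDG base-directory defaults.
import os

home = os.path.expanduser("~")

def xdg(var: str, default: str) -> str:
    # Per the spec: use $VAR when set and non-empty, else the default.
    value = os.environ.get(var, "")
    return value if value else os.path.join(home, default)

data_home   = xdg("XDG_DATA_HOME",   ".local/share")
config_home = xdg("XDG_CONFIG_HOME", ".config")
cache_home  = xdg("XDG_CACHE_HOME",  ".cache")
state_home  = xdg("XDG_STATE_HOME",  ".local/state")
print(data_home, config_home, cache_home, state_home)
```

(User executables conventionally go in $HOME/.local/bin, which has no environment variable of its own.)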
I do think that the practices that evolved in early unix, and which are, after a modest amount of evolution, codified in the FHS, are not a good fit to modern systems, where software comes in packages, and you want to be able to deal with packages differently according to whether they come from the system package manager, vendors, internal developers, etc.
I wonder if we could have a rework of the classic layout where we have /usr/share, /usr/<arch>/bin, /usr/<arch>/lib, /etc, /var/lib, /var/cache, and /tmp, but then inside those, enforce a rule that everything is in a subdirectory named after its package. So you would have dropped your stuff into /usr/x86-64/bin/elricsapp/elricsapp, /etc/elricsapp/elricsapp.conf, etc. Then the customer sysadmins could have simply created those directories, chown'd them to a new elricsapp user, and installed your software as a normal user.
You'd either have an astoundingly long-winded PATH and LIBPATH, or have some procedure for collecting symlinks to everything in /usr/bin.
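The symlink collection doesn't have to be fancy, though. A toy sketch, using the hypothetical /usr/x86-64/bin/<pkg>/ layout from the parent comment and a made-up flat directory to put on $PATH:

```python
# Toy "collect symlinks" pass: walk per-package bin directories and link
# every executable into one flat directory that can sit on $PATH.
import os
import stat

PKG_BIN_ROOT = "/usr/x86-64/bin"    # hypothetical per-package layout
FLAT_BIN = "/usr/x86-64/.flat-bin"  # hypothetical directory placed on $PATH

os.makedirs(FLAT_BIN, exist_ok=True)
for pkg in sorted(os.listdir(PKG_BIN_ROOT)):
    pkg_dir = os.path.join(PKG_BIN_ROOT, pkg)
    if not os.path.isdir(pkg_dir):
        continue
    for name in os.listdir(pkg_dir):
        target = os.path.join(pkg_dir, name)
        mode = os.stat(target).st_mode
        if not (stat.S_ISREG(mode) and mode & stat.S_IXUSR):
            continue  # only link regular, executable files
        link = os.path.join(FLAT_BIN, name)
        if os.path.lexists(link):
            os.remove(link)  # naive conflict handling: last package wins
        os.symlink(target, link)
```

The hard part isn't the walking, it's conflict handling when two packages ship the same command name, which is exactly the sort of thing real package managers already arbitrate.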
Fair. The only reason to split it up is to make it easier to do things like putting the read-only bits on a read-only filesystem, backing up all the config in one go, etc.
This kind of document really interests me. My project is called Linux on the Web, and has been under fairly active development for the past decade. I am trying to innovate on both operating system theory as well as application development best practices, all at the same time.
The current top level directories in LOTW are: bin, etc, home, mnt, tmp, usr, var and www. There used to be a dev directory (and others) that really just added to the complexity of the codebase more than anything else. LOTW development is a constant balance between the age-old "complexity vs functionality" duality in systems design theory.
How can you call a system "Linux" if you don't expose system operations in a file-like API?
Granted, /dev is a pretty crappy way of doing it, but /proc, /sys and (a bash shim, not actually a file) /dev/tcp are what I believe to be expected basic functionality of a modern "Linux".
Does anyone have any resources for other operating systems? This might be one of those things that I need to test drive OSes for but it'd be neat to just read about.
It’s a pity that every single docker container throws away those standards and uses some weird /prometheus directory. I’m still putting my apps into /opt or something like that.
What in particular are you referring to? A lot of software still depends on paths that adhere to the FHS. Dynamic linkers, etc all bake in assumptions about search paths based on FHS. Certainly application developers will violate FHS when placing config files etc but I’m curious what the Distros are tossing out?
The only "major Linux Distro" that I know that actually doesn't follow FHS is NixOS, and even NixOS has FHS-compatible overlay options for handling proprietary/pre-compiled software.
Ummm, there is general acceptance of a newer version of the FHS, version 3.0.
Going by that version, you must really be thinking of two distros:
- NixOS, which like totally tossed FHS 3.0 out, 100%
- RedHat/Fedora, which are smushing /bin and /usr/bin together. This breaks any link with the embedded world (and will come back to bite them when Intel ME becomes abandonware due to the pervasiveness of malware and their inability to keep it out).
Not quite. NixOS still relies heavily on the FHS layout. It just dropped the usage of some global directories in order to provide features it otherwise couldn't have.
More specifically:
* Only a few top-level directories are actually dropped, like /bin, /lib, /share, and /usr. Well, technically /usr exists, but it only has /usr/bin/env in it, and /usr/bin is excluded from the default $PATH.
* Individual packages placed under /nix/store/ still adhere to the FHS layout. While packages technically don't need to adhere to it, Nix expects commands to be placed under ./bin, man pages under ./share/man, and so on and so forth.
> RedHat/Fedora, which are smushing /bin and /usr/bin together. This breaks any link with the embedded world (and will come back to bite them when Intel ME becomes abandonware due to the pervasiveness of malware and their inability to keep it out).
FYI, UsrMerge is being (rightfully) adopted by multiple distros including SUSE, Debian, and Arch.
Yes, that is the decision made by said distros in the best interest of working toward an easier but REMOTE recovery mechanism.
One wouldn't want that to come at the cost of increasing the overhead of remote (and thus general) system administration, nor of weakening the security stance of the host as a whole.
It may look good to smush [/usr]/bin together in the short term, but the introduction of vastly more attack surface should give any serious security architect pause.
A tiny smattering of Linux distros are smushing /tmp and /var/tmp together.
That one is never a good idea, not even on production servers or in the embedded world.
/tmp gets emptied with each reboot.
/var/tmp is about surviving across reboot. But the app would run just fine if it were ever to be emptied.
# Alternatively
Some have said that /var/cache would be the more appropriate use over /var/tmp.
But /var/cache is for data a daemon uses for a cold restart and for reconstructing its internal structures after a reboot, restart, reload, or worse, an Ethernet cable disconnect when running only under systemd-networkd (the daemon gets killed by systemd). Your prized networked daemon falls into this category.
/var/tmp is generally for session-specific data (which is expected to survive across reboots), as opposed to a daemon's internal structures. The ISC DHCP client is one example.
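To make the distinction concrete, here's a small sketch: scratch data goes under /tmp and can vanish at any reboot, while session state that should survive a reboot (a made-up lease cache, standing in for something like the ISC DHCP client's leases) goes under /var/tmp, with the app prepared to rebuild it if the file is ever cleaned away:

```python
# Illustration of /tmp (disposable scratch) vs /var/tmp (reboot-surviving
# session state that the app can still rebuild if it disappears).
import json
import os
import tempfile

# Scratch file: fine to lose at any time, so /tmp is the right place.
scratch_fd, scratch_path = tempfile.mkstemp(prefix="work-", dir="/tmp")
os.close(scratch_fd)

# Reboot-surviving session state (hypothetical path and contents).
lease_path = "/var/tmp/elricsapp-lease.json"
try:
    with open(lease_path) as f:
        lease = json.load(f)
except (FileNotFoundError, json.JSONDecodeError):
    lease = {"address": None}  # rebuild from scratch if it was cleaned
with open(lease_path, "w") as f:
    json.dump(lease, f)
```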
Solaris has had /bin and /usr/bin merged for like two decades. The difference nowadays is mostly cosmetic; one is a symlink to the other and that's mostly it. It could make a difference in the eighties, when there was a reason for a separate /usr.
> This breaks any link with the embedded world (and will come back to bite them when Intel ME becomes abandonware due to the pervasiveness of malware and their inability to keep it out).
Can you expand on that? What does /usr/bin have to do with embedded systems programming??
In a really, really secured Unix environment, the /bin is typically a read-only partition.
In a secured embedded world, this would be a separate flash hardware chip whose read-write is controlled by a hardware switch.
Busybox would cover most of the entries under /bin, but package management often has to negotiate out-of-band with the admin doing upgrades when bigger "busybox"-like GNU utilities are used.
Intel ME and its storage of binary blobs have been problematic, defying any clean delineation of nested security boundaries, notably where the TPM does not cover Intel ME, to the point of Intel making another stab at TPM.
So where do you put the upgrades for Intel Management Engine?
This paper indirectly covers Intel's future plans for dealing with these shortcomings.
Today's fight is about making these partitions as easy to upgrade as possible (for /bin). My assertion is that remote upgrade^H^H^H^H^H^Hrecovery should never be a possibility when your /bin code has been thoroughly vetted and secured.
> In a really, really secured Unix environment, the /bin is typically a read-only partition.
> In a secured embedded world, this would be a separate flash hardware chip whose read-write is controlled by a hardware switch.
I fail to see how this is secure without also securing /etc (PATH, LD_LIBRARY_PATH), /lib, /usr/lib, /usr/bin, /usr/local/bin - and at this point, you might as well make the whole root mount read-only and have a separate block device for the RW parts instead.
> Intel ME and its storage of binary blob has been problematic of defying clean delineation of nested security boundary, notably when TPM is not covering Intel ME to a point of Intel making another stab at TPM.
Intel ME is _trusted computing_, along with all the sad characteristics that "trusted computing" implies; you either have faith that it is working correctly, or you should not be building anything that needs security on a platform that embeds it.
A secured app can empty PATH and resolve the directory of its own executable before it shells out to another app.
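In code, that hardening step looks roughly like this sketch (the /proc/self/exe trick is Linux-specific, and the sibling "helper-tool" binary is hypothetical):

```python
# Sketch: drop the inherited environment, resolve the directory holding our
# own executable, and only exec helpers by absolute path from there.
import os
import subprocess

# For a compiled program this is its own binary; run from a Python script
# it resolves to the interpreter, which is fine for illustration.
self_dir = os.path.dirname(os.path.realpath("/proc/self/exe"))

clean_env = {
    "PATH": "",   # nothing can be resolved via PATH lookup
    "LANG": "C",
}

helper = os.path.join(self_dir, "helper-tool")  # hypothetical sibling binary
subprocess.run([helper], env=clean_env, check=True)
```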
I think it’s great that you think it is “trusted”, but I prefer the “trust but verify” approach.
That’s why using a separate hardware flash chip, one that cannot be written by CPU instructions thanks to a hardware switch, has become the last bastion of security nowadays.
It is just that it becomes an inconvenience for ease of use in this day and age.
> I think it’s great that you think it is “trusted”, but I prefer the “trust but verify” approach.
I don't trust "Trusted computing" at all. It is, by its own definition, rotten to the core.
> The core idea of trusted computing is to give _hardware manufacturers_ control over what software does and does not run on a system by refusing to run unsigned software.
"hardware manufacturers" in this case meaning, of course, Intel, not whoever is using their product to build actually productive things.
RHEL/Fedora going for unified /bin is about system recovery.
On the types of systems you're expected to install RHEL on, if you can't mount the root filesystem correctly, then /bin isn't going to help you either - what you need instead is for dracut to put your repair/minimal tool set in your initramfs.
System recovery should consist of two distinct and separate components:
- a simple, hardened and vetted “handler”
- the upgradable components
RedHat now chose to combine the two.
And RedHat’s current approach is muddling the security boundary between the two, as my experience shows. This is not a new trend but a constant battleground that I have personally fought on across many RedHat distros since 1998.
After all, their customers are overwhelming them with demands for ease of use in system recovery, as well as for security against persistent malware.
But you cannot do both. That’s why Intel is taking another stab at TPM/ME.
While Intel is reworking, the current and prudent thing to do is to lock down /bin (expand the initramfs mountings, and I think this should include a separate /usr mount so that / can be mounted read-only for one mission profile and /usr mounted read-only for another).
Of course, in this day and age, the best security is for /bin to be in a flash partition that is protected by a hardware switch, for /bin not to have any network tools/access, and for ME to be totally disabled (and if Intel ME cannot be disabled, then to never, ever use the management Ethernet port that ME is tied to).
> But you cannot do both. That’s why Intel is taking another stab at TPM/ME.
I trust Intel and their closed blob firmware much less than I trust Red Hat.
> While Intel is reworking (...)
> Of course, in these days and age (...) and for ME to be totally disabled (and if Intel ME cannot be disabled, then to never, ever use the management Ethernet port that ME is tied to).
I agree that the current Intel ME isn't trustworthy. It is "trusted computing"/"treacherous computing" after all. But I am also very skeptical about a "new Intel ME" being trustworthy as well.
> then to never, ever use the management Ethernet port that ME is tied to
This doesn't help a lot when the ME has powers over protection ring -1 and can arbitrarily mess with your main system.