Heh. The OS landscape has changed over time, as disks have gotten much faster and much larger, and some of the design decisions made decades ago no longer make sense.
It's fair to say that some of the specific design constraints no longer apply.
However, that's not to say that we haven't arrived at similar conclusions (a root filesystem accessible at boot, multiple hierarchies of additional storage under, say, /usr, /usr/local, etc.).
I mean, the ultimate conclusion of the UsrMerge argument would be to put all directories on root.
I find the justifications for UsrMerge to be specious at best, and better described as counterfactual. I strongly suspect that a large part of the reasoning is Red Hat's chronic inability to enforce discipline among its software packagers (both within and outside of Red Hat).
I've always wondered: are the directories hard-coded into the kernel, or are there environment variables somewhere pointing to them?
I can guarantee that one of the bigger problems with mainstream Linux adoption is that the top level of the file system is covered in OS directories. For all its faults, keeping the OS under C:\Windows keeps most grandmothers from deleting system32.
In the mid-80's to 90's when disk space was at a premium, it was quite common to have one big file server that had the /usr/local, /opt, /home, etc. hierarchies on it, shared out over NFS to the other desktop machines on the network.
As most/all of the machines would be similar architectures, they could share the binaries. In cases where there was more than one architecture, there would frequently be multiple file servers, one per arch. Another remnant of this on modern systems: "/usr/share" usually contains data like machine-independent help files that could be mounted across architectures.
The result of this was that users could hotseat from machine to machine, and site-specific software would only need to be installed once, which was a space/complexity win.
/usr/local/ is a place to put software that's not managed by the distribution's package manager. It keeps the systemwide package management from overwriting or otherwise disrupting software it doesn't know about.
Software in /s?bin/, /usr/s?bin/, /usr/lib/, etc. is not guaranteed to work on another machine. A binary could be for a different architecture, even if it was installed by the main package manager. You can't copy /usr/bin/ binaries from an ARM platform to an x86 platform, for instance. You can't copy optimized crypto software using AES-NI instructions to pre-AES-NI systems; it'll crash. You can't even reliably copy binaries from a different machine with the same CPU model, if the machines are using different package sets that will result in library incompatibilities.
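To make the architecture point concrete, here's a minimal sketch in Python (my choice of language; nothing here depends on it) that peeks at an ELF binary's header to see which CPU it was built for. The e_machine values listed are a small, well-known subset, and note that ISA extensions like AES-NI and library ABI mismatches don't show up in the header at all, which is exactly why copying binaries around is risky even within one architecture.

    import struct, sys

    # A small, well-known subset of ELF e_machine values (not exhaustive).
    E_MACHINE = {0x03: "x86", 0x28: "ARM", 0x3E: "x86-64", 0xB7: "AArch64", 0xF3: "RISC-V"}

    def elf_arch(path):
        with open(path, "rb") as f:
            ident = f.read(16)                        # e_ident
            if ident[:4] != b"\x7fELF":
                return "not an ELF binary"
            endian = "<" if ident[5] == 1 else ">"    # EI_DATA: 1 = little-endian, 2 = big-endian
            f.read(2)                                 # skip e_type
            (machine,) = struct.unpack(endian + "H", f.read(2))  # e_machine at offset 18
            return E_MACHINE.get(machine, hex(machine))

    if __name__ == "__main__":
        print(elf_arch(sys.argv[1] if len(sys.argv) > 1 else "/bin/ls"))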
It's "locally managed". Under, say, Debian Policy, a Debian package may create directories under /usr/local, but it may not create files, or delete either files or directories. Which means that the local administrator can add/remove software here.
/opt is another directory that's generally intended for third-party software. So, say, you'll find /opt/oracle/, /opt/dell/, /opt/google/, /opt/ibm/, etc., in which various enterprise tools go. For space rationalization, there's no reason you can't move this to /usr/local/opt and symlink it back up to the root filesystem, as some prefer and I generally do.
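If you do relocate /opt like that, the mechanics are just a directory plus a symlink. A minimal Python sketch, assuming you're root and /opt is still empty (on a live system you'd move its existing contents into /usr/local/opt first):

    import os

    target = "/usr/local/opt"                # real home for the /opt trees
    os.makedirs(target, exist_ok=True)
    if not os.path.islink("/opt"):
        os.rmdir("/opt")                     # only succeeds if /opt is already empty
        os.symlink(target, "/opt")           # /opt -> /usr/local/opt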
Where things get confusing is when you're on, say, a Mac and install software from the DarwinPorts project, which creates a managed tree ... under /usr/local. This strikes me as somehow wrong (/opt/darwinports/ would be a better choice IMO).
I've been using Linux for about 4 years now, and I've tried to find a document like this a few times and failed. This is probably the best reference on this topic I've seen yet.
I find the explanation provided on the busybox mailing list to be far more insightful: http://lists.busybox.net/pipermail/busybox/2010-December/074...