The IT department installed and compiled tons of software for the various systems, and AFS had an @sys string that you could put into your symbolic links; it would then dereference to the actual directory for that specific system architecture.
Yes, everything at CERN, at least in the 2000s, was in AFS. Fermilab was also using AFS extensively.
I remember compiling AFS from source for Scientific Linux 3.x because there was a weird bug that prevented the machines from mounting AFS when they were integrated with LCG (before it was renamed to WLCG: https://wlcg.web.cern.ch/).
Well, I'm 50, but AFS in college was superior to all the NFS and NIS silliness I've put up with at the 8 companies I've worked at since then. I wrote another comment about our Unix groups at work and the setuid-root command we use: we type in our password and it changes our groups dynamically.
> The IT department installed and compiled tons of software for the various systems, and AFS had an @sys string that you could put into your symbolic links; it would then dereference to the actual directory for that specific system architecture.
This sounds cool, but I've wondered - couldn't you just stick something like
(I'm actually doing something like this myself; I don't (yet) have AFS or NFS strongly in play in my environment, but of all things I've resorted to this trick to pick out binaries in ~/.local/bin when using distrobox, because Alpine and OpenSUSE are not ABI compatible)
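For what it's worth, a minimal sketch of that kind of trick (the directory layout and variable names here are my own invention, not the parent's actual setup): pick a per-distro, per-arch bin directory at shell startup and put it first in PATH.

```shell
# Choose a bin dir keyed on architecture + distro, e.g.
#   ~/.local/bin/x86_64-alpine/  vs  ~/.local/bin/x86_64-opensuse-tumbleweed/
distro=$(. /etc/os-release 2>/dev/null && echo "$ID")
sysdir="$HOME/.local/bin/$(uname -m)-${distro:-unknown}"
PATH="$sysdir:$PATH"
echo "$sysdir"
```

Dropped into ~/.profile, each container then resolves a command name to its own ABI-compatible build, much like @sys would.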
You could do that, but @sys is resolved in the kernel, so you can use it in symlinks and just add .../bin to your PATH (with bin -> .bindir.@sys), and thus it works for non-PATH cases too...
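To illustrate the semantics (the cell and package paths here are made up): the AFS client substitutes the host's sysname for the literal token @sys during path lookup, which a userland sketch can only approximate with text substitution.

```shell
# One symlink serves every architecture:
#   ln -s .bindir.@sys bin     # resolves to .bindir.amd64_linux26 on that host
# Userland approximation of what the kernel does during lookup:
sysname="amd64_linux26"        # on a real client, `fs sysname` reports this
resolved=$(echo "/afs/example.edu/sw/gcc/@sys/bin" | sed "s/@sys/$sysname/")
echo "$resolved"               # -> /afs/example.edu/sw/gcc/amd64_linux26/bin
```

The point of doing it in the kernel is exactly what the parent says: any program that opens a path gets the right build, not just shells that consult PATH.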
Not as advanced as what Domain/OS did before it - the kernel straight up evaluated arbitrary environment variables in the path resolver, which they used for things like selecting "personalities" (BSD vs SYSV vs "native") but it wasn't restricted to any particular names. "We don't make them like that anymore..."
What's really interesting about @sys is that it supports a search-path at run-time, so the resolution can walk back through a list of target systems to find an available one.
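A sketch of that fallback (sysnames and paths invented): resolution walks the configured list in order and takes the first entry that actually exists, so a newer client can still fall back to an older platform's build.

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/sw/amd64_linux24"               # only the older build is present
resolved=""
for sys in amd64_linux26 amd64_linux24; do     # hypothetical @sys search list, newest first
    if [ -d "$tmp/sw/$sys" ]; then
        resolved="$tmp/sw/$sys"
        break
    fi
done
echo "${resolved:-no match}"
rm -r "$tmp"
```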
My university was similar (SGI, HP-UX, IBM AIX, Sun, Linux, SCO), but they used NFS to mount home dirs locally to the computer clusters, which wasn't as cool because you couldn't mount home-dir volumes remotely like on an AFS campus. They also, unfortunately, used original NIS, which let anyone extract all users' password hashes with a simple `getent passwd`. I proceeded to run John the Ripper against a dump of everyone's entries and found 60 passwords in 30 seconds, including several tenured professors'.
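The underlying hole is easy to audit for: with shadow passwords, the hash field served by `getent passwd` is just a placeholder, while classic NIS handed the real hashes to any user. A rough check (my own one-liner, not from the original post):

```shell
# Count entries whose password field looks like a real hash rather than a
# shadow placeholder ("x", "*", "!", or empty). Classic NIS would report
# every user; a modern shadowed system should report 0.
exposed=$(getent passwd | awk -F: '$2!="x" && $2!="*" && $2!="!" && $2!="" {n++} END {print n+0}')
echo "accounts with exposed hashes: $exposed"
```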
Those were the days when portability and longevity were important, and there wasn't as much of a monoculture or as much churn of incompatible code and language features.
When working on a class project, it was great that I, as a normal user, could create an ACL group, add and remove users from it, and then give it read or write (or both) permissions on a directory in my account.
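For reference, on OpenAFS that workflow looks roughly like this (group and directory names invented); `pts` manages the protection database and `fs setacl` grants per-directory rights:

```shell
pts creategroup alice:proj                    # ordinary users can create groups prefixed with their own name
pts adduser -user bob -group alice:proj
fs setacl -dir ~/proj -acl alice:proj write   # "write" is shorthand for rlidwk; "read" is rl
pts removeuser -user bob -group alice:proj
```

No admin ticket needed for any of it, which is exactly the class-project use case above.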
At my job we have hundreds of projects, and there are strict security requirements, so we only have permissions to the projects we are assigned to. The problem is that software and libraries live in different directories with different permissions, so they can't just add us to every group: it would exceed the limit on the number of Unix groups. So we have another command that is setuid root; we type in our password and it changes our Unix groups on the fly. Adding people to the groups has to go through a web site where only the project lead can add people, and even then it can take a day because some VP needs to approve it.
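The limit being worked around here is real and easy to inspect; the historically painful one was NFS's AUTH_SYS credential, which only carries 16 group IDs per request regardless of what the local kernel allows:

```shell
cat /proc/sys/kernel/ngroups_max   # kernel cap on supplementary groups (65536 on modern Linux)
id -G | wc -w                      # how many groups this session actually holds
```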
Last time I tried, mounting a WebDAV server in the same manner as an NFS or CIFS server was a hot mess. Some FUSE client tried to fully download and cache everything in ~/.cache or something.
It's been a while, but I haven't tried anything since then.
CIFS is stupid for UNIX <--> UNIX, and NFS has that UID mess...
I tried it back when OpenBSD included a client. Hooking into the public list was amazing; you could almost see an alternate web in there, but based on the Unix filesystem instead of HTTP. Unfortunately, AFS support was dropped in 5.2.
The big disappointment for me at the time was that OpenBSD did not also include a server component, so it was comparatively much harder to use AFS in your own infrastructure. The lesson: always make the effort to include the server side if possible. Without it, you feel like a second-class citizen.
It's not; it's much simpler. 9P is about as close to a platonic ideal of Unix I/O turned into RPC as possible, whereas AFS included considerable infrastructure, for example to efficiently distribute large read-only datasets.
You could partially achieve something similar by layering multiple services in Plan 9, but it would often mean switching over to a different protocol at some point.
My university in the 1990s had hundreds of Unix workstations from Sun, HP, DEC, IBM, SGI, and Linux.
It was all tied together using this so everything felt the same no matter what system you were on.
https://en.wikipedia.org/wiki/Distributed_Computing_Environm...
https://en.wikipedia.org/wiki/Andrew_File_System
The IT department installed and compiled tons of software for the various systems, and AFS had an @sys string that you could put into your symbolic links; it would then dereference to the actual directory for that specific system architecture.
https://docs.openafs.org/Reference/1/sys.html
https://web.mit.edu/sipb/doc/working/afs/html/subsection7.6....
"On an Athena DECstation, it's pmax_ul4; on an Athena RS6000, it's rs_aix31" and so on.