The 9P filesystem lives on in a few "modern" places, like WSL and QEMU, where it makes a good bridge between a host OS and a container or VM.
Unfortunately it's not implemented for Windows hosts. There's a patch in the works but the review of the first submission came back with a lot of requested changes. They're planning another submission though so hopefully it makes it in.
I think GP meant to say (the extremely confusingly named) virtio-fs, as opposed to virtfs/9pvirtio.
Virtio-fs is independent of 9P and has optional support for using shared memory, which greatly increases performance. It also maps better to the permissions and metadata of "modern" filesystems, IIRC. I'm not sure whether 9pvirtio had this problem, but I remember coworkers hitting permission problems with the 9P bridge in WSL2 (Plan 9's permission system is very simple and doesn't map well to other VFSs).
> Plan 9's permission system is very simple and doesn't map well to other VFSs
Plan 9 is a pure VFS OS. 9P uses regular Unix permissions, which map just fine. The major issue is that since Plan 9 is all VFS, there are no crufty Unix leftovers like hidden dotfiles or hard/soft links. User-specific configuration files belong in $user/lib, and bind replaces hacky links. These old Unix hacks were accommodated in 9p2000.u. Further extensions to .u resulted in 9p2000.L, which adds some Linux metadata stuff (I can't remember exactly what right now; my memory of 9p2000.u/L is fading).
When it was first announced, it wasn't clear it would end up being so esoteric, but I remember that Ken Thompson, Dennis Ritchie, and Rob Pike were working on it, and maybe Brian Kernighan too?
Timing-wise, it was too late for what they ended up doing, but anything all those people worked on is bound to have some interesting ideas.
It's been weird watching the rise of iOS loosen the reliance on, or even awareness of, files, when 9P was all about files. Files for everything!
> I had the impression that it was held back by its proprietary license.
That was my impression, too. And then it got relicensed to a FOSS license... that was GPL-incompatible. And then it got relicensed again to GPLv2, and then it got relicensed yet again to its current MIT license.
In any case, each of these license changes was too little too late to really improve Plan 9's practicality from a "can I legally use this?" standpoint.
I tried to install it once, a long time ago (around 2000?). Linux was already taking off, the BSDs still had a chance of overtaking it, and other Unices were becoming more accessible to end users.
But basically, the hardware support was pretty bad. It took me a long time to find a SCSI controller that was supported, even though ATA disks had already been standard for years. Same with network or graphics cards.
Nowadays, if esoteric OSes would just support standard VMware hardware, they'd be much more successful (looking at you, Fuchsia!).
It's a similar story with the Self programming language (https://en.wikipedia.org/wiki/Self_(programming_language)). There are many software engineers and computer scientists who have never heard of Self, but Self's prototype-based approach to object-oriented programming had a major influence on JavaScript, and a lot of the work done on making fast virtual machines for Self made it into the Java Virtual Machine.
There’s no need to be snarky, especially when you’re missing the point. htop missing a race on a PID is no big deal; GDB doing so is a critical issue. /proc is nice to have for diagnostics, but once again it makes for a terrible API.
Sad but true. Android and iOS are the worst offenders I've seen. They're apparently trying to completely get rid of the concept of files altogether, which is really unfortunate for anyone wanting to build cool stuff on those platforms.
I understand there can be security benefits, but at what cost?
> They're apparently trying to completely get rid of the concept of files altogether
I’ve never followed Android very closely but iOS began with no user-facing notion of files and added one many years and major releases later. Granted they’re not Unix everything-is-a-file, as in you can’t execute them or do all sorts of other everything-is-a-file operations with them. But adding mostly-general file functionality is definitely not trying to get rid of the concept.
How does it know where to send it? Does it just assume it should use the contents of the Host header and send the request there or is the "url" in your path the destination? Does it support things like SNI?[1] Can you spoof that? Does it expect clients to parse out the raw output of the HTTP response? I have so many questions. From a quick glance this seems a lot harder to work with than curl for both trivial and non-trivial uses.
I tried to find docs on it, but couldn't. If you could link me to some, I'd appreciate it.
[1] After posting this I remembered, like a doofus, that this is an OS from the '90s. Of course it doesn't. But a similar question could be asked about any other TLS-level setting; that's just one I've had to spend more time debugging with curl in the past.
To be fair you'd probably end up with a tool like curl just for setting all the options and headers, but it would just be a wrapper around sending the request to the url file. Just like there are tools for reading and parsing files in /proc.
I think you're looking at the wrong end of the timeline. The people who built it had already built UNIX and C (and one of them would later write Limbo). Go's initial splash in the "press" was helped massively by that pedigree.
I meant that if the same people had gone to work at some small, unknown company, nobody would care, since none of their software would have the pull of something pushed by Google.
I know that's what you meant and you absolutely have it backwards. These are the people who built some of the most iconic technology BEFORE building Plan 9. Are you seriously not familiar with their work on UNIX and C at Bell Labs? Because all of them were legends long before going to Google. Them going to Google was a big deal because of who they already were, not Google. If you're not old enough to remember their ideas being a big deal before they joined Google you need to go do some reading because you're lacking pretty real historical context.
I'd actually love to see such a paper. Plan 9 threading is more like coroutines, but it is certainly possible to run multi-core/multi-process code.
That said, I can't say I've seen anyone try to write anything that scales like nginx on Plan 9. That doesn't mean it hasn't happened, I've just not seen anyone talk about it too much.
It did run on IBM's Blue Gene for a bit (https://www.usenix.org/legacy/event/usenix07/posters/vanhens...) but as you can see that was some 15 years ago, and I'm not sure we're talking about anything even remotely similar to a single computer handling tons of concurrent connections.
Plan 9 can also run Go binaries, but, again, not really sure we're talking about the same thing as nginx-level scale.
> I'd actually love to see such a paper. Plan 9 threading is more like coroutines, but it is certainly possible to run multi-core/multi-process code.
Rob Pike was one of the main developers behind both Plan 9 and Go, and was involved in concurrent-programming research focusing on CSP.
Multi-processing was a main focus of Plan 9's design, and it works well because procs are cheap to spawn on Plan 9. Procs are also the smallest unit of execution on Plan 9; threads are just lightweight procs with a shared heap to pass pointers around. Thread stacks can be shared too, by being allocated on the heap via fork(2)'s RFMEM flag (it's all done with malloc in the background).
The issue with vanilla (aka labs, or legacy) Plan 9 is that there's a hard-coded limit of 2k procs, statically allocated at boot. This was a pragmatic design decision. The unfortunate side effect is that vanilla Plan 9 falls over under any workload that requires spawning lots of procs, like handling web requests. This is actively being worked on by 9front developers, so sites hosted on 9front should hold up better (patches welcome :-).
> That said, I can't say I've seen anyone try to write anything that scales like nginx on Plan 9.
Because you really don't need those big web-serving monoliths on Plan 9. You wire things up using rc scripts and programs like execfs (which implements CGI), plus httpd/tcp80 or another web-serving listener, and you sandbox code using namespaces. Plan 9 is more true to the Unix philosophy: more Unix than Unix. (edit: execfs is experimental but available on shithub.us along with other web stuff like tcp80)
Plan 9's libthread is literally goroutines, just without the syntax sugar; up until version 1.5, Go shipped a significant chunk of the Plan 9 standard library with itself. It's also where Go's saner networking interface came from, since Plan 9 was designed to support networking from the start rather than depending on a quick-and-dirty port of a non-Unix stack like BSD sockets.
Would love to "Lego" block build hardware and software components on the following stack:
1. RK3588 Rockchip 12in Thinkpad cross HTC style slideout keyboard form factor
2. Plan 9 Legacy OS
3. seL4 microkernel, Qi cross Racket IDE
and compete with what people can do scaling out to the "cloud" at those price thresholds. Maybe a gifted individual or team will realise the sentience of HAL 9000 from the movie 2001.