Every time I read about Plan 9 I get a little sad. Unix is so good at what it does that the better, that proverbial enemy of the good, doesn't stand a chance.
There are lots of better alternatives to Unix out there. But with the incumbent's strengths being what they are, I think we'll be stuck with Unix for a long time to come.
Possibly the only thing that will change this is if the Unix model is ever deemed inherently broken from a security point of view and one of these other alternatives provides a fix. Performance and conceptual models are not enough to ditch decades of investment in technology that is good enough.
There's a slight problem with simply deeming Plan 9 "better" and Unix (for whatever value of Unix-ness) "good". Depending on your definition of worth, it's probably easy enough to find a system that is even better, i.e. more "pure" in that regard. Obvious examples, if your focus is underlying concepts, would be Oberon and Lisp Machines, where it's straight access to data structures and functions, without superfluous parsing of lines of text and command line arguments...
Compared to really different systems, Plan 9 is still Unix. I don't think the designers would disagree...
Plan 9 is Unix in the way that Unix is still Multics. So is Plan 9 still Multics?
Really, the superficial similarities should not overshadow the fundamental differences. QNX is also Unix by that definition, but in fact is a completely different system under the hood.
Compared to the others I've mentioned? Sure, and that would most likely mean that most systems are closer to Unix than e.g. Smalltalk. Probably even including Windows, unless one has a very favorable view of COM.
I'm getting a new MacBook in a few days (after 5 years with my current machine I desperately need an upgrade) and I'll leave this one for a permanent virtual machine with Plan 9 on it. In some sense it will be my "just write, no idle browsing" machine. There's something about Plan 9 I find great.
If the more optimistic predictions regarding the upcoming generation of non-volatile memory are true, then we may see persistent storage and main memory merging. I think that would be a window of opportunity for new OSes, though I don't doubt that Linux could be made to run under that model too, somehow.
Unix will stay as long as the machine model is Von Neumann with shared memory and interrupts. If any architecture is sufficiently remote from that model, especially something without shared memory, Unix won't stand a chance.
Multiprocessors weren't commonplace until users of the Windows 9x kernel were dragged kicking and screaming to the XP kernel. 64-bit processors were not commonplace until AMD came out with a clever way to make them look like 32-bit ones that could run 32-bit Windows.
In the meantime, Unix users had been using 64-bit multi-processor boxes for ages. Yet there was no mainstream consumer 64-bit multi-processor architecture before there was a 64-bit multi-processor mainstream consumer OS.
The same applies here - most current improvement is directed towards making current x86 and ARM computer architectures faster because there is no popular corpus of software that could take advantage of different architectures. Sony, IBM and Toshiba tried with the Cell and failed. The Xeon Phi may prove to be an interesting stepping stone in that direction but it too uses a shared memory model.
I'd love to see some radical ideas tried, but, right now, I think I won't.
It's a bit tragic that the most popular OSs of today are the bastard descendants of the most popular mini-computer OSs of the 70's.
Although Unix was not originally designed for multi-processors, its main run-time abstraction (process networks) is easy enough to port to any platform with a common store. So there's not much to discuss about single-core vs multi-core: it's either one von Neumann machine or multiple von Neumann machines connected to a single abstract memory.
Remove the shared memory, though, and the issues start to arise.
You say "radical ideas tried", but I believe that is not the relevant point. Radical ideas typically don't fly precisely because they are radical. However, you can already see things happening that are not radical but that break the Unix machine model very hard:
- "accelerators": these are really fully-fledged co-processors with their own local memory. There is no easy way to conceptually delegate Unix threads to accelerators precisely because they don't share the memory. (With recent GPUs the architecture was modified to actually share memory between the CPU and GPU, so things become possible, but not all "accelerators" have that feature.)
- on systems-on-chip you now have scratchpads next to cores in the architecture. Some cores may not have access to the "main" memory of another core (e.g. the radio controller in telephones) although they are fully-fledged cores as well. Because of this lack of shared memory, it is not conceptually possible to envision a Unix system where processes access these extra processors transparently.
tl;dr: there are already hardware architectures with separated memories, and Unix can't cope with that easily because its main abstraction requires shared memory.
Linux is better. In comparison, Plan 9 is like primordial soup, whereas Linux is an advanced alien life form from the future.
There are so many parts of Linux that are far better than Plan 9, and polished for production workloads. Real-life solutions are sometimes messy, and take many iterations to get right.
Sure, elegant and simple maths is good. But if the maths is just wrong, and doesn't work, then the more complicated maths that actually works is better.
Linux is not better. Linux looks better, feels better and so on, but under the hood it is just a rehash of the predecessor of Plan 9.
I'm using Linux every day on both my desktop and a whole army of servers, and I have some experience with both Plan 9 and other (micro-kernel based) systems. Linux (and in fact every Unix flavour) has some systemic problems that those other OS variants attempt to address. That the user experience is less polished does not detract from that.
Maths don't enter into it.
To re-phrase your analogy: Linux is a crocodile, it is ancient, dangerous and well adapted to its environment. So well adapted that it is a force to be reckoned with, even if you're an advanced creature from the future.
I would lean towards the opposite.
Plan 9 is a creature that was designed well from day one, but initially limited by restrictive licensing.
Linux is probably one of the largest hack-and-slash jobs in programming history, but it was available under very free and liberal terms from the beginning and its development was open to everyone; hence it garnered more community support and, as a result, was more successful.
Every time something to do with Plan 9 comes up I lament the fact that the most elegant elements of its design have failed to be realized in practical terms in other operating systems. When they are realized, they usually lose the elegance.
Mostly I think about union mounts (which are very different in Plan 9 from the version that's been integrated into Linux) and the privilege escalation system (which has always seemed, to me, so much better than the sieve that is setuid).
(Note: I'm aware there have been attempts to bring purer versions of these things to Linux, but they never gain any traction.)
In Linux, CLONE_NEWNS requires CAP_SYS_ADMIN, which makes it almost completely useless; it can't be used to the extent and in the way it is used in Plan 9.
I believe once you create a user namespace (https://lwn.net/Articles/528078/) you can then create the other kinds of namespaces (mount, process, etc.) without requiring privileges outside of your user namespace.
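For what it's worth, here's a minimal sketch of that in Go (my assumption: a Linux host with unprivileged user namespaces enabled, kernel 3.8 or later). Creating the user namespace first is what makes the mount namespace available without CAP_SYS_ADMIN in the initial namespace:

    package main

    import (
        "os"
        "os/exec"
        "syscall"
    )

    func main() {
        // Spawn a shell inside fresh user + mount namespaces.
        cmd := exec.Command("/bin/sh")
        cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
        cmd.SysProcAttr = &syscall.SysProcAttr{
            Cloneflags: syscall.CLONE_NEWUSER | syscall.CLONE_NEWNS,
            // Map the invoking user to root inside the new user
            // namespace so that mounting is permitted there.
            UidMappings: []syscall.SysProcIDMap{
                {ContainerID: 0, HostID: os.Getuid(), Size: 1},
            },
            GidMappings: []syscall.SysProcIDMap{
                {ContainerID: 0, HostID: os.Getgid(), Size: 1},
            },
        }
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }

Inside that shell you can bind-mount without being root outside it, which is at least in the spirit of Plan 9's per-process namespaces, even if far less pervasive.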
My eye went straight to it as well, but more because I had passively (embarrassingly) assumed that Plan 9 didn't have non-western fonts by default. 'twas a silly thought.
I really need to run Plan 9 again. I haven't done so since the late '90s. (I thought Oberon was pretty interesting too.)
EDIT: With so many former Bell employees there, my personal fantasy is that Google takes over Plan 9 and starts releasing images of it for Chromebooks. I know it's not going to happen but I think it would be sweet.
It's really a very good piece, which I'd summarize as a catch-22: to be important you must be different; to be successful you must be the same. Linux solves that by being the same. (And indeed Apple, by acquiring NeXT, did as well.)
You can't write a kernel in Go because it has a mandatory runtime system and garbage collection. However, Mozilla's Rust language is similar in some ways, can work runtime-less, and a small demo kernel has been written in it: https://github.com/charliesome/rustboot
Of course you can; kernels are not magic. Many kernels have been written in languages with far more heavyweight runtime systems than Go. A few examples have been provided in the comments. Here is a toy example in Go: http://gofy.cat-v.org/
You need some bits of assembly, but that's true for C, C++, Rust and whatever else as well.
He's not saying that it's not possible to use Go for that. It's just that the compiler links in the Go runtime, which is specific to userland and which, at the end of the line, makes a bunch of kernel syscalls for whatever system you're on.
To make this happen, it should be possible to at least change the runtime, or not use a runtime at all, so the little boot part in assembly could call a Go function in kernel mode.
Of course you'll need a specific runtime, just as in C you can't use the user-space libc. There's no difference between C and Go here.
You can create your own runtime as you see fit. In fact, the Go distribution used to ship with such a bare-metal runtime; search the repository history for the tiny runtime. Reviving it should be easier than porting Go to a new architecture, which is quite easy. Making a stub runtime (equivalent to booting a C kernel with no libc) is not more than half an hour of work.
Go can call assembly and C, and assembly and C can call Go. This is used a lot in the runtime, nothing special about it.
You can't write a kernel completely in Go. But that's the case with every language except assembly (and, with a GCC-specific x86 extension, C). If you were to implement the core runtime in assembly, there'd be nothing to stop you from writing the higher-level functionality of a kernel in Go.
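To make that concrete, here's a toy sketch in Go, not working code and not any particular project's approach: it assumes an assembly boot stub has already entered protected mode, set up a stack and a pared-down runtime (no GC, no scheduler), and that kmain is the symbol the stub jumps to. All the names here are made up for illustration:

    package kernel

    import "unsafe"

    const vgaBase = 0xB8000 // standard x86 VGA text-mode buffer

    // kmain is called from the assembly boot stub. go:nosplit avoids
    // the stack-growth preamble, which would otherwise call into the
    // full runtime.
    //
    //go:nosplit
    func kmain() {
        msg := "hello from Go"
        for i := 0; i < len(msg); i++ {
            // Each VGA cell is a character byte plus an attribute byte.
            p := (*uint16)(unsafe.Pointer(uintptr(vgaBase + i*2)))
            *p = 0x0700 | uint16(msg[i]) // light grey on black
        }
        for {
            // Spin forever; a real kernel would hlt and take interrupts.
        }
    }

Everything that touches the hardware directly (interrupt setup, paging, the halt loop) would still live in the assembly stub, exactly as it does for a C kernel.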
This may be true for most systems languages on some platforms. My OS internals are rusty, but from hacking on MINIX I recall that x86 bootstrapping into protected mode requires setting up important things like interrupt handlers and other addresses. Whether this is done in the language via macros, embedded asm, or external asm, ultimately a binary is produced that consists of code and data segments for the architecture.
How about creating an option in the compiler to ditch the standard runtime, or even to put another runtime in its place?
Suppose you create a kernel-kit library/runtime instead of the one that ships with Go, which assumes you are in userspace. It's possible.
I think if only they allowed creating a binary with no runtime in it, and emitting the binary format needed for boot (I don't remember what it is right now), it would be feasible. People could start hacking on it.
So... Plan 9 looks cool. Everything is distributed, which is kind of cool. I get that. I've also read this https://news.ycombinator.com/item?id=658247 and other documents about Plan 9.
So, what is its killer feature that makes ME want to try it out?
> So, what is its killer feature that makes ME want to try it out?
Transparent distributed multi-user environments and clean unification of files and objects are some of the main goals [0], but, at its core, it is a research OS, and has never evolved to fill a specific niche. Many people, including Alan Kay (I mention it because I recently re-watched a few of his presentations), have spoken out about the need for more open-sky/exploratory research. It might gain popularity if it specialized into a good distributed control-plane, performance-constrained systems, or as a container OS, but it is not any of those things. Try it out for the fun and experience of trying it out.
Is anyone else unable to resolve bell-labs.com? I've tried with my local connection, then SSH'd into my home machine, and tried via 3G on my phone and no luck.
Actually it's a compressed image of a bootable SD card containing a full Plan 9 distribution, including all source code for ARM, x86, PowerPC and MIPS.