I have more hope for projects like Fuchsia, where separation of concerns is taken seriously: there are providers for "things", and on-the-fly configuration can glue several independent components together to serve as one. For example, instead of "Google Maps", "Bing Maps", or "Apple Maps", you would have several map data providers (maybe some with higher data quality for specific regions), several map data viewers, several providers for events, places, traffic, etc., and you end up with one combined view of all of them - not an application but a user "story", as briefly explained here:
> you would have several map data providers (maybe some with higher data quality for specific regions), several map data viewers, several providers for events, places, traffic, etc., and you end up with one combined view of all of them
I think the doc you linked to doesn't support your idea; it sounds much more like the replacement for activities and intents in Android than what you described. The service providers have no incentive to supply data independent of a full experience (why would Bing let you show maps in your app without agreeing to a ToS? what if the map provider has different streets than the traffic provider?), the app developers have no incentive to do this (if one phone might display Bing maps and another Google in my app, how does that make my life easier?), and the user probably doesn't want this (why am I seeing grainy black and white satellite imagery in one area and high fidelity color imagery in another?).
Think of the headache gluing all that data together. Every map viewer would need to support the data format for every map provider. Every traffic service would need to expose a protocol for querying traffic data on a stretch of road, and you pray that they mostly work the same and know which streets are which. You hope that every point-of-interest provider exposes the same set of fields and level of detail you need to make them useful. And that doesn't begin to touch on the intersection of rendering things well and the data needed to make that work (which streets to show at each zoom level, arrangement of points of interest, knowing which streets are important enough to show traffic for, special cases like when freeways are closed...).
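To make the gluing problem concrete, here is a rough sketch in C (entirely hypothetical; the types and function names are mine, not anything Fuchsia or any map provider actually exposes) of the kind of common interface every provider would have to implement before one viewer could consume them interchangeably. Every line is a point of negotiation: which tile encoding, whose street identifiers, what level of detail.

    /* Hypothetical common interface that every "map data provider"
     * would have to implement for a single viewer to consume them all.
     * Everything below is something all providers must agree on:
     * coordinate system, tile format, zoom semantics, street IDs... */
    #include <stddef.h>

    typedef struct { double lat, lon; } geo_point;
    typedef struct { geo_point min, max; } geo_bbox;

    typedef struct map_provider {
        const char *name;

        /* Render map data for a bounding box at a zoom level into an
         * agreed-upon buffer format -- raster? vector? whose spec? */
        int (*get_tile)(struct map_provider *self, geo_bbox area,
                        int zoom, void *buf, size_t buflen);

        /* Current traffic speed for a stretch of road -- but whose
         * street identifiers, and do they match the tile provider's? */
        int (*get_traffic)(struct map_provider *self,
                           const char *street_id, int *speed_kmh);
    } map_provider;

Even this toy version quietly assumes everyone agrees on one coordinate system, integer zoom levels, and km/h; the real disagreements (attribution, licensing, level of detail, special cases) never show up in the signatures at all.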
Honestly this sounds like a maintenance nightmare. You would have to track API changes not from one provider but three or more, and figure out what to do when there are differences in the data, and how to integrate features that are similar but not identical. Maintaining just a couple of those systems would be a full time job.
NPM does not make it any easier to glue stuff together; we have had packaging for decades. Just because you can pull in a package does not make it easy to work with its API in a standardized way.
I think the argument is that Node.js projects are often stitched together from many packages with different APIs, and are therefore a good example of stitched-together programs that work.
The argument isn't very good in my opinion. Combining independent services differs from combining libraries.
What you describe sounds like a recipe for the lowest common denominator in features combined with a geometric increase in bugs.
If it's just linking, loose coupling, it might work OK (for something like document creation or messaging) but it won't be integrated at a higher semantic level.
That's pretty much standard behavior in the mobile world. Services and activities are made available and the app petitions the OS to access those "lowest common denominator features".
Although you've decided to refer to it with disdain, it's also a very basic principle in software design.
>geometric increase in bugs
Only if you believe that developing and maintaining specialized programs somehow increases the bug count "geometrically".
I understand the OP’s ‘lowest common denominator’ comment to mean all good but unique features can’t be included in the abstraction. For example, only Google has Streetview or anything like it, so you can’t abstract it. As a result, you end up not using the abstraction for the most part.
> Only if you believe that developing and maintaining specialized programs somehow increases the bug count "geometrically".
I think that if a program built with one version of each of its n components might have x bugs, a program where each of those n components comes in two versions may have up to (2^n)x bugs; that is, every additional version of any given component multiplies the number of possible configurations of the program.
Again, if it's all loose coupling, it'll work (but won't be very interesting). But if you have lots of random calendars, different mapping components, different messaging subsystems - it all becomes much, much more complex. Abstractions leak, break down. They have different ordering semantics, different performance, different failure modes, different concurrency support; there's no end to the number of problems introduced by trying to be flexible about what dependencies you plug in.
Will publishers ever stand for that? Or will they bitch and moan about feeling "unable to provide a satisfactory customer (read: branding) experience" while they go and reinvent the wheel (read: write their own proprietary version)?
I wish the mods hadn't changed the title; I chose the original title to focus more on what's cool about it.
Anyway, Harvey is distributed in the same way that Plan 9 is/was distributed: the services are meant to be run across a network. On a properly set up network, you'll have an authentication server which does only authentication, because that's the keys to the kingdom. Similarly, you'll have a file server which ONLY serves files, and speaks to the auth server to manage access. Then you'll have a CPU server which lets remote users connect and do stuff; it mounts its root filesystem from the file server and authenticates users with the auth server. You can also have terminal machines, which typically netboot, get their root from the file server, authenticate with the auth server, and basically act as a personal workstation for whoever sits down; when you're done working, just save your work and reboot the terminal.
Of course it doesn't have to be run like that and many people don't, because they don't want to run 4+ systems. You can combine auth, FS, and CPU services onto one machine and just connect from a Windows, Linux, or Mac machine using 'drawterm' (think of it as an ssh equivalent).
Honest question, not trying to be dismissive: this architecture sounds old to me, as in things were built like that in the 80s or earlier but we evolved past that. Is that so? If so, what makes those decisions newly relevant?
The architecture does indeed come from the late 80s/early 90s, but I think it's more relevant today than ever. Separation of services is, in my opinion, essential to security. By putting the authentication service off on its own machine, you restrict the attacks that can be made on it; the auth server only talks over a highly restricted protocol. On a standalone Unix system, users log in to the same machine that stores the passwords. They're only a privilege escalation exploit away from getting the hashes of everyone's password, and these days privilege escalations are a dime a dozen.
When this scheme was designed, it was frankly a little bit nutty. The CPU, auth, and file servers would be VAX or Sun systems, costing tens of thousands of dollars, and the terminals would be either slightly cheaper Suns or IBM PC-compatibles costing thousands of dollars themselves. Today, you could probably cobble together a passable network for a few hundred dollars, assuming you use cheap Atom boards for everything except the CPU server (which is meant to be a beefy compute box, but let's be honest nothing in Plan 9 uses a lot of cycles). This makes the architecture more sensible than ever.
It seems a bit older than that. Many of the design goals for Harvey were already present in the early MULTICS systems of the 70's. In some ways I view Plan9 as Bell Labs' attempt to bring the great components of MULTICS and UNIX together.
I don't mean to be argumentative. MULTICS was in use by Honeywell Aerospace into the early 90's for simulating physics models.
There was a centralized computer, programmed via punch cards, which you could dial into to submit your specific simulation and later return to for the results.
Good point. Multics was used just as much, or little, as Plan 9 was. We might wish our predecessors had made the jump to distributed, securable operating systems, but there was never a time at which it made sense to do so.
You're absolutely correct, and like I said a lot of people run their Plan 9 systems with all the services on one box, which kills a lot of the security advantages.
However, if you compare setting up a Plan 9 auth server to setting up a Kerberos server... well, basically anything to do with Kerberos makes me long for death. The Plan 9 auth system is one of the best things they made and I highly recommend checking out the paper: https://css.csail.mit.edu/6.858/2013/readings/plan9auth.pdf
If you emulate a dozen machines on one physical machine, then a single exploit can traverse them all. If you pack a dozen "single board computers" in a case and give each a single function, then entire classes of attack are ruled out.
> On a standalone Unix system, users log in to the same machine that stores the passwords. They're only a privilege escalation exploit away from getting the hashes of everyone's password
But what machines are, in practice, multiuser today in that way? My work computer has only my account, and sometimes a temporary one if a technician needs to log in to troubleshoot something. For home use I don't think it makes sense. And for a prod network, again users aren't logging into the machines directly.
Here is an excerpt from the about page. I don't know enough to argue for or against the architecture, but this part caught my eye.
"those who believed in Harvey were thought to be insane, but eventually found not to be."
So they recognize that many will think it is an insane idea but are playing a long game to prove that to be incorrect.
================================
About the Name
So, why “Harvey”? Mostly because we liked it, but there are at least 3 other good reasons based on the origins of Plan 9 operating system:
Harvey is a movie, Plan 9 is a movie too.
Harvey is a rabbit, Glenda is a bunny.
Most importantly: in the movie, those who believed in Harvey were thought to be insane, but eventually found not to be.
>This architecture sounds old to me, as in things were built like that in the 80s or earlier but we evolved past that.
Actually most modern OSes in use are even older in their concepts.
Plan 9's concepts as described above have slowly crept into Linux, but not fully. So the architecture above was only experimental in the 80s/90s and still isn't available in the mainstream today.
So your concern is like saying "we've had macros and closures since the 70s" in a world that still uses Java/C#/etc.
The idea does indeed come from the 80's, but things didn't evolve past it. The architecture of our current OSes was settled in the 60's and early 70's, and we never moved away from it.
It is not clear why such things were never adopted. There were many licensing problems along the way, and a Microsoft monopoly pushing things toward even older architectures than the competition's. There were also failed modern projects, but there is little evidence to decide whether it is a hard problem, whether greedy people forced everybody onto a worse path, or whether it is simply something people do not want.
One nice thing about Plan9 is there is no concept of "root".
Plan9 "fixes" many of the flaws/deficiencies of the original UNIX, such as that one.
What's cool about this Plan9 project compared to the original Plan9 is that one does not need to use the Plan9 compiler to compile it. One can use gcc, clang, or icc.
The key phrase in my question was "evolved past that". Age doesn't matter, but if things are done in a different way now, it's for reasons, not merely for novelty.
That would make it an improvement over whatever else you are using which is most likely running on an architecture designed in the 60's and first implemented in the 70's.
> you'll have an authentication server which does only authentication
> you'll have a file server which ONLY serves files
These sound like disadvantages. Decentralization would be better. I'd like to share the storage of all my servers and not have one auth server as a single point of failure. I imagine you can set up a redundant auth server at the cost of more hardware, but why not decentralize? This seems a lot like the old way of doing things.
And indeed one of them may be local to the computer, serving local drives, and another could serve the network resources. Heck, it could even allow file servers with different security settings (e.g. the USB drives might be mounted in a hostile file server space).
I mean, it sounds a bit like 'software-defined computing', if you'll excuse the terrible analogy: like SDN, which abstracted away the physical networking layer.
Does Harvey abstract the hardware layer? So it could theoretically scale to a huge amount of machines that look like one giant powerful one?
Wouldn't the speed of operation be limited to the speed of the network though?
Anyway, sorry if the questions sound silly, I don't know much about this stuff.
The biggest mistake IMO was not clarifying what a distributed OS is supposed to be. Especially nowadays, with all the cloud hype, I could think of at least three different meanings right off the bat. I then clicked all the links on the landing page and erratically browsed the wiki and didn't find anything, not even an external (Wikipedia) link, which left me quite annoyed at that point.
Plan9 was the 'next version' of Unix, made by the people who originally made Unix. It was a small (tiny!) network packet routing kernel (routing 9p, a layer above IP) that was meant to be fully distributed and networked.
Anyone who's wanted to play with Plan 9 should consider giving Harvey a look. Getting started is a pretty easy process: you download the repo on your Linux box (I believe it works on MacOS too), run a script to get the environment set up, run a build command. Once everything's built (about 2 minutes on a decent machine), run another script which boots Harvey in a QEMU VM. The VM will get its root filesystem from the Linux box, so you can easily experiment, make changes, test.
Harvey comes with pretty much everything you'd find on Plan 9. It also supports Go, and there's a (work in progress, but functional) POSIX layer which you can use to cross-compile POSIX programs.
"Harvey is a distributed operating system. It is direct descendant of Plan 9 from Bell Labs which was originally developed by the same group at Bell Labs that invented Unix and C."
Ah, yeah, Harvey is based on that release; it's just had significant changes made to the build system and the code itself. They're running continuous integration and code quality checkers. The code now builds with GCC and Clang, and it builds pretty damn fast. It's a cool project and the people running it are really friendly.
It's in C, offers POSIX, and is apparently a big-kernel OS. Looks like another Unix-type OS. Why?
An OS in Rust would be a step forward. An OS like QNX would be a step forward. An OS like QNX in Rust would be just what the IoT world needs. But another UNIX/Linux clone? Why?
It's a Plan 9 fork; the Plan 9 papers and man pages would describe the system architecture pretty well.
Most of the changes I know of that they've made (UFS support, ELF executables, standard C without the Plan 9 extensions, a different build system, etc.) are more in the details than in the architecture.
From what I read, it's a Plan9 fork refreshed for modern build chains plus golang support. From floren's comments, it sounds like the architecture is still Plan9.
I take a lot of inspiration from Plan 9, but a lot of the distributed computing architecture it is built around is severely outdated and based on untrue assumptions. For example, you can't really expect to have always-on, low-latency internet access, especially if you get anywhere near a mobile context. Without good networking, Plan 9 falls on its ass. So you have to consider: how do you implement the shape of Plan 9 without depending on the mythical "reliable network"?
I don't have an answer exactly, but I am looking for one[0]. We are using the Linux kernel because of hardware support, and you don't really need any special kernel features to implement a Plan 9-like system (read the paper; they didn't do anything special kernel-wise). I'm in the process of copying my offline notes into our wiki, so, uh, stay tuned or whatever.
I'm hoping to publish a really early preview build some time next week, but we'll see how far I get while I'm on holiday. It'll be soon in any case.
This runs surprisingly well as a Docker container. The awkwardness of trying to use rio on a laptop with a (non-existent) three-button mouse is still there, though, and I hope other Plan9-related projects are considering bringing the GUI UX up to speed, since using a trackpad with drawterm is still awkward...
Again... I understand many will argue, but I also believe many will agree: we should start phasing out C and write modern OSes in safe languages like Rust, Haskell, Go, Crystal, etc., because code written in C is almost guaranteed to be vulnerable. C did its job when hardware was slow and low on memory, when electronic data was rarely life-critical, and when there were no languages and compilers that could be used as alternatives, but that time has ended and now we have different priorities to consider.
Except in this day and age reading the assembly language doesn't give you the same kind of insight into how the hardware will perform as it did 15 years ago.
When it's time to speak directly to hardware, Rust and Haskell aren't exactly silver bullets: so much of the state ends up being in the chips instead of the program--thus outside the compiler's scope.
If we want to get away from C-level twiddling, we will also need to get away from C-level hardware.
How does an HDL address the limitation of software compilers at the software-hardware boundary?
By scope, I mean that the compiler can really only see as far as the source code it is compiling. In something like an OS--especially a non-microkernel--an awful lot ends up having to be read and written across the boundary, and in an unsafe context.
Oh I see. You were talking about a compiler for software talking to hardware. Well, I don't see why talking to hardware is any different to talking to a CPU or main memory or a disk.
Not so much "talking" as directly reading/writing memory outside of your source scope that is also mutated by other chips--like a DMA buffer.
Or for a traditional kernel, reading/writing directly into the address space of a user process. Wouldn't have to worry so much about race conditions there since the process could be suspended, but there is still an opportunity to fudge the indexing.
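As a tiny illustration of that scope problem, here is a sketch in C (the device and its register address are invented for the example; real ones come from a datasheet or the platform firmware) of the kind of access being described. Nothing in the program ever stores to this location, so without the volatile qualifier the compiler could legally read it once and loop forever; the mutation happens in the chip, outside anything the compiler can see or check.

    #include <stdint.h>

    /* Hypothetical memory-mapped status register for some device.
     * The address is made up for illustration. */
    #define DEV_STATUS ((volatile uint32_t *)0x40001000u)

    /* Spin until the device sets its "ready" bit. The value changes
     * because the hardware changes it, not because any code in this
     * translation unit writes to it -- exactly the kind of state a
     * language-level safety checker can't reason about. */
    static inline void wait_ready(void)
    {
        while ((*DEV_STATUS & 0x1u) == 0) {
            /* busy-wait; a real driver would add a timeout or yield */
        }
    }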
It looks like something that just gets compiled to VHDL and friends? Is there some way to make the description available to user code/GHC? Now that would be interesting.
I appreciate the effort, but I have a feeling that there are not enough developers for the existing free OSes, and I am not sure it is very wise to fragment the community even more.
I wonder how this compares with Redox OS. This is Plan 9-like, which Redox could be with its 'everything is a URI' concept, but I haven't used either to see.
This was done by Microsoft a lot in recent years. I had the same reaction there. In their case "modern" meant it was quite likely to be half-baked and not very useful.
It sounds like something I'd name my dog. I don't like Jenkins, either. Clearly it's just personal taste, but I prefer either completely made up names, or names of inanimate objects. I think Rust is great, for example. I don't like "Travis". This is all just my own, subjective, personal opinion though. I apologize if my comment came off harshly.
> Harvey is just a name... A name of an old, white, male US citizen.
I'm not sure why, but it seems like the name Harvey was also very popular among Jewish Americans 50+ years ago and beyond. Almost every famous American Harvey I can think of has a typically Ashkenazi last name. Often that happens for names that are Hebrew or German in origin, but looking it up it seems the name Harvey is English, via the Bretons.
Why so many haters? Just the fact that they've got Ron Minnich[0-1] on the team--one of the guys fighting the good fight against all of the voodoo BS in your Intel/AMD firmware--makes it worth a look IMHO.
https://fuchsia.googlesource.com/modular/+/master/docs/modul...