Harvey – An effort to provide a modern, distributed, 64-bit operating system (harvey-os.org)
280 points by floren on Dec 22, 2017 | 106 comments



I have more hope for projects like Fuchsia, where separation of concerns is taken seriously: there are providers for "things", and on-the-fly configuration can glue several independent components together to serve as one. For example, instead of "Google Maps", "Bing Maps", or "Apple Maps", you would have several map data providers (maybe some with higher data quality for specific regions), several map data viewers, several providers of events, places, traffic, etc., and you end up with one combined view of them all, i.e. not an application but a user "story", as briefly explained here:

https://fuchsia.googlesource.com/modular/+/master/docs/modul...


> you would have several map data providers (maybe some with higher data quality for specific regions), several map data viewers, several events, places, traffic, etc. providers - and you end up with one combined view of all them

I think the doc you linked to doesn't support your idea; it sounds much more like the replacement for activities and intents in Android than what you described. The service providers have no incentive to supply data independent of a full experience (why would Bing let you show maps in your app without agreeing to a ToS? what if the map provider has different streets than the traffic provider?), the app developers have no incentive to do this (if one phone might display Bing maps and another Google in my app, how does that make my life easier?), and the user probably doesn't want this (why am I seeing grainy black and white satellite imagery in one area and high fidelity color imagery in another?).

Think of the headache gluing all that data together. Every map viewer would need to support the data format for every map provider. Every traffic service would need to expose a protocol for querying traffic data on a stretch of road, and you pray that they mostly work the same and know which streets are which. You hope that every point-of-interest provider exposes the same set of fields and level of detail you need to make them useful. And that doesn't begin to touch on the intersection of rendering things well and the data needed to make that work (which streets to show at each zoom level, arrangement of points of interest, knowing which streets are important enough to show traffic for, special cases like when freeways are closed...).


Honestly this sounds like a maintenance nightmare. You would have to track API changes not from one provider but three or more, and figure out what to do when there are differences in the data, and how to integrate features that are similar but not identical. Maintaining just a couple of those systems would be a full time job.


There's a difference between the systems where this is difficult, and systems where this is outright impossible.

On gluing things together: look at your nearest npm-based software project. It may not be very pretty, but it works in a reasonable way.


NPM does not make it any easier to glue stuff together; we have had packaging for decades. Just because you can pull in a package does not make it easy to work with its API in a standardized way.


Their point was that npm doesn't make it any easier but it's possible.


Please explain to me how npm makes it possible.


I think the argument is that Node.js projects are often stitched together from many packages with different APIs, and are therefore a good example of stitched-together programs that work.

The argument isn't very good in my opinion. Combining independent services differs from combining libraries.


What you describe sounds like a recipe for the lowest common denominator in features combined with a geometric increase in bugs.

If it's just linking, loose coupling, it might work OK (for something like document creation or messaging) but it won't be integrated at a higher semantic level.


> lowest common denominator in features

That's pretty much standard behavior in the mobile world. Services and activities are made available and the app petitions the OS to access those "lowest common denominator features".

Although you've decided to refer to it with disdain, it's also a very basic principle in software design.

> geometric increase in bugs

Only if you believe that developing and maintaining specialized programs somehow increases the bug count "geometrically".


I understand the OP's "lowest common denominator" comment to mean that the good but unique features can't be included in the abstraction. For example, only Google has Street View or anything like it, so you can't abstract it. As a result, you end up not using the abstraction for the most part.


> Only if you believe that developing and maintaining specialized programs somehow increases the bug count "geometrically".

I think that if a program developed with one version of each of n components might have x bugs, a program built from two versions of each of those n components may have something like (2^n)x bugs; that is, every new version of any given component you add to the system multiplies the number of possible configurations of the program.
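To make that arithmetic concrete, here's a toy sketch in C (the version count and loop bound are made up; it just shows how the configuration count grows):

    #include <stdio.h>

    /* If each of n components ships in v interchangeable versions,
     * a program can run in v^n distinct configurations, so the surface
     * you have to test grows geometrically with n. */
    int main(void) {
        const long versions = 2;
        for (int n = 1; n <= 10; n++) {
            long configs = 1;
            for (int i = 0; i < n; i++)
                configs *= versions;
            printf("%2d components x %ld versions each -> %ld configurations\n",
                   n, versions, configs);
        }
        return 0;
    }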

Again, if it's all loose coupling, it'll work (but won't be very interesting). But if you have lots of random calendars, different mapping components, different messaging subsystems - it all becomes much, much more complex. Abstractions leak, break down. They have different ordering semantics, different performance, different failure modes, different concurrency support; there's no end to the number of problems introduced by trying to be flexible about what dependencies you plug in.


This sounds like the dreamy infinitely versatile paper design for countless failed projects.

"And our database will be one table with Int64 possible properties and Variant data for any type, and..."


This is the most polite way to express exactly my thoughts.


This basically sounds like OLE. Ah memories!


What is OLE?



Seems susceptible to dependency hell.


Do you see any dependency hell in requesting that the OS launches a web browser or email client?


Normally, yes.


Are you a Java developer by any chance?

Honestly that sounds disastrous. Idealistically, sure it’s a nice idea. Practically, it’d be unusable.


If I understood him/her correctly, it looks pretty much like COM, or the component models in iOS and Android.

They seem to be doing pretty well.


Will publishers ever stand for that? Or will they bitch and moan about feeling "unable to provide a satisfactory customer (read: branding) experience" while they go and reinvent the wheel (read: write their own proprietary version)?


Doesn’t Windows phone also work like this?


*didn't

It is time to give up on Windows Phone.


I will do it when my device dies; maybe by then Google will have had the guts to force OEMs to actually push Treble updates.

My WP 10 has received more updates than all my Android devices altogether.


I get your point but it still exists and "didn't" would imply that they have removed the feature.


The landing page doesn't really explain what is interesting about Harvey. What does it mean that it's distributed, exactly?


I wish the mods hadn't changed the title; I chose the original title to focus more on what's cool about it.

Anyway, Harvey is distributed in the same way that Plan 9 is/was distributed: the services are meant to be run across a network. On a properly set up network, you'll have an authentication server which does only authentication, because that's the keys to the kingdom. Similarly, you'll have a file server which ONLY serves files, and speaks to the auth server to manage access. Then you'll have a CPU server which lets remote users connect and do stuff; it mounts its root filesystem from the file server and authenticates users with the auth server. You can also have terminal machines, which typically netboot, get their root from the file server, authenticate with the auth server, and basically act as a personal workstation for whoever sits down; when you're done working, just save your work and reboot the terminal.

Of course it doesn't have to be run like that and many people don't, because they don't want to run 4+ systems. You can combine auth, FS, and CPU services onto one machine and just connect from a Windows, Linux, or Mac machine using 'drawterm' (think of it as an ssh equivalent).
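For that single-box setup, the drawterm invocation is roughly 'drawterm -a <authserver> -c <cpuserver> -u <user>' (the hostnames and user here are placeholders, and your drawterm build's exact flags may differ, so check its man page).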


Honest question, not trying to be dismissive: this architecture sounds old to me, as in things were built like that in the 80s or earlier but evolved past it. Is that so? If so, what makes those decisions newly relevant?


The architecture does indeed come from the late 80s/early 90s, but I think it's more relevant today than ever. Separation of services is, in my opinion, essential to security. By putting the authentication service off in its own machine, you restrict the attacks that can be made on it; the auth server only talks over a highly restricted protocol. On a standalone Unix system, users log in to the same machine that stores the passwords. They're only a privilege escalation exploit away from getting the hashes of everyone's password, and these days privilege escalations are a dime a dozen.

When this scheme was designed, it was frankly a little bit nutty. The CPU, auth, and file servers would be VAX or Sun systems, costing tens of thousands of dollars, and the terminals would be either slightly cheaper Suns or IBM PC-compatibles costing thousands of dollars themselves. Today, you could probably cobble together a passable network for a few hundred dollars, assuming you use cheap Atom boards for everything except the CPU server (which is meant to be a beefy compute box, but let's be honest nothing in Plan 9 uses a lot of cycles). This makes the architecture more sensible than ever.


It seems a bit older than that. Many of the design goals for Harvey were present in the early MULTICS systems of the 70's. In some ways I view Plan 9 as Bell Labs' attempt to bridge the great components of MULTICS and UNIX together.


The way I see that history, Multics was the second system that never shipped, and Plan 9 was the fourth system that belatedly did ship Multics.


I don't mean to be argumentative. MULTICS was in use by Honeywell Aerospace into the early 90's simulating physics models.

There was a centralized computer, programmed via punch cards, which you could dial into to submit your specific simulation and later return to for results.


Good point. Multics was used just as much, or as little, as Plan 9 was. We might wish our predecessors had made the jump to distributed, securable operating systems, but there was never a time at which it made sense to do so.


Numerous centralized authentication services have been available for Linux, Windows, and others for a long time.


You're absolutely correct, and like I said a lot of people run their Plan 9 systems with all the services on one box, which kills a lot of the security advantages.

However, if you compare setting up a Plan 9 auth server to setting up a Kerberos server... well, basically anything to do with Kerberos makes me long for death. The Plan 9 auth system is one of the best things they made and I highly recommend checking out the paper: https://css.csail.mit.edu/6.858/2013/readings/plan9auth.pdf


Can the component machines be run as VMs on fewer machines, or just one physical machine? It would make for a better QubesOS, then.


Yes, I've used minimega to run a bunch of Harvey VMs for experimental use. https://github.com/Harvey-OS/harvey/wiki/Using-minimega-to-r...


If you emulate a dozen machines on one physical machine, then a single exploit can traverse them all. If you pack a dozen "single board computers" in a case and give each a single function, then entire classes of attack are ruled out.


> On a standalone Unix system, users log in to the same machine that stores the passwords. They're only a privilege escalation exploit away from getting the hashes of everyone's password

But what machines are, in practice, multiuser today in that way? My work computer has only my account, and sometimes a temporary one if a technician needs to log in to troubleshoot something. For home use I don't think it makes sense. And for a prod network, again users aren't logging into the machines directly.


Here is an excerpt from the about page. I don't know enough to argue for or against the architecture, but this part caught my eye.

"those who believed in Harvey were thought to be insane, but eventually found not to be."

So they recognize that many will think it is an insane idea but are playing a long game to prove that to be incorrect.

================================

About the Name

So, why “Harvey”? Mostly because we liked it, but there are at least 3 other good reasons based on the origins of Plan 9 operating system:

Harvey is a movie, Plan 9 is a movie too.
Harvey is a rabbit, Glenda is a bunny.
Most importantly: in the movie, those who believed in Harvey were thought to be insane, but eventually found not to be.

https://harvey-os.org/about/


>This architecture sounds old to me, as in things were built like that in the 80s or earlier but evolved past.

Actually most modern OSes in use are even older in their concepts.

Plan 9's concepts as described above have slowly crept into Linux, but not fully. So the architecture above was only experimental in the 80s/90s and still isn't available in the mainstream today.

So your concern is like saying "we've had macros and closures since the 70s" in a world that still uses Java/C#/etc.


The idea does indeed come from the 80's, but things didn't evolve past it. The architecture of our current OSes was settled in the 60's and early 70's, and we never moved from it.

It is not clear why such things were never adopted. There were many licensing problems along the way, and a Microsoft monopoly pushing things toward even older architectures than the competition's. There were also failed modern projects, but there is little evidence to say whether it is a hard problem, whether greedy people forced everybody onto a worse path, or whether it is something people simply do not want.


One nice thing about Plan9 is there is no concept of "root".

Plan9 "fixes" many of the flaws/deficiencies of the original UNIX, such as that one.

What's cool about this Plan9 project compared to the original Plan9 is that one does not need to use the Plan9 compiler to compile it. One can use gcc, clang, or icc.


Why does the architecture's age matter? Things do not just magically stop working well. Why the obsession with novelty?


The key words in my question being "things evolved past that". Age doesn't matter, but if things are done in a different way now, it's for reasons, not merely for novelty.


That would make it an improvement over whatever else you are using which is most likely running on an architecture designed in the 60's and first implemented in the 70's.


Like how functional programming came from the 80s or earlier?


> you'll have an authentication server which does only authentication

> you'll have a file server which ONLY serves files

These sound like disadvantages. Decentralization would be better. I'd like to share the storage of all my servers and not have one auth server as a single point of failure. I imagine you can set up a redundant auth server at the cost of more hardware, but why not decentralize? This seems a lot like the old way of doing things.


Having a file server that only stores files does not mean having only one file server.


And indeed one of them may be local to the computer, serving local drives, while another could serve network resources. Heck, it could even allow file servers with different security settings (e.g. USB drives might be mounted in a hostile file-server space).


A server is just a name; you can't infer the number of machines (physical or virtual) that are behind a name.


Huh, that sounds interesting. Does it scale?

I mean, it sounds a bit like 'software defined computing' if you'll excuse the terrible metaphor, a bit like SDN which abstracted the physical networking layer.

Does Harvey abstract the hardware layer? So it could theoretically scale to a huge amount of machines that look like one giant powerful one?

Wouldn't the speed of operation be limited to the speed of the network though?

Anyway, sorry if the questions sound silly, I don't know much about this stuff.


read that in Hurd's voice


Biggest mistake IMO was not clarifying what a distributed OS is supposed to be. Especially nowadays, with all the cloud hype, I could think of at least three different meanings right off the bat. I then clicked all the links on the landing page and erratically browsed the wiki and didn't find anything, not even an external (Wikipedia) link, which got me quite annoyed at that point.


Plan9 was the 'next version' of Unix, made by the people who originally made Unix. It was a small (tiny!) network packet routing kernel (routing 9p, a layer above IP) that was meant to be fully distributed and networked.

https://www.quora.com/How-is-Plan-9-OS-different-from-Unix
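The canonical illustration from the Plan 9 papers (quoting from memory, so treat the exact syntax as approximate) is something like 'import helix /net': you mount another machine's network stack into your own namespace and your local programs then dial out through its interfaces. The "distributed" part is just that every resource is a file tree you can import over 9p.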



Agreed; if I have to do research on Google and Quora, then the landing page failed.

I need to be told immediately why Harvey is killer and there are things you can do with it that you can't with Linux, at least not easily.

The only standout feature according to the summary is "simplified sys call". Boy, that's worth dropping Linux immediately! Not.

Don't get me wrong, I would love to show Harvey some support, but I need a compelling reason to get over the massive switching cost.


Anyone who's wanted to play with Plan 9 should consider giving Harvey a look. Getting started is a pretty easy process: you download the repo on your Linux box (I believe it works on MacOS too), run a script to get the environment set up, run a build command. Once everything's built (about 2 minutes on a decent machine), run another script which boots Harvey in a QEMU VM. The VM will get its root filesystem from the Linux box, so you can easily experiment, make changes, test.

Harvey comes with pretty much everything you'd find on Plan 9. It also supports Go, and there's a (work in progress, but functional) POSIX layer which you can use to cross-compile POSIX programs.


It does require a case-sensitive file system, so it takes some additional effort to cross-compile from the Mac.

Not that that's unusual :)


"Harvey is a distributed operating system. It is direct descendant of Plan 9 from Bell Labs which was originally developed by the same group at Bell Labs that invented Unix and C."

https://github.com/Harvey-OS/harvey


Note that Plan 9 is already licensed under GPLv2 (with caveats I guess)[0][1]

[0] - https://news.ycombinator.com/item?id=7232042

[1] - http://akaros.cs.berkeley.edu/files/Plan9License


Ah, yeah, Harvey is based on that release, it's just had significant changes made to the build system and the code itself. They're running continuous integration and code quality checkers. The code now builds with GCC and Clang and it builds pretty damn fast. It's a cool project and the people running it are really friendly.


thanks for that, was looking for a good reason why it's still C.


It's in C, offers POSIX, and is apparently a big-kernel OS. Looks like another Unix-type OS. Why?

An OS in Rust would be a step forward. An OS like QNX would be a step forward. An OS like QNX in Rust would be just what the IoT world needs. But another UNIX/Linux clone? Why?



This is a plan9 descendant.


The actual end of the line for Plan 9 was Inferno, using Limbo as the userspace language, but many seem to keep forgetting about it.

Harvey would be cool if they would continue Inferno, but with Go instead of Limbo.

As it is, it isn't that interesting from an OS architecture point of view.


It seems like they're taking the same approach as QNX, except across boxes instead of within the same box.


Did you find an architecture description anywhere in there? I've been looking for one.


It's a Plan 9 fork, the Plan 9 papers and man pages would describe the system architecture pretty well.

Most of the changes I know of that they've done (UFS support, ELF executables, standard C without the Plan 9 extensions, a different build system, etc..) are more in the details than the architecture.


From what I read, it's a Plan9 fork refreshed for modern build chains plus golang support. From floren's comments, it sounds like the architecture is still Plan9.


i take a lot of inspiration from plan 9, but a lot of the distributed computing architecture that it is built around is severely outdated and based on a lot of untrue assumptions. for example, you can't really expect to have always-on, low latency internet access. especially if you get anywhere near a mobile context. without good networking plan 9 falls on its ass. so you have to consider, how do you implement the shape of plan 9, without depending on the mythical "reliable network"?

i don't have an answer exactly, but i am looking for one[0]. we are using the linux kernel because of hardware support, and you don't really need any special kernel features to implement a plan 9-like system (read the paper, they didn't do anything special kernel-wise). i'm in the process of copying my offline notes into our wiki, so uh stay tuned or whatever.

i'm hoping to publish a really early preview build some time next week, but we'll see how far i get while i'm on holiday. it'll be soon in any case.

[0]: https://www.heropunch.io/%E5%8F%8B


Nice to see all these new OSes coming out.

I'm personally rooting for HURD (https://www.gnu.org/software/hurd/hurd.html) and Barrelfish(http://www.barrelfish.org/)


Obligatory XKCD for operating systems: https://xkcd.com/1508/


This runs surprisingly well as a Docker container. The awkwardness of trying to use rio on a laptop with a (non-existent) three-button mouse is still there, though, and I hope other Plan9-related projects are considering bringing the GUI UX up to speed, since using a trackpad with drawterm is still awkward...


> It can be built with gcc and clang

Again... I understand many will argue, but I also believe many will agree: we are to start phasing out C and write modern OSes in safe languages like Rust, Haskell, Go, Crystal, etc., because code written in C is almost guaranteed to be vulnerable. C did its job when hardware was slow and low on memory, when electronic data was rarely life-critical, and when there were no languages and compilers that could serve as alternatives, but that time has ended, and now we have to consider different priorities.


Armchair preaching? Who is this "we" in "we are to do anything"? Unless you're contributing to a replacement, your proclamation simply doesn't matter.


> we are to start phasing out C and write modern OSes in safe languages.

Linus Torvalds probably has the best answer to why not.

https://www.reddit.com/r/programming/comments/1t0gfy/linus_t...

"When I read C, I know what the assembly language will look like."


Except in this day and age reading the assembly language doesn't give you the same kind of insight into how the hardware will perform as it did 15 years ago.


Oh, it might still give the same insight -- for example, when people reading assembler today are the same ones who did that 15 years ago.


Or in BSD's case, 30 or 40 years ago.


Only at optimization level 0 and assuming a dumb C compiler.


And you're compiling to a specific architecture.


Go and Haskell are targeted at a different level of abstraction than the one an OS lives at. I don't know much about Crystal, but it has a GC.


When it's time to speak directly to hardware, Rust and Haskell aren't exactly silver bullets: so much of the state ends up being in the chips instead of the program--thus outside the compiler's scope.

If we want to get away from C-level twiddling, we will also need to get away from C-level hardware.


It's not outside the compiler's scope at all. See for example http://www.clash-lang.org/.


How does an HDL address the limitation of software compilers at the software-hardware boundary?

By scope, I mean that the compiler can really only see as far as the source code it is compiling. In something like an OS--especially a non-microkernel--an awful lot ends up having to be read and written across the boundary, and in an unsafe context.


Oh I see. You were talking about a compiler for software talking to hardware. Well, I don't see why talking to hardware is any different to talking to a CPU or main memory or a disk.


Not so much "talking" as directly reading/writing memory outside of your source scope that is also mutated by other chips--like a DMA buffer.

Or for a traditional kernel, reading/writing directly into the address space of a user process. Wouldn't have to worry so much about race conditions there since the process could be suspended, but there is still an opportunity to fudge the indexing.
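A sketch of what I mean in C (the register address and status bit are invented for illustration, and this only makes sense in a kernel or bare-metal context):

    #include <stdint.h>

    /* A device status register, memory-mapped at some made-up physical
     * address. The hardware can change it at any moment, so the compiler
     * can only emit a volatile load; it has no visibility into who else
     * writes this word -- that's the "outside the compiler's scope" part. */
    #define DEV_STATUS ((volatile uint32_t *)0xFE001000u)
    #define DEV_READY  0x1u

    static inline int device_ready(void) {
        return (*DEV_STATUS & DEV_READY) != 0;
    }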


It looks like something that just gets compiled to VHDL and friends? Is there some way to make the description available to user code/GHC? Now that would be interesting.


I appreciate the minimal, nothing-fancy website design.


I appreciate the effort, but I have a feeling that there are not enough developers for existing free OSes, and I am not sure it is very wise to fragment the community even more.



I wonder how this compares with Redox OS. This is Plan 9-like, which Redox could be with its 'everything is a URI' concept, but I haven't used either to see.


We really could use something new, so I am ready to give anything new a shot. Something completely free and open to new ways of thinking.


[flagged]


Next time feel free to stop at the title and refrain from commenting.


This was done by Microsoft a lot in recent years. I had the same reaction there. In their case "modern" meant it was quite likely to be half-baked and not very useful.


[flagged]


Following in the proud tradition of Plan 9 :)

edit: and rio, and 8½, and acme, and the rest of it!


Genuine question: Why do you think it is awful? I found "Harvey OS" to have quite a nice ring to it.

It does not sound to me like a blockbuster name, but sounds good enough, like "Rust" :)


It sounds like something I'd name my dog. I don't like Jenkins, either. Clearly it's just personal taste, but I prefer either completely made up names, or names of inanimate objects. I think Rust is great, for example. I don't like "Travis". This is all just my own, subjective, personal opinion though. I apologize if my comment came off harshly.


[flagged]


White? US Citizen?

That's saying more about you than about the name "Harvey".

It came from http://www.imdb.com/title/tt0042546/


> Harvey is just a name... A name of an old, white, male US citizen.

I'm not sure why, but it seems like the name Harvey was also very popular among Jewish Americans 50+ years ago and beyond. Almost every famous American Harvey I can think of has a typically Ashkenazi last name. Often that happens for names that are Hebrew or German in origin, but looking it up it seems the name Harvey is English, via the Bretons.


It's the name of the imaginary rabbit that a drunk talks to at the bar, in a scene of Roger Rabbit.


Why so many haters? Just the fact that they've got Ron Minnich[0-1] on the team--one of the guys fighting the good fight against all of the voodoo BS in your Intel/AMD firmware--makes it worth a look IMHO.

[0] https://www.youtube.com/watch?v=iffTJ1vPCSo

[1] https://lwn.net/Articles/738649/



