The Sad History of the Microsoft Posix Subsystem (2010) (brianreiter.org)
228 points by the_why_of_y on April 24, 2016 | 71 comments



Here is a more recent history of Interix, from Stephen Walli, who was part of it:

https://medium.com/@stephenrwalli/running-linux-apps-on-wind...

https://medium.com/@stephenrwalli/running-linux-apps-on-wind...


"Is Windows NT POSIX Compliant ?" is a good read: https://books.google.be/books?id=t_HcO8cY91IC&pg=PA37&lpg=PA...


Bloody hell - they really were utterly unethical back in the day! Seriously, somehow they were bypassing procurement rules - there had to have been either seriously incompetent decision makers or some sort of graft going on at the time.


Look up the Orange Book certification story after your blood's stopped boiling.

https://gcn.com/Articles/1998/10/26/Former-Microsoft-contrac...


One of the Interix guys gave a talk at my local Unix group in 1997 or so. He said they had a support contract with Microsoft. Once Windows NT came out, Microsoft stopped answering calls for about 6 months. Because NT was going to win, right?

After 6 months, their technical contacts were sheepishly apologetic...

On a personal note, I did my main FreeRADIUS development on XP with Interix for many years. It was... adequate as a Unix replacement. Not spectacular, but adequate.

When that system died, I replaced it with an OSX system. Which was enormously better.


I don't quite understand, what caused the 6 month period of silence and then apology?


Microsoft burned their bridge with Interix because "NT was going to win, right?" Six months later, when Microsoft figured out that NT still needed a Posix compatibility layer, they had to (sheepishly) rebuild their bridge with Interix.


This is interesting. So basically, my understanding is that the Posix subsystem is just a user mode portion of Windows that translates API calls to calls into Executive services through ntdll.dll

Now, this means that if you know the ntdll.dll interface, you have a stable target to build against. So if you want to develop your own environment subsystem, it should be possible. After all, the Posix subsystem itself was purchased by Microsoft!
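
To make the layering concrete, here is a minimal, hedged sketch of what a user-mode call into the NT native API through ntdll.dll can look like. The wrapper name posixlike_open_for_read, the example path, and the inline constant definitions are illustrative only (the real values normally come from the driver-kit headers), and this is not the actual Interix/psxdll implementation:

    /* Illustrative sketch only: a POSIX-style open implemented over the
     * NT native API exported by ntdll.dll. Not the real Interix code.
     * Build on Windows and link against ntdll.lib (or resolve NtCreateFile
     * with GetProcAddress). */
    #include <windows.h>
    #include <winternl.h>

    /* These constants normally come from the driver-kit headers; they are
     * defined here with their documented values so the sketch is
     * self-contained. */
    #ifndef FILE_OPEN
    #define FILE_OPEN 0x00000001             /* CreateDisposition: open existing */
    #endif
    #ifndef FILE_SYNCHRONOUS_IO_NONALERT
    #define FILE_SYNCHRONOUS_IO_NONALERT 0x00000020
    #endif
    #ifndef OBJ_CASE_INSENSITIVE
    #define OBJ_CASE_INSENSITIVE 0x00000040
    #endif
    #ifndef NT_SUCCESS
    #define NT_SUCCESS(s) (((NTSTATUS)(s)) >= 0)
    #endif

    /* ntPath must be an NT-namespace path, e.g. L"\\??\\C:\\temp\\foo.txt". */
    static HANDLE posixlike_open_for_read(const wchar_t *ntPath)
    {
        UNICODE_STRING name;
        OBJECT_ATTRIBUTES attr;
        IO_STATUS_BLOCK iosb;
        HANDLE h = NULL;
        NTSTATUS status;

        RtlInitUnicodeString(&name, ntPath);
        InitializeObjectAttributes(&attr, &name, OBJ_CASE_INSENSITIVE, NULL, NULL);

        status = NtCreateFile(&h, FILE_GENERIC_READ, &attr, &iosb,
                              NULL,                         /* AllocationSize */
                              FILE_ATTRIBUTE_NORMAL,
                              FILE_SHARE_READ,
                              FILE_OPEN,                    /* fail if missing */
                              FILE_SYNCHRONOUS_IO_NONALERT,
                              NULL, 0);                     /* no extended attrs */
        return NT_SUCCESS(status) ? h : INVALID_HANDLE_VALUE;
    }

In broad strokes, an environment subsystem is a large collection of wrappers like this in a client DLL, plus a subsystem server process for the state that can't live in the client.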

What I'm really interested in is what the ReactOS guys are doing. Have they implemented the same layering? If they have implemented ntdll.dll, then this could actually mean that they could technically get ahead and do what Microsoft are doing right now.

For that matter, it starts to make me wonder whether someone could do what Microsoft have done but in reverse on Linux! In other words, implement a translation layer that translates ntdll.dll function calls to corresponding kernel syscalls, then work backwards implementing each of the user mode subsystems. Maybe WINE has done this already?


Up until Win7 the Win32 subsystem wasn't entirely in userland. In Vista they started the MinWin kernel initiative to extricate it, but I don't know if that's fully come to fruition in Windows 10.

If it has then you could theoretically write a WinNT interface for Linux and run the Win32 userland on it. But that's pretty much asking Microsoft to sue you into oblivion.

* According to Wikipedia, the MinWin project started after Server 2003 shipped.


Wouldn't Microsoft have sued the ReactOS guys then?


No, ReactOS is an implementation of the Win32 API, not the WinNT kernel.

There's a layer of separation between WinNT and Win32 that's basically an API. In theory if you implemented this API, then you could run Microsoft's Win32 subsystem on your API without Microsoft's underlying WinNT kernel. That Win32 layer would be Microsoft code and they'd sue the heck out of you for running it.

The WinNT kernel was originally designed so that Subsystems would run on top of it and those Subsystems would provide the interfaces for applications. Originally WinNT had Win32, OS/2 and POSIX subsystems that ran on top of it. Over time the distinction between WinNT and Win32 eroded, while the OS/2 subsystem was canned and the POSIX one neglected.

Starting after Server 2003 they began to redefine the boundary between WinNT and Win32. The primary reason was to allow for headless Servers that didn't have the overhead of the GUI (Win32) or other unnecessary functionality like the Printer systems.


That's entirely wrong. ReactOS implements the subsystems that allow drivers to run and a whole bunch more besides.

If you don't believe me, then I refer you to the following:

https://github.com/mirror/reactos/tree/master/reactos/dll/nt...

The Wiki itself states that:

"The ReactOS project reimplements a state-of-the-art and open NT-like operating system based on the NT architecture. It comes with a WIN32 subsystem, NT driver compatibility and a handful of useful applications and tools."

https://www.reactos.org/wiki/ReactOS

You might also want to review:

https://www.reactos.org/wiki/Ntoskrnl.exe

And also:

https://www.reactos.org/wiki/ReactOS_Core

And here is the header for the kernel functions used in ntdll.h:

https://github.com/mirror/reactos/blob/master/reactos/includ...


> For that matter, it starts to make me wonder whether someone could do what Microsoft have done but in reverse on Linux! In other words, implement a translation layer that translates ntdll.dll function calls to corresponding kernel syscalls, then work backwards implementing each of the user mode subsystems. Maybe WINE has done this already?

I thought that was what WINE did. WINE = WINE Is Not an Emulator.
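
And roughly it is: Wine ships its own ntdll and Win32 DLLs and maps them onto POSIX/Linux calls. As a toy illustration of the idea (not Wine's actual source; names and types are simplified stand-ins I made up), such a translation layer looks something like this:

    /* Toy illustration of a translation layer: NT-native-style entry
     * points implemented on top of POSIX system calls on Linux.
     * Simplified types; not Wine's actual code. */
    #include <unistd.h>

    typedef long NTSTATUS_LIKE;            /* stand-in for NTSTATUS */
    typedef int  HANDLE_LIKE;              /* stand-in for HANDLE   */

    #define STATUS_OK           0L
    #define STATUS_UNSUCCESSFUL (-1L)      /* simplified error code */

    /* Rough analogue of NtClose(): release a handle/descriptor. */
    NTSTATUS_LIKE My_NtClose(HANDLE_LIKE h)
    {
        return (close(h) == 0) ? STATUS_OK : STATUS_UNSUCCESSFUL;
    }

    /* Rough analogue of NtWriteFile(), minus the async machinery;
     * a real layer also has to handle IO status blocks, events, APCs,
     * offsets, and the NT handle table. */
    NTSTATUS_LIKE My_NtWriteFile(HANDLE_LIKE h, const void *buf,
                                 unsigned long len, unsigned long *written)
    {
        ssize_t n = write(h, buf, len);
        if (n < 0)
            return STATUS_UNSUCCESSFUL;
        if (written)
            *written = (unsigned long)n;
        return STATUS_OK;
    }

The hard part isn't the happy path but the semantics: NT handles, the object namespace, async I/O, and security descriptors don't map one-to-one onto file descriptors and POSIX permissions.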


The Linux kernel in its current design is not intended to provide subsystems for other operating systems, while the NT kernel was designed for that from the beginning.


It seems this article badly needs to be updated in light of Ubuntu on Windows. Here's a link from 2016 instead of 2010. https://insights.ubuntu.com/2016/03/30/ubuntu-on-windows-the...


IIUC the new Ubuntu-on-Windows stuff is a different take, reusing some of Interix but diverging substantially from the old implementation.

EDIT Source[1]: "Over time these [Interix] subsystems were retired. However, since the Windows NT Kernel was architected to allow new subsystem environments, we were able to use the initial investments made in this area and broaden them to develop the Windows Subsystem for Linux."

[1] https://blogs.msdn.microsoft.com/wsl/2016/04/22/windows-subs...


Great article!

I'm not too familiar with operating systems as a subject, but is the separation between user and kernel mode similar to high-level language versus assembly? I.e., the approach they took was to emulate the Linux kernel, which is sort of like a virtual machine. But I imagine emulating a kernel is harder, right? Because of all the stuff that goes on?

And in general, would kernel emulation be a performant approach for running userspace of any OS in any other OS?


It's more like Wine: the code runs directly on the hardware, but there is an extra layer that emulates the system calls. Except that in this case, the emulation has a lot of supporting code in a kernel driver, something that Wine doesn't have. (Actually, for Win32 programs there is also such a layer, however that layer and the NT kernel were designed together from the start, so it can be pretty thin.)


It's an interesting, fuzzy gray boundary between Ubuntu on Windows and "emulation". It's still built as an NT sub-system (like Interix was), and it's still the NT kernel ultimately in charge of everything. The difference seems to be the NT kernel implementing the generic POSIX standard versus the NT kernel implementing the specific system calls of the Linux kernel. So on one hand it is an emulation, because there are specific real-world binary behaviors (and quirks and bugs) it's trying to replicate rather than standards from a specification document; but on the other hand, it still seems to be the NT kernel doing NT kernel things.


I guess the line is a bit blurry, especially when you factor in fast virtualization techniques that involve running emulated instructions directly :)


Gotcha. Thanks for the explanation! Which would you say was harder, Wine or this? It seems like Wine should have been harder since the NT kernel was designed with multiple userspaces in mind...


Wine is basically reverse engineering a badly documented black box. On the other hand, even though the Linux source is available, Microsoft programmers probably aren't allowed to look at it to prevent their code from becoming a derivative GPL'ed work.


> And in general, would kernel emulation be a performant approach for running userspace of any OS in any other OS?

It can come close to the host OS, but that requires a lot of work.


Would this include stuff like printing to stderr being slower on Windows? If they get to write kernel drivers, there could be a chance that they could replicate the file handling behaviors of Linux.


I don't really know anything about this, but if stderr is really slower on Windows, I would suppose that's a userland library thing. I would not know why that would be slower at kernel level. So that problem might not even arise under the Linux emulation.


great link, a must-read for anyone interested in how they did it.


It's a timely reminder for anyone getting excited about Ubuntu on Windows that Microsoft had a working system for doing that, only to slowly run it into the ground. I'd have a lot more respect for the new system if they'd revived Interix/SFU/SUA rather than releasing a different, incompatible replacement.


I think the point is to allow Linux binaries to be used without a recompile, which saves MS most of the cost of maintaining binaries for it.


Which is the key. Plenty of ISVs make Linux binaries, but nobody was interested in Interix binaries.


Indeed. I think you can see this in some of the history in this article: Interix was "POSIX compatible", but essentially its own OS, like compiling for Linux versus BSD. So someone had to maintain binary builds of GNU tools for Interix and thus you ended up with the large "Tools" distribution of user space binaries. Ultimately, "Tools" was its own Unix distribution that was subtly incompatible with any other Unix distribution. Even today on Linux you still see a lot of the headaches in the subtle binary incompatibilities across Linux distributions.

The amazing thing with Ubuntu on Windows is that the user space is the same Ubuntu distribution of user space tools as on Linux. That lessens the maintenance burden of the user space considerably: Canonical is already actively maintaining that distribution and will continue to do so, there are already a considerable number of users of that distribution on Linux, and there is a considerable ecosystem of third parties building for it. Those are definitely the missing pieces that Interix never had, and they make this "Son of Interix" that is Ubuntu on Windows much more interesting.


Does anyone actually care about Linux binaries? A lot of linux programs tend to be shipped as source.


Exactly, couldn't say it better! MS has a history of decisions which were meant to strangle the competition, one way or the other. Interoperability was never one of their strong suits.


Would you actually call for such a revival?

* https://news.ycombinator.com/item?id=11446694


What do you mean "call for"? In the abstract, yes. I might put a bit of personal time into it. I might even pay $30 or so for it. But that's unlikely to move the needle for Microsoft.


Instead of the wistful subjunctive ("if they had revived Interix I would have ...") something in the imperative mood.

Please revive the Subsystem for Unix Applications, Microsoft!

You own the technology. And it addresses quite a number of the issues that you are currently listing on GitHub against the Linux Subsystem. Including:

* Interix has pseudo-terminals. (https://news.ycombinator.com/item?id=11415843 https://wpdev.uservoice.com/forums/266908-command-prompt-con... https://github.com/Microsoft/BashOnWindows/issues/169 https://github.com/Microsoft/BashOnWindows/issues/85 https://github.com/Microsoft/BashOnWindows/issues/80)

* Interix has production code for terminal-style control sequence processing in Consoles. (https://github.com/Microsoft/BashOnWindows/issues/111 https://github.com/Microsoft/BashOnWindows/issues/243 https://github.com/Microsoft/BashOnWindows/issues/27)

* Interix has the mechanism for sending signals to Win32 processes. (https://news.ycombinator.com/item?id=11415872)

* Interix had an init and could spawn daemons. It could also run POSIX programs under the SCM. (https://news.ycombinator.com/item?id=11416376 https://github.com/Microsoft/BashOnWindows/issues/229 https://github.com/Microsoft/BashOnWindows/issues/206)

* The Interix build of the Z Shell runs just fine. (https://github.com/Microsoft/BashOnWindows/issues/91)


It's incompatible with its previous version, but OTOH if a Linux binary works without recompilation, is that a problem?


Except this is an article about Interix. Correct me if I'm wrong, but Microsoft seems to have discarded this particular technology entirely and started over fresh with Ubuntu on Windows.


Maybe an explanation is that Dave Cutler, who was much more of a VMS guy and did not like Unix/POSIX at all, had an influence on the design of the Windows NT kernel?

I must admit I used SFU, and the IO was all quirky (buffered IO called in non-buffered mode).

But knowing that async IO and threading were radically different from POSIX, they may have decided at some point that it was not possible to offer a quirk-free POSIX API, by conflict of construction, and so they dropped it.
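
For concreteness, here is a minimal sketch of the NT-style overlapped (asynchronous) read model, using only documented Win32 calls; the file name is made up and error handling is trimmed. A POSIX read() blocks until the data is in your buffer, while here the request is issued and completion is collected later:

    /* Minimal sketch of NT-style overlapped (asynchronous) file I/O.
     * Compare with POSIX read(), which blocks until the data arrives. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* FILE_FLAG_OVERLAPPED asks for asynchronous semantics. */
        HANDLE h = CreateFileA("test.txt", GENERIC_READ, FILE_SHARE_READ, NULL,
                               OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);
        if (h == INVALID_HANDLE_VALUE) return 1;

        char buf[4096];
        OVERLAPPED ov = {0};                        /* read from offset 0 */
        ov.hEvent = CreateEventA(NULL, TRUE, FALSE, NULL);

        /* The read is issued and ReadFile returns right away (usually with
         * ERROR_IO_PENDING); the kernel completes it in the background. */
        if (!ReadFile(h, buf, sizeof buf, NULL, &ov) &&
            GetLastError() != ERROR_IO_PENDING)
            return 1;

        /* ... the thread is free to do other work here ... */

        DWORD bytes = 0;
        GetOverlappedResult(h, &ov, &bytes, TRUE);  /* now wait for completion */
        printf("read %lu bytes\n", (unsigned long)bytes);

        CloseHandle(ov.hEvent);
        CloseHandle(h);
        return 0;
    }

Real servers usually tie these completions to an I/O completion port and a thread pool rather than an event per request.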

I mean, for a lot of programmers an IO is an IO and a process is a process... but the kernel may think differently.

For the record, before I hear the Unix fanboys say that POSIX is superior: PyParallel by Continuum exploits a parallel version of Python on Windows and gets considerable gains from using exactly the heart of Dave Cutler's architecture: multi-threading, async IO... https://speakerdeck.com/trent/parallelizing-the-python-inter...

I do not say it is a panacea; I say it is worth looking at.

As you can guess, thinking that the same cause produces the same effect: if the kernel still uses Cutler's architecture around multi-threading and async IO, then I expect the Ubuntu runtime on Windows to have quirks around... async IO and multi-threading.


I think this is best described as a tale about using and writing proprietary software. At any time what you've written can be taken away and/or abandoned, and there's usually nothing you can do about it.


Using an abandoned free software project is rarely going to be much fun, either. In fact, using actively maintained free software projects is rarely much fun, since many of them tend to often abandon old features and introduce new incompatible ones, and you either have to keep using old versions of everything or update everything at once, breaking a ton of things. (Of course at times you're way better off with a free program than you'd be with a proprietary alternative, but the reverse can be true just as much in other cases.)


> using actively maintained free software projects is rarely much fun, since many of them tend to often abandon old features and introduce new incompatible ones

GNOME has a lot to answer for. Most other projects try hard to avoid this and lean the other way, preserving ancient features at the cost of clarity.


I dunno, KDevelop 4, based on KDE's rather than Gnome's libraries, was nothing like KDevelop 3 and couldn't do some of the old things, and there was no way you could compile KDevelop 3 on an Ubuntu box bundled with KDevelop 4 and the new KDE libraries unless you were really good at this shit. Eclipse, based on the JVM, had a popular plugin that was broken, I think, in Juno or a bit earlier, and whose author refused to update it for the new Eclipse because too much of the things in Eclipse he depended on changed. Upgrading Emacs broke everyone's emacs-lisp hacks that they copied from each other. And so forth.

I think that in the land of Linux, the only things which remain backward compatible are the kernel (if you want to run statically linked userland binaries - not if you want to run driver binaries) and the GNU userspace utilities like sh and grep (though sh is bash on some distros and dash on others, breaking the very popular assumption that sh is in fact bash, and env might reside in either /bin/env or /usr/bin/env, defeating its purpose of letting you deal with the fact that #!/bin/tcsh will break on systems keeping tcsh at /usr/bin. But a given distro will usually keep its shit where it last put it, which I guess is better than it could have been.)


This oversimplistic explanation misses the facts, laid out to some extent in Stephen Walli's history which is mentioned elsewhere on this page, that the "proprietary software" comprised GPL-licenced things like GCC and many BSD-licenced things including large portions of the command-line toolset.

Actually reading the licence dialogue displayed by the SFUA/SUA installer was interesting: umpteen different iterations of mostly the same BSD licences, over and over, with the only differences being the copyright declarations.



I used Cygwin to offer me a sane environment that ran most of my code and tools since at least Windows XP. Under it, Windows was a reasonably adequate workstation OS.


Between 2001 and the arrival of Service Pack 2, it was a horror for sysadmins and end users due to viruses and worms. The ignore-the-problem answer was a resident, resource-hungry program called a "virus scanner". COM & co were great; MS used it, the rest abused it. And XP felt fast too; it wasn't, 7 was faster, but it felt the other way around.


>And XP felt fast too; it wasn't, 7 was faster, but it felt the other way around.

Desktop composition adds a frame or more of latency. If you disable it (sadly not possible in Windows 8 and later), UI interaction feels noticeably more responsive. But most people run Windows 7 with composition on, so for them it makes sense that "XP felt faster".


Windows was a security nightmare by then but either I was cautious enough or I was lucky enough not to experience any major infection problem on any of my machines.

Proper firewalls alone helped a lot.


> Around this time, the core development team was reformed in India rather than Redmond and some of the key Softway developers moved on to other projects like Monad (PowerShell) or left Microsoft.

This is indeed very sad. It seems like every project shipped overseas suffers the same fate. When will we ever learn that you can't "leverage the salary disparity" - you don't outsource your engineers when your core competency is engineering.


This is not outsourcing. I'm an Indian and I know for a fact that Microsoft India employs the very best talent. Yes, there are cost benefits to having engineering done in India, but for companies such as Microsoft there shouldn't be a quality difference. However, where you would see a quality difference is if you try to build an app for $50 with a company you found on a job website. Those employ regular run-of-the-mill, copy-paste-from-Stack-Overflow developers.


Offshoring then. Talent probably isn't even the problem. It could have more to do with putting 8000 miles between the devs working on some project and the people who care about that project. If you try to get an app built for $50 you're gonna have a bad time regardless of where you do it - that's not an argument.


> the devs working on some project and the people who care about that project.

Those two aren't mutually exclusive. Toyota employees in the US probably still care about the cars they're working on. I can verify that the Finnish employees at Microsoft do great work (e: and care about it), despite being 4700 miles away from Redmond.


I'm not trying to imply that offshore workers don't care about their work. If you take an existing project and send it far away from everyone who was involved with that project, that project will probably suffer, especially with software.


It's sort of a truism that the more expensive software is to buy, the more expensive it is to own... (i.e. worse quality, less development on the part of the vendor, etc.)


I think the starting point of that could be more precisely identified as "the fewer buyers for a given piece of software..." the more expensive it is to own.

If you're the only customer running OS/2-on-NT-using-Token-Ring-over-Carrier-Pigeon, you're not going to have a good time when you find a bug or missing feature.


And that is why in Open Source, even users that contribute absolutely nothing to the project do have a value to the community


A variant on the network effect.


(Absence thereof.)


That holds mostly for either a) software that's made by software developers for software developers, or b) software that's utterly trivial. It's not true for most tools for professionals.

To name a few examples: there's no decent alternative to Microsoft Office (free or otherwise), and there are no decent open source alternatives to most CAD applications, to MATLAB (no, Octave doesn't count) & Simulink, LabVIEW, musical software, electronics design & manufacturing, etc. Essentially anywhere where the problem domain is nontrivial and the potential audience is limited.


I don't understand what you are trying to say -- you think expensive software is better or worse?

I don't know what Microsoft Office has to do with this.


I'm trying to say that this truism of "expensive = worse" is wrong in very many cases. I assert that except software that's a) trivial, or b) "by developers for developers", expensive software is usually better.

Microsoft Office is a good example of software that a) has a very broad audience, b) pretty much every computer user has seen, and c) somehow nobody can make a decent free replacement of.


I also don't see it as very expensive. We are paying about $100 a year for an Office 365 license that we have installed on three machines. That's something around $2-3 per seat per month for a whole bunch of stuff. That's not expensive at all.


Well that was indeed sad, including the Walli writeup linked by Justin.

Question: does the Window Station [1] fit into the NT subsystem framework, and if so, how?

1. https://blogs.technet.microsoft.com/askperf/2007/07/24/sessi...


I still have a Softway Systems CD laying around somewhere. Wasn't quite the same as Linux but was fun to play around with.




Here's my tl;dr:

1. Interix/SUA subsystem was not developed by Microsoft. It was acquired from a company called Softway. It was used internally to transition Hotmail from FreeBSD to Windows. It is believed some important MS customers also made use of Interix and possibly came to rely on it.

2. How to explain MS's seeming ambivalence toward a POSIX layer on top of Windows? Idea: the Windows API is so complex (convoluted?) as to exclude competition. See the Joel on Software reference. He marvels at Windows' backwards compatibility - being able to run yesterday's software on today's computers. Yet he also admits MS strategically developed software that would not run on today's hardware, but only on tomorrow's. (Not intending to single out MS, as I know other large companies in the software business did this too.)

Complexity as a defensive strategy. Who would have guessed?

Many years ago, I gave up on Windows in favor of what I perceived as a more simple, volunteer-run UNIX-like OS that was better suited to networking.

As it happens, unlike Windows, _all versions_ of this OS run reliably on most older hardware. Although it was not why I switched at the time, I have come to expect that by virtue of the UNIX-like OS, my applications will now run on older as well as current hardware. I rely on this compatibility.

Unlike Windows I can run the latest version of the OS on the older hardware.

Windows backwards compatibility is no doubt worthy of praise, however the above mentioned compatibility with older hardware is more important to me than having older software run reliably on a proprietary OS that constantly requires newer hardware.

The 2004 reference Reiter cites on the "API War" suggests people buy computers based on what applications they will be able to run.

Unlike the reference, I cannot pretend to know why others buy certain computers. Personally, I buy computers based on what OS they will be able to run. Traditionally, in the days of PC's and before so-called smartphones, if you were a Windows user this was almost a non-issue. It was pre-installed everywhere.

At least with respect to so-called smartphones it appears this has begun to change. Maybe others are choosing to buy computers based on the OS the computer can run? I don't know for sure.

As for the "developers, developers, developers" and availability of applications idea, since switching to UNIX-like OS, being able to run any applications I may need has been a given. In fact, I have come to rely on applications that will only run on UNIX-like OS!

And now it seems MS is going to make running UNIX applications on Windows easier. Why?

As with Interix, will the reasoning behind this successor POSIX layer remain a mystery?


BTW, HN does nothing special with underscores, but matched asterisks are converted to italics.


>MS strategically developed software that would not run on today's hardware, but only on tomorrow's

what do you mean by that?


If you follow the "API war" hyperlink, it's under the heading "It's Not 1990".

When consumers are upgrading their hardware regularly as they were in the 1990's, then developers can disregard the notion of users "upgrading" their software.

Instead they can just write applications targeting new hardware. It does not have to run on older hardware.

The user will be compelled to upgrade the hardware and, in the case of Windows, by default they get the new software. The example cited was Excel versus Lotus123.

MS also benefitted from hardware sales through agreements with the OEM's.


Code bloat. Yes, it runs like crap on today's hardware, etc.



