"This is a real native Bash Linux binary running on Windows itself. It's fast and lightweight and it's the real binaries. This is an genuine Ubuntu image on top of Windows with all the Linux tools I use like awk, sed, grep, vi, etc. It's fast and it's lightweight. The binaries are downloaded by you - using apt-get - just as on Linux, because it is Linux. You can apt-get and download other tools like Ruby, Redis, emacs, and on and on. This is brilliant for developers that use a diverse set of tools like me."
"This runs on 64-bit Windows and doesn't use virtual machines. Where does bash on Windows fit in to your life as a developer?
If you want to run Bash on Windows, you've historically had a few choices.
Cygwin - GNU command line utilities compiled for Win32 with great native Windows integration. But it's not Linux.
Hyper-V and Ubuntu - Run an entire Linux VM (dedicating x gigs of RAM, and x gigs of disk) and then remote into it (RDP, VNC, ssh)
Docker is also an option to run a Linux container, under a Hyper-V VM
Running bash on Windows hits the sweet spot. It behaves like Linux because it executes real Linux binaries. Just hit the Windows Key and type bash.
"
If you want Windows to be Linux users' home, then how isn't it swallowing them and dragging them from an open source environment to a closed source one?
More questions: Will it be backported to Windows 8.1? How does it differ from CoLinux and andLinux?
>If you want Windows to be Linux users' home, then how isn't it swallowing them and dragging them from an open source environment to a closed source one?
Because no one is forcing Linux users to do anything. Can you really not see the difference between giving developers an option and "dragging Linux users"?
That's being pedantic, honestly. Yes, technically Linux is only the kernel and technically running the Ubuntu run-time on a different kernel isn't running Linux.
However, "Linux" is almost always a reference to GNU tools and the Linux Kernel. It may not be semantically accurate, but take that up with the same people that made literally mean both itself and its opposite.
Let's try the opposite. Say someone got Wine working to the point where it was very nearly, perfectly indistinguishable from Windows and they put up a blog post saying "Everything works just as it should under Windows, because it is Windows." Microsoft's lawyers would come around with a C&D, and calling them pedants wouldn't invalidate their case.
He could have said "it's just like Linux, right down to the kernel interface" or "Everything works just like Ubuntu, because the userland is Ubuntu". Succinct and correct. Precision matters.
Your points are fair, but color them with the fact that the Windows Subsystem for Linux (WSL) is built to be distro-agnostic. We picked Ubuntu in this first version due to its popularity with developers, but there are few technical reasons (other than us fully and accurately implementing the necessary syscalls) why it shouldn't support other distros' userland environments in the future.
Knowing this, what should we call it?
Windows Subsystem for Running POSIX + Linux Syscall API Compatible Userland Tools? WSRPLSACUMT? :)
I'm genuinely interested in what you all feel would be a good way to think about naming moving forward.
> Let's try the opposite. Say someone got Wine working to the point where it was very nearly, perfectly indistinguishable from Windows and they put up a blog post saying "Everything works just as it should under Windows, because it is Windows." Microsoft's lawyers would come around with a C&D, and calling them pedants wouldn't invalidate their case.
Except that isn't the opposite, it's an entirely different situation.
1) It isn't Windows. It's a complete rewrite of the Windows APIs. That is not the same thing that's happening here.
2) A C&D isn't a case, it's a piece of paper (politely) asking you to do something. Calling something Windows and it actually being an infringement on Windows patents are entirely different issues.
> Precision matters.
In carefully crafted theoretical situations masquerading as analogies to this situation? Sure. In the real world? Hardly.
> A team of sharp developers at Microsoft has been hard at work adapting some Microsoft research technology to basically perform real time translation of Linux syscalls into Windows OS syscalls. Linux geeks can think of it sort of the inverse of "wine" -- Ubuntu binaries running natively in Windows.
MSYS is not Cygwin and IMHO is way better. Yes, it's not Linux, but it's a native binary without an emulation (translation?) layer. It's been around for ages; before I did cross-compiling from Linux to Windows, that's what I used on Windows.
MSYS and MSYS2 actually are Cygwin -- the original MSYS being a (horribly out of date) fork of Cygwin that never really pulled much from upstream, and MSYS2 attempting to track upstream Cygwin more closely.
You're getting confused with MinGW, which uses MSYS to build native Windows executables. They need MSYS (as a Cygwin-derived emulation layer) because tools like GCC or Bash expect the system to support POSIX APIs and have POSIX semantics -- for example, Windows has no equivalent to a POSIX fork() call. The code you're compiling under MinGW has no MSYS or Cygwin dependencies, but the compiler and tools themselves (gcc, bash, the linker, etc.) do.
Not the person you're replying to, but interesting ...
> tools like GCC or Bash expect the system to support POSIX APIs and have POSIX semantics -- for example, Windows has no equivalent to a POSIX fork() call.
So do Cygwin and/or MSYS emulate the fork() call on Windows? And if so, do you have any idea how that is done? Just interested, since I have a Unix background - not at a deep OS level, but at the app level and also at the level of the interface between apps and the OS (using system calls, etc.).
Yes. That's one thing we spent considerable engineering effort on in this first version of the Windows Subsystem for Linux: We implement fork in the Windows kernel, along with the other POSIX and Linux syscalls.
This allows us to build a very efficient fork() and expose it to the GNU/Ubuntu user-mode apps via the fork() syscall.
We'll be publishing more details on this very soon.
Interesting! thanks.
The original Unix fork() was found to be somewhat expensive in resources (a little surprising, since it was the only way to create a child process). Later there was vfork() (copy-on-write) (maybe innovated by BSD), and I read that Linux's clone() does even better, though I haven't looked into it in detail.
fork would copy the entire address space of the process, which was wasteful when you're usually going to just throw all of that memory away by calling exec. BSD added vfork to optimize the `if (fork() == 0) exec(...)` scenario, by not copying the memory, and just pausing and then borrowing memory from the parent, until exec is called. Modern operating systems use copy on write pages for fork, so instead of copying all of memory, you just need to copy the page tables.
It means that if the child process actually modifies the memory then those modifications will be visible in the parent process, because they're both using the same address space. It's essentially an awful hack that was added to BSD at a time when it didn't yet use copy-on-write for fork() to achieve the same performance with vfork()+exec() that you would get from a CreateProcess()-like API.
That's why the child is allowed to do almost nothing:
the behavior is undefined if the process created by vfork() either modifies any data other than a variable of type pid_t used to store the return value from vfork(), or returns from the function in which vfork() was called, or calls any other function before successfully calling _exit(2) or one of the exec(3) family of functions.
In Linux, fork(), vfork(), and clone() all use the same underlying machinery, with just a few different flags. clone() is the most general, with flags for what to share; those flags in particular include all the Linux namespaces that serve as the basis for containers. fork() just uses clone() with a hardcoded set of flags, and vfork() does the same as fork() except that it doesn't schedule the parent process until the child calls exec.
How does this new fork differ from the already existing NtCreateProcess with a NULL section handle that was used to implement fork in the old SUA/POSIX subsystem?
> So do Cygwin and/or MSYS emulate the fork() call on Windows? and if so, do you have any idea how that is done?
Cygwin does some pretty horrific hacks to emulate it. It basically creates a paused child running the same binary, fills in its memory, stores the register context of where it came from in a shared memory, and then resumes the child. The child on startup detects that it was forked, and then looks into shared memory to resume running at the place of the fork.
MSYS (as part of the MinGW project) doesn't use Cygwin AFAIK, with the licensing implications that would have: Windows programs written with Cygwin run on top of a copylefted compatibility DLL that must be distributed with the program, along with the program's source code (quoting Wikipedia).
In fact, binaries compiled with MinGW link against MSVCRT (the Microsoft Visual C run-time DLL) by default. So there's no compatibility layer, and they don't rely on Cygwin.
Please distinguish carefully between MSYS and MinGW. MSYS (or MSYS2) programs run on top of a copylefted compatibility DLL that is suspiciously similar to Cygwin. They must be GPL-compatible. MinGW programs link with MSVCRT. They are compiled by GCC toolchain programs, most of which are MSYS programs.
Wait wait wait, will I have access to the whole Ubuntu repositories via apt-get, or just to a purpose-made Windows repository?
Either way, for me, this is great news. The fact that it was really hard to work with python/ruby/node/etc under Windows, and the fact that I hate PowerShell, were the two main reasons why I work on a Linux OS all the time.
> will I have access to the whole repositories of ubuntu via apt-get
Yes; full, standard, repo access [1].
> With full access to all of Ubuntu user space
> Yes, that means apt, ssh, rsync, find, grep, awk, sed, sort, xargs, md5sum, gpg, curl, wget, apache, mysql, python, perl, ruby, php, gcc, tar, vim, emacs, diff, patch...
> And most of the tens of thousands of binary packages available in the Ubuntu archives!
I've never met anyone who tried to do some scripting with PowerShell and didn't fall in love with it, which is why I find it difficult to believe that you hate it. Or that you've tried it.
You might hate the terminal where PowerShell runs, but I don't think you hate Powershell.
I see a future where devs move to Windows due to Bash and stay due to PowerShell.
I've done some terrible, terrible automation using PowerShell and PS Remoting across dozens of Windows servers, and wanted to smash my screen on multiple occasions.
First, PS Remoting is terrible compared to SSH. Commands fail randomly on a small percentage of systems, and the only options to troubleshoot are to log in manually to the remote system and try a number of things, including rebooting the remote system (not very good for servers).
Second, debugging is terrible compared to bash. Sure, the PowerShell ISE allows you to step through your code line by line; however, there's nothing like "bash -x script.sh", which lets me see the actual execution and return code of every line of my scripts.
Third, bash has a much simpler way to chain output through multiple programs using pipes and treating input/output as simple text. PowerShell is a pseudo programming language with objects and other data types that just don't enable this type of chaining in the same simple and easily understandable way.
It took me weeks to write a PowerShell script that used PS Remoting to loop through a list of provided servers, install a service, set the RunAs user, start the service, create a secret file, and EFS encrypt that file as a specific user. I could have written the same script in hours using bash for Linux boxes. It would have been much more efficient by using tools like GNU parallel.
I'm not sure how anyone who's used both PowerShell and bash for any serious work could say PowerShell is better, unless they're a .NET developer and appreciate being able to blend .NET objects into their scripts, but to me, that just breaks the simple modular composability of the *nix philosophy.
We might have very different use cases I guess. I script a lot to work with data, not to automate things.
I feel very comfortable getting a stream of data and treating it with sed, awk, grep and whatever; but once I started working with objects, it felt much more data-oriented.
And please don't get me wrong, I've been in love with Bash since a Slackware CD fell into my hands in the 90s. I was just mind-blown by PS after making fun of it for years - just because the default terminal where it runs is less than great.
I guess this thread proves the point of bringing Bash to Windows: Different people, different uses, different needs and solution. And that makes me happy :)
so is there a better terminal available for PowerShell?
(preferably for win7, because I run Linux at home and win7 is the most likely platform I may spend some time elsewhere)
I do appreciate, from all I've heard about PowerShell, that it might be an interesting environment to try some scripting in. The actual shell/programming language can't be (much) worse than bash -- I mean, let's admit it, bash is pretty ancient and therefore didn't benefit from the progress in programming-language design we've made in the past decade(s).
I wrote a few 10- to 20-line scripts, but got lost in the documentation every time. You have to wade through mud to get anything simple done. Statements/command lines easily wrap around an 80-character window due to their verbosity. The worst is that every command has about 40+ options, most but not all of them the same as for other commands. Sorry, not for me.
Powershell is so foreign with its typed pipes to those of us who are accustomed to the string based pipes that I have a very hard time falling in love with it.
Exactly. I mean, did we really just jump straight from the worst shell in history (the Windows command prompt) to one that supports tmux?
However, I don't know if this is the case. I remember Meyers saying in the livestream that they will enable you to choose any shell you want: "powershell, dos, bash, and more coming soon". If they just supported _any_ Ubuntu binary natively, I don't know why he would've said "more coming soon".
Windows has had PowerShell for a while, and there is a good argument that it's better than the *nix shell because it supports strongly typed objects. Many will disagree, of course, but keep in mind that cmd.exe hasn't been the Windows shell of choice for many years now.
Have a look at FreeBSD's linuxulator. It basically maps syscalls (and a bunch of other stuff) from one OS to another. Alexander Leidinger has written a whole lot of blog posts about it - this one's a good start: http://www.leidinger.net/blog/2010/10/27/the-freebsd-linuxul...
Is there a source for this being a syscall emulation layer and not some kind of colocation of the Linux kernel in a subsystem somehow?
The latter would be much more interesting to me, since what I really want out of "running linux on my desktop" is for it to actually act like the linux machines my code is targeting, and I'm dubious that a syscall layer will achieve that to the degree I want.
One of my coworkers is from MS and worked on the Android apps on Windows project. This comes out of that cancelled work: It's a full implementation of Linux syscalls in user mode that registers a driver to perform kernel-mode tasks on behalf of the subsystem. The NT kernel has always been fairly agnostic and not tied specifically to Win32; it originally had an OS/2 text-mode and POSIX subsystem in addition to Win32. The NT kernel even has a Unix-like rooted object namespace (exposed to Win32 via "\\?\" paths) where disks, kernel objects, sockets, etc. are mounted.
From what I can gather Microsoft is paying Canonical to help with a few user-mode bits and the Windows apt-get stuff uses the official Canonical sources.
As I said to him: "NT is the only major OS I know of that has always had personality subsystems. Cutler’s vision finally pays off after 3 decades of waiting"
> "Cutler’s vision finally pays off after 3 decades of waiting."
Quite!
But only because the Interix-derived POSIX subsystem was, and is, little-known and vastly underappreciated. Had it been better known, the payoff might have come a decade or more sooner. There are a fair number of questions being asked now about the new Linux subsystem where the answer is "No; but the old POSIX subsystem had that."
I love your last paragraph. Exactly what I was thinking. NT subsystems were its party trick, and it's taken far too long for them to be introduced on the grand stage. I bet Dave Cutler is smiling his face off!
Before leaving for MS, Cutler worked on an OS codenamed "Mica" which ran on top of an architecture named PRISM (which was a predecessor to the Alpha). The idea behind Mica was that it was a microkernel which hosted VMS and Unix personalities on top.
The linux module provides limited Linux ABI (application binary interface) compatibility for userland applications. The module provides the following significant facilities:

- An image activator for correctly branded elf(5) executable images
- Special signal handling for activated images
- Linux to native system call translation

It is important to note that the Linux ABI support is not provided through an emulator. Rather, a true (albeit limited) ABI implementation is provided.
I wasn't asking about freebsd (I've used that, even!), I was asking about this new thing on Windows and whether there's been official word that it is indeed the same sort of thing. Looks like it is indeed this sort of thing, though.
A good few years ago I used to use a project called Cooperative Linux that was like what you describe - a user-mode Linux kernel running as a service in Windows.
Yeah, there were actually a couple of projects along this line at various points, but they always suffered in various ways from not being well integrated into the system. What you really want is for the colocated kernel to have first-class access to some basic things about the primary kernel (like file systems, including page caching, and networking) so they're not going through an expensive and awkward translation layer.
Cygwin and things like colinux exist on opposite ends of that awkward divide, but something officially supported by the OS could maybe straddle it better.
There was also another project I worked with a while back called LBW (http://cowlark.com/lbw/) which allowed some unmodified Linux binaries to run on Windows using Interix and some really neat syscall handler trickery.
In other words, they're not using "Linux" at all. It's an Ubuntu userland on top of the Windows kernel, similar to how Nexenta was an Ubuntu userland on top of the OpenSolaris kernel.
They've had a POSIX subsystem (SUA) for a while, but it's kind of ancient. I assume they resurrected it and implemented some Linux-specific APIs, which would actually be SO COOL.
The Windows POSIX subsystem which shipped in NT 3.51 was a minimal implementation of the POSIX syscall API plus a userland toolset. That was replaced with Interix, which was renamed Services for Unix (SFU), which had a more comprehensive kernel implementation and more up-to-date userland. However, that tech was not resurrected to build the Windows Subsystem for Linux (WSL).
Importantly, WSL doesn't ship with a distro - we download a genuine Ubuntu userland image at install-time and then run binaries within it.
I, for one, would like to see it resurrected. It's exceedingly useful, and is one major reason that I am not, nor will be, using Windows 10. This new subsystem does not have the things that I use SFU/SFUA for. Nor does it have the BSD-style toolset of SFU/SFUA.
Came here to say that. It'd be nice if they open-sourced it so it wasn't yanked away from under our feet one day :) But maybe they are? I shouldn't presume they are not.
I wonder too. I think that even the win32-native MSYS2 ships with a bash that uses Cygwin's DLL. Apparently some POSIX/Linux system calls are extremely hard to emulate under Win32, such as fork: https://cygwin.com/cygwin-ug-net/highlights.html (search for "Process creation"), which I'd imagine would be critical in a shell for handling redirections, pipes, and job control.
Wine has it easier - it does not translate system calls; it needs "only" a DLL loader and a set of DLLs exporting the right symbols.
NT system calls are not exposed to userspace; only system-supplied DLLs can use them. This is enforced by changing the syscall numbers with every build, so a non-system app would never know which syscall number to use.
Gave this a long look and I couldn't possibly do anything on a Windows machine in its current state. Linux isn't just about running apps - there's a philosophy behind the system. Users first!
As long as Microsoft continues to disrespect the rights of users in regard to privacy, data-collection, data-sharing with unnamed sources, tracking, uncontrollable OS operations (updates, etc) - I will never go near it.
I expect some flak for my position... don't care. I find it especially offensive that ex-open source and ex-Linux users (working for Microsoft) have the audacity to come on here and try to sell this as a 'Linux on Windows' system when most of what makes Linux special (respect for the user) has been stripped away.
It's like giving a man who is dying of thirst sea water.
Most comments here appear to be positive and that's fine... whatever. Please don't sell your souls and the future of software technology for ease of use and abusive business practices. /rant
I mean, how is that different from what's being done here? Is the actual Linux kernel running in some sort of container? This post is light on details, but people here seem to be suggesting it just translates syscalls, which afaict is the exact same thing Cygwin does. (Plus support for Linux ELF binaries, which doesn't really matter since most Linux tools are open-source anyway.)
So "Linux on Windows" isn't really happening here. Binaries intended to run on Linux are just being tricked to run on Windows -- basically the same as Cygwin, with the minor benefit of not requiring a recompile.
It's a lot more than that. When Cygwin is built, the translation to POSIX is handled via the Cygwin DLL, with the various tools recompiled against it.
This version is taking the native Ubuntu binaries and executing them directly against the Windows API via a real-time translation layer.
The difference is like playing a game using virtualization technology vs. WINE. As the complexity of the game increases, the former begins to slow down and break.
Yes - Cygwin is essentially the GNU tools recompiled as Win32 apps using a helper library for shared code etc.
Windows Subsystem for Linux (WSL), which underpins Ubuntu on Windows, is new Windows kernel infrastructure that exposes a Linux-compatible syscall API layer to userland, and a loader that binds the two.
This means you can run real, native, unmodified Linux command-line tools directly on Windows.
So it's cygwin + ELF support in the kernel. I guess ability to run GNU tools + supporting ELF is being called "Linux". The latter piece seems pretty minor, especially since most things that run on ubuntu are open-source anyway.
I've used it a bit on Windows before, and it worked pretty well. It might still be available if anyone wants to try it. The only small issue I had was that the process to download it from the (AT&T) web site was slightly involved, for no good reason as far as I could see. But not difficult.
It sounds cool, but even though his remarks are clear, I'm still confused!
I got a Mac primarily because of its linux side, but it is actually Linux.
This is still Windows, but with a Linux "side" to it? If I apt-get install redis, do I make it start up like I would in Linux, or do I use Windows services? In the screenshot there's a /mnt directory; does that behave the same as it does in Linux?
This is so confusing... but if it's legit, then I would actually look at switching back to windows.
To be more specific, Darwin was originally based on 4.4BSD-Lite2 and FreeBSD with the Mach hybrid microkernel. It has diverged significantly since, with lots of new APIs in userland, but all the major BSD APIs are still there - it is still a POSIX-compliant system.
When I look at it, I basically see NeXTSTEP, which is UNIX... When I see "OS X is BSD" or "OS X is Darwin + BSD", all I see is NeXTSTEP with a worse interface...
When I say OS X is literally UNIX, the emphasis is on the word 'literally'. To be classified as a UNIX system, meaning that an OS maker can use the UNIX trademark, the system has to be certified by The Open Group.
BSD was UNIX, yet neither of its two prevalent derivatives (FreeBSD and OpenBSD) has applied for certification. They are classified as Unix-like; the same is true for any Linux distribution.
To some people, this may be semantics, but one of the reasons that drew me to OSX was the certification.
Yes, I got the reference. I was talking about personal impressions; I was not clear about it, though, sorry. I still remember when they (Apple) ran an ad showing a PowerBook running Mac OS X saying "Send other UNIX to /dev/null" - how excited I was at the time :-P
Almost all of the userland command-line tools are from BSD. The most important exception is clang which is independent (though Apple has historically been the biggest contributor).
> I got a Mac primarily because of its linux side, but it is actually Linux.
No part of Mac OS comes from Linux at all. Also, most of the standard tools are from BSD, not GNU.
Maybe you meant "it's a UNIX-like system" but those predate Linux by 30 years or so, and at any rate Windows + Cygwin was already a UNIX-like system to a similar extent, so that's not really relevant to what was accomplished here.
Macs aren't actually Linux; they are based on Unix (a port, I think) and are very close relatives of Linux, though.
There is a distinction, because you will notice much larger differences between Mac and [insert Linux distro here] than between Linux distros themselves.
They are not "very close relatives to linux" unless you also think Solaris, Windows+cygwin, FreeBSD etc. are also very close relatives to Linux. In which case the term becomes meaningless because almost every mainstream OS is a close relative of every other one.
To be more accurate, your Mac isn't Linux, it's BSD. Luckily that *NIX underpinning is often enough to make it a solid development platform even when deploying to Linux.
It's not really BSD either; it's a mix of a bunch of stuff. The kernel XNU is a hybrid between Mach and BSD. I think there are also a few GNU utilities in the base system but I could be wrong about that (?)
http://www.hanselman.com/blog/DevelopersCanRunBashShellAndUs...
"This is a real native Bash Linux binary running on Windows itself. It's fast and lightweight and it's the real binaries. This is an genuine Ubuntu image on top of Windows with all the Linux tools I use like awk, sed, grep, vi, etc. It's fast and it's lightweight. The binaries are downloaded by you - using apt-get - just as on Linux, because it is Linux. You can apt-get and download other tools like Ruby, Redis, emacs, and on and on. This is brilliant for developers that use a diverse set of tools like me."
"This runs on 64-bit Windows and doesn't use virtual machines. Where does bash on Windows fit in to your life as a developer?
If you want to run Bash on Windows, you've historically had a few choices.
Cygwin - GNU command line utilities compiled for Win32 with great native Windows integration. But it's not Linux. HyperV and Ubuntu - Run an entire Linux VM (dedicating x gigs of RAM, and x gigs of disk) and then remote into it (RDP, VNC, ssh) Docker is also an option to run a Linux container, under a HyperV VM Running bash on Windows hits in the sweet spot. It behaves like Linux because it executes real Linux binaries. Just hit the Windows Key and type bash. "