[dupe] Microsoft and Canonical partner to bring Ubuntu to Windows 10 (zdnet.com)
535 points by raddad on March 30, 2016 | 476 comments



Some additional details from Scott Hanselman:

http://www.hanselman.com/blog/DevelopersCanRunBashShellAndUs...

"This is a real native Bash Linux binary running on Windows itself. It's fast and lightweight and it's the real binaries. This is an genuine Ubuntu image on top of Windows with all the Linux tools I use like awk, sed, grep, vi, etc. It's fast and it's lightweight. The binaries are downloaded by you - using apt-get - just as on Linux, because it is Linux. You can apt-get and download other tools like Ruby, Redis, emacs, and on and on. This is brilliant for developers that use a diverse set of tools like me."

"This runs on 64-bit Windows and doesn't use virtual machines. Where does bash on Windows fit in to your life as a developer?

If you want to run Bash on Windows, you've historically had a few choices.

Cygwin - GNU command line utilities compiled for Win32 with great native Windows integration. But it's not Linux.

HyperV and Ubuntu - Run an entire Linux VM (dedicating x gigs of RAM, and x gigs of disk) and then remote into it (RDP, VNC, ssh).

Docker is also an option to run a Linux container, under a HyperV VM.

Running bash on Windows hits in the sweet spot. It behaves like Linux because it executes real Linux binaries. Just hit the Windows Key and type bash."


This does sound pretty funny when you think about it. "Our latest improvement to Windows is...

Adding Linux"


This is such a big deal, though; it really shows Microsoft's commitment to the new .NET Core initiative and their push for Linux support.


Honestly, it seems like all roads here lead to Microsoft swallowing the Linux userbase whole.

What's next, RedHat on Server?


Swallowing? No. Supporting, yes! We want Windows to be your home - the best place to build apps for all the platforms and all the devices. Period.

In case you've not noticed, this is a very, VERY different Microsoft, one I re-joined recently precisely to work on this very feature! :D


If you want Windows to be Linux users' home, then how isn't it swallowing them and dragging them from an open source environment to a closed source one?

More questions: Will it be backported to Windows 8.1? How does it differ from CoLinux and andLinux?


>If you want Windows to be Linux users' home, then how isn't it swallowing them and dragging them from an open source environment to a closed source one?

Because no one is forcing Linux users to do anything. Can you really not see the difference between giving developers an option and "dragging Linux users"?


> We want Windows to be your home

I'd rather have a home I have control over, or even trust in.


I'm not commenting on the "good or badness" of this move - it's a brilliant power play from MSFT


Replying to myself for a back pat. Well done, self. You called it.


Basically what Apple did from OS9 to OSX. 'Latest improvement is to add OS9 to NextStep'


This ad seems relevant (OSX => Windows, UNIX => Linux, /dev/null => NUL): http://xahlee.info/i/apple_unix_ad.jpg


Just wait until you start systemd on Windows. ROFL


"The Jock/Geek convergence, it's the end of the world!"


No. It may be Ubuntu, but it's not Linux. It isn't Linux, any more than running a Linux userland on NetBSD's Linux "kernel personality" is Linux.


That's being pedantic, honestly. Yes, technically Linux is only the kernel and technically running the Ubuntu run-time on a different kernel isn't running Linux.

However, "Linux" is almost always a reference to GNU tools and the Linux Kernel. It may not be semantically accurate, but take that up with the same people that made literally mean both itself and its opposite.


That's being pedantic, honestly.

Let's try the opposite. Say someone got Wine working to the point where it was very nearly, perfectly indistinguishable from Windows and they put up a blog post saying "Everything works just as it should under Windows, because it is Windows." Microsoft's lawyers would come around with a C&D, and calling them pedants wouldn't invalidate their case.

He could have said "it's just like Linux, right down to the kernel interface" or "Everything works just like Ubuntu, because the userland is Ubuntu". Succinct and correct. Precision matters.


Your points are fair, but color them with the fact that the Windows Subsystem for Linux (WSL) is built to be distro-agnostic. We picked Ubuntu in this first version due to its popularity with developers, but there are few technical reasons (other than us fully and accurately implementing the necessary syscalls) why it shouldn't support other distros' userland environments in the future.

Knowing this, what should we call it?

Windows Subsystem for Running POSIX + Linux Syscall API Compatible Userland Tools? WSRPLSACUMT? :)

I'm genuinely interested in what you all feel would be a good way to think about naming moving forward.


There's nothing wrong with "Windows Subsystem for Linux".

But there's a difference between "This is Ubuntu running on WSL" or "This is Ubuntu running on our Linux compatibility layer", and "This is Linux".


> I'm genuinely interested in what you all feel would be a good way to think about naming moving forward.

I did actually give this some thought, for what it's worth. There are problems with "GNU" in the name and problems with "Ubuntu" in the name.

But it seems to me that Microsoft has a naming scheme that it is perhaps unaware of. See https://news.ycombinator.com/item?id=11417059 . (-:


> Let's try the opposite. Say someone got Wine working to the point where it was very nearly, perfectly indistinguishable from Windows and they put up a blog post saying "Everything works just as it should under Windows, because it is Windows." Microsoft's lawyers would come around with a C&D, and calling them pedants wouldn't invalidate their case.

Except that isn't the opposite, it's an entirely different situation.

1) It isn't Windows. It's a complete rewrite of the Windows APIs. That is not the same thing that's happening here.

2) A C&D isn't a case, it's a piece of paper (politely) asking you to do something. Calling something Windows and it actually being an infringement on Windows patents are entirely different issues.

> Precision matters.

In carefully crafted theoretical situations feigning as analogies to this situation? Sure. In the real world? Hardly.


Android is Linux, but doesn't have your traditional GNU or minimal busybox userland that you'd expect.

Debian GNU/kFreeBSD is Debian, but it isn't Linux.

Mac OS X with GNU tools via MacPorts/Fink/Homebrew is OS X with a GNU userland, but it isn't Linux.

Windows 10 with an Ubuntu userland is Ubuntu, but it isn't Linux.

Linux is a kernel.


Thank you for proving my point on being pedantic.


Xe appears to have proven the opposite of your point, if anything.


Can somebody tell me straightforwardly whether a linux kernel is actually involved in this WinBuntu thing or not?


No, it is not.

> A team of sharp developers at Microsoft has been hard at work adapting some Microsoft research technology to basically perform real time translation of Linux syscalls into Windows OS syscalls. Linux geeks can think of it sort of the inverse of "wine" -- Ubuntu binaries running natively in Windows.

http://blog.dustinkirkland.com/2016/03/ubuntu-on-windows.htm...


MSYS is not Cygwin and IMHO is way better. Yes, it's not Linux, but it's a native binary without an emulation (translation?) layer. It's been around for ages; before I did cross-compiling from Linux to Windows, that's what I used on Windows.

http://www.mingw.org/wiki/msys


MSYS and MSYS2 actually are Cygwin-- the original MSYS being a (horribly out of date) fork of Cygwin that never really pulled much from upstream, and MSYS2 attempting to track upstream Cygwin more closely.

You're getting confused with MinGW, which uses MSYS to build native Windows executables. They need MSYS (as a Cygwin-derived emulation layer) because tools like GCC or Bash expect the system to support POSIX APIs and have POSIX semantics-- for example, Windows has no equivalent to a POSIX fork() call. The code you're compiling under MinGW has no MSYS or Cygwin dependencies, but the compiler and tools themselves (gcc, bash, the linker, etc.) do.
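
To make the POSIX dependency concrete, here is a rough sketch (ordinary POSIX C, not tied to any of these projects) of what a shell like bash has to do to run a pipeline such as `ls | wc -l`. It leans on pipe(), fork(), dup2() and exec(); Cygwin and MSYS supply emulations of these, while plain Win32/MSVCRT has no fork() at all. Error handling is mostly omitted.

    /* pipeline.c - roughly what a shell does for `ls | wc -l` */
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int fds[2];
        if (pipe(fds) == -1) { perror("pipe"); return 1; }

        if (fork() == 0) {               /* left side: ls */
            dup2(fds[1], STDOUT_FILENO); /* stdout -> pipe write end */
            close(fds[0]); close(fds[1]);
            execlp("ls", "ls", (char *)NULL);
            _exit(127);                  /* only reached if exec failed */
        }
        if (fork() == 0) {               /* right side: wc -l */
            dup2(fds[0], STDIN_FILENO);  /* stdin <- pipe read end */
            close(fds[0]); close(fds[1]);
            execlp("wc", "wc", "-l", (char *)NULL);
            _exit(127);
        }
        close(fds[0]); close(fds[1]);    /* parent keeps no pipe ends */
        while (wait(NULL) > 0)           /* reap both children */
            ;
        return 0;
    }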


>MSYS and MSYS2 actually are Cygwin

Not the person you're replying to, but interesting ...

>tools like GCC or Bash expect the system to support POSIX APIs and have POSIX semantics-- for example, Windows has no equivalent to a POSIX fork() call.

So do Cygwin and/or MSYS emulate the fork() call on Windows? and if so, do you have any idea how that is done? Just interested, since I have a Unix background - not at deep OS level, but at app level and also at the level of the interface between apps and the OS (using system calls, etc.).


> So do Cygwin and/or MSYS emulate the fork()

Yes. That's one thing we spent considerable engineering effort on in this first version of the Windows Subsystem for Linux: We implement fork in the Windows kernel, along with the other POSIX and Linux syscalls.

This allows us to build a very efficient fork() and expose it to the GNU/Ubuntu user-mode apps via the fork() syscall.

We'll be publishing more details on this very soon.


Interesting! Thanks. The original Unix fork() was found to be somewhat expensive in resources (a little surprising since it was the only way to create a child process); later there was vfork() (copy-on-write) (maybe innovated by BSD), and I read Linux's clone() does even better, though I haven't looked into it in detail.


fork would copy the entire address space of the process, which was wasteful when you're usually going to just throw all of that memory away by calling exec. BSD added vfork to optimize the `if (fork() == 0) exec(...)` scenario, by not copying the memory, and just pausing and then borrowing memory from the parent, until exec is called. Modern operating systems use copy on write pages for fork, so instead of copying all of memory, you just need to copy the page tables.


> then borrowing memory from the parent, until exec is called

What does that mean?


It means that if the child process actually modifies the memory then those modifications will be visible in the parent process, because they're both using the same address space. It's essentially an awful hack that was added to BSD at a time when it didn't yet use copy-on-write for fork() to achieve the same performance with vfork()+exec() that you would get from a CreateProcess()-like API.

That's why the child is allowed to do almost nothing:

       the behavior is undefined if the process created by vfork()
       either modifies any data other than a variable of type  pid_t  used  to
       store  the  return  value from vfork(), or returns from the function in
       which vfork() was called, or calls any other function  before  success‐
       fully calling _exit(2) or one of the exec(3) family of functions.
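
In code, correct vfork() usage therefore collapses to almost nothing in the child - roughly this sketch (the command being launched is just an arbitrary example):

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = vfork();        /* parent is suspended; child borrows its memory */
        if (pid == -1) { perror("vfork"); return 1; }
        if (pid == 0) {
            /* The child may only exec or _exit; modifying anything else
               here would scribble on the parent's address space. */
            execlp("ls", "ls", "-l", (char *)NULL);
            _exit(127);             /* reached only if exec failed */
        }
        waitpid(pid, NULL, 0);      /* parent resumes after the exec/_exit */
        return 0;
    }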


Thanks.


In Linux, fork(), vfork(), and clone() all use the same underlying machinery, with just a few different flags. clone() is the most general, with flags for what to share; those flags in particular include all the Linux namespaces that serve as the basis for containers. fork() just uses clone() with a hardcoded set of flags, and vfork() does the same as fork() except that it doesn't schedule the parent process until the child calls exec.
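
A hedged illustration of that relationship using glibc's clone() wrapper: passing only SIGCHLD gives fork()-like behaviour, and additional CLONE_* flags select what parent and child share (the stack size and child function here are arbitrary choices, not anything prescribed):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    static int child_main(void *arg) {
        printf("child says: %s\n", (const char *)arg);
        return 0;
    }

    int main(void) {
        const size_t stack_size = 1024 * 1024;   /* 1 MiB child stack */
        char *stack = malloc(stack_size);
        if (!stack) { perror("malloc"); return 1; }

        /* SIGCHLD alone is roughly fork(); adding CLONE_VM | CLONE_VFORK would
           approximate vfork(), and CLONE_NEW* flags create new namespaces. */
        pid_t pid = clone(child_main, stack + stack_size, SIGCHLD, "hello");
        if (pid == -1) { perror("clone"); return 1; }

        waitpid(pid, NULL, 0);                   /* reap the child, as after fork() */
        free(stack);
        return 0;
    }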


Interesting, will have to check that out.


How does this new fork differ from the already existing NtCreateProcess with a NULL section handle that was used to implement fork in the old SUA/POSIX subsystem?


>We implement fork in the Windows kernel

So will this be only on Windows 10?


Yes-- the Linux stuff is all coming in the Windows 10 Anniversary update this summer.


> So do Cygwin and/or MSYS emulate the fork() call on Windows? and if so, do you have any idea how that is done?

Cygwin does some pretty horrific hacks to emulate it. It basically creates a paused child running the same binary, fills in its memory, stores the register context of where it came from in a shared memory, and then resumes the child. The child on startup detects that it was forked, and then looks into shared memory to resume running at the place of the fork.

edit: It's even worse than I remembered: https://www.cygwin.com/faq.html#faq.api.fork


Wow. That sure is complex.


MSYS (as part of the MinGW project) doesn't use Cygwin AFAIK, with the licensing implications that has: Windows programs written with Cygwin run on top of a copylefted compatibility DLL that must be distributed with the program, along with the program's source code (quoting Wikipedia).

In fact the binaries that are compiled with MinGW link with MSVCRT (the Microsoft Visual C Run-Time DLL) by default. So no compatibility layer, and they don't rely on Cygwin.


Please distinguish carefully between MSYS and MinGW. MSYS (or MSYS2) programs run on top of a copylefted compatibility DLL that is suspiciously similar to Cygwin. They must be GPL-compatible. MinGW programs link with MSVCRT. They are compiled by GCC toolchain programs, most of which are MSYS programs.


Wait, wait, wait: will I have access to the whole Ubuntu repositories via apt-get, or just to a purpose-built Windows repository?

Either way, for me this is great news. The fact that it was really hard to work with Python/Ruby/Node/etc. under Windows, and the fact that I hate PowerShell, were the two main reasons why I work on a Linux OS all the time.


> will I have access to the whole repositories of ubuntu via apt-get

Yes; full, standard, repo access [1].

> With full access to all of Ubuntu user space
> Yes, that means apt, ssh, rsync, find, grep, awk, sed, sort, xargs, md5sum, gpg, curl, wget, apache, mysql, python, perl, ruby, php, gcc, tar, vim, emacs, diff, patch...
> And most of the tens of thousands of binary packages available in the Ubuntu archives!

[1] http://blog.dustinkirkland.com/2016/03/ubuntu-on-windows.htm...


I've never met anyone that tried to do some scripting with PowerShell and didn't fall in love with it, that's why I find it difficult to believe that you hate it. Or that you tried it.

You might hate the terminal where PowerShell runs, but I don't think you hate Powershell.

I see a future where devs move to Windows due to Bash and stay due to PowerShell.


I've done some terrible, terrible automation using PowerShell and PS Remoting across dozens of Windows servers, and wanted to smash my screen on multiple occasions.

First, PS Remoting is terrible compared to SSH. Commands fail randomly on a small percentage of systems, and the only options to troubleshoot are to login manually to the remote system, and try a number of things, including rebooting the remote system (not very good for servers).

Second, debugging is terrible compared to bash - sure PowerShell ISE allows you to step through your code line by line, however, I don't have anything like "bash -x script.sh" which lets me see the actual execution and return code of every line of my scripts.

Third, bash has a much simpler way to chain output through multiple programs using pipes and treating input/output as simple text. PowerShell is a pseudo programming language with objects and other data types that just don't enable this type of chaining in the same simple and easily understandable way.

It took me weeks to write a PowerShell script that used PS Remoting to loop through a list of provided servers, install a service, set the RunAs user, start the service, create a secret file, and EFS encrypt that file as a specific user. I could have written the same script in hours using bash for Linux boxes. It would have been much more efficient by using tools like GNU parallel.

I'm not sure how anyone that's used both PowerShell and bash for any serious work could say PowerShell is better, unless they're a .NET developer and appreciate being able to blend .NET objects into their scripts, but to me, that just breaks the simple modular composability of the *nix philosophy.


We might have very different use cases I guess. I script a lot to work with data, not to automate things.

I feel very comfortable getting a stream of data and treating it with sed, awk, grep and whatever; but once I worked with objects it feels much more data-oriented.

And please don't get me wrong, I've been in love with Bash since a Slackware CD fell on my hands in the 90s. I just was mind-blown by PS after making fun of it for years - just because the default terminal where it runs is less than great.

I guess this thread proves the point of bringing Bash to Windows: different people, different uses, different needs and solutions. And that makes me happy :)


so is there a better terminal available for PowerShell?

(preferably for win7, because I run Linux at home and win7 is the most likely platform I may spend some time elsewhere)

I do appreciate, from all I've heard about PowerShell, it might be an interesting environment to try some scripting in. The actual shell/programming language can't be (much) worse than bash--I mean let's admit, bash is pretty ancient and therefore didn't have the advantages of progress in designing programming languages we have made in the past decade(s).


The terminal is inferior IMHO, but the language is far superior.

Of course, if you're too used to Bash and its paradigm there's a learning curve in PowerShell. Not steep, though :-)

The killer features for me are the objects and being able to seamlessly use C# libraries in my scripts.


I wrote a few 10- to 20-line scripts, but got lost in the documentation every time. You have to wade through mud to get anything simple done. Statements/command lines easily wrap around an 80-character window due to their verbosity. The worst is that every command has about 40+ options, most but not all of them the same as for other commands. Sorry, not for me.


PowerShell, with its typed pipes, is so foreign to those of us who are accustomed to string-based pipes that I have a very hard time falling in love with it.


Exactly. I mean did we really just jump straight from the worst shell in history (the windows command prompt) to one that supports tmux?

However, I don't know if this is the case. I remember in the livestream Meyers saying that they will enable you to choose any shell you want "powershell, dos, bash, and more coming soon". If they just supported _any_ ubuntu binary natively, I don't know why he would've said "more coming soon"


Windows has had PowerShell for a while, and there is a good argument that it's better than the *nix shell because it supports strongly typed objects. Many will disagree, of course, but keep in mind that cmd.exe hasn't been the Windows shell of choice for many years now.


screen and tmux don't appear to be supported yet due to issues with terminal emulation. But they're working on it.


With some luck they'll work with Conemu. BTW, it's 3-clause BSD, so you can peek, Microsoft. :)

https://github.com/Maximus5/ConEmu


> This is a real native Bash Linux binary running on Windows itself.

How does it work without VM? I'm super curious!


Have a look at FreeBSD's linuxulator. It basically maps syscalls (and a bunch of other stuff) from one OS to another. Alexander Leidinger has written a whole lot of blog posts about it - this one's a good start: http://www.leidinger.net/blog/2010/10/27/the-freebsd-linuxul...


Is there a source for this being a syscall emulation layer and not some kind of colocation of the linux kernel in a subsystem somehow?

The latter would be much more interesting to me, since what I really want out of "running linux on my desktop" is for it to actually act like the linux machines my code is targeting, and I'm dubious that a syscall layer will achieve that to the degree I want.


One of my coworkers is from MS and worked on the Android apps on Windows project. This comes out of that cancelled work: It's a full implementation of Linux syscalls in user mode that registers a driver to perform kernel-mode tasks on behalf of the subsystem. The NT kernel has always been fairly agnostic and not tied specifically to Win32; it originally had an OS/2 text-mode and POSIX subsystem in addition to Win32. The NT kernel even uses a Unix-like "\\?\" root filesystem where disks, kernel objects, sockets, etc are mounted.

From what I can gather Microsoft is paying Canonical to help with a few user-mode bits and the Windows apt-get stuff uses the official Canonical sources.

As I said to him: "NT is the only major OS I know of that has always had personality subsystems. Cutler’s vision finally pays off after 3 decades of waiting"


> "Cutler’s vision finally pays off after 3 decades of waiting."

Quite!

But only because the Interix-derived POSIX subsystem was, and is, little-known and vastly underappreciated. Had it been better known, the payoff might have come a decade or more sooner. There are a fair number of questions being asked now, about the new Linux subsystem, where the answer is "No; but the old POSIX subsystem had that.".

* Does it support pseudo-terminals? No, according to the demonstration video; but the old POSIX subsystem did. (https://news.ycombinator.com/item?id=11415843)

* Does it let you kill Win32 processes? No; but the old POSIX subsystem did. (https://news.ycombinator.com/item?id=11415872)

* Does it support managing daemons? No; but the old POSIX subsystem did. (https://news.ycombinator.com/item?id=11416376)

* Does it support GUI programs? No (say the people behind it themselves, although I suspect that it could run X clients); but the old POSIX subsystem did. (https://news.ycombinator.com/item?id=11391961) (https://technet.microsoft.com/en-gb/library/bb463223.aspx)


I remember having a Microsoft developer (I'm afraid I forget her name — Six?) come and visit us several years ago to demonstrate the then-new SFU 3.0.

Seeing her send SIGSTOP to a running MSWORD.EXE process and observe it stop updating its window in response to expose events was splendid. :-)


I love your last paragraph. Exactly what I was thinking. NT subsystems were its party trick and it's taken far too long for them to be introduced on the grand stage. I bet Dave Cutler is smiling his face off!


> "NT is the only major OS I know of that has always had personality subsystems."

In what way is a "subsystem" different than a "library" or a "process" or a "driver" (if it runs in kernel space)?

Any process can use the native API. What's special about a "subsystem"?


What was his vision?

I know that Dave Cutler was heavily involved in designing NT, and earlier did the same with DEC's VMS. (I'd read the book Inside Windows NT.)

But don't know what vision you refer to. Was it about the personality subsystems?


Before leaving for MS, Cutler worked on an OS codenamed "Mica" which ran on top of an architecture named PRISM (which was a predecessor to the Alpha). The idea behind Mica was that it was a microkernel which hosted VMS and Unix personalities on top.

It is widely claimed and believed that when he moved with his team to MS that he reimplemented Mica as the NT kernel. http://www.textfiles.com/bitsavers/pdf/dec/prism/mica/


Close, except ...

There is NO Windows "apt-get stuff" - it's just apt-get. From Ubuntu.


Thank you for this concise summary. Much clearer now.



Thanks, this is what I was actually looking for. :)


From linux(4):

     The linux module provides limited Linux ABI (application binary inter-
     face) compatibility for userland applications.  The module provides the
     following significant facilities:

     -   An image activator for correctly branded elf(5) executable images

     -   Special signal handling for activated images

     -   Linux to native system call translation

     It is important to note that the Linux ABI support is not provided
     through an emulator.  Rather, a true (albeit limited) ABI implementation
     is provided.
https://www.freebsd.org/cgi/man.cgi?query=linux&apropos=0&se...

"Mapping syscalls from one OS to another" was really just an example to give the OP an idea of how this sort of thing works without a VM.

Edit: Nevermind then


I wasn't asking about freebsd (I've used that, even!), I was asking about this new thing on Windows and whether there's been official word that it is indeed the same sort of thing. Looks like it is indeed this sort of thing, though.


A good few years ago now I used to use a project called Cooperative Linux that was like what you describe - a user-mode Linux kernel running as a service in Windows.


Yeah there were actually a couple of projects along this line at various points, but they always suffered in various ways from not being well integrated into the system. What you really want is for the colocated kernel to have first class access to some basic things about the primary kernel (like file systems -- including page caching and networking) so they're not going through an expensive and awkward translation layer.

Cygwin and things like colinux exist on opposite ends of that awkward divide, but something officially supported by the OS could maybe straddle it better.


I feel it is related to Windows Server container. Maybe Ubuntu works like a Docker container on Windows.


There was also another project I worked with a while back called LBW (http://cowlark.com/lbw/) which allowed some unmodified Linux binaries to run on Windows using Interix and some really neat syscall handler trickery.



They added a subsystem in Windows that responds to Linux APIs.


In other words, they're not using "Linux" at all. It's an Ubuntu userland on top of the Windows kernel, similar to how Nexenta was an Ubuntu userland on top of the OpenSolaris kernel.


You pretty much nailed it :)


They've had a POSIX subsystem (SUA) for a while, but it's kind of ancient. I assume they resurrected it and implemented some Linux-specific APIs, which would actually be SO COOL.


No - this is a whole new thing.

The Windows POSIX subsystem which shipped in NT 3.51 was a minimal implementation of the POSIX syscall API plus a userland toolset. That was replaced with Interix, which was renamed Services for Unix (SFU) and had a more comprehensive kernel implementation and a more up-to-date userland. However, that tech was not resurrected to build the Windows Subsystem for Linux (WSL).

Importantly, WSL doesn't ship with a distro - we download a genuine Ubuntu userland image at install-time and then run binaries within it.


> Importantly, WSL doesn't ship with a distro - we download a genuine Ubuntu userland image at install-time and then run binaries within it.

So can we use WSL by itself and pick a different distro, if we'd rather use say Alpine or openSUSE or Arch's userland?


> However that tech was not resurrected ...

I, for one, would like to see it resurrected. It's exceedingly useful, and is one major reason that I am not, nor will be, using Windows 10. This new subsystem does not have the things that I use SFU/SFUA for. Nor does it have the BSD-style toolset of SFU/SFUA.


Does Windows 10 still ship with the OS/2 subsystem? (os2ss.exe)



Thanks!


So LINE on Windows like WINE on Linux?

Seriously: it shows an appreciation of where a lot of the workload for computer programmers is these days.


Came here to say that. It'd be nice if they open-sourced it so it wasn't yanked away from under our feet one day :) But maybe they are? I shouldn't presume they are not.


Lol "linux is a cancer" has new meaning now :)


Is it a cancer if it doesn't kill you, but actually helps you be stronger? Maybe Linux is more like gut bacteria now. Linux is E.Coli?


e.coli isn't a gut bacterium, ... is it?


Think of it as Wine, reversed.


Eniw?


Emulator Not In Windows

Sorry it's not recursive.


ENIW is 'Nix In Windows


Enterprise Networked Internet Workstation?


Everywhere Now Is Windows


Vinegar?


Curious to see what's the support of the Windows API then.

Follow up questions would be if it will support kernel-dependent utils like tcpdump, ifconfig, etc.


I wonder too. I think that even the win32-native msys2 ships with a bash that uses cygwin's dll. Apparently some POSIX/Linux system calls are extremely hard to emulate under win32, such as fork: https://cygwin.com/cygwin-ug-net/highlights.html (search for "Process creation"), which I'd imagine would be critical in a shell for handling redirections, pipes, and job control.


Apparently they re-implemented the Linux APIs along with the ability to run Linux executables directly.


The same way Wine works for Windows binaries. Translate system calls.


Wine has it easier - it does not translate system calls, it needs "only" a dll loader and a set of dlls exporting the right symbols.

NT system calls are not exposed to userspace; only system-supplied dlls can use them. This is done by changing the syscall codes for every build, so a non-system app would never know which syscall number to use.
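
In practice that means applications reach NT services through the exported ntdll.dll stubs rather than hard-coding syscall numbers themselves. A small sketch of the usual pattern, resolving one of the semi-documented Nt* functions at runtime (the particular call and information class are just an example):

    #include <stdio.h>
    #include <windows.h>
    #include <winternl.h>

    /* Declare the stub's signature ourselves; the public SDK headers for
       Nt* functions are deliberately incomplete. */
    typedef NTSTATUS (NTAPI *NtQueryInformationProcess_t)(
        HANDLE, PROCESSINFOCLASS, PVOID, ULONG, PULONG);

    int main(void) {
        HMODULE ntdll = GetModuleHandleW(L"ntdll.dll");
        if (!ntdll) return 1;

        /* The ntdll stub, not the application, knows the build-specific
           syscall number and performs the actual kernel transition. */
        NtQueryInformationProcess_t pNtQueryInformationProcess =
            (NtQueryInformationProcess_t)GetProcAddress(ntdll, "NtQueryInformationProcess");
        if (!pNtQueryInformationProcess) return 1;

        PROCESS_BASIC_INFORMATION pbi;
        ULONG len = 0;
        NTSTATUS status = pNtQueryInformationProcess(
            GetCurrentProcess(), ProcessBasicInformation, &pbi, sizeof(pbi), &len);

        if (status == 0)  /* STATUS_SUCCESS */
            printf("PEB address: %p\n", (void *)pbi.PebBaseAddress);
        return 0;
    }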


Huh, TIL.

That explains why Wine always seemed so buggy, though.


Gave this a long look and I couldn't possibly do anything on a Windows machine in its current state. Linux isn't just about running apps - there's a philosophy behind the system. Users first!

As long as Microsoft continues to disrespect the rights of users in regard to privacy, data-collection, data-sharing with unnamed sources, tracking, uncontrollable OS operations (updates, etc) - I will never go near it.

I expect some flack for my position... don't care. I find it especially offensive that ex-open source and ex-Linux users (working for Microsoft) have the audacity to come on here and try to sell this as a 'Linux on Windows' system when most of what makes Linux special (respect for the user) has been stripped away.

It's like giving a man who is dying of thirst sea water.

Most comments here appear to be positive and that's fine... whatever. Please don't sell your souls and the future of software technology for ease of use and abusive business practices. /rant


What's "Linux" about this and "not Linux" about Cygwin?


Cygwin translates calls to Windows via cygwin.dll. It works well for the most part, but in some corner cases, it doesn't work so well.


I mean, how is that different from what's being done here? Is the actual Linux kernel running in some sort of container? This post is light on details but people here seem to be suggesting it just translates syscalls which afaict is the exact same thing cygwin does. (plus support for linux elf binaries, which doesn't really matter since most linux tools are open-source anyway)


See here: http://blog.dustinkirkland.com/2016/03/ubuntu-on-windows.htm...

> Microsoft research technology to basically perform real time translation of Linux syscalls into Windows OS syscalls


So "Linux on Windows" isn't really happening here. Binaries intended to run on Linux are just being tricked to run on Windows -- basically the same as Cygwin, with the minor benefit of not requiring a recompile.


It's a lot more than that. When Cygwin is built, the translation to POSIX is handled via cygwin.dll, with the various tools recompiled against that file.

This version takes the native Ubuntu binaries and executes them directly against the Windows API via a real-time translation layer.

The difference is like playing a game using virtualization technology vs. WINE. As the complexity of the game increases, the former begins to slow down and break.


"just"

You don't have to be a fan of Microsoft but they deserve at least some credit.


That is quite a bit different from Cygwin.

Running the Linux kernel on Windows has been done before IIRC - coLinux?


That's what I was thinking but coLinux never did 64-bit


Cygwin is pure userland. This Linux API subsystem is a small bit of userland (similar to kernel32.dll in the Win32 subsystem), but mostly kernel code.


Yes. You can quickly see this when running ping: on Cygwin it runs 3 times, on Linux forever.

Edit: thinking about it, I'm not sure if I'm right. Or does Cygwin map the ping command to the Windows command?


> does Cygwin map the ping command to the Windows command?

It looks like Cygwin has a ping package, but it isn't something you'll get unless you specifically select it in the installer.


Cygwin requires a recompile. This is running Ubuntu binaries.


Yes - Cygwin is essentially the GNU tools recompiled as Win32 apps using a helper library for shared code etc.

Windows Subsystem for Linux (WSL) which underpins Ubuntu on Windows is new Windows kernel infrastructure that exposes a LINUX-compatible syscall API layer to userland and a loader that binds the two.

This means you can run real, native, unmodified Linux command-line tools directly on Windows.
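
One way to picture "a Linux-compatible syscall API layer" is that even a raw syscall issued by an unmodified binary is serviced by the Windows kernel instead of a Linux one. A minimal sketch, assuming a standard glibc toolchain (nothing here is WSL-specific; it's just a plain Linux program):

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void) {
        /* Bypass the libc wrapper and issue the Linux getpid syscall directly.
           On real Linux the Linux kernel answers; under WSL the Windows
           kernel's Linux-compatible layer services the same request. */
        long pid = syscall(SYS_getpid);
        printf("getpid via raw syscall(): %ld\n", pid);
        return 0;
    }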


So it's cygwin + ELF support in the kernel. I guess ability to run GNU tools + supporting ELF is being called "Linux". The latter piece seems pretty minor, especially since most things that run on ubuntu are open-source anyway.


No, it's supporting all of the linux system calls in the Windows kernel. Epoll, fork, exec, etc. It's a totally different implementation than cygwin.


Got it, thanks.


They must be 2 days early.


"Embrace, extend, extinguish."


There was another choice like Cygwin, though not as comprehensive - UWin - by David Korn, creator of the Korn shell (ksh):

https://en.wikipedia.org/wiki/UWIN

I've used it some on Windows earlier, and it worked pretty well. Might still be available if anyone wants to try it. The only small issue I had was that the process to download it from the (AT&T) web site was slightly involved, for no good reason, as far as I could see. But not difficult.


It sounds cool, but even though his remarks are clear, I'm still confused!

I got a Mac primarily because of its linux side, but it is actually Linux.

This is still Windows, but with a Linux "side" to it? If I apt-get install redis, do I make it startup like I would in linux, or do I use windows services? In the screenshot there's a /mnt directory, is that behaving the same as it does in linux?

This is so confusing... but if it's legit, then I would actually look at switching back to windows.


> I got a Mac primarily because of its linux side, but it is actually Linux.

No, it's not. OSX is literally UNIX. OSX is based off Darwin, which is based off BSD.


To be more specific, Darwin was originally based on 4.4BSD-Lite2 and FreeBSD with the Mach hybrid microkernel. It has diverged significantly since, with lots of new APIs in userland, but all major BSD APIs are still there - it is still a POSIX-compliant system.


NeXTSTEP originally used 4.3BSD-Tahoe as its base with Mach, and Darwin's a direct descendent of that.


When I look at it, I basically see NeXTSTEP which is UNIX... I see a lot of OS X is BSD, or, OS X is Darwin + BSD, all I see is NeXTSTEP with a worse interface...


When I say OSX is literally UNIX, the emphasis is on the word 'literally'. To be classified as a UNIX system, meaning that an OS maker can use the UNIX trademark, the system has to be certified by The Open Group.

BSD was UNIX, yet neither of its 2 prevalent derivatives (FreeBSD and OpenBSD) has applied for certification. They are classified as Unix-like; the same is true for any Linux distribution.

To some people, this may be semantics, but one of the reasons that drew me to OSX was the certification.


What appeals to you about the certification? Are there ways in which OS X behaves in a "more standard" way than FreeBSD ?


Yes, I got the reference. I was talking about personal impressions; I was not clear about it though, sorry. I still remember when they (Apple) ran an ad showing a PowerBook running Mac OS X saying "Send other UNIX to /dev/null", and how excited I was at the time :-P


Almost all of the userland command-line tools are from BSD. The most important exception is clang which is independent (though Apple has historically been the biggest contributor).

Also, large parts of the kernel are from BSD.


> I got a Mac primarily because of its linux side, but it is actually Linux.

No part of Mac OS comes from Linux at all. Also, most of the standard tools are from BSD, not GNU.

Maybe you meant "it's a UNIX-like system" but those predate Linux by 30 years or so, and at any rate Windows + Cygwin was already a UNIX-like system to a similar extent, so that's not really relevant to what was accomplished here.


Macs aren't actually Linux; they are based on Unix (a port, I think) and are very close relatives of Linux.

There is a distinction, because you will notice much larger differences between a Mac and [insert Linux distro here] than between Linux distros.


They are not "very close relatives to linux" unless you also think Solaris, Windows+cygwin, FreeBSD etc. are also very close relatives to Linux. In which case the term becomes meaningless because almost every mainstream OS is a close relative of every other one.


And that's why we all should convert to TempleOS once and for all.


To be more accurate, your Mac isn't Linux, it's BSD. Luckily that *NIX underpinning is often enough to make it a solid development platform even when deploying to Linux.


It's not really BSD either; it's a mix of a bunch of stuff. The kernel XNU is a hybrid between Mach and BSD. I think there are also a few GNU utilities in the base system but I could be wrong about that (?)


    $ uname
    Darwin
    $ /usr/bin/cmp --version
    cmp (GNU diffutils) 2.8.1
    [...]


Right! I thought there was still some GNU stuff floating around ;). Thanks for the example.


Well, this has increased the chances of my next laptop being a Surface Book by around 100%. I already loved the form factor of the thing, but lack of bash was absolutely causing me to hesitate and wonder if I could justify doing all my work in a Linux VM or something (I can't).

I'm genuinely very tired of OS X, which (to my perception at least) has gotten steadily worse with every version. I for one will be happy to switch.


Yea same here, the Surface Pro is just such a nice piece of powerful hardware and a great form factor to carry around. Having bash/linux subsystem on Windows makes it a pretty damn nice development machine for pretty much anything (web/games/etc.).


If you don't mind having an extremely bloated OS, which is why I quit programming on Windows and switched to Linux in the first place. Now they've just added even more bloat and I imagine developing using the "linux environment" will be a huge pain in the ass because it'll basically be like programming on a new operating system.

Can't get node.js to run on your "Linux environment" and access a database running on windows? Good luck finding an answer for that on stackoverflow.

You'll have to target yet another environment for any app you develop. Will it be running on a Windows server? A Linux server? A server running "Windows with bash"?


Bloat is an overused, nebulous concept. Most people actually like rich desktops and lots of features, which is why Mac laptops have proliferated in the dev community.

Things like Node.js already run pretty well on Windows as it is, and MS is building native tooling in Node.js (e.g. their Azure CLI).

With this change, Microsoft is definitely going to encourage a lot of Surface adoption for geeks.


Really...? What would you change about OSX? I just switched from Windows to a Mac and I can't see myself ever going back...


Working SMB networking would be nice. Tired of mysterious issues. I can't remember a single time in past year when whole directory copy/move to SMB share succeeded. OS X just gives me these descriptive "error -51" or whatevers. (It's really mature (not!) of OSX to have that blue screen icon for SMB shares.)

Stable USB stack would be nice as well. Ever since El Capitan, virtual machines I run off USB drive have been getting random I/O timeouts.

OS X tends to need quite a bit more memory than Win10. Win10 is as usable on 2 GB RAM as OS X on 4 GB. OS X graphics driver is also pretty slow, some 30% slower than on Windows. OpenGL support is pretty bad on OS X.

On Windows 10 side my biggest issues are unstable (or temporarily unavailable) RDP and bluetooth stereo audio stuttering. RDP color accuracy leaves also a lot to be desired.


Funny, that it's SMB that I hate on Windows. With OSX or Linux its no problem to access shares with different credentials, no matter whether Windows server in AD or standalone samba-based NAS, while it is a major pita in Windows.


Odd. I think my issues might have something to do with file sizes. Maybe there are some issues with files over 2 GB or 4 GB. I don't know. I've just resorted to using FTP (ugh!) and USB drives to get files out of my OS X machines.


The network is OK? Are you connected over wifi? No packet loss?

And even though it is 3x3 MIMO, copying 20GB vm images is not something you want to do over wifi, so in the end I got the Thunderbolt Ethernet adapter. Works like a charm, shortened the transfer by more than 10x.

SMB by itself never gave a problem (clean install of 11.0, then continuously updated to 11.4).


11ac wifi. Usually connected at 702/780/867 Mbps. Works fine from Linux and Windows VMs running on OSX (and laptops).

Transfer speed after overhead over 11ac 867 Mbps wifi is usually 400+ Mbps.

No packet loss (or at least it's below 0.1%).


The unstable USB stack is annoying. Before it was fixed in the latest OS update, middle-clicking with a USB mouse would cause the USB audio driver to segfault. Clicking too fast would end up with a kernel panic.


I have far more issues with OS X than with Windows. I get kernel panics, waking from sleep crashes, weird issues with external monitors not being detected as connected, or worse, disconnected.

Windows also offers a lot more customizations than OS X.

However, there are things I like about OS X also, like spaces and multi-touch trackpad support.

I use both on a daily basis.


I don't disagree that quality has dropped a bit lately, but I don't think your experience with OS X is typical. I have not had a kernel panic in years. Maybe time you did a clean install.


The panics I get are related to Thunderbolt, so I suspect it's an issue with the Thunderbolt kernel drivers. I run two external Apple Thunderbolt monitors and that's at the core of most of my issues.


This could very well be it. I was about to comment that I've been using OS X for about 10 years and I'm pretty sure I can count the number of kernel panics I've had on my fingers. Most of those happened a long time ago.


I run two external Thunderbolt monitors through a couple of cheap adaptor boxes and have never had a problem.

I suspect RAM issues. OS X isn't great if - say - Chrome eats all the memory. And if the RAM itself isn't rock solid, you will get crashes.

A lot of issues went away when I installed 32GB.


> I get kernel panics, waking from sleep crashes, weird issues with external monitors not being detected as connected, or worse, disconnected.

This is a real problem for me as well. Not enough to make me want to ditch my Mac, but it's a real PITB.


Like I said, maybe it's my perception, but I've lost count of the number of times it has failed to connect to a Wi-Fi network, randomly crashed... it's been unable to connect to any links from http://t.co for months, for goodness sake (this has recently been fixed).

I think part of my objections are that OS X used to be absolutely rock solid, around the Snow Leopard era. An entire release dedicated just to tuning up the OS! Unheard of now - I will never install a new version until the x.1 patch is out, there are always huge bugs.

I don't expect Windows to be rock solid, I just don't expect it to be any worse than OS X any more.


I haven't had any bugs at all on El Capitan.


There's no Surface Pro equivalent for OSX (the iPad Pro not running a real OS is too bad), and Mac machines are crazy expensive for what's inside. At least, that's why I switched.


> Well, this has increased the chances of my next laptop being a Surface Book by around 100%.

And thus MS has achieved their goal.


[flagged]


Yes of course. Anyone who doesn't recognize that your opinionated development style is superior has to be a shill.


Have you forgotten about privacy issues in Windows 10 or you just don't care?


I thought you could turn them all off if you were really diligent?



The leaking random machine ID is pretty bad. The other things seem rather harmless, or you can turn them off.


Me too, but to be honest I am not quite sure what I don't like about Mac OS X anymore. At work I've got Arch with i3, which is (extremely) addictive, but at home I have Mac OS X (and I use it for work too). I don't really have any specific problems (compared to Arch :) with Mac OS X. Still, I was thinking of putting Arch on it, but I don't want to tinker to be able to watch Netflix after a long flight on that 'weird' network which the wifi driver version X.Y doesn't like :) Windows felt weird after getting used to NIX-ish systems, so it was out of the question.

I have tried to figure out why I want a non-mac for my laptop and concluded I just like change... :) I was almost settled on that dell xps with ubuntu, but if the Surface Books get thunderbolt 3 and this before the autumn I am pretty sure I can't resist anymore...

EDIT: typo


Why don't you just use Linux in a VM on Windows? I don't really understand why native bash (and full Linux ABI) would make Windows a better development environment than just running a Linux VM.

If you plan on using the Linux environment and having it interact with the Windows environment you're going to have the same limitations that you would with a VM, OR you'll have to change your workflow because the way a program running under a Linux environment interacts with some windows service is going to be a completely new thing.

Will I be able to use a windows only service to interact with a command line program written in python running in the Linux layer? If I can't interact with the windows layer completely then it's very much like a VM or a container running inside a jail.

What happens when I install python or nodejs and stuff just doesn't work right? Like say I have a database running on Windows and I want to interact with it with Python. Will I have to rely on Windows making sure the compatibility layer always works?


So, VMs are effectively a completely different machine. Different memory space, different disk space, different process space, etc. This is Linux applications running on the same machine as your Windows applications. Yes, they'll be able to talk to each other. Your Windows service will be able to talk to a database running from Linux and vice versa. The Linux processes are still processes running on your Windows machine (evidently they're some sort of "lighter process", but that hasn't been explained well... but it was explained that they'll be able to communicate directly with other processes via sockets, ports, etc).

The demos are VERY convincing. Basically everything works exactly like you would want it to work. It's exactly ubuntu and windows running through the same kernel at the same time.


Honestly: don't know... :) I suppose when looking for a new fancy (and expensive) gadget I would like it to work like I want it to out of the box. I can admit it is not a rational thing: I would have to install stuff either way, but it makes me _want_ it less... I know it doesn't make sense, but I believe that is it... :)


Oh, and I don't think I am afraid if something doesn't work (like node in your example or what not) with the barebone ubuntu on windows as long as the minimals: drivers, etc etc work... I am pretty used to tinkering and figuring out those things and I really enjoy that barebone speed. Willful waste makes woeful want, my mom always said :)

EDIT: idiom


Just use linux.


Increasing a ~0% chance by 100% is still ~0ish%


No, it's ambiguous whether he was talking about percentage or percentage points.


Interested in how long it will last this time. Windows NT was POSIX compliant a long time ago, but that was discontinued.

https://en.wikipedia.org/wiki/Microsoft_POSIX_subsystem

https://en.wikipedia.org/wiki/Windows_Services_for_UNIX


The POSIX subsystem was so crippled that it was unusable (no graphics or network). SfU was neither free nor included by default, except for one free version which also happened to be its last. Also at those times Linux did not have the market penetration that it currently has with Android.


Right. As I've heard it, the POSIX subsystem was essentially a checkbox feature to meet some government contracting requirements.


You confuse the POSIX subsystem with SFU/Interix (originally called "OpenNT", but soon renamed to "Interix". Later bought by Microsoft and rebranded as "Services for UNIX"):

> https://www.samba.org/samba/news/articles/low_point/tale_two...

> http://brianreiter.org/2010/08/24/the-sad-history-of-the-mic...


I specifically remember that I read that (it was a checkbox feature) about the ancient POSIX subsystem. The SFU/Interix system was a bit more capable, I think? I did install it at one time, but never really used it.

Edit: yes, that is in fact exactly what the first link you gave says: The POSIX subsystem was added as the POSIX standard had become very prevalent in procurement contracts. [...] This original subsystem was, I think it's fair to say, deliberately crippled to make it not useful for any real-world applications. Applications using it had no network access and no GUI access, [...] SFU contains a full POSIX environment, with a Software development kit allowing applications to be written that have access to networking and GUI API's.


I remember trying to use the Services for UNIX Version 1.0 Korn shell for a job. Coming from bash/zsh, I loathed it. I opted for Cygwin's bash ASAP.


What I heard was that they took advantage of every function that could technically be "implemented" by setting errno appropriately and returning an error value, rendering the subsystem useless while still allowing the box to be checked off.


How about GUI applications?

I worked at a place that developed a "Linux on Windows" thingy back in the Windows XP days. It was essentially like WINE. A user-mode Windows program would load the Linux binary into the Windows program's address space and execute it, trapping any attempts by the Linux code to issue system calls, and the Windows program would then service those system calls.

For non-GUI stuff this worked remarkably well. I was able to grab the binary for rpm off of my Red Hat system, and the then current Red Hat distribution disc, and install successfully almost all of the RPMs from the disc and have almost all of the non-GUI ones work.

I had expected big problems from the case-insensitive vs. case-sensitive filesystem issue, but in practice there were only a handful of things that ran into this. Mostly Perl stuff that used both "makefile" and "Makefile".

GUI stuff was another matter. We could run XFree86 under Cygwin, and then the Linux apps under our WINE-like program would work. However management was not keen on the idea of including Cygwin and XFree86 if we turned this thing into a project. Also, we wanted an X server that would fit in better with a mix of X and native Windows apps running at the same time.

I spent a while trying to write a Windows X server straight from the official specifications. I got as far as being able to get xcalc to display a window and all the controls to show up right, but weird things happened with events. Everything looked fine when I packet sniffed the communication. I still had not figured this out by the time management decided that this whole thing did not have enough of a commercial market to continue the project.


Want to make this absolutely clear - this is a command-line only toolset for developers.

It is not built to support GUI desktops/apps. It is not built to run production Linux server workloads. It is not suitable for running micro-services or containerized environments.

Again - this is A COMMAND-LINE-ONLY DEVELOPER TOOLSET!


Any plans for it to become more than that?


GUI applications not currently supported, and as far as I've seen, not intended to be supported (but who knows in the long term).


I wonder if this will mean that Windows will ship a more up to date Bash than OSX!

Apple stopped updating Bash in OSX when the upstream license changed from GPL2 to GPL3, I believe. (Fortunately, they keep the bundled zsh more up to date)


wow, so... 2007?


Yup

    % uname -a; bash --version; zsh --version
    Darwin hostname 15.4.0 Darwin Kernel Version 15.4.0: 
    Fri Feb 26 22:08:05 PST 2016; root:xnu-3248.40.184~3/RELEASE_X86_64 x86_64
    
    GNU bash, version 3.2.57(1)-release (x86_64-apple-darwin15)
    Copyright (C) 2007 Free Software Foundation, Inc.
    
    zsh 5.0.8 (x86_64-apple-darwin15.0)


thank goodness zsh is the superior shell anyway


It really should be the default shell by now. I don't see the point in Apple maintaining the 2007 version of bash with security updates when they can just promote zsh, which has near 0 breaking changes from bash.


Apple can't do that or many people will just move away to... Well, Windows now.

Users with zsh as their default shell are a tiny minority.


The former is nonsense, and the latter sheer unsupported guesswork.

The actual evidence from history of changing shells, from the Ubuntu and Debian worlds where they actually did make a change of shells (from Bourne Again to Debian Almquist) a few years ago, is that it doesn't drive people away in the first place, let alone away to Windows.

Even if one did a survey to make the latter not unsupported guesswork, one would have (if my experience of StackExchange is anything to go by) to account for all of those who answered that "My shell is Terminal.app." or "I have oh-my-zsh as my terminal.".


    $> uname -a
    Darwin K2523 15.4.0 Darwin Kernel Version 15.4.0: Fri Feb 26 22:08:05 PST 2016; root:xnu-3248.40.184~3/RELEASE_X86_64 x86_64
    $> which bash
    /bin/bash
    $> bash --version
    GNU bash, version 3.2.57(1)-release (x86_64-apple-darwin15)
    Copyright (C) 2007 Free Software Foundation, Inc.


No surprises there. You will find that the tone towards the GPL across the tech world changed with the introduction of GPLv3.


This might be the most exciting news I've heard in a long time. Being able to use Visual Studio and .NET for web development while using zsh and all the other Linux tools? Dreamland.


The only reason I can't really use Windows as a development OS is the inferior terminal emulators. As good as ConEmu is, it is still worse than Terminator etc. Unless I can run a native Linux terminal emulator, it doesn't make much of a difference to me. Also, the filesystem differences don't help.

I was also running into Haskell compilation problems that were fixed by running Ubuntu in a Vagrant environment but speed was slow. There isn't good NFS support on Windows either (there is some).


I love tiling window managers...and real package management... and free software.

Windows still has a ways to go. I think this might make some Windows stuff easier to deal with, but I still prefer jobs where I can run Linux natively on my workstation.


I was always grumpy about switching to OS X because I missed xmonad, but then I found amethyst and rebound the keys to be the same :)


I would call powershell anything but inferior.


PowerShell isn't a console or terminal emulator. It's a shell that uses conhost just like any other console application on Windows.


Powershell ISE is quite a good terminal emulator (even tho it wasn't intended as one), it's also extensible via addons and there are quite a few nifty ones like git integration and the likes.

This is the ISE in a default configuration (https://imgur.com/xz9Kfpt). On the left, just an open terminal; in the middle, a script which can be edited and executed at any time with F5; and on the right, all the PowerShell commands, which can be either immediately executed or inserted into your script with ease.

Unless you need tab browsing that much, which you can get via addons, the ISE is one of the best "terminals" out there imho.


I'm always put off by the ~five second startup time when I open PS to start learning it. Any tips for speeding it up?


On my i5-2400 with 8 GB of RAM and a 7200 RPM hard drive, it takes six seconds to start the first time and four seconds to start subsequent times.

On my i5-3550 with 16 GB of RAM and an SSD it takes a couple seconds to start the first time and less than a second for subsequent times.

Both machines are running Windows 10.

Right now, the machine with the spinning rust is loading a bunch of files with an I/O priority of "background" because it just got booted into Windows; that might slow it down a bit because of the seek times and I don't know if Windows is willing to starve background I/O for seconds at a time to speed up interactive requests (I doubt it).

Update: once all the background preloading is done, PowerShell restarts in three seconds on the spinning-rust machine.

Long story short, I think getting an SSD will be the thing that makes PowerShell start acceptably fast.
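
Incidentally, a rough way to see the cold-vs-warm cache effect for any shell from a Unix-ish prompt is to time a do-nothing startup (a minimal sketch using bash; the same idea applies to timing PowerShell with its own clock):

    $> time bash --noprofile --norc -i -c exit    # first run after boot: binaries come off the disk
    $> time bash --noprofile --norc -i -c exit    # run again: the warm start should be far faster on spinning rust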


We did a lot of work on PowerShell startup in this next release - I think you'll be happy.

Jeffrey Snover [MSFT]


Ancient ThinkPad dual-core with 512 MB RAM and a legacy 5400 RPM spinning rust disc running OpenBSD 5.9.

Left click in fvwm, select xterm, window appears in less than my blink response time.

Seriously: I think I might pop Win10 on an old Dell i5 that came with Win7 and play with this.


Many thanks for the comparison! I'm indeed with a 7200 RPM disk. I guess I should invest into some new hardware soonish :)


Speed was an issue on Windows 7 (PSv2), but they sped it up considerably in Windows 8 (PSv4, I think). Maybe installing the current PowerShell version helps; I doubt it's inherent in the OS.


I'm on Windows 10 and I agree it has improved a lot since Win7, but it's still not pleasant. I'm rocking a 7200rpm spinning disk, so as suggested by adiabatty, getting an SSD might help.


Can confirm, it starts immediately in 2012r2.


Depending on what you open, powershell.exe should open pretty much as fast as cmd.exe; the ISE can take a few seconds to load based on the addons you have and how many PS cmdlets you have registered on your system.

As people have mentioned, the biggest factor here is probably your hard drive, since you are loading maybe a couple hundred small files when you load the ISE.


PowerShell is great. But I agree it starts slowly.


Just upgraded to WMF 5.0 on Windows 7. Start time is much better now. 4 seconds vs 8 seconds on a 2nd gen i5.


Serious, non-snarky question:

What does this give you that you would not already have with cygwin? The latter installs .exe versions of the usual command line utils, and I'm almost certain ZSH and the others you speak of are included.

I do not understand the practical implications of this move by Canonical/MS other than PR - what's actually changing from a user/dev standpoint?


Cygwin is and always will be only an emulation layer, never the real deal. For most day-to-day things it works perfectly, but when you run into some corner case, most of the time you are out of luck.

My only real problem with Cygwin is that it lacks a command-line package manager. If they could adopt pacman for package management like MSYS2 does, I'd be a happy camper.

edit: To deploy Cygwin based applications you need to get a commercial license from RedHat (if it's not FOSS). Which could be a deal-breaker.


> My only real problem with Cygwin is, that it misses a command-line package manager. If they could adopt pacman for package management like MSYS2 does

There is babun (https://babun.github.io/). It is essentially a wrapper around cygwin and comes with a package manager.
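
If I remember correctly, the bundled package manager is invoked as pact; usage is roughly this (a sketch from memory, so double-check the babun docs):

    $> pact find zsh       # search the package index
    $> pact install zsh    # install a package into the babun/cygwin tree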


This is a good answer. I always feel limited in Cygwin, it doesn't feel quite right. And it takes quite a bit of tweaking to get working correctly. Case in point: Try getting gvim to work properly from Cygwin.


I've always used apt-cyg to install things, which works OK.


You realize cygwin's setup.exe package manager has a CLI, right? The issue with pacman in msys2 is that it's posix dependent, which fails badly at updating the core posix layer itself. Cygwin's setup.exe is a native Windows executable and doesn't have this self hosting problem.
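
For anyone who hasn't tried it, the unattended mode looks roughly like this (flag names from memory; treat it as a sketch, not a reference):

    # let Cygwin's own installer fetch and install packages non-interactively
    $> setup-x86_64.exe --quiet-mode --packages vim,git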


That doesn't solve the problem that if it is trying to update the Cygwin DLL, you need to shut down everything. And if there's an update to something like bash or coreutils, same thing (since Windows does not allow writing to executables that are running).


Sure. But self-hosting pacman makes it literally impossible to do correctly. Updating cygwin itself should be done by an outside-of-cygwin solution to invoke setup.exe, just write a little powershell provisioning script or something.


?

Arch is certainly capable of updating pacman via pacman, and it's been a while, but I'm pretty sure you can update apt/dpkg via the usual apt-get upgrade on Ubuntu.


Those aren't operating under the restrictions imposed by Windows.


apt-get install whatever from any ubuntu repository

Not sure about X11 apps, but whatever. Largely this makes running a special win32 build of redis for whatever dev you're doing unnecessary.

I'm currently running Windows on this laptop, but I have a VirtualBox instance running Lubuntu for any UNIX-specific dev. Ports and files are shared across Windows and Linux transparently, which means there's far less need for running and maintaining a separate developer VM.
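
Assuming the new environment really is a stock Ubuntu userland, the redis example above should reduce to the usual incantation (a sketch; whether long-running daemons behave under the first release is a separate question):

    $> sudo apt-get update
    $> sudo apt-get install redis-server
    $> redis-server --version    # same binary you'd run on a Linux box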


I would assume much better stability and integration. If this works as I expect, I will be able to do things like apt-get install, which is a huge improvement over Cygwin. Another benefit is that since it's Ubuntu, tools and projects will support it, vs. Cygwin, which is usually "we don't use it, so figure it out and then we'll post it here for all the other poor saps using Cygwin".


I think so. Cygwin is good enough for using UNIX tools on Windows, but maybe they will have better package support than Cygwin.


Different strokes.. that sounds like an absolute nightmare to me. .NET is not a good web development framework, and Visual Studio is totally overkill for web development.


I'd like to know what kind of web applications you've built and what tech stack you've used for them, for you to make such an uninformed statement.


I'm not really interested in posting my CV to HN. Suffice to say I've been the lead web developer at several companies and have been doing that for ~13 years now. I've used all the popular web development languages, and written everything from small applications to web sites with hundreds of thousands of users.

I personally think .NET is much worse than any of the more common web languages (even PHP or Perl) for the web. If I were writing a Windows application then I'd probably write it in .NET using Visual Studio, but not a web application.

As I said in my original comment "Different strokes.", you may like .NET. That's fine. It might be the right choice for you and the wrong one for me. I was more commenting that it was amazing to me that someone would think it was awesome because it sounds like the complete opposite to me.

I guess I should have asked what you find compelling about writing web applications in .NET.


I would like to hear arguments: what don't you like in particular? I'm not saying .NET is the best web dev platform, not at all, but I wouldn't say it's worse than most. It has its own set of pros/cons, like every other, but generally, to me it looks quite decent, despite the heavyweight VS/IIS, which is another story. Looking at MVC, REST, etc., it looks pretty much like any other modern dev stack :/


It has a reasonable MVC model; it mostly boils down to it being way overkill. Using it for web development, to me, is like using a 27-foot truck to get groceries. The beauty, to me, of even "large" web applications is that they can still be lightweight.


What is "overkill?" The framework? The language? The UI? The CLR?

I have issues with Microsoft's MVC (mostly that there is no official way of splitting it across several solutions and keeping working routing) but I've never found it overkill for enterprise-style webapp development.

We used MVC/Entity Framework. It works well as a RAD for the backend with full HTML/CSS/JS for the front end that we can get creative with. Reminds me a lot of Java development.


Visual Studio is overkill for web development IMHO (and again, different strokes. I know some people like to write PHP in Eclipse.)

The MVC model itself is not overkill, sorry that sentence was not clear. I should know better than make contentious comments on HN that are going to spawn a bunch of aggressive responses when I'm trying to start my day.


Making contentious comments is fine; the problem is you haven't backed them up with anything solid. The basis of your argument is that Visual Studio is overkill for writing web applications. That has nothing to do with ASP.NET and more to do with the desire for a simpler developer environment. This can be solved by using VSCode, or setting up Omnisharp for the various text editors out there.

You have not given any solid technical reason as to why ASP.NET is a bad framework. In my experiences, it's more or less as capable as Ruby on Rails, Clojure, Java, etc. You've stated it's overkill, meaning what exactly? Are you even aware of the changes being made to ASP.NET vNext? The dotnet cli tool? The only complaint you seem to have is that the tight coupling of ASP.NET to various Windows platforms is a little much for people who are used to Go or RoR.


Ok, so it sounds like you're objecting to Visual Studio, not to .net itself. Or not quite?


ASP.NET MVC Framework is the most popular web framework in dotnet. I don't think you know dotnet very well.


You still have not provided any specific reasons for "why" you believe Visual Studio/.NET is overkill for web development. I would like to actually know because I am curious.


That's true, MVC and WebAPI can do everything, like Rails: all the functionality you need and don't need is inside. Most .NET devs expect that, compared to Node devs, who would have everything split into small packages. One framework was designed in 2000, when that made sense, the other in 2010...

But you don't have to use MVC; there's Nancy or low-level OWIN. So why do people complain about MVC when there are other choices? Certainly not as many as on other platforms, but at least a few good ones exist! Why judge the whole platform because of one framework?

Similarly with EF or NHibernate: they are big and heavy and very slow if not used properly, but there's also Dapper, Massive, or Simple.Data.


I'm guessing its been a few years since you've touched web development in .NET? A lot has changed...


I think technical reason is much better than bringing up your CV.


Hey read the comment I was responding to. He asked for my CV, that's why I responded like that.

.NET is overkill for web application development, IMHO. But I tend to eschew large frameworks in general. YMMV.


Being stuck in "lead web developer" roles for 13 years probably rules you out of being an influencer though.


I'm not interested in being a manager, executive, or founder. I've had opportunities and passed. I like being a developer. It's not being stuck.

As far as being an "influencer"; do you see any links on my profile? Again, that's something other people find appealing, not me.


If you can't develop with it on all platforms it sucks. Not to mention the job market for .NET devs is pretty shit and I don't know a single person who actually enjoys it.


I like using F# for web development. Check out Websharper. It's like Elm or Purescript but comes with all the amazing tools MS has developed for non-web-dev F# for free.


Yes, I agree, F# and Websharper are a whole different breed than the ASP.NET, C# apps I used to create. F# is a lot of fun too for more than just Web dev.


ASP.NET is great. I've used it since it's existed (coming from what is now called "Classic ASP").

I think the HN intolerance towards Microsoft / zealousness for Apple is showing here. Certainly .NET isn't for everyone, but I don't think "is not a good web development framework" is justified. Check out http://nancyfx.org/ if you're looking for something more lightweight than the full ASP.NET / IIS stack.


NancyFX is pretty great! Nice and lightweight.


I think eventually, we'll see Windows transform into a Linux distro with a Windows UI.

I'm not trying to wrinkle anyone's shorts, but this just makes a lot of financial sense. Let the "community" do most of the OS development and only maintain the Windows UI. This allows them to focus more on services and Azure.


It would look like that's the direction they're moving in if this product were a Windows environment on Linux. Something like a Microsoft supported version of Wine.

As it is, it looks more like a Linux environment on Windows. Analogous to a Canonical-supported version of Cygwin.

I'd love to see Windows as Linux distro because I'd prefer to give full access to my hardware to Linux and only pull out a Windows environment when an application requires it. Desktop Linux users are in the minority though, so I expect there's a lot more demand for the reverse.


It could make sense if they want developers to start creating Linux focused apps, and then over time deprecate the win32 parts.


Ten years ago I would have said that's ridiculous thinking.

But today kernel software is practically commoditized by Linux. Competing feature-wise is a fool's errand - it's just too costly and slow to go it alone.

FreeBSD could be another choice also. Lots of industry support.


As I see it, Windows 10 is an abstraction layer on top of...well increasingly, on top of anything. The current iteration is a more robust abstraction over diverse hardware (where Windows for the Desktop has always found its strength) -- but now the stack is more unified from micro-controllers to just short of big iron.

Pushing operating systems under the abstraction is just the next step after decoupling Windows from hardware. In a sense that's been a theme for Windows since the development of .NET.

The value of Windows has been as an ecosystem, and it almost certainly will remain one. Running Windows is a tradeoff, and it comes with big advantages for some users.


>I think eventually, we'll see Windows transform into a Linux distro with a Windows UI.

As long as you consider OSX to be a Linux distro (lol) with an Apple UI, then sure.

But I doubt Microsoft ever gets any closer to unix-like systems than Apple is.


> But I doubt Microsoft ever gets any closer to unix-like systems than Apple is.

I'm confused by this comment. OS X is literally UNIX. In fact, I think it's the ONLY UNIX system available to consumers.


umm.. FreeBSD, NetBSD, OpenSolaris.

OSX is a thin layer of UNIX with a lot of non-UNIX like stuff. Aqua over X. Self-contained apps over package management (unless you want to count the app store).

I find it more like a broken borked *NIX system than anything.


Neither FreeBSD nor NetBSD have been submitted to the Open Group for certification, and are thus considered "Unix-like". When I say OSX is literally Unix, I mean it has passed certification and can legally use UNIX trademark.


You can get solaris for free from oracle.


I knew that, but I left it out because I don't consider it consumer-grade. It's not something you'd find on a PC you'd buy at Best Buy, Tiger Direct, Amazon, etc, nor is it something you'd give to a non technical user and expect them to use.


The cynic in me sees this from the opposite direction.

Microsoft has already gone out of its way to take control of the hardware and kernel (think Secure Boot on Intel and the _total_ control on ARM). They're now allowing you the privilege of running some POSIX userland applications (which have no real power) so people don't complain too much when they make it impossible to boot a custom kernel on newer hardware.

"What do you mean you can't boot linux? Don't be silly, you're already running ubuntu!"


Microsoft is probably going to make a lot of money from selling apps/games on Win 10. Why would they give that up and commoditize their OS and the API layer?


My guess: It'll be a lot like how Android works. The kernel is OSS, but "Windows(Google) Services" requires a proprietary license.

Android's license is "you need to put the Google Play Store and the Google app ecosystem on the phone". Windows' license might still be "pay us money".

You wouldn't pay for the kernel (because of GPL) but you would pay for branding and support (like RHEL) and you would pay for the ability to run the "Windows Application Compatibility Layer".


Severely doubt it.


Let me dream.


Why? There are many more Windows applications than Linux ones.


See previous episodes: NeXT with a Mac UI, VMS with a Windows UI despite relative dearth of NeXT and VMS apps.


Exciting stuff! This is (very?) similar to what we do with LX branded zones in SmartOS[1][2] -- and, as I've said before[3] -- we want as many other systems as possible to take this approach. That said, we also know how long a haul this is, and how detail-intensive it is. So I've got a bunch of questions. ;) First, is the source available? Second, how much of the One Hundred Language Quine Relay[4] runs successfully? (For us, running that sucker to completion was an important early milestone.) Assuming that runs, my next questions are all about common apps/programs that we knew to be especially thorny for one reason or another: Go, strace, tcpdump, systemd, etc. Anyway, it's great to see the SmartOS approach so broadly validated by Microsoft -- and I hope that they both finish the job and open source it all!

[1] http://www.slideshare.net/bcantrill/illumos-lx

[2] http://us-east.manta.joyent.com/patrick.mooney/public/talks/...

[3] https://www.youtube.com/watch?v=l6XQUciI-Sc#t=1h7m15s

[4] https://github.com/mame/quine-relay


The linked slides say:

"Problem for software unenlightened by ABI (golang)"

I know they're not your slides, but do you also think Go is unenlightened (which reads: clueless and unaware) or do you think perhaps it consciously rejected the common ABIs?


I've never gotten a straight answer as to why Go makes system calls directly instead of doing what every other program on the planet does and calling into the system libraries. Certainly, it made the ports to non-Linux systems absolutely brutal: systems are required to make themselves look like Linux for Go to function correctly. So yes, this is unenlightened -- or perhaps it's a conscious attempt to make Go unportable?


Hell has indeed frozen over, and that's good news! From the screenshots, that actually looks like a proper terminal too.

I wonder what will happen to Powershell now.


Powershell will probably be fine. Besides them both being a REPL and scripting language they're very different. I guess the simplest way to put it is in powershell you're passing around objects and in bash you're passing around strings. That's an oversimplification though. I'd start wondering about powershell's future when bash starts getting the ability to tap into .Net the way powershell does.
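
To make the contrast concrete: in bash everything crossing the pipe is flat text, so downstream tools have to re-parse it by position (an illustrative sketch; the PowerShell equivalent would pass structured objects with named properties instead):

    # "top 5 processes by CPU" -- works only as long as %CPU stays in column 3 of ps's text output
    $> ps aux | sort -rnk3 | head -5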


As exemplified by this announcement, Bash in isolation is not that valuable. It's the standard UNIX tools that most Bash scripts just assume are there that really round it out as a platform. Supporting the piping of objects instead of text? Not such a big deal. Replacing UNIX tools with object versions? I agree - very valuable, but now a massive undertaking involving redefining a lot of flags, etc.


Right, PowerShell is probably irreplaceable as a sysadmin tool. But for other tasks lumped into the "scripting" category, I'd much rather use bash.


PM on the PowerShell team here. First, just let me say, I'm a huge fan of Linux and can't be more excited about Bash coming to Windows.

As others have mentioned throughout this thread, PowerShell isn't going anywhere. We're investing considerably in the PowerShell ecosystem. PowerShell/WMF 5.0 just came out with a ton of new features[1], and we're not slowing down any time soon.

Because it's operating mostly in user mode today, Bash on Windows is much more suited to developer scenarios. I've already played with workflows where I'm running vim inside of Bash on Windows to edit PowerShell scripts that I'm executing in a separate PowerShell prompt. In fact, I can plug along fine in a PowerShell window, run a quick 'bash -c 'vim /mnt/c/foo.ps1'', make a few edits, and be right back inside my existing PS prompt. This really is just another (really freaking awesome) tool in your toolbox.
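
In other words, something like this round trip, assuming the C: drive really is surfaced at /mnt/c as in the example above:

    PS> bash -c "vim /mnt/c/foo.ps1"    # hop into the Linux userland just long enough to edit
    PS> C:\foo.ps1                      # back at the same PowerShell prompt, run the edited script (execution policy permitting)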

[1] http://msdn.microsoft.com/en-us/powershell/wmf/releaseNotes


> Bash coming to Windows

This is part of the long-standing problem for people: this loopy re-presentation of what happens that completely ignores the past and even the present. A lot of us have been using bash and other shells, and indeed vim and other things, on Windows for years. They aren't "coming to Windows". They've already been there for a long time.

We've been able to invoke "vim foo.ps1" to edit our files, and do so without any necessity for an intermediary (and entirely supernumerary) "bash -c" too. I did so myself, only yesterday. This is not the news.

A new "Linux" subsystem is coming to Windows NT that allows one to spawn and to run unaltered Linux binaries directly. Explaining this as "bash is coming to Windows" is to give a hugely dumbed-down explanation, one that is so markedly wrong that it (mis-)leads to the very same mistaken assumptions about the imminent death of PowerShell and so forth that you are now having to counter in several places. (I know. It's not your own explanation. Nonetheless, one should not adopt the error from someone else, especially if one then has to firefight the world leaping to the wrong conclusions based upon it. That's just making a rod for one's own back.)


Let's hope death by obsolescence. Being forced to learn a proprietary not-invented-here scripting language with little value-add over Bash is anathema to any developer.


There's quite a few things that you can do with powershell on a windows os that you can't do with bash since the APIs aren't there.


It's not a conceptual stretch to have Bash adapters for .NET objects; PowerShell is unnecessary.


>I wonder what will happen to Powershell now

For Windows shops (like my current job), it will most definitely be sticking around


has PS been adopted that widely? the syntax always struck me as weird


For automated building/deploying of .NET projects on Windows you have very little choice. PowerShell has its own libraries to invoke MSBuild, and its integration with .NET saves a ton of effort on more complex tasks (like parallelization, background jobs, and service management). Is it irreplaceable? Maybe not, but it's clearly the most powerful tool for the job.


PS is actually pretty awesome


Was hoping the comments here would be talking about PowerShell. I try to use it whenever I can, but if bash performs the same, the community may leverage it more for shared scripts/etcetera.


Clearly, the end times are upon us


Linux API integration as native Powershell commands, push Powershell as 'unified Bash' for Windows/Linux, try to gain marketshare.


Security will still be Windows "security", though. I know my way around and haven't had a Windows PC infected in years, but while using it I certainly don't have the relative peace of mind that I have on Linux.


Sounds like a regression on Canonical's bug #1. The resolution criterion was "A majority of the PCs for sale should include only free software." This article does indeed appear to showcase active work toward a regression on bug #1.

https://bugs.launchpad.net/ubuntu/+bug/1


In general I certainly agree, but there are also advantages to this for free software.

For example, this will probably help expose and fix lots of bugs in Microsoft's implementation of Linux interfaces, which will be a benefit to free software developers and vendors.

Also, general users will get more exposure to free software programs, and may be more open to buying a legit Ubuntu or other Linux computer in the future. For example, I was able to switch my wife over to using Linux Mint without any issue, which was undoubtedly made easier by the fact that she was already using LibreOffice, Thunderbird, and Firefox on her Windows PC.

It seems like people are able to pretty easily run free software programs on Mac OS X, and all things being equal I think that has been a great benefit to free software, and a lot of web developers et al seem to be willing to make their programs free-software-friendly and release them under free software licenses. I would love to see a similar trend with Windows, even if I personally think that proprietary operating systems are extremely harmful and need to go the way of the horse and buggy.


Mark Shuttleworth marked that bug as resolved a couple of years ago, which I think was a mistake. I recently reproduced this bug in that I went to a local PC store and attempted to buy a machine without any proprietary software. They said that they didn't have any such machines available (never mind a majority of their machines, as suggested by the original bug text).


He also uses the popularity of Android as part of his reason for closing it. Sure, Android itself is open source, but you still have to go out of your way to find Android devices that are purely FOSS.


> but you still have to go out of your way to find Android devices that are purely FOSS

This is pedantry, and certainly there's a sliding scale of openness for devices. But unless I'm very mistaken, there are no devices available that even approach 'purely FOSS'. What would such an 'Android' phone even be? No google play services, no google play store, crippled and buggy open GPU drivers, and still a proprietary baseband. Not that I'm happy with this situation it just seems impractical.


> but you still have to go out of your way to find Android devices that are purely FOSS.

Can you elaborate?

I'm not aware of any physical Android devices which are able to boot and function as an Android device (e.g. with a hardware accelerated GUI, can make phone calls over GSM/CDMA) that don't require proprietary vendor blobs.

I am excluding the Android emulator because I don't think it qualifies as a "device"


Money talks. Canonical attempted to make bank from their partnership with Amazon, yet people over the web went insane and fought tooth and nail to shift the weight to derivatives, like Xubuntu and Mint.

Naturally, this is one of their alternative methods.


To clarify, it sounds like what Microsoft has added to Windows 10 is a Linux ABI.

This has been done before with other x86 OSes: FreeBSD has had 32-bit ABI compatibility for at least a decade (https://www.freebsd.org/doc/handbook/linuxemu.html), and the "lx branded zone" for Solaris also has 64-bit support (https://docs.oracle.com/cd/E19455-01/817-1592/gchhy/index.ht...).
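
For comparison, turning the FreeBSD layer on is roughly the following on a recent release (from memory; the handbook link above is the authoritative reference):

    $> kldload linux64               # load the 64-bit Linux syscall translation module (linux.ko for 32-bit)
    $> sysrc linux_enable="YES"      # keep it enabled across reboots
    $> pkg install linux_base-c6     # a CentOS-derived userland for the foreign binaries to link against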

It looks like Ubuntu was the first to package some Linux binaries for Windows. I guess that's useful?


Linux ABI support was introduced in FreeBSD 2.1 released in 1995, 20 years ago.


I know I've run bash on Windows before -- but I don't remember if it was with or without Cygwin. I assume this announcement is running bash natively, without Cygwin or anything VM-like.

Did they have to contribute patches to bash, or just install it by default? I don't see anything on the bash mailing list, but the development is not particularly open.



OK interesting -- native Ubuntu Linux binaries on Windows. FreeBSD and Illumos also emulate Linux, i.e. translate syscalls in the kernel. I wonder if Microsoft grabbed some of that code since it's open source :)

This just shows how standards are made... implement first then think about it later :) I don't think the Linux syscall interface is the model of clarity, but that's what we have.

EDIT: This answers my original question... apparently they didn't patch bash -- they patched their own kernel to run the bash binary, and all Linux binaries! It was done at a binary level rather than source level.
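
An easy way to convince yourself that nothing was recompiled is to inspect the binary from inside the environment (a sketch; exact output and version strings will vary):

    $> file /bin/bash
    /bin/bash: ELF 64-bit LSB executable, x86-64, ... dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2 ...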


Would anyone be able to give insight into why this would be useful? I'm curious, sure, but I'm at a loss to think of a use case.


My guess for the marketing target is developers who use Macs because their tools of choice are native to POSIX, with Windows API implementations as second-class citizens at best.

Maybe things have improved since, but at least a few years ago, it was always a crapshoot to try to get some new open source tool set up on Windows/Visual Studio, vs. batting close to 1.000 on Mac with configure && make && make install.

Another way to put this is that the world Terminal.app gives you access to is a huge selling point for developers, and this is part of Microsoft's attempt to provide something as useful.


They haven't, and this is exactly it, at least for me as a web developer.


Perhaps avoid dual-booting or VMs in environments where you need both Linux and Windows.

This sounds either a bit like CoLinux, or like the POSIX subsystem revived. Remember: Windows has kernel support for different userspace APIs, and the well-known Windows API is just that: A user-mode subsystem running atop the kernel (there have been OS/2 and POSIX subsystems before).


As well as what others have said, it might long-term give an easier path for developing things that work across platforms too.

Lots of companies spend a lot of effort to run code on multiple platforms (SQL Server recently announced Linux support; .NET core has supported runtimes on Linux too and tons of OS languages have runtimes for multiple platforms). It would be great for both devs and end-users if the number of things that are different between platforms was reduced.


Maybe to be able to run linux programs on a windows machine? Could be good for developers targeting the linux platform, especially with the now ported .NET platform, MSSQL and so on.


Docker containers would be an excellent use case.


I can't +1 this one enough. I have some developers on my team who insist on using Docker on a Windows setup, and it is painful to use with VirtualBox.

OSX is better because it doesn't feel too different from Linux (aside from setting docker machine ENV variables). Still virtualized so you take a performance hit.


Docker recently announced a beta where they are using native systems in both Windows and OS X though. So I doubt this is the driving force for that.


The Docker beta that was announced is using Windows 10 builtin virtualization APIs. In theory when utilizing kernel virtualization features there will be less overhead (closer to the metal), but still overhead.

I wonder how they will make Ubuntu happen on Windows. Reading some of the comments, some speculate a subsystem, while others suggest an interoperable interface.

Edit: reading bitcrazed's comments it looks like it will be implemented a la WINE. No need to recompile binaries made for Linux x86; you'll be able to run apt packages from Ubuntu out of the box.


It's more complicated than that:

The beta announced a few days back by Docker uses HyperV to boot a Linux kernel to run Linux Docker.

The preview announced around a year ago by Microsoft and Docker is a native Windows implementation of Docker, running on the next Windows OS.

[later]

But...

This new layer should let you run most Linux containers straight on top of the next Windows.

Interesting...


Shell and command line tools: Linux users can be more productive on Windows now. The command line interface is very comfortable for doing a lot of things. If you need to do batch work (rename a lot of files, convert some images, add metadata to your mp3 collection), you can now do it in the console. Before this, the alternative on Windows was to use a sub-par shell, or install several graphical applications and spend your evening clicking here and there.
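
For instance, the kind of one-off batch job described above is a few lines of shell once the usual tools are installed (a sketch; it assumes imagemagick and id3v2 have been pulled in with apt-get, and the tag text is made up):

    $> for f in *.JPG; do mv -- "$f" "${f%.JPG}.jpg"; done   # normalize the extensions
    $> mogrify -resize 1024x *.jpg                           # batch-shrink the images in place
    $> id3v2 --album "Some Album" *.mp3                      # stamp the same album tag on a folder of mp3s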

Applications: Now you can install and configure applications like Apache, PostgreSQL, etc. on Windows in the same way you do on Linux/other Unix platforms.

Strategically, this is a big win for Microsoft. Now they can go to their clients that are moving or thinking about moving to Linux and tell them "There's no need to migrate, just install your apps in Windows."


You would not be running Ubuntu in VirtualBox but "natively" on Windows, allowing you to have a Unix layer for your interpreters, runtimes, whatever.

You could still develop stuff for a server OS while having the ability to play games, without having to reboot or use Wine inside Linux.


My office IT will only allow and manage Windows PCs on the directory server. So I need to run VirtualBox everywhere to develop in Linux. If I could cut that step out, that wouldn't be a bad thing.


You'd almost certainly be in for a nasty shock.

IT departments not worrying much about what you do as the superuser inside a virtual machine that is running only with your user credentials is one thing. But tell them that you're now going to be installing and running random Ubuntu software, not in a virtual machine but natively within Windows, and they will prick up their ears and start to take notice. Even the ones who are alright with what's being installed will want to think about things like control over what packages can be installed and locally-hosted repositories. "So, tell me how I set group policy for your apt-get installer?"

And if that is not a worry, let me relate some personal experience of using the Windows NT POSIX subsystem. Anti-virus programs, particularly the ones with the whizz-o features of "let's check what 'the crowd' said about this program" or "let's run this program for a little bit in my controlled execution environment to see whether it does malware-type things", don't like this a lot. I had to go through the unblocking of "/bin/foo is a rare program" so often, for everything from "ls" to "ftp", that it was in danger of becoming an automatic reflex.

Goodness knows what the likes of DeepGuard will make of programs that use a wholly new set of system call entrypoints into the kernel. (-:


Very good point. Looks like I'm sticking with Virtualbox. =)


I'm not sure I fall into the "usual" scenario, but my primary OS has been Linux (Slackware) for years. I just miss some AAA games on my Windows machine.


Instead of supporting a hybrid/dual boot desktop you support Windows, and run Linux applications on top.


First stage of EEE strategy.


Very interesting and great to hear, though I would prefer zsh or fish. Didn't they fix their command window (terminal) too?

Recently, MS has been making all the right moves technically, but they've also doubled down on the spying, pushing W10 without consent, and they're still in bed with the NSA, so they are still out of consideration for me.

A shame really. :(


From what I'm reading it's the entirety of Ubuntu, including apt. Use bash once to apt-get install zsh and you're done.
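
i.e. something along these lines, assuming chsh is wired up inside the new environment (if it isn't, you can always just exec zsh from your .bashrc):

    $> sudo apt-get install zsh
    $> chsh -s /bin/zsh        # make it the login shell, if that's supported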


Yep, they changed the title on me, and now that I've read several of the articles here, I see it is a whole/minimal ubuntu userland.



This has a faint scent of an early April fool joke.


It does, but then find me a Linux user who doesn't dual-boot into Windoze for desktop apps. This way, you save one reboot. I wonder, however, how stable this will be...


At home, I have been using Ubuntu since 2009. The first year, I dual-booted a lot. Progressively, I stopped. The last time I booted into Windows was December 2014, to check if it was still working. If a desktop app does not work with Wine, I do not use it.

If you can have all the comfort of Linux (a huge catalog of software that is easy to uninstall, network transparency, ...) with the assurance that your hardware will be fully supported by the OS, it would deserve a try.


I had a similar progression when I switched to Gentoo in 2007. I would boot into Windows maybe every 3 months, and it was always a hassle with the security updates. I wiped out the partition after a couple of years of checking into Windows maybe once every 6 months.


As a Linux user you'd want to run Windows as your main OS?

I'd rather use a VM (or WINE).


I would. Professionally I command a high consulting rate within the Microsoft ecosystem. All of my professional work is on Windows. However, I'd love to run some Linux libraries on Windows. For example, TensorFlow would be very useful. Sure, I can fire up a VM, but then I lose GPU support, which is a big deal.


That use-case is not what I think of when someone says "Linux user" - to me, that's a Windows user who wants to run some Linux software.


I agree. I use a VM for Windows as well. It's probably safer to keep Windows sandboxed away in a VM. I know this doesn't work well for everyone, but any app I use in Windows is not that resource-intensive. I haven't tried WINE recently, but due to headaches years ago, I tend to avoid it and just fire up the VM.


Me, for everything but "work work"[0], for long stretches of time, including moonshine consulting, photos, and casual gaming (Counter-Strike):

accounting? web based

service reports for moonshine work? Office 365 online or Open-/Libre-office

gaming? Steam has worked nicely on my not too beefy desktop for years (I only play CS:GO though)

Today I'm back on Windows 10, mostly, since Windows 10 is less annoying and my current employer doesn't care if I have a personal account on my nice new laptop.

[0]: Work for NotSoBigCo between 8-16


I've been running Linux full-time since 2013, but then again, I don't play games, so everything that I use actually is on Linux.


I can't think of a single thing I'd need to boot into Windows for anymore.


Visual Studio is better than any IDE in existence. With ViEmu it becomes even better.


Unless you don't have 30 spare gigs of storage space...


If by 30 you mean 20 gigs then, yes. Also, 20 gigs on a very decent 256GB SSD costs around $6.40. I don't know how much your hourly wage is, but the time Visual Studio saves me compared to other development platforms makes up for that money pretty quickly.


I was just being cheeky, honestly, but that price doesn't scale linearly, at least not on a laptop. A 512gb SSD might have a higher cost-per-GB than a 256GB SSD.

I don't run Windows and consequently haven't used VS in any kind of intimate detail; I'm sure it's great if you like dealing with IDEs. I feel more productive with Vim, tmux, GHCi, and GraspJS for my web development.


> If by 30 you mean 20 gigs then, yes.

I'm pretty sure VS doesn't require anywhere near that amount, and you are talking about Windows symbols.


That might be true; I had a slightly tainted image of VS when I had to use it three years ago, and after everything I needed was installed, I only had like a gig left (this was on a weak, underpowered netbook, admittedly).

In retrospect it's not entirely VS's fault, though; I just found it amusing how quickly it ate through my storage when Vim only takes like 90 megs.


Games.


Fortunately for me, the only game I really care about (Civ5) is available on Linux thanks to SteamOS. It certainly seems as though I am in the tiniest of minorities, though.


For me it's TurboTax, which does not have a Linux version.


For the past 5-6 years I've used https://taxact.com/ and haven't needed Windows.


My house has a pantload of computers, none of which have Windows installed -- it's actually been this way for years. I have an ubuntu laptop, my wife uses OS X. My kids mostly use iOS/android devices and occasionally another ubuntu laptop or chromebook.


I am similar to reacweb. I dual-booted from 2008 through about 2011, then switched to using Linux as my only bootable OS and Windows in a VM occasionally.

I've noticed that since about 2013 I've booted into the VM less and less often. The most recent time was after maybe 9 months without using Windows. I wanted to check how something related to batch files worked, purely out of curiosity (i.e., unrelated to professional work). There were so many updates queued up that I almost said "screw it" to the whole thing, reasoning that asking a friend to check would be easier/faster than the wait was worth.


> find me a Linux user who doesn't dual-boot into Windoze for desktop apps

I've been using Linux on the desktop for 19 years, and I've never once dual-booted into Windows on any of my personal machines.


Me.

Wine, and for the troublesome apps: VM.


Hi.


I am inordinately happy about this. The article was a little sparse on details, though... is there a link where we can learn more?



Me too - while I love Linux (inside a Putty terminal), I also love[1] Windows as a desktop.

[1] Meaning being very familiar and having fewer disappointments...



I am sick and tired of Linux hardware support. Looking at what I run on a Linux GUI, there's nothing which actually is Linux specific (right now I have Skype, Slack, Konversation, shutter, phpstorm, geany, Chrome and Firefox running). If I am able to run the Linux LAMP stack then I will be happy to ditch all the problems with battery life, video drivers and bluetooth and just use Windows. This stuff here is really big.


This might actually stop the exodus to OSX I've seen in the developer community.

And I'm kind of surprised Microsoft did this since they seemed to be pushing for Powershell before the Windows 10 release.


We'll be continuing to invest heavily in PowerShell going forward. Bash on Windows is awesome right now for developer scenarios, especially for folks coming from an OSX/Linux workflow.



It's interesting to see Microsoft move their OS more and more in sync with Linux/OSX. Maybe they should invest in a Linux kernel based OS and build a compatibility wrapper on top of that instead?


Drivers. There are a lot of device drivers written for the Windows kernel that would need to be updated.


Why? The Windows kernel has many advantages over Linux.


I'm genuinely curious. Can you name a few?


It has a very special blue graphics mode built in.


What about file permissions? If I can develop on a native IDE running on Windows and share files with a Linux VM (or anything running Docker) without screwing up file permissions, then I'm on.


It seems like it has its own file system, maybe a virtual one. See the screenshots.


Glad to see this announcement. I hope this doesn't dampen the emphasis on PowerShell, which is a better Bash for Windows.


I am seeing this narrative on HN and around the web, which is ridiculous. You have nothing to worry about with PowerShell. Microsoft has shown they are dead serious about Windows 10, and PowerShell is deeply integrated with the .NET platform. So I am pretty sure, and I can predict, that nothing will happen to PowerShell, because that would be against the company's interest.

They are trying to introduce bash as a native shell, as another option. That's all.


right, which means more and more people won't bother learning PowerShell and just use grep and the like.

It is still somehow a secret that PowerShell even exists. I can't count the number of Windows users I know that still open cmd.exe, or install cygwin so they can grep files.

I'm just saying that dealing with text is inferior to .NET objects on windows, and the PowerShell pipeline is much more powerful. However, I don't know how many people will end up learning this cause, "hey I can just use bash!"


Good joke!


Amazing. Can't tell you how many times we've asked Microsoft for a better shell like linux. Or how many times we've said (half-jokingly) that Microsoft should just give up and use a more unix-y file system and swap out the kernel for linux.

This is all not to say that Microsoft tech in these low-level areas doesn't have advantages over Linux, or is bad, but it'd be nice to have it at a low level.


> Microsoft should just give up and use a more unix-y file system and swap out the kernel for linux.

You do realize that that's the exact opposite of what is happening here, ne? The Windows NT file system is used, and the Linux kernel is being swapped out for the Windows NT one.

"At a low level" NTFS actually is "Unix-y", of course. It had to be in order to support the POSIX subsystems. Case sensitivity, hard links, symbolic links, and a wide degree of freedom for filename characters are all there, at a low level.


Yes I do, but the ideas/conversations have been happening, on and off, for many years before this.

Especially when they weren't doing so hot and the idea gets floated around (obviously without much serious thought, just something we say) "oh man, what if they just gave up on that part and used linux to make it all work" -- Then everyone goes "that'd be cool, but then all those apps wouldn't work..."

Hence they chose this route, which required some pretty fancy research on their part to implement. Which is even cooler, imho. Probably the best scenario for Windows/MS overall in the end.


Oh, another subsystem. Yay, good for Microsoft. The problem with the subsystem approach is that it's hard to get programs running in the subsystem to interact with the desktop world, which is win32. What I like about Cygwin is that the programs that run under it are win32 programs: they can equally well use CreateWindow and ppoll. Cygwin programs understand NT permissions.

It's always been possible to run Linux programs under Windows: just run a VM. What Microsoft has done here makes it less painful to run Linux programs, sure, but these programs still exist in their own little world. Cygwin programs, on the other hand, are Windows programs. To me, that makes them much more useful.

Now, maybe I'm wrong. Maybe the new Linux subsystem is more tightly integrated with the rest of the system than I'm guessing. But based on the available documentation, it looks a lot more like SFU or Interix than it does Cygwin, and that's a shame, because if I'm right, Microsoft misunderstood the whole point of Cygwin. Again.


> Cygwin programs understand NT permissions.

Windows NT POSIX subsystem programs understand NT permissions, too. There's quite a lot about it in the Interix doco, explaining how ACEs are mapped and so forth. This is not a problem with the subsystem approach, demonstrably.


From my perspective, this Linux subsystem is very useful for running server components like Redis or nginx, which don't have official Windows builds. So you can develop server apps that will run on Linux in prod, but develop them on Windows.


Unless I can fire up windbg and point it at these binaries (which would involve teaching dbghelp to read DWARF, and be useful in its own right), I still don't see much of a point. I can always just use a lightweight VM, and paravirtualized cross-VM filesystem access already exists.


You're rather presuming that one would use windbg. That's an unwarranted assumption, given that the hypothetical developer here who is developing server apps using redis and nginx "with Windows, for Linux prod" is likely to think of gdb as the debugger.

Of course, whether gdb would work is something that we haven't yet been shown.


Project Janus reloaded? AFAICS it's the same approach. Some shim layer that translates between Linux kernel API and native Windows API. http://www.cnet.com/news/suns-solaris-10-to-run-linux-apps-t...!


It actually works on SmartOS (an open Solaris derivative): http://www.slideshare.net/bcantrill/illumos-lx


Maybe the NT kernel could gain a linux-syscall-compatible layer, instead of adding compatibility in userspace. That would allow for compatible (same-arch) binaries!

(EDIT: this would also require ELF loaders and all kinds of other good stuff, but still a possibility IMO)


Guess what, there actually is an ELF loader in the compatibility layer: https://pbs.twimg.com/media/CcTwejAUEAAq_jR.jpg


One (more) thing that is conspicuously glossed over is non-ELF executables. We've seen Win32 programs invoking ELF64 binaries demonstrated. And of course the Windows NT Linux subsystem already "knows" that it's a Linux subsystem process when a program comes to exec() a script, and doesn't have to worry about checking binary formats over fork() because the program image doesn't change. But missing so far has been Win32 programs invoking Linux executable scripts that have #! lines.
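
Concretely, the untested case is a Win32 program launching a file like this, where some loader has to notice the #! line and hand it to /bin/sh inside the subsystem (trivial example):

    #!/bin/sh
    # hello.sh -- not an ELF binary, just a text file whose first line names its interpreter
    echo "hello from a Linux script"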


> Maybe the NT kernel could gain a linux-syscall-compatible layer [...]. That would allow for compatible (same-arch) binaries!

What makes you think "maybe"? This is (reportedly) exactly what's happening. What did you infer was happening from the headlined article?


So nice to be remembered! Reading some of this MS stuff has given me serious deja vu... (Source: I worked on Project Janus, a long time ago in a galaxy far far away)


This looks like a long-term plan to kill Linux. The only way to kill Linux is from the inside out.


Embrace. Extend. Extinguish.


Once upon a time Steve Ballmer said that GPL/Linux was a cancer; it's odd to see his company spreading it =)


Hopefully I'm just being paranoid, but as a full time Linux user Microsoft's friendliness makes me nervous. I think I liked it better when we were bitter enemies. Remember embrace, extend, extinguish...


If you can't stop it...


If you can't beat 'em, join 'em.


Hah, nice little Emacs jab there

"Of course, I have no idea how to CLOSE emacs, so I'll close the window. ;)"


I think this will be met VERY VERY badly by readers of this comment, but this change, as far as I understand it, is Windows telling Canonical to help them take their customers. I live in Egypt, and other than containers, the CLI, and running it on web servers, there is practically no way for anyone to even consider using Linux. I don't think this will end up well for Linux. I think Windows' attempts will take the web server business from Linux in 5 years or something, which is probably 90% of it.


Microsoft has their own cloud service, Azure.

And Ubuntu's biggest "market" these days is not desktop, but as a container base.

So this is MS getting cozy with Canonical to offer a development environment for Ubuntu based containers destined for Azure.


A lot has been written, in this and the other major Hacker News discussion, about Microsoft. Very little has been written about Canonical. Another ZDNet news piece (http://www.zdnet.com/article/ubuntu-not-linux-on-windows-how...) quotes in part a Mark Shuttleworth press statement that is on the Ubuntu WWW site in full (http://partners.ubuntu.com/microsoft).

Another thing that no-one has mentioned at all is how this pairs up with UbuntuBSD.

* https://news.ycombinator.com/item?id=11326457

Michael Hall of Canonical is quoted elsewhere (http://www.cio.com/article/3046588/open-source-tools/ubuntub...) saying that

> I think it's a cool project and I'm looking forward to seeing how far they get with it. It would certainly be an interesting addition to our already varied list of official flavors, if they can get there.

If one has Ubuntu binaries, one can now run them directly on top of 3 operating system kernels:

* On the FreeBSD kernel, with UbuntuBSD.

* On the Linux kernel, with Ubuntu Linux.

* On the Windows NT kernel, with this new Windows NT Linux subsystem.

So whilst RedHat is busy pushing systemd, and the systemd people are busy pushing a convergence of all Linux distributions into systemd operating systems that do a whole lot of things in the same single way, Ubuntu is apparently taking on Debian's "universal operating system" mantle and extending it to places where even Debian is not.

Everyone is focussing on Microsoft. It's important to remember the "and Canonical".


And going by Canonical's marketing only a few days later, its ubiquity is what they are pushing.

* https://news.ycombinator.com/item?id=11464703


I think everyone is focusing on Microsoft because MS is the big dog in all this.

And their history is one of appearing to embrace something, and then introducing less and less subtle differences once they have the majority share. Also known as Embrace, Extend, Extinguish.

So it may well be that Canonical have a short-term win here, but that in the longer term MS will sideline Canonical once the major share of developers has adopted "MS Ubuntu".


I think this is great. I live in Emacs and a bit of bash. Emacs on Windows works, but does not feel right. Everything is a little wonky (the path slash thing, the Unix find and aspell tools, etc.). If this makes Emacs feel like it does on native Linux or Mac, I will seriously consider buying a Windows laptop. Well done, new guy in charge of Microsoft!


It's true, I always have to install some bundle that includes R and LaTeX along with Emacs, otherwise it's a nightmare installing all the different programs individually and making them work together. But still for me the main problem of Emacs on Windows is that the fonts look blurry and horrible.


I'd be more excited if they added support for NFS to every edition of Windows and fixed some quirks that it has.

Bash alone isn't that useful; you'd need other stuff, and you can already get that from other sources. There's also a distinction between a terminal program and a shell. Bash is a shell. iTerm2 is a terminal program. Cmd.exe is both?


I know I'm being picky, but there's NO Linux here. Linux is the kernel, and the kernel is THE piece they're not including.

So rather than repeating "Linux on windows", this is "Ubuntu's userland on windows", or "GNU on windows", or any other variation, but NOT "Linux".


In fairness: the Windows NT OS/2 subsystem doesn't have the OS/2 kernel, the "NT Virtual DOS Machine" doesn't have a vanilla DOS kernel, and there's no Unix kernel in Windows Services for Unix. So they're all named after the thing that isn't really there, and that the Windows NT kernel substitutes for.

Furthermore, only some of the (free) software bundled up as Ubuntu is "GNU". Being copylefted doesn't by itself make something part of the GNU Project.

* https://www.gnu.org/software/software.html#allgnupkgs

The annoying thing was the Microsoft video where twice Rich Turner of Microsoft stopped Russ Alexander to clarify what was happening and gave incorrect clarifications of how things were "running on Linux ... on Windows". They patently are not "running on Linux".

Microsoft has been quite happy to say over all these years that OS/2 1.x programs were "running on Windows" and DOS programs were "running on Windows" and Win16 programs were "running on Windows" and even Win32 programs were "running on Windows", M. Turner. This is just plain "Linux programs running on Windows". There is no need to make it confusing when it actually isn't. (-:


Cool! The *nix/Windows ecosystems coming closer can only be a good thing. I've hesitated in getting a Surface Book since it'd always need a good network connection to support remote development, but this has the potential to change the game.

I had always hoped this would happen when I was younger, and now it's finally here.


I was hoping for a fully supported Ubuntu running on MS Surface Pro.


This is basically a dream come true for me if this is executed well.


Agreed. I currently have an Ubuntu guest VM on a Windows host and use both simultaneously for day-to-day work.


I do the same thing! Even now, WiFi has poor support on Linux with many computers, especially newer business tablets, so VMs or remote desktops are the only options. This will mean I can get full performance.


Yep, running a Surface Book with this setup and it's fantastic.


That will not make me use Windows on my computers, but it will make working at places that use Windows sooo much better.


FYI, http://aka.ms/uowterms points to Canonical's site.

Pretty excited about this, given that I've been using Cygwin for ages and always found it a pain to hit those edge cases where it didn't work.


I'm excited for native git on Windows. The current Git on Windows versions are damn slow.


This is awesome. My guess is that they will also integrate building Linux applications with Visual Studio. See https://blogs.msdn.microsoft.com/vcblog/2015/11/18/announcin...

for an example of how to debug using gdb in Visual Studio. Having an integrated Linux environment would make this support seamless. Then Visual Studio becomes the cross-platform hub for building code for Windows, Linux, iOS, and Android.


Interesting. As a Rails dev who develops mainly in vim, this was the main reason I switched from a ThinkPad to an MBP.

I will reconsider a Windows laptop next time, if build quality and battery life are comparable.


Bash coming to Windows has nudged me to strongly consider going back as well. But finding a laptop with build quality comparable to a MacBook is what I'm finding troublesome.


Hmm? Pretty much everything in the MacBook pricing bracket is comparable (high-end ASUS Zenbooks, Dell XPSes, high-end ThinkPads, etc.).


The Windows version of gVim is much worse than MacVim, if that's a thing that matters to you.


This was possible a few years ago on x86 Win XP.

http://www.colinux.org/?section=home

I guess it's possible to do something like this again.


I'll be very curious to see the source of this native Windows port.


I don't think bash itself has been ported; it's simply running under a Linux subsystem. I imagine it's similar to the discontinued POSIX[1] and UNIX[2] subsystems. In this case, bash probably believes that it's running on Linux, not that it's been ported over to compile on Windows.

[1] https://www.wikiwand.com/en/Microsoft_POSIX_subsystem

[2] https://www.wikiwand.com/en/Windows_Services_for_UNIX
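
If so, a quick sanity check from inside the shell would presumably be something along these lines (assuming the subsystem emulates enough of uname and /proc for them to answer at all):

uname -s            # should report "Linux" if the kernel ABI is being impersonated
cat /proc/version   # whatever kernel version string the subsystem chooses to present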


Another item on the "Apocalypse is coming" list gets checked off.

On a more serious note - Unity is something I deeply hate, and a lot will depend on the quality of the implementation.

It will be good to be able to develop easily for Linux too.


Sounds like this is an experiment to me - the first step towards a hybrid OS. Do they want to try and eat into the OSX/development stack in an effort to make windows relevant to that market? Or is there a bigger picture here where they eventually want to blur the lines between linux and windows? Perhaps they will "pull an android" on the patent front but have a free alternative that you can use. I guess I've been around too long and seen too much.


It's not really an experiment. Or if it is, it was one that was performed in 1988. Windows NT has from the start been designed to have this, multiple operating system "personalities" layered as "subsystems" over a single kernel. As someone who has been around long and seen much, you should recall the OS/2 1.x subsystem aimed at pulling in the OS/2 1.x market, the "NT Virtual DOS Machines" aimed at pulling in the DOS market, the (original) POSIX subsystem that people characterized as little more than marketing checkboxery, and so forth.

Gradually all of the subsystems, and processor architectures, fell by the wayside. The excitement for some is less that this is some fundamental architectural change in Windows NT. It isn't. It's that this is the first new thing in (desktop/laptop/server) Windows NT for a while that isn't "The customer can have any subsystem and processor architecture that he wants, as long as it is WinNN and Intel/AMD.".

It would be good to see the Interix subsystem come back, too. And maybe a second processor architecture, as well. (-:


rm -rf /* now on Windows


Not sure what the best URL is for this story, so we just picked the oldest active thread. We can change the URL if there's a significantly better one.


Not surprised. Seems that as of late the biggest use of Ubuntu is as a container base.

If there is one thing Microsoft has always delivered, it is strong developer tools. And this is another case: developers now don't have to fiddle with a VM to get an Ubuntu test environment going before stuffing things into a container.

Every embrace of Linux that Microsoft has made of late can be traced back to its Azure cloud service.


This will make me drop my MBP for the Surface Book.


Will this work on Windows Phone?

It would seem like this compatibility layer goes some of the way to running Android apps on WP10.

But it might be handy to have a GNU/Linux distro in your pocket, coupled with Continuum, running inside a chroot.

(Sure, there are Debian chroots on Android. There's Ubuntu Touch, but I've never seen a retail handset, whereas Lumias do exist.)


"According to sources at Canonical, Ubuntu Linux's parent company, and Microsoft, you'll soon be able to run Ubuntu on Windows 10." I'm not a native english speaker, but that seems like a poorly phrased sentence, isn't it? (I'm currently reading "Eats, shoots and leaves")


A lot of bash already exists in the "git bash" shell that comes with the git Windows installation [1].

I've been using it for a while now for a lot of non-git stuff too and I'm quite happy with it.

[1] https://git-for-windows.github.io/


Real question: why are users considered "hard core" just because they run bash commands from a CLI?

It's such a strange thought to me, the idea of being a sysadmin and clicking around on a Windows computer... is that how it happens, or do most Windows admins use one of the CLI tools mentioned in the article?


I think you're asking two questions. Why would a sysadmin use a CLI? Think about it this way: what if, instead of letters, your keyboard came with whole words you could use? That's the best analogy for the difference between the GUI and the CLI I can come up with. They're just two APIs, but the latter is generally more complete, faster, and offers more control.

As to why people are considered "hard core" when they use the CLI, it must have to do with the (somewhat false) notion that one needs a special kind of mind to remember all these commands. Most CLI users know they're not geniuses; they just had the patience to go through a tutorial or read parts of the manual. Then they repeatedly used the small set of commands they need almost daily until it stuck in long-term (or muscle) memory. Over time they took notes when they encountered handy but seldom-used commands. They've done this for many, many years with many, many tools. Look over their shoulders as their little fingers do their thing and mistake their craft for wizardry.
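
To make the "words instead of letters" analogy concrete: this sort of thing is a one-liner at the CLI but a pile of clicking in a GUI. The log path and the field number are placeholders that assume nginx's default combined log format:

grep " 404 " /var/log/nginx/access.log | awk '{print $7}' | sort | uniq -c | sort -rn | head
# the ten most-requested paths that returned 404, counted and ranked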


GUIs can be useful too? I don't know why that's so hard to believe.


I'm assuming some of this was driven by Microsoft's desire to support the Docker ecosystem? Great move!


I'm really impressed with Microsoft's decisions lately. To put it bluntly: YOU GUYS ARE AWESOME.


Yes, I also think that their decisions over the past few years are pointing to a real renaissance within the organization, and they are starting "again" to focus on their core strengths.


The ability to use apt-get is what is truly intriguing.

Will I be able to change the repositories that I pull from?
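
If the userland really is stock Ubuntu, repository configuration should presumably work the way it always does. Something like this would be the usual approach (mirror.example.com is only a placeholder, and this assumes the image ships a normal /etc/apt/sources.list):

sudo sed -i 's|http://archive.ubuntu.com|http://mirror.example.com|g' /etc/apt/sources.list
sudo apt-get update   # re-fetch the package indexes from the new mirror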


So a bunch of CLI tools that use a shim layer to translate Linux syscalls to the Windows kernel API. I think this will be somewhat like OS X's POSIX base. I'm dreaming this could eventually mean being able to run KDE or Openbox or something, perhaps by community effort.


Actual tweet: https://twitter.com/windowsdev/status/715211234702966785

Not much information for now, but this seems like a small revolution.


ls c:\program/ /files\documents\recycle/ /bin\

ugh


According to the screenshot it will be:

ls /mnt/c/program\ files/documents/


ln -s /mnt/c/program\ files/ /mnt/c/sw
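# i.e. symlink the space-laden "Program Files" directory to a shorter name,
# so that plain "ls /mnt/c/sw" works afterwards (assuming the /mnt/c mapping from the screenshot)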


Is Microsoft starting to get it?


Cool! Linux with Viruses :-)


All I wanted was a Win32-subsystem COW fork. NT has always had a COW fork. All you needed to do was wire that fork up to Win32. Instead, you did some other random, much less valuable thing.


I'm on Win8, haven't updated to Windows 10. Will this /only/ work on Win10? I really don't want to update -- but will update if it's the only way.


How does this play together with the GPL, _license-wise_?


It seems to be an implementation of a subset of the Linux kernel ABI (binary interface to applications) on top of the NT kernel. It's most likely written from scratch, similar to WINE or FreeBSD's Linux ABI, and in that case there shouldn't be any copyright issues.


Does this make the development story much clearer? I was always confused about the differences between MSYS, MSYS2, MinGW, Cygwin, etc.


Wow, the lack of a solid, built-in bash and apt-get has been the major reason I'd never consider windows for everyday use.


What about the lack of middle-click paste and virtual desktops?


Yeah, virtual desktops are nice; I'm not generally in the habit of using middle-click anymore, though.


I kind of prefer it the other way around, like others here have suggested:

I want a better Wine that can run SQL Server and MS Visual Studio.


I don't see much reason to do either. Virtualization pretty much works now. I'm sure there are use cases where that's not enough, but I think the majority of people who want Windows+Linux would be happy with either of those running in a VM (depending on their use case).


> It also seems unlikely that Ubuntu will be bringing its Unity interface with it. Instead the focus will be on Bash and other CLI tools, such as make, gawk and grep.

I will love it, but not having a graphical interface limits the added value. Currently the main problem with running desktop Linux in a VM is the limited 2D/3D support, which drives up CPU usage and makes your whole computer unusable.

On the server side Hyper-V in Windows 10 is a partial solution that already works.


> Currently the main problem with running desktop Linux in a VM is the limited 2D/3D support, which drives up CPU usage and makes your whole computer unusable.

Bullshit.


The HN guidelines ask you not to call names in comments. Please post civilly and substantively when commenting here:

https://news.ycombinator.com/newsguidelines.html


Excuse me... It seems like you are an ignorant and happy downvoter. HN is for civilized discussions; if you don't agree, make your point.

For anybody else, just search this on Google: 2d 3d cpu ubuntu (virtualbox OR vmware OR "hyper-v")


VirtualBox Guest Additions, virgl for KVM, GPU passthrough.


So... you know something that I don't, and instead of teaching me you insult me. Also, your answer doesn't invalidate what I said, since these are experimental approaches with many issues (see for example https://forums.virtualbox.org/viewtopic.php?f=7&t=69732 )

Now, have you tried these experimental approaches with Unity on VMware/VirtualBox/Hyper-V? Please let us know your results so anyone can benefit from them.


Those are not experimental approaches. I tried them about 5 years ago.

Also, how is that experimental:

https://www.youtube.com/watch?v=37D2bRsthfI

Stop spreading FUD if you don't know what you are talking about.


Clearly you don't know what FUD really is.


Great. Now I may consider buying a PC again.


Great news! For many people, this might make the full transition to a real Linux distribution much easier.


Genuinely interested: will I be able (in the near future) to run native Linux Docker on Windows?


Embrace, extend, extinguish.


Embrace! Extend! Extinguish!


Ubuntu joining up with Windows shit?


I am curious: can these ELF programs interop with Win32 ones?



The question is: Does the date command work?


Does this obviate msysgit?


well, I guess 2016 truly is the year of the linux desktop


Quite similar to the Microsoft and Nokia partnership.


So they've reintroduced Services for UNIX Applications?

This isn't news. This is Microsoft up to the same dirty tricks they pulled in the 90's to try to kill UNIX.


Grabbed the popcorn, now open the floodgates!


This post has more details http://www.zdnet.com/article/microsoft-and-canonical-partner...

The interesting part is "Ubuntu will primarily run on a foundation of native Windows libraries."

If this is true, Canonical is playing with fire. This could be the embrace step of the usual embrace-extend-extinguish script. Think about what happens when Microsoft adds some new functionality to those "native Windows libraries": Ubuntu/Windows gets extended to use it and Ubuntu/Linux obviously doesn't. If (when?) the majority of Ubuntu's users are on Windows, Microsoft only needs to start developing its own Ubuntu and cut Canonical out of the loop. If there is a significant number of Ubuntu/Windows servers by then, very little will be left for Canonical. Only the cost of Windows licenses can save Canonical on the server. The desktop will be lost, given that most Ubuntu desktops are born as Windows machines. The more convenient path will be to add Ubuntu/Windows to them and keep the Windows OS for games, or just in case you need some native Windows application.

Maybe Canonical is thinking about leaving the desktop and focusing on the server. Still, it's a risky move.

Another interesting post from October 2015 (http://www.linuxjournal.com/content/ubuntu-conspiracy) claims "The word is that Microsoft is in secret negotiations to purchase Canonical." Maybe they're pulling the Elop move without having to change the CEO.


> "Ubuntu will primarily run on a foundation of native Windows libraries."

I think that you're predicating an entire argument on what was probably a bit of slipshod writing. After all, we know (now) that this is Linux binaries running on the Windows NT kernel, and that Microsoft hasn't actually done anything to those binaries at all (and indeed touted that as a feature). So Microsoft hasn't taken any steps to make Ubuntu software specific to the new Windows NT Linux subsystem.
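
If the binaries really are untouched, that should be easy to confirm from inside the environment with something like the following (assuming the file utility is installed and dpkg behaves as it does on stock Ubuntu):

file /bin/bash                # expect an ELF 64-bit binary, not a PE/Win32 executable
dpkg -s bash | grep Version   # should report the ordinary Ubuntu package version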

Nor has this even been positioned as an "Ubuntu on Windows server". Indeed, the Microsoft people promoting it have been stating (in all-capitals or boldface, no less) that it's for enhancing developer command-line workflows. It apparently doesn't even have the server capabilities (i.e. running programs as services) that even the old (Interix) Windows NT POSIX subsystem had.

You're also discounting the other Ubuntu news of the month.

See https://news.ycombinator.com/item?id=11415985 and https://news.ycombinator.com/item?id=11416376 .



