Windows Subsystem for Linux GUI (github.com/microsoft)
618 points by anchpop on Sept 10, 2021 | 457 comments



I wonder what Linux exclusive software they are hoping to support. Everything I use in my Ubuntu daily driver has a Windows build or counterpart app. I get how ‘you never have to leave Windows’ is a nice thing for their business, but I don’t see this being a reason I would stop dual-booting. The only reason I run Windows in the first place is for a few apps, mostly games. Otherwise I really enjoy the bloat-free, ad-free, telemetry-free, snappy, tractable, and undistracted experience that is Linux desktop computing.

It would be nice if they did something actually useful, like add native ext4 support.


Even though our dev environment at work is Ubuntu, I greatly prefer running Windows on my ThinkPad X1 Extreme, because of the superior hardware support for the devices I use. Specifically:

• I spend a fair amount of time on Zoom calls (who doesn't?), and I like to use my Apple AirPods so I can move around while we talk. I was never able to get these or any other Bluetooth headset to work on Ubuntu. They pair only as headphones with no microphone. On Windows they work "out of the box".

• I use a triple-monitor configuration with three 4K displays: the ThinkPad's internal 15" display and two 24" externals. In Ubuntu I can only get two displays to work. (BTW one external display is in landscape mode above the ThinkPad, and the other is to the left in portrait mode. I highly recommend this configuration - the portrait mode display is great for reading docs and especially PDF files.)

• I run the external displays at 200% scaling and the internal at 300% to match the differing pixel density. I didn't see any way to support this configuration in Ubuntu, much less be able to move an app window between displays and have it automatically update its scaling factor to match the display. This works "out of the box" in Windows.

So I run Windows on the hardware and Ubuntu with my dev tools like PyCharm and SmartGit in a VMware VM. (VMware works a lot better for this than VirtualBox - the display response is much snappier.)

Of course these are my own needs, and I have no quarrel with anyone who has different preferences. But I welcome anything Microsoft can do to make this an even smoother experience than it already is.


Bluetooth headsets (I have Bose) and automatic input switching work with pipewire, and fractional scaling with different settings for each monitor works with Wayland.

I am using it on Archlinux without any problems whatsoever.

The problem with Ubuntu is that by the time it releases its latest version, it is already shipping software versions that are a couple of years old.

And for pipewire, wayland, mesa and other desktop-related things you want to run the latest versions at all times. This is one of the major reasons why Valve chose Archlinux as the base for their new SteamOS version for the Steam Deck.


  > The problem with Ubuntu is that by the time it releases its latest version, it is already shipping software versions that are a couple of years old.
That is quite a dramatic take and simply not true.

It maybe fits somewhat for Debian Stable (which has a freeze of 4 to 6 months), but Ubuntu is based off Debian Unstable (sid), and sid is very close to the Arch experience - I know because I use both a lot. Granted, Ubuntu adds a bit of a delay due to QA and all the release fuss a rolling release like Arch does not have to care about, but if an upstream software release happened one or two months before an Ubuntu release, it's really likely to be included in that release.

  > And for pipewire, wayland, mesa and other desktop-related things you want to run the latest versions at all times.
Meh, in general I agree with the sentiment, but there are also regressions that hurt when you run into them, and if you have HW that was released over a year ago it may not matter that much.

  > This is one of the major reasons why Valve chose Archlinux as the base for their new SteamOS version for the Steam Deck.
Not directly, they could have used Debian sid for that, and the fact that SteamOS 2.0 is still on Debian 8 (newest is 11) also shows that they did not try to go for the latest releases until now. I'm working on a Debian derivative and we just backport things ourselves if really required; it is a bit of work but not that much (we're definitely orders of magnitude smaller than Valve) - especially as the Debian unstable/sid repo is quite up-to-date and thus we can often just take it from there and base on that anyway.

But actually Debian and Arch Linux are really close anyway. I do packaging for both (though I'm neither a DM nor an Arch trusted user), and if the software is not awful to package in general, it's quite a bliss to do for both; there are also lots of parallels, even if often slightly hidden. So I won't mind much either way; I may even try setting up Debian Sid once I get my Steam Deck :)

FYI: here's some good background reading on all this from a Debian developer who also works for Collabora on the Steam Runtime: https://lists.debian.org/debian-devel/2021/07/msg00214.html


Debian used to be my main desktop distro 10 years ago, but I switched to Archlinux because for me personally Debian required too much work after packages got broken (I was on testing or unstable, I don't remember anymore), and it happened too frequently. And also the Archlinux AUR was very powerful, with no good alternatives on Debian at that time.

I still have some servers running Debian but I have migrated most to Ubuntu because of much simpler upgrade procedures between stable versions (basically I have enabled automatic upgrades with auto reboot on all of them).

I am an enthusiast, but I like to have things running smoothly, and even with Arch being a rolling release, it has been working more smoothly for me than Debian did back then.

Maybe it is because of the excellent documentation on Arch wiki, maybe it comes with experience. All of this is subjective anyway.


I'm using Fedora. Scaling works, but not for all apps. Gnome apps work just fine. But Chrome, Intellij Idea and some other apps do not scale when moved to other display. AFAIU Wayland-native apps work, but those which use Xlib do not re-scale properly.


I saw on their bug tracker that Intellij devs will work on it this year.


> • I run the external displays at 200% scaling and the internal at 300% to match the differing pixel density. I didn't see any way to support this configuration in Ubuntu, much less be able to move an app window between displays and have it automatically update its scaling factor to match the display. This works "out of the box" in Windows.

This works if you use Wayland rather than X11. I'm using a similar configuration today.
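If anyone wants to check their own setup, roughly this is what I'd look at (the gsettings flag is GNOME/mutter-specific and, as far as I know, only needed for non-integer factors - treat it as an assumption):

  echo $XDG_SESSION_TYPE      # should print "wayland", not "x11"
  gsettings set org.gnome.mutter experimental-features "['scale-monitor-framebuffer']"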


That is good to know, thanks for the tip! I will definitely try that whenever I use Linux on the hardware in the future.


However, the X1 Extreme has an Nvidia GPU.


Since June the Nvidia proprietary driver has supported KMS and Wayland, so fractional scaling with Nvidia should work now.
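If anyone wants to try it, the usual missing piece is enabling the driver's kernel mode setting - a sketch, with the initramfs command depending on your distro:

  echo "options nvidia-drm modeset=1" | sudo tee /etc/modprobe.d/nvidia-drm.conf
  sudo update-initramfs -u     # Ubuntu/Debian; mkinitcpio -P or dracut -f elsewhere
  cat /sys/module/nvidia_drm/parameters/modeset   # should print Y after a reboot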


It kind of does, but ... it also doesn't.

I've been running {KDE, Gnome} on Wayland + KMS on a box with a GTX1080 for a few months, and

- it is really laggy (sometimes the mouse cursor is choppy)

- fractional scaling is practically unusable due to all kinds of important apps (e.g. Chrome) not supporting it and just blurring the screen instead of properly scaling

Overall I can't recommend it to non-enthusiasts, unfortunately.


No lags for me with GTX 970. And I use Firefox, it has Wayland support.


"It works for me on very specific, ancient hardware, and also don't use <software you use>" isn't a great selling point for people who just want their machine to work and aren't bought into Linux On The Desktop as a philosophical ideal. That's why people use WSL2! Popular software works, popular hardware works, you can run Linux programs from the command line without installing and managing a separate VM yourself (yes, yes, it's virtualized under the hood by the OS, but you don't need to manage the VM yourself), and now you'll be able to run Linux GUI apps too.


If you want a great Linux desktop experience, don't buy Nvidia GPUs. Intel and AMD have very good open source drivers, while Nvidia has only the proprietary ones, and they are known to have all kinds of issues. There is a good reason Linus Torvalds said the famous words "fuck you Nvidia".

And a major problem that still persists with WSL is the NTFS mounts in Linux. At work we can't get decent compile times on Windows because of the file system.


WSL2 uses ext4, not NTFS. Just make sure the files live inside the WSL2 filesystem and not on the Windows host, and you're good to go.
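A quick way to see the difference from inside a WSL2 shell (just illustrative; exact mount options vary by build):

  mount | grep -E 'on / | /mnt/c '   # / is ext4, /mnt/c is a 9p (drvfs) bridge to the Windows host
  # so keep source trees under e.g. ~/src rather than /mnt/c/Users/... for fast builds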


To be honest, with the amount of video meetings I have, I invested in a far better audio setup than Bluetooth headset microphone.

The amount of meetings I've had where the audio from the other end sounds like the BART announcements is too high...

That being said, I know what you mean. However, I'm actually rather annoyed at the headset microphone functionality on Windows, because if anything starts using the microphone, it switches modes and the audio quality goes way, way down. Fine for a Zoom call maybe, but having horrible audio in a video game is not fun. I constantly have to go to the control panel to turn off hands-free telephony.


Audio quality matters to me too, and of course the AirPods are quite a compromise. The speakers in them sound good enough, but I know the microphones are not great. Mind sharing some details of your audio setup?

I do like having wireless headphones of one sort or another, so I can walk around while staying in the conversation.

Also, over-the-ear or in-the-ear (the kind that go into your ear canal) headphones don't work for me. The only kind I'm really comfortable with are the kind that sit lightly in my outer ear, e.g. AirPods and not AirPods Pro.

But in any case, I'm eager to hear about your setup. And I'm probably not the only one who would welcome better quality audio. Thanks!


As far as a microphone goes, I use a Blue Yeti on a shock mount suspended next to my desk along with a pop filter (roughly $200 USD altogether). To be honest, I've had it for a while because I've played games or talked with friends on Discord regularly for years. We all started with the inline microphones on our headsets and collectively decided to move past what we lovingly described as "BART station audio" since it became an almost daily activity.

As far as work goes, a few of my coworkers use RTX Voice to remove background noise since they use their gaming computers for work. For headphones, I personally like over-ear headphones and use wireless noise-cancelling headphones.


Thanks! Yes, I've always heard good things about Blue mics.


> if anything starts using the microphone, it switches modes and the audio quality goes way, way down.

Funny, I don't see this on windows, which will leave the headphones at high quality (except for when I'm speaking, at which point I don't care that much about audio) - but on Linux I have to set the audio profile for the headset to low quality in order to enable the microphone at all - so all audio out is crap as long as I'm on a call/have the microphone enabled (even muted).

Still waiting for this to improve - but looks like it'll require new hw (new Bluetooth receiver and sender).


Ubuntu by default pairs Bluetooth headsets as A2DP, which has higher quality for music but no mic support. You have to go into Ubuntu's Bluetooth sound settings and change the output device to headset mode every god damn time you turn on the headset. And then toggle it back to A2DP if you want to listen to music again. I don't know how Windows or iOS deals with this, but there both modes seem to work seamlessly.
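With stock PulseAudio you can at least script the toggle instead of clicking through the settings panel every time - a sketch, where the card name and exact profile strings come from `pactl list cards short`, so treat them as placeholders:

  pactl list cards short
  pactl set-card-profile bluez_card.XX_XX_XX_XX_XX_XX headset_head_unit   # mic mode (HSP/HFP)
  pactl set-card-profile bluez_card.XX_XX_XX_XX_XX_XX a2dp_sink           # back to music-quality A2DP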


Pipewire can switch to headset mode automatically whenever a microphone input is needed. And it switches it back to headphone mode afterwards. It works on Linux exactly like on a phone or macOS. Pipewire is backwards compatible with Pulseaudio so there is no reason to not migrate.


I've not heard of pipewire. Need to check this out because I'd love for my truly wireless earphones to be able to seamlessly switch modes on Ubuntu. I use a wired headset for work calls I need to be on quickly because it's pretty flawless, but the tether is more than a little annoying.

EDIT: It took me 5 minutes after reading the comment above to replace pulseaudio with pipewire on Ubuntu 20.04, now I have access to my earphones' high quality codecs too right from the Ubuntu sound control panel!

I used this[0] then this[1]

[0] https://ubuntuhandbook.org/index.php/2021/05/install-latest-...

[1] https://ubuntuhandbook.org/index.php/2021/05/enable-pipewire... - I only followed steps 1 and 4 and audio was switched instantly, earphones already paired. Now I have AAC, SBC, SBC-XQ...

EDIT2: Switching to/from my earphones and from 1 bud to 2 buds appears flawless so far, even for the Spotify desktop for Linux app, which usually requires a `pulseaudio -k` to send the audio out of the right device, even if it's correctly selected in sound settings.

EDIT3: Don't forget to mask pulseaudio (yellow box, second link) or pulseaudio will load on reboot and break things, no amount of systemctl disable will stop it without masking.

EDIT4: Linked site has different theme on mobile so yellow box in EDIT3 isn't yellow.
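EDIT5: For anyone skimming, the pulseaudio-to-pipewire handover from those guides (after installing pipewire from the PPA in the first link) boils down to roughly this - a sketch from memory, so double-check against the links:

  systemctl --user --now disable pulseaudio.service pulseaudio.socket
  systemctl --user mask pulseaudio                 # the "yellow box" step from EDIT3
  systemctl --user --now enable pipewire pipewire-pulse
  pactl info | grep "Server Name"                  # should report "PulseAudio (on PipeWire ...)"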


oooh... Thanks! Thanks!... This was a breeze and I had to pinch myself to make sure that it was really working...

PS: In the past, I've wrestled with PulseAudio & BT dongles and my JBL headsets on Ubuntu. Banging my head on a brick wall would have been more pleasurable than that.

PPS: and today we have Pipewire on the front page!


Glad you got it working. I deserve no credit though, I just followed some instructions and linked it here.

It was even easier to set up on my desktop (now running Manjaro rather than Ubuntu because of SteamPlay/Proton et al): just `sudo pacman -Sy manjaro-pipewire` followed by a reboot. If it complains about pulseaudio-related conflicts, just `pacman -R` the packages it mentions, then try manjaro-pipewire again, reboot, and you're done.


I sort of had heard of pipewire - but only to the extent that it was the next audio/video stack.. No idea about how far along it was. Also no idea that it had the BT headset thingy all sorted out.

Your comment made me go "huh! that's easy enough to finish off in 10 now" and provided the impetus I needed


Doesn't macOS also have to deal with this? IME, Bluetooth audio from a Mac sounds like a fart in a Pringles can unless you update some plist somewhere. But once you do that It JuSt WoRks.


  > I spend a fair amount of time on Zoom calls (who doesn't?), and I like to use my Apple AirPods so I can move around while we talk. I was never able to get these or any other Bluetooth headset to work on Ubuntu.
I do not want to be the cliché Linux user who recommends some config change and asserts that it would have been simple and helped 110%, but out of interest, did you also try something like

  > Set ControllerMode = bredr or ControllerMode = dual by editing
  > /etc/bluetooth/main.conf file
  > systemctl restart bluetooth
(paraphrased and s/sysv/systemd/ from https://itectec.com/ubuntu/ubuntu-pairing-apple-airpods-as-h... )


On linux mint, zoom works fine, with bluetooth headphones, attached speakers, bluetooth speakers, etc. Has for years (well, I started using it (bluetooth headsets with linux) in 2008 or so, and it worked then as well).

On windows, brand new Dell laptop for work, with an insanely locked down version of windows 10, zoom often crashes, especially when sharing my screen. This in turn takes down many other applications. Generally making the whole windows experience far from optimal.

On my Sager laptop a few years ago, and now on my HP Omni (personal) laptop, I regularly drive 2 screens plus the laptop display. Works. Out of the box. On the work Windows 10 laptop, it's a crapshoot at best. And I can't use the NVidia card very much in Windows 10, simply because the system is so locked down. Thus I'm stuck with an expensive and useless feature. One that works flawlessly on my locked-down Linux box.

On different scalings for different monitors: it's built into Mint.

I'm guessing you are either running a very old version of Ubuntu (literally all the complaints you made are many years out of date, having been solved long ago), or you copy-pastaed from somewhere else. My priors on this are 60% the latter 40% the former.

On my Linux laptop, I run Windows the way it should be run (if you really need to run it): in a KVM instance. Never touching real hardware. And what's funny about this is that the virtualized Win10 is faster than the far more expensive Windows 10 work laptop right next to it.

Go figure.

FWIW, I've been using Linux on my desktop for 23 years, and as my primary OS on my desktop/laptop for 20 of those years. So ... YMMV.


This seems mostly anecdotal, but I've been using Bluetooth headsets and headphones on Linux since around 2008, and most have worked out of the box with no issues.

I do remember at the time, windows didn't really work with those Bluetooth headphones, and I actually started bringing my own Linux laptop to work just so I could use my headphones.

When it comes to Zoom, I'd say the problem is having to use that crap and not the fact that they barely support Linux. MS Teams also won't work on Linux, but it's hard to claim it's an issue on Linux's side. I'd suggest looking at something like Jitsi, which is also encrypted and takes security into consideration.

Ubuntu is a pretty bad example for anything though: they're usually trying to reinvent the wheel, and it's very common for things that work everywhere else not to work there. I personally have mixed feelings because they both make Linux more popular and build good tools, but also give Linux a bad rep at the same time. Maybe give Fedora a shot?

Finally, per-display scaling works fine on Wayland, but won't work on Xorg. I believe Ubuntu still uses the latter.


It is not true that MS Teams does not work on Linux. We have several devs working on Linux (I don't know which specific distros though) and it works just fine. There are some annoying pop-ups saying "Teams is ready" any time they get a message, and one of them has to switch something about his graphics from time to time to be able to share his screen (which has never been a problem for anyone else), but I wouldn't say that it doesn't work at all.


"just fine" might be a stretch, it's an awful program - but so are most commercial alternatives (slack is marginally better).

But yes, it works as intended on Linux (I use the official snap with Wayland/Ubuntu and even screensharing works).


> When it comes to Zoom, I'd say the problem is having to use that crap and not the fact that they barely support Linux. MS Teams also won't work on Linux, but it's hard to claim it's an issue on Linux's side. I'd suggest looking at something like Jitsi, which is also encrypted and takes security into consideration.

Jitsi is not as good as Zoom. Zoom seamlessly integrates with multiple monitors, and it provides a variety of tooling to rearrange your view of people and shared desktops. Just as an example last week I was helping two coworkers troubleshoot something, and I was able to have both of them share their desktops simultaneously. I had one on one monitor and one on the other. It was painless and instant. Maybe Jitsi supports such a thing somehow but it would have taken a minute or two to find the right buttons to press.

Also Zoom is encrypted. It has always been encrypted. Zoom lied about having E2E encryption and also weirdly had a lower-grade AES. Zoom is weird because while I 100% mistrust their motives in using weaker encryption, there are some legitimate tradeoffs between reliability and encryption - and it's actually pretty unlikely that they could implement E2E encryption without compromising video quality.

And ultimately especially with the pandemic and only being able to see people via video, even the smallest problems are potentially quite massive. I've used Jitsi a bit, and I don't think it's an exaggeration to say that it would mean that I would have spent at least an extra hour a week during the pandemic troubleshooting video when I was trying to have a nice visit with friends or family. I'm not going to be an ideologue when I lose that kind of time.


About the headphones, are you sure you've selected the right thing in the app? Not pulseaudio, but zoom. I had this problem that zoom would ignore whatever I set with pulse. Once I figured this out my problems went away.

For games, proton has come a long way. I don't find myself needing to boot out of Linux very often anymore.


Thanks for asking. In fact that is one of my pet peeves with Zoom, that it has its own audio selection independent of the host OS. This is a problem on Windows as well - and I assume on macOS too.

I have a friend who uses Zoom a lot on her Windows laptop, and this "feature" of Zoom has messed her meetings up so many times!


WSL v2 is a VM too and better integrated. I like it very much.


> like to use my Apple AirPods so I can move around while we talk. I was never able to get these or any other Bluetooth headset to work on Ubuntu.

Is this due to Apple having drivers for AirPods for Windows, or is the Windows Bluetooth stack that good?


I didn't install any Apple drivers, just paired and connected them in the Windows settings. I also tried a couple of cheaper Bluetooth headsets and they worked equally well. None of them would connect as headsets in Ubuntu 20.04.


I have a short Bluetooth pairing troubleshooting guide for the RPi [1] which shows the basic bluetoothctl commands. Give it a try next time.
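The interactive flow is basically the following (the MAC address is a placeholder for whatever `scan on` shows for your headset):

  bluetoothctl
  [bluetooth]# power on
  [bluetooth]# agent on
  [bluetooth]# scan on
  [bluetooth]# pair XX:XX:XX:XX:XX:XX
  [bluetooth]# trust XX:XX:XX:XX:XX:XX
  [bluetooth]# connect XX:XX:XX:XX:XX:XX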

But manufacturers (incl. Apple) do test their devices with the Windows Bluetooth stack, as after all it holds the majority market share.

[1] https://abishekmuthian.com/fixing-bluetooth-issues-on-raspbe...


This. It is basically a way out for people that don't want to run Mac.


Indeed. We take for granted this kind of robust hardware support on macOS and Windows.

macOS by itself wouldn't work for me, though. I would still need to run a Linux VM because of our very fussy build system. And besides, where would my beloved TrackPoint be? :-)


Nice setup. I prefer one large monitor over dual displays and a desktop over a laptop. I just don't have any use for the small laptop screen and keep it closed most of the time.


> I spend a fair amount of time on Zoom calls (who doesn't?)

I'm way more productive with email and just a couple phone calls a week as needed.


Ha! Tell me about it. :-)

In truth, I don't spend that much time on Zoom calls, and of the few that I have, many are no video, just substitutes for a phone call with better audio quality and the ability to share screens if needed.


To be honest, there's a lot of research-related software (including for visualization) that works on Linux only or has very poor support for Windows. As a roboticist, one example that comes to mind is the whole open source robotics ecosystem with ROS/Gazebo, which is pretty much Linux-only. Personally I've been using Linux as a daily driver for 10+ years now so it's not a problem for me per se, however I'm sure there are many who would be interested in seeing better Windows support. I've heard of similar issues in the domain of particle physics and a few other niche research areas.


As someone who uses Windows as their main driver, I will personally find it a useful way to test/debug our Electron based app on Linux. Right now I’m using a full VM.

When you consider that Edge is available for Linux, MS could very well be using WSLg to develop it.

So it seems to me that this just makes it easier to do anything you need to do on Linux, “on” Windows.

Of course making it easier for people on Windows to make software for Linux seems like a way to help Linux, which is a bit confusing to see MS do.


> Right now I’m using a full VM.

WSL2 is also a full VM


Is that so? Isn't it more like a bridge between Linux and Windows kernels, so that stuff is ultimately delegated to Windows?


WSL1 was done that way (though with a compatibility shim, not a real Linux kernel), but it had a number of shortcomings, primarily (as I noticed) in file I/O performance, but also in compatibility, as they had to map all the syscalls themselves.

https://docs.microsoft.com/en-us/windows/wsl/compare-version...


A bridge between kernels is how most VMs work these days. The kernel inside the VM has special drivers for extra-simple 'hardware' that the host OS provides.

As opposed to WSL1 where there's a wine-esque module in the windows kernel, and there is no linux kernel at all.


It is, just with some fancy integrations that make it more comfortable - for example memory reclamation. But it's all a fancy VM in the end.
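You can see that from inside a WSL2 shell easily enough (just illustrative):

  uname -r    # e.g. 5.10.x-microsoft-standard-WSL2 -- a real Linux kernel, not a syscall shim
  free -h     # reports the utility VM's memory cap, not the host's full RAM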


A super fast one. After one click, within 2-3 seconds I can have an Ubuntu terminal open with wsl. To spin up a VM from, let's say, VMware, I need at least 10 times that.


I don't think that's magic in the hypervisor, so I doubt it affects overall performance of the VM very much. But WSL images have a special boot process instead of a full init system like systemd, which is probably where the fast startup comes from. Maybe it also has to do with how they configure storage for the VM. On real hardware with a decent SSD, Ubuntu usually gets you a graphical login in less than 10 seconds.

Anyway it's a cool feature, and I'd love to know more about how it works.


And the fact that they probably use a slimmed down kernel with just the right amount of modules, I guess.

In any case, booting Linux (the kernel) is always incredibly fast, and booting to a tty, without all the systemd units that are generally loaded, is incredibly fast per se.


WSL 1 was. WSL2 is a VM.


But a special VM.


Still a VM :)


As far as I know, both the NT kernel and the WSL2 Linux kernel use the same hypervisor below them.


Also looks good for debugging Puppeteer with a head (non-headless) in a Docker container.


> I wonder what Linux exclusive software they are hoping to support.

It would be more of a case of how well certain software works, or how well that software works together, than one of supporting Linux exclusive software. There have been a variety of ways to run Unix software under Windows for decades. Quite often, there are quirks to deal with unless considerable effort has also been put into the Windows native version. I doubt that WSL will actually appeal to many existing Linux users, but it will probably prevent the slow flow of people from Windows to Linux.

I agree that native ext4 support would be more useful for people who dual boot.


Whenever you want to natively operate on files within WSL instead of going through the network share abstraction, this is definitely helpful. I'm running my git GUI (Sublime Merge) on the Linux side and am currently piping the UI through to Windows using VcXsrv. If I can remove another dependency using this - great.
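For reference, the glue on the WSL2 side is just pointing X clients at the Windows-hosted server - a sketch assuming the default NAT networking and VcXsrv started with access control disabled:

  # in ~/.bashrc inside WSL2
  export DISPLAY=$(awk '/nameserver/ {print $2; exit}' /etc/resolv.conf):0
  export LIBGL_ALWAYS_INDIRECT=1   # helps some GL apps render through VcXsrv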


So even if a Windows build is available, sometimes the user experience of Linux-first software on Windows can be suboptimal because of differences in the filesystem and process models of the two operating systems. I much prefer using git and emacs within WSL to their Windows builds.

> The only reason I run windows in the first place is for a few apps, mostly games

Another reason to use windows is if you are on a laptop and care about battery life. Browsers on Linux still don't have hardware accelerated video playback.


> Another reason to use windows is if you are on a laptop and care about battery life. Browsers on Linux still don't have hardware accelerated video playback.

That's just not true. Might be a problem with some GPUs but not with all. At least I got hardware accelerated video playback on Chromium with my AMD GPU.


I happily found Chromium-VAAPI almost 2 years ago and it works great with Intel integrated GPU video acceleration on Arch Manjaro Xfce

https://aur.archlinux.org/packages/chromium-vaapi


At least for AMD cards it works with no extra configuration, besides installing the open-source driver if the distro doesn't do it automatically. I'm not sure which framework it's using, VAAPI or VDPAU. My test method is: GPU activity spikes up when I hit play on a YouTube video. :) I'm on Arch/Gnome.
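A slightly more direct check than watching for the activity spike, if anyone wants one (both tools are small optional installs):

  vainfo      # from libva-utils; lists the decode profiles the driver exposes
  radeontop   # live GPU utilization while a video plays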


IIRC, they were primarily interested in getting GPU acceleration (for ML tasks) to work in WSL2.

I presume getting GUI working on top of those GPU APIs was a trivial task (and maybe done by one of their interns or during a hackathon).


It sounds like they did significant work getting the whole stack to work well, from Wayland to the RDP back-end to improvements to the RDP client on the Win32 side.

Your interns must be rock stars on meth.


Most likely energy drinks + Adderall + financial insecurity. Adderall, Meth, same thing.


That sounds like the killer app. GPU passthru can be an absolutely gnarly undertaking with any virtualization system.


GPU passthrough on WSL2 has been a thing since around December 2020 -- but I'm not sure if it's in stable yet; I had to install an insider build for it. It works surprisingly well and I was able to develop my ML project using it (with the help of VSCode devcontainers).


Why did you want to develop it through WSL2 instead of just compiling it on windows?


Anyone could already do it with a few lines of code.

Edit: Seems people don't believe me, here's an article how to do it: https://techcommunity.microsoft.com/t5/windows-dev-appconsul...


That shows how to set up an X server in windows that WSL can access. How do you get GPU acceleration for ML tasks using X11?


I think it's mostly a "because they can". WSL is a no-brainer because developers are used to Unix shells and most programming languages are Unix-first, Windows-maybe. But WSLg feels like a weird experiment with no purpose.


This could very well be the case, but as someone who is just dipping a toe into programming, installing and using WSL2 knowing that I can fall back on GUI when I can't figure out bash is a feature for me.


I think the main target is ML applications that depend on GPU access and being able to run a WM is just a side-benefit.


My networked workplace computer needs to be able to compile a Windows application plus peripherals running various other architectures. The Windows part happens best in Windows, while the other parts are remarkably painful to compile without Linux. And all the platforms can compile in parallel.


I suspect this is to try to push large business/corporate clients to drop Linux.

Windows adds support for running Linux apps, then spreads some FUD about Linux, and convinces companies they need to ban dual-booting and only allow Windows internally.

It certainly does _sound_ like MS.


I was looking into this further because it's sort of impacting my dual-boot workflow. I have ext4 media drives on Linux that aren't viewable or readable from Windows, but Linux can at least read the NTFS drive. The third-party tools I've tried in the past for making ext4 readable in Windows File Explorer have some sketchy security concerns and/or missing Win10 support.

It looks like using the method described in the link below it’s now possible to mount ext4 drives via WSL2 and even browse them in File Explorer:

https://superuser.com/a/1630438

It’s not clear if they are also writable or not, I’m off to try it!


You can use WSL to mount ext4 and other filesystems supported in linux.

https://docs.microsoft.com/en-us/windows/wsl/wsl2-mount-disk
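The short version, run from an elevated PowerShell (the drive and partition numbers are placeholders; since it ends up as a normal kernel ext4 mount it should be writable too):

  wmic diskdrive list brief                    # find the \\.\PHYSICALDRIVEn holding the ext4 partition
  wsl --mount \\.\PHYSICALDRIVE2 --partition 1 --type ext4
  # the partition shows up under /mnt/wsl/ inside the distro (and via \\wsl$ in Explorer)
  wsl --unmount \\.\PHYSICALDRIVE2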


So far no good. That feature was a preview release that required insider builds, which require enabling telemetry that sends, among other things, "information about websites you browse, apps and features you use..." Moreover, I'm not able to use Windows 11 which has the feature because of my AMD Threadripper 1950X processor.


I wonder if disabling telemetry works on the insider builds.

There is a guide here:

https://medium.com/geekculture/how-to-stop-windows-10-from-s...

However waiting for 21H2 is probably a lot easier.


Almost everything I use runs faster on Linux so even if it isn't Linux exclusive I'd very much prefer to run it on Linux.

VS Code brilliantly lets me develop on WSL2, IntelliJ is getting there but with these new developments it might become easier to run everything on Linux.

That said, IT at work not only tolerates Linux but actively supports it, so I might be back on Linux again very soon.


If all your development is on Linux, it's convenient to run the IDE there as well.


> It would be nice if they did something actually useful, like add native ext4 support.

https://docs.microsoft.com/en-us/windows/wsl/wsl2-mount-disk


I use WSL with VcXsrv on my work machine. For me, the biggest feature is that I can share the clipboard between vim running in WSL and Windows. It's also often significantly faster to get a program up and running in Linux than on Windows, especially if it has lots of dependencies.


You would be surprised at what a large effect the removal of friction can have. I bet this will convince a decent chunk of people that dual-booting isn't worth it.


Exactly, don't underestimate what removal of a few annoying hoops to jump through can do.

The other day I wanted to compare gitk on Windows with the same on Linux. But there was no Xserver installed, so the idea was dropped.

The whole reason was to see if gitk also had that annoying enumeration at startup in Linux.


Well, ROS?

I can't tell you how many people want to mess around with the robot operating system but don't want to dive head first into Linux. Hell, my very large robotics company won't even give you a linux machine. You're forced to use Mac or Windows or build it yourself.

But in general, I haven't dual-booted my main machine since WSL got good, and I'm a Linux-first kind of person with a penchant for Windows gaming.


I bet Microsoft would rather that they used Azure Sphere OS or Azure RTOS for that purpose.


Possibly, but that would be misguided. That's not even close to the niche ROS fills. ROS runs on Ubuntu pretty much exclusively, is essentially a mediocre messaging middleware and a mediocre build system, upon which 95% or more of the world's decent robotics research is conducted. The main research tools, like Gazebo, require gobs of GPU processing that usually preclude virtual machines. So everyone dual-boots or goes Ubuntu-native. I'm not saying that's why WSLg is being invented, but if it can produce native-ish GPU performance, that would be an amazing use case.

WSLg had an early demonstration of ROS simulations running, in fact. So I can double down on this being a use case.


I have a Windows machine for my work at giant megacorp. I run Linux on my at-home machines, but the honest-to-god "I can't believe it's not Linux" experience I've gotten from WSL has been great. I pretty much tab into my full-screen Linux window manager running in an X11 client window and get down to work.


> Everything I use in my Ubuntu daily driver has a Windows build or counterpart app.

And using them is an endless shuffle with fractured distribution and updates. WSL gives you apt-get. And good luck when every port you use integrates with a different subset of the roughly 5 SSH options that are in common use on Windows.


They’re supporting servers. That’s the whole story. They won business desktops but lost enterprise in the “anything that requires network access”, and want to continue to sell software to those customers. Making development less onerous supports that goal.


I just really like the OS experience of Ubuntu. I’d rather run a windows vm but the gpu doesn’t play very nice sometimes


The reverse would also be pretty useful. Heck, if they pulled off Office for Linux, even if they charged for Windows Pro and a 365 license, it would probably be welcomed.


I would buy that the moment it is released. That would be the fabled Year of the Linux Desktop.


You can always just use Unity mode on the free VMware Player. VMware may allocate resources, but plays nicely with sharing them when not actually in use, so there's not much of a performance hit on the host machine unless you really need to do something that has the CPU pegged.

I think VirtualBox has a similar feature, but in my limited experience VB doesn't perform as well as VMware.


Would be bad for vendor lock-in.


They made it specifically for machine learning


Qemu manager would be good.


Terminal emulator.


Every single person who has ever claimed Linux runs flawlessly on a piece of hardware has had some flaws. I include myself here - I once bought a laptop with entirely OSS mainline kernel supported hardware and compositing didn’t work on external displays.

99% hardware support isn’t good enough. I want to make new software not troubleshoot other people’s.


Thinkpads generally work perfectly. Also, windows hardly has even 99% compatibility with hardware. There is hardware that behaves better under linux than windows as well.


I have a colleague who uses Linux with a ThinkPad; he was the person who made me think of my comment. Linux works perfectly, but then starting Citrix causes a weird perpetually zooming effect on his external display.


> It would be nice if they did something actually useful, like add native ext4 support.

Other way around. With the kernel getting real support for NTFS (it was merged into Linus' tree a month ago [0]), there's hope to get native performance on WSL2.

Microsoft is building the dev environment for the next decade.

[0] https://www.linuxtoday.com/news/linux-kernel-5-15-will-have-...


Linux and Windows use mutually exclusive permission/ACL bits, even on the same NTFS filesystem.


If Linux would just adopt NFSv4 ACLs, there'd be nothing mutually exclusive about it; they'd work perfectly in tandem instead.


> even on the same NTFS filesystem

can you explain a bit how this works?


I think they use extended attributes in NTFS to provide the Linux file system permissions.

https://docs.microsoft.com/en-us/windows/wsl/file-permission...
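For the DrvFs mounts of Windows drives (/mnt/c and friends) this is opt-in via /etc/wsl.conf, per that page; the umask/fmask values below are just an example:

  [automount]
  options = "metadata,umask=022,fmask=011"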


That was WSL v1. WSL v2 is a full blown VM and the filesystem is native ext4 and lives in an image file.


> Microsoft is building the dev environment for the next decade.

Big claim, most devs I've met either use Mac or Ubuntu. Can't remember anyone using Windows...


Have a look at JetBrains' developer surveys. Windows is consistently the most used OS.

https://www.jetbrains.com/lp/devecosystem-2021/#Main_on-whic...

Windows: 61%, Linux: 47%, macOS: 44%


There are literally dozens of us!

No, really, when all of my development happens over SSH or inside Docker anyway, it doesn't really matter which is the "outer" OS. I'm happy with Windows.


Mac is a small slice of the software engineer market, and mostly in the web-app/mobile space.


Every single FAANG (and many middle-tier companies too) distributes MacBook Pros to its devs.


Just because there is no viable enterprise-ish Linux computer that fits those environments. Sadly :(


Definitely not true since (at least at FB) you can also get an auxiliary laptop (Thinkpad) with Linux (Fedora). Just that no one wants them.


It really depends on the company. In my limited experience I can say that only FAANGs will probably allow such an environment. Other big corporations aren't quite there yet.


And yet that doesn’t constitute even close to the majority of developers.


It’s pretty common among game, .net, and Java EE devs. For me as a Python/Node/Cloud dev it was kinda a nonstarter (it all theoretically works, but has all kinds of little bugs and caveats) until WSL was stable. Since then it’s been perfectly viable for anything I’m working on, and I was able to use it exclusively for dev work for about 6 months. I have a Mac laptop too, but my desktop is too beefy to not use as my daily driver. Still, I prefer Linux for development work.


<raises hand> I've consistently developed Unix (then Linux) server software on WindowsNT since 1996.


I develop Linux software on Windows using Visual Studio / C++. It lets me build and debug my servers on Linux machines remotely using nice IDE and tools. Also use CLion from Jet Brains the same way


Linux has had read-only support for NTFS longer than WSL has been around. And if you think that Kernel patch is a testament to the greatness of Windows, you should try reading some of it. It's infamously incomprehensible.

I'd be onboard with Windows as a next-gen dev environment if it was compatible with more filesystems, had a more organized file structure, featured greater CPU compatibility, and eliminated the system registry altogether.


For those that don't know, this will only be available in Windows 11, see https://github.com/microsoft/wslg/issues/347#issuecomment-87...


I see that those of us with a 7th gen Intel Core CPU who "need not worry because W10 will be supported for years" will immediately start missing functionality.


At the very least the support requirement is a soft one, not a hard one. A large portion of motherboards from that era had firmware updates to officially support Win11, and Win11 WILL work on an i7-7700K even though it's not in the list. You unfortunately won't get it through Windows Update and will have to install the hard way.

And if there are problems, you'll be sorely out of luck.

Pisses me off, but at least it's not a complete blocker.


According to The Verge, they may block security updates if you install manually through the ISO, so that's a no go for me: https://www.theverge.com/2021/8/28/22646035/microsoft-window...


Ok, if that's true, I'm back to raging about how absolutely ridiculous this is. Why obsolete a computer that can still run almost anything on high-ish settings @.@ Because there are 0.01% more crashes or whatever....


I’m speculating it was supposed to let them drop Meltdown patches. Doing so easily creates “20% performance gain over Windows 10 on same machine”. Aligns with the statement that they “block” “security” updates.

But this is clearly coming from someone who's not actively in a coding role and who didn't consult developers, as some of the CPUs to be supported (namely Ryzen 2k) don't have the required but not very well debugged features (MBEC, present on Zen 2 - Ryzen 3k and up - with luck).


I don't think 7820HQ has more resistance to Meltdown than 7700HQ. In fact TSX-NI might make it worse. Yet they allow the former but not the latter.


> I’m speculating

Was that a pun?


There is the path that that could be TRUE, and the path that that could be FALSE :p


so they can get an extra $60 out of the OEM for a new Windows license when you buy another PC


To be honest I’m not going to miss the annual update that messes with all my settings and tries to force a MS account on me.


You know, you will still get those.


I assume not, that we will just get security updates until it goes end of life. No more "features".


I hope you're excited for the 21H2 feature update, coming in the second half of 2021. The announcement mentions that it will get 18 months of servicing so I'm assuming this means we will get future feature updates as well.

https://blogs.windows.com/windowsexperience/2021/07/15/intro...


Support doesn’t mean you get all the new features.


Yeah, bummer. My old Windows machine won't be upgradable to Windows 11 ... interestingly I was able to install the Windows 11 preview and even get the WSL2 update with the integrated X11/GUI, and it worked great. However I was notified I couldn't upgrade to the official build and the only recourse was to re-install Windows 10.

I'll need to revert to one of the available X11 servers but I wiped out the old configuration and it's kinda painful to automatically set $DISPLAY and also get Norton Firewall to play along.


I have a solution for disk/partition types/layouts incompatible with upgrading to newer Windows 11 builds (but not for the TPM workaround), but I haven't gotten around to packaging it and publishing it for download on our site.


Don't think I've heard about those, what all requirements changed on the storage side there?


I don't know if it's what the GP was referring to, but partition requirements pertaining to MBR vs GPT, and specific requirements for alignment, MSR properties, and order of partitions, have been locked down considerably. Annoyingly they all manifest as an opaque "this PC isn't compatible" or similar message.


To be fair, it's also the first prerequisite on the linked page.

> WSLg is going to be generally available alongside the upcoming release of Windows. To get access to a preview of WSLg, you'll need to join the Windows Insider Program and be running a Windows 10 Insider Preview build from the beta or dev channels.


WHAT??????

I always thought it would be enabled on Windows 10.


I love WSL1. With WSL2 and the move to Hyper-V, I just decided to drop WSL and manage my own VM manually.

I use the remote development tools on VSCode that power WSL to make everything feel like it's running on the host directly, just like WSL2.

This way I don't have any confusion born from it pretending not to be a VM. No issues with network port mappings, dns stuff, who owns what binary and how does it execute? When is WSL2 running and when is it not? etc etc

Ultimately, I would rather run Linux on my desktop as a daily driver but the desktop experience is not quite there yet for me to live (at least) 8 hours a day in. Gnome 40 looks great though, can't wait too see what the future holds.

My dream is running Linux as a daily driver where the UI is more polished than MacOS. Windows will run in a VM and will be used for API calls for playing games (I wonder if we will ever see some kind of GPU sharing for virtualisation).


What's missing from the Linux Desktop experience for you?

I currently use all three operating systems on a weekly basis; Ubuntu with i3 (a tiling window manager) is my daily driver. Every time I switch to Windows or macOS, it always feels like a downgrade for me because I can't replicate the tiling experience. The one thing I love about the Mac is the trackpad experience; I wish Windows and Linux could get to that level.


1. Update glibc and everything breaks

2. Update Nvidia drivers and everything breaks

3. 99% of laptops have at least one device with missing or broken drivers. 802.11ac is very old in 2021, but the most popular ac chips still need an out-of-tree driver which will break when you update your kernel. Have fun copying kernel patches from random forums

4. Even the smallest of changes (ex: set display scaling to something that's not a multiple of 100%) require dicking around with config files. Windows 10's neutered control panel is still leagues ahead of Ubuntu's settings app. (Why are the default settings so bad? High DPI 4K monitors have been out for a decade now. Maybe they've fixed this since I've last checked. Or maybe all the devs use 10 year old Thinkpads)

5. Every config file is its own special snowflake with its own syntax, keywords, and escape characters

6. Every distro is its own special snowflake so it takes forever to help someone unfuck their computer if you're not familiar with their distro. Releasing software on Linux is painful for a related reason: the kernel has a stable ABI, but distros don't. You have to ship half the distro with every app if you want it to work out of the box. Using Docker for GUI apps is insane, but sometimes that's what you gotta do.

7. The desktop Linux community seems to only care about performance on old crappy hardware.

8. Audio input and output latency is really high out of the box. That's one of many things that require tweaking just to get acceptable performance.


That hasn't been my experience at all - except for the graphics card issue where there is really some work to be done before the specific driver for the card works.

For everything else Linux is actually a lot easier to work with than Windows - where you may need to download specific drivers for it to work.

As for your comment about config files - the same is the case with Windows where instead of config files, you may need to tweak registry keys.

If you don't mind the default settings, it is about the same experience on Linux and Windows. When you start tweaking things, then your mileage will vary.


> As for your comment about config files - the same is the case with Windows where instead of config files, you may need to tweak registry keys.

This is a pretty bad comparison. Here's a list of things I had to use config files (or terribly documented randomly downloaded CLI apps) for in Ubuntu but have a simple GUI (or just work) in Windows:

- Mounting a network drive

- Adjusting my touchpad behaviour (whether to suppress it on keyboard entry)

- Restarting my network adapter

- Wipe my SSH credentials

The last time I touched Windows registry was to disable automatic USB device connection, but that's only because I don't have group policy on this machine.


I've never had to deal with anything as brittle as xorg.conf on Windows.


1. My experience with Windows 10 is that there's a considerable chance it will break after an update and refuse to boot, requiring me to reinstall it. With Linux, even when using bleeding-edge Arch, things rarely break, and when they do I can see WHY and then FIX it.

2. Not a problem with AMD at least.

3. Mileage may vary

4. Ubuntu is a joke. Try Manjaro.

5. Learning curve is steep, but you install Gnome or KDE and have a proper desktop environment that doesn't get in your way or show ads, without touching a single configuration file.

6. Mileage may vary, but most distros are Debian, Arch or Fedora based, and environments are not as fragmented as you make them sound. Also there is Flatpak and others as an alternative for building and distributing desktop applications.

7. I'm playing games with decent performance on a GPU that was released last month. Also, it's only crappy if it doesn't get the job done, and it is good that the community cares about long-term support better than this trend of disposable software and planned obsolescence.

8. Never had this problem.


1. shrug that hasn't been my experience. You also mentioned blue screens in another comment, but I've only seen one blue screen in the last decade and that one was my fault (RAM timings too tight; Surprised it lasted long enough to run a benchmark).

2. Like I told the other guy: "Yeah I'll just trade in my RTX 3090 for something that has half the performance and doesn't support tensorflow"

4. More fragmentation? If the most popular distro is a joke, why would I try something even more unstable (rolling release)?

6. Flatpak and snap are definitely progress. When more people can run identical binaries, I suspect a lot of these problems will go away. There will still be issues with administration (why are there so many ways to configure networking?), but yeah it's progress.

7. Why would I want "decent performance"? I want low latency (min 300fps rendering) while running at max quality

8. Then you've never measured it. Try listening to your own voice using software audio loopback. You'll find that the default audio latency is greater than internet latency within the US. That leads to people interrupting each other in video calls and callouts being late in co-op games


4. Stability has nothing to do with reliability. I had fewer issues with Arch Linux than with Ubuntu LTS and I hear people say the same thing. Even Windows 10 is a rolling-release OS. If you want a ready-to-use stable OS there's Manjaro, where repositories are purposefully a bit behind Arch.

7. Decent is 1080p@60Hz, which is what most people play at. What you want is extreme performance. Honestly 300fps at 4K is just absurd even on Windows unless you're playing an old game, as the CPU becomes a bottleneck, and why would you need it? Do you have a 300Hz display?

8. Never measured it, but trying pw-loopback I don't see how whatever latency there is can be a problem even for competitive gaming.


At least for your glibc / nvidia concern, I feel like if you stick to the binary packages, at least on Ubuntu, it's hard to break. Recently I've been trying to do all the gaming I can on Linux with Proton, and at least for single-player games, it tends to work pretty well. (MP is still a big issue due to the current state of anti-cheat)

Audio latency is one struggle I agree is still pretty bad. I had a pretty good time getting SteamVR running on Linux with Beat Saber running via proton, but tuning the audio latency was a big pain.


1 why/how would you break glibc? This sounds like you're "holding it wrong"? You don't go willy-nilly changing system dlls on windows, do you?

2 yeah, nvidia does not support linux/Foss. It sucks. Buy amd, or use windows.

3 I've not really seen this lately. Ubuntu/canonical does a pretty good and pragmatic job of enabling non-free drivers - but sure, not all vendors care. See 2.

4 not with eg Ubuntu 20.04 lts Wayland. I'm rather positively impressed with the settings app. I'm not all that happy personally with all the dbus/changes that enable this pretty gui - I tend to prefer simple config files. But at any rate - I don't think your point 4 has been true for a while.

5 yes and no? Honestly I think the only config files I'm editing is nvim and vs code. My terminal for example has a perfectly usable gui preferences coupled with a perfectly readable and version controllable config file.

Well and the occasional bash rc tweak.

6 maybe?

7 I think it's more that hw vendors don't care about Linux, so it takes a lot of time for drivers etc to materialize.

8 Hopefully pipewire will kill pulse audio, and this will cease to be the case.


1. sudo apt-get update && sudo apt-get upgrade

2. Yeah I'll just trade in my RTX 3090 for something that has half the performance and doesn't support tensorflow


1 when was this? On Ubuntu lts or Debian stable?

2 if a vendor makes good hw, but poor drivers for your os - that really is on the vendor, not the os?

Still, I agree it's frustrating that nvidia is unwilling to commit to Foss/linux.


1. Ubuntu 18 LTS

2. Meh, I never said it was on the OS. I'm just listing some reasons why it's impossible to use Linux on my personal desktop. Believe me, I've tried.


> Ubuntu 18 LTS

You updated system glibc and system packages broke? Or third party packages?


Sorry for being a bit of a Nix shill, but NixOS aims to solve 1, 2, 5, and the latter part of 6.


1) This should not occur. You do not update glibc on a stable desktop environment. If you use rolling-release distributions, yes, shit will hit the fan (even though proponents/fans of such will deny it ever happening in their X years of Arch or w/e usage).

2) Don't use Nvidia; use AMD or Intel with FOSS drivers. I know this sucks if you don't have the option (it's a package deal, or you already bought it).

(I've only bothered to address the first two. Doesn't mean you're correct on the other 6.)


1. Yeah it "should not" happen yet I've experienced it on a relatively sane distro. Btw "Update glibc and everything breaks" is a quote from Linus Torvalds. He's a newcomer on this scene, but he makes some good points: https://youtu.be/5PmHRSeA2c8?t=588

2. No thanks. I'll stick with my RTX 3090 for my personal desktop and 2x2080Ti for each of my servers. We both wish there was another vendor that supported tensorflow with reasonable performance, but you know that's not the world we live in.

3-8: Well you better address them then


Yeah, "Update glibc and everything breaks" is a quote by Linus Torvalds on Debconf 2014. But why? Do you know? I do. Because this very thing happened on Debian Testing. Not Debian Stable, Testing. That's akin to a rolling release like Arch. Debian GNU/Linux is a sane Linux distribution, as long as you run Stable. If you run Testing or Unstable shit is going to break, just like Arch or Nix or .. (at least Nix got good rollback support to mitigate it)

I'm happily using a Vega56 and can achieve decent performance on 1080p and 1440p on Linux. Raytracing is a gimmick to me, tear-free gaming not (but I got FreeSync working). I don't work with Tensorflow or Hashcat or such, I'm talking about Linux desktop gaming here. There's no need to use Nvidia there unless you need exotic stuff like raytracing. The AMD Radeon Navi series (5000 and 6000) deliver excellent performance.


If you are going to cite a joke, cite this: Windows, you do anything and the screen goes blue. Problems exist and they need to be addressed, but Linux did not stand still, and the scenario is not the same as it was 7 years ago.


This hasn’t been my Ubuntu experience at all. I’ve used it for years


It's funny you mention tiling because honestly one of the biggest things preventing me from moving to Linux full time is the lack of any decent window manager like https://docs.microsoft.com/en-us/windows/powertoys/fancyzone... where I can arbitrarily draw/resize zones and drag-and-drop/snap windows onto them. Every window manager I've used in Linux, and every GNOME extension, tries to force you into non-resizable areas or areas of equal proportions, doesn't support snapping, etc., and needs serious foo just to move a window about. If anyone has any suggestions I'd love to hear them


Powertoys is amazing, it allows you to run your Windows machine more like Linux/Mac.

Awake is like Amphetamine on macOS.

FancyZones is like a tiling window manager.

Keyboard Manager allows you to easily rebind a key such as Caps Lock.

Power Rename allows you to rename based on regexp.

Powertoys Run is basically a light version of Alfred.

File Explorer addons allows you to for example preview Markdown (.md) files.

Color Picker saves me the hassle of making a screenshot and then uploading it to a website to figure out the hex value (or having to open up a bloated image editor). Useful for (even simple) image editing, web design, etc.

I'm sure I missed some utilities but these are the features I use.


Microsoft Word. That's literally the only thing which stops me from installing Linux on all of my friends' and family's aging laptops. LibreOffice just isn't capable of 100% replicating the formatting of a docx generated by Word, which for most people is a deal-breaker. (For me it's not, but for non-techie associates it is.)

I just don't understand why there's not more pressure on MS to port Office to Linux in exchange for them extracting so much value from Linux for their proprietary operating system.


> I just don't understand why there's not more pressure on MS to port Office to Linux

Why is this hard to understand? Who uses Linux on the desktop? Check some browser stats.

Run a remote Windows in a cloud and throw Office in there.


> LibreOffice just isn't capable of 100% replicating the formatting of a docx generated by Word

I mean even different versions of Word can't do that reliably...


How much of a tinkerer are you? Try this: https://github.com/Fmstrat/winapps


I really do not want MS office anywhere near my Linux machine.

I wish that Microsoft's products fully supported open document formats and standards.


CrossOver has been working for nearly 20 years on Linux, and Microsoft Word has always been one of their priorities.


For me, reliability. Literally I have to futz with it so often. I moved from a Mac recently to a T495 thinkpad with Radeon in it which is supposedly fully supported by Ubuntu. Turns out that depends on what day of the week it is. Constant GPU problems.

I have considered switching back to windows.


There was recently some kind of fundraiser to hire a developer fulltime to make the trackpad support on Linux closer to macOS.


Last time I checked there was literally one person maintaining trackpad support on Linux; no wonder it needed work!


> I wonder if we will ever see some kind of GPU sharing for virtualisation

Hasn't this been solved for some time now? Or am I misunderstanding what you want? I remember seeing a few articles about using this for gaming on Windows VMs on Linux hosts. I haven't jumped in though so maybe it's not what I think it is.

https://wiki.gentoo.org/wiki/GPU_passthrough_with_libvirt_qe...

I think it's also definitely in use for compute stuff on VMs in professional environments.

https://docs.microsoft.com/en-us/windows-server/virtualizati...

Edit: Ah, maybe sharing a GPU between a host and VMs isn't possible at the moment. Only pass-through and maybe sharing between the VMs?
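For reference, a compressed sketch of what the Gentoo guide linked above walks through - whole-GPU passthrough rather than sharing. The device IDs below are placeholders, not real ones:

  # kernel command line: enable the IOMMU and claim the GPU for vfio-pci
  # (or set the ids via modprobe.d if vfio-pci is built as a module)
  intel_iommu=on iommu=pt vfio-pci.ids=10de:1234,10de:5678
  # then hand the PCI function to the guest, e.g. with QEMU/libvirt:
  qemu-system-x86_64 ... -device vfio-pci,host=01:00.0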


I could be wrong but I think you can only pass through an entire GPU, and only as long as it's unused on the host.

Sharing a used GPU, in the same way we can share a CPU with no performance loss via hardware virtualization (Intel VT-x / Hyper-V), is not possible.


Sharing a CPU with no performance loss isn't possible either. The amount of CPU loss varies dramatically by the exact virtualisation stack used and by the application running on top of it, but it is very clearly observable.


Are you sure? I am not sure if Geekbench is a valid benchmark but I have an AMD 5900x.

Running it in Windows (host) I have a lower score than I get in VMWare running Ubuntu (guest).


Yeah, just edited my post with that point. I think you're right.


Intel GPUs can be shared - it's called GVT-g (expensive AMD and Nvidia cards have similar abilities). It isn't trivial to set up at this point, but it sounds like it works.


Nope, Nvidia will never let you pass PCIe virtual functions to VMs. They'd rather charge 7x the price for data center GPUs with that capability.


I have GPU-P working with Hyper-V on an Nvidia 3070. It is strange though, as the guest OS sees the 3070 and some software works well (e.g. FurMark), but you can't install the Nvidia Game Ready installer (it doesn't see a GPU). Otherwise, you are correct.


Again: don't buy Nvidia. This is one of the many reasons why you should consider avoiding this brand (for GPUs). I even gave them the benefit of the doubt with their Nvidia Shield (bought two Pro 2020's) only to have them shove ads down my throat via a software update.


That's not NV related sadly, but applies for all Android TV devices. It's beyond their control...

And for PCIe VFs for virtualized GPUs, AMD does _not_ provide it either outside of their datacenter line.

Note that if you just passthrough the whole GPU, that works out of the box on NVIDIA cards.

(btw, for some NV cards, there is: https://github.com/DualCoder/vgpu_unlock to crack that protection)


Again: You don't have a choice.

Which AMD GPUs support VFs? Which ones support ROCm? The older AMD consumer GPUs did, but the newer ones don't. They prefer to charge extra for the data center GPUs just like Nvidia.


If your only experience with GNOME is Ubuntu, give Fedora a try. The parts are 95% identical but (in my opinion) the user experience is a lot more coherent, because it's all designed together and nothing's bolted on.


I don't understand the claims that the "desktop experience is not there yet". What do you even need that's so specific? Care to explain? I tend to think at that point it's more of a people problem than a Linux desktop experience problem


It's not _functionally_ problematic, just aesthetically.

Mostly it's just that I never go a full day without performing some kind of maintenance on my Linux desktop environment to try to get it looking right, and it never looks right.

I also run multiple monitors, so that aspect tends to throw in some extra jank.

At this point, running Gnome completely unmodified has been the most success I have had - but even Gnome 40 has some questionable design decisions.

Windows is a poor example of polish, but I feel there's a lot we can learn by looking at macOS's desktop environment. Having used each for significant amounts of time (years) and currently living on Windows, macOS really takes the prize for the "nicest place to live but worst landlord".


I go years without "performing some kind of maintenance on my Linux desktop environment". I can't even imagine what you would even be trying to fool with, and it not working.


About your last point, this was posted a couple weeks ago on here https://news.ycombinator.com/item?id=27870399


>> Weston is the Wayland project reference compositor and the heart of WSLg. For WSLg, we've extended the existing RDP backend of libweston to teach it how to remote applications rather than monitor/desktop. We've also added various functionality to it, such as support for multi-monitor, cut/paste, audio in/out, etc...

Did they push those changes upstream? This seems like it could be another way to run GUI apps in containers on Linux too.


https://github.com/microsoft/weston-mirror

I could not find any reference to it upstream or mention in the mailing lists.


You can already run GUI apps in containers using pure Wayland - just bind the socket into the container.
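A minimal sketch of that with Docker, assuming the host compositor's socket is wayland-0 and that some-gui-image / some-wayland-app stand in for whatever you actually run (you may also need --user so the socket permissions line up):

  docker run --rm \
    --user "$(id -u)" \
    -e XDG_RUNTIME_DIR=/tmp \
    -e WAYLAND_DISPLAY=wayland-0 \
    -v "$XDG_RUNTIME_DIR/wayland-0:/tmp/wayland-0" \
    some-gui-image some-wayland-app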


> just bind the socket into the container

I thought Wayland relied on shared memory with the compositor to work? I could be way off though


Containers run under the same kernel, so sharing memory works the same as it does for processes generally.


Likewise with X, if you pass it the socket (although there are caveats around everything beyond just X11 - audio, for example, has to be routed out separately)
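Same idea for X11 - bind the socket directory and pass DISPLAY through (plus xhost or a shared Xauthority for authentication; the image name is just a placeholder):

  docker run --rm \
    -e DISPLAY="$DISPLAY" \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    some-gui-image xclock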


This was my exact question, as much as I hate Microsoft and Windows (15 years of using linux now in my brief 32 years on the planet).... this could be the project that pushes Wayland to fruition finally. It could also significantly improve GUI support in general.

I guess getting the right thing for the wrong reasons is better than not getting them at all? I'm not a very good pragmatist.


The README mentioned that they have their changes in a separate public GitHub repo and that they plan to upstream it


This is undoubtedly cool but I'm curious to know of a use case that would warrant installing this. Could this just have been a step in creating "Windows Subsystem for Android" [0] that they decided to release as its own layer?

The screenshot on the github page shows VSCode, Edge, Blender, Xcalc, Xclock and GNOME file manager which are all either available natively on windows or redundant.

[0] https://www.xda-developers.com/wundows-subsystem-android-ben...


Accessing the Windows filesystem from WSL and vice versa is extremely slow, so running for example your IDE in WSL and having your code etc. stored in WSL is useful. I think that is one of the big usecases. It's already kinda supported in vscode, where it runs a vscode server in WSL and Windows just runs the frontend.

It's useful for me when developing dotnet intended for Linux as I can store the code in WSL and be able to build, debug, run docker and so on directly from vscode.
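Concretely, assuming the Windows-side VS Code has the Remote - WSL extension installed (the path is just an example), opening a project from inside a WSL shell is just:

  cd ~/src/my-project   # a path inside the WSL filesystem
  code .                # launches the Windows frontend plus a VS Code server in WSL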


Why is it so slow? I use VMware player and swapping files is fine. WSL2 seems like it's just using a lightweight VM over HyperV... is Hyper V really that much worse than VMware (and VBox) in this?


WSL2 uses 9P (of the venerable Plan 9 pedigree, I assume because Erick Smith is a nerd!) to remote the filesystems. I don't think that's part of HyperV (though it could be a custom VSP/VSC, and that channel is ridiculously slow.)

Personally I haven't run into slowness other than Windows sucking at copying lots of small files, which is NT's fault for allowing FS filter drivers and their ridiculous locking scheme.


Are you talking about WSL1 or WSL2? WSL2 is much, much faster due to having a real virtualized Linux kernel running


Accessing the WSL filesystem from within WSL is indeed a lot faster on WSL2. Accessing the Windows filesystem from WSL, or vice versa, is even slower in WSL2 compared to WSL1.

https://docs.microsoft.com/en-us/windows/wsl/compare-version...

> As you can tell from the comparison table above, the WSL 2 architecture outperforms WSL 1 in several ways, with the exception of performance across OS file systems.


WSL2 disk access from the Windows side is very slow. It's the reciprocal problem of WSL1.

WSL2 Linux apps now get proper performance, but if your IDE is on the Windows side, access times to project files on the native Linux partition are terrible.


Actually, VS Code for example has a really great split backend, so you can have the frontend "properly" on Windows anyway, and yet still use WSL2.

My own case was something different: annoyingly configured automated browser tests with Cypress. Just running them inside WSL and letting that start a browser on the distro itself was the most comfortable way to debug these.


System performance is, IO between Windows and Linux isn’t.

https://github.com/microsoft/WSL/issues/4197


But I mean come on, they're Microsoft. If they really wanted filesystem access to be efficient across systems, we'd have it by now. Although convenient, I doubt that's on their list of primary motivations for doing this.


>But I mean come on, they're Microsoft. If they really wanted filesystem access to be efficient across systems, we'd have it by now.

Here's one of the developers saying it is hard, back in the WSL1 days.

https://github.com/Microsoft/WSL/issues/873#issuecomment-391...

The reason makes sense to me, but I'm not an expert. Maybe you could expand on why you think they could do it but chose not to?


Seeing how every filesystem (NTFS) improvement since Windows 7 (like new compression methods) is done in layers running on top of NTFS, I think that Microsoft has either lost the source to the NTFS driver or the institutional knowledge of how it works.


The real answer is that they're building filters on top of NTFS because that's precisely how it was designed to work.


GP's explanation was almost as plausible, and much funnier.


WSLg doesn't seem to have much overlap with Windows Subsystem for Android (although WSL itself does). Android doesn't use Wayland, or generally any of the GUI stack that desktop Linux does.

Pretty sure the point of the WSL and WSLg projects is to lure developers who would otherwise use macOS. After all, your local environment is likely even closer to production using WSL than it is on the BSD-derived macOS userland and Darwin kernel. Actually, early on in its life macOS (poorly) supported X11 apps using XQuartz as a similar lure.


WSL was the rebirth out of Project Astoria failure.

https://mspoweruser.com/windows-subsystem-for-linux-started-...


I can think of one niche use case: Houdini is Linux/Unix-native 3D graphics software which I would prefer to work with in a Linux environment, but on Windows, so it sits alongside all the other applications in my workflow. I imagine there is software in other fields with similar situations.

Note there is nothing wrong with the Windows version of Houdini, it's just Linux is more suitable for it.


Hardware drivers for new machines? As in, Windows supports all the hardware in your machine, but Linux doesn't (yet).


I've been using this for several weeks on Windows 11 insider builds and it's great!

For those comparing this to X forwarding: at least for my purposes, I've found X over a socket very limiting in that remote OpenGL basically stops at version 1.1. With WSLg my apps get OpenGL 4.5 via Mesa, meaning they actually run. I haven't even tried with the vGPU driver yet and it's already a very nice improvement.

Would be even nicer if PCIe device assignment wasn't locked behind Windows Server licensing however.


> Please note that for the first release of WSLg, vGPU interops with the Weston compositor through system memory. If running on a discrete GPU, this effectively means that the rendered data is copied from VRAM to system memory before being presented to the compositor within WSLg, and uploaded onto the GPU again on the Windows side.

This is a pretty big limitation. Hopefully it can be addressed soon.


It's not amazing but a quick calculation says that a full 1080 screen will generally transfer in just under half a millisecond.
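(Back-of-the-envelope, assuming an uncompressed 1920x1080 RGBA frame: 1920 x 1080 x 4 bytes is roughly 8.3 MB, and at a conservative ~20 GB/s of system-memory bandwidth that copy takes roughly 0.4 ms - in line with the figure above.)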


If that's the only loss of performance, that sounds amazing compared to running anything under Wine.


If only Wine had the full source code of everything Windows available, the way Microsoft has the source of everything Linux


??? I game on Wine all day and it's sometimes faster than Windows.


Yeah, an extra DtoHtoD in most applications is pretty bad. Here it should be good enough to get the feature up and running, and I imagine it’s something they’re planning on optimizing.


This talk (from November last year) discusses a pretty detailed design for a zero-copy architecture. Sounds like it will be addressed soon. https://youtu.be/EkNBsBx501Q?t=1438


WSL seriously changed the amount of work I can do from my gaming PC, but I’m not sure if that’s actually a good thing based on my productivity over the past few months.

That aside it’s terrific to see MS putting something good into Windows rather than just removing things and taking choice away from the end user.


> That aside it’s terrific to see MS putting something good into Windows rather than just removing things and taking choice away from the end user.

That's the tip of the iceberg MS want everyone to see, and point the finger at.

MS want people to stop using Linux as an alternative, since they lost the battle when they attempted to kill it during the Ballmer era. The plan now is more subtle: make sure everyone using Linux will want to do that from a Windows machine, with all the implications that has for security and privacy. Those would be nonexistent, since any malware (or Windows itself) that, for example, used Windows keyboard drivers to sniff passwords while one connects to the bank under WSL "because it's more secure" would be 100% undetectable from that Linux.

The next step will be libraries to access Windows internals and GUI from WSL, so that one can build hybrids that run only on Windows+WSL; very convenient, but unfortunately now Linux is displaced and the only way to benefit from all that software will be to run it under Windows. In the end, MS will create their own Linux distro which runs on top of Windows and will essentially kill all other non-server oriented Linux distros.

Most see WSL as a good thing; I see an elaborate and, I have to admit, very clever way to take complete control of Linux in the coming years.


So the Windows Subsystem for Linux GUI (WSLg) business plan looks like this, I assume:

Step 1: get lots of devs using WSL / WSL GUI.

Step 2: Get them comfortable with flexibly using WSL GUI on Linux and Windows interchangeably

Step 3: roll out your poison pill: new Version X, offering great compatibility on Windows but bad integration with Linux; maybe Linux support is buggy or nonexistent, maybe the API doesn't mesh with Linux systems at all, maybe it has license conflicts and Linux has to do a rewrite to be FOSS or write a hacky FOSS shim. Whatever creates the most pain for Linux / FOSS users.

Step 4: Stuck with being tied to WSLg, Developers go to the business and say "either we have to spend a lot of time fixing Linux issues or we buy Windows licenses" at which point the business happily buys Windows and Office 365 volume licenses and keeps going.

Step 5: Microsoft maintains its monopoly for another 10 years.

The "I want to stay independent" workaround is (I assume) writing API layers that can serve "thin GUI clients" on multiple platforms (I guess like Electron or a regular web application or something.)


> Whatever creates the most pain for Linux / FOSS users.

Not necessarily to this level of malice.

Many (most?) Linux users, especially new ones, also use Windows, so all it takes for MS to convince them to use only Windows+WSL - rather than, say, a dual boot machine or two machines - is something that just integrates the two systems, so that most users will feel more comfortable running everything under Windows.

The killer product in my opinion would be something that allows accessing to Windows internals and GUI from a Linux program (imagine "/usr/bin/excel", a port of Excel that works only under Windows+WSL). Those functionalities would be offered by something that "pure" Linux distributions could not offer, including WINE, since we're talking about the full OS and not an API translation layer. Once users and developers are accustomed to it (many devs already develop under WSL) we'll reach the point in which the two worlds will fork in favor of Windows: what is developed under Linux will also work under Windows+WSL but not the other way around. That would probably be the moment MS will introduce their own Linux distribution (advertised as the only one that can take full advantage of "most recent Linux developments") that under the hood could either be normal Windows+WSL, or a different one containing a "hidden" Windows blob allowing developers to run native Linux, hybrid Linux+Windows, and possibly native Windows apps, either free or a lot cheaper than Windows+WSL.

If this happens, most Linux users, especially desktop ones, would rather go back to Windows than, for example, stay with Ubuntu+WINE. Server, embedded and other smaller niche users will be the exception, but Linux is in serious risk of losing all other users.


The things that make me not like Windows are its aggressive Windows Updates, lack of real security, and the way Windows regularly ignores settings. Steam automates configuring Wine for you and most games just work. But for work? There's no reason to use Windows anymore unless you have some proprietary software that runs only on Windows.

Btw the whole "access Windows internals and GUI" from Linux already works. You can just run Windows commands from bash, and of course make scripts using those commands, so essentially all Windows command prompt commands are available to you now.
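A few concrete examples of that interop from a WSL shell (it's enabled by default):

  explorer.exe .                     # open the current directory in File Explorer
  cmd.exe /c ver                     # run a classic cmd.exe built-in
  powershell.exe -Command Get-Date   # call into PowerShell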

But remember it's Windows. You can set all the registry settings you want but it will still sometimes ignore your settings and just do what it wants.


> MS want people to stop using Linux as an alternative

Linux desktops as they currently stand aren't even close to a threat to Windows, this is Microsoft using the Linux userspace to get developer mindshare back from OSX.


Windows Terminal + WSL is good, but I think the native terminal in macOS is still much better to use. In the Win 11 beta the ease of use is pretty similar, but the Unix terminal is just better integrated into macOS. Probably my favourite terminal command is `open`, which on macOS lets me open any file in the associated application. I haven't found the same for WSL yet.


`<filename>` from Windows, or `explorer.exe <filename>` from WSL. As simple as that.


Hah, I was kinda hoping someone would know a trick I just hadn't found yet! Thanks for this. I set an alias so `open='explorer.exe'` which saves me some typing and muscle memory.
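If you want the alias to also handle paths outside /mnt/c, a small sketch using wslpath (which ships with WSL) to convert the Linux path to a Windows one first:

  open() {
    explorer.exe "$(wslpath -w "${1:-.}")"
  }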


If you use powershell

  ii filename


I personally think Windows' user interface is way more buggy/slow than the new GNOME. The latter has a properly smooth overview window (with gestures), while Windows' flickers on a much better laptop...


That and most corporate Windows users simply can't switch to the OS of their choice due to rigid IT department rules.


I wonder how many people who rely on Linux have truly moved to WSL? I gave it a try and I find real Linux vastly superior. Hybrid Windows+WSL apps, what is the point? Developing Windows apps with .NET is very easy and pleasant.

Don't get me wrong, I agree we should be concerned about WSL. But I think it's also very possible Microsoft recognizes developers that aren't developing with MS tech (ie anything but .NET these days) pretty much never use Windows and they are trying to patch that gap.


I'd believe their target isn't users who are already using Linux, but users who will be using Linux for the first time going forward. Think of all the university students who made Linux the system to develop on; they will now make Linux-on-Windows that system. Eventually, WSL will take over bare-metal Linux because it was more convenient to get started with, technical superiority notwithstanding.


I've been a Linux user since 1995 and run Windows + WSL2 on my desktop machines. It's not too deep and pretty similar to why so many folks were drawn to Macs; a no brains just works GUI with the ability to launch a terminal and do real work on a UNIX-like system.

I can use a single machine to do everything I need, without rebooting and without making sacrifices.

I can watch Netflix and play games, without needing to write a f'n shell script to fix the screen tearing present in the nvidia driver - or realizing a particular game has quirks or doesn't work in Proton, so I just have to throw up my arms and say "Well, I guess we just don't play that game".

I can pop open a terminal anytime and have access to a real Linux system, as opposed to the faux "uncanny valley Linux" solutions like Cygwin and Git Bash that seem to work until they don't. And unlike a traditional VM there's no management involved; I open the terminal when I need it and close it when I'm done, just like a normal application.


I don't think that's it. It targets frustrated Linux devs that are tired of the Gnome/KDE/X/Wayland wars, and of things breaking if you don't do things just right. I spent 2 years on pure Linux and I switched back to Windows + WSL2 and I'm pretty happy. I use it for work and personal use. It gives me the apps and usability of OSX, the Linux console, and none of the headache. Maybe I'm just out of the honeymoon phase, but I'm tired of constantly tinkering with Linux to get it to work - to install the right tool, deal with outdated stuff in package managers, etc.


For me it's a far superior experience than the alternative which is macos. Instead of battling a close but frustratingly different OS than my server target, I get to run the exact same OS. So a bunch of pain vanishes.


That seat that Microsoft have on the Linux Foundation board of directors will come in useful for that.


I just might switch to linux instead.


You can always fall back on the Linux Subsystem for Windows GUI, aka WINE/DOSBox/VMW/VBX/QEMU, with varying levels of integration/fiddle/config.


Varying levels of integration, with fiddling being required less and less (e.g. Proton), and with a much higher level of privacy. Also, I don't want my computer to feel like a billboard for Candy Crush Saga.


Sorry, your reply was missing a sub-legible customer experience improvement key. Please re-authorize at your nearest _Subway_.


Most corporate PC users do not have that option. WSL solves that.


This is the most plausible explanation. Since most of the newer dev frameworks use very Unix-like CLIs, Windows as a dev operating system was feeling very left out.

VSCode made this painfully obvious.

Most corporates would just offer you a mac instead. WSLg is pretty cool, but if you have a choice, running ubuntu or debian is better, with emulation for whatever windows legacy stuff you need to support.


What dev frameworks?

The ones I use, work perfectly fine in Visual Studio and Eclipse, and Powershell does the rest.


As a WSL user at work, this addresses exactly zero of my problems with running Windows. Windows Defender is still going to randomly spin up and bring my machine to a halt. The Windows Start Menu with its ads is still there. There are still 15 pieces of device driver desktop experience crapware running in the background. The RAT required by our IT department is still resident. Adding GUI support doesn't move the needle: the missing feature is freedom.


> The RAT required by our IT department is still resident

This sounds like an organizational problem, not a technical problem. Have you considered applying for a new position with another employer that allows you to run Ubuntu/Red Hat/etc. environments?


While I do get the "dirty" feel of running Windows, and I fully agree that having crapware installed by device drivers is shady - the Start menu ads seem to be gone (along with the awful tiles) in Windows 11. I also rarely use the Start menu (PowerToys launcher, although it's been crapping out on Win 11 recently with focus and such).

I recently built a desktop since my MBP would turn into a jet engine if I tried to spin up a non-trivial docker compose, and I went with Windows because I wanted to game. I started on Windows 10 and was appalled at the amount of crap they had preinstalled and bundled with the OS. The Windows 11 beta has been much better in this regard.


What's a RAT? I've never heard of it, and I'm afraid it's not a googleable word. Remote Administration Tool? Something like Intel vPro?


They’re likely referring to a remote access trojan (malware) which likely shares many features of whatever administrative tool the IT team uses at that company.


Is there a fundamental difference between a remote administration tool and a remote access trojan?


Seems like roughly the same difference between a malware keylogger and say, X11. They both intercept all your keystrokes, but you wanted the latter one to do so and didn't want the former.

("But I didn't want the remote admin tool on my computer!" It's not your computer, it's the company's computer. And they wanted it there.)


It sounds like you really just need Linux and can skip Windows.

FWIW, all this WSL stuff is mostly Microsoft trying to Embrace, Extend and Extinguish Linux.


> It sounds like you really just need Linux and can skip Windows.

My point exactly! I've been making that case for a while and I think I'm gaining some traction.


I've had to work with macOS in the past, and I was simply honest about where my time went. If I wasted all day trying to make something basic work in it, I'd just say so in standup.

Eventually, my manager proposed paying for a VMware licence and I could run Linux on that.

I still had to suffer with multiple displays (the macOS host had to detect them before the VM could use them), but I finally started getting work done.


For me WSL has completely replaced dual booting setups.

There are still some quirks with integration such as certificates and more.

I really can’t wait for the GPU experience to be fully supported and also be able to use pytorch from within containers.

Microsoft is so far doing everything right with WSL IMO.


For me that happened already 10 years ago with VMWare.


You need a Quadro GPU to do that, right?


One could run Linux and use Wine to run some Windows apps. On the other hand, one can run Windows and use WSL to run some Linux apps. Depending on the individual's needs - which apps are important to have access to - the decision could go either way. This is why Microsoft is investing in projects like this: to tip the scales and lure more users to run Windows as their primary OS.

P.S. And then there is Google which wants you not to care about OS as long as it could launch Chrome browser, and to use web apps instead of native.


With WSL v2 , you can run Windows and all Linux apps.

WSL v1 had limitations but WSL v2 is a VM -- a more integrated VM than usual but still.


> With WSL v2 , you can run Windows and all Linux apps.

Can it run perf yet? I.e. are they passing through hardware performance counters from the host?


iptables doesn't fully work either (I was trying to setup a game proxy), and I suppose a few other things.


or just download Virtual Box or the free VMware Player. If you're on Linux, running Windows in "Unity" mode is already about as seamless as WSL2 w/ GUI appears to be.


This. I've been using seamless mode on VirtualBox for a decade (or more?), have a full terminal on Windows, and now I can run VS Code in my Linux VM while in a Windows desktop. I just use MATE with hide buttons enabled on the panel.

WSL1 with Windows Terminal works well for quick youtube-dl runs; WSL2 needs Hyper-V, so it's kind of a deal breaker for me as a VirtualBox user.


It's almost like they're trying to embrace the Linux ecosystem and extend it for windows users...


extinguish coming soon


Google has been doing this for years with ChromeOS and... it kinda works. It mostly works if you stick with their distro (Debian Stable - 1), but I've never been able to get their display forwarding tools to work anywhere else.

Seems like Linux is complicated enough without running it in a VM and forwarding everything up to the host OS.


You don't have to wait until Windows 11 for this nonsense. I am unfortunately locked into a Windows laptop for my current mission - my first Windows experience in years. As soon as I read up on WSL2 I installed it, and inside it installed the xrdp server and a corresponding lightweight environment, then just RDP into it (but every time WSL gets restarted, you'll have to do an /etc/init.d/xrdp restart). Although, I can't wait until I get to a sane environment again, without Windows and all the headache it brings me...
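Roughly the setup described above, assuming an Ubuntu-based distro and XFCE as the lightweight environment (you may need to pick a non-default port in /etc/xrdp/xrdp.ini if 3389 is already taken on the Windows side):

  sudo apt install -y xrdp xfce4
  echo startxfce4 > ~/.xsession
  sudo /etc/init.d/xrdp restart
  # then connect from Windows with mstsc to localhost:3389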


I haven't had any problems running all types of GUIs in WSL (1 & 2) through Xming for years now.


I recall running the X11 server that came packaged with Cygwin back in the 90s. Where there is a will, there is a way, for several decades now.


>I recall running the X11 server that came packaged with Cygwin back in the 90s. Where there is a will, there is a way, for several decades now.

And Cygwin still runs like a champ today. At least for my use case.


And before Cygwin was relevant, we had X-Win32 and Hummingbird.


I will be the curmudgeon here I guess.

Microsoft has been following this strategy for a while. Announce something useful. I click in and look forward to start using it. How do I download it and get started?

Oh, I need to install a beta version of Windows on my computer to use this feature. I don't want to do that.

Then weekly or biweekly new stories pop up and I hope it has had a proper release, but no, insider only.

I would prefer if MS announced "tool under development, to be released 03/25" and then "ready for production today", synced so that you can get the correct version of Windows to use the new tool/feature/framework.

It says it will ship with the next version of Windows, so it will ship in Windows 11 then?

I wish there was a meta tag on Hacker News that identified if a program, platform, airplane, city building, etc is vaporware, under real development or production ready / move in ready / book a flight ready etc etc.


Then who would test it?


I made the switch from full-linux to WSL about a year ago. I never used windows outside of gaming but I got tired of putting up with minor hardware/driver frustrations on my particular laptop (surface).

I'm happy to report I have encountered zero issues. In fact, I don't think I've even used it for anything outside of emacs/cli.

I am kinda surprised by this link since I've been using the x version of emacs this whole time.


Anyone else worrying about Windows swallowing up Linux?

I hope their intentions are altruistic.


For better or for worse, I'm not sure the aggregate direction Microsoft takes as a company is either altruistic or malicious, I'm pretty sure it's ultimately profit driven.

I expect there are MS engineers who see this as altruistic and executives that see this as a way to keep developers from moving off Windows and ultimately as customers. In the end, I'm not sure what this will mean for "desktop Linux". I've already had one colleague dump "bare metal Linux" in favor of Win10 + WSL2.


That seems like a very complex architecture. RDP client and server? That seems like a strange approach for a single machine solution.


It's because WSL2 abandoned the initial goals of WSL1 and just did a VM instead.

I wish MS continued evolving WSL1 instead of doing the VM approach but c'est la vie.


The amount of effort that went into WSL1, including the number of bug-for-bug changes involved, was tremendous. It blew my mind when WSL2 was announced, because the hypervisor approach was already possible (and in use) before WSL1 was announced, but MS made an explicit decision to do the extra work to make their own Linux subsystem for Windows the harder/better way… then gave up.


They were amazingly successful -- more successful than should have been thought possible -- but they couldn't overcome the semantic file system differences.


Is this documented anywhere? I’d be interested in reading more


Neither the Windows IO API nor even the low-level NTFS APIs map cleanly to POSIX semantics. It means you can't just forward calls from the subsystem to the IO stack; you need to actively marshal them to and fro. This, in addition to certain operations just being plain more expensive on Windows/NTFS (opening files, creating processes) due to different programming paradigms/approaches, gives a very high impedance mismatch that makes performant IO highly unlikely by nature for anything trying to run on top of the existing system rather than virtualized.


Yeah, the greatest feature of WSL1 was the fact that it wasn't a VM. All apps were running natively and managed by the Windows kernel.

I now have to deal with the fact that every so often the WSL 2 VM will simply consume too much memory, which really stinks.

WSL1 felt SO close to being perfect.


You can switch back and forth as required, from https://docs.microsoft.com/en-us/windows/wsl/wsl2-faq#what-w...:

What will happen to WSL 1? Will it be abandoned? We currently have no plans to deprecate WSL 1. You can run WSL 1 and WSL 2 distros side by side, and can upgrade and downgrade any distro at any time.
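Concretely, from PowerShell or cmd on the Windows side (the distro name is just an example):

  wsl -l -v                    # list installed distros and their WSL versions
  wsl --set-version Ubuntu 2   # convert a distro to WSL 2 (or back to 1)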


I get all that, but it's hard to imagine with the current naming scheme that you'll be doing much beyond the bare minimum support for WSL1. I have a hard time believing that new WSL1 features will land.

There's a giant chasm between "not deprecated" and "actively supporting".

For example, I'm guessing that running a docker container with WSL1 is something that will never happen.


Yea, but WSL1 has basically seen no new features or meaningful fixes since WSL2 was introduced.


>I wish MS continued evolving WSL1

I do too, but only from a techy POV. I think it was awesome they expanded their old POSIX APIs into a drop-in Linux replacement and wish it could have continued being expanded.


For me, it comes down to the fact that WSL1 apps were run like native apps. That was amazing. It meant I could kill a WSL1 app from Task Manager. It meant that those apps were only taking the memory they used, not an entire VM's worth of memory. It meant I didn't have to manage yet another virtual machine environment on my PC.

WSL2 is certainly the way to go if you want a more "true" linux experience. I just lament the fact that WSL1 came so close to being true enough.


From what I recall, an MS engineer explained why they gave up on WSL1 as being intractable issues with the way Windows and Linux interact with the filesystem, leading to major, unfixable performance issues.


Ironic statement considering how X was designed to work.


I posted this because I've used Linux GUI apps on Windows with WSL1 and an X server. This seems much more complex than that.


RDP is ready, tested and quite complete. Much better than designing a new protocol.


I've been using linux for 20 years and my one frustration is missing Powerpoint. I recently joined the world of education and they are Powerpoint mad. Hats off to LibreOffice but Powerpoints look like they have been dragged through a hedge backwards. Is anyone solving this problem on linux locally or maybe wsl is the way for me to go?


This is pretty cool. I've been using VcXsrv to run an X server on the windows side so that I can pop open gui Linux tools (setting the DISPLAY variable manually to {windows host IP}:0)
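For anyone doing the same on WSL2, the usual trick is to read the Windows host IP out of resolv.conf rather than hard-coding it (the X server on the Windows side also has to allow the connection, e.g. VcXsrv started with access control disabled):

  export DISPLAY="$(awk '/nameserver/ {print $2; exit}' /etc/resolv.conf):0"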

The two remaining glitches here are harmonizing file system support across both domains (it's fine if you are in the Linux domain and reach over and get or put files into the Windows domain, but the other direction has "issues"), and some sort of USB support so that devices can be handled in either domain easily[1].

Mostly I find it is an easy way to use my Linux work flow on a machine that for other reasons has to have Windows on it. Overall the impact is lower than it is if I run an actual VM.

[1] Recently discovered that a windows executable running on the Linux side can "see" the USB stuff so running dfu-util works from the Linux side.


Microsoft has so much ego that rather than simply making a Linux distro they insist we run Linux on Windows.


> simply making a Linux distro

It would take decades to migrate Windows users to a Linux distro.


How would a Microsoft Linux distro be different from, say, Ubuntu? Would it bundle WINE, perhaps with Microsoft insider knowledge of how their APIs work? How would they make money on Windows if everyone could just use that?


They probably know they can't stop developers from deploying to Linux. They're trying to keep the Windows desktop relevant.


They don't need to try while fighting 1% GNU/Linux desktop.

The whole point of WSL is to bring into Windows those devs that buy Apple hardware instead of supporting Linux OEMs, and who couldn't care less as long as they have their POSIX-like experience.


They don't insist, this is for the folks that should be supporting Linux OEMs but rather pay Apple.


This is cool and I'm not knocking it. Just wanted to point out the irony that back in my day there was lots of fear that MS would embrace, extend and extinguish Linux. How ironic would it be if this happened inadvertently because WSL got that good.


So does that mean DirectX will be fully available under any Linux distro at some point?


I love it how in the screenshot in the description you can see that the shortcuts on the desktop are mostly games. Now you know why the hypothetical user is sticking around using Windows as their daily driver in the first place :)


How is this better than just running vcxsrv and inside WSL setting DISPLAY=WINDOWS_HOST_IP:0? I've been doing this to run graphical linux apps for a couple years now, both on WSL1 and on a regular Hyper-V Linux VM.


DPI scaling (especially per-monitor DPI) is much, much better; it is actually handled better than on native Linux


With hardware acceleration and CUDA?


I am yet unsure how it is able to use "hardware acceleration" considering it is using RDP.


And audio?


It just works?


It seamlessly works with VcXsrv too after you set your DISPLAY. I've been using it for years too, even back in 2018 while running Sublime Text within WSL 1 as a primary code editor.

If you place your VcXsrv config in your Windows startup folder everything automatically works. I haven't touched my config in years. The machine boots and it's all good, complete with proper clipboard sharing so you can copy things to and from WSL.


I could never get VcXsrv to work with DPI scaling well. So many hacks upon hacks...


That's fair enough. I have a 2560x1440 display but I run it at 1:1 native scaling, everything updates quickly with no flickering or tearing. I've found in general using display scaling anywhere is always a questionable experience since not every app is developed to take advantage of it.


That's doable on 1440p, but IMO it's a dealbreaker on 4k.


Absolutely. That's why I bought and recommend[0] a 1440p monitor unless you plan to use a 32-36" 4k display which IMO is a bit too big. The 1440p monitor I use is 25" and feels just right because it requires minimal head / eye movement to see everything.

[0]: https://nickjanetakis.com/blog/how-to-pick-a-good-monitor-fo...


Fonts look much better on 4k screens with 2x scaling than on 1440p: https://tonsky.me/blog/monitors/

If you stare at text all day, 4k is pretty nice. The difference between 4k and 1440p is easily noticeable to me even on very small displays, like 15" laptops. I have a 1440p gaming laptop, and the less-sharp text is slightly annoying; I otherwise love it as a laptop, though, so I live with 1440p since it's a fairly small screen anyway. On larger displays like an external monitor I definitely wouldn't use 1440p.


From what I understand 200% 4k is the same screen real estate as 1080p. It's a trade off I suppose. I would much prefer 1440p on a smaller monitor (like a 25") which offers pretty good DPI. I can read HN's text at 1:1 scaling from around 3 feet away and I wear glasses, 4 feet is doable too but it starts to get blurry. In my day to day I stand a little further than an arm's length away.


I use it for two things:

1. Running Linux build tools that expect to pop up a browser for SSO auth (e.g. with ECS). These tools do the right thing if you install Chrome in WSL. Otherwise you need to copy the URL into a Windows browser, which is slightly inconvenient and for me sometimes inscrutably doesn't work (one workaround is sketched after this list).

2. Running UI dev tools that don't work properly cross-OS from Windows. E.g jconsole Java networking seems to be mostly broken between the host OS and processes listening in WSL so this is a workaround. Run the GUI client in Linux. JVM process discovery also works which is nice.
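One workaround for the browser case in point 1, assuming the wslu utilities are installed (I believe they ship with the Ubuntu WSL images): wslview opens a URL with the default Windows handler, and many CLI tools honor $BROWSER:

  sudo apt install -y wslu   # if not already present (Ubuntu-based distros)
  export BROWSER=wslview     # e.g. in ~/.bashrc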


There are tools that solve this. I think it's even called WSL Tools. It adds a few things to launch browsers on the Windows side with URLs from the Linux side, among other things.

Not sure about Java, haven't touched that in over a decade


I'd love to use WSL2 but the longstanding issue with slow I/O between WSL <-> NTFS host is a dealbreaker. It basically means you have to keep all your data inside ext4 in Linux and that defeats the whole point - e.g. you can't keep your code, or files you download etc in Windows.

With WSL1 you can keep all data in NTFS and have near-native speeds, but you can't run a bunch of Linux cmdline tools.

I don't know if they can solve it - it's basically sending data across the network. But this is the last barrier to a great Linux on Windows.


Does it really defeat the point? I see WSL2 as a Linux VM with less hoops to jump through. Sharing data with the host system isn't useful, since I don't have programs on Windows that need to communicate with programs in Linux, and I store everything in github/gdrive anyway.


I thought I'd have that problem but I just keep all my dev files in WSL2 now and I stopped using Intellij. VSCode has very fast WSL2 filesystem remoting.


Try IntelliJ Projector. It's awesome.


Ah yes, 9p2000. The network filesystem where caching is non-existent and left as an after-thought.


Maybe go the other direction. Add an ext4 disk to the VM and then access it through \\WSL$\
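A sketch of that direction, assuming a recent WSL build that supports wsl --mount (run from an elevated PowerShell/cmd prompt; the drive number and distro name are examples):

  wsl --mount \\.\PHYSICALDRIVE1 --partition 1 --type ext4
  # the disk appears under /mnt/wsl/ inside the distro, and from Windows
  # via \\wsl$\Ubuntu\mnt\wsl\...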


WSL2 is essentially just... Windows + Linux. I tried it and it is awesome. Cannot wait to see further progress that comes out of this. I really cannot leave Windows to be honest. The network effect is too strong. Coupled with recent Microsoft effort such as Visual Code, it's looking like they are doing nothing but heading in a better direction than in the old days. Who would have thought. Would you believe it if anyone said this to be the case, 10 years ago?


> I really cannot leave Windows to be honest. The network effect is too strong.

I suspect whatever's keeping you on Windows isn't really the network effect. It's usually: comfort level/personal preference, or a set of software that vendor(s) can't/won't port to another non-Windows platform.

The fact that so many applications have been rewritten as browser-accessible services has liberated me. I haven't owned a system with a Microsoft OS since ~2004 or so.


Don't forget corporate policy. I would do all of my work on Linux, except I'm barely allowed to.

Tools like Teams and Outlook are also just not as good on Linux, and they're really important for work.


> or a set of software that vendor(s) can't/won't port to another non-Windows platform.

Is that not the network effect? I.e. everyone uses Windows so developers target their software for Windows, so everyone uses Windows?


I was also amazed with WSL, it genuinely made me think I didn't have to leave Windows anymore. It is honestly one of the best products Microsoft has launched recently. The development tools division of Microsoft is on fire and should be commended.

The Windows division is another story though. With all the Windows 11 news I decided to give desktop Linux a spin for the nth time in 20 years. Installed Manjaro and I'm extremely impressed. Even though I have Nvidia graphics everything is buttery smooth, all my productivity tools are there, setting up my VPN was far easier than Windows, and even more amazingly most of my games work well thanks to the recent push by Valve and the steam deck.

I will probably stick with it this time, so maybe for me 2021 finally is the year of the Linux desktop.


Dual boot? This does look slick, I was an avid WSL user until I started dual booting. Now I almost never need to boot to Windows.

I get that if you often need to switch it can be a pain in the ass but at least Linux respects my privacy and freedom.


As the author of EasyBCD, I can tell you that interest in dual-booting has collapsed to near zero over the past decade.


Anecdata, but I used to dual boot, until Windows mucked up the Linux boot more than once. Didn't play nice. So I run Windows in a VM now, it's not getting near the boot sector again.


Yep. I have a strict "dual boot on dual drives" policy now because Windows thinks it's too precious.

It has only played to their disadvantage: machines with a single storage device now don't boot Windows at all, or only in a VM.


How much of that effect do you think is due to recent Windows versions not playing nicely so you still get some hassle anyway and/or to improving options to run Windows virtually on a Linux host with close to native performance and compatibility?


What do you mean by "not playing nicely". With UEFI boot you can dual boot all day. There is no need to modify MBR. So nothing gets overwritten on updates.


I didn't say dual booting was itself the source of the danger (though it is true that in days gone by that was also a source of problems).

The issue I had in mind was the unrestricted hardware access that Windows has if it is running natively. This is an operating system that has literally pushed updates that inadvertently deleted user data, among other severe problems, and that will deploy its updates automatically to many users. Dual booting won't ensure the integrity of your system against that kind of threat. Running Windows in a virtual environment means it can't damage the rest of your system even if it deploys a seriously broken update without warning. And that kind of virtualisation is getting more practical all the time even if for now it remains the preserve of serious Linux hackers.


Same situation here. Dual-booting Linux and Windows 10, and I figured I'd boot into Windows often enough for it to get obnoxious. But I only ever get on there to play a few demanding games (which I already don't play often anymore), or to make music with an A+ DAW that doesn't run super effectively on Wine. Linux handles everything else I do like a champ.


A friend of mine has been complaining that a DAW is the only thing keeping him stuck in Windows at this point as well. In his case, he specifically said that VST's were the problem. Was your experience the same?


Bitwig is a very good DAW with native Linux support. It's made by former Ableton devs so it definitely leans in that direction, but it works pretty well for other types of workflows too, especially with the recently released version 4.

VSTs are definitely an issue; most high quality commercial plugins are still only released for mac/windows. However there are a few projects for running them in wine and it generally works pretty well.

I do think we'll see more and more Linux in studios going forward, but it would help if Linux got its pro audio story together. Pipewire is a big step in the right direction but not yet mature.


Yep, for me it's the DAW and VSTs that keep me needing Windows for now. You can try to make them work with Wine or whatever, but it's not worth the hassle.


> Would you believe it if anyone said this to be the case, 10 years ago?

To be honest, Microsoft astroturfers have long existed, for much more than 10 years.


> Coupled with recent Microsoft effort such as Visual Code

Which is officially supported on Linux.

There might be reasons to run Windows, but this is not one :)


I think he's referring to how you can run the VS Code GUI in Windows but develop on WSL because they built an integration. It's pretty neat. And most people are using Windows for other reasons (drivers, gaming, etc.); this just makes development not a pain anymore.


> it's looking like they are doing nothing but heading in a better direction than in the old days

https://rentry.co/areweweloveopensauceyet


"awesome"


It's about the culture. Windows doesn't respect your privacy and you are treated like a child, because most people who run Windows want Microsoft to make all the decisions for them, just like a parent.


I used WSLg for several months and just recently switched back to x410. WSLg is badly broken with non-trivial apps like IntelliJ IDEA.


I wonder if WSLg will work on my 3090 passed through from my Linux host to my Windows VM, or if this requires bare-metal Windows.


It's so weird to see Teams on the Windows Desktop next to a Linux Desktop Window... [0]

[0] https://github.com/microsoft/wslg/raw/main/docs/WSLg_Integra...


Just waiting for some Linux-based VDIs now. Azure Virtual Desktop is all RDP-based. It also used to be called "Windows" Virtual Desktop.

While it's not something I would necessarily use for myself, having the option is really empowering especially for engineers within companies.


In Windows 11 + WSL2 + Google Drive Sync, you can cd into Google Drive at /mnt/g and use it like a normal directory.

I'm not sure when this became the case; it wasn't possible 6 months ago with Windows 10 + WSL2 + Google Drive File Stream.


So how long until us poor fellows running enterprise windows see this?

I currently use vcxsrv which works mostly fine, but it's hard to convince other people to adopt the multitude of hacks I have to make this work, and supporting windows in builds is painful.


Can you run whole desktop environments like XFCE, Gnome; or run a Window Manager like i3?

I'm a long time VMware user and never really used Unity (name?) view which allows running Linux apps seamlessly on Windows or Mac. WSLg seems to be the same feature.


I don't know about WSLg specifically, but last time I heard, if you run an X server with an application like VcXsrv, install XFCE, and configure it to use the X server, you can pretty much run XFCE or any other desktop environment in WSL. I don't know about the performance, however.


Got this running on my Surface Book the other day. Totally useless, but very cool!


Windows + WSL2 is starting to catch up with Chrome OS + Crostini. How exciting.


This reminds me of an old VMWare version where you could break out windows from the guest OS into the host. I'm surprised it wasn't patented by VMWare (or maybe that expired?)

Either way, pretty awesome to have this.


Pretty sure VirtualBox has that still.


Amazingly this seems better integrated than Mac's XQuartz which I always find awkward and buggy. If it weren't for the forced ads and updates I would consider switching back to Windows.


This is exciting. I dual boot Windows and Linux, cause although I really like my setup on Linux, the desktop experience is not quite there yet for me.

I wonder if I can use something like bspwm, maybe not... haha


Have you tried KDE plasma?


I have been getting by with VcXsrv just fine, for simple apps like IDEs and stuff (VS Code, PyCharm Community).

The advantage of WSLg would be running hardware-accelerated apps?


What Linux desktop apps do people want to run on Windows?

I'm struggling to think of anything I would use that isn't ported already, being GTK or Qt or Java based.


The use case I care about, and I imagine the use case Microsoft does as well, is developing for Linux on Windows: running an IDE and not having to worry about a complicated cross-compiling toolchain backing it.


As someone who is forced to use either Mac or Windows at work, I would love this for the sole purpose of using i3 again.


I imagine if your app needs to interact with your Linux system, running it within WSL is a lot nicer.


Just do it already, MS. Replace Windows/NT kernel with Linux, and make Windows another Linux distro. Seems like you're kinda heading that way.


Is it finally the year of Linux on the desktop yet?


It's the year of Embrace+Extend on WSL.


This isn’t surprising given all the incentives that led to WSL1, but it’s still a damn wild ride to arrive here.


DEVELOPERS DEVELOPERS DEVELOPERS

DEVELOPERS DEVELOPERS DEVELOPERS

DEVELOPERS DEVELOPERS DEVELOPERS

DEVELOPERS DEVELOPERS DEVELOPERS

DEVELOPERS DEVELOPERS DEVELOPERS


Will I be able to use i3?


I can't imagine you'd be able to use i3 for your windows apps from what I've read, but it should work with your Linux side.


Yes.


This will be available in Windows 11, so your processor does have to be able to upgrade to that. If you're fine with an OS reinstall there are ways to force the install.


I don't think he is referring to an i3 processor, but rather https://i3wm.org/


How long before even Linus migrates to Windows as his daily driver?


Now someone make a Linux Subsystem for Windows GUI.


Isn't that pretty much what Wine does? Having used Linux as my daily workstation for a year now, I've been blown away by how easy (and generally transparent) it is to install and use Windows applications on Linux.


Actually, not quite. Wine is closer to what WSL1 was. The closest equivalent to "Windows Subsystem for Linux GUI" on WSL2 for Linux would just be ... running Windows in a VM, with FreeRDP doing per-app tunneling to the Linux host.

I think there's even some software for automating this somewhere out there.


> Wine is closer to what WSL1 was

Not quite. Wine just translates Windows syscalls to Linux ones, but WSL actually reimplemented the Linux API inside the NT kernel (which was designed with the ability to host multiple OS APIs).


Neither reimplemented the other's syscalls directly in the kernel. In WSL1 the NT kernel kicked Linux syscalls over to an lxcore.sys driver, which converted them into equivalent NT calls and objects. In WINE most things don't make direct syscalls (they make userspace Win32 calls, and WINE reimplements those and many other Windows APIs in a way that calls Linux syscalls directly), but for the programs that do (e.g. game DRM), the Linux kernel added a SECCOMP_MODE_MMAP mode to seccomp() to trap unknown syscalls to a handler (in this case WINE) that does the same kind of translation.


Well, I said "closer", not an exact analogue. They're pretty close equivalents, though, as most Windows programs don't actually call syscalls directly, but link in an OS-provided DLL and call an exported symbol from it, with the userspace to kernel bits abstracted away from most user programs. Wine (mostly) re-implements those DLLs, effectively re-implementing the Win32 API (a userspace API) in Linux's userland.

(Programs are allowed to call the kernel directly, though, and Wine has to handle those cases, esp. for DRM/anti-cheat code in games that poke at the kernel directly. Recently Linux was patched to allow userspace programs to handle syscalls themselves [0][1], making Wine ... closer to a WSL1 equivalent?)

[0] https://lkml.org/lkml/2020/8/10/1323

[1] https://github.com/torvalds/linux/blob/master/include/linux/...


As far as I understand, Wine does way, way more than that. Wine actually re-implements all the Windows APIs - not syscalls, but higher-level libraries like DirectX and whatnot.


I guess it would be this: https://github.com/Fmstrat/winapps


Thanks, this sounds like an interesting project. Will look into it.


Can this run tiling window managers?


- Embrace

- Extend <- You are here

- Extinguish


I was worried about this happening in the early 2000s, but not anymore.

Have you seen how terrible Windows and Windows Server are?


Has it ever been good at any point in time?

No, yet people use it and develop for it.


This can never be the best GNU/Linux experience, because you leave your freedom and privacy at the door of the Windows login.

Anyone who is serious about the future of openness, freedom, and privacy rights in software and in general should strive for the original. I advocate not handing MS control over the Linux desktop.


Large companies will always find a way to profit from the most valuable aspects of society at large.


I completely agree with the sentiment of what you're saying. That said, that's not at all what this is. They are just making it easier to run GUI apps in WSL. This is already something you can do with VcXsrv or any Windows-based X server. I've actually been using VcXsrv to run a full Ubuntu Budgie desktop with GPU acceleration and native performance on my work machine for over a year now. If anything, this has made it easier to _get away from_ the telemetry and crap that goes along with a default Windows install, because Windows has absolutely no idea what I'm doing within my WSL installation.

So yeah, nothing to see here; if anything this is good, as it makes Linux more accessible to people stuck in Windows-only environments. This isn't even M$ making a desktop environment. They have just written an X server into Windows instead of having you install one yourself.

Side note: I'd also be quite happy if Windows slowly removed the Windows parts and replaced them with Unixy parts until the whole Windows ecosystem could be considered Unix-based. That would be so great for so many reasons.


I only had to scroll a few more lines down from your post here to find an example supporting your statement: https://news.ycombinator.com/item?id=28486717


Correction: this can never be the best GNU/Linux experience for people who feel the same as you do - that their privacy is highly valuable and Windows takes some of it away.

I don't agree with those feelings, so it is indeed the best GNU/Linux experience for me.


I also wonder if this is the end of the "embrace" phase or the start of the "extend" one.


I don't want to want this, but I do.

Only semi-related, but what I really want is for Windows apps on Linux to be easy and work without fail. I prefer my Linux box and generally hate the Windows UI (don't get me started on Windows settings or audio). I've tried switching to Linux full-time, but I don't know if I can hack it. Games are 90% there and I can do without the few that don't work with Proton, but there are just too many apps that only work on my Windows side for me to dump Windows.

Wine gives me inconsistent results and breaks for just about anything that needs registry access, not to mention it's pretty complicated. I'm hoping to stumble on some tool I've been missing that makes everything easier, because I want to run Linux as my daily... I just don't know if it is practical.


Which apps? Not that it makes a difference for this conversation, but I'm interested in keeping up to date with what the "killer apps" are that keep people from switching.


If you're talking about Windows apps that keep me from switching to Linux, I have an oddball one: it's a keyboard re-mapper that I wrote back in Windows 3.1 and still use. It does the same re-mapping in every application (except for some reason in Microsoft Edge). It's not a simple 1-for-1 mapper, which I think is readily available in Linux.

At the simplest level, it re-maps ^H to the left cursor arrow, ^N to PageDown, etc.

But it gets more complicated: ^D maps to seven down cursor arrows (i.e. it moves the cursor down seven lines), ^U in the opposite direction. ^C usually (more details below) maps to ^Left (i.e. go to the beginning of the word), Shift-^Right (select to end of word), and ^C (copy selected text). (Notice the final ^C does not cause recursion!)

^A once goes to the beginning of the line, ^A twice goes to top of screen (I forget the exact keystrokes it emits, but this works with most apps), and ^A thrice goes to the beginning of the file. Analogously for ^E, but end.

Finally, it has two modes. In the normal mode, all the cursor control keys do their normal cursor movement thing. But type ^Q, and the cursor keys are now in select mode: ^H outputs Shift-Left, i.e. selects the character to the left, etc. Drop out of select mode with ^C (copy selection--different from what I described above!), ^X (cut selection), or ^Q again (do nothing with the selection).

I'd love to be able to reproduce this kind of behavior in Linux. I'm sure it's possible, but I don't know enough about keyboard re-mapping, or keyboard drivers, to do it.


That sounds pretty similar to the QMK firmware that runs on my keyboard (an Ergodox-EZ).

It's been a blast to play around with. Best part is that it travels with the keyboard rather than the OS, so I can plug into a different computer and retain the same layout without needing to install anything.

I've been slowly resetting all my OS hotkeys/shortcuts to their defaults, and customizing the position of those keys on my keyboard instead. Current layout, for reference: https://configure.zsa.io/ergodox-ez/layouts/xbzAL/latest/0
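
Something like the ^D "seven lines down" mapping described above could be done in firmware with a custom keycode. Here's a minimal sketch of a QMK keymap hook - KC_DOWN7 is a made-up keycode just for illustration, not part of QMK or my actual layout:

  #include QMK_KEYBOARD_H

  enum custom_keycodes {
      KC_DOWN7 = SAFE_RANGE,  /* hypothetical "move down seven lines" key */
  };

  bool process_record_user(uint16_t keycode, keyrecord_t *record) {
      switch (keycode) {
          case KC_DOWN7:
              if (record->event.pressed) {
                  for (int i = 0; i < 7; i++) {
                      tap_code(KC_DOWN);  /* emit seven Down-arrow taps */
                  }
              }
              return false;  /* handled here, don't emit the keycode itself */
      }
      return true;  /* let QMK process everything else normally */
  }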


Might be a good solution, if it can output multiple keys with a single keystroke, but unfortunately my keyboard can't travel with me. In particular I can't plug it into my work computer.


If you use Xorg you can use XInput2 and XIGrabKeycode[0] to grab specific key combinations and get notified (via event messages, they happen asynchronously) when they are pressed. Then you can use XTest and XTestFakeKeyEvent[1] to send the event you want.

XInput2 and XIGrabKeycode should provide the highest priority grabbing under Xorg so that even applications that do server-wide grabs (e.g. games) will be bypassed.

XTest was meant for automated UI testing but can be used for all sorts of automated behaviors.

For the first part I wrote a simple program[2] years ago that uses xkill to kill the top-level window with Ctrl+Alt+K (mainly for games that grab the input and hang), which can be used as a quick example. I haven't tried to use XTestFakeKeyEvent, but there seems to be a lot of code out there that can be used as an example, e.g. this one[3] (see the send_key function near the top).

[0] https://linux.die.net/man/3/xigrabkeycode

[1] https://linux.die.net/man/3/xtestfakekeyevent

[2] http://runtimeterror.com/tools/xkeyller/

[3] http://git.yoctoproject.org/cgit.cgi/matchbox-history/plain/...
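
To make the XTest half a bit more concrete, here's a minimal, untested sketch in plain C (just Xlib plus the XTest extension, not tied to any of the programs linked above) that synthesizes a single Left-arrow press - the kind of event a ^H mapping would emit. The grabbing side (XIGrabKeycode) is left out:

  /* fake_left.c - send one synthetic Left-arrow press/release via XTest.
     Build with: gcc fake_left.c -o fake_left -lX11 -lXtst */
  #include <X11/Xlib.h>
  #include <X11/keysym.h>
  #include <X11/extensions/XTest.h>

  int main(void)
  {
      Display *dpy = XOpenDisplay(NULL);            /* connect to the X server */
      if (!dpy) return 1;

      KeyCode left = XKeysymToKeycode(dpy, XK_Left);
      XTestFakeKeyEvent(dpy, left, True, CurrentTime);   /* key press   */
      XTestFakeKeyEvent(dpy, left, False, CurrentTime);  /* key release */

      XFlush(dpy);                                  /* flush the request queue */
      XCloseDisplay(dpy);
      return 0;
  }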


Hmm, looks like the sendkey.c thing might work. Although the documentation on XTestFakeKeyEvent() specifically says "This extension is not intended to support general journaling and playback of user actions", which is what (I think) I want. I wonder why the disclaimer.


I haven't tried it yet, but I recently learned about this tool and have it on my list to test drive:

https://github.com/rvaiya/keyd


For me there are a couple of areas that just have a tough time on Linux: VR development, digital audio workstations, and niche utilities. It is getting better but still has a ways to go in these areas, imo.

Specifically: VR development - Unity now has a Linux version, which is great, but there is no Oculus runtime, which means no Oculus testing (SteamVR works but has some hiccups).

Digital audio workstations - Looking primarily at FL Studio: yes, you can run it under Wine, but for me the audio delay makes it very difficult to use. I'd love to find a solution for this but haven't thus far.

Niche utilities - For game dev I've got a ton of old utilities for visualizing or converting old 3D object files to newer formats, plus SDKs for old games that I occasionally need to pop into, and all of them struggle or require a lot of setup to work properly on Linux. For these I find myself booting over to Windows, grabbing what I need, and popping back to Linux.


I just recently switched from Windows to Linux. I was planning to do some 3D printing tomorrow, but saw that Fusion360 has poor Wine support. So I probably need to learn new software or set up a VM or something.


Fusion360 is the only thing I run a VM for. There's a repo out there that sets up Wine and installs it, but it just doesn't work very well at all. I'm using VMware Player with the virtual disk set up to boot from the Windows drive, and I run it that way. It works really well. Other 3D printing stuff like PrusaSlicer works great on Linux. I'd love to have a native version of Fusion, though. Maybe someday.


> don't get me started on windows settings or audio

I'll bite - what's wrong with the audio? Friends of mine in game dev often complain that Linux audio is hopeless to work with.


Not the parent poster, but I'll chime in.

I have a pretty solid Dell laptop from work, and yet, there is one frustration that beats out anything else: audio.

I can't play music without it stuttering and skipping and sounding choppy and cutting out if something resource-intensive is happening, like Firefox loading a new page (but how often does THAT happen?)

Same with audio notifications. When my "new mail" notification sounds choppy, the underlying system must be just absolute garbage.


My main complaint on Windows is more about the UI than the audio stack itself. I've got like 15 audio devices listed under the audio menu, and Windows can never figure out which one I intend to play from (and gives them terrible names), so I constantly have to switch manually until I find the right one. My experience on Mac and Linux is that they seem to switch to the right device as it connects and then switch back appropriately when it disconnects.


Makes sense actually. That's been an annoyance of mine as well. I suppose I never realized it's better on other platforms.


Keeps switching between different outputs in games and no matter what I do with the default communication device etc. it keeps happening. I lose audio in games on random intervals and I don't get it back until I switch my output device to something else and back.

Used to happen only in Warzone but now I'm noticing it affects games with other engines as well.

Granted I have a very non-standard setup but it shouldn't be causing any of these issues.


Does Windows think that audio devices are being connected or disconnected? That's the only time I've had the default device switch on me. Annoyingly it can end up happening if you have a display that presents itself as an audio endpoint and then that display is turned off or even just goes to sleep.


Interesting perspective. I had always blamed it on something pertaining to my VFIO setup but I think you are pointing out something that might be at play here that I hadn't considered before.

I'll disable the speakers on the monitor from the HW menu on the monitor and see if that helps.


Maybe they just mean the audio UI? It's complex, and at least on Windows 10 it's a mix of the new UI and old UI.

For example, figuring out how to configure and test surround sound channels means clicking through multiple dialogs, and it's not clear how exactly to get there.

With Windows 10, it's even harder to access the sound mixer than it was in previous versions, and that's what you want 99% of the time after clicking the audio icon in the taskbar.


USB hotplug is still a mess in Windows, so using any external audio interface or sound card is a pain.

The UI is complete garbage, but sibling comments have said enough about that dumpster fire already.


USB hotplug works fine, and has for 20+ years now. You may be blaming the OS for your vendor's incompetence at driver maintenance.


Standby/resume or hibernate? Let's roll a 12-sided die on wakeup: "1" is "I forget all settings for a random device and reinstall the driver" and "2" is "I'm gonna act like it's not even there until you unplug it and plug it back in". Using a different USB port today? Hope you don't mind re-configuring!

This is especially annoying for audio devices, because lots of applications - and certainly any slightly more advanced setup - require explicit configuration of audio devices, and when Windows throws its "it's a different thing every time I see it" temper tantrum, you have to go back to every application and tell it again, "Yeah, that OUT3-4 that doesn't exist any more? Use the OUT3-4 that does exist," because the user-visible name stays the same (it's the same hardware, after all) while the underlying ID changes for one reason or another.

This has nothing to do with the vendor's drivers, btw; this is simply how hotplug works in Windows' driver model.

So no, this is not "working fine".


Have you heard of VFIO for passing a graphics card through from a Linux host to a Windows guest? I haven't tried it yet, but it's my winter project and I'm excited:

https://passthroughpo.st/


I haven't personally done much with VFIO, but I have looked at it. From my initial look it seemed as if it would require two GPUs, one for the host and one for the guest, but it looks like some people have single-GPU setups working:

https://github.com/joeknock90/Single-GPU-Passthrough


Would a Windows VM on your Linux system help? Maybe some apps have robust alternatives?


Fantastic. I predict this thread will be filled with reasonable, capable, intelligent technologists explaining that Linux doesn't have wide-ranging hardware support, with replies by Linux experts who tell them they're wrong.


This might actually be the year of the Linux desktop


Or the year of Lindows or Winux.


If the host system is Linux, then this can be called a Windows Subsystem for Linux (the same thing that WINE does).

If the host system is Windows, then it's a Linux Subsystem for Windows.

Please do not fuck with logic.


Windows (subsystem for Linux), i.e. a subsystem of Windows that is for Linux. I think.


Extend.

> Weston is the Wayland project reference compositor and the heart of WSLg. For WSLg, we've EXTENDED the existing RDP backend of libweston to teach it how to remote applications rather than monitor/desktop.

It has been admitted.


RDP is a proprietary Microsoft protocol. Extending their own protocol sounds pretty normal.

And the code seems to be available on their weston mirror. It's just a merge away.

Microsoft does enough shady things in the present; let's not try to force an EEE pattern onto this.


I already asked this in the past, and want to ask again. Is Microsoft a corporation of goodness now?


Asking whether a publicly traded company this big is good or bad is pointless. A corporation is psychopathic; whether it goes Patrick Bateman or Dexter Morgan depends on the environment.

The current environment incentivizes expanding the developer ecosystem, hence DX investments.


In my opinion, this is actually a question of values. My position: absolutely not, but I take it as axiomatic that Microsoft (et al.) are incapable of any actual "good."

This is simply Microsoft's attempt to build a new walled garden. If they were actually serious about advancing the state of civil computing, they'd make the NT core available as a microkernel that can be modularly placed into the Linux ecosystem. That is the _one_ thing I can think of which might raise my opinion of them (and I'm sure they lose sleep at night, knowing they haven't got my endorsement).


I'm of a similar opinion. If they want to prove that they heart Linux, that's what they're going to have to do. Or, at the very least, document everything (including DX) so that the Wine devs can do their thing even if MS don't care to help. Until then, "MS <3 Linux" is nothing more than PR speak in my mind.


Absolutely agreed. Microsoft is not a pleasure to develop with, which is (in my opinion) a losing position over time. They see what makes Linux a pleasure, so they pursue the trappings of the community while damning the spirit of cooperation. Cynically, I see their moves as nothing more than an attempt to capture social capital.

Hopefully, nobody is having the wool pulled over their eyes. Don't get me wrong: their incorporation of a TTY-like interface into CMD, and the Linuxification of PowerShell, demonstrate the craftsmanship that Microsoft pride themselves on. It's good tech, but tainted. I will never trust Microsoft after the RDP fiasco.


Why should this be a microkernel, exactly?


Perhaps it needn't be; I, with my negligible OS dev experience, just like microkernel architectures better. It seems more sensible to have microkernels managed by a microkernel loader. This might be an opinion I come to recant in time. The core of my position is that Microsoft needs to stop doing Microsoft things if they want to be taken seriously as a good faith actor, but I'm not holding my breath.

Until they make moves to break down the walls of their garden, they're just another barrier.


Providing POSIX and Linux-specific APIs does actually place them in the Linux ecosystem.

Programs built for Linux can suddenly be run by Windows users. That's a boost in adoption potential for Linux programs (a large part of the ecosystem), and adoption is very important for the further development and success of software.

On the other hand, for Linux users this makes Windows more attractive - why not choose Windows for your next laptop if all your Linux software runs there? That undermines the Linux userbase.

But overall, I feel WSL is good for Linux.


WSL is winning over people who had left Windows simply because the dev experience outside of the VS IDE was sub-par. We'll see what this means for Linux.

It could mean that people start realizing that there are gaps between Windows and Linux that need to be closed to make Linux more attractive to users. Alternatively, it could mean that people don't acknowledge those gaps and instead gripe about EEE.

I know which outcome I'd put my money on.


Yeah, that's what gets me: it's obviously engineered to drive traffic one way without giving anything back to the community. I'm sorry, but I'm not interested in APIs to interact with a black box. Until Microsoft makes Windows user-controllable, I will never treat it with respect.

I do see your point, however, and I hadn't thought of it that way; do you see anywhere the community might pick up on this for some benefit?


Many ways. For example, it is more compelling to choose Linux as the target platform for new programs, because that way the program works on both Windows and Linux. Therefore, more software for the Linux world.


Microsoft is a group of 180,000 people; it's too big to be classified like that. A small subset of them are making this cool thing, and you can debate whether or not their intentions are good, but that's about as far as you can go in making a broad moral judgement.


Microsoft has just shifted to being what IBM was in the late '90s, for all intents and purposes. IBM didn't care what you ran on their platforms, even at the OS level. They just wanted that sweet, sweet support contract and computer-leasing money. "You want to run Linux on our mainframes? Hell yeah, sign here." Now with Azure, Microsoft gets money of the same shape, and correspondingly makes some of the same strategic choices.


Ballmer was their wake-up call: a lot of destructive policies that secured short-term benefits destroyed long-term sustainability.

They are as good as any public company can be - i.e. just a little bit more sensible about cooperation instead of demolition.


Embrace

Extend <-- you are here

Extinguish


What are they extending? What functionality does this add to Linux that is only available on Windows?


One is their DirectX extension that only works in WSL2. It lets you access the DX API through a shim driver. You can now have a Linux application that needs access to /dev/dxg, which is only available in WSL2.

https://devblogs.microsoft.com/directx/directx-heart-linux/


Hmm probably closer to true neutral.


Please define what is good.



