DirectX is coming to the Windows Subsystem for Linux (microsoft.com)
706 points by modeless on May 19, 2020 | 531 comments



It appears that Microsoft has now initiated the "Extend" phase of their classic Embrace, Extend, Extinguish playbook. The key is that the proprietary, Microsoft-specific API added by this patch is only usable in a WSL environment, as it relies on many pieces of proprietary closed-source software that Microsoft is unwilling to open source. This patch does nothing but fragment the Linux ecosystem and encourage people to develop software that only works in WSL environments. With the "benefit" of forcing the Linux kernel developers to pay the maintenance costs, of course.


Look, I'm not saying we should "trust" Microsoft not to Embrace, Extend, Extinguish if they could - but they can't and they know it, so that's not their strategy. EEE was hinged entirely on the dominance of Windows and IE for the Extinguish phase. They won on desktop, but today most of the action has moved to mobile (where Windows lost completely to iOS and Android) and webservers (where Linux is massively dominant). In those domains, they are no longer the 500 pound gorilla in the room, they're the "hey, fellow kids" oldtimer trying to fit in and get in on a piece of the action.

Look at a comp sci class in the last decade, and you'll see 1/2-2/3rds of them with MacBook Pros in their laps, because it compiles iOS apps and it's a *nix so it easily runs most of the same tools as a linux webserver. That's what WSL is about, trying to win back developers. That's why they bought Xamarin and Github. That's why they released VS Code. They're trying to win back developers by meeting them wherever they are, even if they know they are targeting platforms where Microsoft is not dominant and has no hope of becoming so. They're trying to make it feasible to target a linux webserver while developing on Windows. They're trying to make Azure a serious contender for cloud computing outside of the corporate C# world.

There's not a realistic scenario where they become dominant enough on those platforms to execute any kind of Extinguish move. And they don't need to - there's still tons of money to be made with just a slice of the cloud pie. I think EEE went out the window along with Ballmer and the 'Windows First' doctrine. I'm not saying they wouldn't, but I'm saying they can't and they know it, and they're ok with that.

All this is not to say that there's any reason for upstream to accept any bad patches that rely too much on proprietary code. But in this case, I think the analysis of the underlying motivations for that patch was outdated.


> EEE was hinged entirely on the dominance of Windows and IE for the Extinguish phase.

EEE was more effective when Microsoft was completely dominant, but it's clearly still effective even when that dominance has diminished.

This move, for instance, will cause Linux users to depend on a proprietary, closed-source API entirely controlled by Microsoft. It's very bad for Linux and very good for Microsoft. It's a lever of Microsoft control, extended into the rival Linux system. Very easy to see how it effectively fragments the Linux ecosystem and allows Microsoft to forcibly pull users who come to depend on DirectX from Linux to Windows by manipulating this lever - reducing Linux support in the future, etc.

This is classic EEE, and the only way you can dismiss it is if you believe their PR, as apparently you do:

> they are no longer the 500 pound gorilla in the room, they're the "hey, fellow kids" oldtimer trying to fit in and get in on a piece of the action.

As the other comment explained, they are still a huge gorilla in spaces that are absolutely vital to Linux: server, desktop, and laptop.

Linux as we know it lives or dies on the server and desktop / laptop markets. Yes, Android is technically based on the Linux kernel, but it builds an entirely different ecosystem on top of it. The Linux ecosystem is only on the server and PC - exactly where Microsoft is competing.

Finally, the availability of this proprietary API will reduce the incentive for hardware companies like Nvidia to allow development of direct Linux support (drivers etc) for their products - which is what Linux really needs in order to survive, let alone prosper.


> As the other comment explained, they are still a huge gorilla in spaces that are absolutely vital to Linux: server, desktop, and laptop.

No reasonable person at Microsoft considers Linux to be a threat on the desktop/laptop. No reasonable person at Microsoft considers Windows Server to be a threat to Linux on the server.


Microsoft holds a large and fast-growing share of the enterprise server market with products like Office365, Exchange, and Sharepoint.


O365 is SaaS. Customers that are hosting Exchange on-prem are being encouraged to move to O365 for email. Same for SharePoint. If I had to guess, O365 is helping to reduce, not grow, the market share of Windows server.


What's important is that server-side services that Microsoft offers are replacing Linux servers. Whether it's SaaS like Office365 or more traditional server software like Exchange and Sharepoint is not crucial.


Except that Linux usage on Azure surpassed that of Windows a long time ago...


And I was talking specifically about Microsoft products like Office365, Exchange, and Sharepoint.


Microsoft's interest in on-premises business with these products is waning, just as the market share for them is also waning. They most assuredly aren't trying to win the war against Linux on the server. That war has been won (by Linux) and MS is off chasing other revenue streams that are growing.


Fast growing?

The on-premises infrastructure market has been and is projected to continue shrinking in favor of cloud spend.

Googling around one can find reports with names like "IDC Worldwide Operating Systems and Subsystems Market Shares report" that show how prevalent Linux is vs. Windows Server even in Enterprise IT.


There are enough Fortune 500 using Windows Servers to keep Microsoft in business for a long time.


> As one last point to consider: Linux as we know it lives or dies on the server and desktop / laptop markets

I agree with your general sentiment, but just wanted to note that high-end embedded systems are also a very Linux-heavy domain. Even though from the hardware side it might look similar to the mobile space (ARMs everywhere etc.) it is actually a completely different world.


> In those domains, they are no longer the 500 pound gorilla in the room, they're the "hey, fellow kids" oldtimer trying to fit in and get in on a piece of the action.

Said of a company which owns a market-shaping player in every layer of the stack: OS, directory, database, applications, development languages, source code repository, and the second-biggest cloud to host it all in. Given that mobile and web involve about half of that list, I fail to see how they're some sort of non-player, trying to play "catchup" to the rest of the world now.


SV distortion field.


Not to mention the biggest public company in the world, alternating with Apple.


> Look at a comp sci class in the last decade, and you'll see 1/2-2/3rds of them with MacBook Pros in their laps,

Perhaps in some countries, definitely not everywhere. At my university, there is only a single Mac across 40 or so laptops.

> because it compiles iOS apps

I doubt students buy Macs for that. iPhones are not nearly as popular outside the US.

> and it's a *nix

Students don’t care about its UNIX certification. Most don’t even know what that is when they start their undergrad.

> it easily runs most of the same tools as a linux webserver

No, it does not. Many important dev tools are not available, not even through brew.

> That's what WSL is about, trying to win back developers

Globally, the majority of developers are on Windows/Linux, not macOS.

> They're trying to make it feasible to target a linux webserver

I’d bet this is about avoiding Linux becoming the major platform for ML/DS/AI, rather than anything about macOS.


[flagged]


Max 10% at my university in Germany. I’d say windows/linux was about an even split for the rest.


> That's what WSL is about, trying to win back developers. That's why they bought Xamarin and Github. That's why they released VS Code. They're trying to win back developers by meeting them wherever they are, even if they know they are targeting platforms where Microsoft is not dominant and has no hope of becoming so. They're trying to make it feasible to target a linux webserver while developing on Windows. They're trying to make Azure a serious contender for cloud computing outside of the corporate C# world.

So in other words, what you're saying is they are Embracing this FOSS linux-based developer workflow...

And you could almost say they're Extending the linux kernel to support DX12 on WSL...

Am I missing something?


Could you explain to me what you think an Extinguish step would look like in that sequence? Because I don't think there is a realistic one, and I don't think Microsoft thinks so either.


I think it's hyperbolic to imagine that MS will manage to fully extinguish Linux outside of WSL, but that's not the only way they could damage the linux ecosystem by leveraging the foothold they are building in the FOSS world.

It's not so hard to imagine picking up a legacy project at some point in the future, and pretty much having to develop it on Windows and deploy to Azure because of some WSL-specific dependencies baked into the framework it's built on.

That's just one scenario, but the point is why would you expect that they wouldn't leverage their market-share to disadvantage competitors if they ever got into position to do so? Just because they aren't in that position now doesn't mean we shouldn't be put off by the fact that they're building those levers to fragment the ecosystem now.


Correct me if I'm wrong, but my impression here is that Windows Subsystem for Linux will support DX12, but not broader Linux support for DX12.

So if you want a machine that can play games and run a Linux dev toolchain, that would require a Windows install. If you can incentivize folks who may never have installed Windows to do so, and keep them there as the only place they can both play games and use their preferred dev environment, then you've extinguished that portion of the desktop linux install base.

EDIT: Also I absolutely love the "I don't think Microsoft thinks so either." Microsoft is a for-profit corporation that's eyeing a way to get users who traditionally avoid them to come to their platform and stay there. They aren't some benevolent font of cool tech, they're trying to sell you products.


I agree, when we look at Microsoft's motivation, considering them driven by profit makes more sense than anything else. Hence why I'm critiquing comments that seem to think Microsoft is driven by some innate desire to destroy Linux and FOSS. They're not, they just want to make money, but back when their making money was based on Windows, and they thought Linux was an existential threat to that, those two goals coincided.

Today, I think Microsoft cares much more about building up Azure to beat AWS and become the biggest cloud platform - they don't care if the servers are running Linux or Windows, as long as it's running on Azure - being seen by developers as Linux-hostile runs counter to that goal. So again, I'm not saying they wouldn't go for an Extinguish if they could, but I think they know that they can't and they have a strategy where they don't need to.


We're not talking about Microsoft's cloud / server strategy though. Linux as a server OS is different from Linux as a development platform. The two have nothing in common except the word "Linux."

Microsoft is taking steps to embrace desktop Linux, we've seen that with WSL. Instead of ditching Windows for a different/better dev platform, just keep using Windows.

They are extending their own desktop Linux by adding the capability to use DX12 in Linux, so long as it's run on Windows. Note that they are not adding this capability to a straight Ubuntu install, let alone any other distribution.

To claim "Microsoft knows they can't Extinguish desktop Linux" is naive at best, and shill-like at worst. They don't need to completely eliminate desktop Linux for EEE to apply here. The "Extinguish" part comes naturally from fewer and fewer devs choosing an alternative to Windows.


> We're not talking about Microsoft's cloud / server strategy though

Yes we are. I was talking about the motivation behind their action. And I think it's quite obvious that 'get businesses to use Azure over AWS' or 'get web developers to use Windows instead of Macs' is a much, much more likely motivation than 'Extinguish desktop Linux'. Why would they care about that? Desktop Linux is absolutely no threat to them, and there's no money to be made winning that fight more than they already have. Know what, I'll grant you that in their attempt to steal web devs over from MacOS, and getting a better user story for a web dev deploying Linux servers from Windows, they will probably also hurt desktop Linux - if nothing else, simply by creating another viable alternative for devs that isn't MacOS. But I can pretty much guarantee you that extinguishing desktop Linux is not their motivation - it simply makes no sense.


What you're saying is essentially: If we can't come up with a way in which Microsoft could proceed to the Extinguish phase, they can't. Because clearly, if we, after spending five minutes thinking about it, can't come up with something, then it is also impossible for this company that has spent three decades perfecting the concept, and literally has billions of dollars to throw at the problem.

Presumably because we smart and they dumb. Or something, I don't know?

Personally, I think that their thirty years of experience and billions of dollars might make it possible for them to come up with a plan I wouldn't have thought of.


I agree with you. Microsoft plays a long game. Through giving out promotional / free copies of Excel, with 1-2-3 compatibility, they killed Lotus. The same with Word and WordPerfect. The same with Netscape. And those companies completely dominated their markets. As market leaders maybe they were lazy, taking profits, and not enhancing their product. That happens with most market leaders. But the fact is, it is impossible to charge $150 for WordPerfect when Microsoft is giving away a compatible word processor. As their only / primary source of revenue, WordPerfect and Lotus could not survive a free competitor.

They don't have to kill Linux, and in fact, would not want that IMO, because then someone might bring another pesky antitrust suit against them. But if they can dent it in any way, they will.

For those who say Microsoft has changed, is a different company, is embracing open-source, etc.: if that truly is the case, wouldn't they release DirectX for all Linux distributions, not just theirs?


I think the Shadowrun adage is applicable here: "Watch your back, shoot straight, and never, ever, cut a deal with a dragon."

Microsoft is the dragon here. Even if it appears friendly you still don't cut a deal with it, or you become a pawn in its game.


The extinguish step is breaking the spirit of the FOSS developers.


They already did that to themselves by throwing away GPL.


The modern form of EEE is buying OSS developers, who increasingly make excuses for Microsoft and proprietary extensions.

The modern form of EEE is being a platinum sponsor of the Linux Foundation and inserting a CoC into the source tree.

Corporations don't want the "angry young OSS men" of the 1990s and early 2000s, who could be an actual threat.

Corporations want submissive, neutered and obedient developers.


> you'll see 1/2-2/3rds of them with MacBook Pros in their laps, because it compiles iOS apps

Apple can be as hardheaded as Microsoft. I have several Apple machines and each of them was bought specifically for developing iOS apps, other uses appeared later. Apple can provide Swift for Windows and Linux, but they will never allow building iOS apps on non-Mac hardware.


The impression that "most of the action has moved to mobile" is just an impression.

First of all, the market share of desktop versus mobile is roughly 43% for the desktop, 57% for mobiles and tablets. More important, however, is that the rise of mobiles was sharp from 2009 on, but it has been stagnating since 2017, and this stagnation trend is clear. Mobile devices too are a commodity; their market isn't growing any bigger and people aren't replacing their phones as often. The market for the desktop did not shrink. It just reached saturation. And as it will become increasingly clear, mobiles have reached saturation too.

What devices do people use in companies to do their jobs? Laptops. Sometimes tablets, for drawing on them, although a piece of paper would do. Mobile devices have been and remain essential for communications, this meant phone calls in the past, nowadays it means WhatsApp, Slack, email, along with shitposting on Facebook/Instagram/Twitter, but that's about it.

> "webservers (where Linux is massively dominant)"

Sorry to disappoint you, but the market for web servers is actually small, unless you're Amazon. Microsoft realized at some point that selling Linux boxes on Azure is probably more lucrative than what they were doing. The web servers space was never Microsoft's, so they had nothing to lose.

But the enterprise space is dominated by Microsoft, with their dominance only increasing. Exchange, Sharepoint, Office 365, Skype, Microsoft Teams, Azure DevOps, freaking Yammer, MS SQL, soon GitHub, their reach in the enterprise and their adoption, once you're familiar with the space, is actually quite scary.

I have to hand it to them, they became really good at marketing. Otherwise I can't explain this portrayal of them as being the underdog. Or the constant nagging messages I see about them having changed, due to them releasing VS Code or .NET Core.

No, they haven't changed much. The tooling they make for developers has always been top notch, and while .NET was proprietary, they standardized it and they never attacked Mono. Office has been available on Macs since 1989. And they no longer target Linux with patent threats of course, they target Android instead. Until they find some way to sell Android. They adopted Chromium, a master move since now they can cut some development costs, win back some users, and sell them on Bing too. We'll probably see them windowfying Android too.

---

Note that I do enjoy several Microsoft products. But I'm always skeptical when hearing about their motivation. I don't understand what makes people cheer for these big companies, as if they are sports teams. Use their products for what they are and dump them as soon as you find something better.


Yeah, most of the consumption has moved to mobile. But people who make stuff are still predominantly using desktop/notebook computers.


Right, and most of the consumption is being served from linux servers. So Microsoft is trying to use their strength on desktop and in production to stay relevant in that ecosystem, even though it involves other platforms than Windows - hence the focus on cross-compilation and cross-platform tech.


> The tooling they make for developers has always been top notch

I have to emphatically disagree there. Up until recently Windows didn't even have a decent terminal app. As a developer I tend to shy away from Windows precisely because the *nix alternatives offer a better development experience, from CLI to package management and more.


Not every developer needs a terminal app to feel like one.

In fact Mac OS also didn't have any until OS X.


Real developers automate. It's hard to automate without a terminal :)


You can automate with scripting languages and REPL.


If you're using a terminal to automate on windows, you're going to have a bad time. You'll want something like AutoHotKey which has been around for a long while.


While you're not wrong, I think that you're referencing a different flavor of automation than GP is.


You can automate just fine in a console with PowerShell.


OS X was released in 2001, which is a pretty long time ago.


Yet classical Mac OS developers were able to do without one for 17 years, and I bet most developers actually targeting Apple devices still do without one, when I look around the office over here and see IntelliJ, Xcode and AppCode.


Doesn't MPW count?


Yeah, but most people were using the GUI tooling anyway and not everyone was an MPW user.


Windows still doesn't have a decent terminal app! The new one just crashes when I resume from suspend. (I believe it's because when the computer suspends, it forgets about the external monitor. But it's still not evident to me why that should make it crash.)


If we consider third parties, ConEmu is well beyond decent.


So what you're saying is "Developers, Developers, Developers!"?


Watch out for flying penguins.


Very well stated.

People should be much more concerned about EEE from Google, as we’ve been seeing more of all the time with Chrome.


On businesses and government Microsoft Office continues to have a dominant market share. And that's a lot of money.

Not only US government, but almost every government around the globe is using it. And on macOS, that's still the case.


Google is already one step ahead of Microsoft with ChromeOS and Android.


I find this a cynical take as far as EEE goes. But even with Microsoft's mostly solid efforts into open source I find myself a bit skeptical as well.

Most recently with the Live Share extension for VS Code. From skimming the licensing it is only to be used with the Visual Studio family of products. Which is an incredibly disappointing approach.

I think this is simply about ML and GPU compute for WSL. And I think Microsoft is genuine in that it sees business value in doing open source work, integrating well with Linux, etc. But at the end of the day the business interests are there and different parts of the company can be motivated quite differently.

Some skepticism remains warranted.


The license of Live Share as it appears among VS Code plugins indeed says "You may use the software only with Microsoft Visual Studio or Visual Studio Code", but the license of the source code of the extension on Github is an MIT license:

https://github.com/MicrosoftDocs/live-share/blob/master/LICE...


This is not the code of the extension, it's just documentation.


I think it is as well. Honestly, the DX12 part of this is DOA as far as I'm concerned because it's not open source. So that leaves the better support for other APIs as the key delivered feature. Could MS fix that by setting DX12 up as an open-to-implement API and open sourcing much of this? Yes. But until they do, you're better off using the various translation layers from DX to Vulkan when using Linux, even on WSL.


As a FreeBSD user I find this kind of comment hilarious given I see Linux specific code littering upstream open source projects with no thought for other platforms on a regular basis. I occasionally see resistance to fixing these issues as well.


Frankly, I don't see what's so funny.

I don't presume to know if EEE is Microsoft's strategy here, but if it is, do you think a fleeting moment of amusement at Linux's comeuppance is worth the damage done to the larger open source ecosystem in its wake? Linux and FreeBSD are on the same team here.


I think a lot of that is coming from redhat/IBM, which is currently doing their own take on EEE. Believe me I get pissy about that too.


Honest question, is your opinion that we should have been stuck with something like SysVinit just so it's more convenient for the BSDs, while the likes of RedHat contribute the majority of the work?

And if yes, can you point to an example of *BSD doing something similar while pushing their platform forward?


This is a loaded way to frame the question, some of us were just fine being "stuck" with SysVinit. I try not to get dragged into systemd flamewars because life is too short but it's disingenuous to claim that before systemd init systems were some kind of inescapable hellscape.

I administer multiple systemd-based Linux distros, FreeBSD servers and buildroot-based embedded systems, and I can tell you that systemd still gets in my way regularly while the sh-based init systems tend to Just Work and are very simple to understand and maintain.

Of course I know that a big reason is that I'm very familiar with un*x system administration and the gotchas of shell scripting, while systemd is probably more approachable for somebody who doesn't want to learn arcane knowledge about chmod and file locking and setuid and symbolic links. But I think that explains why there's still so much pushback against systemd all these years later: people who care about init systems know enough about them that systemd feels over-engineered and unnecessarily complex while not bringing a lot to the table.

Never once in the past years of running systemd have I thought "oh man, I sure am glad I'm using systemd and not an old SysV/BSD init system!". Not a single time. I did have multiple occurrences of systemd breaking stuff after an update though.


> systemd still gets in my way regularly while the sh-based init systems tend to Just Work

For me systemd based systems allow me to have declarative, portable unit files where init scripts don't. They allow me to reliably monitor and restart services, they shut things down properly instead of just force killing as many init scripts end up doing.

I instantly know how to manage most major distros now that systemd's common among all of them, have no hesitation of writing a proper service file even for minor tasks and I get a ton of functionality 'for free' too.

Init scripts were always a poor-quality mess, non-portable among systems, non-consistent, non-deterministic. If your experience differs there's still plenty of non-systemd choices out there. They're not as prevalent as systemd ones, but that's because the people who sit down and actually write the code we all use find the services systemd provides valuable.
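
To make the "declarative" point concrete, here's a minimal unit file sketch (the service name and binary path are made up for illustration):

```ini
# /etc/systemd/system/myapp.service -- hypothetical example
[Unit]
Description=Example application service
After=network.target

[Service]
ExecStart=/usr/local/bin/myapp --serve
# Supervision for free: restart the process if it exits abnormally
Restart=on-failure

[Install]
# Start at boot in the normal multi-user target
WantedBy=multi-user.target
```

Enable and start it with `systemctl enable --now myapp`; systemd handles supervision, logging and clean shutdown without a line of shell.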


Arguing that the advantage of systemd is portability is rather bold!

And even if portability was the point, I'm not sure I see the big deal. Writing an RC script from scratch if the software you use doesn't provide one is generally trivial. The vast majority of the time you wouldn't have to do that anyway, as it ships with your OS's packages. Sure, systemd might be "tidier" with its standard APIs and whatnot, but it's also a lot more complicated and opaque than a bunch of shell scripts running one after another. And if you're serious about sysadmin you'll have to learn shell scripting one day or the other anyway.
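
For comparison, a minimal FreeBSD-style rc.d script looks like this (hypothetical service name and path; a sketch that relies on FreeBSD's rc.subr framework, so it's only meaningful on a BSD-style system):

```sh
#!/bin/sh
# /usr/local/etc/rc.d/myapp -- hypothetical example

# PROVIDE: myapp
# REQUIRE: NETWORKING

. /etc/rc.subr

name="myapp"
rcvar="myapp_enable"
command="/usr/local/bin/myapp"
command_args="--serve"

# Pull in myapp_enable etc. from /etc/rc.conf; default to disabled
load_rc_config $name
: ${myapp_enable:="NO"}

# rc.subr supplies the standard start/stop/restart/status verbs
run_rc_command "$1"
```

Enabled with `myapp_enable="YES"` in /etc/rc.conf, this gets the standard verbs from rc.subr in about a dozen lines of shell.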

>Init scripts were always a poor-quality mess, non-portable among systems, non-consistent, non-deterministic.

They're non-portable, that much is true, but so is systemd; that's a weak argument. If everybody had adopted FreeBSD-style init it would be equally portable, it's a self-fulfilling prophecy. With that logic we should just all ditch un*x and start running Windows since most people already use that anyway.

The rest is nonsense. It's poor quality, non-consistent and non-deterministic if you write them that way. Sure, shell scripts being turing-complete opens the door to a lot of nonsense if people go wild and gives more latitude for very sloppy code but it doesn't have to be that way.

>the people who sit down and actually write the code we all use find the services systemd provides valuable

Systemd has been pushed down everybody's throat for a while now, saying retroactively that people use it because they find it valuable is a bit of a stretch. I'm sure many of them use it because that's what's available. I wrote a bunch of systemd unit files myself, I assure you that it wasn't meant as an endorsement.

Besides it's only one side of the equation. Maybe it's nicer for the people writing the unit files, doesn't mean that it's a good thing for people actually having to use them. I'm sure many software maintainers would prefer if everybody ran the same OS on the same hardware with the same use cases but that's not how the real world works.


> Arguing that the advantage of systemd is portability is rather bold!

It's merely a statement of fact, systemd services accept the same set of commands across distros, which is rather unlike SysV.

> It's poor quality, non-consistent and non-deterministic if you write them that way.

That's a bullshit statement, because everything fits it. Of course everything is great if you make it great. And?

The point is that systemd's declarative nature makes it hard to screw up services and even badly written ones will get enough common functionality for free that they'd be usable.

> Systemd has been pushed down everybody's throat for a while now, saying retroactively that people use it because they find it valuable is a bit of a stretch.

Systemd got adopted because people generally found it valuable enough to adopt over what they had before.

> Maybe it's nicer for the people writing the unit files, doesn't mean that it's a good thing for people actually having to use them

Matter of opinion, but I happen to think that having a uniform set of commands working at work and at home is nicer for users too, over the patchwork of scripts that SysV was across the various distros.


> For me systemd based systems allow me to have declarative, portable unit files where init scripts don't.

Portable to what?


Portable to other systemd-using Linux distros.

SysV init scripts simply weren't portable between distros, leading to tons of non-standard, incompatible fragmentation.

Users couldn't simply take their own scripts over to another distro and expect them to just work, given said differences.

With systemd, unit files will just work between all systemd distros, given the standardized format.


Oh. That hasn't been my experience. Where did they standardize the names and set of the services you can depend on?


If the names of dependencies are the only unportable thing, we have indeed come a long way.


> Where did they standardize the names and set of the services you can depend on?

You're ignoring what the parent said and talking past.

It's not the 'sets' of services that are standardized, it's the set of commands that apply to a systemd service on any systemd Linux distro.


I don't think we should have stuck with something like SysVinit, there's definitely room for improvement, but saying "it's either SysVinit or systemd" is a false dichotomy.

If I was approaching building an init system I'd make a better language for writing init scripts than bash, some kind of interpreter that processes mostly declarative init files, sets things up, and then exits. An incremental improvement that works with existing systems instead of putting a whole bunch of new (generally un-audited) code into PID 1, with all the security implications that implies.

Redhat may contribute the majority of the work but they're also very good at positioning themselves so that outsiders can't really contribute any work, or get any independently developed standards implemented.


How can people replace SysVinit without "doing their own take on EEE"? Your alternative approach is still not going to be compatible with SysVinit either, not to mention it being strikingly similar to systemd.


Arch Linux had its own non-sysvinit system (which went away in favor of systemd), as do Void Linux (runit-based) and Alpine (OpenRC) today.

The problem is less the init system itself than applications that depend on a specific init system [1] (GNOME used to be a major source of contention in that regard).

[1] https://wiki.gentoo.org/wiki/Hard_dependencies_on_systemd


What apps depend on a specific init system?

Gnome does not depend on systemd, but rather logind.

KDE used to be the same, until someone started properly maintaining ConsoleKit2 again, at which point they were happy to support it.

ConsoleKit was dropped because it just wasn't being maintained, and had various limitations.


Why does logind depend on systemd?


It's part of the systemd project umbrella? It's part of the systemd monorepo? It uses libsystemd?

Maybe it's simply easier to maintain this way?

elogind exists, if you care. It exposes the DBus interfaces that logind supports for applications to call.

Thus, environments like Gnome can be supported on non-systemd systems if they emulate and/or expose and implement the required DBus interfaces.


That's sort of the complaint, why I accused them of "doing their own weird version of EEE". When faced with a choice of "make everybody else do a whole bunch of work so that they're not forced to use your entire project-umbrella" or "make some minor changes to our architecture so things aren't as tightly coupled" they almost always choose the one that forces you to use systemd.


If you believe that free software development should work such that random people on internet forums dictate the architecture to the people doing the work, and that instead of writing software to solve problems their actual users have, developers should comply with these whims, you had better prepare to be disappointed.


logind has a stable API; it is possible to build alternative implementations not coupled to systemd, yet so far I don't see much on that front.

See also https://lwn.net/Articles/586141


Is applications depending on systemd particularly a bad thing? If we don't want applications to use non-SysVinit features, we might as well keep using SysVinit.


Why should my window manager dictate what programs are allowed to start it?


For the same reason every piece of software 'dictates' its dependencies.


Only if you don't care about portability at all (whether within Linux or to the *BSDs).

And as I already mentioned in my previous post, there are alternatives around other than just SysVinit.


>How can people replace SysVinit without "doing their own take on EEE"?

By not tightly bundling their init with other OS components, by not having GNOME desktop somehow depend on what init system you're using (as opposed to services running under that init system).

Also, you know how shell scripts have `#!/usr/bin/env bash` at the top of them? Well, my hypothetical init system would be compatible with SysVinit because instead of `#!/usr/bin/env bash` it would have `#!/usr/bin/env new_init_language` at the top. `new_init_language` could implement almost all the same features that systemd unit files do.
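To sketch the idea (the language below is entirely invented; no such interpreter exists), an init file in that world might read:

```
#!/usr/bin/env new_init_language
# Hypothetical declarative init file: the interpreter parses these
# directives, starts the daemon, records state, and then exits.
name        myapp
description "Example daemon"
needs       network
exec        /usr/local/bin/myapp --foreground
restart     on-failure
```

A plain SysVinit system would run this like any other script via the shebang, so the same file could serve both worlds.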


> If I was approaching building an init system I'd make a better language for writing init scripts than bash, some kind of interpreter that processes mostly declarative init file

Guess what systemd does?

> sets things up, and then exits

And that's the crux of the issue, isn't it? Because on modern systems, things need setting up and tearing down all the time.


> things need setting up and tearing down all the time.

Well, not really; that's an artifact of systemd's over-engineered design. There's nothing stopping you from tearing something down from an init script, or doing more complicated dependency management using the CLI as your RPC mechanism (but Red Hat needed a reason to use their in-house RPC mechanism).

Honestly something like systemd could be pretty reasonable if it wasn't created with the express intent of "combating fragmentation". Not that there aren't a whole bunch of technical and architectural issues with it, but still.


> There's nothing stopping you from tearing something down from an init script

Except most init scripts I've seen are rather brittle: they only "work" if the PID dance goes exactly as the author predicted, are not declarative, and are hard to debug.

I don't want to go back to init scripts, for all systemd's faults, the past was worse.

Using the CLI as my RPC mechanism etc. just sounds like I should spend a bunch of time doing work that systemd can do a better job of managing for me.


> Honest question, is your opinion that we should have been stuck with something like SysVinit just so it's more convenient for the BSDs,

BSD init is far better than SysV. Adopting it would have been a step forward.

There are also other init systems that are better designed, like openrc or runit.


> There are also other init systems that are better designed, like openrc or runit.

OpenRC is practically speaking a thin wrapper around SysVinit, not much of an upgrade if you ask me.


Linux getting first-class support from applications isn't a valid basis for criticism. It's not like Linux is actively doing harm to FreeBSD or anything.


As a Linux user I find this kind of comment hilarious, given that I regularly see FreeBSD-specific code littering upstream open source projects with no thought for other platforms.

Why, for example, are Jails not compatible with Linux? Is that not considered upstream because FreeBSD is "integrated"? How convenient.

I think what you see is happening because these "upstream" projects are actually primarily for Linux and by Linux devs. They just happen to work on BSD as well, but weren't you guys saying this whole time how the BSD approach of having everything integrated into one system is better and the whole GNU/Linux ecosystem of several upstream components is strictly worse?

So what exactly are you complaining about? Us trying to use Linux-specific features to make the Linux ecosystem the best, rather than target the lowest common denominator while you guys take full advantage of FreeBSD's specifics?

Seems less than fair to me.


> I see FreeBSD specific code littering upstream open source projects

Any examples?

> Why for example are Jails not compatible with Linux?

What? Jails originated in FreeBSD over 20 years ago? This question doesn't make sense.

The OP was referring to systemd affecting upstream by requiring systemd for things (i.e. GNOME). Are you saying there are pieces of software you have always used, but no longer can because they directly rely on jails?

I don't even use FreeBSD, but this was a weird argument.


I am saying GNOME is primarily a Linux desktop environment, developed practically exclusively by Linux users and thus it makes sense for it to make use of Linux specific features, just as it makes sense for FreeBSD technologies to take full advantage of FreeBSD-exclusive features.

Just because GNOME didn't use systemd in the 90s, when systemd didn't even exist, does not mean it should never use it, now that it makes sense for the GNOME devs to do so.


> The key is that the proprietary Microsoft specific API added by this patch is only usable in a WSL environment

The other key, which everyone crying "Extend" seems so eager to ignore, is that they don't actually want anyone to use this API: https://lkml.org/lkml/2020/5/19/1139

"I'll explain below why we had to pipe DX12 all the way into the Linux guest, but this is not to introduce DX12 into the Linux world as competition. There is no intent for anyone in the Linux world to start coding for the DX12 API."

They want people to continue to use OpenGL/Vulkan/OpenCL/etc., and this is just their mechanism for getting those APIs GPU access when running under WSL: https://www.collabora.com/news-and-blog/news-and-events/intr...

"We have recently announced work on mapping layers that will bring hardware acceleration for OpenCL and OpenGL on top of DX12. We will be using these layers to provide hardware accelerated OpenGL and OpenCL to WSL through the Mesa library."


> There is no intent for anyone in the Linux world to start coding for the DX12 API.

I took that to mean that the developers on Linux are the people who won't switch to DX even if it were ported.


According to the comment by christoph-heiss, this is just a paravirtualisation technique and will appear like a normal Linux GPU driver to applications running on WSL, rather than a special DirectX-for-Linux API that won't work on Linux itself.


I'm not sure where you are getting the idea that this would expose a normal Linux GPU API. The blog post and the thread on the kernel mailing list seem quite clear in that this is explicitly designed with the goal of exposing the Windows driver API and associated client APIs like DirectX.

From the email introducing the set of patches:

> The projection is accomplished by exposing the WDDM (Windows Display Driver Model) interface as a set of IOCTL. This allows APIs and user mode driver written against the WDDM GPU abstraction on Windows to be ported to run within a Linux environment. This enables the port of the D3D12 and DirectML APIs as well as their associated user mode driver to Linux.


> I'm not sure where you are getting the idea that this would expose a normal Linux GPU API.

The fact that they went through the effort of porting Mesa to get OpenGL and OpenCL support, and are working on Vulkan support.


The standard for Linux GPU access is the DRI/DRM kernel interface, not the new WSL-specific API they've added.

Porting Mesa to use a proprietary library that talks to the WSL-only kernel interface isn't the same as behaving like a standard Linux GPU driver.


It's not really a standard, though; each GPU has a totally different set of ioctls and device files, to the point that you can't write against a generic kernel interface either way. AFAIK, the only DRI/DRM interface with two clients in real use is amdgpu (of which one of the clients is proprietary).

And part of the discussion on the mailing list is how to integrate this with DRI/DRM and dma-buf so it can be used by more Linux clients with less work (although still non-zero work, as you'd have with a DRI scheme as well).


> The blog post and the thread on the kernel mailing list seem quite clear in that this is explicitly designed with the goal of exposing the Windows driver API and associated client APIs like DirectX.

Are we reading the same blog post?[1]

[1]: https://devblogs.microsoft.com/directx/wp-content/uploads/si...


Keep in mind, a ton of people and companies are asking for this, because they genuinely want to use hardware-accelerated PyTorch and TensorFlow "through a Linux-thingy" on Windows.

BUT I MUST AGREE WITH YOU that, over time, this is likely to fragment the Linux ecosystem and encourage people to develop software that works only in Windows "with the Linux-thingy installed" environments.

That's because sooner or later, people using Windows will publish important AI papers or release must-have software or do something else with code that runs only on Windows "with the Linux-thingy installed." The "Linux-thingy" becomes a dependency that gets installed in the background. It becomes a near-invisible component of Windows, abstracted away by layers of Microsoft APIs, code, standards, etc.

The "Linux-thingy" on Windows, in other words, becomes like an init system on Linux, which can eventually be replaced in whole or in part without most people caring or noticing. Most Linux users don't care if their Linux machine uses systemd, SysVinit, or even the dead-ended Upstart. That's the "Extinguish" part.


I do think that's a real concern. However, in some ways this is an admission of failure: users like what's in WSL enough that they want more Linux, versus just some subsystem consumed by other apps.

Also, I think Linux users (desktop/server) who actually write code and provide updates for programs aren't likely to pause the world for WSL. It's not exactly 'Linux', nor the destination for their code; WSL risks becoming obsolete if not properly maintained.

It's also good in some sense: it exposes users who might never have had the guts to run a live CD or deal with modern hardware foibles to Linux capabilities and the like.


I hate how this is constantly brought up. Yes, that was a Microsoft strategy and might still be. With that said, Microsoft has no power to make people switch. If people switch off desktop Linux in favor of Windows, it's because Windows is either a better product or Linux was too much of an issue. Why shouldn't consumers choose the better product for themselves? Why shouldn't Microsoft or Linux compete for business? Why do you care so much whether someone wants to use Linux or Windows?

>With the "benefit" of forcing the Linux kernel developers to pay the maintenance costs of course.

No. Nobody is forced to maintain this. If Microsoft stops maintaining it then Linux is free to kick this to the curb.


> This patch does nothing but fragment the Linux ecosystem

Proof that Microsoft really has embraced Open Source development.


i (and others) called this 3 years ago[1].

people need to stop using windows, entirely. i recently built a gaming machine, pretty high spec, and i've resolved to never install windows on it. there are plenty of games i can't play, a few that i would like to. but i won't give money to developers that won't release linux versions of their games - even when they are utilising engines that have linux ports (PUBG being the most recent example i can think of, but any unreal/unity engine based game). there's no reason to use windows any more (for people like us, at least).

[1]: https://news.ycombinator.com/item?id=14320200


I'd love to be using an open source desktop, but sadly Linux Desktop is still a goddamn garbage fire as far as I'm concerned. Not that long ago (months) I would say that 4 out of 5 of my desktop and laptop computers ran Linux, but now that number is 1/5 because I just got sick of dealing with Linux Desktop bullshit on them. My main gaming rig runs nVidia because it is frankly best in class, and nVidia has shit Linux support so that's a no-go. I also have a used Oculus Rift, which again rules out Linux.

There's a reason Windows is still dominant in the Desktop space despite all its problems.


> Not that long ago (months) I would say that 4 out of 5 of my desktop and laptop computers ran Linux, but now that number is 1/5 because I just got sick of dealing with Linux Desktop bullshit on them.

interesting. i put ubuntu 20.04 on this and it's been plain sailing. now usually i'd say i've got a high tolerance for bullshit when it comes to linux, but i'm not exaggerating when i say i did not have to touch the terminal to:

1. install it without any extra crap

2. switch from nouveau to the nvidia driver - version 440.64, a very recent driver (which isn't very fair to nouveau, i didn't even give it a try, really)

3. install steam and games and have them work flawlessly from the get-go. games i've played:

* counter-strike global offensive (a lot..)

* the binding of isaac (would be bad if this didn't run!)

* deus ex mankind divided (was _very_ surprised to see this available and working perfectly, to be honest)

* black mesa

a short list, but i've only had the machine for a week.

i know it's stupid to say "hey i didn't have to use the terminal", but really, ubuntu has come a long way. i couldn't have said that last year, i don't think.

> nVidia has shit Linux support so that's a no-go.

eh, i'm gonna respectfully disagree there. linux support from nvidia has been a bit patchy (and the nvidia driver on windows isn't exactly a dream at times, either), but i've been using the nvidia driver on linux on various installs on various architectures on various hardware for _years_ and i've never had a significant problem. AMD, on the other hand.. i don't miss fglrx.

> I also have a used Oculus Rift, which again rules out Linux.

well i also won't support facebook, so that's not an option for me :)

honestly, it's _really_ disheartening to see the replies to this comment and see how far the normalisation of deviance has come. the _only_ tool we have as end users to stop companies from being shit is to vote with our wallets. and yet the majority of responses is to just give up and accept that that's how it is.

i think we're doomed as a species :)


What kind of linux desktop bullshit? Has it been getting worse?


No, it just isn't getting better. Any time you want to do something even remotely interesting, like install software that is actually up to date and therefore not in the repo, you have to jump through a bunch of hoops. One of my other use cases involved a device with only a 16GB internal disk, so naturally I would like to install applications to an external disk so they actually fit. This is usually trivial on Windows but not possible with any package manager on Linux [0]. I still constantly run into issues with hardware, especially sound and video, that require hours of googling and manually tweaking config files to fix. Oh yeah, and half the time the results from Google for any issue apply to the way things were done 5 years ago, which in typical Linux fashion means there's an entirely new way to do it now that bears no resemblance.

[0] Flatpak can do it if you setup an entirely new installation on that disk, but flatpak has its own issues. Snaps can't do it at all, and while AppImage is really great relatively few projects distribute that way.


what do you mean you wanted to install applications to an external disk?

if you have a separate OS on the external drive, you could chroot in and install it. If you just want that one application to be on the other disk, you could mount it to wherever on the filesystem the program would be installed. However, you probably don't wanna do that on an application-by-application basis. You could mount /usr/bin/ on the external drive, for example, or use LVM to share disk space between the disks.

Flatpak, Snap, AppImage and other portable formats aren't really preferable. They lead to a software ecosystem that amounts to just downloading and running executable binaries off the internet, each with their own overhead.


> what do you mean you wanted to install applications to an external disk?

Kinda illustrates my point about this being a foreign concept to Linux Desktop people. It's pretty simple: I want to put an application on an external disk and run it from there.

> if you have a separate OS on the external drive, you could chroot in and install it.

No thanks. I'd just like to have the application stored on an external disk, and execute it on the OS I have installed.

> If you just want that one application to be on the other disk, you could mount it to wherever on the filesystem the program would be installed.

Not really, because Linux likes to have an application spread its files all over the hierarchy, so in reality I need to use some form of union-mounting and/or symlinks. Of course that is only sufficient if there are no library conflicts between what the application wants and what the system uses; in that case I need to use LD_LIBRARY_PATH and other tricks. In some cases I'll need a launch script that calls a different ld.so.
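For what it's worth, the launch-script trick can be sketched in a few lines (the layout and the `myapp` name are hypothetical stand-ins). This builds a throwaway app tree under /tmp and a relocatable launcher that points the loader at the bundled lib/ before exec'ing the binary:

```shell
#!/bin/sh
# Build a throwaway app layout to stand in for an app on an external disk.
APP=/tmp/demo-app
mkdir -p "$APP/bin" "$APP/lib"
printf '#!/bin/sh\necho "lib path: $LD_LIBRARY_PATH"\n' > "$APP/bin/myapp"
chmod +x "$APP/bin/myapp"

# The relocatable launcher: resolve its own directory (i.e. wherever the
# disk happens to be mounted), prefer the bundled libraries, run the binary.
cat > "$APP/run.sh" <<'EOF'
#!/bin/sh
APPDIR="$(cd "$(dirname "$0")" && pwd)"
export LD_LIBRARY_PATH="$APPDIR/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
exec "$APPDIR/bin/myapp" "$@"
EOF
chmod +x "$APP/run.sh"
"$APP/run.sh"
```

This is roughly the pattern AppImage's AppRun entry point uses, which is why an AppImage can live on any disk.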

That's a lot of hoops compared to how sane operating systems do it.

> LVM to share disk space between the disks

System breaks when disk is removed. No good.

> Flatpak, Snap, Appimage and other portable format aren't really preferable. they lead to a software ecosystem that amounts to just downloading and running executable binaries off the internet, each with their own overhead.

Which is pretty much what I want, because the alternative is dealing with the bullshit I mentioned above whenever you step outside the package manager's sandbox.


NixOS gets you there, I think. They deviate from the standard hierarchy to keep each application in its own directory, which could be installed on its own drive if need be, with symlinks back to the main tree for legacy purposes. Like /bin/sh would be a symlink to /nix/store/s/5rnfzla9kcx4mj5zdc7nlnv8na1najvg-bash-4.3.43/bash but only because there's enough legacy out there that expects /bin/sh to be runnable verbatim.


Now if only I didn't need to learn a new language in addition to all the OS tooling to use NixOS.

This is what AppImage gets that the rest of Linux doesn't seem to: if an application is a single file/folder, you don't need anything more complicated than a file manager to manage it.

P.S.: after a quick glance at some documentation, I don't see any mechanism that would allow me to install nix packages to arbitrary locations anyway.


I'd like to point out that having random programs scattered across external HDDs is an anti-feature to me. I never remember which files are where as it is. And it's too easy to screw things up by unplugging the drive...

Nevertheless it can be done: if you are dedicated, recompile the program with the prefix changed to the mount point of your separate drive. Presto, automated installs to your external drive. Now, linking up all the bits can still be a right headache, but this is just a bigger nightmare on Windows (what will you DO if your drive letter changes? You mean I don't have any system to automatically link up random programs on random drives which may or may not be present? Gosh darn, that package manager sounds like a good idea now...)

If you're using a laptop, you're probably due for an upgrade in HDD capacity; or if you're lucky enough to be on a desktop, I'd recommend a board with hotplug capability and using LVM. I think no matter your platform you're going to have a hell of a lot of problems doing things your way. If you can patch the package manager to be able to do this, amazing! I don't think any system like this exists, period.


You mean you want an actual installer, like in DOS or Windows? IMHO, it's crazy.

I had a similar problem. I joined two physical disks into one logical volume using LVM and forgot about the problem. :-/


> You mean you want an actual installer, like in DOS or Windows? IMHO, it's crazy.

Well that's still better, in my opinion, than package managers and all their various restrictions. But ideally I want something like AppImage, where an application is a singular entity that can just be moved at will and run from wherever [0]. Most DOS programs and many Windows programs (if you extract them from the installer) will still work like that.

> I had a similar problem. I joined two physical disks into one logical volume using LVM and forgot about the problem. :-/

If you unplug the external disk, your system stops working. This is not what I want, as I mentioned already in the post you're replying to.

[0] Classic Mac OS, RISC OS, and NeXT all had programs that worked like this too. Linux has yet to achieve the flexibility in application management afforded by the OSs of the 1980s.


Hmm, that's fair. I'd encourage you to try something outside of the standard redhat/ubuntu distros. Manjaro is nice and is pretty much always up to date. Also has access to the AUR, so there are packages for pretty much everything. It's a lot easier to do interesting stuff.

LVM can make two disks appear like they're one, but of course if you remove any of them they both fail. Doesn't really solve the underlying "I want to install apps outside of the root partition" problem though.


> I'd encourage you to try something outside of the standard redhat/ubuntu distros.

Generally speaking they have the same problems as the mainstream distros, but with the additional caveat of even less chance of googling solutions and worse or no support at all from non-oss software.

> LVM can make two disks appear like they're one, but of course if you remove any of them they both fail. Doesn't really solve the underlying "I want to install apps outside of the root partition" problem though.

Precisely. The very concept is just so foreign to most of the Linux community that even talking about it gets you strange looks. It is actually possible to do with some significant AUFS-fu, but it's a hell of a hoop to jump through for functionality that is pretty natural in every other desktop OS that ever existed.


I've heard GoboLinux [0] does not follow the Filesystem Hierarchy Standard.

[0] https://en.wikipedia.org/wiki/GoboLinux

----

What you describe is like installing in $HOME. It's not unusual: Python virtualenv, Ruby rbenv, node_modules. It comes with a trade-off: either the system knows where to search, or you have to define it per project. Under the FHS, the entire Linux tree is one project. On Windows... configuration is a pain.
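A virtualenv illustrates the per-prefix pattern: the whole tree lives under one directory you pick (the /tmp path below is just for illustration):

```shell
# Create a self-contained Python tree under an arbitrary prefix;
# --without-pip keeps the example dependency-free.
python3 -m venv --without-pip /tmp/demo-env
# The environment's interpreter reports the prefix it lives under:
/tmp/demo-env/bin/python -c 'import sys; print(sys.prefix)'
```

Note the usual caveat: the scripts inside record that prefix, so a venv is tied to where it was created rather than freely relocatable.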

Flatpak someday.


Now that I think about it, Ruby's gems layout is not like the FHS:

    └── gems
        ├── irb-1.2.3
        │   ├── exe
        │   │   └── irb
        │   └── lib
        │       ├── irb
        │       └── irb.rb
        └── rack-2.2.2
            ├── bin
            │   └── rackup
            └── lib
                ├── rack
                └── rack.rb
    
vs

    ├── bin
    │   ├── irb
    │   └── rackup
    └── ruby
        ├── irb
        ├── irb.rb
        ├── rack
        └── rack.rb
    
The FHS approach does not support multiple versions; the gems approach requires declaring a version in code or a Gemfile, playing with PATH, and accepting considerable slowdown: https://github.com/Shopify/bootsnap


I think that sums up my issues with desktop Linux: any user has at least some niche requirements. For mine they are fairly doable on MacOS/Windows, but they require hours of staring at documentation and downloading/compiling source code in Linux.

I still write code in a Linux environment because that's where the ecosystem is, but for my daily driver I've given up on desktop Linux. And I've found that WSL gets me a Linux based development environment side by side with Windows with less pain than dual booting. I've found that when it comes to using Linux smoothly, it's a lot easier to shoehorn myself into the happy paths than to bend my setup into my way of doing things.


As a game developer: for most mid-tier indie titles, releasing a Linux version costs more than it makes, so, no.

And regarding using Windows: a company's behavior is important, but product quality, especially when it's a tool you use every day for work, is more important. Linux is OK for certain types of developers; it's still awful for normal users, abysmal for office work, and obviously not a choice if you develop Windows software.


> As a game developer: for most mid-tier indie titles, releasing a Linux version costs more than it makes, so, no.

What are the main costs of releasing a Linux version of a game when using Unity or Unreal? My limited experience with Unity has been "check the Linux box, click the build button". It even cross-compiles no problem. People I know tell me Unreal is similar.

It's not like they have to do a full run of play-testing for Linux. Just check if it starts, play for 10 minutes and release that. We know we're a minority so we're willing to put up with bugs. Even if there's only 100 people who'd buy it only if it ran on Linux, that should pretty much cover the couple hours of dev time.


I can tell you've never released a popular product on Linux. It's a lottttt more effort than you're implying. To do it well, you need to have Linux as a target from the start of your development, it's not something you can tack on to the end. Linux graphics drivers are very different from Windows (the AMD stack isn't even the same codebase). Linux window managers are... wild. You need to build to an ABI that will be supported across distros, which mostly means the Steam Runtime, which means you need to go learn about and understand that. You can't introduce any platform assumptions ("ehh, I'll just hardcode this path with a backslash... oops"). Once you release, you're talking support for dozens of different distros, different versions of distros, different package managers, different filesystems. I know what you're going to say: "Just support Ubuntu LTS and ignore everyone else". Well, now you've just cut your customer base by at least 75%[1], and your customers on other distros are going to demand support anyway. Do you tell a paying customer to F-off because they're not using a distro with only ~20% of the market?

Supporting Linux is hard. So is supporting Windows, but if Windows makes up 98% of your customer base, it's justified. 2%? Ehhhhh.

(P.S. I love Linux. See my profile. This is the kind of stuff I deal with every single day.)

[1] https://boilingsteam.com/we-dont-game-on-the-same-distros-no...


Open source your games. Open source is the only practical way that existing distros are able to test and support the majority of the packages they ship. If you do that, the cost of supporting all those other things falls on the distros, not on you, but you need to actually play ball with them and give them something workable that isn't an opaque binary blob that is illegal to redistribute, modify, or reverse-engineer. The feelings you're describing (that you need to shoulder 100% of those costs and have no other options) are a side effect of the game industry's cargo-culting of closed-source business models and refusal to even consider that there are ways to make money from open source.


Yeah, the solution to a platform not making enough money to justify supporting it is... giving away your game for free! The self-centered attitude of your comment is astounding, but all too common in open source communities.


You don't have to give your game away for free. I urge you to look into how prominent open source companies are actually making money and think about how it can be applied to games. It's not an impossible reality.

You claim I am being "self-centered", but the fact is, if you don't provide something that the distros can work with and instead try to route around everybody and ship an untestable blob, then that's your fault. The reason everyone tells you to just support Ubuntu LTS is that they are the only ones with the resources to support all these random binary blobs for X amount of years, and that comes with all the pains associated with it, as you are well aware.


> You claim I am being "self-centered" but the fact is if you don't provide something that the distros can work with

Alternative view: distros don't provide a stable platform that developers can target.


Some do provide that, such as Ubuntu LTS and RHEL. Others don't.

Expecting every distro to have the exact same release & support cycle is nonsensical.


Linux LTS releases are not particularly stable from the perspective of someone who wants to distribute applications as binaries. The platform API can change with every release, i.e. every two years. It's incredibly unlikely that a binary compiled for Ubuntu 18.04 will be able to run on Ubuntu 20.04 (as in Python, etc.).

To contrast, if you take a Win32 game from 2003 that targets DirectX 9, it probably runs fine on the latest Windows releases. (You might have to enable some compatibility mode.)

"BUT BUT BUT if it's packaged with the distro it's no problem." Remember we're talking about proprietary applications (mostly games) here. Having these packaged with distros for years after the devs moved on to the next project is just a pipe dream.


My original statement was that if this is really a problem for you, then you need to stop writing and distributing proprietary binaries and open source your game. I know this is a hard pill for game developers to swallow, but there is no other way. Open source communities move fast; they are not going to slow down just to support some opaque binary blob that is illegal to redistribute or fix bugs in, and that the original developer doesn't even care about anyway. It is nonsensical to expect these communities to work exactly like Windows. These are not Fortune 500 companies with billions in the bank, like Microsoft, that can afford to keep innovating while also supporting every single legacy program in existence forever. The only way open source communities can provide the same level of support is if you provide source code that other interested parties can keep up to date without worry of being sued.


These prominent open source companies that you're talking about make money from consulting or cloud services. There's no well-understood and well-tested way to do that as a game company.

It doesn't mean that there's no such way, but it does mean that attempting to find it is very risky. Making games is already a very high-risk, high-reward industry, so adding this amount of risk to the equation is advice that's insane for all but the most successful companies.


A multiplayer game is just like any other cloud service in terms of economics. And a game that allows a certain degree of customization is analogous to a consultancy. Find the pain points your business customers/partners are having and then charge for solutions. If you don't have any business customers/partners then get some, not having them is a much bigger risk than choosing any specific development model.

One of the ways these companies can reduce risk is actually by using other proven open source components. If they won't even make an effort to try to expand this, then there will never be a well-understood way.


> A multiplayer game is just like any other cloud service in terms of economics.

No, it's not - this is an absurd statement. Cloud services get most of their income from big and medium companies, billing tens of thousands of dollars in privately negotiated agreements. Digital Ocean and AWS don't live on single developers paying them $5. All their real money is in B2B.

Multiplayer games either get a flat subscription from all their playerbase, or rely on micropayments. Either way, they spend much less on account management where they interact with customers as individuals, and much more on analytics, where they measure and work with their customers in aggregate. These are completely different business models, that completely shape companies that operate on them.

> Find the pain points your business customers/partners are having and then charge for solutions.

Games are not solving any problems - they are entertainment products. I've seen people who have tried to reason about games in the terms you're using, and it always yielded hilarious (but also sad) results.


Open sourcing doesn't mean you need to provide all the sounds/music/3D models/assets/etc. Those can all be copyrighted separately.


> Open source your games. Open source is the only practical way that existing distros are able to test and support the majority of packages that they ship

Which might be one of the reasons Linux Desktop is so awful to develop for.


The "Linux Desktop" is not and has never been a thing beyond a vague marketing term. What you're thinking of is a loose collection of unrelated projects. Pick a single stack and go with it and things get better.


....Which might be one of the reasons Linux Desktop is so awful to develop for.

You wonder why people ignore your platform? Saying it isn't actually a single platform and instead is dozens of different platforms just means each of those platforms is even less relevant. Saying you're not just one platform with 10% market share but actually 20 with 0.5% each doesn't make the case for shipping on Linux Desktop any better.


I'm not making a case for shipping on the "Linux desktop" because that doesn't exist and has never been a real platform. Most smaller distros are not trying to be "relevant" to whatever the gaming community's fad of the day is. They're perfectly fine filling what niche they do. If you want to target those smaller distros as a platform, you can give them some source code to work with, or you can pay all the cost of maintaining things yourself, or you can just ignore those. They will do fine without your game.


So what are you saying? Developers should just choose a major distro and target that, then get all the flack for issues their product has on every distro that isn't supported? Because that's basically what's already happening and developers hate it, which is why they often don't ship on Linux at all.

And many proprietary pieces of software license components from other proprietary pieces of software, so that even if they did want to open their code they'd have to strip pieces out of it anyway which doesn't really help the cause of distros integrating it. And even if it did, then the developers are reliant on the maintainers for their relationship to their customers. Have an issue with the product? Oh, it turns out that's because of this patch made by the maintainer of the package for that distro, who now has to be contacted and convinced to fix it, which they may decide not to for arbitrary reasons. Even open source developers have problems with this!


In general, distro maintainers can't help you with legal conundrums you created for yourself (you signed restrictive NDAs and proprietary license agreements and didn't consider the fine print until it was too late) or with other unrelated market problems (lack of popularity, lack of developer interest in supporting your game).

If you need to strip out some parts and there is enough will in the community to replace those pieces that you stripped out, then it will happen. If you have a product that is anywhere near being popular among the FOSS developer communities then I don't think it makes sense for you to claim that this will doesn't exist, or that distro maintainers will lose interest.


> What are the main costs of releasing a Linux version of a game when using Unity or Unreal? My limited experience with Unity has been "check the Linux box, click the build button". It even cross-compiles no problem. People I know tell me Unreal is similar.

For a simple game that uses the entire UE4 stack, you might be able to get away with that if none of your code is Windows-specific and it works exactly the same on the Linux distribution you are targeting. Once you start using your own middleware and libraries from outside the default engine, you need to make sure every single one works across all the platforms you're targeting. Many won't have a Linux-compatible version, and those that do may only work against specific distributions and hardware. Even then: Have they changed the window manager? What else have they customised?

There have been many issues with anti-cheat solutions over the years on Linux.

> It's not like they have to do a full run of play-testing for Linux. Just check if it starts, play for 10 minutes and release that.

For a simple game, you may get away with this. Anything larger will require significantly more testing. I've been a producer on large games, and the QA process starts very early on. You don't 'finish' a game and then QA starts - It's an ongoing process that consists of significantly more than just launching a game.

>We know we're a minority so we're willing to put up with bugs

Some people don't see it that way though. People who have paid for a product have the right to expect it to work properly. Support tickets from Linux users are often disproportionately high [1], so the time investment against payback can be problematic.

GPU driver support on Linux can also still be problematic. From feature differences against Windows, to crashes. These all take additional development time. Linux developers are often more difficult to acquire in the game dev world. QA even more so.

Game dev is really hard. Some of it is hidden by engines like UE4 on the surface, but as soon as you start digging down into serious development, it's difficult.

[1] https://twitter.com/bgolus/status/1080213166116597760


Probably the most relevant issue here is the DirectX stack or using some Windows-specific API (why? DirectX I maybe got at one point; the others speak to some absurd Microsoft-is-best-because-I-paid-money-ism).

> Many won't have a Linux compatible version, and those that do may only work against specific distributions and hardware. Even then: Have they changed the window manager? What have else they customised?

Outside of truly exotic window managers, the biggest issue I've ever seen with ports has to do with dependencies. Flatpak/Snaps solve most if not all of those problems.
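For what it's worth, the reason Flatpak sidesteps the dependency problem is that the manifest pins a frozen runtime version, so the game links against the same platform on every distro instead of whatever libraries the host ships this year. A minimal sketch (the app id, binary name, and build command are made up for illustration):

```json
{
  "app-id": "com.example.MyGame",
  "runtime": "org.freedesktop.Platform",
  "runtime-version": "19.08",
  "sdk": "org.freedesktop.Sdk",
  "command": "mygame",
  "modules": [
    {
      "name": "mygame",
      "buildsystem": "simple",
      "build-commands": [
        "install -D mygame /app/bin/mygame"
      ]
    }
  ]
}
```

The `runtime-version` is the key line: as long as that runtime branch exists, the game sees the same libraries regardless of distro.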

> There have been many issues with anti-cheat solutions over the years on Linux.

That's assuming anti-cheat systems are ethical, even. Most games I play I want to be multiplayer, and moddable. Most games I play can do both of those things and work in a cross platform modding language. Not hard. I don't want any anti-cheat systems for a complex game with tons of modding.

The only place anti-cheat systems make sense is with e-sports style games. Of which, the best anti-cheat systems work server side. Anything else is invasive and tantamount to illegal spying. Shame on you, go burn in peeping Tom hell you perverse game dev.

> For a simple game, you may get away with this. Anything larger will require significantly more testing.

I find this incredibly hard to believe. If you are working within the engine the majority of the time, the majority of the game will play wherever the engine works. Which brings us back to one of your first comments:

> For a simple game that uses the entire UE4 stack, you might be able to get away with that

For which games are you not just using the UE4 stack? I.e. what specific problems are game devs normally experiencing causing their code to be non-portable? A bunch of low level assembly code and non standard C++?

> but as soon as you start digging down into serious development

I challenge you to define 'serious' development. That sounds suspiciously like you're wasting your time fighting infrastructure, or just doing things in a non-portable manner. Solving problems is cross-platform, so you can't be doing much serious game coding; it's probably problem solving for your particular platform (e.g., as I already mentioned, doing a bunch of work in DirectX shaders or something).


Well, I've been working with Unity for 11 years now, but my experience with building for Linux is limited, so I'll go with experience from mobile and console development. Between shaders (which never cease to surprise with platform-specific glitches), the filesystem (although Unity is supposed to abstract it out, once you get serious about resource management this abstraction starts to leak), input (because you want to support gamepads with minimum hassle for the player), and a million unknown unknowns that hopefully will be found by QA but realistically by very angry players, I'd budget at least a couple of man-months for this. With sales in the hundreds or even tens of thousands, we can expect to get about 2% of the audience from Linux, which will account for single-digit thousands, or hundreds, of copies sold. With a price tag of $10-20, after the Steam cut, taxes, regional prices, sales and refunds, it's about $5-10 of revenue per unit, so in total, about $10k.

If that would just be the money a single developer got for 2-3 months of work, it could still be fine for some regions (not everyone on HN is from US). But with a studio, you also have to spend money on marketing, payroll taxes, rent, and development of future projects, which most likely sell even worse.

So, no, I really don't see how porting to Linux would pay. Most of the developers I talked to who did it admitted that supporting OSX and Linux turned out to be a giant waste.


I think there's merit to the argument of 'big corporations can't get things done efficiently' here. I think as you mentioned, for a small scale dev it can be done. I agree, shading could be better across platforms, graphics card manufacturers, even, but that all pales in comparison to a large inefficient corp making grossly over-complex games with staggeringly little added value per hired developer (oh yay another 1mil triangles added to the 100th remake of the same garbage FPS or NFL game, extreme sarcasm).

But outside EA Games e.g., some of the best games I've ever played were written by solo devs in a cross platform engine you may have heard of called Flash. Yeah, not always smooth sailing on Linux, but it worked, pretty well even.


> Linux is ok for certain types of developers, it's still awful for normal users, abysmal for office work and obviously not a choice if you develop Windows software.

Don't want to get into an OS war here but merely stating your opinion as fact without supportive evidence is not in the spirit of HackerNews.

> it's still awful for normal users, abysmal for office work

For office work a lot of workload has moved to the web browser. Creating documents and working on spreadsheets for instance can now all be done online with Microsoft's Office offering and other competing products like Google Docs.

And why is it awful for normal users? ChromeOS for example is just linux, and that whole platform is targeted towards 'normal users'. I use linux on all of my personal devices and I use these devices to do very 'normal user' activities like watching YT, social media, blogging, etc. There are many communities that consist of 'normal users' like /r/Thinkpads or /r/LinuxOnThinkpad that are currently enjoying linux.

Precisely why do you think it's awful for normal users and office work?


> Don't want to get into an OS war here but merely stating your opinion as fact without supportive evidence is not in the spirit of HackerNews.

These things are pretty much self evident.

> Linux is ok for certain types of developers

Web developers and high-performance computing. Game developers? Haha, no. Embedded? Too many proprietary toolchains that don't run on Linux. Productivity and business applications? See below section on office work. ML? nVidia [apparently I'm incorrect on this one].

> it's still awful for normal users

Alright, this is a bit subjective, but is supported by the stats that show Linux Desktop still in a distant and well deserved third place. You list a few tasks that you seem to believe are "normal user" tasks, I submit to you that you drastically underestimate the things a normal desktop computer user does with their computer, since most of the tasks you listed are now things that people do on phones and tablets.

> abysmal for office work

In most offices, Microsoft Office is an indispensable productivity tool (which many HNers, having little experience with real office work, will underestimate severely) and it is only supported on Windows (No, the web version is not the same).


> Embedded? Too many proprietary toolchains that don't run on Linux.

It's funny/surprising that you'd say that, given that Linux is the single most dominant OS in the embedded device space. What toolchains are you referring to? Here are some well known ones.

https://www.yoctoproject.org/ https://www.ptxdist.org/ https://openwrt.org/

> ML? nVidia [apparently I'm incorrect on this one].

Not only are you wrong about CUDA, but Google's custom-built chip designed for ML runs solely on Linux: https://en.wikipedia.org/wiki/Tensor_processing_unit

ML almost exclusively runs on Linux.

> and it is only supported on Windows

Office also runs on macOS. You can also run Office in Linux using WINE.

> Alright, this is a bit subjective, but is supported by the stats that show Linux Desktop still in a distant and well deserved third place. You list a few tasks that you seem to believe are "normal user" tasks, I submit to you that you drastically underestimate the things a normal desktop computer user does with their computer, since most of the tasks you listed are now things that people do on phones and tablets.

It's really easy to say anything without providing any supportive evidence to substantiate your claims. Pray tell what tasks you are referring to that "normal users" partake that supposedly can't be done in desktop linux.


>It's funny/surprising that you'd say that, given that Linux is the single most dominant OS in the embedded device space. What toolchains are you referring to? Here are some well known ones.

No, and no.

The parent was referring to embedded development toolchains - compilers, IDEs, endless small utilities, debuggers, analyzers, etc. etc. Commercial tooling is almost all Windows-exclusive, even if it's using a smattering of open-source bits underneath.

Also no. The "embedded device space" is v a s t. New projects for phones and similar hardware profiles might choose Android or another flavor of embedded Linux, for sure. Even within sexytech consumer products you're as likely targeting something like VxWorks or QNX as not. The physical world, the domain of embedded devices, is not smartphones and SaaS. Unless you're talking about a very specific product category it's laughable to call Linux dominant here.


> The parent was referring to embedded development toolchains - compilers, IDEs, endless small utilities, debuggers, analyzers, etc. etc.

I know what the parent was referring to.

> Commercial tooling is almost all Windows-exclusive, even if it's using a smattering of open-source bits underneath.

No it's not. Literally both of the embedded OS' you mentioned (VxWorks/QNX) support Linux as first class hosts.

https://blackberry.qnx.com/en/software-solutions/embedded-so...

https://www.windriver.com/support/site_configuration/docs/wr...

What Windows exclusive tooling are you talking about?

And you are underestimating the prevalence of Linux as the OS for embedded devices.


ML is actually mostly done on linux. CUDA works absolutely great with nvidia binary drivers. Display performance is irrelevant.


Because when I open a 300-page, media-heavy game design document in Google Docs on my i7, 32GB computer, it still lags when I scroll. And the last time I tried to use OpenOffice (or whatever the fork was called, I mix them up), I couldn't shake the feeling that the UX was created by developers who just wanted to check the boxes in a list of features, without any field testing whatsoever.


> not a choice if you develop Windows software

A minor nitpick, but cross-platform development is usually painful work regardless of which platform you use.


I, for one, like to use my computer. I do not have time for my computer to be some battleground of political OSS zealotry.


Seriously, the problem is apps wanting unfettered, opaque access to your system and data. If the default Steam install can r/w everywhere, anti-cheat needs kernel modules, and a game can install a Trojan (like recently exploited with Source), I say: no thanks to that. Have a VM for gaming and all proprietary stuff, and say a big fucking NO to the shitshow the "games" ecosystem as a whole is...


This feels more like a reverse takeover to me. MS actually has a legacy problem in the form of the windows ecosystem. Under Ballmer this was a problem that required defensive moves and rhetoric to not lose people to Linux and OS X and it didn't work. Under Nadella, a bait and switch was initiated where the problem (windows legacy) is slowly switched out for Linux. This is not the embrace and extend policy of last century. They don't care about software licenses; they care about SAAS revenue these days.

In the process, they are rapidly regaining developer trust by building productive relations with the same developers they were alienating under Ballmer.

It's smart. They own Github now. Most stuff that happens there is not windows centric. But increasingly the MS developer ecosystem is being untangled from windows in any case. That's necessary to future proof it.

While the windows kernel is nice and the driver ecosystem around it serves MS well, it's actually been a problem for them as well. They've failed in the phone market (repeatedly) because linux was a better fit for OEMs. Also they've had Google and Apple compete with them effectively with the ipad and chrome os in a market where MS was peddling crippled laptops. These are all examples where windows legacy was part of the problem for MS. They weren't able to compete there. Their crippled laptops were too expensive and uncrippling them would kill their high end market. So people bought ipads and chrome os laptops instead.

Increasingly, desktop software that is not web based is getting more and more of a niche. Even Office at this point runs well in a browser, and MS actively supports native applications on all their competitors' platforms (Android, iOS, OSX, etc.). At this point that's not optional from a revenue point of view. A Windows-only Office would be a problem at this stage. They support it and they actually do a decent job too. This too is something that happened very quickly under Nadella. A lot of the Azure revenue is in fact Office revenue.

IMHO this move to support GPU virtualization and ultimately running linux desktop apps, gets them access to a lot of niche OSS applications for Linux, the entirety of the machine learning ecosystem, and the community of professionals using that. Not all of them will switch to a windows laptop of course. But some will and they tend to be the type that spends money on their tools.

Additionally, they are doing a clever play with APIs, github, cloud native stuff in Azure, development tooling etc. where they are not blocking using non MS things but merely make the choice to buy into their premium SAAS subscriptions more attractive. All of this stuff is usable without doing that but if you are using VS Code already, have your code on Github, and are doing some AI stuff in the cloud, etc; it's a small step to buy into a well integrated ecosystem provided by MS where you are using a windows laptop with their oss tools, while deploying to Azure, and maybe doing your office stuff using office 365. The value is no longer in selling the OS but up-selling SAAS and making sure the choice to buy into that is logical, easy, and natural no matter what you use.

Github codespaces is a great example of that. I bet it will be really easy to setup and come with some nice SAAS subscription. It's all OSS and you are welcome to run it on AWS or your own cloud. But I bet it will be easier to run in Azure. MS tools, MS cloud, MS as the easiest place to run linux developer tools (!!!), etc. They won't force anyone to switch. They don't care about individual developers but they do care about what their bosses sign up for in terms of SAAS services. That's where the money is.


> Under Nadella, a bait and switch was initiated where the problem (windows legacy) is slowly switched out for Linux.

If that is what's happening, then what's the endgame? Switching WSL around so that "Windows host + Linux guest" becomes "Linux host + Windows guest", i.e. Windows becomes a Linux distribution running native Windows apps in a VM with seamless integration? I'm somewhat intrigued by the possibility, but I don't see how it could work given the ubiquity of vendor-supplied device drivers targeting Windows kernel API/ABIs.


A large part of their business is already running on or compatible with Linux. The endgame is that they get your money regardless of what OS you are on.

I'd expect the importance of the windows kernel for revenue growth to be increasingly less important over time. Of course they won't drop it outright; at least not right away and it's likely to stay relevant for e.g. pcs/laptops and gaming. As for games and vr content, a lot of game studios already use cross platform sdks and Linux support for games with and without emulation is pretty decent these days. Also, Android and IOS are big targets for games.

Most hardware vendors don't target windows exclusively. Some do of course but a lot of hardware works fine on other platforms even if vendors don't actively support that. Anything intended for data centers runs linux primarily. Most laptop vendors have a few linux friendly laptops at least to not lose out on the pro developer market that tends to actually buy their more expensive laptops. Likewise, most graphics card vendors want to support e.g. machine learning and that requires linux driver support.

But I get what you're saying. My observation is that MS is very friendly lately with Ubuntu. I don't think Mark Shuttleworth is interested in selling it outright, but an MS distribution might be a logical next step given their increased dependency on Linux on the desktop and in the cloud.


It looks like they are already at the "Extinguish" phase for email. Try reading this:

https://lkml.org/lkml/2020/5/19/1527

:(


I think that's a bit hyperbolic. The Outlook team's choice to not do word wrapping in the plain-text version of an email is unfortunate, but defensible IMO, considering that most people use an email client that can display HTML email. And before the defenders of plain-text-only email chime in, HTML email has features that most people actually want. The world has moved on from the time when emails were displayed in fixed-width terminals. So Microsoft has as well.

Disclosure: I work at Microsoft (on the Windows accessibility team). At work, I use Outlook. Outside of work, I use Mozilla Thunderbird, but even there I send HTML emails sometimes.


I don't think it's hyperbolic. If someone has sent you an email in plain text with wrapped lines, your mail client will know this and it would be reasonable to assume that perhaps the sender may be using a mail client not capable of rendering HTML emails nor autowrapping long lines. It would be a sensible choice to format the reply in the same format as that of the email received.

By stating "HTML email has features that most people actually want. The world has moved on...", you seem to be implying that the primary method of communication of Linux kernel developers is irrelevant and their use-case for email is not important. In fact, the more I think about this, this is a classic example of Embrace, Extend, Extinguish.


I think what we have here isn't a deliberate effort to extinguish a standard, but a clash between two different cultures. LKML follows traditional hacker norms, from the days of actual terminals and slow connections, whereas Outlook is built for the world of GUIs, WYSIWYG, and more or less high-speed connections. The latter is what the vast majority of people have chosen, so it's a sensible business decision for Microsoft to not pay much attention to the old ways. It's just unfortunate when those two worlds collide.


I appreciate your sentiment, but I don't know, I'm not convinced. You can have all the GUIs and WYSIWYGs you want, but as I said, email replies should default to the same format as that of those received. Thunderbird (a modern email client) does this, if I recall correctly. It's very easy to avoid this mess.


I even doubt the patch will be accepted upstream. That patch adds nothing for other distros and is not convenient to maintain upstream.


No. They are not that company anymore and it’s been clear they have not been that way for over a decade.

This is a step towards moving their API over to Linux so they can dump Windows as an OS and provide it as a Docker container service for enterprise.


> It appears that Microsoft has now initiated the "Extend" phase of their classic Embrace, Extend, Extinguish playbook.

Or, much worse, M$ is now trying to kill Linux as a desktop OS, the same way Elop killed Symbian as a smartphone OS.


How exactly will they extinguish Linux and open source?


While this was my initial thought as well, on second thought that doesn't make too much sense. Depending on whether you're wearing your tinfoil hat properly, you could say this is for testing the waters, but who the fsck would target their game at this? Write against native Linux but use DirectX for the graphics? Just why? Sounds like the most stupid thing ever.

The CUDA support thing from other comments below seems the only sane use case so far, and in that case I don't really see the EEE either, but just good old "make shit work".


EEE is incremental by nature. This represents a lever they could use to create an ecosystem of Linux software which only works on WSL.

> but who the fsck would target their game at this?

It doesn't have to be games. Maybe MS releases an ML visualization library for Linux which requires D3D. It's not so far fetched.

Given the cumulative pain developers have had to deal with supporting IE6 over the years, I think it's warranted to be wary of this kind of thing.


Eh, still not convinced. Porting that to OpenGL is half a day of work. Like I said, you might say they're testing the waters, but I'm just not paranoid enough to scream EEE yet.


> This also enables third party APIs, such as the popular NVIDIA Cuda compute API, to be hardware accelerated within a WSL environment.

Dollars to donuts this is why Microsoft is implementing this. GPU acceleration is becoming a critical feature for many users (but especially developers) and this will continue. If WSL is to be a serious competitor, this is necessary and I'm glad to see it showing up. This is true of cloud compute, too, and Microsoft is betting big on cloud as its future growth area.

> Only the rendering/compute aspect of the GPU are projected to the virtual machine, no display functionality is exposed.

The Linux gaming folks will be pretty sad about this one. Anyway, this isn't really a Linux port of DirectX. This is GPU compute via DirectX APIs.

So now, I'm just waiting on monitor mode/AF_PACKET for WSL...


Yup, from one of the MS staff replies further in the thread[1]

> There is a single usecase for this: WSL2 developer who wants to run machine learning on his GPU. The developer is working on his laptop, which is running Windows and that laptop has a single GPU that Windows is using.

Can't say I can get behind MS trying to shift maintenance for a Windows only "feature" onto the Linux devs here.

1. https://lkml.org/lkml/2020/5/19/1139


> Can't say I can get behind MS trying to shift maintenance for a Windows only "feature" onto the Linux devs here.

Interesting take on the situation. This is effectively a driver they need to get into the kernel (just one that targets a paravirtualization host and not “real” hardware), and Linus has been adamant that the correct way to write a driver in Linux is to upstream it into the kernel.

The perspective that upstreaming a driver into the Linux kernel is a burden for Linux kernel developers is one I haven’t heard before, and seems to clash with Linus’s typical stance. Is this something that has some prior examples? Genuinely curious.


Drivers for use by Linux, not for use by antagonists.


To be honest I don't quite understand what stopped those developers from running machine learning on their GPUs under Windows itself. Most frameworks work just fine. I've been doing quite a lot of TensorFlow with both Python and .NET.

The only time I faced the need for Linux box is trying a demo project from OpenAI, which did not use the features it required Linux for on a single machine anyway.


I believe the purpose is for devs who are deploying to Linux, their toolchain may not fully work in Windows, but they want to have a similar dev/test env within Windows... pretty much the whole point of Windows Subsystem for Linux (WSL). It's not that the frameworks don't work on Windows, it's the deployment tech, like Docker, Ansible, their build scripts etc etc. Some users may be doing something totally custom on top of the GPU without a framework layer but that is probably very few users. I'd have to guess though that those users would be the type of users that would get MS to go through the effort though...
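As a sketch of what that dev/test loop looks like: the project builds and runs inside the same image the Linux deployment target uses, so Windows stays purely a host. Everything here is a placeholder (base image, file names, entrypoint), not a recommendation:

```dockerfile
# Same base image as the (hypothetical) production deployment target
FROM ubuntu:18.04

RUN apt-get update && apt-get install -y --no-install-recommends \
        python3 python3-pip && \
    rm -rf /var/lib/apt/lists/*

WORKDIR /app

# Install pinned dependencies first so this layer caches between edits
COPY requirements.txt .
RUN pip3 install -r requirements.txt

COPY . .
CMD ["python3", "train.py"]
```

With the GPU support this article describes, the same container could then be started with GPU access from WSL2 (e.g. `docker run --gpus all ...`) without leaving Windows.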


Do people actually use Docker and Ansible a lot for ML???

Doing something custom on top of the GPU is also not much different on Windows, than Linux. CUDA is basically the same. OpenCL and Vulkan are available too.

I'd like to hear perspective of a person, who actually does ML specifically on Linux for some reason.


The reason to use Linux is to follow the crowd. I use Nvidia's docker container because the plurality of devs use it. Over the course of my career I've found that well-trodden paths tend to have a _lot_ fewer bugs thanks to other people finding them first, and when I do get some

https://github.com/pytorch/pytorch/issues/37790

weird

https://github.com/pytorch/pytorch/issues/32575

bug

https://github.com/pytorch/pytorch/issues/25301

I don't have to spend time explaining or justifying or isolating my setup.

Conversely though, my work is itself off the beaten path enough that I'm likely to run into weird bugs. If I was pushing images through a CNN, that'd be well-trodden enough on every platform that I'd be a lot less fussed about which particular platform I use.


Well, CUDA is only one thing. Especially in the Python environment, it's only a matter of time before you run into some weird dependency issue that is Windows-only. For example, getting XGBoost completely up and running on Windows requires you to either build it yourself or download a .dll from a university link. Installing it on Linux is just a proper pip install.

Also, Windows not having a built-in C compiler makes you dependent on the horribly convoluted Visual Studio stack, which a lot of Python ML libraries seem to depend on. Docker makes it a lot easier to run, and I almost always deploy in a Docker container because the ML modules I deliver are often interacted with as a black box with a REST API on top.


It looks like Anaconda supports XGBoost on Windows.

You might be right about the C compiler. But something itched when you mentioned Docker. Could getting the Windows SDK installed be harder than installing Docker?


I just use chocolatey for that > choco install windows-sdk-10.0

That being said, I had to avoid installing VS 2019 for quite a while because the Node.js native module build chain couldn't work with it. There are complexities.


Things like filename limits and command length limits seem to get hit on windows way more frequently, making all code more fragile in general, and a million other little things.

Even popular libraries like ZeroMQ don't support named pipes on Windows because of how complicated they are and how different from everywhere else.

Just determining what visual studio version is installed seems to trip up projects all the time.


Mostly doing several kinds of NLP: My actual setup is a Windows Laptop to SSH into Linux machines w/ tmux session. However, I really appreciate WSL for working offline, etc.

My main reason: It is the most convenient way to have Unix tools (grep/sort/cut/sed/less/...) and bash available. Cygwin always was a pain, MinGW / GitBash felt much better, but ultimately WSL just feels best.

These tools are incredibly valuable to my workflow. Sure, stuff like pandas can be nice for small datasets, and some data sits in some DB/Kafka/distributed system. But there have been countless cases where unix tools allowed me to take xxGB zipfiles of text and do basic examination or even build baseline models within a few hours.
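
For instance (file name, columns, and labels are all made up for illustration), a first pass over a text dump often needs nothing more than a short pipeline:

```shell
# Stand-in for a multi-GB tab-separated dump; for a real gzip you'd
# start the pipe with `zcat dump.gz | ...` instead of a file argument.
printf 'en\thello\nen\tworld\nde\thallo\n' > sample.tsv

head -n 2 sample.tsv                  # peek at the first records
cut -f1 sample.tsv | sort | uniq -c   # label distribution in one pass
grep -c '^en' sample.tsv              # row count for one label
```

No install, no GUI, and it streams, so file size barely matters.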

Sure, there always are alternatives to use these tools and there are many equivalents. But I would always prefer WSL + conda for Linux to a typical "Windows Conda" installation with that weird GUI and the need to install so many different applications to even just look into the first or last few lines of a huge textfile.

EDIT: That said, of course I can/could always just run a jupyter notebook under Windows using Windows CUDA + GPU and share files with a WSL bash where I do my modifications. But again, everything within the same system just feels better (ipython shell magic, no worries about whether paths to the same file are really identical, etc.) and while this is by no means a game-changer, it is just nicer that way.


I do all of my development on linux, if I can, but to be honest the GPU support is generally better on windows because that seems to be the main platform AMD and NVIDIA target - though linux support is not too bad. GPU support is the only potential benefit in using windows that I can think of though. Everything from package management, to build tools, FOSS support, community, troubleshooting, etc. is generally better on linux.


Docker at this point is InstallShield infrastructure for startup production-quality freeware


It is being used both for training and inference in order to package dependencies and scale up.


Just curious: what kind of models are you training, that require scale-up using Docker/Ansible/Kubernetes?


But Windows is a painful OS to use for anything other than gaming. Ideally, I'd like to see the exact opposite of this: run Linux with a Windows subsystem just for gaming.


Many companies, including the one I work for, have most development happening on Windows. It is far easier to manage for Enterprise use-cases, with all relevant services integrated and obtained from a single vendor.

Not to mention that there are quite a few development environments for more obscure platforms that still only exist for Windows.

Overall, since most development time is spent in an IDE, the OS is really of little relevance to software development. Sure, some people insist on using command line tools, and that is unlikely to be pleasant on Windows, but a lot of other developers don't, and we couldn't care less whether we're running our Emacs on Linux or Windows or Genera or whatever.


I’ve programmed in a professional capacity on macOS, Windows, and Linux. Windows used to be the worst, but with WSL and VS Code, I would say it has overtaken macOS in my mind, because WSL is better than using brew.


Sure, companies may love it, but developers hate it.

I use a Windows laptop at the company I currently work for, because everything is locked down and I wouldn't be able to get my own laptop connected to the network. (Or so I thought; a co-worker managed to use the Windows laptop as a bridge to his own Macbook.)

Now you're right that as long as I stay in the IDE, it's not so bad. But every once in a while I need to do something outside the IDE, and I immediately get slapped in the face by how stupid some things are. And because it's an enterprise environment, some things are even worse than usual; opening a folder, or saving something, can be unreasonably slow because either it's a network drive or it needs to be checked for viruses and malware while I'm trying to use it. Or for some other reason. I don't know, I just experience the extreme slowness.

Also, on top of the old terrible DOS shell, there's now also a Power Shell that's supposed to be better. It apparently has some powerful features I don't really grasp, but it's still not remotely as good as bash. And sometimes the command line really is unavoidable.

But the real pain is at home. When I activate Windows 10 on a new machine, I need to create a Microsoft account. I don't want one, but it takes serious determination to avoid it, because behind every message is another trick to sucker you into an MS account. When you finally do manage to create a local account, you're immediately expected to compromise your security with 3 insecurity questions, and no way to avoid it as far as I can tell. Previous versions of Windows did not have this stupidity.

Also, somehow Windows keeps losing my mic, speakers or camera. Once I've found the right troubleshooter, it immediately figures out how to fix it, which is great, but it also keeps losing them again. And finding the right troubleshooter takes a couple of steps and a bit of searching. I feel like I need to pin several relevant troubleshooters to the taskbar.

And then there's the total lack of access control. To install anything, you need to be admin. I gave my son a restricted account, but he can't do anything with it. I'd like to be able to create an account that can install games, but can't compromise the system. No such option in Windows. If you can do anything, you can do everything. Unless you're in an enterprise environment, in which case you often still can't do anything. So I guess more detailed access control does exist, but only for enterprise users or something.


You generally have some good points, and some that have more to do with preferences, or luck with hardware. I would note that the lack of anti-virus software is one of the reasons a lot of companies don't want to run Linux on employee systems. Locked-down Windows systems at least have the advantage that even if you Run as Administrator some .exe that you got emailed, there is a good chance that your AV will not allow it to run; if you're running something as sudo on Linux...

> And then there's the total lack of access control. To install anything, you need to be admin. I gave my son a restricted account, but he can't do anything with it. I'd like to be able to create an account that can instal games, but can't compromise the system. No such option in Windows. If you can do anything, you can do everything. Unless you're in an enterprise environment, in which case you often still can't do anything. So I guess more detailed access control does exist, but only for enterprise users or something.

Here I never understand this point. You can't do anything on a Linux system if you don't have sudo access - it's not like apt or yum have any special magic to allow non-admin users to install stuff. And if you can install software on a system, you can already do anything else. Especially Games, which install drivers to perform DRM and anti-cheat bull.

Now, if you want to look into it and waste quite a bit of time, Windows does allow you to configure access control at a very fine-grained level for access to non-system folders. But as long as the installers want to install things in system folders, there really isn't any solution.


I admit I've never really looked into how detailed Linux is in access rights, but on Unix systems, it's very normal to have install rights for specific directories. If Linux doesn't allow that, that would be disappointing, but I strongly suspect Linux allows this just as much as other unixen. So that would mean you can install stuff without sudo rights as long as you get group rights to the right directory. And that's a much safer approach to security than all-or-nothing.
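
As a rough sketch of what I mean (group name and paths are made up; the admin-side commands are shown as comments because they need root, and the runnable part below fakes the prefix in the current directory):

```shell
# One-time admin setup for a shared prefix that members of a group
# can install into without root:
#   groupadd devs
#   mkdir /opt/local && chown root:devs /opt/local && chmod 2775 /opt/local
# Demonstrated here with an unprivileged stand-in prefix:
PREFIX="$PWD/opt-local"
mkdir -p "$PREFIX/bin"
chmod g+ws "$PREFIX"                        # setgid: new files keep the group

# A group member "installs" a tool, no sudo involved:
printf '#!/bin/sh\necho installed\n' > "$PREFIX/bin/mytool"
chmod +x "$PREFIX/bin/mytool"
PATH="$PREFIX/bin:$PATH" mytool
```

Classic `./configure --prefix=/opt/local && make install` works the same way into such a directory, which is exactly the middle ground between "user can do nothing" and "user is root".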


I'm pretty sure neither `apt` nor `yum` (nor other common package managers) supports any way of running as non-root. Of course you can download the sources and compile yourself, or maybe even find a binary distribution with all dependencies included (good luck with that).


I disagree, Windows is very painful for me to use at a basic level compared to Linux. I would be very unlikely to take a job that forced me to develop under Windows.


The same is mostly true for me when trying to use a Linux desktop.

However, if I'm using IntelliJ or Emacs and Firefox, I don't really need to care what OS is running underneath too much.

Edit: of course, Linux and Mac are available for devs that prefer them. It's still much easier for IT to manage 7000 Windows desktops and a couple hundred Linux ones than it would be to manage 7000 Linux desktops.


I'm not sure I agree with that. 7k "normal" machines for any given value of "normal" seems like it would always be easier than 7k "normal" machines and 200 oddballs. Is the sysadmin tooling for Windows really that much better?


Are there even systems for managing many desktop machines running Linux?

- Could you push a patch to Linux systems and have it install at the user's convenience (with some end date)?

- Can you do that in waves without manually configuring things?

- Can you remotely wipe a system if required?

- Is there any popular anti-virus software for Linux, to protect company files in home folders from user mistakes?

- Can you help users install some software without giving them full access, but also without requiring IT intervention for every installation?


> It's still much easier for IT to manage 7000 Windows desktops and a couple hundred Linux ones than it would be to manage 7000 Linux desktops.

Why? I have never seen Windows being managed entirely hands-off whereas Linux just works.


I don't understand what "being managed entirely hands-off" means. I can assure you there is no manual intervention from IT on every desktop system in the fleet, so I would call that exactly "being managed hands-off". I am certain there are always a few systems that do need some manual intervention for whatever reason, but that seems to just be the way with computers. It's not like Ubuntu updates never break, or no one ever gets to install a broken package before it is retracted.


By 'entirely hands-off' I mean that the computer just keeps on running. On Linux, most packages are part of the repository and update automatically. On Windows, those packages have to be updated manually. Even if everything is automated, the computer will have to be restarted quite regularly. At least to me, Windows seems more difficult, especially since a fleet of desktop systems can be chosen to be perfectly Linux-compatible.

Where does the complexity on Linux come from that makes managing them more difficult?


Even on Linux, you need restarts if you want the updates to shared libraries to actually happen. You can't just apply a critical security update to OpenSSL for example and hope that the user actually restarts their programs at some point - if you care about the update being applied, you need to know that by some date all of the programs on that system are actually restarted, and a scheduled reboot is by far the simplest way to do this.
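
To illustrate (names are made up): on Linux a running process keeps the old, deleted copy of a file alive until it restarts, which is exactly why a scheduled reboot is the simple answer:

```shell
# Simulate an update replacing a library a process is still using:
echo data > oldlib.so
tail -f oldlib.so & READER=$!
sleep 1                                  # give tail time to open the file
rm oldlib.so                             # the "update" removes the old copy

ls -l /proc/$READER/fd | grep deleted    # process still holds the old file
kill $READER
```

Tools like Debian's checkrestart or Red Hat's needs-restarting automate this kind of scan over /proc, if I remember right, but a reboot remains the only blanket guarantee.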

Then there's the question of pushing an update to all managed computers. Maybe it's not a package update, but you want to change some SELinux policy for all users, or update some DNS server or the default search domain and so on.

Never mind the question of how you can instruct one of those Linux computers to delete all data it holds whenever it next connects to the internet (to handle the case of a stolen company laptop).

There are so many things that you need in an enterprise setting that have common (though probably quite expensive) tools available for Windows. Maybe some of these exist for Linux as well (I would expect RedHat to have some), but I'm not sure. Linux admin is usually reserved for servers much more than desktop computers.


Why would you want to delete the data? That's when you open the door for an attacker to access it. If the data is encrypted, it can remain on its partition because it is inaccessible.

Interestingly, apples have to be compared to oranges. On Linux, it is easy to identify the programs that are using a library, so it is easy to restart just the services that are patched. In general, things can be scripted even where no ready-made tools are available. But this requires somebody who understands the system. From a business perspective, this might be more expensive, or not, if the tools are expensive.


Wine / Steam's Proton does a decent job, and some older games even work better with Wine than with Windows 10.

If you have a spare GPU, a VM with PCI passthrough does an even better job, except for some anti-cheat software that artificially discriminates against this setup.

In theory it ought to be possible to switch a single GPU to/from a VM without a reboot. In practice I have no idea how huge a refactoring to the Linux graphics stack that'd require.


> "some older games even work better with Wine than with Windows 10."

This has been true for 16-bit games since long before Windows 10. Ages ago one of my favourite games stopped working on Windows, but Wine had no problems with it. So my impression has always been that Wine is excellent for really old games, but for slightly more recent games it could already be very hit and miss.

> "If you have a spare GPU, a VM with PCI passthrough does an even better job, except for some anti-cheat software that artificially discriminates against this setup."

Doesn't every CPU these days have onboard graphics? My Thinkpad X1E should support hybrid graphics, so it'd be nice if I could give the GPU to a VM and have the desktop use the CPU graphics.

But if a Windows VM does a better job, that means Wine doesn't yet do as good a job as Windows. Though it's certainly true that Steam support for Linux is growing. But I don't think every Steam game already works on Linux.


Wine has gotten very good at recent games... or more specifically, Proton has. (Proton is a Steam-maintained fork of Wine, and is built in to the Steam client.)

Official Proton "support" is limited, because it requires certification by Valve and/or the game developers that the game runs well (the equivalent of a "native" rating on the WineHQ AppDB/ProtonDB), but if you're willing to go down to "gold" levels of support it still runs 70-80% of all Steam Windows games.

See https://www.protondb.com/


> Doesn't every CPU these days have onboard graphics?

Don't bother attempting GPU passthrough on any laptop with an AMD CPU (eg Ryzen 2700U) and Radeon GPU (eg RX 560X).

It turns out the GPU passthrough needs a dump of the Radeon BIOS provided as a file, but no-one can dump the BIOS of discrete AMD laptop GPUs. :( :( :(

Note the complete lack of RX 560X BIOS's here:

https://www.techpowerup.com/vgabios/?architecture=AMD


> Doesn't every CPU these days have onboard graphics?

On laptops, pretty much. On Intel desktops, yes, aside from Xeons. On AMD desktops, only some lower end Ryzens have "G" models.


> In theory it ought to be possible to switch a single GPU to/from a VM without a reboot. In practice I have no idea how huge a refactoring to the Linux graphics stack that'd require.

I've got this working today. I do it through swapping the nvidia driver for the vfio-pci driver (and back again if required). The slight annoyance is that you may need to restart X11 (for me this is not an issue).
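
For the curious, the swap boils down to a few sysfs writes; this is a rough sketch only (the PCI address and driver names are examples, it needs root, and nothing may be using the nvidia driver at the time, hence the guard):

```shell
GPU=0000:01:00.0    # example PCI address; find yours with `lspci -D`

if [ -w /sys/bus/pci/drivers/nvidia/unbind ]; then   # root + nvidia loaded
  echo "$GPU" > /sys/bus/pci/drivers/nvidia/unbind   # detach the host driver
  modprobe vfio-pci
  echo vfio-pci > "/sys/bus/pci/devices/$GPU/driver_override"
  echo "$GPU" > /sys/bus/pci/drivers_probe           # bind vfio-pci instead
fi
# Boot the VM with the device passed through; reverse the steps (and maybe
# restart X11) to hand the GPU back to the host.
```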

I wrote about this some years ago: https://me.m01.eu/blog/2016/05/pci-passthrough-vm-monitor-se...


You can do that today with Qemu and PCI-passthrough. You just boot a VM, and pass it a physical grapics card.

Check out https://old.reddit.com/r/VFIO/

I guess this will be the standard until we can have nicer graphics drivers for Linux.


That requires two GPUs though, one for the guest and one for the host.



Right, it's easier to have an SR-IOV situation with an integrated GPU that doesn't have its own VRAM.

That's not the usual situation for people that are trying to use this scheme.


Yes, this is true, I use my on-board Intel graphics card for my host, and my Nvidia card for my guest operating system.


Apart from Photoshop, video production, audio, and many other professional uses.

Like the previous response: what does this buy me compared to developing and running natively on Windows, as there are native compilers that support CUDA et al. on Windows?


Windows is a painful OS to use for anything other than gaming and web browsing, I agree. (Well, it was, until WSL.) If I ever again own a work-first non-gaming desktop, I will once again install some Debian GNU/Linux on it.

(My gaming desktop, of course, runs Windows.)

On laptops, however, I continue to find myself jumping through idiotic hoops to use Linux. The driver support is always just-barely-good-enough. Maybe it's audio, maybe it's graphics, maybe it's power management. Maybe it's something involving networking-after-power-management or some crap involving "don't unplug your headphones while the lid is down". On my current work laptop, a beautiful Thinkpad Carbon X1, I can't get the power management to work properly, so I just have to accept that there's no hibernate. I'm constantly forgetting to shut down, put it in my backpack, and then pull it out drained. What a pain in the ass. Could someone fix this problem, probably. Can I? Not in the dozens of hours I've put into it. I hate doing IT, I hate it I hate it I hate it.

However much Macs make me want to vomit in my mouth, I can see the appeal. The drivers work at least 90% as well as Windows drivers, and the UX is at least half as good as a lightly-tuned Linux machine. "Jack of all trades, master of none, is oftentimes better than master of one"

Anyway, before this lockdown ends, I'm upgrading my laptop distro to this new distro I've heard of out of Redmond, I think it's called "Windows".


Painful? What exactly is painful? Apart from the Settings/Control Panel debacle, I don't know any issues you could be facing on Windows, unless you do a lot of C/C++ development which is still problematic due to the lack of a proper package manager.


I must disagree with the C/C++ take. Visual Studio is still, for me, one of the best IDEs out there, and the single best one for C/C++ development. And for the longest time, Windows indeed didn't have a good package manager, but over the past few years we've had vcpkg, which fills the vacuum pretty well when it comes to getting libraries without much hassle.
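
For reference, the workflow I mean is roughly this (`fmt` is just an example package, VCPKG_ROOT is assumed to point at your vcpkg checkout, and the whole thing is guarded so it is a no-op where vcpkg isn't installed):

```shell
if command -v vcpkg >/dev/null 2>&1; then
  vcpkg install fmt    # fetches and builds the library plus its dependencies

  # Point CMake at vcpkg's toolchain file and the library is found
  # automatically:
  cmake -B build -S . \
    -DCMAKE_TOOLCHAIN_FILE="$VCPKG_ROOT/scripts/buildsystems/vcpkg.cmake"
  # ...after which find_package(fmt CONFIG REQUIRED) in CMakeLists.txt
  # resolves to the vcpkg-built library.
fi
```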


I still haven't had a good experience with vcpkg, but, admittedly, I don't do a lot of C++.

How do you configure the compiler to use libraries downloaded from vcpkg? CMake? Something else?


>>run Linux with a Windows subsystem just for gaming.

aka wine / proton


That's exactly what Wine is.


It's exactly what Wine wants to be. Sadly that's not quite the same thing.


What do you mean? Wine has a different architecture because distributing Microsoft binaries is not legal, so technically it's not the same thing, but it still does an amazing job and a lot of apps/games work flawlessly.


And a lot don't. And then you have to spend a lot of time researching why and messing around with config options and maybe even compilers. And the games sometimes stop working after an update.

Conversely, if you run Windows, it's rare that you need to work hard to run a game.


This is definitely not my experience. Of my 100 games on Steam, GOG and EGS, 2 don't work straight out of the box:

- Rocksmith 2014, which can work by switching to ALSA audio, but the audio lag makes the experience subpar

- Bit.Trip Beat, which I know can work by changing something, but I didn't try


It does an amazing job, but a lot of apps and games do not work flawlessly. Or didn't, last time I tried Wine. Maybe this situation has changed a lot since then, which would be awesome.


Every single version has a lot of improvements, especially regarding compatibility. I suggest giving it a try once again.


That directly implies that every single version has had a lot to improve. And this is not to diss Wine; it is a difficult problem they are tackling.


Holding software to that standard eliminates most of it. "No, I haven't tried Google Docs yet. They're still adding features and fixing bugs. I'm holding out until it's stable."


Context matters a lot. I'm not holding Wine to unreasonable standards; it's a matter of recognizing the reality of the situation: Wine is not an exact mirror of Windows the way WSL is of Linux, and as such it will continue to have significant issues.


Are you familiar with Proton? Valve has put a lot of work into making Wine work flawlessly for many games.

https://www.protondb.com/


It seems like over the last 5 years, Wine has improved a ton. Or at least - it seems way more active.

Not sure why, but I suspect that Valve has a lot to do with it.


I mean, it's good enough for lots of games.

But there is another way there - running Windows in a VM with GPU passthrough - works beautifully in my case.


My experience with GPU-accelerated development is quite horrible on Windows for anything other than the NVIDIA-prepped Docker container. There was always something missing or some driver was incompatible. In the long run I have always regretted developing Python on Windows, also often because whatever was developed was to be deployed on a Linux box.

I do not think it's purely windows to blame here though. It's only quite recently that NVIDIA started fixing their documentation and instructions on getting all the right CUDA CuDNN stuff running properly on a system.


Hi. PM on Windows & WSL here.

Imagine if you could run AI/ML apps and tools that are coded to take advantage of DirectML on Windows and/or atop DirectML via WSL.

Now you can run the tools you want and need in whichever environment you like ... on any (capable) GPU you like: You don't have to buy a particular vendor's GPU to run your code.

If you're old like me and remember the dark ol' days when games shipped with specific drivers for (early) GPU cards/chips, but failed to run at all if you didn't have one of the supported cards, you'll understand why this is a big deal.


> If you're old like me and remember the dark ol' days when games shipped with specific drivers for (early) GPU cards/chips, but failed to run at all if you didn't have one of the supported cards, you'll understand why this is a big deal.

Maybe I'm not that old, but I'm old enough to remember the days when microsoft was intentionally degrading opengl performance on windows ;).


This. Some games would have a handful of different renderers for different setups, while other games would only support one specific card type (and if you were lucky, a software renderer).

Those days sucked. Bigtime. If we can avoid doing the same mistakes for machine learning then we should.


> Maybe I'm not that old, but I'm old enough to remember the days when microsoft was intentionally degrading opengl performance on windows ;).

Which is still nonsense, since this only affected the OGL driver shipped by Microsoft. In contrast to truly bad actors like Apple, OEMs were free to ship their own OGL drivers from day 1.

So sorry mate, but I have to call BS on that one.


Prob. different PM though.


>Now you can run the tools you want and need in whichever environment you like

Isn't the linked post saying you have to be running on Windows though? It seems like it would make way more sense to either port DirectX to Linux, or ditch DirectX and put those resources into supporting Vulkan.


Whichever environment you like as long as it's on Windows


Hi!

Don't you think the effort to achieve this would be absolutely massive? I don't know what kind of resources are thrown at this project, but I'd estimate the minimum to be 3 dev teams for 2 years just to get a few variations of ResNet to work "as is". And that's just for regular models that don't require quantization or (auto-)mixed precision for training.


Neither pytorch nor tensorflow support WinML, so this is going to be a bit of a stretch still, since CUDA is still the toolkit of choice for mainstream ML frameworks.


> horrible on Windows for anything other than the NVIDIA prepped docker container

o-O nvidia-docker does not even support Windows.

I think the only thing you need to know is which CUDA version your cuDNN requires, and it was quite clearly stated on the download page. Also the same on Linux. For nvidia-docker you used to need a specific driver version.


Lots of ML frameworks are built for Linux/UNIX first. The OpenAI projects you mentioned are a good example, but also projects like Ray (ray.io). Even PyTorch - a lot of stuff works fine, but their parallel DataLoaders actually work slower on Windows than the single-threaded ones (see this github issue: https://github.com/pytorch/pytorch/issues/12831 )


If your dev or production stack is Linux based, I think it makes sense to try to bring the GPU to your dev stack instead of the other way around. If you're working with other devs who are on actual Linux stacks, it'd be a pain in the ass to always require for there to be hybrid Windows/Linux tooling.


This is mainly to woo developers who are now working on macOS, or considering it, to use the Windows + WSL combination instead for ML & AI application development.


I couldn't get torch's cuda stuff working on windows. Runs fine on OSX though.


Is there a possibility Linux upstream won't accept it?


If you read the replies by Dave Airlie and Daniel Vetter, it seems somewhat likely that upstream won't accept it. Perhaps that's just initial skepticism that will evaporate after more discussion, but perhaps not.

Frankly this does just seem like MS wanting to reduce their maintenance burden on what they expect will be a very important part of their WSL offering on Windows. There's nothing inherently wrong with that desire, but the people on the other side need to weigh their maintenance burden[0] with what benefit this will have to the Linux community as a whole, which at first blush seems minimal. Especially considering that the userland pieces that talk to this driver aren't open-source.

There's also the question of whether or not you believe WSL as a whole is good or bad for Linux. If there are people who would run a Linux desktop for development who then decide not to because WSL exists, perhaps that's a bad outcome. If you have people writing more DirectX GPGPU code who would otherwise write to a standard interface like OpenCL, perhaps that's a bad outcome (to be fair, there's also a lot of CUDA out there, which is similarly problematic). Is this the start of MS going back to their "Embrace, Extend, Extinguish" playbook, or is that just a paranoid fear? They've definitely been embracing Linux, and enabling people to write DX12 GPGPU code that targets a Linux environment but will only run under WSL on a Windows install does feel like "extend".

I'm not sure where I personally stand on this issue as I haven't done my research, but I think they're interesting questions to ask.

If this gets rejected, of course it doesn't stop MS from doing any of these things, but it does make it harder for them to maintain their extensions to Linux.

[0] Airlie is even concerned that just by looking at the code, he or other DRI developers could run into future IP derived-works trouble when designing future Linux graphics interfaces.


> There's also the question of whether or not you believe WSL as a whole is good or bad for Linux. If there are people who would run a Linux desktop for development who then decide not to because WSL exists, perhaps that's a bad outcome.

I think if that really matters you should be against anything in the kernel to make it work well as a VM under Windows or MacOS or BSD, and in VMware or VirtualBox too.

From that point of view, Linux in a VM on another host is taking away from Linux running as a desktop on that host. Linux running as a VM on a VMware cluster is taking away from Linux running on those bare metal servers instead of VMware ESXi.

I think the more sane way to look at it is that Linux is an application (and its subcategory is that it is an OS) which is meant to run on various hardware and software platforms, the more the better. This strategy has worked very well over that last couple decades.

Does allowing WSL mean that some people that would install Linux on their hardware just run Windows instead and use Linux on top? Probably. Does it mean that people that already ran Windows and have never used Linux get exposed to it for the first time and get familiar with it through a few click on their existing Windows computer? That's also probable. Does it really matter in the end? Probably not.


> I think if that really matters you should be against anything in the kernel to make it work well as a VM under Windows or MacOS or BSD, and in VMware or VirtualBox too.

Why? I would guess that a very large share of linux kernels run under a hypervisor in some data center, in a public cloud or some OpenStack cluster. Won't those mostly be the same features?


Because WSL doesn't compete with people who would otherwise run a Linux box, it competes with people who would otherwise shove Ubuntu into a VirtualBox and run it on their Windows box in seamless mode. So the idea that it drives people away from Linux proper is nonsense (BTW., WSL is Linux proper), unless you also believe that installing a Linux in a VM on a proprietary system is also driving people away.


> ... shove Ubuntu into a VirtualBox and run it on their Windows box in seamless mode

Given that the WSL2 rewrite is essentially this, without even the niceties of a VirtualBox GUI wrapper to control the settings, I keep wondering what all the fuss is about.


I agree with you completely. I think that the comment I was replying to thinks otherwise.


No, I was just a little more ambiguous than I intended, due to a left-out word or two.

>>> I think if that really matters you should be against

Should be interpreted as (and I meant to write as)

>>> I think if that really matters to you you should be against

The rest of the comment should have made that obvious though, especially the last two paragraphs.


Now I get your intention; the original phrasing made me parse it incorrectly.

Happy to know that we also agree then.


>>>There's also the question of whether or not you believe WSL as a whole is good or bad for Linux.

IMO it's a good thing. Given that windows accounts for 90%+ of the desktop OS share, Windows might very well become the world's most used Linux distro.


It is, of course, decidedly not a Linux distro though. If it was, it wouldn't be an issue. I think there are positives, but it looks a lot like "Extend" to me.

I can't even say I wouldn't use it - it might be nice! But I will not use any WSL-only capability, that's for sure.


Hi. Microsoft PM working on WSL, Terminal and Windows.

WSL2 literally runs user-mode distros (and their binaries) in containers atop a shared Linux kernel image (https://github.com/microsoft/WSL2-Linux-Kernel) inside a lightweight VM that can boot an image from cold in < 2s and which aggressively releases resources back to the host when freed.

So when you run a binary/distro on WSL2, you are LITERALLY running on Linux in a VM alongside all your favorite Windows apps and tools.

If some of the tools you run within WSL can take advantage of the machine's available GPUs etc. and integrate well with the Windows desktop & tools, then you benefit. As do the many Windows users who want/need to run Linux apps & tools but cannot dual-boot and/or who can't switch to Linux full-time.

This has already resulted in (and will continue to result in) MANY Windows users getting access to Linux for the first time, or the first time in a while, who are now enjoying the best of both worlds.


The question isn't asking whether you, a Windows user who runs Windows, benefit. The question is asking what it does to Linux users who don't run Windows even a little. (And I think you know that.)


That's like asking whether Linux users who don't run ESXi benefit from its paravirtualized drivers being upstreamed. No, they don't, but those were accepted with far less brouhaha. And that's despite VMware blatantly violating the GPL for more than a decade.


With DirectX on WSL, you can do new things when Linux is running on Windows (via WSL). But these new things aren't possible when Linux is running another way (e.g. on the bare metal).

So people who use it are married to Windows.

I think folks would be absolutely excited if this was an initiative to allow writing DirectX applications on Linux, and available for Linux on bare metal. But as people realize this marries them to Windows, they go meh.


They're not intending D3D to be the client library: they ported Mesa for OpenGL and OpenCL, and are working on a Vulkan port on top of all that.


I think the concern with this DirectX implementation is that it only works for WSL users, not standard Linux users. So, it's a software API that will only work in your ecosystem, not the overall Linux ecosystem.

If DirectX on Linux could also work on bare metal, the conversation here would likely be different.


My understanding is that this is meant to become transparent to the user: this is really about enabling hardware acceleration within WSL, with the typical Linux userland graphics APIs like OpenGL layered on top. So the goal isn't to get you to link to libdx12 or whatever it is they have here; it's actually a piece that will be used by Mesa to provide accelerated GL for plain old Linux apps to use when running in WSL. It seems the easiest bite of the apple was offscreen rendering and GPGPU functionality, but the MS devs seem willing to work with the kernel devs to re-architect it so it fits into the typical Linux graphics stack, i.e. DRI and other lower-level systems. As far as I understand, this would be required for, say, Wayland to have hardware acceleration when running in WSL.

I'm still piecing it all together, and I definitely feel that "Extend" feeling, but I'm not sure that's what's happening here. Looks more like a few devs at MS are trying to solve the GPU Accel use case for WSL...


The DirectML API is not otherwise available on Linux. If it promises cross vendor support (which is a good thing) tools and libraries will use it.

Those tools and libraries will then not work on native Linux.


IMO it's exactly the embrace-extend-extinguish that the parent noted, which can be spun to kick CUDA out of the game on Linux, which is good in the short term.


> [0] Airlie is even concerned that just by looking at the code, he or other DRI developers could run into future IP derived-works trouble when designing future Linux graphics interfaces.

And that is, IMHO, a very real concern and it really should not be merged.


Yeah, I think MS should address this point. Airlie also asked if everything here is covered by the Open Invention Network (OIN), of which Microsoft is a participant, and indeed I'm pretty sure it all has to be, since that is the point of OIN. But the MS devs will now need legal sign-off that it's all covered by OIN before they can respond in the affirmative. I assume that will take some time, as these things usually do, especially since this seems more like a bottom-up effort than something top-down managed.


That's pretty much always a possibility. On the other hand, they're also generally fairly open to add things as long as the developers react to concerns. I'd guess if this can be neatly stuffed in a corner and treated like any of the other Hyper-V specific drivers, and quality is okay, it has a reasonable chance to be accepted.

And if it is rejected, Microsoft can still ship it in the kernels for the distros they offer on WSL.


Yeah not the end of the world if it doesn't land immediately, based on the original devs responding that they'd rather find the right place for it than the expedient place for it


Sure, but that doesn't really stop Microsoft from achieving their goal with WSL2. They would be happy to upstream it if possible if not then oh well.


Why would it? That's functionality that can't be used without Microsoft's proprietary parts. Unless I missed something, I don't think there is an open source implementation of DirectX somewhere?

I doubt the Linux devs will ever see WSL as a target they have to maintain themselves.


Wine has an open source implementation of DirectX, FWIW.


Ok, not my field, but isn't it based on OpenGL? Does it count as an implementation or an emulation layer? My understanding is that Microsoft is trying to give WSL transparent access to the GPU using the regular Linux interface and forwarding it to DirectX.

Using that with Wine would mean adding two emulation layers before reaching the actual driver. I fail to see any use case for that.


It's based on Vulkan, which means it has low enough level access to the GPU that I think it's fair to call libraries implemented on top of it "native". Most GPU drivers have some kind of translation for DirectX already, and there's no inherent reason why the open source DirectX implementation has to perform worse than the GPU vendor's implementation. I hear that the open source DX11 implementation actually beats AMD's DX11 implementation in some cases.

So yeah, Vulkan is neat and opens up a whole lot for the Linux world. In the future you'll probably see userspace implementations of OpenGL on top of Vulkan, maybe even CUDA implemented on AMD GPUs, although I'm not sure how practical that is. Also a whole lot of exciting GPU sharing tech, such as accessing GPUs inside of VMs.

DirectX will come to Linux, but it won't be thanks to Microsoft. You can thank Valve hedging its bets against the Microsoft Store for that.


What would be the big loss, other than MS bundling a PPA? Nvidia drivers and CUDA already need a PPA, so WSL + Nvidia + CUDA being a single PPA isn't that far off.


[flagged]


I don't see it, but if it reads that way it wasn't my intention.


The associated blog post (https://devblogs.microsoft.com/directx/directx-heart-linux/) does have more details on exposing display functionality. There is no swapchain functionality yet, but it's clear they're working on it, and part of the work is "DxCore", which seems to be a cleaned up and simplified version of DXGI. So not now, but soon.


Yeah, this should be the right link. TL;DR Microsoft brings DirectX 12, OpenGL, OpenCL, and CUDA to WSL. Vulkan is under investigation. That's really a piece of exciting news!


We've changed to that from https://lkml.org/lkml/2020/5/19/742. Thanks!


Yes, machine learning is the first priority as they say in the thread. However, the blog post goes on to say that window system integration is coming. This will eventually be a full graphics stack.

> Anyway, this isn't really a Linux port of DirectX

The entire user mode side of Direct3D is ported, in addition to the user mode parts of the Nvidia, AMD, and Intel graphics drivers.


> This will eventually be a full graphics stack

Are we seeing the start of the migration of Windows to linux?


My guess is that MS and Apple are both slowly trying to steer their ship in the same general direction as ChromeOS: a stable, locked down OS that runs applications in dedicated sandboxes/containers/VMs. No longer does the OS need to provide the same "shell" to those applications. You don't need a library-based wrapper to the syscall layer. The paravirtualized hardware is the new syscall layer. You can wrap whatever OS interface you want around that in order to support different kinds of workloads. Games can run as close to the system as possible. Workloads destined for the cloud can run in a Linux environment. Instead of being intermediated by clunky VM kit from third-party vendors, they'll provide a lot of it themselves to optimize performance and ensure adequate security between environments in a user-friendly manner.

By making the virtualized hardware the "glue", they can avoid the GPL/copyleft infection of their commercial OS, while supporting different kinds of developer experiences.


>> the migration of Windows to linux

Please no. Please keep your peanut butter out of my chocolate. Call me a purist, but linux should take nothing from windows, give no ground, make no compromise. One must die for the other to live.

https://bugs.launchpad.net/ubuntu/+bug/1


Chocolate and peanut butter are pretty good together. Just sayin'.


Wait, is chocolate and peanut butter really a thing?! That sounds quite horrible to my non-US ears.

Edit: yep, an online search seems to say that's an actual thing. I guess I'm part of the ten thousand today https://xkcd.com/1053/. I will never understand the US fascination for peanut butter.


Chocolate and peanut butter is amazing.

And yes, as a sibling notes, Reese's peanut butter cups are actually alarmingly tasty, but.... as with any $1 chocolate bar, that's shitty HFCS-saturated chocolate and shitty palm-oil-laced peanut butter, with way too much sugar in it, so if you're too good for that, well, that's a credit to your tastebuds, good on ya.

So eat real chocolate with real peanut butter. Real peanut butter is nothing but peanuts and salt (it keeps well, but fresh-ground is better). Real chocolate, I trust you can figure out. Milk and dark are both good in this application.

Although, of course, peanuts are not true nuts (no more than macadamia or almond or walnut), they're nonetheless very nutty, and the effect is pretty similar to "almond bark", or hazelnuts with chocolate, or pecans and chocolate. And of course you can just eat peanuts with chocolate, an okay combination. But there's something weirdly perfect about peanut butter with chocolate, better than peanuts with chocolate.

But hey, although I'm not American, I am from America's hat, and I do like peanut butter in a few other formats too.


You should try a buckeye! It's basically peanut butter wrapped in chocolate, giving an appearance similar to a buckeye nut.


> I will never understand the US fascination for peanut butter.

At one point in history, US farmers were encouraged to grow peanuts as a rotation crop to improve soil quality. That led to a glut of peanuts in the market, so people tried to find uses for them. Peanut butter was invented+ as one of these uses, and has been a staple of American diets ever since.

+Or promoted, I can’t remember


They promoted legume family members (such as soy and peanuts) for their nitrogen-fixing, soil-fertilizing attribute.

https://aces.nmsu.edu/pubs/_a/A129/


Reese's peanut butter cups [0] are a fairly popular "candy" item here. Try one if you get a chance, they're pretty tasty.

[0]: https://commons.wikimedia.org/wiki/File:Reeses-PB-Cups.jpg


As a European, sandwich with Nutella on one bread side and peanut butter on the other is awesome (except for your cholesterol).


and sugar...



> These changes are on the WSL’s team roadmap and you can expect to hear more about this work by holiday 2020.

As if describing things in terms of northern hemisphere temperate seasons wasn’t bad enough (and, still worse, commonly showing how little you care about any place other than the USA and maybe Canada by using the name “fall”), now we have this: “holiday 2020”. I don’t know what period this refers to. I’d have guessed northern hemisphere summer school break first, but I guess that’s just about finished now, so it can’t be that. Christmas time would have been my next guess, but surely you’d describe that as “by the end of 2020”? And then other possibilities occurred to me: Halloween? Thanksgiving? I have no idea at all what Americans would call “holiday” as a time of year.


Holiday == November or December. Thanksgiving and Christmas are the big ones, but we use generic terms like "holiday" to accommodate people who celebrate different holidays around those times. This is pretty common terminology; IIRC the "War on Christmas" has been a thing for a couple decades.

> I’d have guessed northern hemisphere summer school break first, but I guess that’s just about finished now

Summer break more or less just started for most students here. The fall term will start around August.


Northern hemisphere summer school breaks vary but typically run in the May-August time range, so they're just starting rather than ending.

They're referring to https://en.m.wikipedia.org/wiki/Christmas_and_holiday_season


It’s a neutered term for Christmas.


Aren't large parts of the wayland stack incompatible with NVIDIA drivers? It would be ironic if Microsoft was the one to bridge that gap.


That's more a matter of buffer management. Though it may turn out, if their work is too tied to NVIDIA, to only support NVIDIA's de facto proprietary selection: EGLStreams.

Though in principle, there's nothing preventing them from using GBM instead of EGLStreams, and there are some good practical reasons such as having compatibility with the broad base of existing accelerated Wayland windowing libraries and applications.


You can use Wayland with NVIDIA drivers. The problem is that NVIDIA and the open source drivers expose different buffer management APIs and Wayland does not abstract over that, so it has to be explicitly handled by every client application. Some Wayland clients refuse to support both, others had NVIDIA support patched in by NVIDIA itself.


Sorry if this question misses the point but how did AMD avoid this issue? Are their drivers open source?


AMD started supporting the development of the open source driver several years ago by publishing hardware specs. I am not sure if they switched completely or are still maintaining a closed source driver on the side - I haven't bought AMD cards in years, and my last one only "works" with the binary blob.

The people working on the NVIDIA open source driver have no official support and were fighting with signed firmware blobs last I heard. I wish them luck, but even on older cards it is more likely to crash your system than render anything.


Single data point here but I just switched from an NVIDIA GTX 960 (2015 card) to an AMD RX 570 (2017), both using open source drivers, and the performance improvement in Wayland/Sway is huge.

My understanding is that even 2015 is too new for nouveau to run with high performance due to something called reclocking, where the card starts up at a minimal clock rate and then it's up to the drivers to reconfigure it for running at the advertised clock.


That would be great news - something I have asked for since WSL was first presented. I just hope it uses the new graphics driver infrastructure, as the first implementation mentioned in your link seems to use RDP, which is less efficient. If it does, that would be excellent news.


Do ML people actually care about DirectX? I thought everyone is using CUDA? Anyway in my university building I have not seen anyone that does machine learning on Windows.


Yes, (almost) everybody doing machine learning is on CUDA. There are multiple pieces of this work, and the DirectX API's are only part of it. Other pieces get CUDA running as well.

And yet another piece is a layer to get OpenGL and OpenCL workloads running on DX12 as well, rather similar in scope to how MoltenVK and the gfx-hal Vulkan Portability work are a layer to get Vulkan workloads running on Metal. This is a big effort, and it seems to me their goal is to get things to the point where stuff Just Works and you don't have to think too hard about the various bits of (technically difficult!) infrastructure to get you there.


In ML, if you decide not to kiss NVIDIA's ass, you're screwed. Effectively they have 100% market share. Having a major alternative backend, even if it's proprietary, will force diversity, and that has some upsides I think.


They typically don't - the team both published a special build of TensorFlow that uses DirectX and worked with Nvidia to get CUDA running against their DX Linux kernel implementation.


> special build of Tensorflow that uses DirectX

Huh? Are you sure about that? Regular TensorFlow on Windows uses CUDA, not DirectX-flavored compute.


I was also surprised, but found this RFC for TensorFlow on top of DirectML: https://github.com/tensorflow/community/pull/243


Oh well, I think that will take several years to land.


CUDA is supported as well.


A bit of a tip: your university development experience is not going to be like anything you see in production.


> AF_PACKET

Just run a Linux Hyper-V VM. That's what WSL2 is doing under the hood anyway. I run it this way and it's great. I have Windows Terminal auto-ssh into it. Performance is great. And using the X server x410 on the Windows side, GUI performance is fantastic (though no hardware acceleration), because instead of ssh tunneling, x410 supports AF_VSOCK for the X socket, which Hyper-V supports, giving performance as good as a domain socket on the same machine.
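
For the curious, the guest-side plumbing is basically one socat command. This is a rough sketch of my setup; the port number (6000) and the exact flags are from memory of the x410 docs, so treat them as assumptions:

```shell
# Inside the Hyper-V Linux guest: bridge the local X11 Unix socket to the
# host over AF_VSOCK (CID 2 always addresses the host). Assumes x410 on
# the Windows side is configured to accept vsock connections.
socat -b65536 UNIX-LISTEN:/tmp/.X11-unix/X0,fork,mode=777 VSOCK-CONNECT:2:6000 &

# Point X clients at that socket -- no ssh -X tunnel involved.
export DISPLAY=:0.0
```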


I've had trouble researching whether WSL2 is in fact a Hyper-V-managed VM. I've seen some documentation referring to WSL2 as a tightly integrated Krypton (scaled-down Hyper-V) VM. It seems to imply the host overhead isn't as high as for a regular Hyper-V guest.


WSL uses a Hyper-V derived virtual machine that is:

* Sparse & light - they only allocate resources from the host when needed, and release them back to the host when freed
* Fast - it can boot a WSL distro from cold in < 2s
* Transitional - these lightweight VMs are designed to run for up to days-weeks at a time

Full Hyper-V VMs aim to (generally) grab all the resources they can and keep hold of those resources as long as possible in case they're needed. Full VMs are designed to run for months-years at a time.

WSL's VMs are MUCH less impactful on the host - FWIW, I run 2-3 WSL distros at a time on my 4 year old 16GB Surface Pro 4 and don't even notice that they're running.


But then you have this thread with people running Cron jobs to free cached memory: https://github.com/microsoft/WSL/issues/4166
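
For reference, the workaround people describe in that thread amounts to a crontab fragment like the following (a sketch of the approach, not an endorsement; the hourly schedule is my own illustrative choice):

```shell
# Root crontab entry: once an hour, flush dirty pages and drop the page
# cache so the WSL2 VM can hand memory back to the Windows host.
# Sledgehammer workaround -- it hurts cache hit rates right after it runs.
0 * * * * sync; echo 1 > /proc/sys/vm/drop_caches
```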

I imagine this will be addressed, but the claims of being lightweight seem exaggerated?

But even more on my mind is the impact on the Windows host. Is it running as a guest under Hyper-V? What's the overhead?


^ ms pm


AFAICT Krypton is stripped down in the sense that a lot of the management framework is gone, but as far as the guest is concerned, it's running on hyper-v.


Ah that sucks, for a moment I thought I would be able to run Wine from within WSL2 and get my game on.


if you are already running Windows and using Linux through WSL, why would you want to use WINE to run games?


Because older games run better on wine than they do on windows. Though there's more recent software available to mitigate some of that (wined3d, winevdm, etc).


Possibly parent comment was a joke?


"There is currently no presentation integration with WSL as WSL is a console only experience today. The D3D12 API can be used for offscreen rendering and compute, but there is no swapchain support to copy pixels directly to the screen (yet)."

This leads me to believe that display support is intended in the future. It's a work in progress. They've gone this far; why would they stop at compute? Still, it's pretty awesome if you ask me.


They've announced that Linux GUI apps are coming later this year to WSL2; while that is possible without GPU, I imagine MS wants a decent UX for the feature, though, which suggests...


And wouldn't it be faster to simply use native Linux installed on the computer? Instead of running in a VM, with a glue/translation layer from OpenGL/OpenCL/Vulkan/CUDA to DirectX. And don't forget all the bloatware and slowness that comes with Windows 10.


But then you lose all the benefits of an established desktop OS: hardware pretty much all works, the OS pretty much works out of the box, software like Microsoft Office just runs without any additional effort, you get to profit from years or decades of muscle memory and OS-specific knowledge, and maybe it's even about being allowed to connect your laptop to the company network at all.


Speaking as someone who spent a dozen-odd hours trying to get his dual monitor setup working in Ubuntu and Fedora: no. It's not.

My experience is that Linux has significantly worse hardware support than Windows, particularly where newer hardware is concerned.


Gaming folks won't be running games in WSL when they can just run them in W.


> The Linux gaming folks will be pretty sad about this one.

I was a tad bummed when realizing what this actually was, but still very much impressed.


I found this reply to be especially interesting: https://lkml.org/lkml/2020/5/19/1309

> I also have another concern from a legal standpoint I'd rather not review the ioctl part of this. I'd probably request other DRI developers abstain as well.

> This is a Windows kernel API being smashed into a Linux driver. I don't want to be tainted by knowledge of an API that I've no idea of the legal status of derived works. (it this all covered patent wise under OIN?)

> I don't want to ever be accused of designing a Linux kernel API with illgotten D3DKMT knowledge, I feel tainting myself with knowledge of a proprietary API might cause derived work issues.

This is the real scary part about software patents, and more specifically, patenting APIs. I'm not saying Microsoft is doing this in this case, but it could be a very real strategy by a bad actor. Attempt to taint open source software with your patented software and then litigate.

In a dystopian future where patented software API lawsuits have become a common occurrence, I could imagine "clean room people". Individuals who can be guaranteed to not have been exposed to certain software designs.


When looking at source, you have to distinguish between copyright (the license) and possible patent status. You should indeed be careful when looking at code with an unclear license, as this opens you up to claims of copyright violation.

However, with patents the situation is different (which is what makes them so problematic). Patent violation is based solely on whether your code violates a patent or not. Whether you have looked at other sources doesn't change anything about the possible violation. That is what makes patents so nasty.


(The originally submitted URL was https://lkml.org/lkml/2020/5/19/742.)


On the other hand, if Microsoft themselves are publishing this code as open source, on the open, on one of the most important projects out there, and are actively asking other experts from other companies to review it, I would imagine they are weakening their arguments on the potential litigation.


Only for copyright claims. Patent claims are not diminished by publishing items and asking for review. We can see this in the various video codec and other standards that publish a bunch of info that is covered extensively by patents.


FYI:

This is the first draft of the Microsoft Virtual GPU (vGPU) driver. The driver exposes a paravirtualized GPU to user mode applications running in a virtual machine on a Windows host. This enables hardware acceleration in environments such as WSL (Windows Subsystem for Linux) where the Linux virtual machine is able to share the GPU with the Windows host.

So this isn't actual "DirectX on Linux", just a driver for a virtual GPU exposed to WSL-guests to enable guests to directly use DirectX, more or less.

Probably most useful for Azure-based projects.


No, if you read the blog post on Microsoft's site it goes into more detail:

This is the real and full D3D12 API, no imitations, pretender or reimplementation here… this is the real deal. libd3d12.so is compiled from the same source code as d3d12.dll on Windows but for a Linux target. It offers the same level of functionality and performance (minus virtualization overhead).


It says 4 sentences earlier:

>> Linux when running in WSL.

>> minus virtualization overhead

refers to "para" in "paravirtualization." "Paravirtualization" is reducing the overhead of virtualization when the guest knows it's being virtualized.


That's the user space code. It still depends on the Windows drivers to do the work.


This sounds like an attempt to win the ML segment back from Linux to Windows. There are three letters in the back of my mind that sound like screaming, but it's too early to tell, unfortunately.


How is it too early to tell?

It is an extension of the capability of WSL, giving you that sweet convenience of the DirectX API with your existing ML project. Of course, this extension makes your project incompatible with desktop Linux once adopted.


Not necessarily. It is not possible to access the GPU in WSL at all right now, so I need to dual boot, which also means dealing with Linux desktop compatibility issues on my laptop. As long as I can use the same ML framework (without the DirectX API), this poses no compatibility issues at all. It just means I can develop & run my ML code in WSL.


Just curious, what stops your ML code from running under Windows?


It's way more of a pain, due to a lot of legacy constraints in Windows. It's easy to overflow path names and command lines, which then get silently truncated. Doing ML dev work is for sure easier in a Linux env than a pure Windows env. It's not impossible on Windows, but definitely much nicer in Linux.
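
To make the path-length point concrete: the classic Win32 limit (MAX_PATH, 260 characters) is real, but the directory and file names below are invented purely for illustration - deep ML checkpoint trees blow past the limit surprisingly fast:

```python
# Toy illustration of the legacy Windows MAX_PATH limit (260 chars).
# Paths at or beyond this length fail or get truncated in many Win32 APIs
# unless long-path support is explicitly enabled.
WINDOWS_MAX_PATH = 260

def exceeds_legacy_max_path(path: str) -> bool:
    """Return True if a path would overflow the legacy Windows limit."""
    return len(path) >= WINDOWS_MAX_PATH

# A made-up but realistic-looking nested checkpoint path (335 chars).
deep = "C:\\" + "\\".join(["very_long_python_package_directory_name"] * 8) + "\\weights.ckpt"

print(len(deep), exceeds_legacy_max_path(deep))  # 335 True
```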


Except your ML project is accessing the DirectX API through another cross-platform API layer (CUDA). And the purpose of running WSL is that you ultimately hope to deploy to Linux servers (which Microsoft hopes will be on Azure).


The thing with WSL is that it doesn't win back the ML segment from Linux to Windows for production workloads. This is just a developer workstation friendly move that explicitly doesn't tie you to Windows itself.

It seems like Microsoft is continuing to see Linux as a production server target while positioning Windows to remain relevant as a workstation OS.

Ostensibly, this is a move to compete with Mac OS and not Linux.


I'm not too sure about that - what about the support for DX12 in WSL? You can't deploy that to production outside WSL right now as far as I'm aware, unless they add DX12 support to Azure Linux VMs...


Given that the main use case is stuff like running machine learning tools, the only people directly targeting DX12 on WSL will probably be framework/library developers, who might want to add DX12 as one of the graphics systems they support. If that's the case, the vast majority of developers won't notice any difference other than more software supporting graphics acceleration when run in WSL (or, another way of looking at it, when Linux is running on the WSL "hardware"/"platform").

Sure, nothing stops you from targeting DX12 directly in your application code, but why would you do that? At that point, you'd just target Windows since your users would have to be running it anyway.
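
To sketch what "frameworks add it as one backend among many" might look like (every name here is invented; the only real detail is that WSL kernels report "microsoft" in their `uname -r` string):

```python
# Toy sketch of how an ML framework might pick a GPU backend at runtime,
# keeping DX12-on-WSL an implementation detail application code never sees.

def running_under_wsl(kernel_release: str) -> bool:
    # WSL kernels advertise themselves in the release string,
    # e.g. "4.19.84-microsoft-standard".
    return "microsoft" in kernel_release.lower()

def pick_backend(kernel_release: str, available):
    if running_under_wsl(kernel_release) and "dx12" in available:
        return "dx12"    # the paravirtualized GPU path
    if "vulkan" in available:
        return "vulkan"  # native Linux path
    return "cpu"         # last-resort fallback

print(pick_backend("4.19.84-microsoft-standard", ["dx12", "vulkan"]))  # dx12
print(pick_backend("5.6.14-generic", ["vulkan"]))                      # vulkan
```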


> I'm not too sure about that, what about the support for DX12 in WSL?

It's an implementation detail -- they're exposing the Windows graphics driver to the Linux system with the most minimal amount of translation and overhead.

You could code directly to it in your Linux application code but it makes no sense to do that. You'd be literally writing a Linux application that can only run under Windows -- the smallest market ever proposed. Instead library/framework developers will add it as another target to improve performance in WSL for generic Linux applications.


> This is just a developer workstation friendly move that explicitly doesn't tie you to Windows itself.

Ah, but it does tie the developer to Windows: this DirectX module can only be used with WSL2, which only runs on Windows.


No, because developers won't be coding to this DirectX module directly. They'll be using CUDA or OpenGL or using another library or framework for which this is just one of many backend implementations.

Unless you think developers would actually bother coding explicitly for the world smallest possible market (Linux inside of Windows).


In this case, this is the extend phase. So this is what they really mean about 'using Linux' and 'Microsoft <3 Linux'.


Mentioned it in another comment, but I am not sure what stopped people from doing ML without WSL. I don't even know of ML libraries that don't work on Windows proper.


...and extend. I'm sure this architecture was easier for MS but it opens the door to code that runs on WSL2 that doesn't run on regular Linux.


In the linked thread the Microsoft developers explicitly deny that this is the intent, if that's worth anything to you.

The goal, rather, is to take all the usual (for the Linux side) GPU stuff and give it access to the Windows host's GPU, when running on WSL.

That is, they are already working on getting OpenGL/Vulkan/etc. to run on top of this: https://www.collabora.com/news-and-blog/news-and-events/intr...

Rather than "get people to write code that runs on WSL2 but not regular Linux," the goal is "get code that runs on regular Linux but not WSL to run on WSL2."


The developers' word is irrelevant. Microsoft is a business. Microsoft will pursue its long term business interests. Developers are hired to do what the business directs. So the question is really, "what would best serve Microsoft's business interests?" and not "what do the developers intend?" because in time only one question matters and unfortunately it's not the one with the best interests of non-windows-users in mind.


> Microsoft will pursue its long term business interests.

Just like all positive-sum games, open source is in everyone's interest, including Microsoft's. Egoistic altruism: serving your own interests through serving everyone else's.

All the people here railing on how Microsoft is destroying open source clearly don't understand the very premise of the thing that they think they are defending.


The developers' intent does align with Microsoft's current interests -- as you say, they are hired to do what the business directs.

Perhaps their interests will shift in the future, as they clearly have in the past. But the things they build now are not from the "extend" phase of an EEE arc. At worst they are in the "embrace" phase.


Embrace phase sounds about right.

If MS makes it as easy to run Linux desktop apps on Windows as it is on Linux, then the question "why even run a dedicated Linux machine if you can do everything on Windows?" becomes relevant.

I suppose clipboard integration already works? And drag n' drop?

Once that mindset is heavily entrenched (e.g. in Enterprises), then the extend phase can begin.

I don't think MS is as scared as it was previously, when Linux started to dominate the server landscape and swathes of new developers (especially web backend) moved to Java, RoR, and Python (or LAMP), where MS was absolutely nowhere in the stack.


The initial implementation of WSL was not mandated by the business. It was created by developers who were tasked with exploring how to run Android apps (on the MS phone), and they went bananas and invented WSL. Only later did the business side of Microsoft come into play on WSL.


It doesn't matter where it came from. WSL is owned by MS, and the most rational framework to predict how they will use it is that they will attempt to leverage it to support their business interests.

MS is not in the business of paying developers to give away nice things for free.


Does the intent matter if the result is the same?


Will the result be the same?

I don't see why it would be, given the circumstances.


If their boss comes to them with a mandate later, their intentions will be forced to change. Also, the road to hell is paved with good intentions.


Why would Microsoft's word be worth anything to anyone?


> In the linked thread the Microsoft developers explicitly deny that this is the intent

Unfortunately, the developers don't call the shots in there. And the ones calling the shots don't give a single damn about these developers.

Microsoft is back to its old shenanigans after a big PR blitz.


> Microsoft is back to its old shenanigans after a big PR blitz.

And your evidence being what exactly? Yes, they’ve done EEE before, but there’s no evidence that they are doing it now. It’s been two decades.


This announcement is the evidence. They release some directx stubs that are completely useless outside of windows and of absolutely zero importance to anyone in the Linux Ecosystem and make a huge Microsoft hearts Linux push with it with confetti, unicorns and blushing dev-twitter influencers.

While the actual meat of directx will still be proprietary and windows-exclusive where they call the shots and make the money.


Releasing something exclusive is not evidence of EEE. Yes, it’s embracing, but that’s it.

How do you know they don’t plan to port DirectX to Linux? They’re currently attempting to upstream their changes, so why wouldn’t they port DX itself? They also don’t have to open source it (although I wish they would); they could just release a binary blob that you download and install.

Also, how do you expect Microsoft to kill Linux? To me, it seems that people who willingly choose to use Linux over Windows are generally the kinds of people who won’t be persuaded to move back. Microsoft can’t kill Linux.

Have you ever looked past your preconceived biases and considered that maybe Microsoft has actually had a change in culture over two decades? I don’t think so, because the people who scream “EEE” are the kinds of people who’ll never be swayed in their opinion.


> but it opens the door to code that runs on WSL2 that doesn't run on regular Linux.

Which is exactly what I've been expecting since day one, when WSL was announced. WSL is going to slowly kill Linux; or, to be more accurate, it will kill any Linux (be it on servers or desktops) that isn't Microsoft-branded and -distributed. Forks on smaller embedded systems will resist for a while, until one day MS decides to port some killer technology, de facto embracing them as well.


Yeah, but right now it solves the problem of not being able to run code on WSL that does run fine in Linux.

What really stops me from using Linux on the Desktop is compatibility issues with my laptop(s), reliable sleep/wake and rendering issues with High DPI display.

Microsoft released an MS Teams client for Linux which astonished me. So, I am giving them the benefit of the doubt for the time being.


The Teams client on Linux sucks: it is missing half the features, and you have to pkill it after use because simply closing it doesn't actually stop the program. Plus, it spins your fan like crazy doing God knows what in the background.

Still, thanks, I guess.


Agree. I am also looking for a laptop to mainly run Linux on, and this widens the choices for me quite a bit. Also, what keeps me and probably many others from running 100% Linux is the requirement of some commercial applications which are not available for Linux. So far, using a Mac with a Linux VM was the best compromise for me, but now Windows becomes an appealing alternative.


Microsoft hasn't given up on Windows but having lost the mobile space I think they've decided to go back to their roots of developing software for many operating systems. It doesn't matter if you run Windows, Mac OS, Android, or iOS if they can sell you an Office 365 subscription.

The Windows protectionism that dominated the 2000's doesn't help Microsoft sell more product anymore.

It's not so much that Microsoft has changed but that the world has changed and Microsoft is changing with it.


I think Microsoft has been trying to kill Linux for what, almost 30 years? Some of that conspiracy mentality may have been valid 15 years ago, but probably not anymore.


You're saying it's invalid because it was too right?


It may be hard to remember way back then, but Linux didn't have either the market share or the mind share that Windows had back in the late 90's.


But it wasn't a conspiracy now, was it?


This patch adds WDDM (Windows Display Driver Model) as a Linux kernel API. The implementation just forwards to the Windows kernel, but it seems to me like someone could implement it natively in Linux. Then Windows user mode graphics drivers would work natively in Linux without WSL, and DX12 too.
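To make the "forwarding vs. native" distinction concrete, here's a purely hypothetical probe; only the /dev/dxg node name comes from the patch discussion, everything else is illustrative:

```python
import os

# Device node exposed by the dxgkrnl bridge driver under WSL2
# (per the LKML discussion); absent on regular Linux.
DXG_NODE = "/dev/dxg"

def wddm_bridge_present() -> bool:
    """True if the WDDM-forwarding device is visible, i.e. we are likely
    inside WSL2 with GPU paravirtualization; False on regular Linux."""
    return os.path.exists(DXG_NODE)
```

A native Linux implementation of the same API would make this check pass on bare metal too, which is exactly the scenario being speculated about.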


someone could implement it natively in Linux

This is exactly what we don't want. There's already plenty of effort wasted on GBM vs. EGLStream so introducing a third API would just lower quality even more.


This "third API" is already well supported by every relevant graphics vendor. Adopting it would actually reduce the number of APIs vendors need to deal with. And the quality of WDDM drivers has always been higher than anything Linux has.


Great, just what I want, vendors to be incentivized to force me to run closed binary blobs on my Linux box because they'll just repackage their Windows driver and still not release its source.


> And the quality of WDDM drivers has always been higher than anything Linux has.

And the outcome of implementing this seems to ensure it will stay that way.


Really? The experience of AMD users has historically tended to be at odds with that statement.


This isn't merely another API, though. It's the one against which most graphics drivers are written.


This is actually debatable. How many different GPUs are supported on Windows vs on Linux? There are really only three vendors left on PC, while Linux supports these as well as a bunch of mobile GPUs...


We definitely wouldn't want a stable driver API! NDISwrapper was a huge step in the right direction.


Presumably the NVIDIA Linux driver can be replaced with a shim over this, and we can all forget about EGLStreams.


GPU vendors could expose their driver to the WDDM API in Linux, though.


In a previous life I wrote code that only ran in Cygwin, it depended on a dummy networking driver to talk to the WinCE core of a sonogram machine.

Running Windows VMs as kernels for Cygwin environments felt less weird than most modern container architectures.


Microsoft doesn't consider desktop Linux its competition; it considers macOS its competition.

It's going after the MacBook's position as the default developer laptop, and Apple deserves what's coming to them IMO, for neglecting it for so long.


> after macbook’s position as the default developer laptop

Perhaps in your country/field. Certainly not in mine...


Sorry I can't hear you over the airplane engine that's apparently inside the 16inch mbp


That's not the point. They are trying to not IBM themselves.

They and the clones won the war with IBM in the 80s and 90s because IBM was worried about things that didn't matter: they wasted time on things like an operating system, CP/X86, that ran DOS and 3270 emulators to their mainframes on top of a GUI system called Mermaid (the 3270 PC) while Microsoft was writing Windows 3. Oh yeah, did I mention some configurations cost $20,000 each?

Because IBM wanted to secure its AS/400 minicomputers, mainframe, and microcomputer line and make them interoperable, as if a PC user sitting in their den would be connecting with $500,000 or so of other IBM machines, everyone else ran circles around IBM. They kept up the "whole kitchen sink" system way way past its due date.

This also happened to DEC, which responded way too late in the game to be relevant, SUN, SGI, and Wang. And it may eventually take down Oracle.

So they're intentionally taking an uncommitted, decoupled approach. I'm paying Microsoft as a result. I pay GitHub and Azure bills every month and run exclusively Linux.

They can't swing their weight around as a monopoly like they could in 2000; once Ballmer left, things changed rapidly. It's not going back. They have effectively zero cloud software (database/operating system etc.), effectively zero mobile presence, and most people don't really like Windows.

In a tightly coupled stack you're only as strong as your weakest link, and MS has a bunch. Their biggest risk now is to HP or GM themselves: basically gobble up a bunch of things, blend them into undifferentiated blandness, and collapse while waving a giant sceptre labelled "greatest company of 30 years ago". (HP bought DEC, Cray, Tandem, Apollo, Convex, 3Com, Phoenix, Palm and SGI and did effectively nothing with them beyond slapping an HP logo on their final pre-acquisition product lines, then just rode it out without any follow-up. HP knowing only how to fumble the ball every time for 20 years is why big business switched to Linux. HP took all the alternatives behind the barn one by one, cut them a fat check, and shot them. Simply crazy.)

So yeah, it's a different game now.


> And it may eventually take down Oracle.

Oracle is basically a department of the NSA/CIA/FBI. Sales to companies are just cover for the real game. It will never go away.


> that doesn't run on regular Linux.

DirectX does run on regular Linux, courtesy of Wine/Proton. It's just a matter of reimplementing the userspace .so interface that MS will provide for access to this facility, so that it hooks into that support instead.


> it opens the door to code that runs on WSL2 that doesn't run on regular Linux

That's a strange complaint. Would you be happier if this was merged into the mainline kernel instead?


What I mean is imagine an app that supports "Linux" but it only runs on WSL2; it doesn't run on real Linux. That would be bad IMO.


This emulates a GPU driver.

If I understand it correctly, it just means that GPU-accelerated code that previously only ran on "proper" Linux now can run on WSL too, being powered by DirectX behind the scenes.

What's bad about that?


You don't understand it correctly. Nobody is complaining or even talking about the gpu driver.

> DxCore & D3D12 on Linux

Projecting a WDDM compatible abstraction for the GPU inside of Linux allowed us to recompile and bring our premiere graphics API to Linux when running in WSL.

This is the real and full D3D12 API, no imitations, pretender or reimplementation here… this is the real deal.

> libd3d12.so and libdxcore.so are closed source, pre-compiled user mode binaries that ship as part of Windows. These binaries are compatible with glibc based distros and are automatically mounted under /usr/lib/wsl/lib and made visible to the loader. In other words, these APIs work right out of the box without the need to install additional packages or tweak the distro’s configuration. Support is currently limited to glibc based distros such as Ubuntu, Debian, Fedora, Centos, SUSE, etc…

So you can use it with MS blessed distros and only when running under WSL2. You can't use it without WSL2.
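To see why it's WSL-only in practice, here is a hedged sketch of how an app might pick the library up at runtime; only the /usr/lib/wsl/lib path and the libd3d12.so name come from the quote above, the rest is illustrative:

```python
import ctypes
import os

# Mount point where WSL exposes the closed-source, Windows-built
# libraries, per the quoted blog post.
WSL_LIB_DIR = "/usr/lib/wsl/lib"

def load_d3d12():
    """Load Microsoft's closed-source libd3d12.so if present.

    Returns a ctypes handle inside a GPU-enabled WSL2 distro, or None on
    regular Linux, where the mount simply does not exist."""
    path = os.path.join(WSL_LIB_DIR, "libd3d12.so")
    if not os.path.exists(path):
        return None
    return ctypes.CDLL(path)
```

Which is the crux of the complaint: code that links against this library can never work outside WSL2 unless someone reimplements the .so interface.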


> The plan is for Microsoft to provide shims to allow the existing Linux userspace interact with DX12; I'll explain below why we had to pipe DX12 all the way into the Linux guest, but this is not to introduce DX12 into the Linux world as competition. There is no intent for anyone in the Linux world to start coding for the DX12 API.

https://lkml.org/lkml/2020/5/19/1139

And backing that up, they ported Mesa to get OpenGL and OpenCL in the Linux guest and are working on a Vulkan port as well.


OK. That's different. That sounds extendish indeed.


It doesn't just open the door, it absolutely requires it.


How? For example, code that uses CUDA on Linux calls the CUDA GPU driver, which passes it on to the Windows part of the interface. How does that require user code that doesn't run on normal Linux to take advantage of this?


There is a new, closed-source Microsoft DirectX 12 library for Linux apps that speaks WDDM to /dev/dxg.

There is a pre-existing closed-source NVIDIA Vulkan+OpenGL+CUDA library for Linux apps that speaks EGLStreams to /dev/nvidia0.

There is an announcement that the closed-source NVIDIA library might start speaking WDDM to /dev/dxg.

There is a pre-existing open-source Mesa DirectX 12 (+Vulkan +OpenGL + ...) library for Linux apps that speak GBM to /dev/dri (and some support for speaking EGLStreams to /dev/nvidia0 too).

There is a pull request to implement /dev/dxg in the kernel as a Hyper-V pipe for WSL2's use.

There are lots of interaction points for alternate implementations of various pieces of the stack. For example, in the future it might ultimately be possible for an alternate /dev/dxg implementation to wrap the normal DRM API, or for NVIDIA's binary driver to reimplement /dev/dxg on real hardware.


"There is no intent for anyone in the Linux world to start coding for the DX12 API."

https://lkml.org/lkml/2020/5/19/1139

"This is a driver that connects a binary blob interface in the Windows kernel drivers to a binary blob that you run inside a Linux guest."

https://lkml.org/lkml/2020/5/19/1288


The title is misleading.

This isn't DirectX on Linux. This is DirectX API access for WSL exclusively. It will lock ML projects into Windows/WSL for the price of access to the DirectX API and at the expense of development of other projects like Vulkan for native Linux.


Not any more so than they're already locked into CUDA. It's more a WDDM bridge than a strict DirectX bridge.


I don't see how being "locked into CUDA" should make one less skeptical of a change that will functionally lock projects into Windows. I can't help but think this is going to siphon resources from Linux native graphics development.


I don't see how it'll be locking projects into Windows. There's no user-space DirectX library here.

The whole point looks to be taking your CUDA code that would run just fine on Linux without a hypervisor and running it just as well in WSL2.

I.e., this is at worst still the embrace stage, IMO.


They are providing libd3d12.so, which is a user-space library that apps can link to.

At the moment, that library only works under WSL, but I guess the DXVK folks can connect their implementation too.


> The plan is for Microsoft to provide shims to allow the existing Linux userspace interact with DX12; I'll explain below why we had to pipe DX12 all the way into the Linux guest, but this is not to introduce DX12 into the Linux world as competition. There is no intent for anyone in the Linux world to start coding for the DX12 API.

https://lkml.org/lkml/2020/5/19/1139

And backing that up, they ported Mesa to get OpenGL and OpenCL in the Linux guest and are working on a Vulkan port as well.


We've changed from https://lkml.org/lkml/2020/5/19/742 to the blog post it links to, which hopefully makes things clearer.


I really wish Microsoft would just tell NVidia (and AMD) to support SR-IOV on all their GPUs. Then we wouldn't need any major software changes to enable CUDA acceleration within VMs, just a configuration change to pass through a virtual function of the GPU to make it available to existing drivers.


SR-IOV doesn't make sense on most GPUs, because they already have their own MMUs with different semantics. SR-IOV is terrible for dealing with a large bank of RAM on the target device, because the MMU is in the wrong part of the architecture: it's all the way out on the root complex, so all GPU VRAM reads and writes would need to take the slow path through PCIe and back. This is basically a non-starter for GPUs that have their own VRAM. It also doesn't cover the GPU-specific caching information that's normally stored in the GPU page tables.

Intel is able to get away with it for integrated GPUs because they can codesign their system-level IOMMU with the GPU MMU, and the GPU doesn't have its own VRAM bank. I'd bet dollars to donuts that Intel's new Xe discrete GPU doesn't support SR-IOV even though the integrated versions of the same core will.


How does AMD deal with this on their server discrete GPUs that support SR-IOV?


It's not really SR-IOV in the way you'd generally consider it, as it only virtualizes the GPU's access to main memory, not VRAM. That still requires that you set up GPU page tables and manage them on the host side, with a bridge driver in the guest like Microsoft's work here is doing.

The same with Nvidia's A100.


AFAIK that's up to motherboard manufacturers. SR-IOV often doesn't work on desktops, and I doubt mass-produced notebooks support it at all. MS can work with manufacturers, but they have their own incentives, and it is unrealistic to expect that all of them will suddenly start doing a good job on this front.

Meanwhile, MS wants to cast the widest possible net, so it needs to enable more and more workflows on WSL2 today, not tomorrow.


Other than enabling the IOMMU (which any system with Thunderbolt should have on by default), what does the motherboard firmware actually need to do for SR-IOV? I can't see any reason why it would need to be concerned with enumerating Virtual Functions or anything like that, but I don't actually have the SR-IOV spec on hand to dig through.

EDIT: Looking through an old Intel whitepaper, it looks like the system firmware at most has to reserve some extra config space for SR-IOV devices when enumerating the PFs, so that the OS can enumerate VFs after creating them. But Linux includes an option to re-allocate this stuff if the BIOS doesn't reserve space, so this apparently isn't a hard requirement.


I agree, this would dramatically simplify the situation


The linked blog post in TFA has a clearer title: "DirectX is coming to the Windows Subsystem for Linux"


We've switched to that post now from https://lkml.org/lkml/2020/5/19/742. Thanks!


true


This is the kind of driver that you get when you ignore all existing Linux code, then try to throw 15k LOC over the wall when it's all done. Microsoft should really know better: this is not how you get your code upstreamed.


I doubt Microsoft really cares. Microsoft will ship it in the Linux kernel in WSL2, and people will use it, and if the upstream community doesn't want it I don't see why Microsoft would have a problem with that.


They'd certainly prefer that upstream Linux maintains it for them. The more code they have to maintain out of tree, the harder it is for them to upgrade their WSL kernel.


They mention it's a 'first draft' and they want to work together to try and get a final draft in ~6 months.


Did you even read the article or comprehend it? This isn't DirectX for Linux. It's DirectX access for WSL compute.


At least it is just 15K LOC!


Is it really "DirectX on Linux," or more like "DirectX on Windows Subsystem for Linux?" Is it going to be useful for Linux in general, outside WSL?


No, this is only an extension of WSL, it will not impact desktop Linux.


Thank you. In the meantime I also noticed the following comment from Dave Airlie on LKML [1]:

> This is a driver that connects a binary blob interface in the Windows kernel drivers to a binary blob that you run inside a Linux guest. It's a binary transport between two binary pieces. [...] I can see why it might be nice to have this upstream, but I don't forsee any other Linux distributor ever enabling it or having to ship it, it's purely a WSL2 pipe.

1. https://lkml.org/lkml/2020/5/19/1288


He's pretty against it on a number of fronts.

That being said, I don't see why distros like Ubuntu that allow non-free code wouldn't ship it out of the box. They already support Nvidia binary drivers anyway, so why wouldn't they support a transport bridge that'd let people use them on WSL too?

Also, he notices that it's intended to be more than a bridge in the future, as they're already working on integrating it with the presentation layer on Linux.


Because this benefits only WSL. I don't think the Linux maintainers should spend time and effort maintaining a shim that only benefits users on another operating system. It does not even lead to cross-platform code if a dev uses this on WSL.

Microsoft can ship it however they please, though, within the bounds of the licenses.


They've already upstreamed lots of other Hyper-V paravirtualization infrastructure.

And why do you assume it only helps WSL? Doesn't this help all Hyper-V setups get access to GPUs, including Azure?


Probably because it's a shim for using a proprietary (only distributed by MS) binary-blob version of DirectX compiled for Linux. Outside of ML utilizing DirectX in WSL on Windows, the only group this helps is MS.

FTR, I'm not arguing whether or not this should be upstreamed. I just see where the other poster could be coming from. I could be wrong on my take and if so, someone please correct me.


They also ported OpenCL and OpenGL to it via Mesa. Vulkan is in the works, and you can use CUDA if the host GPU supports it.

Would y'all be so worked up about VMWare upstreaming their virtual GPU bridge?


Depends. If VMware had the history Microsoft has, one would be naive not to get "so worked up".


VMware is probably a company with an even worse reputation than Microsoft, since they spent more than a decade shipping large parts of Linux linked against their proprietary hypervisor in a blatant GPL violation.

And yet here's where their paravirtualized GPU driver lives https://github.com/torvalds/linux/tree/master/drivers/gpu/dr...


Microsoft has a history of anti-competitive practices that crippled the industry. In either case, serious objections are justified and severe handling of one case doesn't invalidate the handling of the other.


There have been Microsoft-level anticompetitive practices from every proprietary hypervisor vendor. The others are seen as "oh, look! They're working with Linux! Yay!" for a code dump over the wall (like VMware, and arguably z/VM), whereas here Microsoft appears to be doing everything right, working with upstream to refine a minimal MVP, and is being lambasted for it.

It feels like they're doing everything right and everything we've asked for, for decades, and being shunned for it. What's the mechanism here for Linux winning and Microsoft being allowed into the community and what should they be doing different?


They're not doing everything right. They're doing something objectionable, like other people are also doing. Except they're being called out for it because of their reputation of having one of the most hostile anti-competitive practices the industry has ever seen. It's the price you pay for having such bad credit, even if your intentions initially seem benevolent.


> They're not doing everything right. They're doing something objectionable

What exactly? The only performant way to expose GPUs to guests, while still allowing them to be used by the host (or multiple guests), for GPUs that don't have hardware support for partitioning themselves (i.e. pretty much all discrete GPUs), is to replicate the ioctl layer from the host up into the guest. Any other solution is a non-starter if you care about getting the perf you'd expect from the paravirtualized GPU.


Why would they? In its current state, this driver is only useful when Linux is running on Windows. A regular Linux desktop/server user would not find this useful.


Azure is more than 50% Linux. There's an incredible amount of "regular" Linux instances running on top of hyper-v. This lets them use a virtualized GPU in that env.

On top of that, why would Canonical care what hypervisor you're running? There are so many other cases where they haven't gone "eww... proprietary", so why would they start now?


I don't think Azure is relevant here. Standardizing SR-IOV would be much more helpful to everybody (and much more important to Azure/cloud platforms). KVM setups can already pass through GPUs just fine.

This is a way to lock developers who want to run Linux code for AI/ML onto their Windows machines and to keep them from switching their desktop OS to Linux.

Edit: from the kernel mailing list

> There is a single usecase for this: WSL2 developer who wants to run machine learning on his GPU. The developer is working on his laptop, which is running Windows and that laptop has a single GPU that Windows is using


> I don’t think azure is relevant here. Standardizing SR-IOV would be much more helpful to everybody (but much more important to Azure/cloud platforms).

Meanwhile today, Nvidia and AMD GPUs don't support SR-IOV. So that's a non-starter. It's also not clear that it's the right model for GPUs either, as they have their own MMUs already.

> Kvm setups can already pass through gpus just fine.

But you need to dedicate that GPU to the guest and the host can't use it anymore.

>> > There is a single usecase for this: WSL2 developer who wants to run machine learning on his GPU. The developer is working on his laptop, which is running Windows and that laptop has a single GPU that Windows is using

From the current patch set, because they haven't hooked up the swap chains yet (as they've said that they're doing) which would allow full graphics support in the guest.


We've changed from https://lkml.org/lkml/2020/5/19/742 (and that title) to the blog post that it points to (and its title), which presumably makes things clearer.


Twenty years ago Microsoft wanted to make Java “the best language for Windows programming.” Today, apparently, they want to make Windows “the best environment for running Linux.” (This time around they might even succeed.)


Yeah, I don't buy the arguments here that Microsoft can extinguish Linux. Or would want to. Who's going to run Windows servers just to enable WSL? This is clearly targeted at developers coming back to Windows on the PC; the aim is even stated as such, to "enable IDEs" on WSL. I've run Arch, Ubuntu, and Debian on my various systems, not because I want to but because they have the superior development environment in the terminal; outside of that, Linux still feels janky and doesn't have any market share because of that.

Microsoft is making something I've wanted for a while: I can get Linux off my desktop and laptop, do my development and game playing on Windows, and SSH into my servers for any heavy lifting. This sounds like it will be a great win for devs.


This is probably a better article directly from Microsoft: https://devblogs.microsoft.com/directx/directx-heart-linux/

OpenCL, DirectX and CUDA all seem to be available for GPU acceleration on WSL.

One other exciting thing is GPU accelerated GUI Linux apps on Windows. You may not need an X server anymore. I'm looking forward to using i3 on Windows.


Changed to that from https://lkml.org/lkml/2020/5/19/742. Thanks!


As far as I understood the article, this does bring the DirectX API to Linux, but in its current shape it's only usable in WSL, as the API implementation uses the virtual devices provided by WSL under the hood.

Legal/licensing issues aside, how hard would it be to repurpose this to make it available under something like Wine, or even natively on Linux with real graphics devices? How much of this is still locked in on the Windows host side?



Per the second one, it sounds "relatively straightforward" but also "not really worth maintaining upstream."


Hopefully, this will enable Windows users to run Docker containers that use GPUs the way Linux users have been able to for years now with NVIDIA's container toolkit[1].

[1]: https://github.com/NVIDIA/nvidia-docker


From the linked blog post [1], it sounds like it: "In addition to CUDA support, we are also bringing support for NVIDIA-docker tools within WSL. The same containerized GPU workload that executes in the cloud can run as-is inside of WSL."

[1] https://devblogs.microsoft.com/directx/directx-heart-linux/
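If that pans out, the usual invocation should work unchanged on either side; a hypothetical wrapper (`--gpus all` is the standard Docker flag that the NVIDIA toolkit hooks into; the image and command are just examples):

```python
import shutil
import subprocess

def run_gpu_container(image: str, cmd: list):
    """Run a containerized GPU workload the same way on native Linux or,
    per the blog post, inside WSL. Requires Docker plus the NVIDIA
    container toolkit on the host."""
    if shutil.which("docker") is None:
        raise RuntimeError("docker is not installed")
    return subprocess.run(
        ["docker", "run", "--rm", "--gpus", "all", image, *cmd],
        capture_output=True, text=True)

# e.g. run_gpu_container("nvidia/cuda:11.0-base", ["nvidia-smi"])
```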


Am I mistaken, or is this not so much "DirectX on Linux," but only "DirectX on Linux VMs on Windows?"


I was a beta user of WSL1. It was interesting, but the very slow file system performance from Windows to Linux and vice versa was a deal breaker for development use.

I haven't even bothered to use WSL2 because I hardly use Windows anymore, and I was forced to use VMWare with Linux on Windows for a few years of development work. It was always a pain, too, and once I switched to MacBook, I never looked back.

Since then, I had assumed WSL2 wasn't going to be used much by 'real developers', as it also had many of the same problems that plague VMware and VirtualBox, which haven't been adopted by enough developers to pose a threat to Linux-native and Mac users.

But I really feel MS is throwing a lot of resources into this. It's not just a quick fix for those users who need a command line but wouldn't ever consider Linux or Mac anyway, as I initially thought.

And if this gets 'good enough' then imagine even more trouble for Linux and especially Mac's grip on engineers and scientists.

Ubuntu, Red Hat, and the rest have utterly failed to work with Dell, Lenovo, or HP to concentrate on just one or two very well-supported laptops - everything including suspend, battery life, ACPI, fonts, etc. It's a chicken-and-egg problem: workstations aren't profitable because most users won't put up with the hardware and driver problems. And without paying users, the distros ignore potential pro users to concentrate on enterprise.

As for Mac, while I'm happy they've finally fixed the keyboard and allowed 32GB+ RAM, that configuration looks absurdly expensive at $3500 compared to a similarly configured $1700 Dell Precision or ThinkPad. And macOS has gotten worse over the years, not better. There is nothing really compelling me to stay except for its Unix compatibility.

WSL2 may eventually be the final nail in the coffin for a viable Linux Desktop, and Apple should be worried too, though they appear to have forgotten about pro users years ago anyway.


Dell have developer edition laptops that come with Ubuntu preinstalled and I saw a link on HN not long ago about Fedora coming to Lenovo laptops (I think it was https://fedoramagazine.org/coming-soon-fedora-on-lenovo-lapt...). I've not used them myself but I'd hope the situation isn't as dire as you describe, at least for those selected models.


Dell has put out Linux laptops (XPS and Precision) with both Redhat and now Ubuntu for years.

The problem is that they don't keep up with updates, and so you buy an XPS in 2017, and then decide to update to the next non-LTS version of Ubuntu, and eventually more and more of your hardware no longer 'just works'.

And since moving to Mac, I just can't get used to my Dell Precision anymore because certain things like fonts and touchpad are just so inferior to a Macbook from the same year.

And this is probably half the problem: If they concentrate on only one or two models, then they either have to pick their top-end line (Precision or Thinkpad) to compete for pro users who'd go with $3000 Macbook Pros, or instead target the lower end budget users on Inspirons or Lenovo Yogas who'd otherwise just use Windows.

So instead, it just seems like Linux is mediocrely supported on most of their laptop lines, but not a single one is 100% supported long term like it would be with a Mac or any Windows box.


Maybe I'm missing something, but how is this an improvement over virgl? Something that's platform-agnostic and works with applications right now is being replaced with something that's very tied to Windows.

Not a fan but also not a graphics programmer.


Why can't we all just agree and use Vulkan!?


Yes, Vulkan is supported as well. They are implementing a Mesa driver on top of it: https://devblogs.microsoft.com/directx/directx-heart-linux/
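For anyone who wants to poke at this once the driver lands, a hedged sketch: Mesa already lets you force a particular Gallium backend via the GALLIUM_DRIVER environment variable, so something like the following could confirm which stack OpenGL is riding on. The "d3d12" driver name is an assumption based on the blog post, and glxinfo comes from mesa-utils.

```shell
# Ask Mesa for the (assumed) d3d12 Gallium backend and report the
# OpenGL renderer string; fall back to a message if glxinfo or a
# display is unavailable.
renderer=$(GALLIUM_DRIVER=d3d12 glxinfo -B 2>/dev/null | grep -i "opengl renderer" || true)
echo "${renderer:-d3d12 driver not reachable from this shell}"
```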


I suppose the obvious answer is the Xbox One. Microsoft's consoles will presumably continue to support Direct3D and no other graphics APIs.


I also found the phrasing of this to be ambiguous.

Windows (subsystem for Linux)

has a Windows host running Linux

Or

(Windows subsystem) for Linux

has Windows running on a Linux host

They've been using this phrasing since the OS/2 and POSIX subsystem days and it was also confusing then

Quarterdeck did it right with Desqview/X by just renaming their WIN16 subsystem as winx. Then you would see "desqview winx" and know that it's windows running on desqview and not the other way around.

IBM also got it right: OS/2 Windows VDM. I know it's running on OS/2.

But Microsoft has been running with this phrasing since literally Windows NT and it's always been confusing.


Wasn't it originally something like "Linux Subsystem for Windows" and there was some legal problem with the word "Linux" being first?


Apparently people have used the words swapped to mean the same thing.

It's bad.

Microsoft releasing something to run Windows software on Linux (maybe for some exorbitant fee, and only on, say, the commercial versions of RHEL, to undercut Wine's progress) not only doesn't seem out of the question but seems like a reasonable defensive strategy, and something I totally missed the announcement of. It's not like I'm a tech reporter; I go into my work for a couple of weeks missing everything pretty often.


EEE: slowly, the Linux desktop/OS environment will become a thing of the past, something you won't run directly but interact with via the Windows API. And your only other choice will be macOS on ARM. The biggest mistake the Linux ecosystem made was being so fragmented; if they had rallied around one desktop environment, lots of people would've migrated. I personally use a T490 on Ubuntu and a 2013 MacBook, but even I am tempted to try out WSL, even though I haven't used Windows in over a decade.


> the biggest mistake the linux ecosystem did, was being so fragmented.

I'm not sure how many times I have pointed this out here, but the lack of a standardized desktop environment is the reason most software companies are reluctant to support Linux users. In contrast, Android in the mobile industry has succeeded in doing exactly this, with Linux as its kernel.

For a typical or even a complex desktop app, I can support 100% of all Windows users, 100% of the time. macOS is more or less the same here. Linux distros, however, have fragmentation running so deep that to target the majority of users there, you must support the 5+ distros they could be running. That's 5+ CIs to maintain and 5+ troubleshooting guides to keep updated.

As even Linus Torvalds would say: 'I still wish we were better at having a standardize desktop that goes across all the distributions… It’s not a kernel issue. It’s more of a personal annoyance how the fragmentation of the different vendors have, I think, held the desktop back a bit.' This is a distro ecosystem problem, not a 'kernel' issue.


While he is right that it is not a "kernel" issue in the strict sense, I have to say it would have been beneficial if Linus had used his influence to push more for a better desktop environment.


If this works, Microsoft will overnight become viable for scientific dev work. Kudos to MS for taking on what is perceived by many as a "fringe" feature, but it would allow me to use Windows on a laptop while having a sane (and compute-rich) Linux dev environment. In all these years, NVIDIA still hasn't made it easy to use laptop GPUs from Linux directly. At least not if you want good battery life.


I wish they did away with these heart symbols, professing their love for open source. Yes, we got that, Linux brings MS tons of money via Azure, but all this monkey show about love is plain disgusting, I can't imagine someone falling for that.


So they WILL port DirectX to Linux, but only when it's running as a VM atop Windows.

Typical..


I remember starting to use and learn Visual J++ just to find out that if one wasn't really careful the code you wrote didn't run on other systems. If I was going to keep making a Windows only program, I preferred to stick with Visual Basic at the time. The real Java SDK was harder to learn for teenager me so I dropped it for a couple years until I started at Georgia Tech, which used Java extensively.


Step 1) Put a Linux Kernel on Windows as a separate subsystem

Step 2) Port all APIs to this Kernel

Step 3) Switch over to Linux as the main kernel and make the Windows kernel a Linux subsystem to support legacy Win32 apps.

Step 4) Make all win32 support into a containerized system

Microsoft moves to a Linux-based system that it offers for free and that is only used for integration and support of its services.


Maybe this question has an obvious answer, but will I still need to start my X Window System for graphics to display from my WSL terminal?

My guess is that the answer is yes, but the type of applications that can be run is now extended?

Conversely, I would expect this doesn't change anything with regards to SSHuttle, which would still not work on WSL.
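In the meantime, the usual WSL2 workaround still applies. A hedged sketch, assuming you run a third-party X server such as VcXsrv on the Windows side (WSL2 is a VM, so "localhost" is not the Windows host; the nameserver in /etc/resolv.conf pointing at the host is an implementation detail that may change):

```shell
# Point DISPLAY at the Windows host, whose IP WSL2 writes into
# /etc/resolv.conf as the nameserver entry.
host_ip=$(awk '/^nameserver/ {print $2; exit}' /etc/resolv.conf)
export DISPLAY="${host_ip}:0"
echo "DISPLAY=$DISPLAY"
```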


Great, now we might get RAPIDS on Windows! https://stackoverflow.com/questions/58623169/is-there-a-way-...



It seems Microsoft is getting closer to making Windows a fully Linux-compatible OS. I wouldn't be surprised to see them get rid of virtualized distros running in WSL at some point, with Windows natively becoming POSIX compliant.


We changed the url from https://lkml.org/lkml/2020/5/19/742, which points to this.



Windows will become an emulation layer in Linux when Microsoft is tired of maintaining Windows kernel/sees profit in running Windows apps native on Linux. SQL server is already ported.



DirectX on WSL and DirectX on Linux are not the same thing.


The big news is Docker support for GPUs on Windows. You could already train on a GPU on Windows, but as a business you couldn't serve the model, since that requires Docker.
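As a hedged illustration (the image name, script, and mount path below are hypothetical; `--gpus all` is Docker's standard flag for exposing NVIDIA GPUs and needs the NVIDIA container toolkit), serving would then be a one-liner on a GPU-enabled Windows/WSL2 Docker host:

```shell
# Compose the serving command; run it for real only on a machine with
# Docker Desktop's WSL2 backend and GPU passthrough enabled.
cmd="docker run --gpus all -v \$PWD/model:/model my-inference-image python serve.py --model /model/weights.pt"
echo "$cmd"
# eval "$cmd"   # uncomment on a GPU-enabled host
```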


>This is the real and full D3D12 API, no imitations, pretender or reimplementation

But isn't Direct3D a COM API?


Nobody:

Microsoft: Now you can use DirectX in the Windows Subsystem for Linux!


Even though I am quite successfully using pytorch under Win (with conda), ... THIS IS EPIC, because now I can finally go full WSL.

and a pretty good excuse to switch to fast lane.

i love wsl(2) and this is the icing on the cake.
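A quick sanity check after moving the conda environment into WSL, sketched here under the assumption that pytorch is installed as in the comment above:

```shell
# Ask torch whether the paravirtualized GPU is visible from inside WSL;
# prints a fallback message if torch isn't importable here.
result=$(python3 -c "import torch; print('CUDA available:', torch.cuda.is_available())" 2>/dev/null \
    || echo "torch not importable in this environment")
echo "$result"
```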


I just hope David Airlie is not going to merge it.


This title is misleading, I saw it and got really excited that someday we may be able to game on Linux.


> got really excited that someday we may be able to game on Linux.

Someday? You can already game on Linux.


Title fixed now. Was "DirectX on Linux".


How about sockets...


Blog post from Microsoft linked at the top of the LKML email: https://devblogs.microsoft.com/directx/directx-heart-linux/


Changed to that from https://lkml.org/lkml/2020/5/19/742. Thanks!


[flagged]


You're breaking the site guidelines with this comment. They ask you not to post allegations of astroturfing, etc., without evidence. Things appearing that you happen to dislike (from Microsoft or any other source) are not evidence of astroturfing, they're just evidence of a large community.

If you think you're seeing abuse you should be contacting us at hn@ycombinator.com with links, so we can investigate. No more posting like this, please.

https://news.ycombinator.com/newsguidelines.html


The Year of the linux desktop has come


Actually, the year of Linux on the desktop will never come, partly thanks to this. Developers will find it more comfortable to write Linux software on Windows (some already do); desktop users will have both worlds at hand without being forced to dual boot; Linux gamers won't have any reason to keep using native Linux. Etc. I foresee, in a not so distant future, Microsoft integrating Windows UI elements and event hooks right into WSL, so that Linux developers too will be able to take advantage of the well-standardized Windows UI; no more GTK or Qt frankenapps on Windows: just pure Linux code, written in well-known Windows RADs, that accesses all Windows internals while opening standard Windows dialogs and widgets. That would likely take away a huge part of Linux's users.


> the year of Linux on the desktop will never come also thanks to this.

I think you're giving MS too much credit here. Apple has one UI, and Windows has one UI (2 if you count windows in tablet mode). Linux has, what? 30?

Quoth the Torvalds:

"I still wish we were better at having a standardize desktop that goes across all the distributions… It’s not a kernel issue. It’s more of a personal annoyance how the fragmentation of the different vendors have, I think, held the desktop back a bit."

Only recently did Ubuntu stop trying to push Unity and accepted Gnome, thereby reducing fragmentation by 1.

https://itsfoss.com/desktop-linux-torvalds/


I think that both the Ubuntu and Red Hat families being based on Gnome is a good thing for Linux on the desktop, as there is a clear target for Linux GUI applications. I like the fact that you still have the freedom to run any kind of desktop environment on Linux, but a stable default can only help Linux.


It's better if the competing ecosystem offers multiple stable defaults, IMHO. Have you looked at KDE 5? It's amazing.


I disagree; I do not think that anyone would want to start running Windows just to run the same software they already run while losing a chunk of control and support. But for the real argument, ask yourself: am I switching to Windows because of this?


I think it's the other way round: Do I still need to dual-boot or switch to Linux if this works? Or can I stick with Windows and have the best of both worlds?


I do not agree with it being "The best of both worlds".

You do not control Windows; it controls you. You must adapt to it being in your life and to the choices made by its designers. This is both insecure and insane for a software engineer to accept as an ongoing situation.

Even dual booting Windows means its updater will overwrite the partition table of your drive and hide any other OS you have installed, and as I understand it, people only do it out of need, not out of want.


I'm using Apple products for my computing needs, and I don't particularly like Windows, but Apple tends to be even more opinionated and restrictive, and I really like that a lot. It's a big feature of the ecosystem that I get a lot of pretty good defaults that work quite well, even if they aren't what I'd build from scratch if I had to. That way I don't have to expend energy crafting everything myself and can spend that energy on whatever it is I want to do with my computer. I don't care about the exact type of steel used for my hammer; just give me a reasonably good one and let me build that thing. In that way, macOS currently is the best of both worlds for me: hassle-free if somewhat constrained, but I still have a UNIX underneath, iTerm, all sorts of utilities, and so on.


Looks like a very realistic scenario. From this perspective, Linux is slated to become just another part of the MinGW world, thus having nothing to do with Linux as a potential end in itself. I would not be surprised if one day WSL is advertised as "the best environment for Windows programming."


This is not a port of DirectX for Linux. This extends the functionality of WSL at the expense of desktop Linux.


The sheer hilarity of Microsoft doing this work cannot be over-chuckled.


WTF. Just use Vulkan like a normal person.


Ohhh, such a smart move. If there was any momentum of gaining developers on Linux for gaming this will crush it.

Microsoft is eating Linux with a "loving" embrace.


I'm curious why you think this would make any difference to Linux gaming? If you want games to only run on Windows, they can just develop them for Windows. There's no need to put Linux in the middle.


Vulkan means cross-platform independent of OS. This is a move by Microsoft to lure any undecided/new game developers back to their walled garden. If you don't see this, let's talk in 10 years again - assuming Microsoft doesn't crap on it along the way.


Being able to run DX12 in a virtualized environment that's still under Windows (and thus not actually changing anything vs. just developing only for Windows) is going to lure undecided game developers how?


Say you're a game developer under Linux, OK? Then this comes along and you give it a try. Combine this with all the great game engines that are pure Windows these days (see the Unreal Engine 5 demo launched recently), and you can now target Linux using that. What do you do: continue with Vulkan, or offer your game using this?


You can't target Linux using that. I think you misunderstand what this is. It doesn't help nor hurt Vulkan or Linux development -- this only exists to "optimize" the Linux VM in Windows.


ehh, nevermind


I remember when DirectX was a huge obstacle to porting games to Linux. This feels like a great, great victory for Linux to me!


You're not seeing the big picture. If Windows can do Windows apps and also 90% of what Linux developers use Linux for, what need is there ever for Linux on the desktop? MS marketing may be trumpeting their love for open source, but WSL2 is actually a brilliant bit of jujutsu[0] on Microsoft's part.

[0] I use this word specifically because it's a martial art about using an opponent's strength against themselves. MS's use of Chrome as the basis for their new browser is another example of this.


Microsoft saw developers flock to Mac OS X, where they had a true terminal and could also run line-of-business apps like Office. I was at several Microsoft developer conferences where 90% of the machines were MacBook Pros, and I know that didn't make Microsoft leadership happy. I believe the Surface line was a direct response to that, along with WSL. Someone in Redmond realized they couldn't beat Linux, so now comes the embrace.


> This feels like a great, great victory for Linux to me!

Why?

This is not DirectX on the Linux desktop, it is DirectX for WSL. This means that code written for this will NOT function on desktop Linux.

Additionally, effort spent supporting DirectX for WSL is effort not spent on Vulkan and OpenGL.


> This means that code written for this will NOT function on desktop Linux.

Like CUDA, I assume that most applications would not be coded to this API directly. Microsoft already mentioned OpenCL and OpenGL.

This is to hardware accelerate Linux applications and not to create WSL-specific Linux apps. I can't imagine there's a big market for Linux GUI apps that would only run on Windows.


Someone please correct me if I'm wrong but I interpret this as being only a bridge to DirectX from the Linux kernel. Not a DX implementation on Linux.

You won't be able to take this dxgkrnl driver and load it into a Linux workstation and get DirectX.

And also, someone else responded to you asking what the point of running Linux is if Windows can do everything Linux can.

I would assume that most full time Linux users are using it out of at least a tiny bit of ideological motivation. And if not ideological then habitual, having had the power and granular transparency of Linux for so many years that Windows would never be an alternative.


I use a Linux desktop because it's reliable, fast, and doesn't call home to M$ with "telemetry" data about my every keystroke.

Running Windows has never been an option for me, since Microsoft went down the "activating Windows" path many, many years ago.


That would be DXVK etc., not this.


Embrace - WSL 1/2

Extend - features you really want open source to have, and they fix them (running on a laptop; Apple doesn't do Nvidia), but they try their best to make them run only in their Linux version

Extinguish - may the last one turn off the lights, and thanks for all the fish


Not impressed, especially since it's still a blob.

MS should stop fooling around and start supporting Vulkan instead of pushing their DX lock-in and making hypocritical statements about how they aren't on the wrong side of history anymore because they "support open source". The above feels like typical MS EEE.



