Microsoft: Linux Is the Top Operating System on Azure Today (thenewstack.io)
122 points by jaboutboul 3 months ago | 118 comments



IMO this is more of a distribution problem than an OS problem.

If MS did a better job of supporting headless Windows distributions to compete with Debian (and similar Linux distros), it would be more popular.

For 9/10 tasks, it's way easier to spin up Debian, install a web/db/app server, and have a running solution.

With Windows you're still clicking through dozens of MSI packages and setup screens. It's too inconsistent.

There are workarounds for this, but they are not as mature and familiar as the corresponding Linux setup.

It's the UX, not the platform.


Windows as an OS is not as resilient as Linux when it comes to non-ideal operating conditions.

Plus, Linux is way lighter when compared to Windows. I can easily fit a stable and reliable server inside 512MB of RAM with a couple of services on top of it.

You can only dream of this on Windows. CPU requirements show the same gap: Linux is way lighter.

Lastly, Linux can be tweaked entirely from the terminal. You need complex tricks to do the same in Windows.
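
For what it's worth, here is a minimal sketch of what that looks like in practice; the specific tunables and service names are just examples:

    # typical server tweaks, done entirely from a terminal
    sudo sysctl -w vm.swappiness=10                                   # apply a kernel tunable live
    echo 'vm.swappiness=10' | sudo tee /etc/sysctl.d/99-tuning.conf   # persist it across reboots
    sudo systemctl disable --now apache2                              # stop and disable a service
    sudo journalctl -u nginx --since today                            # inspect another service's logs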


> Plus, Linux is way lighter when compared to Windows.

More specifically than just "lighter", the Linux distro metaphor and ecosystems have already adapted to decomposed use cases. You can run the same Debian (or whatever) image in a Docker container, on the bare server (or RPi/SBC), in a desktop VM, or on a Xen cloud host. Your ssh-based deployment and maintenance scripting works the same everywhere, etc...
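
As a rough illustration (hostnames here are made up), the same trivial maintenance loop works whether the target is a bare server, an SBC, or a cloud VM:

    # same script, any Debian-ish target: bare server, RPi, desktop VM, cloud host
    for host in web-vm.example.com rpi.lan cloud-node-01; do
        ssh "$host" 'sudo apt-get update -qq && sudo apt-get -y upgrade'
    done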

Windows gets sticky and weird the farther you get from a desktop deployment. It's a soup of special case tooling and "Here's How You Do This One Thing". So when those things change, Linux folks just tweak their install scripts a bit where Windows users need to wait for someone to Design a Product for the niche.

This is what everyone was saying back at the dawn of cloud hosting, and it's exactly how it's played out. Windows can be made to work, but... why would you?


    > Windows as an OS is not as resilient as Linux when it comes to non-ideal operating conditions.
I'd disagree with this statement on the grounds that Windows has to run across the most heterogeneous set of hardware and configurations of any of the major OSes. That it doesn't fail more often is a testament to its resiliency. It supports the $200 junkbox your gran bought from Walmart to $20,000 enterprise racks and is expected to support all of those different workloads and an insane variety of software that can be installed. In contrast, I'd make the case that the majority of Linux instances are running in large data centers like AWS, Azure, and GCP, where there's a relatively high degree of homogeneity in the underlying hardware and the operating environment on top of it.

macOS, for example, runs on a comparatively limited set of hardware platforms.


For the record, Linux generally supports way more hardware. Devices whose support was deprecated back in the Windows XP era still work fine today on modern Linux.

And it runs on super-underpowered boards too, on which Windows doesn't even have a chance of booting.


I think the real miracle here is the amount of support a single company provides for Windows, while Linux support is very fractured (in a good way) among many companies with varying levels of support.


Does it though? For all of the use cases that a user might expect, including device drivers for graphics cards, audio, BT and wifi devices, printers, etc.? I'm sure a Linux distro could, but somewhere someone would have had to write a driver for it.

The Windows ecosystem supports a ridiculous variety of hardware.


That is a myth that I also believed, until I had to install Windows Enterprise IoT on various kinds of embedded computers.

The Windows that comes preinstalled on most computers truly "supports a ridiculous variety of hardware", but that happens because some professional has already spent time installing Windows together with all the device drivers provided by the vendors of the various weirder components included in that computer.

When I had to install Windows on embedded computers I had countless problems. I frequently had to use complicated methods to feed device drivers to the Windows installer at an early phase, because it was not able to boot on those computers. Their BIOS/UEFI lacked many legacy features, while Windows lacked drivers for some modern features, so they were mismatched.

Even after Windows was eventually installed, some of those computers had SSDs with low write speeds, which made them unbelievably slow in Windows because of some stupid and useless Windows services that by default were writing to the SSD continuously and were very difficult to disable. Fortunately, that was Windows Enterprise, so it was possible to disable Windows Updates. On some of the computers with problematic SSDs, booting after the initial installation took more than an hour, because that is how long Windows Updates needed every day. Only disabling both Windows Updates and various indexing and swapping services brought the boot time down to normal values.

On the other hand, Linux booted immediately on any of those computers, without my having to do anything, and in comparison with Windows it was blindingly fast. Obviously, because those computers were intended to be used as embedded computers, they did not have any of the peculiar peripheral devices that are usually included in laptops, for which the vendors provide neither documentation nor Linux device drivers; those devices are unusable in Linux unless successfully reverse-engineered, while working fine in the preinstalled Windows.


> Does it though?

Yes it does. RedHat deprecates older drivers, but it's their problem. I connect too many stupid things to my Debian box, and they're enumerated and initialized instantly. These drivers get bug fixes too.


The Windows ecosystem supports those devices (which are irrelevant to servers anyway). I.e., it's the manufacturers that support Windows, rather than the OS supporting them.


With container workloads, "Linux" is just the kernel; userland can be largely absent.

That kernel runs on the cheapest phones and on the largest supercomputers, with a similarly wide diversity of software. A much more heterogeneous environment than Windows can dream of.


    > With container workloads, "Linux" is just the kernel; userland can be largely absent.
That's my point:

    It supports the $200 junkbox your gran bought from Walmart to $20,000 enterprise racks and is expected to support all of those different workloads


I do not understand your point. You are claiming that Windows needs to support a greater variety of hardware than Linux, which is not true. Windows does not run on phones or supercomputers; Linux runs everywhere Windows does.


Those phones are made by a small number of hardware vendors on homogeneous hardware. At any given time, Google only has a handful of Pixels that it is supporting, those Pixels do not have user configurable hardware, and those Pixels are all identical.

Dell sells a PC and it has to work with any number of possible storage devices. Users might swap out the modem. It is expected to work with any PCI-E device. It supports hundreds of classes and models of devices like printers and scanners. etc. etc.

Saying that "it is used on phones" doesn't really mean much when there are only what? 5? 6? major phone manufacturers supporting a limited and largely homogeneous set hardware configurations. Same for cars; a small pool of manufacturers and suppliers on identical hardware configurations.


You'd be surprised at just how heterogeneous Android devices are. Each OEM does their own thing, and it's only in the past few years that Google has gotten a few things standardized. Of course there's still a lot that hasn't been, and will never be, standardized. And the core of Android is Linux. Not to mention there's Termux[0], which works on almost any device, from the fairly old to the very modern (currently running it on my OnePlus 12, and I also had it on past devices).

[0] https://termux.dev/en/


I wouldn't consider something that needs to go into a special, unusable mode to apply updates, and be restarted afterwards, to be that resilient. I usually apply updates to my laptop (running Kubuntu) while using it, and even ignore the occasional restart message (after a kernel update), as it's just a recommendation that can be deferred indefinitely, and my system still remains stable. And my uptimes tend to be on the order of months. Now that's resilience, IMO.


> Windows has to run across the most heterogeneous set of hardware and configurations of any of the major OSes.

Fair, but let's not forget a couple of facts.

First of all, most of the early Linux woes stemmed from two main roots: drivers, and bad implementations of standards (ACPI and APIC most prominently). Lower-cost Windows machines always needed "vendor drivers" to work around the sins vendors committed with bottom-of-the-barrel hardware; those drivers handled the non-standard things done to the already crippled hardware.

Let's also not forget that Microsoft tried their best to weaponize standards to break Linux (in a similar fashion to how they broke DR-DOS). See: the Halloween Documents [0].

So, given that your hardware implements the standards correctly and you have drivers for it, Linux works way better than Windows on the same hardware. This brings us to:

> It supports the $200 junkbox your gran bought from Walmart to $20,000 enterprise racks and is expected to support all of those different workloads and insane variety of software that can be installed.

I have two Intel N100 boxes. One is running W11, the other is running Debian Testing w/KDE. The former is my parents', the latter is mine. Mine uses less RAM, runs way cooler, and uses fewer CPU cycles to accomplish harder tasks, while supporting all the hardware on it at full speed. So, given that your hardware implements the standards and you have drivers, there's no difference in support. It's not a design/kernel issue; it's a support/willingness issue.

I just remembered that Valve's Source engine ran 25% faster on Linux after porting, without any post-port optimizations, which is a great example of how Linux is generally lighter and how well it can perform if given proper drivers for a piece of hardware.

> In contrast, I'd make the case that the majority of Linux distros are running in large data centers like AWS, Azure, and GCP where there's a relatively high degree of homogeneity of the underlying hardware and the operating environment on top of that.

I'm an HPC admin who has been managing clusters for close to two decades. The problems you encounter on these systems when something goes wrong are way more complex than what you see on desktop systems. All of these resiliency features create tons of complexity even as they help you. Linux has run better on these systems for a long time, because these systems have been implementing the standards correctly for a long time; you pay for expensive hardware and a correct implementation of it, so kernel-bundled drivers work correctly 99.9% of the time when you change the hardware.

> macOS, for example, operates a comparatively limited set of hardware platforms.

XNU is, for all intents and purposes, a BSD kernel, and macOS can be considered a BSD. Given proper drivers, macOS could also run on anything from your spoon to a satellite. It's POSIX compliant. They even contribute back to the BSDs.

[0]: https://www.catb.org/~esr/halloween/


[flagged]


And guess what? Those smartphones, cars, etc. are all made by a relatively small number of suppliers on... homogeneous hardware. Samsung, Xiaomi, Motorola, Sony, OnePlus (any other majors?) are pumping out devices running the same stack and writing their own drivers. Can you plug a non-Samsung internal storage device into a Samsung phone? Can you attach a screen from a Samsung device to a Motorola phone? Will a Xiaomi modem work in a Sony phone?

Buy a consumer Walmart PC running Windows and it is expected to be plug and play with nearly any graphics card, USB scanner, printer, etc. I don't think Microsoft gets enough credit for supporting such a huge and varied consumer user base.

For sure, Linux is the core of every Android device and every modern car. But those hardware stacks are homogeneous by manufacturer and there's only a handful of major manufacturers; those devices are carbon copies from a hardware perspective and a manufacturer only has a handful that they support at any given time.


> Those smartphones, cars, etc. are all made by a relatively small number of suppliers on

And Windows doesn't run at all on 99.99999% of them, and never will.

> Buy a consumer Walmart PC running Windows and it is expected to be plug and play with nearly any graphics card, USB scanner, printer, etc

There isn't a single graphics card you can buy in a store today that doesn't work with Linux. In the mid-2000s there were USB scanners and printers that didn't work with Linux but now finding a USB scanner or printer that doesn't work with Linux is extremely rare.

This argument about compatibility is an extremely stupid argument.


Yikes. Chill out


No.


This is also a distribution problem. If there were a distro with the NT kernel, SSH, and core services, it could run on the same hardware.


Nope. NT is designed with a “not POSIX, not UNIX” ethos, and this affects the interfaces in and around the OS profoundly. You won't be able to configure and communicate with the NT kernel the way you talk to a *NIX system, i.e., with files.

You open a serial port the same way you open any file. Same call, single line.

In Windows that was 30+ lines to begin with. NT is not bad, but it's not UNIX, and it was designed to be the complete opposite of it. No userland can change that. Even the “subsystem” functionality can't make it replace Linux, as we see today.
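
To make the "talk to it with files" point concrete, here's a sketch in plain shell (device path and baud rate are just examples). The C version is the single open() call mentioned above; the Windows equivalent goes through CreateFile/SetCommState and friends:

    # a serial port is just a file, so ordinary redirection works
    stty -F /dev/ttyUSB0 115200 raw    # configure the port
    echo 'AT' > /dev/ttyUSB0           # write to it like any other file
    head -c 64 /dev/ttyUSB0            # read from it like any other file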


> This is also a distribution problem...

A lot of the software written for Windows assumes that there is a UI available. This makes it harder to automate things.


Not only does it assume a GUI for user input/output, on an API level there's the assumption of an HWND (window) and its message loop. Some Windows APIs aren't even available if you don't have that. Console-only applications are considered second-class citizens on Windows.


that's a good point


One of their engineers demoed something like this years ago; it was called MinWin.


Even if there were an easy way to run Windows headless and Microsoft didn't charge per-core licenses, I actually think the platform itself is the issue:

It's that Windows isn't POSIX compliant. It's easier to port libraries, etc. to run on OSX than it is on Windows. There are all sorts of weird gotchas when porting to Windows as well. Setting up many language runtimes, compilers, etc. is all very different and oftentimes poorly supported. There's also a far smaller critical mass of libraries that will run on Windows (natively, not in a VM).


Yep. My MacBook and Debian servers act pretty similarly from a CLI perspective. If it's in apt, it's nearly always in brew.


Brew makes me cringe. Apt just installs what you need and then it's done. Brew makes a thing about updating everything - I'm always reluctant about it. It does make having multiple versions of python fairly easy.


A while ago there was a bit of fuss about Windows having introduced something very similar to containers... in the sense that you could spawn small Windows "instances" (I don't remember the correct name) and run applications in there. Similar to "containers" (as a bunch of namespaces) rather than jails or zones.

IIRC Kubernetes can have Windows nodes as well?

EDIT: yep... https://kubernetes.io/docs/concepts/windows/intro/


They are in fact containers. Docker can leverage them, although usually Docker Desktop users are running Linux containers under emulation.

https://learn.microsoft.com/en-us/virtualization/windowscont...
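
A rough sketch of what that looks like (it requires a Windows host with Docker switched to Windows containers; the image tag is an example and has to match what the host supports):

    docker pull mcr.microsoft.com/windows/servercore:ltsc2022
    docker run --rm --isolation=process mcr.microsoft.com/windows/servercore:ltsc2022 cmd /c ver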


The easy way is Server Core, and it's available in Azure. I posted about it below.


Why is it a problem at all, or why is it being framed as a problem?

Microsoft could easily drop a fork of Windows, strip out all of the COM crap, leave .NET Core and include some IIS, but why? You already have someone doing it in Linux.


It makes Windows hard to operate in a “servers are cattle, not pets” environment, though my experience is limited so I’m open to corrections.

That then makes Windows less appealing to the infra-as-code crowd, which is growing to be “almost everyone” these days.

It’s not just having an image with those things, it’s the entire surrounding ecosystem. E.g., my experience with exe or MSI installers is that using them without a GUI ranges from “difficult to find the right CLI switches” to “just not gonna happen”.

Also, I think a lot of that COM crap is used by PowerShell, so gutting it also means giving up CLI access to stuff.


"my experience with exe or msi installers is that using them without a GUI ranges from “difficult to find the right CLI switches” to “just not gonna happen”."

This was my student job during college :) wrapping installers in scripts so they'd install headlessly.
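
For the curious, the usual shape of those wrappers is something like the following (package names are hypothetical); the pain is that every vendor's installer framework picks different switches:

    msiexec /i app.msi /qn /norestart              # a well-behaved MSI, fully silent
    setup.exe /s /v"/qn"                           # InstallShield-style wrapper around an MSI
    installer.exe /VERYSILENT /SUPPRESSMSGBOXES    # Inno Setup-style installer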

My favorite part was that Windows would need to be rebooted CONSTANTLY. Seriously. Provisioning a new box for some environments would result in TEN OR MORE reboots depending on exactly what shitware you had picked to install. And by shitware I do of course mean exorbitantly-expensively-licensed academic/scientific software. The more scientific it was, the exponentially worse the installer experience somehow got.

We had one package where we were forced to cram in a step where you would literally pick up the phone and call IT, and someone would come over and plug in a USB while the install proceeded. The software could be network-licensed, but only post-install; the installer itself never got the memo and wanted the USB to be physically present on the box during install.

Death by a thousand cuts (and some big gashes): Windows is just not built to work headlessly/scriptably, period. Trying to make it work like that takes tremendous investment and constant babysitting, and it's still fairly shitty in the end.

Meanwhile we just apt install shit in dockerfiles and it just works.


You mean like they did almost a decade ago when they released Nano Server? https://learn.microsoft.com/en-us/iis/get-started/whats-new-... etc


Microsoft does have this. It is known as Windows Server Core and it is available as a Marketplace item in Azure.


There are still tons of steps to run an app on it, and they all require a GUI and manual input.

There's some automation, but the maturity level is 2/10 compared to Linux.

So there's a chicken-and-egg issue: Windows doesn't provide adequate automation out of the box to create the automation ecosystem that has led to the turnkey solutions available to Linux customers.


Microsoft has PowerShell and DSC as their automation solution. It's pretty ridiculous to say that they are not providing that.


PowerShell is cool but there aren't convenient PS interfaces for a huge number of normal tasks on Windows

When you set up your next Windows workstation, set a rule that you perform all of your configuration (incl. application settings, as far as is possible) via PowerShell or other automation. It very quickly becomes immensely frustrating, but doing the same on Linux is much less so (Mac is somewhere in the middle).

You'll also find that when it comes to achieving something you don't yet know how to do, even finding a scriptable solution on Windows is kind of a nightmare because both the Microsoft documentation and every third-party website and blog is flooded with (useless, for this exercise) manual GUI garbage.

DSC is nice, though.


I use DSCs too, though I prefer Ansible. Sometimes I combine Ansible with DSCs.


I just use Ansible with OpenSSH from my remote host.

I can also run PowerShell


As well as Azure App Service on Windows. The deployment looks just like deploying to Azure App Service on Linux. I use Azure App Service on Windows to deploy web applications, although the CMS software I use does now support Linux. I then make use of Azure SQL Server to run my database.


Microsoft also charges per core for a licence; Linux does not.


I think this is oversimplifying things a bit. There are many workloads running on Linux that also pay licensing for RHEL, SLES and Ubuntu Pro.

It really comes down to the requirements for your app as defined by the vendor and/or the internal team.


Licensing exists for enterprise support if you want or need it, but it is absolutely not required for Linux, unlike Windows.


Really? Any numbers? 'Cause I've always commented that for all the companies I've worked for, even the ones that literally developed proprietary desktop software for Linux and thus had RedHat on all workstations... they never sent a single dime to RedHat.


Were they actually running RedHat Enterprise Linux, or was it something like Fedora or CentOS? CentOS was basically RHEL with the branding stripped, but I don't believe they had any kind of support agreements.


That is the point.


Microsoft charges, Linux does not.


>Linux does not

Red Hat, Suse and Canonical do charge.


They do charge for extra benefits, if you need them. But you can deploy a server to production today on Debian and be fine, unless you're in a case where you need to pay for support (regulatory).

Just because RH, SUSE and Canonical charge does not mean those are requirements. You can always opt to run Linux and not pay for their support.


Right. Linux vendors charge for services; Microsoft charges rents.


> Just because RH, Suse and Canonical charge, does not mean those are requirements

You are leaving a gaping hole in your servers if you are not patching them (for the distros that charge, when you decide not to pay).


If you can handle the churn of building against a new platform every year or two instead of every decade or more, you can keep all your stuff patched without an extended support license.


This is a valid point, but I wonder how many installs are managed through a commercial contract. My assumption is that it would be a small number of high value contracts, but the bulk of installs are just free Ubuntu/Debian/Fedora installs.


yes but here's the main thing:

- you can choose to pay RedHat, Suse and Canonical

- you cannot choose not to pay Microsoft


[Removed comment because it responded to wrong parent]

https://news.ycombinator.com/item?id=41038604#41038791


What does that have to do with paying? Only RHEL requires that you pay for patches/updates. If you're running Debian, Ubuntu, etc you get all your patches and updates for free; no need to pay.


Regarding Ubuntu: if your org's requirements mean going beyond 5 years on an Ubuntu LTS, then you do need to pay for Ubuntu Pro.

Also, many orgs opt to pay for Ubuntu Pro for peace of mind.


Beyond 5 years - I never ran into major problems doing release upgrades.


The release upgrades for Ubuntu were a little rough whenever they changed init systems (sysv -> upstart, upstart -> systemd), and occasionally there has been some other weirdness.

But the real key is that do-release-upgrade sucks because it's too conservative: it rolls back when there are any errors or dependencies it can't resolve. If you do your release upgrades manually, you can always work through those issues. Debian/Ubuntu busybox includes dpkg, so you can recover even if you end up in a very broken state (glibc version mismatch, coreutils don't work, bash is broken, etc.). You just need a copy of busybox installed, tmux, maybe a portable SSH daemon, and a statically compiled curl, and you can do everything the release upgrade process does, but with the ability to rescue a failure instead of rolling back or dying. It's probably good to follow the same upgrade path that tool likes, though (only upgrade directly between adjacent releases or from one LTS to the next LTS), so you may have to do this a couple of times if your servers are really old.
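
A very rough sketch of that manual path, assuming an Ubuntu LTS-to-LTS hop (release names are examples; back up and read the release notes first):

    sed -i 's/focal/jammy/g' /etc/apt/sources.list /etc/apt/sources.list.d/*.list
    apt-get update
    apt-get dist-upgrade          # work through any dependency errors by hand
    apt-get autoremove --purge    # clean up packages the new release no longer needs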

(I guess you don't really even need any of those statically compiled tools, either— you can just use Nix to provide whatever you need instead since no Nix-provided tools care about the libs provided by the underlying Ubuntu system.)

At a past job I upgraded in-place some Ubuntu 14.04 web servers to 20.04 through some release-upgrade failures this way. It's generally better to rebuild on a new, clean image, but in-place upgrades on Linux rarely fail and are pretty much always rescuable when they do.


That's one option of many, not the only way to license.


Microsoft is so big that the CEO runs the company based on the Friedman doctrine, i.e., their shareholders are the driving force. Most shareholders don't know the difference between OS designs, or anything about automation; most can only read news clippings. I can't remember the last time Microsoft Azure actually made non-tech news.

Linux's "shareholders" are the community of developers and users. They want to continually make better products. There are no stockholders to court.

This is why Microsoft will not produce the more viable product that people are looking for in the server and embedded markets. Their oligarchy locks them into company IT infrastructure and PC gaming. I would also argue that macOS is a better personal OS when gaming is not a requirement, but it is cost-prohibitive for the average person, so Microsoft eats that cake too.


There's also Azure App Service on Windows, which acts like a headless Windows distribution. There's integration with Visual Studio and Visual Studio Code, as well as a web interface and a command-line interface. Most of the web software I work with has now been ported to work on Linux and Windows (with the consolidation of .NET Core/.NET), and I'm really only held back by the CMS software I use still being tied to MS SQL Server/Azure SQL (which I can also deploy to from Visual Studio or Visual Studio Code's version of SQL Management Studio). Really the only issue is if you want a full virtual machine running Windows itself, so you can treat it like a local machine.


Is it a problem at all? Why bother trying to switch people over to Windows when you can just make money off hosting their Linux servers?


you're not thinking about the sales channel.

It's more lucrative to license Windows + SQL server + MS 365 + etc etc than hourly VM rates.


365 is a hosted service. You don't need to run Windows on your servers to use it.

SQL Server is also offered as a managed service on Azure, and you can query it from your Linux server just fine.


I'm talking about the sales funnel.


Except that the very few who would take that option, when there's a better open-source alternative available for free, won't make it worth it.


Nano Server, now Server Core.

However, even if they were much better than they currently are, many people go to Linux workloads due to licensing.

Unless the applications they are deploying depend on Win32 features.


Thank you @jaboutboul. I appreciate that Linux works so well on Azure.

A substantial problem for the Linux ecosystem on Azure is that Azure Files is not POSIX compliant. With Container Apps, ephemeral storage is POSIX compliant. However, if you mount a persistent Azure Files file system and use it directly, some applications break. One workaround is to use rsync in the background to replicate data from ephemeral to Azure Files, but we can lose data this way (and ephemeral storage is limited to 8 GiB).
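
The workaround is roughly the following (paths and interval are made up), and the caveat is exactly as described: anything written between syncs can be lost.

    # replicate ephemeral storage to the mounted Azure Files share in the background
    while true; do
        rsync -a --delete /app/ephemeral-data/ /mnt/azurefiles/data/
        sleep 60
    done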

It'd also be nice if "Consumption Only" container apps could have more than 4 GB of memory. They're so nice to use.


I've forwarded this feedback to both teams. Can you email or message me so that I can loop you into the product teams?


You can use Azure Files NFS; that should be POSIX compliant, but it comes without auth or encryption... :)
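
For reference, mounting an NFS Azure Files share looks roughly like this (storage account and share names are made up; check the Azure docs for the exact export path and mount options):

    sudo mkdir -p /mnt/nfsshare
    sudo mount -t nfs -o vers=4,minorversion=1,sec=sys \
        mystorageacct.file.core.windows.net:/mystorageacct/myshare /mnt/nfsshare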


The NFS protocol does not support all the functions of a "native" filesystem running on a block device.

NFS is fine for configuration files or read-only use. But workloads that do any sort of intensive writes will probably not like NFS, from SQLite on up.

The newest NFS versions might fix some of the issues around caching that cause problems for write operations though.


NFS requires a custom VNet. Using a fully managed environment is important to us.


> Microsoft enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more.

Is this a paid ad we are reading, or a news article? Genuinely confused.


> Microsoft, Oracle and Red Hat are sponsors of The New Stack.


Hey everyone. We put in a lot of effort to make sure that Linux runs really well on Azure and that you have a great experience. Glad this got some traction, and happy to answer any questions.


Is the hypervisor on the hosts also Linux-based, or Hyper-V? I recall seeing GitHub Actions errors the other day that suggested a Hyper-V failure even though all of our stuff is Linux.


Azure uses Hyper-V behind the scenes


Windows Server aka CoreOS


Congratulations on the milestone!

Is there a paved path for bringing up NixOS on Azure VMs? I haven’t looked hard yet, but I’m about to so if this is a known thing I’d be grateful for any pointers.


My experience was that the Linux VM agents used to cause frequent problems, but in the time I was using Azure I did see great improvements in them; kudos for that. I'd like to see less churn and fewer breaking changes in the agents and in Azure in general, but that seems to be a problem common to all major clouds.


> Glad this got some traction

Yes, we know.[0]

[0] https://www.kickscondor.com/satya-nadella-'reads''games'-hac...


Thanks for posting that. This post had nothing to do with Satya, nor did he or anyone else aside from myself and a couple of other teammates see it before posting.


I really fail to see how interacting with the community to see how we feel could possibly be negative. Sure, they're doing it to push their product, but they're also doing it to see where they need to improve.


I don't see it as particularly positive or negative, just something worth knowing.


It's the year of Linux on someone else's Desktop.


*server


That has been true for decades?


Honest question: is Azure itself a Linux system, or are these Linuxes running on top of some version of Windows?


Microsoft uses a Windows-derived host OS running Hyper-V for the VMs.

https://techcommunity.microsoft.com/t5/windows-os-platform-b...


For general compute, yes, that is correct. A good chunk of the work that we do is making sure that the HostOS plays nicely with Linux guests and vice versa.


They've always liked to dogfood their own stuff, even when it makes no sense whatsoever.


so the "real" azure nodes are some kind of special Windows+HyperV but for example their accelerator cards are linuxes: https://cfp.all-systems-go.io/media/all-systems-go-2023/subm...


We use Azure AppService. If you choose AppService Windows, it provides a virtual instance of Windows with IIS. If you choose AppService Linux, it provides a virtual instance of Debian (if I remember correctly). I'm not sure if Linux is running on top of Windows or Windows on top of Linux, but you get what you chose.

I'm assuming Microsoft has pivoted to SaaS and the Cloud, and the OS plays a lesser role in the current form of Microsoft. Gone are the days of Steve Ballmer. Windows and Office are no longer the primary cash cows.


I was there three years ago. Most (possibly all) of what was running on bare metal was Windows/Hyper-V. Some Azure service backends ran on Linux VMs; AKS backend for instance was on Linux I believe. Most were Windows/C# though. I know there was some initiative to allow for Linux bare metal for some internal services, I think to allow for running k8s clusters without the extra hypervisor layer(?), but it was still WIP when I left.


The Azure infrastructure, as I assume most hyperscale infrastructures are, is hybrid depending on the service. Service owners typically have a fair amount of liberty in deciding how they architect their service and what they use underneath it.


One interpretation of your question is about the Azure API / RM engine itself, which, if I understand correctly, is written in C# and can run on either Linux or Windows, or probably some containerized app-engine version of either or both.


When the OS became a product input rather than the product itself, things changed.


This article doesn't talk about actual VMs/instances running in Azure. Also, so many new service devs only target Linux, and Windows is an afterthought. No wonder 60% of the marketplace is Linux.


The actual talk (video in the article) says more than 60% of VM vCPUs are from Linux VMs; it does not talk about instance counts, though. But I would expect Linux to account for a lot of very small and very large VMs :)



According to that wiki, he has not worked on Azure since 2012.


According to the same page, he is not a fan of Unix. I assume he is not a fan of Linux by extension.


Isn't that obvious? It doesn't matter that Microsoft is Microsoft; Azure is a cloud service, and in the cloud, Linux servers are the winners. The only thing that could compete with that is virtual/remote Windows desktops, but I really don't think they are used across all industries the way Linux servers are.


AFAIK that has been the case for many years, to the point where in Azure it’s acceptable to go “Linux-first” when developing new services. Maybe even “Linux-only”.


I cannot comprehend the folks who choose to build and deploy apps on Windows servers. C# and F# I can get behind, but I would deploy them on Linux.


Same, but moreover, I consider the “Windows” part of my dev experience to be entirely worthless and advise my son not to bother with anything Windows-specific. Our lives are finite; might as well learn stuff that is Free and that has lasting value.


I'm honestly surprised it's not much higher than "over 60%". I know that a lot of orgs have done lift-and-shift for Windows services on Azure... Considering many, many things developed for .NET are trivial to update/port to newer versions on Linux/Docker, I'm just surprised more haven't done so.


60% of the marketplace catalog, not actual usage. That is at about 50/50. Microsoft needs to work on getting it the other way: more like 60% Windows Server and 40% Linux Server.


They do talk about actual usage in the talk, but only vCPU-wise, not instance-wise. For vCPUs it's also more than 60% Linux.


...or at least the top one that's, you know, not in a BSOD crash loop.


LOL


Windows is a pile of shit and only managed to stick around by bundling an actual Linux kernel in WSL2 to cling to relevance with software developers and ultimately because retards who want to play-pretend they are IT and shouldn't be anywhere near computers beyond muh video games keep installing it everywhere.


[flagged]


Can you elaborate on this? It's rather general.



