The article says "Microsoft will release a custom Debian Linux," but the linked Github repository says:
> Q. Is SONiC a Linux distribution?
> A. No, SONiC is a collection of networking software components required to have a fully functional L3 device that can be agnostic of any particular Linux distribution. Today SONiC runs on Debian
This article has changed since the last time I read it, minutes ago. It didn't use to have that link, and it used to speculate that the supposed Linux distro would be MIT licensed.
...There are several things I don't like about this article.
Distributions can have licenses. Most modern distributions are MIT licensed (meaning you can create derivative distributions under the conditions of the license). OpenSUSE is still GPLv2 and is one of the last ones (Debian switched to MIT recently IIRC).
But most of that software will be licensed under other licenses, and you can't just redistribute it under another license!
If you ship a collection of software called a "distribution" that includes a copy of the Linux kernel, then that copy of the kernel remains GPL. The fact that it's on the same CD or server as some other MIT-licensed software is irrelevant.
A collective work is separate from a derivative work; these are two distinct things in copyright law. You can own the copyright over a library of music despite not owning the copyright over the actual music itself. So you could have a proprietary Linux distribution that is made up only of GPL'd software. The only thing that would be "proprietary" is the particular configuration, selection and build scripts of the packages that you picked.
So, having a GPL'd distribution means that the package sources and configurations and other such "distribution sources" have to follow the rules of the GPL. It doesn't matter what the license of the software itself is (it can even be GPL-incompatible or proprietary).
In tangentially related news, there was a lot of talk at NetDev about switchdev, a new Linux driver model for switching hardware with offload support.
It allows the kernel's Layer-2 and Layer-3 switching/routing configuration to be reflected down into the switch offload hardware, and the switch's ARP and MAC table data to be reflected back up to the kernel stack.
The overall idea is that you can continue to use the same userspace tools to configure routing and switching, and it all just magically goes faster if you have supported switching hardware.
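To make that concrete, here is a minimal sketch using the pyroute2 Python bindings (an assumption on my part; pyroute2 is just one way to issue rtnetlink calls, and the port name "swp1" is a placeholder). The point is that nothing here is switchdev-specific: these are the ordinary netlink operations you would do on any Linux box, and when a switchdev driver is present the kernel mirrors the resulting FIB state down into the ASIC.

```python
# Sketch only: ordinary rtnetlink configuration via pyroute2 (needs root).
# The same calls get offloaded transparently when a switchdev driver exists.
from pyroute2 import IPRoute

ipr = IPRoute()
idx = ipr.link_lookup(ifname="swp1")[0]   # "swp1" is a placeholder front-panel port
ipr.link("set", index=idx, state="up")    # bring the port up
ipr.addr("add", index=idx, address="10.0.0.1", mask=24)
ipr.route("add", dst="192.168.50.0/24", gateway="10.0.0.254")
ipr.close()
```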
We (Cumulus) have been involved in switchdev since the beginning. Over the long term, it will replace switchd in our products, but that is at least a year out, maybe two.
Appreciate Cumulus' work in this area. As a network "engineer", I find the burden of administering switches horrible. The downside, though, is that getting more switch SoCs talking to Linux makes me wish I wasn't automating configs right now.
I imagine anything Microsoft releases that could possibly have GPL software in it, such as the Linux kernel, will face the most aggressive search for violations of any software ever. Memories are long, and that distrust is not going away any time soon.
Agreed - it's good to be cautious - but Microsoft's approach toward E/E/E (embrace, extend, extinguish) was rational (if slightly evil) when they were trying to control the platform.
But now they are trying to control the service/data, in which case the cloud API (theirs, preferably) is what they want everyone to buy into. In that model, they deploy completely proprietary service interfaces (similar to what Amazon does with AWS, Stripe with their API, etc.) - but at the same time they want 100% support of all client platforms, particularly in the mobile world, where they have rounding-error presence.
Net-net, it's entirely rational for a profit-maximizing Microsoft (of 2016, at least) to play nice with all alternative client platforms. I believe Satya Nadella's Microsoft is going to be a very different animal than the Ballmer/Gates Microsoft.
I guess we'll see over the next 5 years, but it looks positive so far.
Oh, we're already seeing it - *ahem* Skype *ahem*. Is that not MS? Is that not Linux? Is that not seriously user-hostile? (Oh, but that's something completely different. Right?)
Not really. I'd say it's more that Skype on Linux isn't a priority for them. They have it on Windows, Mac, Android, and iOS...
And if you think it's user-hostile for MS to not continue development of Skype on Linux, don't you have to agree that Instagram, Snapchat, and all the other app developers that don't release apps for Windows Phone are user-hostile as well?
There's a difference between "let's not start a port of Y at all" and "start degrading the experience on an existing port of Z." (See? "MS is completely pro-Linux" - where "completely" excludes anything that contradicts this, leading to a nice circular definition - is about as no-true-Scotsman as you can get.)
You are very clearly grasping at straws here. This isn't some nefarious tinfoil-hat plot. I honestly suspect their focus is simply elsewhere.
Considering how inept the development of Skype has been in recent years on any platform, I find it completely implausible that there is any nefarious intent to this.
Skype works fine on iOS, Mac, Windows (obviously), and Android as well.
The number of people who run Linux on their desktop is probably on the same order as those who run BlackBerry - neither of which has enough market presence to make it worth writing/maintaining apps for. I'm pretty sure Skype isn't available from MS on OpenBSD either. (And, honestly, pretty soon I wouldn't be shocked if Windows Phone stops getting the same care and attention that iOS does.)
The desktop (Win32) Skype app is pretty crappy even on a Surface Pro 3, and a lot of features are still missing from their UWA apps. For something as important (to Microsoft) as Skype, the software is bad.
I might be wrong about you personally; I am speaking of the Linux community as a whole.
I still find it incredibly hypocritical of people to take an anti-Microsoft stance while they walk around with their Apple products. I seriously see more Macs at every Linux event than any other kind of computer. Most anti-Microsoft sentiment really is anti-Steve Ballmer sentiment.
I think Apple at least has not tried to strategically attack Linux/OSS via rumors and lies à la the Halloween documents.
But I certainly agree that the Linux community as a whole should re-evaluate its position on MS. If you expect that to happen overnight, you are a fool.
Once again, that is Steve Ballmer. If this were a company-wide belief, it would have run through the whole company and its policies, and it didn't.
I'm NOT defending the bad parts of MS, but people treat MS as the evil empire when it was a divided, HUGE company. Inside MS there were plenty of positive open-source people and projects. https://en.wikipedia.org/wiki/History_of_free_and_open-sourc...
I believe the anti-MS sentiment is way out of proportion and distracts from worse offenders. Look at Oracle: that company is the devil to open/free software, and people still run Oracle products all the time. (Stares at my own VirtualBox install for use with Vagrant on the Windows machine next to me.) Or IBM and their incredible patent farm (IBM is, year in and year out, the number one patent company in America).
Apple is the polar opposite of the open-source movement. Apple has been guilty of price-fixing ebooks, patent misuse, and patenting designs and slide-to-unlock. Apple does not work with the community; it works behind high, secret walls. Share a secret and they might send the police after you or sue you. They blacklist reporters who publish opinions negative to Apple, cutting off access to the company, events and products.
So yes, MS has made many bad moves, but I wouldn't even say they are the worst. If you take out anything Steve Ballmer has said, they actually were better than most.
The open-source movement has won, and we should celebrate the victories. SQL Server on Linux? I wouldn't have thought it possible in a million years. And C# has been opened up. These are amazing days.
Even given all that, MSFT has been more antagonistic to Linux in the past. Ballmer was not even CEO when the Halloween documents were released; I find it difficult to believe that Ballmer is the sole cause of MSFT's anti-OSS positions.
TBH I agree with you on most points, but calling Linux users who have Mac hardware hypocrites is a stretch given MSFT's track record.
To a first approximation, all the engineers producing these products were already at Microsoft before Nadella became CEO. Microsoft's first open-source project is more than ten years old. Scott Hanselman's Hanselminutes podcast covers the period before and after his hiring and documents his experience of the long cultural shift, e.g. ASP.NET MVC, Microsoft's relationship with Mono and Moonlight, and a bunch of smaller projects.
To put it another way, Nadella's appointment wasn't a coup d'état. Microsoft under Ballmer ejected several hard chargers in line for the throne, and they weren't the ones who saw open source as the way forward.
I've never quite forgotten that Ballmer accused anyone involved in open source of being a communist. Not to mention he said that Linux was a "cancer".
Or, to be precise, he said:
"[W]e have a problem ... when the government funds open-source work. Government funding should be for work that is available to everybody. Open source is not available to commercial companies. The way the license is written, if you use any open-source software, you have to make the rest of your software open source. If the government wants to put something in the public domain, it should. Linux is not in the public domain. Linux is a cancer that attaches itself in an intellectual property sense to everything it touches. That's the way that the license works."
and of Linux and Communism:
"There's no company called Linux, there's barely a Linux road map. Yet Linux sort of springs organically from the earth. And it had, you know, the characteristics of communism that people love so very, very much about it. That is, it's free."
As I recall, it was mostly that backwards-compatibility with 8086 was so required that they put every useful performance enhancement except raw MHz in a separate protected mode... and then you couldn't switch between protected and real mode without (I am not making this up) asking the 8042 processor controlling the keyboard to hold your beer and reset you, which could take an agonizingly long time.
The 8042 had another "fun" function on the 286 too. Because there were a number of poorly written programs that expected memory addresses to wrap around at 0x0FFFFF, maintaining compatibility required some way of forcing the A20 line low. To save money, IBM noticed there was a spare pin on the 8042 and used it to control whether the A20 line would be held low. And because this became another bit of required backward compatibility, a way to gate the A20 line stuck around until Intel's Haswell line, decades after A20 gating was last genuinely needed.
Also, there was another way of getting the processor back into real mode that was even more fun. You'd initiate a triple fault to trigger the reset, because that was faster than asking the 8042 to do it for you. Oh, the good old days.
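For anyone who hasn't had the pleasure, the 8042 dance looks roughly like this in C. This is a sketch only: GCC-style inline assembly, only meaningful in ring 0 on legacy PC hardware (modern chipsets emulate or ignore it), and the helper names are my own.

```c
/* Rough sketch of the classic 8042 keyboard-controller tricks described above.
 * Port 0x64 is the controller's command/status port, 0x60 its data port. */
#include <stdint.h>

static inline void outb(uint16_t port, uint8_t val) {
    __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
}
static inline uint8_t inb(uint16_t port) {
    uint8_t v;
    __asm__ volatile ("inb %1, %0" : "=a"(v) : "Nd"(port));
    return v;
}

static void kbc_wait(void) {            /* wait until the 8042 can accept a byte  */
    while (inb(0x64) & 0x02)
        ;
}

static void enable_a20_via_8042(void) {
    kbc_wait();
    outb(0x64, 0xD1);                   /* command: write the output port          */
    kbc_wait();
    outb(0x60, 0xDF);                   /* output-port value with the A20 bit set  */
}

static void reset_cpu_via_8042(void) {  /* the slow way back to real mode          */
    kbc_wait();
    outb(0x64, 0xFE);                   /* pulse the CPU reset line                */
}
```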
I heard something about that. Also I heard that the segmented memory was EVIL, or something. Although apparently unixes running on the platform used it to simulate the user/kernel divide.
And honestly, we all know that x86 is an awkward architecture at best, hampered by the need for obsessive backwards compatibility, and that without the huge binary footprint, ARM and RISC would have won.
Remember when Andrew Tanenbaum said that his issues with Linux's design were a moot point, because we'd all be running HURD on RISC machines by the year 2000? Have you noticed people in the Linux crowd saying that Wayland will kill X? You'd think we'd have learned our lesson by now: backwards compatibility trumps all.
What was wrong with the 80286 was that Intel expected it to be used like processors are used now: Flipped into protected mode once at boot-up and kept there the whole time application software is running.
Also, it had a 24-bit address space when the industry was really ready for 32-bit, and I have a vague recollection that there were other 32-bit CPUs available at the time from other manufacturers (maybe Motorola?) but Intel was lagging.
The 24-bit address space meant that developers had to deal with a funky segmented addressing scheme, which I believe was also a pain on the 8086, and I think a nice flat 32-bit memory space would have been a whole lot better.
Bill Gates called the 80286 "brain dead".
I seem to recall it being effectively a faster 8086 but its extra capabilities were both underpowered and impractical to use.
In a sudden twist of fate, Microsoft announces that they are writing their own closed-source systemd alternative. Millions of naysayers flock to systemd, hailing it as the savior of Linux.
I pondered this for a bit wondering what the Microsoft equivalent of systemd is, and decided it's probably svchost: it runs a bunch of services on your behalf in an opaque way, and occasionally needs killing when it eats all your memory.
Microsoft already have binary logging that's much harder to read than text files. I'm not sure if there's an equivalent to process reaping, or whether that's handled by the kernel.
Microsoft's equivalent is called "wininit". It spawns services (including the svchost shared process groups and services.exe), lsass ("Local Security Authority Subsystem Service"), lsm ("Local Session Manager"), and winlogon (the "Windows Login subsystem" for session 0); it also initialises the registry, creates the temp directory if it doesn't exist, etc. During shutdown, after winlogon terminates (i.e. session 0 terminates), wininit sends ExitWindowsEx() to all system processes before exiting itself.
The two really are 1:1. Windows under the hood is VERY UNIX like. More so than most people realise.
> The two really are 1:1. Windows under the hood is VERY UNIX like. More so than most people realise.
I find this really amusing. I've been reading "Showstopper!: The Breakneck Race to Create Windows NT and the Next Generation at Microsoft"[1] and Dave Cutler is described as VERY anti-Unix. He considered it an inelegant system developed by a bunch of PhDs doing their own thing their own way.
That's a comparison that comes with a lot of caveats. The timeline of Windows NT versus things like the IBM System Resource Controller and the Solaris Service Management Facility is one such. Another is that it really depends from which Unix; even setting aside the fact that systemd only runs on Linux, not Unix. There are some significant structural differences, moreover.
* On AIX, the program that runs as process #1 is init, processing inittab and handling runlevels. Service management is done by another program (srcmstr) running as a separate process; which client programs send commands to via /dev/SRC, a local domain socket. Terminal login is not considered to be a service to be managed as other services are, and is spawned by init not by the SRC.
* On Solaris, there is an init program that runs as process #1, handling runlevels. The SMF master restarter (svc.startd) and configuration manager (svc.configd) programs run as separate processes; instructed by client programs such as svcadm and svccfg. Terminal login (as of Solaris 11) is considered to be just another service managed by the SMF, an instance of svc:/system/console per terminal.
* On Linux or FreeBSD/PC-BSD running nosh, there is a system-manager program that runs as process #1, handling system state. A service manager that supervises services runs as a separate process. Commands such as service-control and system-control communicate with them via FIFOs and files in the filesystem. Terminal login is considered to be just another service managed by the service manager, one ttylogin@ttyN service per terminal.
However:
* On Windows NT, the first user process runs the Session Manager program (SMSS.EXE). Services are managed by another process running the Service Controller (SERVICES.EXE). Client programs send commands to the Service Controller and to the Session Manager using "Native API" LPC; the former to manage the supervision of services and the latter to do things that don't really have a Unix/Linux equivalent like turn on the POSIX subsystem in a session. Terminal login is not managed by either the service manager or the system manager, but by the WINLOGON and Local Session Manager programs, and is not treated as anything like a service.
Furthermore:
* On Linux running systemd, there isn't the dichotomy as in the aforementioned systems. System and service management are both done in one program, running as process #1, not in two separate ones. There isn't one direct control/status API, either. The process #1 program exports a Desktop Bus interface, which requires that a Desktop Bus manager daemon be running. To avoid the chicken-and-egg problem that results (since the Desktop Bus daemon is a service that is managed by the process #1 program), there's a separate, private and intentionally undocumented, "bus" API directly to the process #1 program. This is "known" only to the systemctl program. There's also a handshake between the process #1 program and the Desktop Bus manager daemon, activated by its largely undocumented --systemd-activation option, whereby the latter tells the former when there's a Desktop Bus for it to register its API with. Terminal login is a service managed by the mixed-together service/system manager, an instance of the autovt@.service service per terminal, with activation of these terminal login services controlled by another service, systemd-logind.
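To make the "exports a Desktop Bus interface" part concrete, here is a small sketch using the dbus-python bindings (an assumption: any D-Bus binding would do, and it requires a systemd-managed machine). It goes through the documented org.freedesktop.systemd1 API on the system bus, not the private PID 1 channel that systemctl uses as a fallback:

```python
# Sketch only: query systemd's public D-Bus API (org.freedesktop.systemd1)
# over the system bus using dbus-python.
import dbus

bus = dbus.SystemBus()
systemd = bus.get_object("org.freedesktop.systemd1", "/org/freedesktop/systemd1")
manager = dbus.Interface(systemd, "org.freedesktop.systemd1.Manager")

# ListUnits() returns (name, description, load_state, active_state, ...) tuples.
for name, description, load_state, active_state, *_ in manager.ListUnits():
    if str(active_state) == "active" and str(name).endswith(".service"):
        print(f"{name}: {description}")
```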
Which tool is best for a task is not always the major factor. Other things like the development team's experience with the tools, the community support, and the planned deadlines sometimes make the decision a little harder. If the MS teams are already familiar with tweaking Linux distros to their needs, then it seems like a good choice.
> I'm starting to believe that developers choose the OS/tools they are used to (Linux in this case) versus the one best suited for the job (BSD)
They do, and it isn't always the wrong choice. If you know the OS inside out it's better to use that than a brand new one you don't know the sharp edges of yet.
I'm still baffled why people think this is a benefit. If you actually care about software freedom, lack of copyleft is a bad thing IMO. What's to stop $EvilCorp from creating a system that is completely locked down, can't be replaced, and is based on your technology? That's what UEFI is, by the way. It's only by Microsoft's blessing that you can install alternative operating systems on new Windows laptops. So please tell me more about why copyleft is a negative. If TianoCore were GPLv3, UEFI would actually be more bearable to work with (but coreboot/libreboot is clearly the way to go from here).
Actually, Microsoft has provided a patent grant[1][2][3] to anybody who implements a FAT driver for the purpose of booting UEFI. Which is actually a reason why FAT is still in the Linux kernel even though people argue that Microsoft will sue one day.
But IMO we should be all switching to CoreBoot (or LibreBoot). It's much less fucked up than UEFI, the only negative being that you have to flash it yourself (unless you buy a $1000 5-year-old Thinkpad).
They are still responsible for the non-free license on the TianoCore; Intel are having to negotiate with Microsoft to change the license, despite being the copyright holder.
Agreed re not using UEFI. I wouldn't touch CoreBoot though, LibreBoot seems more sane.
LibreBoot is CoreBoot with certain binary blobs removed. Sure, I agree that LibreBoot is better from a freedom perspective, I just assumed that you wouldn't know what LibreBoot is (more people have heard of CoreBoot).
Throughput. The BSD networking stack has been very carefully engineered over time. Linux is OK, but nowhere near as robust. FreeBSD is legendary for tolerating load and sustaining network throughput that, on equal machines, would choke a Linux server. Seen it, work(ed) with it. When we do something serious that requires heavy load and awesome throughput, we always lean on FreeBSD, never anything else. Everything else has always let us down.
I've seen this claim before, but I've also seen the counterclaim that Linux network stacks have since caught up and even surpassed the BSDs on throughput and latency.
Whether one or the other is truly better, though, might be irrelevant: either is such a monumental improvement over the Windows networking stack that there are bound to be large benefits.
Not that I really doubt you (the FreeBSD developers have a hard won and worthy reputation in this area!) but do you have any benchmarks that help back this claim?
I promise this isn't a question to try to make a point, I'm genuinely interested.
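For what it's worth, the crudest way to get a number at all is something like the single-stream loopback probe below (a sketch only, and nothing like a rigorous benchmark; real comparisons use iperf/netperf, many streams, identical hardware and tuned kernels, and actual NICs rather than loopback):

```python
# Trivial single-stream TCP throughput probe (illustrative only; Python 3.8+).
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 5201       # loopback only exercises the local stack
CHUNK = 64 * 1024
DURATION = 5                         # seconds

def sink():
    with socket.create_server((HOST, PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            while conn.recv(CHUNK):  # drain until the sender closes
                pass

threading.Thread(target=sink, daemon=True).start()
time.sleep(0.2)                      # let the listener come up

payload = b"\x00" * CHUNK
sent, deadline = 0, time.monotonic() + DURATION
with socket.create_connection((HOST, PORT)) as c:
    while time.monotonic() < deadline:
        c.sendall(payload)
        sent += len(payload)

print(f"~{sent * 8 / DURATION / 1e9:.2f} Gbit/s over loopback")
```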
Why are you trying to get network throughput on a VM? Of course it's going to be slow, but KVM probably has some passthrough optimisations for Linux guests.
Because that's what software defined networking is (mostly) about. In the majority of cases it's used as a way to setup a network plane for virtualized hosts.
I'm not a BSD guy, but I can still see it's a little unfair to compare the network throughput of Linux and *BSD as guest VMs on a Linux host running KVM. You aren't going to get good throughput in that case without hardware passthrough (and even then it's still bad). End of story.
I get your idea, but KVM on Linux is probably the most widely deployed hypervisor in the world, or will be soon. If it doesn't run well on it, then it won't be used. End of story.
I know that a bit more recently (~4 years ago) WhatsApp emphasized that FreeBSD's networking stack was part of what allowed them to scale so cheaply. I'm sure there was a good amount of benchmarking and profiling involved in that choice, using their production workloads.
Criddell is right about the history, and Xenix was literally not Unix either. At some point Microsoft agreed to not compete in the "Unix" operating system market, broadly defined.
An awful lot has changed since then, including Microsoft buying a System V Unix license when SCO was extorting everyone (including Sun, as I recall) into doing that. There is so much water under the bridge, I imagine Microsoft could do it if they wanted to. But why?
I don't see how it's a good idea from their point of view. An ubuntu respin is about as far as I'd ever expect them to go down that path, and even that seems like more commitment than they'd want to make.
Xenix was a Unix. It started out based on Version 7 Unix, then System III, then System V by the time MS finally sold it to SCO.
MS tends to keep a lot of fingers in a lot of pies, even in surprising areas. Given the number of kernels MS Research has released, it wouldn't be too much of a stretch to see MS release some sort of Unix at some point.
Wow, thanks. It's been 24 or 25 years since I used Xenix (dammit) and the funny part is, when I googled a couple of Unix family trees just now, it didn't even appear on some of them. But you're obviously right.
I think the big thing that would keep MSFT from rolling their own is fear of antitrust law, in the US and Europe. Less of a concern for them when it comes to Linux on phones or embedded, I believe.
No, that layer was famously WeirdIX - it was a literally unusable box-ticking exercise.
They later bought Interix, who created the foundations of Windows Services For Unix, which was a much better and actually usable Unixlike layer, running directly as an NT subsystem, at the same level as Win32.
(This is as distinct from Cygwin, which supplies a GNU layer on top of Win32.)
Actually, what Microsoft is doing could be great.
However, I don't understand why they're even using Jenkins for this project (https://github.com/Azure/sonic-build-tools).
I mean, I love Jenkins, but wouldn't it be at least a little better if they had used their own build tooling? Something like a tfs-linux-worker - I know that doesn't exist, but if they had built something of their own, it could have turned out well.
Using Jenkins feels like "we can't do that with our own stuff yet".
Because TFS is dying. Why port over tooling for a source management system that is on its deathbed? Even Microsoft themselves are moving over to Git for many products/teams and have added native Git support to both Visual Studio and VS Online.
PS: Microsoft still uses Perforce internally, too.
> Because TFS is dying. Why port over tooling for a source management system that is on its deathbed?
[citation needed]
Visual Studio Online (which is pretty much TFS in the cloud) is alive and well. And improvements made to VSO have been shipping regularly as updates to accompanying TFS on-prem.
Are you confusing TFS with Team Foundation Version Control (TFVC)? TFVC is also pretty popular as a Visual SourceSafe replacement and has been very stable for us, though the recent support of Git in Visual Studio and TFS has us considering it as an alternate workflow for some smaller projects. I think the support of Git is great, but knowing MS (and what they've said through their usual surrogates) I don't think TFVC is going anywhere anytime soon.
> Visual Studio Online (which is pretty much TFS in the cloud) is alive and well.
Visual Studio Online supports Git. So, no, it is not "TFS in the cloud." TFS and Visual Studio Online are very loosely coupled.
> Are you confusing TFS with Team Foundation Version Control (TFVC)?
I'm not confusing anything, I just picked one of Microsoft's many acronyms they use for it. Even Microsoft's own consultants call it "TFS" when talking about Visual Studio Team Services in Visual Studio Online. So if Microsoft's own consultants are "wrong" then I am in good company.
> I don't think TFVC is going anywhere anytime soon.
I do.
It doesn't work very well: it sends way WAY too many files up and down constantly, it has no concept of a pull request, offline mode sucks, branching/merging is expensive as all heck (inc. disk space, bandwidth, time, any metric), and even Microsoft's internal teams are utilising Git and Github.
I've used both on VS Online, no comparison, and Microsoft's own staff seem to agree. It is only a matter of "when" not "if" TFS will die and Git will take its place (although I suspect Perforce will survive on the Windows team within Microsoft).
Again, you're citing the source control system as the primary feature. TFS is an entire application lifecycle management suite, not just version control. You seem to (continue to) ignore this. Its closest analogue is probably the entire Atlassian family of products.
On TFVC:
> it sends way WAY too many files up and down constantly
It sends literally zero files anywhere until you interact with the server, as any sane server-based version control system would do. I don't know about your workflow, but I also don't know what you consider a reasonable amount of I/O to sync a workspace. You have been able to elect "Local" workspaces since around TFS 2012, and those can work completely disconnected if you choose.
> branching/merging is expensive as all heck
It's folder-based branching and can be done very quickly/cheaply if you don't store your entire company in source control. And how well does Git handle large files? Git is opinionated on branches and creates them cheaply/quickly; TFVC evolved from CVS-type systems where this was not the prevailing mindset, but again I don't know what you're considering "expensive." Maybe where you work?
> even Microsoft's internal teams are utilising Git and Github
Which isn't evidence of anything other than it's their current tool of choice. That has a lot less to do with future direction of their enterprise products than you're assuming here.
> Microsoft's own consultants
> Microsoft's internal teams
> the Windows team within Microsoft
Do you have insider info or are you just trying to sound like you do?
> "Most of our customers still use TFVC and we value this tremendously. Most people in Microsoft still use TFVC. Most new projects created today on VS Online choose TFVC."
Etc. It's quick and to the point, go read it. And again, has there been some sea-change at MS over the last year on source control? Quite possibly. But so far you've offered nothing but your opinion.
I think you're confused about what TFS is. I use TFS daily, but I don't use it for source control; I use Git for that. Source control isn't even the main thing TFS does. TFVC (the version control part) may be dying, but the rest of it seems pretty solid.
Linux has been the best friend of MS Windows for quite some time now. All the Linux users dual-boot into Windows each time they need some half-decent GUI for an app or whatever. Linux might be a good server host OS, but it failed spectacularly to conquer the desktop.
This is going back a bit, but from memory that was just for "advanced logging", which was enabled on a default install.
You could either not configure advanced logging, or you could use a remote SQL server.
Well, the "peanut gallery" [0] pushed back on kdbus back in the 4.1 merge window and -incredibly- it looks like the devs in charge of kdbus are redesigning it. (kdbus may or may not now be called bus1... it's not clear at this point.)
Last I checked, kdbus isn't even being shipped in Fedora latest anymore.
[0] AKA: The engineers in charge of QA and technical critique of major changes to the Linux kernel.
Thank god. dbus has no business being in the kernel. In fact, Linus said that the problem was just that dbus was coded poorly, and that it could probably perform well in userspace. But Greg wanted it in, and Linus trusts Greg...
I mean, honestly. Go look at some of Linus's usenet postings circa 1990-2005. This is exactly the kind of thing that he said should never, EVER, go into the kernel.
I know you're being downvoted, but I spent the better part of today dealing with a Linux Mint Debian Edition (Jessie) install that decided to install systemd in an update. The original install had no systemd; I had explicitly installed sysvinit in order to purge it fully. Today's apt-get dist-upgrade (and reboot to a newer kernel) brought me this unwelcome surprise.
Therefore, I made today the day I officially switched my last Debian system over to Devuan[0], which was easier than I imagined. No more systemd, or packages depending on it.
What's the connection between RHEL and Microsoft? Azure seems to have focussed on Ubuntu first, so I'm not surprised they lean closer to a Debian base.
Finally, M$ Linux comes true. So we must prepare for viruses, antiviruses, "defenders", and the whole infrastructure industry that lives on creating problems out of nothing and then heroically solving them.