De-Ballmerization and Microsoft is oozing delicious developer love. What the hell happened? It's like Skeletor became He-Man's best buddy all of a sudden and started helping everyone.
I'm thrilled. The MS tooling is really, really good, and the only thing stopping me from committing to the stack fully has been its lack of open-sourceness (vendor lock-in is still possible but getting less of an issue).
Edit: Pardon the fanboyism, but I've tried a set of feasible non-MS language options for my particular domain, and for me, my particular use case, and my coding style, F# in Visual Studio beats Scala, Clojure, Ruby, Python, Haskell, "browser technologies"...
Unfortunately, Ballmer's managerial approach is what killed it. He might have known that helping developers was the way to go, but he also thought that competition within the organization would lead to better performance. My understanding is that this idea failed in some of the most vicious and destructive ways possible. I hope that the very visible failure of his cut-throat management style and of setting up so many disincentives to cooperation in the organization is widely recognized in management-theory circles. It turns out making your company a terrible place to work, with everyone at each other's throats, isn't a good idea. Whodathunkit?
When it comes to propaganda from large and powerful entities, it usually helps to reverse the meaning to really understand the message.
"Developers,developers,developer" = Fuck developers,need more profits next quarter.
"We brought peace and democracy to country <X>" = We seriously fucked it up for years to come, and installed a brutal puppet dictator.
"We are not evil like those other mean companies. We will do good in the world" = The main products are the users, who's information it sells it its real users. Have colaborated and been in bed with corrupt governments all across the world.
The list goes on. It is a rather fun heuristic. And once you know about it, you'll start seeing it more often.
It's not only for organizations and powerful entities; I like to apply this logic in normal day-to-day communication with people too.
If someone has a very strong opinion about something and tries to "convert" everyone to it (say, vegetarianism, meditation, emacs, or whatever), it is usually because that person is very insecure about this trait and has made it part of their identity. There is usually no point discussing it with a person like that.
> When it comes to propaganda from large and powerful entities, it usually helps to reverse the meaning to really understand the message.
This works very well for advertisements.
e.g. "Our flights are very comfortable" -> Long haul flights are, in general, cramped and gruelling.
"Our broadband is fast" -> Broadband isn't fast enough.
The selling point of the product addresses a perceived deficiency in the product category. It's safe to assume that the product is, at best, slightly less bad at it than the competitors.
Except I don't think this is true of Ballmer when it came to "developers, developers, developers". I doubt Microsoft has willingly done anything to harm developers that use its products, certainly not for "profits next quarter". Sure, they deprecated some technologies (like Silverlight), but that was in response to market realities (like the iPhone and HTML5), and even then they are supported for years. All the various complaints lodged against Microsoft, such as Spolsky's "fire and motion", are more rationally explained by the vagaries of how technologies progress organically in large companies and in the industry in general.
The culture he instilled had been going on long enough for him to realize that maybe it was time to reverse or change the policy. If he had cared enough, he would have been looking at it and doing something about it.
Now as you say, would that have made the difference? It is hard to say...
The "new" Microsoft is supposed to be "cloud first" and "mobile first". Yet Azure has among the most lowest uptimes and Microsoft is dead-last in mobile. So you may be on to something there!
Ballmer set Microsoft on this course years ago. This is his legacy. Billions in the bank probably make up for lack of credit from the peanut gallery who think radical shifts in direction happen overnight.
He laid the foundation before he and Gates started selling their stock. The new Microsoft is beholden to Wall Street. It can't do an Xbox 360 or a Windows Phone or buy a Nokia. Ten years of open source projects at Microsoft are why analysts aren't calling for scalps. Ballmer made the next guy's job easier.
Depends on how you look at it. Microsoft has been good to developers since the Gates era. They always understood the importance of developers for their OS, as long as you paid for the privilege.
The biggest difference now is all the stuff they give out for free.
> The biggest difference now is all the stuff they give out for free.
No, it doesn't actually matter, it's not a financial issue. Open sourcing their libs is good because it gives developers access to the sources of libraries which they use daily. This in turn will make debugging much easier and will also help with understanding what is actually happening when you call some function.
I left MS-land a decade ago, but when I worked with MS tools I'd have gladly paid serious money for access to the source code (that was in C++; I hear debugging is easier in C#), but that wasn't an option. Now they offer access to the source code, which is the important part, and for free, which is kind of nice but not the important part.
In this instance: desktop, graphics. This is from a hobby point of view, though! Which means I have very little time to achieve anything and occasionally don't touch the code in weeks. I've come to write code that is a) thrifty b) readable c) works on compile d) leverages the type system for composable code that has a specific "correct" way to do things.
So I can return to my code, which I've completely forgotten how it works, see that it compiles, read it through, and continue working. Notably: I do not need to care about the tooling because it just works. I don't get a shiteload of weird exceptions when I run my code (looking at you, Clojure - yes, I'm a noob), and since the Pythonic and mutable nature of F# allows me to write code in the way I enjoy the most, it fits like a glove. There are a few oddities occasionally and some bits are a bit more verbose than in other languages (Clojure's collections spoil any other language), but the overall experience is that I can focus only on my code. For me, it's a pain-free computational substrate, and free as in beer now that the Visual Studio Community edition is out.
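To give a flavor of what I mean by leaning on the type system for composable code, here's a minimal, hypothetical F# sketch (the types and names are invented for illustration, not taken from any real project):

    // Domain modeled as a discriminated union: the cases spell out exactly
    // which shapes exist, so illegal states don't compile.
    type Shape =
        | Circle of radius: float
        | Rectangle of width: float * height: float

    // The compiler warns if a new case is ever added and not handled here.
    let area shape =
        match shape with
        | Circle r -> System.Math.PI * r * r
        | Rectangle (w, h) -> w * h

    // Composition via the pipeline operator: small functions snapped together.
    let totalArea shapes =
        shapes
        |> List.map area
        |> List.sum

    [ Circle 1.0; Rectangle (2.0, 3.0) ]
    |> totalArea
    |> printfn "Total area: %f"

Coming back to something like this after weeks away, the types and the exhaustive match tell me most of what I need to know before I even run it.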
My progress is really slow but always worthwhile.
If you are interested, I found Jon Harrop's "Visual F# 2010 for Technical Computing" a really good practical introduction, Don Syme et al.'s "Expert F#" a good reference book on writing general stuff in the .Net ecosystem with F#, Petricek's "Real-World Functional Programming" a very good introduction to various application patterns, and Sestoft's "Programming Language Concepts" a great book and online reference on writing interpreters and compilers (http://www.itu.dk/people/sestoft/plc/).
I'd recommend the recently released F# Deep Dives as well - it's less about learning the language than about seeing how a variety of different real-world problems can be solved in F#. A few hours reading that gave me more inspiration than all the blog posts solving toy problems I've read in the last year.
Your opinions of Clojure seem to mirror mine. I absolutely love the simplicity of the syntax, and after having learned a bit of Clojure and then coming to F#, I realized Clojure's syntax for describing data collections had forever ruined me. EDN is fantastic.
Unfortunately I have to love Clojure from afar. Partly because I only touch it for Riemann so I'm always rusty, and partly because of the JVM. I'm really hoping that all this open source .NET business will bring more attention to ClojureCLR, though. ClojureScript is a wonderful thing, and I know there are some F# and CIL->JS transpilers, but LightTable->ClojureScript->Chrome is a wonderful combination. A similar story, complete with insta-repl, with F# would be crazy cool.
You could also check out Kotlin. It's not as strongly functional as F# is but is still pretty nice, sort of a better C# for the JVM. For desktop stuff, JavaFX works OK.
That said, I'm glad MSFT is now competing in the open source/x-platform desktop app space. The Java team can use the extra competition.
I would be curious to see the effects of completely open-sourcing Windows. Businesses would continue to use it, because it's Microsoft and they want enterprise support. I think it would get even more love than it already does from the development community. Piracy of Windows is already rampant, so they're not really in a worse position from that (plus I think that most people who can pay for Windows do so already). Foreign governments who are concerned about NSA backdoors would have their fears allayed. Is there any way it could seriously damage their business model?
Who is going to support 20 million lines of code other than Microsoft employees, as amateurs bungle their way through thirty years of patches for backward compatibility wrapped around the stupid and clever things OEMs and third-party developers did to get their code to work?
It's not like Linux, because there isn't a team of experienced volunteers already working on the code base, and the Windows code base contains a lot of functionality, kept for business reasons, that Torvalds has been in a position to not just say "no" to but to add "and go fuck yourself."
The answer is that nobody will support it as-is... they would just use the source to make or improve other things. Because this is what would be in the interests of the users. Even if it meant Windows got buried.
But this is strictly imaginary because I doubt that 20 million lines of code are all under the unilateral control of Microsoft.
I think open source is great, and I even admire Stallman because of, not despite, his single-mindedness. I also don't think Windows being closed source is problematic. Choice is good, and Windows is able to solve a particular set of problems efficiently by virtue of Microsoft's business model.
That model fit my needs and interests for many years. Less so now because my needs and interests have changed to where the tradeoffs Linux and Open Source impose are outweighed by their advantages. But other people have other needs and there are already enough first world problem zealots in the world.
>Everything you can do with closed source software you can also do with open, and there are things you can't do with closed source.
This is just not true. Show me a completely open source smartphone... the thing is, you can't; the baseband and radio will always be closed source. There are markets where having knowledge that your competition doesn't is vital to success, which is why some things will always be closed source.
A closed source project can license closed source technology and save development costs while avoiding putting 1.0 code for an edge case into their core product. The exchange of money helps keep the interest in support, maintenance, and extension by those who first developed the technology.
Open source projects can also release v1.0 with bugs, it's not like there is some Open Source Police stopping you from doing that. You can also pay for support of open source software, and that benefits everyone and not just the company that happened to have money to invest in it.
There's still money in licensing Fortran libraries that have been around for decades. They are fast and good and cheaper than achieving the same levels of fast and good from scratch. There's nothing intrinsically wrong with using some gem written over a weekend of craft beer. It just may not make sense to bet a business on it, even when you can read the source code.
StackOverflow is an example of why the choice is beneficial. The source code being proprietary is irrelevant because the content is Creative Commons.
No one "supports" it. People will improve the bits that bother them. In the aggregate, these activities amount to more than you could ever pay to have done.
The beauty of open source model is it gives the desperately motivated the means to fix their problems in a way that benefits everyone.
It's all in isolation unless the trunk takes pull requests. Forking doesn't change that either. There's still got to be someone evaluating changes in light of their impact on 20 million lines of code.
You're right, it would be hard to contribute to such a codebase. I think the benefits of open sourcing Windows aren't really "developers will start contributing to it" as much as "developers will start reading the source code". Microsoft would continue supporting it.
I'm pretty sure the likes of Dell, Lenovo, Asus, NVidia and AMD would simply fork Windows for their own usage, and stop paying licensing... that would mean huge gaps in Microsoft's income... bad idea.
Opening .Net up, I feel, is for a couple of reasons. First, keeping developer mindshare. Second, keeping Azure a first-class platform for deploying .Net code. Third, reducing costs within the Azure division that are tied to Windows licensing. The fact that it's probably the right thing to do is probably farther down the list.
Sometimes I forget that they did the Shared Source release several years ago. However, once in a while I see it mentioned in an academic paper.
Point being, the most common reason companies give for not releasing source for a project is that they have code from contractors that they aren't able to release. That shouldn't be the case with Windows.
> Point being, the most common reason companies give for not releasing source for a project is that they have code from contractors that they aren't able to release. That shouldn't be the case with Windows.
There are probably some parts that won't make it like Pinball [old new thing] but it shouldn't be significant at all.
>> Hey everybody asking that the source code be released: The source code was licensed from another company. If you want the source code, you have to go ask them.
[old new thing] blogs.msdn.com/b/oldnewthing/archive/2012/12/18/10378851.aspx
> Piracy of Windows is already rampant, so they're not really in a worse position from that (plus I think that most people who can pay for Windows do so already).
I seriously doubt piracy of Windows is a huge concern in most markets; MS will still be making a lot of money from Dell, HP, Lenovo, et al selling their stuff.
Nadella has worked on cloud computing at Microsoft. Microsoft is betting big time on the cloud. Although MS never publicly admits it, the OS business for the end user is slowly fading away as tablets and smartphones take over and the browser becomes the new OS.
You can see this by looking at their Office 365 strategy (iOS and Android releases, the online version push), making Windows 10 a free upgrade for Windows 8 users, removing licensing fees for Windows phones, making Visual Studio available for free, and making ASP.NET vNext run on Linux.
Instead, MS sees the growth in Azure. Whether they use Linux or Windows servers, MS can make money off of the enterprise/startup segment. Whether or not it is a sustainable strategy in the long run, we will see. But in the short term, at least for the "consumer" market, MS is giving up on Windows and it is moving its focus to Azure and the software that runs on top of Azure (Office, Sharepoint, Dev Services, XBox, Bing, etc).
The marketing strategy behind this is also interesting. Things are still labeled "Windows" to give the illusion that the "Windows" the end user gets on their new laptop is somehow "related" to the Windows in the cloud (Azure). An interesting strategy to keep the "Windows" brand alive a little longer at least.
I'm not sure Windows (and transitively Microsoft) is a positive brand, or that its continuity is an illusion worth keeping. It reminds me of antivirus, slow boots, periodic reinstalls to make XP quick again, BSODs and many other things that drove me away from it years ago. Anyway, if one has never tried anything else, never leaving home could be reassuring, and it seems that Win7 wasn't so bad anymore (but I still get calls from friends with infested Windows machines).
It's certainly not "official", but we already have Wine and Mono. I understand that they don't promise write once, run anywhere, but the support is pretty good. For a while it was one of the only ways to get Netflix working on Linux-based platforms.
.Net actually promises a bit more 'write once, run anywhere' than Java. Java stuff is not, and does not seek to be, binary compatible across platforms; .Net is. Once .Net is all open, I'd really love to see browser makers throw the CLR into the browser. Overnight it would release us from the tyranny of js and let us use basically any language we ever want.
As a practical matter, I doubt that they can. There's probably quite a lot of third-party code intertwined with Windows itself (especially in the device drivers that ship with it) that's subject to strict redistribution and publication restrictions.
Some Windows drivers run in user space, but many run in kernel space (https://msdn.microsoft.com/en-us/library/windows/hardware/ff... shows Windows has kernel drivers; the some/many distinction is my guesstimate, based on the fact that anything that needs to process data fast benefits from being a kernel driver)
As someone who has done Windows driver development, I can assure you drivers run at ring0, kernel context. Not in userland (except maybe some USB device and printer drivers).
The NT kernel is not a microkernel. It all runs in ring 0, including device drivers: I/O, graphics, HID, file systems, etc. It's modular, but there is no protection between the various parts, which all run in a single kernel process, with a single address space.
The NT kernel is sometimes called a _hybrid_ microkernel, but this refers only to the fact that the internal modules are logically decoupled (with message passing between them, for example). Also, graphics actually used to run in userland until they moved it into the kernel for Windows 2000.
Since NT 6.0 (Vista), graphics drivers run primarily in user mode. There is still a kernel mode component, the miniport driver, but it is kept small to reduce the likelihood of crashes.
There are some other drivers that also run in user mode, but graphics drivers were the really big change. Most bluescreens in Windows XP were caused by graphics drivers.
Since Vista, Windows has been able to recover from a failure in the graphics driver. The screen blinks for a second, and then a balloon notification comes up, saying that the graphics driver has been restarted.
NT is a hybrid kernel for sure. In a monolithic kernel everything would be running in ring 0.
NT is a hybrid kernel because some drivers run in ring 0, others (typically peripheral drivers) run outside ring 0. Some drivers (specifically graphics card drivers) run split in kernel mode and user mode: A small part runs in kernel mode, while the complex part runs in user mode.
While you are correct that the graphics driver was moved to kernel mode for Windows 2000/XP, it was actually moved halfway back with Vista. This was in fact the reason graphics drivers sucked at the beginning for Vista: MS gave the vendors only 4 months between the last change to the graphics driver model and Vista's release.
It's still running pretty much everything in ring 0 that a traditional monolithic kernel would. All the "complex parts" run in kernel mode.
The new WDDM driver model runs a user-mode driver, but it's still backed by a set of kernel-mode drivers [1], including the "display miniport" driver.
As kernels go, it's pretty monolithic. What makes it more like a microkernel is that many of the internal APIs use a message-passing model rather than direct calls.
On Linux the OpenGL implementation of the graphics driver (which is by far the most complex part) runs in user space, as do drivers for printers and a lot of other USB devices. However that doesn't make Linux a "hybrid kernel", and neither does what you said make NT one.
I don't think so; the window manager is also in userland. It's not the most documented thing, and it depends on what you count as "the things" of Windows; there are many. :)
For one, it's big. When I was working on part of the UIPlat team for Windows, my private enlistment just included build infra and maybe 5% of the sources and associated tools/tests and it still took up 60+ GB (100+ GB with binaries). I don't think you can just throw it on Github.
Two, compatibility. Even if you fix a bug, it usually needs to be versioned in some way to avoid breaking apps that rely on the bug. OS code, once shipped, becomes a feature. Fixes require a fair amount of due diligence besides just identifying the problem and the correct fix. The historical context of how the issue was introduced and how it has migrated to other branches needs to be understood by grepping through the source history graph. Suites of build verification tests, regression tests, integration tests, and unit tests need to pass. Although Microsoft has extensive infrastructure for buddy builds and integration staging, external parties won't. So at best they could suggest a fix to Microsoft to complete the due diligence on. But usually suggesting a fix is easy and vetting it is the grueling part.
Three, private forks of Windows seem like maintenance and compatibility nightmares. Fragmentation would likely disrupt windows update's ability to apply patches and keep the system secure. Windows' ability to run across so many devices is kept sane by keeping a relatively small, consistent trusted computing base. Merging together forked Windows repos is a disaster. The XBOX team forked Windows 8 for a few months and made some tweaks to get XBOX One out the door, and then some folks had the nasty job of having to try and consolidate the codebases against moving targets. No one has enough context or time to perform these kinds of merges individually, nor would ever volunteer to, so they need to be massive coordinated efforts between teams.
Four, variants. You're not going to want to deal with all the possible permutations of Windows SKUs, processor architectures (ARM/X86/AMD64/IA64), build flavors, etc.
Five, it's a mess. There's plenty of archaic cruft, possible trade secrets, and lurking vulnerabilities. Open sourcing everything would be like opening Pandora's Box. It's too risky. Curating out some portion of this to open-source would be a massive undertaking and ongoing maintenance cost.
I think it makes more sense to take more mature modules, like .NET core or Roslyn, and open source them as separate entities. Modules are more reusable, and community-driven changes are easier to curate when they can be isolated. Windows has some components which it probably makes sense to open-source, but the entity as a whole is too unwieldy in its current form. .NET is designed more modularly.
TL;DR - Windows is an unwieldy behemoth and would be difficult to open source.
I don't see why everything needs to be open source. People who simply don't like Microsoft will not use it just because it's open. Quite the opposite I think - they'd probably immediately fork it and provide an alternate download in order to hurt Microsoft.
Are foreign governments a big market that Microsoft has lost a lot of customers from? I don't think so. The vast majority of the potential market for Windows is already running Windows, so I don't see what they'd really have to gain. The small but loud minority of open-source zealots won't be happy until every single Microsoft product is open.
>I don't see why everything needs to be open source.
I think this goes back to one of the core tenets of free and/or open source software: the user should be able to know, control, and modify exactly what their machine is doing.
If the source is not available you're just running a black box and praying that it will do what the person who gave it to you claims it will do.
"If the source is not available you're just running a black box and praying that it will do what the person who gave it to you claims it will do."
I'm sorry to point out to you but a large set of computing devices are black boxes and it does not hurt their marketability. Cars, phones (the OS running the hardware, not Android), ... the pilot in the passenger plane does not want to hack his plane's OS, nor, most likely, does the mechanics crew.
Practically, lots of people just want to get their job/art project/email/browsing done and do not care about the blackness of the box just as long as it works.
I think one of the reasons RMS got going was that he was irritated that the software from vendors did not work.
The software markets have advanced somewhat. Nowadays, for any product with a sizeable market it is fairly safe to presume the software works, or at least won't fail miserably. And if it does, it's not just the lone consumer but thousands or millions of others who are pissed off as well. Yeah, crap gets released though.
The reality of your software being a black box is not a matter of marketability. It is a matter of what you are able to know and control about what your personal machine is actually doing. Obviously there are manifest examples of people running all manner of black boxes, but as someone else pointed out, they are at the complete mercy of their black box vendor (unless someone wants to reverse engineer the box). If your vendor folds up and a critical vulnerability is exposed or is being openly exploited, well, congratulations on your new black brick.
On the other hand, if you have complete visibility into what your machine is doing and have the ability to modify it at will you can avoid all manner of failure scenarios that are essentially unrecoverable in the black box scenario.
We know people will run black boxes, drink poisoned sugar drinks, support genocidal megalomaniacs, torture others for limited monetary gain and/or endorphin rushes, etc. None of those realities imply that others should follow in the same footsteps, especially when there are workable alternatives that don't suffer from the same permanent failure scenarios.
I think you are a bit too idealistic about the hacking capabilities and willingness of people in general. It is convenient that the correct functioning of my car or my phone is the responsibility of someone specific other than me.
Civilization is characterized by specialization of people. I cherish the notion that hardware and software should be based on open standards. For day-to-day work, I just want my gear to work. If it fails, I certainly do not have the time to dig into the software layer, because I have work, children, housekeeping duties, and a bunch of art projects and higher-level concepts I want to focus on.
"if you have complete visibility into what your machine is doing and have the ability to modify it at will you can avoid all manner of failure scenarios that are essentially unrecoverable in the black box scenario."
You presume all software is trivially simple. I can tell you, it is not. A large category of software requires years of specialization to actually grok what is happening.
As an extreme example, if I owned a plane I would not like to hack its software under any circumstance unless I were an aeronautics professional, and probably not even then.
> As an extreme example, if I owned a plane I would not like to hack its software under any circumstance unless I were an aeronautics professional, and probably not even then.
But would you download an alternative distribution made by a group of enthusiastic aeronautics professionals that has been used by tons of other people with no problems? Think Cyanogenmod.
If I was a manufacturer of planes I would probably figure out if it could be used. As a private individual - no way.
In this plane software example if I was a plane manufacturing org I could dedicate people to integrate it and test it.
There is a difference between an organization dedicated to making a product and a group of hobbyists coming together to scratch their itch. There is a scale, a threshold, above which you need big-org organization and sharing of responsibilities.
Small expert teams are fantastic for the sort of projects that can be done by small expert teams. For larger things there needs to be a bit more infrastructure and organization, or at least continuity of many, many years.
There is a threshold in software complexity after which one really needs lots of organized testing and fixing.
As a private user, if I fail the firmware update on my shiny plane, it's all on me. Unless there was some weird insurance to cover the costs.
Considering the quality of CyanogenMod's "stable" releases, never. Hobbyist professionals are still hobbyists and not liable for the damage their code may do.
> I'm sorry to point out to you but a large set of computing devices are black boxes and it does not hurt their marketability.
Free software should be looked on from the point of ethics, and not from financial gain. Proprietary software is immoral, and that is the reason you shouldn't be building it.
It's one thing to laud the common benefits of FOSS, or the ethical gains of charity. These things resonate with me. But I fundamentally disagree with - and as a proprietary software developer, am alienated by - your statement.
Is that actually what you mean? My day job - making games - is immoral? A daily dose of evil? If so, this is the kind of attitude that puts me off contributing to anything even remotely related to "free software".
Perhaps you're fine with that, but I feel it leaves us all poorer.
If you are selling cars with the hoods welded shut and all systems permeated with DRM, are you making the world better, because, hey, cars are good? Or are you in effect making your customers helpless, teaching them that they are not, and should not even want to be, in control of their own cars?
DRM and closed source software are not the same thing. Closed source software is commonly used to manipulate open data. Free software facilitates the openness of data and is absolutely essential but it does not make in any way closed source software evil.
> True, but in this situation they have the same effect which I wanted to highlight, i.e. loss of control on the part of the owner/user.
Analogies are a bit awkward. With cars, DRM would be like each car manufacturer having their own road, barred to other car manufacturers. If the welded-shut bonnet bothers you, just switch cars. This is what I meant with open data - it's what one does with the product that matters, and whether one can practically choose another.
As to the "evilness" of the situation, I cannot think I could work professionally in software as a field - which I love - if there were no established large companies with closed source products. If my analysis is flawed I would value pointers why.
As my employer is a closed source software vendor, to me the economics of the situation look as follows: the vendor provides software, the end user uses it to add value to their work. The end user reports bugs, the vendor fixes the bugs. There is an efficient responsibility interface to the whole thing. And the area is well competed in; if vendor A screws up, customers will move to vendor B, C, or D...
With an established product there is an established revenue stream which enables stable income for all employees.
The way I choose to work in my profession, I will take a stable revenue stream with specific work hours over some 24/7 freelancer role fixing bugs here and there. This simply would not be compatible with a family; with small kids I'm barely able to cope with the stresses at home and work as they are. If I had to worry about the next mortgage payment I would probably go nuts, and that is no exaggeration.
Open source simply does not work that well as a revenue model in general. It works as an enabler for all sorts of practical interactions, and is absolutely essential, but people need to eat and feed their families. Open source provides value, but few people are able to capture that value. That's why there are non-profits to organize critical work in this area.
If the product of my company were open sourced, several enterprising individuals would simply copy the product, relabel it, and sell it. Thus eating into the market share, and thus probably leaving me to find another job. Thus, to me, closed source software provides housing, food and security to me and my family.
> If you are selling cars with the hoods welded shut and all systems permeated with DRM, are you making the world better, because, hey, cars are good?
My cellphone might as well be welded shut - because of system-on-a-chip design. I can't replace the GPU, the CPU, the memory, or any number of things I've repeatedly replaced on my desktops, given my current lack of soldering skills or equipment. Is SoC design also immoral? Even as it enables access to the internet for an ever-growing number of people, something some have been calling a human right?
But let us return to your analogy instead of playing analogy ping pong.
> If you are selling cars with the hoods welded shut and all systems permeated with DRM
Assuming the price remains around the same, I'd simply not buy that car because it's stupidly designed and going to be a pain to maintain. I'd also not buy a car where I had to replace the engine block to replace the front headlights, no matter what percentage of its software is GPLed, or how many of its parts have 3D printer schematics available for me to replace them in my own basement. Neither choice is based on ethics.
I also think it's good we have government laws forcing car companies to make their maintenance documentation etc. available to 3rd party mechanics. Cars are expensive enough to maintain that society is well served by competition. And while proprietary systems aren't inherently immoral, unexpected predatory pricing based on vendor lock-in certainly can be a problem. I've seen the short end of that stick enough times to know it sucks. I think it's worth limiting what a car company can do to help avoid the circumstances that can even lead to that, even if some of the things we're prohibiting them from doing were perfectly ethical for them to do on their own.
But you'll be hard pressed to convince me it's a problem for your $3 copy of Canabalt. It doesn't need an oil change. With several games, I complain if 3rd parties figure out how it works - when multiplayer suddenly becomes plagued with aimhacks, wallhacks, maphacks, and other unfair competition.
If I owned a car, it might as well have been welded shut – I am not a mechanic. Does this fact make it ethically OK for someone else to sell a “closed” car to me?
> […] I'd simply not buy that car […]
That was not the question. The question was if would be ethically OK to sell that car, not if you (or anyone else) would buy it or not.
> But you'll be hard pressed to convince me it's a problem for your $3 copy of Canabalt.
It also wouldn’t really be a large issue with a simple enough car or car-like conveyance, like maybe a bicycle, or a Segway. Does this make it OK?
> It doesn't need an oil change.
Software needs updating. Static software is dead code.
> If I owned a car, it might as well have been welded shut – I am not a mechanic. Does this fact make it ethically OK for someone else to sell a “closed” car to me?
That's not the deciding factor. I've no particular ethical qualm with it if you know full well what you're getting into and choose it.
>> […] I'd simply not buy that car […]
> That was not the question. The question was if would be ethically OK to sell that car, not if you (or anyone else) would buy it or not.
"Neither choice is based on ethics." The ethics of the sale would depend on the other particulars, but if we assume those other particulars were all ethical, then yes, I'd say it's ethically OK to sell you that car.
Let's take a few concrete examples where a car has indeed been welded shut.
- I'm selling you the car for you to resell at a profit as scrap metal.
- I'm selling a clunker I tried to "repair". A few poorly followed DIY guides later and... well, I made some mistakes.
- In a fit of mental illness, or for the sake of a harmless prank, I welded my car shut intentionally.
In which circumstances should I feel guilt trying to sell it to you? To me, it's the ones where I'm trying to mislead you. That could be any of the above (I lie and say it's not welded, lie about the scrap tonnage, I try to convince you welded cars are in vogue, etc.) or none of the above (I'm upfront about the facts and ensure you know what you're getting into.)
>> But you'll be hard pressed to convince me it's a problem for your $3 copy of Canabalt.
> It also wouldn’t really be a large issue with a simple enough car or car-like conveyance, like maybe a bicycle, or a Segway. Does this make it OK?
I didn't say you'll be hard pressed to convince me it's a "small enough" issue or problem to fly under some ethical radar or waterline. I'm saying you'll be hard pressed to convince me it's a problem, period.
>> It doesn't need an oil change.
> Software needs updating. Static software is dead code.
I disagree with your assertions. You might want to update it, but there's a big difference between want and need. Plenty of old DOS and cartridge games are still quite playable. And for the purposes of archival and preservation of the commons, I think a focus on emulation provides more bang for the buck than trying to modernize every abandoned application - and, it should be noted, doesn't require updating the software.
EDIT: On the subject of unanswered questions, I'll repose my original one to you since the original poster isn't answering.
Do you actually think my day job - making (proprietary) games - is immoral? A daily dose of evil?
> I've no particular ethical qualm with it if you know full well what you're getting into and choose it. […] I'm upfront about the facts and ensure you know what you're getting into.
One could make the same argument about any laissez-faire economic proposal, like “Do I have the right to sell myself into slavery?” Some things are prohibited, even though people are, in theory, well aware of what they are doing. One could argue that this is one of those things that ought to be prohibited.
> You might want to update it, but there's a big difference between want and need.
I would argue that it is my right to update the software to changing circumstances. Otherwise, it’s like a solid-block car with DRM; unchangeable, and (once support is dropped) increasingly unusable due to changes in its environment. These characteristics (pay for a limited time of support, after which it becomes practically unusable) are more akin to renting than buying. If I buy a thing, I would expect it to be my right to modify it according to my circumstances for all time, since I now own it. For software, I can’t practically do so without the source code.
> Do you actually think my day job - making (proprietary) games - is immoral? A daily dose of evil?
Words like “evil” have unreasonable amounts of emotional attachments, so since you are pressing me, I feel like you are setting a rhetorical trap, so I will refuse to use that word. I will say, though, that you are quite possibly making the world slightly worse instead of better. The fact that people are taught to be helpless and powerless is a bad thing. Whether you are, on the whole, doing a bad thing depends on whether any positive impact of your game (your game, mind you, as compared to any possible replacement game) is large enough to offset this. This could possibly be true, and perhaps not – I do not feel competent to judge this.
> Words like “evil” have unreasonable amounts of emotional attachments, so since you are pressing me, I feel like you are setting a rhetorical trap, so I will refuse to use that word.
Please note this thread started with some rather emotionally attached wordage applied broadly. And with me very explicitly responding to that wordage. The entire basis of my question centered around that phrasing. It's what I responded to. If you're not interested in handling it... what exactly are you trying to help explain from davorb's post? Are you sure that what you're explaining was in davorb's post?
I think the difference between you and me is a simple one of opinion. Correct me if I'm wrong: You see providing ease of modification as a rights-driven moral mandate. I see providing ease of modification as a virtue worth encouraging - sometimes through law - especially when the good to society outweighs the burden or harm to the authors. But I do not see it as a moral mandate, because I do not see 'ease of modification of another's work' to be a right. I do not see it as a right because I see imposing that much burden on the author to be a clear violation of their rights. I have no right to demand a novelist's draft notes or plotline sketches, the LaTeX documents that generated their PDFs, none of it.
Where rights collide, one must strike a careful balance. Let us suppose that one has a right to modification: I think the current length of copyright is unreasonably long, unreasonably in favor of the author, harming the commons and that 'right to modification'. But I agree with the original principle of copyright - to provide an author a means to support themselves via temporary monopoly of the fruits of their labor - and feel that demanding they make it easy to subvert that monopoly from the very get go, to be unreasonably against the author. And although I'm fine with e.g. legalizing jail-breaking a phone, I feel I've no right to demand it be easy.
My ideal world involves much shorter copyright durations (somewhere between 5-20 years max?), better enforced (and perhaps simply by being more reasonable, it will be more respected?), with a richer commons at the end. You could even try to make these rights balance against each other: e.g. for software only providing the protections of copyright only to those who provide their source code.
------
>> I've no particular ethical qualm with it if you know full well what you're getting into and choose it.
> One could make the same argument about any laissez-faire economic proposal
I'm not arguing about "any" laissez-faire economic proposal, I'm arguing about a car I welded shut.
I'm also not arguing that there aren't other particulars that must be considered.
A mugger clearly explains to you so you know what you're getting into and offers you a choice: Give him all your money in exchange for not shooting you. I'm not saying that's ethical!
> like “Do I have the right to sell myself into slavery?” Some things are prohibited
Tell that to a court-martial when you change your mind about enlisting after a war starts. I have many ethical concerns about military recruiting and some of the incentive structures around enlistment, but no particular problem with allowing enlistment. I absolutely cannot fathom those who would want to do this, however, as to me, they very much are selling themselves into slavery - a potentially very dangerous slavery - a slavery which may very well last for the rest of their lives.
> These characteristics (pay for a limited time of support, after which it becomes practically unusable) are more akin to renting than buying. If I buy a thing, I would expect it to be my right to modify it according to my circumstances for all time, since I now own it. For software, I can’t practically do so without the source code.
Do you consider free as in beer - but proprietary - software to be immoral? You didn't buy it. Do you consider free as in beer - but proprietary, and subscription requiring - software to be immoral? You're clearly renting it. Do you consider renting out to people to be immoral?
I can sympathize a little with the "I thought I was buying it but all I got was renting a license" argument. Enough I could potentially agree with, say, an argument that DRMed music is immoral. I'm of the opinion that invasive and negligently DRMed music is immoral (see: Sony rootkits.) But I certainly don't assume I'm buying a DRM-free game complete with source access when I buy a game off Steam - nor I think do most gamers. And being okay with subscription payment model, but not okay with a one time fee payment model, requires some level of cognitive dissonance I simply don't have. I also have no fundamental moral issue with rental, software or otherwise.
> Words like “evil” have unreasonable amounts of emotional attachments, so since you are pressing me, I feel like you are setting a rhetorical trap, so I will refuse to use that word. I will say, though, that you are quite possibly making the world slightly worse instead of better.
You're quite suspicious of me. But to your credit, you're at least not jumping to conclusions.
> The fact that people are taught to be helpless and powerless is a bad thing.
Given just how rampant piracy and cracking, or hacking and modding is, it's a hard sell to me to say that proprietary software is actually teaching this. In fact, I'd argue just the opposite - it's clear any and all barriers proprietary software devs try to come up with to protect their profits are overcome by the users with time. And by "with time" I mean a possibly negative amount of time, where cracked versions of the game release before the non-cracked version does. Vibrant and awesome modding societies pop up around proprietary games - including those that were intentionally hostile to modding (e.g. in a bid to make lives more difficult for cheaters.) Some good, some bad.
And while I'd generally agree with your statement, I admit - I wouldn't consider it a bad thing if pirates felt a bit more helpless and powerless when it comes to intentionally and willfully draining a dev's resources (server bandwidth, support resources, etc.) under false pretext ("I totally bought your game!") while giving nothing in return. Because that's simply not a fair or equitable exchange.
> Whether you are, on the whole, doing a bad thing depends on whether any positive impact of your game (your game, mind you, as compared to any possible replacement game) is large enough to offset this. This could possibly be true, and perhaps not – I do not feel competent to judge this.
And yet I feel judged. To be fair, I asked to hear it, so thank you for responding. But: surely it's only fair to compare the positive impact of my game against the positive impact of the activity that would have replaced it, not against every possible activity? Otherwise, even if I've done zero harm, I'm left competing with "solving world hunger, cancer, and heart disease" all at once. Even if we limit it to games, there was that protein folding puzzle game, wasn't there?
> what exactly are you trying to help explain from davorb's post? Are you sure that what you're explaining was in davorb's post?
One can never be sure about what someone else is thinking, but davorb did link to the FSF page “Why Open Source misses the point of Free Software” (https://www.gnu.org/philosophy/open-source-misses-the-point....), and since I am fairly familiar with the FSF’s world view, I thought I could clarify, since you seemed to have actual trouble even conceiving any other point of view than the contrasting Open Source one. I suggest that you simply read the linked page, it explains it more explicitly and at more length.
Regarding copyright in general, I am personally very suspicious of the concept (as explained by many others elsewhere). But the FSF’s view is more narrow, focused on software, and your work is producing software, which makes this relevant. And the FSF’s argument why, for software, the right of modification (with implied source code) is important, is, I think, a compelling one.
(You can’t argue that selling yourself into slavery is OK since enlisting oneself is allowed. If that was not the argument you were trying to make, I don’t understand it.)
> Do you consider free as in beer - but proprietary - software to be immoral?
I would have to say yes. It is offering something nice which contains a hidden trap.
> Do you consider free as in beer - but proprietary, and subscription requiring - software to be immoral?
I don’t think so. What you are paying for is not the software, but the service, of which the software is a tool the provider uses to provide the service. Of course, this has other dangers (privacy and dependency), but it is not, I think, necessarily inherently a bad thing.
> I can sympathize a little with the "I thought I was buying it but all I got was renting a license" argument.
It also dilutes the very concept of “owning”. When the Playstation was remotely downgraded by Sony, I saw the argument made that of course Sony had a right to do that to everybody’s Playstation, since it was, and I quote, “their [i.e. Sony’s] console”. People actually had the idea that, even though people had paid money for a console and owned it, it was still, and forevermore, within the domain of Sony, and that Sony therefore had the right to do whatever they wanted to it. This is a dangerous dilution of the concept of ownership.
> And being okay with subscription payment model, but not okay with a one time fee payment model, requires some level of cognitive dissonance I simply don't have.
It’s a question of what causes people to dilute the concept of ownership, and what teaches people to be more downtrodden, helpless, dependent, and afraid. I am against that which does this. Technically the same deal, presented differently, might not have this effect, in which case I would not object to it (on these grounds, at least).
> And while I'd generally agree with your statement, I admit - I wouldn't consider it a bad thing if pirates felt a bit more helpless and powerless
Who are you referring to when you say “pirates”? Are they the ones who crack the game? Those people are incorrigible and will only be aggravated by more copy protection and DRM. Are you instead referring to those which use a cracked copy of the game? Those people are impossible to make more helpless than your regular users. Your regular users, however, will be made to feel helpless if only DRM and online-only play (and no modding tools or source code) is provided.
> Because that's simply not a fair or equitable exchange.
I’d be more sympathetic if 99% of the server components of modern games weren’t only for their DRM-like properties (online-only play even for single-player campaigns, etc.) If, on the other hand, the servers were an optional component, you might solve the ethical problem by simply charging a recurring service fee for the server access.
> And yet I feel judged.
Them’s the breaks.
> But: surely it's only fair to compare the positive impact of my game against the positive impact of the activity that would have replaced it,
Yes.
> not against every possible activity?
That was not my intention.
> I thought I could clarify, since you seemed to have actual trouble even conceiving any other point of view than the contrasting Open Source one.
Ahh, I see. Thank you. But it's not that I have trouble conceiving the point of view put forth by the FSF and others. I can see their concerns, their desires, the benefits of a right of modification, and the many ways people (including myself) have been harmed by not being able to enjoy such a right, suffering vendor lock-in, etc.
What I have trouble with is understanding the jump to the more absolutist position, to say that there is no exception to the rule, to say that all proprietary software is immoral. Certainly, some hold this position - I would say RMS probably does - but I see it as a harmful stance and a rare stance. Those who appear to hold it usually have a fundamental difference of opinion - one that neither of us will make any headway trying to convince the other of in debate - or are speaking hyperbolically, in which case I would point out the harm of that.
> I suggest that you simply read the linked page, it explains it more explicitly and at more length.
I have, a couple of times at least.
> (You can’t argue that selling yourself into slavery is OK since enlisting oneself is allowed. If that was not the argument you were trying to make, I don’t understand it.)
I'm arguing we allow a form of slavery, in the form of enlistment. This doesn't make it moral. I think in some circumstances it can be moral - but this is an unbacked and unsubstantiated opinion, presented only to help provide my point of view.
>> I can sympathize a little with the "I thought I was buying it but all I got was renting a license" argument.
> It also dilutes the very concept of “owning”. When the Playstation was remotely downgraded by Sony
They were rightfully slapped with lawsuits for advertising a feature and then pulling it. This is wrong regardless of ownership. You could be renting or leasing your console and it'd still not be right. Ownership doesn't grant you the right to do anything you want with it - but that works both ways too.
> Who are you referring to when you say “pirates”?
I was specifically referring to people using the software without paying the license holder for it, in a copyright infringing manner. This does not include people using a no-cd crack on a copy of the game they own, this does not include people buying copies second hand. It didn't include the creators of cracks either, although my opinion would still apply to them. (EDIT: Subject clarity.)
> Are they the ones who crack the game? Those people are incorrigible and will only be aggravated by more copy protection and DRM
They will also be delayed, which is generally the goal of companies applying these techniques (sometimes successfully.)
> Are you instead referring to those which use a cracked copy of the game? Those people are impossible to make more helpless than your regular users.
They cannot be made more helpless, but they can be deterred (sometimes successfully.) I do recognize this reality.
> Your regular users, however, will be made to feel helpless if only DRM and online-only play (and no modding tools or source code) is provided.
I am not made to feel helpless when lightweight DRM doesn't get in my way. I am not made to feel helpless when I can't play Planetside offline, because the game makes no sense to play offline. Make no mistake though - I don't buy games with heavyweight DRM, and I don't buy games with stupid online-only requirements (I'll not buy the latest Sim City, for example.)
> I’d be more sympathetic if 99% of the server components of modern games weren’t only for their DRM-like properties
This is hyperbolic to the point of being untrue. I suspect you know that. Were it true, I would still ask you not to judge the "1%" by the actions of the other "99%" - if for no other reason than your own self interest in giving that "1%" (however small) a reason to not become 0% (however small that reason also is.)
>> But: surely it's only fair to compare the positive impact of my game against the positive impact of the activity that would have replaced it,
> Yes.
>> not against every possible activity?
> That was not my intention.
Cool. Then as part of the "1%" building games with optional server components, I'm doing better than the other "99%" - I like my chances.
"Free software should be looked on from the point of ethics, and not from financial gain."
I think the word 'markets' was a bit misleading in terms of argumentation. I meant "things which bring people added value and joy" and not "things which can be sold for money". Markets encompass both, but I was thinking of the consumer added value in this instance.
"Proprietary software is immoral, and that is the reason you shouldn't be building it"
I think taken without any other context this is not realistic.
I think the concepts of ownership and responsibility are far more important for high-quality software, from the point of view of short-term end-user value, than openness.
Software has a philosophical and a mathematical dimension. I don't think anyone should be able to own those, and I think most software patents are harmful in this way. However.
Software is used as an enabling component everywhere. As an enabling component, its added value does not depend on its freedom or openness but on its capability to function bug-free and provide the features end users need. As the canvas in the art program, as the automatic stabilizer in the plane, and so forth.
In these instances I claim the biggest human value this software brings is independent of its openness.
Enabling the example software to function correctly requires lots of hard labour that is not fun at all. I.e., work. Often it is repeating the same old concepts over and over again, knitting the specific system together piece by piece.
I think there is no Photoshop killer because the people to whom it brings most added value are not programmers and it's usually not fun or rewarding at all to work with such a large codebase. Blender is a fantastic counterexample.
Of course, software should utilize open and hackable data formats so that the data does not vanish.
The long term value for the user and the concept of the software providing free speech capabilities then depends on the underlying software ecosystem, and there open source most definitely helps.
Open formats, open tools to hack on them, open platforms. Yes, definitely! Any other way is harmful to all stakeholders. IMO, products can be closed source.
I have definitely had my Stallman phase, but after seeing some Free As In Freedom™ projects fail spectacularly, I've realized that there are in fact benefits to keeping some things out of the public eye (from a moral point of view).
Wanting your machine to work is also caring what your machine is doing.
I'm sure the IRS is kicking themselves for running XP for over a decade, building up such an extreme dependency on it that they are now paying huge sums of money for Microsoft to continue to support them.
The black box is only great so long as it actually works, and when it breaks you are completely screwed. It is in Microsoft's business interests to make sure it works, but they also want you paying them money while maintaining control of your computer.
But when it does break, your only option is the true owner of your computer, who will milk you for all you are worth, because you are now trapped. You don't know you are in a cage until you want out.
>The black box is only great so long as it actually works, and when it breaks you are completely screwed.
To most users Linux is just as much of a black box as Windows is.
Even among Linux fans, the number of people who actually need to customize the kernel is tiny. The number of people who can customize the kernel is even smaller.
So where's the freedom? At the OS level, all that's happened is that the lockdown has moved from corporations to a subset of the developer community.
Most end users aren't any more empowered than they used to be.
Now - it's different in the web and language spaces, where there's a steady simmer of framework development, and many popular web projects/products wouldn't have been possible without framework sharing.
But there's still plenty of proprietary content there. Just try to get Google or Facebook to share their data collections with you and see how politically relevant open source 'freedom' is then.
If that seems like a tangent, it's missing the point that the value of a system doesn't come from the source code - it comes from the system as a whole, and includes usability, community reach, innovation, invention, and data.
Open source pretends to be a huge lever for freedom, but it's more like a battered fork caught in an avalanche.
In computing, the world-changing leverage is elsewhere, and always has been.
Comments in this thread keep mentioning "most users" or "the average user". But they're not supposed to be the direct beneficiaries of Free Software. We are, and in line with the rule that 80% of users use 20% of features but never the same 20%, it's perfectly reasonable for us developers to expect a feature (open source code) that nobody else will use.
The endlessly apathetic hypothetical "average" user can't see past next week's paycheck, let alone their long-term best software interests. We are the ones who have to look to the future and prepare for it now. Wanting access to source code of core infrastructure is part of that preparation, and "average" users will be indirect beneficiaries via our improved ability to write reliable software.
To add another perspective, constantly targeting "average" is a great way to stay mediocre (see regression to the mean).
Finally, "average" users will benefit immensely from those who are inspired by the ability to tinker at a young age.
The user of open source, in the absence of the skills necessary to change code themselves, can easily pay any other developer on the free market to do it for them.
You can never do that with proprietary software. If you want something changed or fixed, you must appeal to the singular entity with monopoly access to the source.
My mum and dad both use their computers on a daily basis and yes, on occasion things don't always work as expected.
When that happens they ask me to fix it and if it wasn't me, then yes they would take it to a computer technician.
Just like they take their car to the mechanic.
Guess what. Not everyone is or wants to be a computer programmer and that group makes up the majority of all Windows users.
And the reason Windows has been so successful is that you can get away with knowing very little about computers (like my mum and dad) yet still find Windows easy to use.
The number floated around for XP continued support is $200 per machine per year. Red Hat, which is one of the few companies that supports a Linux distro for as long as Microsoft supports their products, charges $49/year for the equivalent of Windows Update, and doesn't have any information whatsoever on extended support. So, for the average customer who wants their desktops to have the same lifespan as XP in order to minimize training and other associated costs, ending up with a year or two of Extended Support while you finish your transition is still cheaper than any of the Linux providers.
I'm pretty sure Dell and HP provide support on all their Ubuntu machines, as do System76 et al. And they have extended support offerings all the same. Not sure on pricing, though.
Software support, not hardware. Patches, etc. RedHat is pretty much the only distro maker that will provide software support for the same timeframe as Windows releases are supported. Ubuntu, you'd have gone through the compatibility testing, upgrade testing, migration, etc cycle at least once more often than Windows, and deployments for anyone with custom software get expensive fast.
Besides RHEL there is also SuSE SLES / SLED with apparently 7 years of "general support" + 3 more of "self support" (guess that means you just get serious security fixes).
In addition to the shorter support time, with Ubuntu LTS it's a bit too easy to install packages from "universe" that aren't actually supported for more than 18 months...
The first and biggest effect (presuming a GPL-compatible license) is that Wine would suddenly work much better, and so Ubuntu and OSX would likely ship their following versions with "the ability to run Windows software alongside native software with no performance loss" as a feature. OSX N+3 or so might even eliminate Boot Camp, saying that there's no need to install Windows as a whole if all the software "Just Works."
Then why would anyone build native applications when Windows apps would be WORA? The only incentive I see for Apple is that they are in many ways a hardware company first. They want to sell hardware, and this is one reason they haven't feared Boot Camp.
Years ago I saw a demo of a Windows subsystem running an ELF Apache server directly on Windows NT. It isn't anything that Microsoft would release though, because once you make it so Windows can run Linux applications, who is going to write Windows apps? It erodes the platform.
I see an official WINE for OSX, that has a perfectly working subsystem, introducing the same problems to that ecosystem.
I dunno about Linux, but OS X has resisted cross-platform applications to a ludicrous degree. It's a good thing in my opinion, moving to Windows recently really highlighted the quality (in design and general OS fit) that indie OS X apps achieve.
There aren't many people who have bought MS software for years and are now excited to use a fork or their own build. Even assuming that some other mega corp comes along and offers world class support/service and their own fork, it still probably wouldn't hurt MS much.
Most small companies would have released their code as GPL (versus MIT as the case here) to limit the risk of a fork fragmenting everything (not being able to merge forks back in or make everything compatible). But in Microsoft's case, because of their size, few companies could hope to make a better version than MS. Further, MS makes money in lots of ways and this isn't a zero-sum game for them. They'll profit immensely from a more open ecosystem. Companies such as IBM, Apple, Google, and Oracle already have proven it's possible to be profitable with open source.
The entirety of Oracle's dabbling in "open source" has been by shaking free software products out of the rotting carcass of Sun Microsystems and - in many cases - adding those products to the grave (see also: OpenSolaris, OpenOffice, various others). The exceptions - namely MySQL (which wasn't a Sun product) and Java (which is still doing reasonably well) - are few and far between.
Oracle Linux certainly exists, but I'm certainly not about to hold that monstrosity on any sort of pedestal.
As for the others, yeah, they're making money on FOSS (and not killing said FOSS in the process), though such free software tends to be integrated in very much proprietary end results (like OS X / iOS and the vast majority of Android distributions).
MySQL got forked into MariaDB. I don't think many people are recommending you use the Oracle version much anymore.
And Oracle Java has also died on every platform that is not Windows. OpenJDK took over the Unix space, and Google has Dalvik and the ART implementing the JRE.
Virtualbox still exists. I guess. It has not done anything in about half a decade, though.
OpenJDK is unofficially ported and built for numerous platforms. But OpenJDK is Oracle's JDK. To contribute to it, you sign Oracle's contributor's license agreement.
Oracle's no golden example as a leader in open source (let alone libre software), but they demonstrate the probable trajectory for the Microsoft projects at this point: a healthy, cohesive community of developers (developers!) using the technology in an ever increasing number of projects and platforms.
>Virtualbox still exists. I guess. It has not done anything in about half a decade, though.
What do you mean? I use VirtualBox for lots of stuff and it gets regular updates... it's also faster than VMWare in most benchmarks I've seen. Perhaps you're thinking of something else when speaking of 'not doing anything'?
I listed MySQL as an "exception" because of it not being a Sun product, regardless of whether or not it's dead.
However, there are still plenty of environments using MySQL (and I'd know; I maintain quite a few such environments). MariaDB is an obvious migration path, of course.
It is funny. At some point Red Hat said "screw those guys" and started obfuscating its patches to the kernel (Oracle Linux was based on RHEL's code, which was open source) to make it harder for Oracle to apply them and create what was seen as a competing enterprise Linux distro.
> People who simply don't like Microsoft will not use it just because it's open.
You're forgetting that a lot of people traditionally haven't liked Microsoft because their software wasn't/isn't open. Yeah, Windows would still be a steaming pile of bovine manure regardless of which license it was made available under, but the availability of that source code would make it easy for better, Unix-like operating systems to pick up the necessary pieces to support Windows software, thus continuing to allow Microsoft to profit on their non-operating-system software (Office, SQL Server, Exchange, IIS, Internet Explorer^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H^HSpartan, etc.) regardless of whether or not Windows itself is widespread in usage.
Good debugging tools that filter out the irrelevant are perhaps better. Odds are that any time you break out the debugger the fix is in your code, not Windows, Linux, the JVM, etc. That's not to say those things don't have bugs, only that they are orders of magnitude more debugged than most code under development.
I worked with a closed source stack in the past (Java).
Even if the bug wasn't in our code, it was really helpful to see what's going on inside; it helped us figure out why something didn't work a lot of the time. Making false assumptions, passing stuff in a different format than the system expects, etc.
We all always worked with a decompiler in Eclipse, just because it really did make life so much easier sometimes. And well, of course, sometimes there really was a bug in the stack, and finding a workaround is much easier when you know what's going on.
You are so completely wrong. Follow the Wine project for a few weeks/months to get familiar with just how buggy, inconsistent and inconceivably obscure all of Windows' APIs are.
If Windows was good and near-bug-free, Wine would have long been completed. Windows is not just its kernel.
Wine has to duplicate all of the duct tape and baling wire Microsoft has used to provide backwards compatibility for their customers' existing applications. Just as Wine is an important kludge serving as a compatibility layer [It's Not an Emulator], all those undocumented APIs are the same sort of kludge serving as a compatibility layer.
And of course Wine is an outlier. Not only in that it contains its own 20 years of legacy code, but in the fact that it is black-box reverse engineering. It can't use disassembly to suss out details. Wine's difficulties stem from its requirements as much as anything else. These days VMs are the way to go if the goal is to get things done. A Windows box in the cloud is pennies per hour.
So far as I am aware, ALL usable operating systems are burdened with that sort of thing. I'd really like to see someone at least attempt a clean-slate OS. Make it 64-bit only from the start, vector fonts to support HighDPI displays from the beginning, etc. It would likely accumulate such cruft over time, but I would expect there to be substantial advantages to building from scratch something that rolls in all of the research advances and assumes capable modern hardware.
I wonder how this strategy is going to affect the bottom line of Microsoft. People writing with CLR languages are deploying web apps mainly (or only) to Windows now. They're going to have an option to deploy to Linux soon. This means less revenue from OS and DB licenses, so it looks bad. Do they expect a large number of people leaving Java, Node, Python, Ruby and picking up C# because of the Linux deploys? Those people would probably have to buy Windows and Visual Studio licenses to code in C# in a VM, or just ditch Macs for PCs. More desktop licenses could make up for lost server ones, but if I googled well, a server costs more than a desktop. Or maybe they're playing a longer game: open source as much as they can, hope some network effect builds up, find out how to profit from it. In the medium term they might be losing money, though. Am I missing something obvious?
Microsoft has already announced that the Visual Studio Community edition is free, which also means less revenue from VS.NET licenses. VS.NET is not a major revenue source, though; most of the revenue comes from server licenses, e.g. Windows Server, SQL Server and other integrated products.
But it's fairly easy to determine what their strategy is, given that they've been open sourcing most things that touch Azure. Azure also hosts Linux servers, and given that the world is moving towards hosting on Linux, they're following the trend to stay relevant and appeal to more developers.
Basically this comes down to making the .NET ecosystem more appealing and attracting more developers to Microsoft platform and development tools which will integrate seamlessly with their Azure hosting services and other commercial server products.
>Microsoft has already announced that the Visual Studio Community edition is free, which also means less revenue from VS.NET licenses.
Is it though? I would believe that the people using the community edition are the same people that were previously using the express edition, i.e. hobbyists and students who were not going to pay for it anyhow. And as you already described in your last paragraphs: if their plan works out, I would imagine visual studio license revenues to go up in the long term as more people are attracted to the eco-system.
This might be the answer, but it's going to become part of distributions so I could run .NET on Linux on AWS or on my own VPS hosted anywhere. I can't see any reason why Amazon won't provide a Linux .NET image to boot from. Obviously MS can try to provide better and faster images for their Azure servers, maybe even their own distro. I can't wait to see Microsoft Linux Server 2018 :-)
Prices for cloud servers are falling, so do they really plan to take part to a race to the bottom? I remember this discussion from November https://news.ycombinator.com/item?id=8582096
> Additionally, full-blown Visual Studio is totally free now.
Not exactly. Anyone inside an "enterprise organization" (according to Microsoft's arbitrary definition) is even worse off than before, because Microsoft won't be releasing the Express editions any more.
Here’s how individual developers can use Visual Studio Community:
- Any individual developer can use Visual Studio Community to create their own free or paid apps.
Here’s how Visual Studio Community can be used in organizations:
- An unlimited number of users within an organization can use Visual Studio Community for the following scenarios: in a classroom learning environment, for academic research, or for contributing to open source projects.
- For all other usage scenarios: In non-enterprise organizations, up to 5 users can use Visual Studio Community. In enterprise organizations (meaning those with >250 PCs or > $1 Million US Dollars in annual revenue), no use is permitted beyond the open source, academic research, and classroom learning environment scenarios described above.
> Not exactly. Anyone inside an "enterprise organization" (according to Microsoft's arbitrary definition) is even worse off than before, because Microsoft won't be releasing the Express editions any more.
I doubt that there were ever significant numbers of developers in enterprise organizations using Express Editions anyway. If you are doing serious business, the cost of paid licenses (and MSDN subscriptions) is generally justified if you are going to be using MS technologies.
Yeah. Even having a proper shared nothing cluster requires the Enterprise Edition, at like $12k a core. Just to get a slick version of what you can do with DRBD for free.
SQL Server Express is pretty useful on its own. The only limit is that the database size must be under 10GB. That's a pretty high limit for a lot of applications.
I never tried C# but I did code in Java with vim and emacs. Not a nice experience. Java more or less requires an IDE. I assumed C# would be the same but I could have assumed too much.
The most annoying thing in Java without an IDE (and a source-control plugin for that IDE) is moving types from one package to another and renaming types. Due to the requirement that the directory hierarchy mirrors the package structure and that every public class is contained in a file with the same name this can be a bit annoying at times. Otherwise C# is much more concise for many things but apart from that there isn't much difference, at least for me.
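For contrast, a C# file's namespace doesn't have to mirror its directory, so moving or renaming types is mostly a text edit rather than a file-system reshuffle. A trivial illustration (names made up):

    // This file can live anywhere in the project tree; the namespace below does
    // not have to mirror the directory path, and several public types can share
    // one file.
    namespace Company.Billing.Domain
    {
        public class Invoice { }
        public class InvoiceLine { }
    }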
There were some potential solutions (like billing SPARC servers and workstations as "Java development/deployment platforms") that Sun started to brush against, but they ran out of time / money / distance from Oracle's sidaM Touch, etc.
Java actually did/does make them money, most obviously through the Ask Toolbar bundling deal (which is why they still desperately cling to it even though it trashes their reputation and people hate it).
They also get commercial support contracts from it.
And they got money from licensing various trademarks, test suites, J2ME and so on.
But yeah - ultimately Sun weren't able to build a killer business out of Java. They had a variety of small revenue streams but nothing that could compete with Windows.
Making ASP.NET Windows Server only wasn't driving Windows Server sales, it was destroying Windows (and Visual Studio) as a developer platform, and thus, ensuring that no one had any motivation to develop for Windows Phones and tablets. It was a certain path to irrelevance.
Making ASP.NET Linux deployable ensures that Windows stays in the hands of developers. Making Azure Linux friendly ensures that Microsoft can take a slice of that market. And pushing Universal Apps and giving Visual Studio Community and Windows 10, an OS that allows Universal Apps to run on the desktop, for free motivates developers to fill up the app store.
This decision was inevitable and IMHO a huge step in the right direction.
I had to google what Universal Apps are (Windows is a very remote technology stack for me), got it now.
About the .NET on Linux move: so the strategy could be to make more people willing to develop with .NET, bet that they buy Visual Studio, which means Windows desktop licenses, and be prepared to see Windows Server replaced by Linux instances running (hopefully for them) on Azure. For that to work they must be bold in selling Azure and luring developers to the .NET stack. I don't foresee developers leaving their 5/10 years of investment in other stacks en masse. Maybe new developers will consider .NET as a viable alternative to all the other open source technologies. It's a generational gamble and it's going to pay off only if they really can compete against AWS and the other VM providers.
However I understand that they must do something, or they'll end up as the company that makes a console and an OS for video gamers and for cheap computers.
Is Microsoft actually putting resources into a Mono replacement for Linux and Mac? My understanding is it's still Mono for non-Windows, which has had very limited success. Xamarin is mostly concerned with mobile.
As far as statically typed languages go C# is actually great. If I had the option of using it instead of some gimped version of it then hands down that would be my language of choice.
I am really interested to see what happens once ASP.Net is running on Linux. C# and Visual Studio are fantastic, mature tools and I think a lot of developers would enjoy using them whereas they might be hesitant at the moment due to OS lock-in on the code they are writing.
ASP.NET already runs on Mono, but not always the latest versions of things. Having it all open source could bridge that compatibility gap and let us fully enjoy the ecosystem, instead of having to be careful with taking dependencies all over.
ASP.NET support on Mono is very shaky, with random things missing. I tried to develop a Mono ASP.NET MVC app on it recently and I had to switch to Windows due to the number of issues.
For webapps, I am using Nancy and .NET 4.5 and I am very happy with it. Developing and debugging on Windows and deploying on Ubuntu Apache with mod_mono works great for me.
ASP.NET is a big framework and indeed it's never been as smooth as it could be, especially with the very latest versions. It's not to say that web dev is not possible with Mono. I've been using Owin+Nancy+Signalr successfully on Linux.
No, in its current form it probably won't be ported to a non-Windows platform. This isn't to say there will never be an IDE called Visual Studio that looks like the current Visual Studio and runs on more platforms in the future.
I'm not sure they will migrate Visual Studio. First they'd need to move WPF, which is implemented in DirectX. Second, Visual Studio and MSDN are still profitable. Third, there's the Omnisharp project that's working on providing support for the more common IDEs on Linux. In fact, there's nothing stopping people from making Eclipse and IntelliJ plugins.
Silverlight's implementation of WPF runs on OS X already so presumably a lot of the ground work, at least for an OS X port, exists. I'd buy a copy of VS for Mac in a heartbeat.
Same. I haven't run windows in 7 years, but Visual Studio is still one of the best IDEs I've ever used. I'd gladly pay for it if it was released on OS X/Linux, although that's somewhat contingent on the plugins that I want running on it too.
Having used Eclipse, IntelliJ IDEA, and Visual Studio in some fashion over the past year, I personally prefer IntelliJ for its well thought-out interface, low footprint, numerous productivity-boosting shortcuts (I love Help > Productivity Guide) and refactoring options, etc. I literally pine for IntelliJ's features when I use Visual Studio. In fact JetBrains offers the very popular but pricey ReSharper extension for Visual Studio to bring Visual Studio on par with IntelliJ's IDE.
I'm talking about an alternate universe where it's available on OS X, and it's about $45 per month if you get the subscription version.
The other thing is that I want to use OS X for everything else -- I much prefer using it for development above Windows, so while you may be correct in terms of sheer dollar value comparison, in terms of opportunity and switching costs it's quite higher than that for me personally.
Haha, they just sent me an email about why I haven't upgraded to the latest ultimate version. I think cross-platform .NET(F#/C#) support would be the clincher.
A few weeks back I asked Scott Gu if there were any plans for Visual Studio on OSX/Linux and the answer was no. He mentioned they're focusing on Visual Studio Online, though I don't know if that means some sort of online IDE.
This has been asked about for nearly as long as Mono and WPF have been in coexistence. I would say there's very little chance this will happen. However, a recent update in the WPF world is MS has actually been ramping their WPF team back up after it looked like it would suffer the same fate as Silverlight. So who knows?
DX9 support is still experimental and requires an unstable Wine fork to make it run.
I have to say I managed to have it working and I got a 50% increase in FPS in Steam/Source games running under Wine.
There's the "old" DirectX -> OpenGL translation layer in Wine but it is not really efficient or performant, although it may be enough for GUI applications.
>> There is DX9 support on Linux and IIRC a partial DX10/11 implementation bit-rotting somewhere.
Sadly, Gallium3D isn't "Linux" yet. It's only really usable on the AMD open source drivers, which cover a very small portion of the market; I would say 15% at best, but in reality likely 5-10%.
I still haven't gotten it working in vim consistently enough to switch to the mac for development. When it does work, the lag makes it too slow, doesn't really allow for method discovery like intellisense, Ctags and omnicomplete still get better mileage for me.
Funny thing is that internal to Microsoft, most devs don't use Visual Studio as an IDE. Maybe as an editor or a debugger, but there are a lot of build systems out within Microsoft and most of them don't plug into Visual Studio.
IMO, open sourcing Visual Studio itself is not all that interesting vs. open sourcing the .NET platform.
I worked at Microsoft Excel 15 years ago and Visual Studio was only used as a debugger. Visual SlickEdit was the editor and compiling/building was command-line based. Each group is different.
Ah, that may be true. However, there was a push within my division to standardize all of our tools with VS (there used to be a separate editor for X++ but now it's all VS), and I've heard other things about other divisions.
Source is I worked there for eight years. Left two years ago. There was a push around the time I left to standardize more on TFS, but historically, VS was regarded internally as a bit non-hardcore.
This post is a few hours old, but I just want to put it out there: We're hiring OPEN SOURCE ENGINEERS. I'm one and our job is awesome[1]. Please get in touch with me if you're interested[2].
Since I wrote a string hash code function recently[1], I was interested to see what they use. Like many, they're using[2] the venerable djb hash. That sent me down a rabbit hole where I discovered that Bernstein was 19 when he wrote about it. Wow.
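For anyone curious, the classic djb2 formulation is tiny. Here's a rough sketch of the textbook algorithm in C# (the textbook shape, not necessarily the exact CoreCLR code):

    // Sketch of the classic djb2 string hash: hash = hash * 33 + c, seeded with 5381.
    // This is the textbook formulation, not necessarily how the CoreCLR source lays it out.
    static int Djb2Hash(string s)
    {
        unchecked
        {
            uint hash = 5381;
            foreach (char c in s)
                hash = hash * 33 + c;   // equivalently ((hash << 5) + hash) + c
            return (int)hash;
        }
    }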
I don't know the story, but the logic behind it is simple: If you want to guarantee no one depends on GetHashCode staying static between runs of an application, change it all the time.
IIRC, a few years ago a denial-of-service attack appeared, probably originally against Python, but it was soon ported to other languages.
The idea is that the hash is good enough for a normal hash table, but it's not a cryptographic hash and it's easy to find collisions. Then you can make a lot of requests with strings that have the same hash value. Now the hash operations are O(N) instead of O(~1) and everything is slower.
Using an unpredictable hash calculation makes this attack more difficult.
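You can simulate the effect without any attack at all: force every key into one bucket and watch a Dictionary degrade. A small sketch (the constant-hash comparer just stands in for attacker-chosen colliding strings):

    // Simulates hash flooding: a comparer that returns the same hash code for
    // every key puts all entries in one bucket, so each insert/lookup scans a
    // growing chain -- O(N) per operation instead of ~O(1).
    using System;
    using System.Collections.Generic;
    using System.Diagnostics;

    class ConstantHashComparer : IEqualityComparer<string>
    {
        public bool Equals(string x, string y) => string.Equals(x, y);
        public int GetHashCode(string s) => 42;   // every key "collides"
    }

    class Program
    {
        static void Main()
        {
            var sw = Stopwatch.StartNew();
            var dict = new Dictionary<string, int>(new ConstantHashComparer());
            for (int i = 0; i < 20000; i++)
                dict["key" + i] = i;   // each insert walks the whole collision chain
            Console.WriteLine("All-colliding inserts took " + sw.ElapsedMilliseconds + " ms");
            // With a well-distributed (and optionally randomized) string hash,
            // the same loop finishes in a few milliseconds.
        }
    }

A per-process random seed doesn't remove this worst case; it just stops an attacker from being able to precompute which strings collide.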
It's in a #if DEBUG statement so it would not change. Historically, even if they shipped the debug symbols, the assembly would have been built in release. Now, I suppose you could build it in debug.
The string there appears in multiple areas of that class, and the same behavior is displayed for all of them. Wouldn't logic suggest everything such as the above would be moved into a constant repository for clarity, and also less potential for human error in future additions?
It's pretty common in C# to use constants to represent strings. Cuts down on the repetition as you said, and you get the IntelliSense too. I'd be curious to know the reasoning for using the literals as well.
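For what it's worth, the usual pattern is just a const; the names below are made up for illustration, not taken from the actual source:

    // Illustrative only: hoisting a repeated literal into a const gives a single
    // point of change and IntelliSense completion at every use site.
    internal static class HashConfig
    {
        public const string RandomizedHashingSwitch = "SomeConfigSwitchName"; // hypothetical name
    }

    // hypothetical usage:
    // if (config.IsEnabled(HashConfig.RandomizedHashingSwitch)) { ... }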
It's sort of surprising to me how much larger the .NET version is, in terms of code. Virtually all the lines in the Java version are API docs. The .NET version doesn't seem to have them (they must be elsewhere?) but it does have a lot more code, and that code is much lower level.
Not sure what that means, if anything, but it's interesting.
That does answer an unanswered question I had on SO about string hashing. If strings are immutable, why isn't the hash code memoized? Seems like it would make HashSet/Dictionary lookups using string keys much faster.
That was my assumption, too. They do memoize the length, but I'm sure those bytes add up, having run into OutOfMemoryExceptions building huge amounts of strings before.
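A sketch of what memoizing would look like on a hypothetical immutable wrapper; the extra field per instance is exactly the cost being weighed here:

    // Sketch: caching the hash of an immutable value. Safe because the underlying
    // data never changes, but it costs an extra int per instance -- multiplied by
    // millions of strings, that's the memory trade-off discussed above.
    public sealed class CachedKey
    {
        private readonly string _value;
        private int _hash;   // 0 means "not computed yet"

        public CachedKey(string value) { _value = value; }

        public override int GetHashCode()
        {
            if (_hash == 0)
            {
                int h = _value.GetHashCode();
                _hash = (h == 0) ? 1 : h;   // avoid re-using the "not computed" sentinel
            }
            return _hash;
        }

        public override bool Equals(object obj)
        {
            var other = obj as CachedKey;
            return other != null && _value == other._value;
        }
    }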
Today, .NET Core builds and runs on Windows. We will be adding Linux and Mac implementations of platform-specific components over the next few months. We already have some Linux-specific code in .NET Core, but we’re really just getting started on our ports. We wanted to open up the code first, so that we could all enjoy the cross-platform journey from the outset.
Think about all the things that Wall Street has said Microsoft should do over the past decade. None of them was "more open source."
All the groundwork was done by Ballmer. It had to be because now that he and Gates have reduced their ownership stake and are no longer the two largest shareholders, Wall Street's flavor of the month ideas cannot be ignored.
The 'new' Microsoft has been building at least since they hired Hanselman back around 2008. He was talking then on his podcast about taking the job to advocate open source. Even then Microsoft worked with the Mono team. They moved Office to an open file format.
It takes a long time to change an aircraft carrier's course. This is Ballmer's strategic vision. His passion for developers even made it onto Youtube.
Or rather: When OpenDocument was standardized by ISO and threatened to create demand for open, common, standardized interchange formats, they created a similarly and confusingly named XML version of their existing format, with insufficient documentation to allow complete third-party implementations, and pushed it as a competing ISO standard, using tricks such as (accidentally, I believe they said) encouraging partner companies to join standards bodies just for the vote:
http://en.wikipedia.org/wiki/Swedish_Standards_Institute#The...
That may have been true at the time, but the documentation for OpenXML is actually very good. It's incredibly detailed -- I think the standard is somewhere around 5000 pages, and fairly readable. Of course, I haven't ever actually had to write my own implementation (thank god).
That's the way standards work. OpenDocument did not meet Microsoft's needs. They were the ones with a massive investment in an existing code base and customers who would be best served by feature compatibility. So they used the same process to create something with equal legitimacy.
As Grace Hopper said, the great thing about standards is that there are so many to choose from.
I'm not sure those two statements are related. I don't see how having .NET open sourced is directly affecting Microsoft's bottom line. I like getting free stuff too, but that doesn't mean it is necessarily profitable for the company.
IMHO, it's the "open" part and less the "source" part. I believe a lot of projects were reluctant to jump on the Mono train because they expected it would be steamrolled (on purpose or just through sheer manpower) by the .NET. However, if everyone is building on top of the same CoreCLR, it's possible that fear might subside. I doubt it will ever go away because Microsoft will continue to keep some APIs close to the vest.
If it's possible to transpile either Java source or better Java class files to a CLR assembly, even some of the community momentum that Java has might turn into a benefit for .NET, too.
I had heard of such projects back in the day, but didn't know if they were still in service (and I didn't take the time to research it). I had some skepticism about whether IKVM had kept up with the rapid moving target that is the JRE, but after seeing that it runs Minecraft[0], I'm guessing it'll likely be alright :-)
But holy shit, who in their right mind still uses CVS?!
First of all, it's not just opening up .NET. It's also many other cool things Microsoft has been doing: Linux on Azure, TypeScript, etc. etc.
Regarding .NET, it is likely it will benefit Microsoft significantly. First, because Microsoft would not do it without strong reasons. Second, we can guess at those reasons: Microsoft wants to grow its developer ecosystem. .NET has been limited due to being Windows-focused. Opening it up makes sense.
(Yes, Mono exists, but the .NET cross-platform experience still wasn't good enough. Not necessarily Mono's fault, but regardless, opening .NET can solve this (possibly at a cost to Mono).)
I'm familiar with the philosophical differences between Free Software and Open Source.
But in this context you seem to imply that there are practical differences i.e: Not all Open Source licences are considered Free Software in the practical sense.
If that is the case, could you provide an example?
Academic open-source licences are a good example.
For example, parts of the Win32 kernel are open for academic use and are widely shared between classes, but the limitations are there.
The licence provides free usage, study, display, and publishing, but it also explicitly says that some of the parts may only be provided as binaries (machine-readable), that you can't sell anything derivative, and that the licence is personal with no right to pass it on to anyone else.
So you can consult and modify within the frame of your academic work, but no more.
I'm not sure if you linked to the license you intended to - the license at that link doesn't seem to include any of the limitations you mention. If it did include those limitations, I don't think it would qualify as an open source license, either.
>> I'm familiar with the philosophical differences between Free Software and Open Source.
>> But in this context you seem to imply that there are practical differences i.e: Not all Open Source licences are considered Free Software in the practical sense.
The Open Source Initiative (OSI) was certifying all sorts of licenses as "open source" under their definition. Basically if you allowed people to see your source code it was considered open source. That's not really practical and most open source licenses had an agenda of the company retaining control, which means people can't really do what they want. To most developers there are basically the BSD/MIT licenses that allow you to do whatever you want, and the GPL/LGPL licenses that allow you to do what ever you want - so long as you pass that freedom on to your users. These are both forms of Free Software, though the distinction causes much argument. Every other "open source" license is generally more restrictive and meant for someone to retain ultimate control, including the old Microsoft licenses.
In my opinion the OSI did more harm to the movement than good by "certifying" a huge number of irrelevant licenses so companies could claim to be doing the cool new thing. None of that code can be integrated into anything else under different terms - at least not without consulting a lawyer.
> Basically if you allowed people to see your source code it was considered open source.
This (premise of your entire post) is very much incorrect.
OSI has never certified the notorious MSFT "Shared Source" licenses that you are probably referring to.
Please read the Wikipedia article on the subject, you will find that the licenses that are certified by OSI are the same licenses that the FSF calls "free software", and the licenses that OSI didn't certify are "non-free" according to the FSF.
The OSI "Open Source Definition" criteria are based on the "Debian Free Sofware Guidelines", which are in turn inspired by the FSF's "Free Software Definition".
Most proponents of free software would argue that the difference between free software and open source is a philosophical one, not a technical one [0]. Copyleft is preferable, but not mandatory, for a free software license; the most widely used non-copyleft free software licenses are recognized as such by the FSF [1].
Noteworthy from the source: Microsoft does NOT use (unmergeable) SLN files for their projects, but instead scripts msbuild invocations against specific projects:
I guess this explains why they saw no need to fix the somewhat broken SLN file format in the first place, but actually did something about project files. They don't share their customers' pain on this point.
To me Github has become a social networking website for developers (bear with me). As a social networking website its sort of half-way between Instagram (instead of sharing pictures you share code) and LinkedIn (It serves as a programmer's online profile ).
As a social-networking website, the most important thing that defines it is the number of users. You could publish your code on Codeplex or Bitbucket, but you would be simply limiting your reach.
Exactly, CMake getting more traction like this should be good. I know quite a few devs who're like "CMake? Why don't you just use good old Makefiles?" and then struggle with its syntax and/or fail to provide working Windows or even Mac builds of their software simply because they stick with plain Makefiles. (Don't get me wrong, they work fine, just not so much for cross-platform projects.)
So I'm still a little fuzzy as to what this means. Is this basically the same thing that Mono is trying to provide? Would it now be possible to have the F# front end link to the CoreCLR backend on Linux?
That's kind of it. CoreCLR is the runtime that all .NET languages run in. CoreCLR doesn't run on Linux yet, but they are working on it, and once it does, all .NET languages (including F#) should run on Linux.
My understanding was that Mono was the "open source CLR." What parts of Mono would this not cover? Once this project gets ported, what barriers to running .NET code on Linux/Mac/etc with performance and reliability comparable to Windows would remain?
I wonder why they would do this. In fact, I don't see the benefit of MIT over Apache except for compatibility with GPLv2. But I'm definitely not an expert, if someone would want to explain.
I have no idea what mattered to Microsoft, but my view is that MIT is just simpler. There really aren't too many disagreements on what the license requires.
Additionally, if you don't truly plan on enforcing many requirements of the license (e.g., if a distributor fails to "cause any modified files to carry prominent notices stating that You changed the files" (4.b.), will you send them a letter telling them to correct the problem?), then why bother using a license that includes those requirements?
The irony that Microsoft would one day release code under the MIT license which would make it possible for Mono to benefit (but not the other way around because of GPL) is hilarious.
How the wheel of freedom has turned.
* disclaimer: yeah I know the Stallman crowd have their own particular definition of "freedom" that is vastly different and would disagree
To be honest, that's actually one of the big motivators for me using MIT-style licenses (or BSD or ISC or somesuch) for a lot of my own code. I'm so accustomed to 525600-page EULAs that I cringe as soon as I see a license that exceeds one screenful, and I figure there are plenty of others who do the same.
Plus there are the other benefits (especially when it comes to free software projects being able to reuse that code without worrying so much about license compatibility).
This has all been set in motion during Ballmer's tenure... not by Ballmer himself, but from the inside out (a lot of good employees there).
Now one of the guys pushing it is CEO of Microsoft, and we are finally seeing a (real) difference. I joined the MS community a long time ago and this is (again) a heart-warming addition!
Good job Microsoft, you're a bit late to the party. But no doubt, the ROI will show sooner or later! ;-)
Very few people here are qualified to inspect the code in such short notice and look for dangerous code. I think any answer to this question is going to be pure speculation.
Since it's very easy to decompile .NET binaries and Windows has been a traditional place for crackers and what not, I think that code base is very well inspected already.
So I am not sure what this means to the end user but as another reader pointed out, Google and other companies in future could use .Net/C# as part of their mobile architecture. You could host .net sites on OSX or Linux, test them on a Mac. What else?
I wonder how this would aid projects such as IronPython and IronRuby if at all, just out of curiosity. My only dream is that they eventually have VS on Linux.
Probably the most immediate winner is Unity. They've been stuck on an ancient version of Mono because they did not want to license tech from Xamarin. They can now have MS-derived runtimes for free.
Unity has been developing their own runtime for a while now to get away from the mono licensing costs of using a new version. I'll be interested to see how they react to Microsoft open-sourcing a lot of stuff they might have been able to use.
Not much, I imagine. .net's been supporting languages like that for a long time through the Dynamic Language Runtime. Although, if more people use Iron(Python|Ruby|Scheme), the underlying runtimes, and quite possibly the DLR itself could see some code contribution loving.
I think it would be more interesting and useful to have the source for WPF, given that WPF is an actual GUI toolkit implementing its own widgets, whereas WinForms is basically just a wrapper around Win32.
And WPF seems pretty slick. I've only used it as a developer who has little UI skill, but it seemed to be far better than other UI kits. The design seemed rather coherent. Too bad performance and rendering issues hurt it for the first half decade of its life.
Java Swing turned out to be a failure because it didn't look native on any platform.
I would recommend something like Eto.Forms (https://github.com/picoe/Eto) which is an abstraction over native GUI toolkits (WPF, WinForms, GTK+, Cocoa)
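For the curious, a minimal Eto.Forms app looks roughly like this. This is adapted from memory of the project's quick-start, so treat the exact type and property names as assumptions rather than gospel:

    // Rough sketch of a minimal Eto.Forms app: one Form subclass, one entry point,
    // and the library binds to a native toolkit (WPF, WinForms, GTK+, Cocoa) at runtime.
    using System;
    using Eto.Forms;
    using Eto.Drawing;

    public class MainForm : Form
    {
        public MainForm()
        {
            Title = "Hello Eto";
            ClientSize = new Size(300, 200);
            Content = new Label { Text = "The same code runs on Windows, Linux and OS X." };
        }

        [STAThread]
        static void Main()
        {
            new Application().Run(new MainForm());
        }
    }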
My impression of Java UI components was that they looked boring and old-fashioned. It's a subjective judgement, but I guess others would have thought that way too. If Java Swing or other parts of the Java UI toolkit looked better than many native GUIs then there would have been more interest.
There has been a lot of interest. There are thousands upon thousands of big applications out there built on Swing, you just don't use them, because Java has never caught on in desktop consumer apps. But JetBrains' IDEs are based on Swing, as is NetBeans, and they look great.
I think looking great is entirely subjective. I think that JetBrains' IDEs look horrible... the skinning as it stands is inconsistent at best. They are cross-platform, but even looking like something built with Bootstrap + Font Awesome would look better. VS looks awesome (tends to run like ass though) by comparison.
Yup, it was different and it was worse. And full of bugs, particularly w.r.t. focus-handling, which even Sun's own (presumably correctly written) Swing applications were afflicted with.
I don't think they are going to open-source legacy stuff that they want people to forget. .Net, ASP.NET, CLR -- all of it is technology they want to keep.
This. It might appear a bit clunky at first sight, and it might not provide the shiniest eye candy out of the box but in terms of development velocity to get something practical done - provided it is combined with the tooling in Visual Studio - it's really nice.
I agree I would not suggest doing anything non-standard with it. And yes, it looks dated and ugly. But in some places visual appearance of widgets is not that important.
Often, boring UIs are good. The tool should fade into the background and let the user focus on getting her work done.
While I love shiny things the real world accepts good enough solutions that are economic to develop.
Theoretically I just adore WPF's scenegraph-oriented paradigm, since it feels like the right way, but... I've observed that for large projects it practically needs a lot of work to get anything useful done compared to Forms. Perhaps the overall architecture is a bit big-org oriented, where every tiny widget will have its own development team. Which is understandable, but means that for simple UIs WPF might be a lot more expensive.
I haven't done anything massive with it myself but have just observed a few projects from my work. I hope there are counterexamples.
"Non-standard" doesn't necessarily mean "shiny".
Often you need to compose elements in a way that makes a certain UI feature possible, or convenient.
Such compositions aren't inherently shiny; they can even be ugly. But functional.
I love using simple boring UIs because they are predictable. WPF encourages non-standard custom drawn stuff that only serves to distract from usability in my opinion. WPF also still has weird focus rectangles and font issues.
I'm still waiting for a good cross-platform UI toolkit with good tools and ecosystems around it without resorting to C++ (Qt) or JS approach (atom/nw).
C++ tooling is behind its time (CMake, no dependency management) and JS approach seems kludgy right now.
Java was pretty close (the JDK .. or NetBeans support packaging for the Windows or Mac App Store), but the UI components for OS X are not pleasing, and it would be nice if it could produce something for Ubuntu/Red Hat at least (I don't care about Arch or the rest at the moment).
Basically I want something better than what Oracle Java has to offer today :)
PS: I would require a great IDE, unit-testing/automation-testing, good scriptable build (Maven/Gradle style, not Ant/NAnt/MSBuild).
You should look at JavaFX. I just finished an app based on it. It's the official replacement for Swing and is quite WPF like in nature (scene graph oriented, etc).
Although there are skins that can make it look native on each platform, I didn't bother with this for my app. Trying to match native on a cross-platform app is a fool's errand. A few months after you ship your app, the OS developers will decide to reskin everything, and anything not using their native toolkit will look out of place. So my app takes its inspiration from Twitter Bootstrap and just doesn't even try to look native. You can skin JavaFX with CSS so it's quite easy to match the look, at least in some ways. Nobody has complained, and quite a few people have said the app looks great. I would definitely do this again for my future apps. Pick a nice design that isn't native to any platform in particular, and people won't hate you for it.
BTW Java now comes with a packager tool that makes self contained native installers which don't depend on the user having Java installed. Deployment is a lot more practical these days.
I agree. The Xamarin guys have done a great job at delivering cross-platform UI, making C# first class on most platforms, but their APIs still lag behind the official .NET framework and are stuck in the .NET 4.0 era.
GitHub suppresses views of syntax-highlighted files that have too many lines or too-long lines. They make browsers crawl or crash (too many elements...) and they're not useful for humans to try to view on a web page anyway.
I guess it's no surprise that Microsoft software has a human-edited source file that's over 35,000 lines...
Great, but it's too late. .NET will still be a Windows-only technology. (I do know about Mono.) .NET has become too big and too complex. I don't think it's easy to make it cross-platform.