Despite the inflexibility of RMS' arguments (they grate on me as much as on anyone), I feel that his position on continuing to develop GCC without modern features like a fully exposed AST is necessary to keep the "moderate" position where it is.
In other words, RMS' radical position is necessary for 'moderate' LLVM to exist. Otherwise we'd still be living in the Borland/Metrowerks/Microsoft world of the 90s - proprietary toolsets developed by private companies with absolutely no intention or incentive to share their code.
In polisci there is a concept called the "Overton window." If a once extremely radical position is held and promoted by any significant number of people, it shifts the entire conversation in that direction, so that the formerly radical position seems more moderate.
That's why RMS is very necessary. He shifts the Overton Window towards what most of us consider the "reasonable" position.
that's in response to the offer llvm made to hand the copyright over to the FSF and integrate llvm into gcc. I had long been wondering why there was an llvm-gcc on apple machines a couple revisions back.
basically he claims he didn't see the original llvm offer:
>> "If people are seriously in favor of LLVM being a long-term part of GCC,
>> I personally believe that the LLVM community would agree to assign the
>> copyright of LLVM itself to the FSF and we can work through these
>> details."
> I am stunned to see that we had this offer.
> Now, based on hindsight, I wish we had accepted it.
> If I had seen it back then, I would not have had the benefit of
> hindsight, but it would clearly have been a real possibility. Nothing
> would have ruled it out.
> I wish I had known about the offer.
This is false--people are just as likely to respond to opposing arguments with rejection and ideological hardening. This has been replicated in psychological studies.
In my personal experience, the less the opposing position acknowledges the values of its opponents, the more likely it is to be rejected. Most people reason by mood affiliation and use argumentation as a social tool, so this should not be a surprise.
I think the "Overton window" is confusing cause and effect; it's the nature of reasonable positions to generate unreasonable fanatics at the tail ends. But if you pay attention to the loud fanatics at a given point of time, you find most of them do not shift any window but instead fall into irrelevance. I think that as RMS's software becomes less important, people will care less about what he has to say.
Sure - "The loud fanatics" who have no following certainly drop into irrelevance. I think if what you say were completely true we would not have had any movement forward on positions that were considered radical just 10 - 20 years ago - like same-sex marriage. It is only because certain people with large followings - Andrew Sullivan and others - began making a vocal argument for gay marriage, which wasn't even considered a mainstream position until very recently, that it is now the law of the land in so many states.
Radicals have to be able to articulate their position in a way that is compelling and reasonable to a significant number of people in order for there to be a shift. But I think political progress is largely explained by this phenomenon.
It very much was and is contrary to plenty of people's beliefs, it's just that the number of people who hold those beliefs has changed. For example in the very recent Alabama decision the people publicly opposing it explicitly say it's against their beliefs.
There's an enormous difference between having an opinion which is not aligned with mainstream and being a radical.
The radical usually wants the world to convert to his/her own views, which is why radicals tend to get closed-minded and hard to talk with as they get older.
"I'll say nothing against him. At one time the whites in the United States called him a racialist, and extremist, and a Communist. Then the Black Muslims came along and the whites thanked the Lord for Martin Luther King."
I do not understand why this post is downvoted. That there is a very large difference between "an opinion which is not aligned with mainstream" and the type of fanatical opinion here called "radical"--this should not be a controversial opinion.
Some people label all undesired opinions "radical" but that is obviously not what this poster is talking about.
> Arguments virtually never convince anyone, so this is all a moot point.
I would tend to agree with you that when two people are arguing, there is very little chance that one will convince the other. However, one of the things I like to do on HN is read arguments between two informed people on a topic in which I myself am uninformed and unopinionated. So for my personal benefit I would urge the people of HN to keep arguing. And to cite your sources.
It's not that people accept the radical arguments; it's that exposure to them subconsciously shifts your emotional reaction to less radical arguments, making them seem more reasonable.
One should be skeptical of anything that says "exactly what you would most love to do is the best choice."
In-groups love nothing more than making fun of the out-group. When they can wrap themselves in the reasoning of "I'm just moving the Overton window!", they are avoiding the difficult and often painful steps of wondering whether their course of action is the correct one.
That might have been true in the past, but I don't see it being true these days. Lots of companies are only now figuring out the benefits of free software. Just look at Microsoft for one example.
But they only see the technical benefits of Open Source, not the ethical benefits of Free Software. This means that we'll continue to see the spread of the "Open Source almost everything" methodology, which leads to a world where libraries are free, but the applications built from them are not. The developer gets freedom, but the user doesn't.
ESR seems a bit over the line with this mail, but what never ceases to amaze me is how disconnected RMS is from present technology:
> From its name, I guess that LLDB is a noncopylefted debugger and that some might intend it to replace GDB. But I don't know if that is so.
This is just one example of many: in the recent arguments he has stated several times that he doesn't understand how automated code refactoring works, that he has no experience with IDEs, and the like.
Ideology aside, he does not seem like a person capable of steering important projects, at least when it comes to compiler technology. He just doesn't know enough anymore.
You have to realize that RMS's goal isn't to "steer important projects", and you should give him more credit in that he probably could be "connected with the present technology" if he wanted to.
The fact that "he doesn't know enough anymore" doesn't say much about Stallman; instead it says a lot about how his goal of making sure software is libre has been shoved aside by everyone else for other priorities.
Also, RMS is one of the most humble people on the scene, and will freely admit to not knowing something until he lives and breathes it.
> The fact that "he doesn't know enough anymore" doesn't say much about Stallman
Yeah, actually, it does: specifically, it says a lot about his qualifications to apply theoretical ideals to real world situations. To intelligently plan how to achieve the goals of the ideology, you need more than devotion to and deep understanding of the ideology; you need deep understanding of the existing context, to understand the pragmatics of moving toward the goals of the ideology in that context.
But RMS has never really been pragmatic in the sense I think you mean here. RMS is the spiritual leader, other people have always done a lot of the implementation. When the principles collide with the ability to do some specific thing, RMS has pretty much always come down on the side of the principles. This is nothing new.
Just because you disagree with some of the results of his principles, probably because you're focused on getting shit done in your little corner of the world (I'm typing this on a Mac, I'm just like you), doesn't mean that RMS is somehow fundamentally flawed or incapable of being the philosophical leader of a movement.
> But RMS has never really been pragmatic in the sense I think you mean here.
I didn't use "pragmatic" as an adjective describing RMS or a role RMS might be in at all, so I'm not really sure what you are saying. RMS is, and has for a long time been, acting in the role that I stated that the "he doesn't know enough anymore" [about the way working developers now actually build software] claim is relevant to his suitability, which is simply making specific recommendations about what software features and usage restrictions should, or should not, be present to achieve the goals of his ideology.
> Just because you disagree with some of the results of his principles
My position on Stallman's principles is orthogonal to my belief that his particular recommended policies are often counterproductive to achieving his stated principles. The post you are responding to is about the latter, not the former.
But it doesn't matter how developers build software today. When RMS started, software developers used (largely) closed IDEs, operating systems, compilers, etc. His principles are an entirely separate matter and he has stated over and over and over again that he doesn't care if his principles are inconvenient or if adherence to his principles causes technology to advance at a slower rate or software to be less useful. So his knowledge of "modern" software development really doesn't matter.
> But it doesn't matter how developers build software today.
It does if you are trying to use decisions about which features to include or exclude in copyleft software targeted at software developers as a mechanism to promote the goals of an ideology with a specific view of software freedom.
> His principles are an entirely separate matter and he has stated over and over and over again that he doesn't care if his principles are inconvenient or if adherence to his principles causes technology to advance at a slower rate
But he presumably cares about whether his decisions result in a world that reflects his principles less rather than one that reflects his principles more. And that's where knowledge of the present pragmatics is important when it comes to tactical choices to advance his ideology.
> But he presumably cares about whether his decisions result in a world that reflects his principles less rather than one that reflects his principles more.
No, not if it means compromising the principles themselves. That's the beauty of RMS, he really isn't pragmatic. He isn't willing to compromise, at all, ever. And that's why he is so important, because he represents an unwavering ideal, you don't have to worry about him moving the goal posts, if you hitch yourself to RMS and let out 100 feet of rope, you know that you will always be 100 feet from free software purity.
I'd be really surprised if most of the people working on these marquee GNU/FSF projects weren't happy with GPL/copyleft, at least for these "complete" programs (as opposed to libraries like the GPLed GNU Scientific Library).
> Are you actually saying you think the FSF have failed?
I think the FSF has failed in popularizing RMS's extremist, exclusionary ideology, which sees the eradication of non-Free software as a moral imperative, even at the cost of technological progress and of the utility of Free software for its technical, rather than ideological, functions.
I think the FSF has succeeded in using copyleft licensing to create a critical mass of Free software, which established the pragmatic case for Free software well, and -- because the pragmatic case has been so well made -- has demonstrated (entirely unintentionally) the conflict between the (larger, AFAICT) group whose goal is increased availability, utility, attractiveness, and use of Free software and the (smaller, again AFAICT) group whose goal is RMS's: eradication of non-Free software and avoidance of Free software's utility in producing non-Free software.
> it says a lot about how his goal of making sure software is libre has been shoved aside by everyone else for other priorities.
I'd call this a deep problem with how the FSF has operated to-date. It's not that libre software isn't important, or hasn't had a huge impact on the software world. But the very idea that it should be perceived as the ultimate priority in the existence of software is wrong. That viewpoint fails to understand or acknowledge how people use software and what other important risks they perceive and face related to software. As such, our libre utopia falls apart because we didn't understand that it had to be inhabited by real humans.
> he probably could be "connected with the present technology" if he wanted to.
Case in point. Understanding how people use and are affected by "present technology" is key to follow-on innovations after copyleft. IMO, a significant risk to libre software is that its social innovation has not continued to adapt to the changing software landscape. For a time, that was fine because we had the heyday of free software's expansion to worry about. But there's been this tacit (or maybe explicit) assumption in the community that the GPL and "belief" in libre software are enough. But in fact, I'll posit that the real goal is to build sustaining social infrastructure for free software and information culture.
Does proud ignorance of IDEs indicate that RMS may be impolitic or undiplomatic? Yes. Does it indicate he is "disconnected from present technology"? In the world of C/C++, I don't think being willfully ignorant of IDEs and automated refactoring means you are out of date in your technical knowledge, as the grandparent post implied.
I do. There's a reason that CLion is becoming a thing (can't happen fast enough) and Visual Assist has been a de facto standard in C++ development on Windows for years. The tools exist, they just don't exist in Emacs.
The UNIX philosophy is to compose stuff out of small tools. You could view what most UNIX programmers use as an IDE; it is just their personal combination of vi/emacs with refactoring tools (from primitive sed to, e.g., Go's rename tool), etc.
We are working with a certain expensive application, developed by one big multinational, that has a virtual machine for extending the functionality by the user (or, more realistically, a contractor).
Now, while I am forced to use Visual Studio as a build system and for its integrated debugger, I too generally use other tools for actual development. And I'd love it if I didn't have to use VS for its debugger either.
It can only run from inside the IDE and requires extensive use of the mouse and GUIs. This wastes precious screen space, and the mouse is often slower than just typing and composing commands.
Also, since everything is tightly integrated, when something crashes the whole thing goes down. This can be frustrating when your project takes almost a full minute to load in VS (it's mostly VisualAssistX being busy parsing files and the perforce plugin syncing up).
The first thing I do when beginning work with a new embedded platform is figure out how to bypass whatever wacky IDE its vendor wants their customers to use.
I am not sure many computer scientists care about the game development world, apart from maybe VR and other cutting-edge prototypes that we can only benefit from if gaming funds them.
Very narrow-minded and ignorant, typical of a certain type of developer. Video game development has contributed volumes to algorithm optimization, physics and physics approximation, computer graphics, design, industrial design, astrophysics, etc. Virtually every field of computer science.
There are lots of people who are not Windows C++ developers. Just because there are lots of people who DO use IDEs doesn't mean there aren't lots of people who don't. Don't take his point so personally - he's just saying RMS's ignorance is not that unusual.
I take his phrase to mean that those people are justified, or are the majority.
That is, the phrase "lots of very technically knowledgeable people shun IDEs and automated refactoring" seems to me to imply that technically knowledgeable people do or should shun IDEs and automated refactoring in general.
But, as we all know, technically knowledgeable people fall in both camps (pro and against IDEs).
> ... Windows C++ developers (most of which use VS) ...
There is no option for anything else on Windows, though. The shell isn't anywhere near as prevalent and you have no other go-to method at all. The whole ecosystem is point & click, which means you'll get nothing but headaches from trying to use a more UNIX-style toolchain in lieu of an IDE.
It's not exactly a point for IDEs when, even if you wanted to, it'd be a hassle to try to integrate another type of workflow into an OS that clearly is not made for it.
It's not just that his knowledge is outdated, it's also that his philosophical principles will always take precedence over everything else, which includes sound and/or pragmatic technological decisions.
I'm not fond of esr in general but he's spot on with this post.
> It's not just that his knowledge is outdated, it's also that his philosophical principles will always take precedence over everything else
To be fair to RMS he has tried to consider opposing views in the past when he can see and understand the need behind the proposition.
In this case the problem is that RMS doesn't know enough C++/Java/C#-style OO programming, and can't understand why you would need refactoring tools to rename a method, because in his C-style world a search and replace should be sufficient.
And because he doesn't see the need himself, he won't consider it a valid use-case which GNU/GCC/Emacs needs to support.
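To make that concrete, here's a minimal, made-up C++ sketch (my own illustration, not anything from the actual GCC thread) of why plain search and replace stops being sufficient once methods and classes enter the picture:

    #include <iostream>

    // Two unrelated types that happen to share a method name.
    // A textual search-and-replace of "close" touches both call
    // sites below; a semantic refactoring tool, which needs
    // compiler-grade knowledge of the code, renames only File::close.
    struct File {
        void close() { std::cout << "file closed\n"; }   // the rename target
    };

    struct Window {
        void close() { std::cout << "window closed\n"; } // must not change
    };

    int main() {
        File f;
        Window w;
        f.close();  // should become e.g. f.release()
        w.close();  // must stay w.close()
    }

This is exactly the kind of analysis clang's tooling can do, and that GCC could support if its AST were exposed.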
That refusal has already led to chilling effects: people have stopped working on adding GCC-based support for auto-completion and similar functionality. RMS may be stubborn, but he's not stupid: he sees the proposed LLDB/LLVM-based functionality as a direct consequence of this, as bypassing his "authority" on the GCC issue, and thus as an "attack" on GCC and the goals of GNU/FSF itself.
In my mind, he is obviously right that people are side-stepping his judgement, but he is wrong that this is an attack on free software: People just want to make Emacs better, and he is vigorously fighting them to make it not happen.
And now we have RMS fighting to artificially limit free software, in his mind to preserve it. I'm not sure how long this has been going on now... A decade?
To me forking core GNU projects to leave RMS out just in order to get things done is increasingly looking like the only option.
> and can't understand why you would need refactoring tools to rename a method, because in his C-style world a search and replace should be sufficient.
Is that really the underlying reason, or are you making it up?
From what I've seen, he doesn't want to expose the internals of GPL software such that non-free software can be built on top of it, which is a fair stance to take - even if you and I might disagree with it, since as a consequence it might prevent our lives from being easier/better, at least in the short term.
The direct and inescapable consequence of this is that he makes things more difficult for other GPL (and non-GPL-but-free) software, just because maybe some closed-source software might do something with it in a day and age when they can already just use clang.
Dying on this hill makes him a bad steward of the projects other people have entrusted him with (and fortunately the maintainer of the Emacs debugger stuff is ignoring him), while also making everybody else's lives suck a little more.
I believe the explanation here is that the FSF/GNU is willing to support their tools on non-free OSes as long as those tools work the same as they do on free OSes, and in particular, don't work "better" than the free version because the non-free OS provides additional services somehow.
They don't want their tools to be superior or have better functionality on a non-free OS.
For example, emacs wouldn't take a patch that hooked into speech recognition provided by the OS, unless that was also available on a free OS. And say Microsoft or Apple provided powerful functionality for debuggers via a system api (and this same functionality isn't on a free OS) - gdb would not take a patch that used it.
Yes, this is a perfect example of the difference between open source and free software.
I remember Stallman saying something along the lines of "even if it weren't as good as proprietary software, it would be important for people to use free software" over a decade ago.
The fact that releasing the source code, and allowing people to modify it, leads to high quality software is a nice perk as far as he's concerned; but it's not the reason for the FSF. The FSF exists so that programmers aren't helpless when their system breaks.
Raymond's arguments, on the other hand, are all about which compiler or debugger is better, assuming the compilers and debuggers being compared are open source.
Note this is explicitly RMS' philosophy, which hasn't changed. For him, freedom takes precedence over technological quality. He has been saying this for ages, for example when he explained that Open Source (as represented by Eric Raymond) and Free Software (as represented by the FSF) have very different goals.
If you accept that RMS' worldview is about freedom, even if it sometimes means sacrificing some technological advantages, you'll see his position is consistent and reasonable.
The problem is that the number of users is directly proportional to technological quality.
People use gcc over non-copyleft compilers because they perceive it as technically superior. People use emacs over non-copyleft editors because they perceive it as technically superior.
Sacrificing technological quality to fulfill an agenda will actually have the opposite effect, because it'll drive users to non-copyleft solutions in order to get the better piece of software.
It's not just the number of users, either. When your software is dominant, you're in control. You get to have a say in the direction of the technology, and you get to prevent the lesser players from having their say.
By sacrificing technological quality for ideological purity, RMS is giving up both his userbase and the dominant position he could use to prevent non-free software from taking over.
> it's also that his philosophical principles will always take precedence over everything else
Exactly. I respect RMS, and his position that proprietary software is immoral, but I also know that position won't be shared by many (maybe even most) of the people who care about free software.
Saying Stallman isn't qualified to steer major projects because he isn't familiar with IDEs is a pretty weird point to make. Emacs is a far more advanced environment than any IDE I have used, and often the so-called "features" of IDEs are to deal with self-created problems: Java/C# build tooling in IDEs, for example, is there to deal with the fact that the build processes for these languages is massively overcomplicated. I can't speak for Stallman, but I am confident that I'm accurately representing the opinions of a lot of Emacs users when I say that IDEs provide a leaky abstraction over systemic design problems rather than solving them.
esr periodically shoves his oar into the FSF world, occasionally with faint malice about the whole business. He really isn't into Free software as much as he is into Open software. You can't read him as a disinterested techie and come away with correct views.
I think the many replies pointing out that you don't need an IDE or code refactoring if you are a real programmer miss my point: it is not that you need advanced tools to develop a compiler, it's that you need to write a compiler that supports those tools for the people who like to use them.
And to write such a compiler, you must at a minimum know what refactoring tools and IDEs are.
Also, I'm sure you always get naming and signatures right on the first try, foreseeing every possible evolution of the system. I don't, and since I don't like my programs to turn into huge piles of horror, I enjoy refactoring tools a lot.
There was a discussion in /r/linux last week about LLVM and GCC et al., so I'm just reposting my statements on why a world where GCC is irrelevant would be bad:
It takes little effort on Apple's or Google's or Microsoft's part to take advantage of an LLVM-dominated world to close off their own changes to it and try to force developers to use their own proprietary LLVM distributions on their own operating systems. And nothing stops any company in the future from taking advantage of all the great LLVM tech to implement their own CPUs in terms of LLVM IR, so they never need to publish their ISA and can lock down their platform with a blob LLVM of their own. The only thing stopping any of that from happening is GCC remaining a competitive alternative.
If LLVM came to dominate the compiler scene to such a degree that GCC were irrelevant, it would open the floodgates to any major company forking LLVM into a proprietary, paid-for compiler that they require on their OS. It lets you do anything from what Apple is already doing with Swift - creating a programming language with a proprietary compiler - to going at it from the other end and implementing a proprietary translation unit, so that you never need to publish your ISA (something Nvidia never does) while still letting all the LLVM compilers target it.
Right now, nobody can close off their LLVM contributions, because a hobbled and fractured LLVM ecosystem is one that cannot overcome GCC. In the same way, Apple and Google had to cooperate on webkit until it was so dominant that they were in a position to fork and do their own things with it once Gecko was rendered effectively irrelevant. The difference is that webkit was LGPL while Clang is under its own permissive license - with webkit they still cannot directly modify the free software parts without redistribution, so it is harder to make a proprietary webkit, but Apple has surely succeeded since they have their own proprietary patchset on top of trunk webkit nowadays.
"If we live in 'Type A' a universe where closed source is more efficient, markets will eventually punish people who take closed source code open. Markets will correspondingly reward people who take open source closed. In this kind of universe, open source is doomed; the GPL will be subverted or routed around by efficiency-seeking investors as surely as water flows downhill.
"If we live in a 'Type B' universe where open source is more efficient, markets will eventually punish people who take open source code closed. Markets will correspondingly reward people who take closed source open. In such a universe closed source is its own punishment; open source will capture ever-larger swathes of industry as investors chase efficiency gains.
"In a Type A universe, reciprocal licensing is futile. In a Type B universe, reciprocal licensing is unnecessary. In neither universe can the GPL’s attempts to punish what we regard as misbehavior have more than short-term, temporary effects."
That assumes a company writing its own closed-source software is capable of keeping up with an entire community. If that were the case, we'd have had a proprietary version of the Linux kernel long ago, and it'd have completely replaced Linux in all the areas where Linux currently thrives.
Also, assuming the case that Open Source is more efficient, some of us don't want to wait around for inefficient closed-source-based companies to fold, because they can use the large savings they're sitting on to take a very long time thrashing around and doing damage before dying or adapting. If copyleft licenses can speed that process up, great.
Finally, there's a false dichotomy here: closed-source and open-source are not on a single scale of goodness measured by "efficiency", and the market does not perfectly adjust to maximize efficiency.
ESR's type-A and type-B universes both presuppose free markets and perfect market efficiency. There's a difference between wanting that and assuming that it's already the case.
> That assumes a company writing its own closed-source software is capable of keeping up with an entire community. If that were the case, we'd have had a proprietary version of the Linux kernel long ago, and it'd have completely replaced Linux in all the areas where Linux currently thrives.
Uh, yes, that's one of the reasons the license itself doesn't make much of a difference. If you want to fork the Linux kernel and not give back -- and companies do that in our world -- you'll quickly learn that the license is the least of your problems with regard to staying up to date.
> Also, assuming the case that Open Source is more efficient, some of us don't want to wait around for inefficient closed-source-based companies to fold, because they can use the large savings they're sitting on to take a very long time thrashing around and doing damage before dying or adapting.
That's a valid point. Then again, reciprocal licenses are based on the assumption that, fundamentally, they'll get faster results with legal action (or threatened legal action) than you can expect from the added costs of running a fork of an open source project. When your first cease and desist letter works, that's true. Then again, the IBM vs. SCO fiasco showed how long it can take to get vindication from the courts.
> ESR's type-A and type-B universes both presuppose free markets and perfect market efficiency. There's a difference between wanting that and assuming that it's already the case.
You don't have to have perfect market efficiency. Thinking about how things would work in a perfect world is often useful for understanding the imperfect world we actually do live in. The world we live in does involve a very impersonal market that will relentlessly tell you when you're in the wrong business, or trying to do something in a silly way. People have an ability to ignore the market's message, but it certainly exists.
What if we live in a universe of Type C, in which the market favors a complicated combination of factors of which 'open source' is a very minor one and market positioning with technical prowess is the major one, whereby software is opened or closed based on specific political circumstances, such that the timing and not the direction is what matters?
I'm sure you're capable of doing that analysis yourself, but in my opinion, the GPL wouldn't provide any more benefits to a Type C universe than it provides in a Type A or Type B universe.
Instead, we live in a Type C universe, where the Efficient Market Hypothesis is untrue and real people regularly behave more in accordance with personal beliefs and ideologies than in accordance with the imperative to increase wealth by all means possible.
Although he didn't say so, Raymond's thought analysis is based on the development of software in a commercial or professional setting. The decision of whether to release personal projects as open source is a different discussion altogether.
I disagree with both premises, because software is not a free market. It exists in the framing context of IP and copyright, unnatural societal constructs with varying reasons for existing. But they do exist.
It is not a universal truth whether or not proprietary or free software makes sense. In a world without copyright, the mechanisms of software profiteering and the utility of open or closed source radically change, since anything and everything is effectively permissively licensed.
On one side, it would take the profit motive out of proprietary software. Without copyright you cannot prosecute users who redistribute your binaries, and thus it is "hard" to get people to buy them from you. I would not say impossible, because I would not presume to consider all the possibilities in such a foreign context. But in the general case, without copyright, it becomes impossible to profiteer off false scarcity of information in the form of copies of software.
I say that because it's important to contextualize why people strive to close off and lock down their software by depriving users of software freedoms: copyright control, control of distribution, and the right to a monopoly over the idea mean profit. If you take that away, suddenly there is little incentive not to develop the software you want communally with others who want the same software, because your options are either to do all the work yourself, closed, and have everyone else use it anyway, or to open it up, have others contribute, and lessen your burden. The mechanism does not change - if you are distributing your software (which is the only situation where the GPL even takes effect to compel source release), in a copyright-free world it makes more sense to at least release the source as an act of security. You cannot even sell visibility: if someone wants to derive from you, you cannot charge them to see the source, because once you have willfully given them a copy, you cannot legally compel them not to release it.
In that political paradigm - not a universe, not a fundamental rule of reality - software freedom just makes sense almost all the time. And the edge cases where it does not are much easier to overcome, because today free software is an uphill battle against corporate interests who use their power over software to extract revenue from their users to then fund the enhancement of their product. It is why Photoshop and Office are so hard to contend with: they get so much money by using this framework of information monopolies to entrench themselves perpetually.
There are a lot of ways to frame a society that could in theory bias it towards free or proprietary software, which is in part why I don't always agree with RMS - I love the GPL and what it means in our political environment today, but I also claim it is a highly flawed economic model due to the existence of state-sponsored IP in the first place. But there are many other ways to do things, and in any of those permutations free software may or may not be economically optimal, which means it is absolutely an oversimplification to claim that "universes where free software makes sense" exist. That is literally not seeing outside the box at all.
I'll concede that a universe could exist where software isn't covered by copyright or any similar law. I don't see how the GPL would be any more useful in that universe than in the two that Raymond considered.
I'm always confused by this stance. It presupposes that big companies like Apple and Google will at some point become bad actors in the opensource world. (They both contribute huge amounts of code). Even Microsoft is starting to open up their codebase with the release of huge parts of the .NET ecosystem.
Frankly, I want the big boys on my side when it comes to opensource code. I want them to use the code I write. Usually I indirectly benefit from them using my code anyway, and they benefit, and everyone's happy. This is true even if they decide to make proprietary changes which don't get pushed upstream. I gain influence in my community and lucrative job offers. I get invited to talk at tech conferences, and my projects (present and future) attract more attention. I honestly don't see the downside here.
Apple might have a change of leadership and decide to swim against the current and make LLVM proprietary, but if that happens can't we just fork it? As far as I can see, MariaDB is doing just fine. And until that happens (which will probably be never), we can get some huge compiler ecosystem improvements on Apple's dime. All opensource.
Am I missing something, or is this fear of corporations totally unjustified?
For example, see shader compilers for GPUs. How many open shader compilers do you see around? Is there any motivation to open them?
On the other side, during the 90's we saw many vendors come out with new CPU ISAs, extensions of existing ISAs, new SoCs, etc. Many of them didn't have the resources or will to write a new C compiler, so they wanted to use something existing. They were willing to write a new GCC backend - and the GCC license basically forced them to be open. After this, there was no point in keeping the ISA specification secret.
LLVM/clang does not have this effect. It pretty much rewards being closed. So today we have shitty shader compilers (especially on ARM SoCs) for secret ISAs, and you aren't going to see their sources anytime soon.
I am not very familiar with the world of shader compilers, but it seems like you have provided the most concrete example I've heard yet of how GCC benefited in the long term in a way that LLVM/clang didn't.
I would love to hear from anyone with more expertise in this realm who might be able to dispute this claim in any way... otherwise this seems to be a smoking gun in GCC's favor!
There are some llvm based shader compilers out there. Some even open (AMD), some not (Nvidia).
The situation with architectures in 90s sounds about right: Those that were kept closed (and where a gcc backend was thus no option for the vendor - mostly embedded stuff) have to this day shitty compilers with unpredictable optimizers.
I'm not sure about the exact conclusion you seem to draw, but it seems to be something like "GPL projects tend to have better outcomes than non-GPL". The whole point of the thread, coming directly from the GCC brain trust, doesn't support that conclusion. It's only natural that niche GCC users are going to hang around the longest.
I'd also like to note that LLVM/clang didn't gain large non-Apple marketshare until GCC adopted GPLv3, which has more to do with its stagnation than anything, IMO. v2 is palatable to many business needs, v3 is not.
Mostly I was looking for a specific/practical current example where using LLVM allowed a company to release something closed-source, where they would have been forced to open it if they used GCC.
The rubber apparently meets the road with the NVidia shader compiler.
> LLVM/clang does not have this effect. It pretty much rewards for being closed.
Only if you can convince people to use your architecture. The tools being closed is a minus in that, and must be taken into account with a lot of other stuff. A closed dev environment may well drown a brand-new ecosystem that you're trying to bootstrap.
Sony recently contributed back a ton of LLVM and clang stuff from their PS4 project, by the way. Why would they do that if keeping it closed was so rewarding?
Microsoft is opening their codebase because they've been crushed by the enormous efforts of open source. This concession was hard won over the course of decades.
Apple and Google do not contribute code freely, nor do they contribute a significant amount of their code. The contributions are limited to areas in which an advantage exists. One only has to look at the machinations present in other development platforms to realize the threat. Consider what's happened with Java in recent years. Or look to Swift. Or to the entire Microsoft ecosystem, which was built in part on a foundation of open source. BSD-licensed code permeates the Windows environment; to ask "what's the concern" betrays a rather stunning ignorance of Microsoft's behavior over the three prior decades.
"because they've been crushed by the enormous efforts of open source"
As opposed to their own ineptitude (e.g. Vista and Windows 8) or the changing of the guard from original founders?
I'm certain open source played a role, but I suspect a secondary one. Heck, if post XP Windows didn't suck so much, I and my parents (who nowadays run what I build them) would be using it instead of Linux for our desktops.
Vista and Windows 8 aren't the problem. Lack of presence on servers and mobile devices is the problem -- those are the two key spaces where OSS platforms have won out.
Linux is still not a significant player on desktops. Microsoft is still completely dominating that space.
Nobody in a ten mile radius of me gives a crap about what is running on servers or phones. Most of them do not recognize an Android smartphone as a legitimate computer. They do not know what a CPU is, they do not know what a hard drive is, and they think their monitor on the desk is the computer and the tower in the closet is the "CPU".
Windows has the mindshare of the masses. When many upper-middle-class white Americans want to write a document, they can only fathom Word. When they want to do a spreadsheet, they can only fathom Excel. When they want to draw, they can only fathom Photoshop.
It isn't about options or features or anything; I'm talking about the supermajority of people who can no longer comprehend the existence of anything but what they know - where being presented with Linux destroys their world view. They talk about OSX like it's an Easy-Bake Oven rather than another computer, or as if it's another desktop UI for Windows that also runs Office.
Which is why Microsoft's open source efforts are pretty much all on the developer end. They know their userbase is completely ignorant of everything, just the way they intended, and it would take years of retraining to push the public consciousness away from the mindset that Microsoft Windows is the personal computer and everything else is some gadget.
Mobile devices I firmly ascribe to ineptitude, and can supply some 2nd and 3rd hand details I've read about.
Servers are more complicated. In the mid-90s Windows NT started dropping in quality, and the much older decision to have mandatory file locking resulted in situations where creating a server with a major MS server application could require ~ 20 reboots. And many more bug and security fixes require reboots than they do on UNIX(TM) based/inspired platforms.
Then one could argue ineptitude in marketing when Microsoft didn't cut deals that could have made their software competitive for mass installations. I really wonder about that, because so many of these need source, but it's "a path not traveled", except internally with Azure.
"Apple and Google do not contribute code freely, nor do they contribute a significant amount of their code. The contributions are limited to areas in which an advantage exists."
This is simply false, actually. But of course, you have no evidence of this, only rhetoric, while I actually see literally every code contribution Google makes.
That's not what you claimed. You claimed they do not contribute code freely; they certainly do. It might not be a very significant percentage of the code they write, but it's a fact that they contribute code to open source projects, and they do not charge for it.
I see the misunderstanding. When I say "contribute code freely" I mean "without restriction."
You are certainly aware that the vast majority of code is under strict restrictions and will be leveraged for competitive/controlling purposes rather than being shared. Employees wishing to freely contribute code in these domains will have their requests denied.
We've both been employed by large SV companies; we both know how this works. The majority of software will be used in an attempt to control the market.
"You are certainly aware that the vast majority of code is under strict restrictions and will be leveraged for competitive/controlling purposes rather than being shared. Employees wishing to freely contribute code in these domains will have their requests denied.
"
??????
None of this is true.
I mean, literally none of this.
I don't even know where to begin.
Because it's an incredibly vague statement that is open to many interpretations?
As written, actually, i don't agree with it.
Google has open sourced > 100 million lines of source code, depending on how you count.
I can't tell you what percent this is, but it is quite significant
It is the vast majority of a number of products, and not the vast majority of a number of other products.
In fact, for some subsidiaries, all of the code is open source. For some, it isn't.
So your statement depends on a lot - who are you counting, what is "their code" (code we've written, code we've modified, or code we use), etc.
If you make a detailed enough statement, I'd probably agree.
But as written, there are plenty of cases where Google open sources the vast majority of its code.
> Consider what's happened with Java in recent years.
What has happened with Java in recent years?
One thing I can think of is that the 'official' Sun/Oracle JDK has gone from being closed source, to having a second-class GPL'd derivative, to being built on a GPL'd core. The amount of proprietary closed-source code has gone from 4% to 1% to nothing that doesn't have a free replacement today.
A narrow and egocentric view of the situation. This is not about developer (your) freedom (to use, to change, to gain from) but about user freedom. Do a little research on RMS's history and what he defends instead of drawing your conclusions from a few context-limited mailing list postings.
> Am I missing something, or is this fear of corporations totally unjustified?
Sun Microsystems basically grabbed BSD development by hiring up a lot of good people and running with it. It took a while for the various free BSD's to come into their own.
Basically, with "bsd licensed" software, if a big company hired up all the developers, they could take it proprietary and out-compete fork efforts.
I don't think it happens often, but it's not impossible, either.
The AT&T lawsuit and the end of DARPA BSD funding and that research group at UCB didn't have a lot to do with that? That was several years after Sun got in bed with AT&T and announced that BSD based SunOS was doomed.
> Basically, with "bsd licensed" software, if a big company hired up all the developers, they could take it proprietary and out-compete fork efforts.
Well, look, sure, if you hire all the developers that are working on something and understand it, you can probably out-compete other implementations even if all you have the old developers do on your proprietary project is write up specs from which a different set of developers build a legally non-derivative interoperable implementation.
That may be a risk with permissively-licensed Free software, but its also a risk with copyleft Free software and proprietary software.
My experience is that specs - even well-written ones - are far enough from working, tested code that it's going to be significantly easier to take the BSD project proprietary than to build a competitor to a GPL-licensed one from scratch. Also, working on new stuff is more interesting, which matters for attracting the 'core group'.
> I'm always confused by this stance. It presupposes that big companies like Apple and Google will at some point become bad actors in the opensource world
To resolve your confusion in this area, perhaps you should consult an Oracle.
I understand this position, and yes, it looks like the natural outcome in a simple game-like modeling.
Yet, PostgreSQL survives, and is doing well. Ditto for Apache. That is enough evidence that there's something wrong with the usual modeling... Ok, maybe not enough evidence for you to feel secure on the viability of big non copyleft licensed software, but you should at least take it into account.
One of the big advantages of LLVM's license in this regard is that, unlike with GCC, you don't risk leaking patents even if you do open source your changes. Or did they fix that now?
I strongly disagree with RMS on some issues (like Snowden) and I don't share his hard line on proprietary software either. But RMS is just a particular kind of animal. I respect the need for that animal in a diverse ecosystem and I respect the reasoning behind free software.
In particular, I think gcc has played an essential role in providing free software to users because it is licensed under the GPL. The BSD license is great for less important things, but the moment giant corporations have engineered the whole "open source" ecosystem so that they can distribute forks of all the basic build tools without contributing changes to the public where we can see and influence them - that's the moment we've handed over the keys to the kingdom. You have to be pretty out of touch with history not to see that point.
To hear the way HN talks about RMS, he is a nerdy, smelly, arrogant, technically ignorant Emmanuel Goldstein, representing everything we hate most about the nerdy computing world that pre-existed the current startup gold rush (but which, coincidentally, entirely enabled it and us). Now a large proportion of us here secretly harbor the belief that we are the next Steve Jobs, so we pine for the good old days when people like that made bank on companies whose business models were entirely based on platform lock-in. Because so few of us actually remember how fucked up it really was for all the users and programmers. Because we don't consider ourselves users and programmers, just temporarily embarrassed millionaires and Chief Engineering Architect Engineers. So it's no wonder that HN still doesn't understand the point of GPL. Just like most of Marin county now thinks measles is something to cultivate, like acidophilus.
> The BSD license is great for less important things, but the moment giant corporations have engineered the whole "open source" ecosystem so that they can distribute forks of all the basic build tools without contributing changes to the public where we can see and influence them - that's the moment we've handed over the keys to the kingdom. You have to be pretty out of touch with history not to see that point.
Thankfully, Emacs maintainer Stefan Monnier is being sensible about this:
> As mentioned earlier, in any case I will happily accept and install LLDB support into gud.el. So as long as I'm Emacs maintainer, your opinion on whether this might ruin the FSF's goals are not relevant.
Yeah. What's especially painful is RMS is basically saying "Hey folks, I want to block this until I can research the issue." And then his research is "I don't know anything about this, can anybody tell me what LLDB is?"
My guess is (based on his past statements) that he doesn't browse the web, or browses it by having pages emailed to him via emacs (really!), and doesn't use search, so he can't really find out what LLDB is.
Do you really want to be beholden to a person who blocks software integration (into a unified debugger interface) and then can't even do the research to justify it?
> My guess is (based on his past statements) that he doesn't browse the web, or browses it through emacs sending mail to him (really!)
Just FYI, pg described his browsing setup at one point a few years ago, and it was actually fairly similar.
(It was in a follow-up comment to "Disconnecting Distraction", if I remember correctly. I don't do it myself, but it can be a great way to force yourself to be productive and only read the things you really want to read, instead of getting sucked into aimless browsing. When you think about it, it's really just a poor man's version of Pocket or Instapaper.)
I had never expected RMS to let go of his child, but the moment he handed over maintainership of Emacs was really a new starting point. A lot of bickering just went away, and pragmatism played at least some role from then on.
(Obviously not at any cost, he chose his successor(s) very well, so they are trusted and are committed to Free Software)
In the linked thread RMS is a bit detached from reality, IMO, but still very reasonable. In the other thread where this whole LLDB drama started he was simply obnoxious and a bully, even insulting and driving out a contributor.
I read a big caveat in the clause "So as long as I'm Emacs maintainer"
Then again, I first became aware of this latest cycle when, I think, a message from him threatening a fork became a Hacker News topic (https://news.ycombinator.com/item?id=8861360), followed by several others with a lot of discussion.
"GCC versus LLVM performance analysis reveals the LLVM inliner 1) does not inline certain hot functions unless a high threshold is provided at -O3 2) produces larger and slower code at -Os."
The problem with LLVM's inliner has been known for a long time. One of the best discussions is the "Optimization in LLVM" talk from the 2013 European LLVM Conference.
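For anyone who wants to poke at this themselves, here's a rough sketch of the workaround the quoted analysis alludes to - raising the inline threshold by hand. The code and the threshold value are purely illustrative, not a tuned recommendation:

    // hot.cpp - a small hot function that a conservative inliner
    // might leave out-of-line at default thresholds.
    //
    // Option 1: raise LLVM's inline threshold at -O3:
    //   clang++ -O3 -mllvm -inline-threshold=1000 -c hot.cpp
    // Option 2: force the decision per function (GCC and Clang both honor this):
    static inline __attribute__((always_inline))
    double dot3(const double* a, const double* b) {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }

    double accumulate(const double* a, const double* b, long n) {
        double sum = 0.0;
        for (long i = 0; i + 2 < n; i += 3)
            sum += dot3(a + i, b + i);  // the hot call site
        return sum;
    }

Of course, per-function attributes don't scale to a large codebase, which is why the inliner's default heuristics matter so much.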
Not only does gcc often produce faster-running programs, as referenced in the post; it also has mature OpenMP support (although llvm/clang is moving pretty fast on OMP development) and has a Fortran compiler. These are important for numerical computing.
As the original author of that benchmark, I should point out that that was LLVM SVN (unreleased) against GCC 4.8 & 4.9 - I couldn't get GCC 5.0 SVN to build, so it's possible 5.0 will be faster again than LLVM in certain situations.
However, I would also say that generally I've found LLVM is now producing faster code than GCC in most code I've tested both compilers with.
That is very interesting, because LLVM developers themselves admit LLVM generates slower code than GCC (on average). This is plain if you read the LLVM Developers' Meeting talks.
I think it is completely possible that LLVM developers are using the wrong benchmarks. The benchmarks are mostly SPEC and some large Google C++ codebases; in some sense both are quite atypical. But then, the entire problem is understanding what typical codebases look like.
I can hopefully settle this (as a developer of both).
Assuming we stick to x86/x64, nowadays (literally, let's say as of January 2015) GCC and LLVM are within the noise for most people on most code (i.e., within 1-2% of each other).
You can certainly find benchmarks where LLVM does badly. Some are important to some people, some aren't.
It is harder to find benchmarks where GCC does badly.
Small benchmarks can go either way, but for large codebases (especially C++) the inliner is more important than just about anything else. So GCC wins, because it has the better inliner.
There are many measures of benchmark size. One important measure is the size of the code that accounts for 99% of execution time. If your codebase is a million lines but your hotspot is a thousand lines, the benchmark result is sensitive to optimization quirks, and in some sense the benchmark is small.
I think the real point is that there's defending GCC and there's "defending" GCC. Actually defending GCC means making it as useful and usable as you can in the context of a robust, interoperable GNU ecosystem. "Defending" GCC includes creating pointless interoperability barriers and weakening other GNU tools in the guise of "protecting" copylefted crown jewels that no one wants to steal any more (those people have long since moved on to clang).
This suggests that the FSF should create a GPL-licensed fork of Clang/LLVM. Such a fork could continue to receive all upstream improvements (since they're under a more liberal license), but not the other way around - the upstream project would not be able to use patches licensed under the GPL. Thus (in theory) it could end up being strictly superior to the original.
Didn't we go all over this when a hostile GPL re-licensing of a (painfully) reverse engineered OpenBSD driver was attempted?
I'm not sure what you could do, that would also be attractive for people to use and contribute to, that would add enough GPL content to make such a thing fly even in theory.
Maybe add a bunch of the GCC backends to LLVM? That's where it's most conspicuously behind GCC. There's also the precedent of GCC derived pre-Clang front ends, although I don't know how many of the non-C and C++ GCC front ends are seriously important (there's Ada, but that's got its own complexities).
>> This suggests that the FSF should create a GPL-licensed fork of Clang/LLVM.
To do that you need developer support for the fork, so it won't happen now. It won't happen until enough people are sufficiently upset with the current development path, and if/when that happens it will probably not be due to the license.
This is a common FSF tactic. It doesn't always end well (both GnuTLS and GNU Mach come to mind) - not necessarily because they do bad things, but because they aren't able to get enough people using the software to make a significant improvement.
Since clang 3.5 (or possibly even earlier), clang has basically been neck and neck with GCC. Most benchmarks seem to show the two basically equal to each other, and then there's a handful where clang wins outright and a handful where GCC wins outright. [1]
For my own use-case (a high-performance photorealistic renderer threaded using TBB), my clang builds outperform my gcc builds, but of course that's completely anecdotal and based on just my own use-case.
Reading ESR tends to make me angry in a special way. I can understand people who prefer a BSD/MIT license over the GPL - they usually seem to understand the difference and claim the former are "more free" while kind of understanding the point of the latter but disagreeing with its importance. But when I read ESR, he seems to have it in for the FSF and doesn't really seem to respect or even understand the philosophy at all. I've come to view him as a formerly-high-profile troll.
I don't know why you would make this argument now. Stallman argued clearly in January why copyleft was more important to him than technical superiority. https://gcc.gnu.org/ml/gcc/2014-01/msg00247.html
Everybody already knew that copyleft was more important to Stallman than technical superiority. The problem is that there are a lot of people who put a higher value on technical superiority than copyleft. Those people could be counted upon as GCC users and contributors as long as GCC was the technically best compiler available as open source (all free software is open source, not all open source is free software, etc.).
RMS's goal seems to be to have GCC not be replaced by LLVM. From a reply downthread of ESR's post:
> This means it is more than a potential problem.
> The possible harm is to replace copylefted GNU package
> with noncopylefted code. They must have worked for a long long time
> to replace the capabilities GDB already had.
So RMS thinks it's a "problem" and "harmful" if lldb were to replace gdb. So RMS cares not just about what GCC does, but about whether other people adopt GCC as well. In terms of that goal, it does not matter what RMS considers more important. RMS is not going to convince everyone else to use GCC instead of LLVM by simply restating his arguments about copyleft forcefully again and again. That doesn't mean he has to sacrifice his feelings about copyleft. But it does mean he has to give a damn about being technically superior to LLVM and lldb if he wants to beat them.
He does not need to beat LLVM for GCC to remain available.
It would be detrimental to GCC to lose developers, sure, but I think one of the reasons for the disconnect between RMS and others is that RMS' goals do not require a large user base, and so he is willing to make decisions that seem counterproductive to anyone who puts usability and user acceptance first.
The mere continued existence of GCC (and the other GNU tools) in many ways safeguards the freedoms he cares about: it allows users to jump ship if they are in the future prevented from doing what they want with the alternatives. It's not the ideal scenario, but it serves his goals better than giving in, and potentially seeing those freedoms slip away at some future point.
Of course he'd be better served by GCC outcompeting LLVM. But if that isn't happening, his goals are better served by slowing developer migration than by "capitulating" in a way that might affect developer mindshare by putting LLVM tools in front of more people.
As you say, of course the problem with this is that a lot of us care more about the technical superiority. Especially when the competition is a project that is as open as LLVM.
I continue to be amazed at people buying into the ESR-authored consensual hallucination about his role and importance. His primary creative contribution to the community has been a fictional mythology starring himself.
Can someone explain to me what all the hubbub is about? As far as I understand it, and even as RMS himself states, isn't this mostly about them trying to block clang/LLVM tie-ins to the debugger more than anything else?
I don't understand what's with all the "GCC sucks" attitude these days. It has worked for quite some time, and yes, it's showing some age and has lagged behind others due to a lack of development, but we should all be worried when people very influential in the GNU community start talking about why GCC is bad.
I don't like this line of thinking at all.
If GCC is behind, fork it and do what needs to be done to make it competitive.
LLVM was created with a different philosophy from GCC. In fact, it started out as a GCC add-on.
LLVM has its own debugger. This is just RMS realizing that people are abandoning GCC en masse, and not liking it. Just that.
The LLVM developers wanted to develop in ways that the GCC people did not, so after trying to modify GCC and finding it too complex, they created something from scratch.
It is not simply a fork, but a complete redesign.
LLVM's main advantages are:
Instead of being a monolithic compiler like GCC, LLVM is a set of interoperable libraries and tools. This way you can build your own compilers, parsers, or debuggers just by including libraries (see the sketch after this list).
The above means you don't need to glue things together with scripts as in GCC; you can actually program a compiler very easily.
You also don't need to use the linker if you just want a parser. Or you don't need the parser if you already have a stored or machine-generated abstract syntax tree.
It uses its own cross-platform "assembly" language, so you can target it from dozens of different languages, or compile dozens of different languages with it.
It compiles to bitcode, which makes portable code possible: programming GPUs on the fly, JavaScript, and so on.
You can use it for whatever you want, even to make closed-source software.
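As a minimal sketch of the library point (assuming the LLVM C++ headers are installed; exact signatures vary a bit between releases), this is roughly what "programming a compiler by including libraries" looks like: build and print the IR for an add function, with no parser or linker involved.

    // Sketch only: build LLVM IR for "i32 add(i32 %a, i32 %b)" in memory.
    #include "llvm/IR/IRBuilder.h"
    #include "llvm/IR/LLVMContext.h"
    #include "llvm/IR/Module.h"
    #include "llvm/Support/raw_ostream.h"

    int main() {
        llvm::LLVMContext ctx;
        llvm::Module mod("demo", ctx);
        llvm::IRBuilder<> builder(ctx);

        auto* i32 = builder.getInt32Ty();
        auto* fnType = llvm::FunctionType::get(i32, {i32, i32}, false);
        auto* fn = llvm::Function::Create(
            fnType, llvm::Function::ExternalLinkage, "add", &mod);
        builder.SetInsertPoint(llvm::BasicBlock::Create(ctx, "entry", fn));

        auto args = fn->arg_begin();
        llvm::Value* a = &*args++;
        llvm::Value* b = &*args;
        builder.CreateRet(builder.CreateAdd(a, b, "sum"));

        mod.print(llvm::outs(), nullptr); // emit the textual IR
        return 0;
    }

The same libraries expose optimization passes, JITing, and object-file emission, which is why projects that aren't compilers at all can pick up just the pieces they need.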
>> Can someone explain to me what all the hubbub is about? As far as I understand it, and even as RMS himself states, isn't this mostly about them trying to block clang/LLVM tie-ins to the debugger more than anything else?
The "problem" RMS has with llvm is its non-copyleft license. And yes, the issue is that he doesn't want to support llvn with GNU tools. When LLVM started they used GCCs front end to compile C code until their own matured enough. So parts of GCC were being used to develop the middle and back ends of llvm. The LLVM ecosystem is systematically replacing the GNU toolchain with non-copyleft licensed versions and RMS does not want to support that in any way.
>> I don't understand what's with all the "GCC sucks" attitude these days. It has worked for quite some time, and yes, it's showing some age and has lagged behind others due to a lack of development, but we should all be worried when people very influential in the GNU community start talking about why GCC is bad.
I don't understand it either. GCC is still a great compiler. Developers seem to prefer the modular design of LLVM and they're probably right in that. Users like some of the features enabled by that design as well - IDE integration and cross compilation come to mind. GCC is starting to move, but slowly.
>> I don't like this line of thinking at all.
>> If GCC is behind, fork it and do what needs to be done to make it competitive.
You make it sound like there are lots of compiler developers who have time on their hands for open source development AND who share the licensing philosophy AND who are unhappy with GCC's development path. Apparently there are not.
"I don't understand it either. GCC is still a great compiler. Developers seem to prefer the modular design of LLVM and they're probably right in that."
There are lots of things that are simply very hard to do with GCC, but are easily doable with LLVM.
IDE integration is just one, because LLVM has the ability to compile just a few lines at a time (at least Apple's version does).
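For instance, the IDE case boils down to something like the following sketch using libclang, Clang's stable C API (the file name here is a placeholder): parse a file in-process and walk its diagnostics the way an editor would to draw error squiggles, with no compile-and-link cycle.

    // Sketch: in-process parsing with libclang, as an IDE would do it.
    #include <clang-c/Index.h>
    #include <cstdio>

    int main(int argc, char** argv) {
        const char* file = argc > 1 ? argv[1] : "test.c"; // placeholder input
        CXIndex index = clang_createIndex(0, 0);
        CXTranslationUnit tu = clang_parseTranslationUnit(
            index, file, nullptr, 0, nullptr, 0, CXTranslationUnit_None);
        if (!tu) return 1;

        // Report each diagnostic, formatted as the compiler would print it.
        for (unsigned i = 0, n = clang_getNumDiagnostics(tu); i < n; ++i) {
            CXDiagnostic d = clang_getDiagnostic(tu, i);
            CXString s = clang_formatDiagnostic(
                d, clang_defaultDiagnosticDisplayOptions());
            std::printf("%s\n", clang_getCString(s));
            clang_disposeString(s);
            clang_disposeDiagnostic(d);
        }
        clang_disposeTranslationUnit(tu);
        clang_disposeIndex(index);
        return 0;
    }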
Things we have done with LLVM:
Rendering the 3D paths of millions of molecules.
Automatic testing of software and hardware.
Simulation of military vehicles doing all kinds of things.
Digital crash tests.
Natural language (speech) understanding.
Before LLVM, doing all this took years; now it takes months or weeks.
This exploits a compiler's ability to understand languages, but it is not just compiling C or C++ the way GCC does.
I was under the impression that GCC would stick around as long as the Linux Kernel was in existence because there was a tight requirement for GCC to compile the Linux Kernel... well I was wrong! http://llvm.linuxfoundation.org/ shows the status of getting the Linux Kernel to compile using CLANG... and according to the stats as of 1/28/2015 there are only 41 patches required to make this work.
I wonder if any of the big distros will start compiling with CLANG instead of GCC?
I agree with Eric Raymond that GCC probably can't beat Clang.
However, I don't agree that this means we should just jump on the LLVM train. The world still needs GNU. And LLVM isn't GNU.
The GNU community has historically held dominance in the compiler field, so this is an uncomfortable time. We can no longer rely on the popularity of GCC to keep GNU in the forefront. However, this doesn't mean we should just give up--I think the solution is to start again from first principles and build a better system, an alternative to GCC that is also released under the GPL.
I'm not saying we should drop support for GCC. But we need to innovate: GCC became dominant because it was innovative and it lost dominance because it stopped innovating. LLVM isn't the only non-GNU competitor. It's telling that none of the major new languages (Go, Rust, Clojure, Scala) are released under the GPL.
The implementations of new languages aren't being released as GPL because the process of introducing a new programming language is already a nearly insurmountable task. To this end, language authors seek every advantage they can get. Nobody's ever chosen a programming language by virtue of the fact that its implementation was GPL, but I can easily see people ruling out such a programming language out of fear that the use of such an implementation will infect their own code with GPL (I fully acknowledge that this fear is unwarranted if your licensing is set up properly, but it actually can be rather tricky to get right and most people are rightfully fearful of interpreting this sort of thing on their own).
At the same time no developer of a fledgling language is going to worry about someone coming in and taking their permissively-licensed code without contributing back, because getting to the stage where someone cares enough to seriously fork your language already implies an enormous relative degree of success.
This is pretty much the perfect moment for two egos like RMS and ESR to clash. RMS is already getting a lot of flak for expressing his fear of GCC being displaced in this manner, whereas ESR is well known for his dislike of RMS, to put it mildly.
It is not clear that RMS is driven by ego. He's motivated by a very clear goal to keep free software free. Free as in freedom free. As for ESR, well, yup, that's an ego that is sufficient in size to have a gravitational pull. http://www.linuxtoday.com/infrastructure/2000082800620OPCYKN
I think this is an unhelpful definition of ego. I had a significant degree of contact with RMS in the period leading up to the launch of GNU, including being one of his roommates when he formally launched it, and I assure you he is very seriously ego-driven. Is not his expounding of his Free Software philosophy, a rather big thing as sets of principles go, the action of a man very certain of himself?
I think you're just perceiving a difference in how it's expressed by each of them; e.g., one could just as well reply that ESR has a very clear goal of increasing the quality of software. Which, for me, is the big difference between "Free" and "Open" software.
Oh, he is certainly ego-driven, but not in the same way Jobs was, for example. RMS does not put his person before everyone else; rather, he lives through his principles and tries to convince everyone why it makes sense to follow them. And he has a very solid rationale that he has developed over the years, which makes him very articulate.
> It is not clear that RMS is driven by ego. He's motivated by a very clear goal to keep free software free.
No, he's driven by a very clear goal to prevent non-free software, even if that means preventing free software that might, potentially, in the future, be used by someone, somewhere, to create non-free software.
And I think there is a certain amount of ego in there that gets in the way of good judgement about means. He tends to take actions which naturally cause the free software he is "protecting" from involvement in non-free software to lose mindshare, either to non-free software or to free software not wrapped in his preferred restrictions. That is contradictory to his purpose: the software that wins is the software that isn't crippled to prevent it from contributing to non-free software, and that winning software is itself either non-free, or non-copyleft free software that can more readily contribute to non-free software, as well as being used by people who might build non-free software with it.
ESR disagrees with RMS on many issues. I don't think he dislikes him. "Ours has always been a more complex relationship than most people understand." http://esr.ibiblio.org/?p=5211
(ESR is well known for many things that aren't true.)
For starters, Apple's money is not the main driver of LLVM (In fact, publicly, Apple is not the #1 contributor anymore).
Second, "but merely the fact that compiler technology has advanced significantly in ways that GCC is not well positioned to exploit. " is simply false
In fact, that's exactly the problem for GCC: Compiler technology has not advanced roughly at all.
GCC caught up to everyone else for the same reason.
Time for a history lesson.
About 14 years ago, a group of folks including Diego Novillo, Jeff Law, Richard Henderson, Andrew MacLeod, me, and Sebastian Pop (along with bug fixes/changes from a lot of others) sat around and built a "middle end" for GCC.
Prior to that, GCC had a frontend, and a backend. The frontend was very high level (and had no real common AST between the frontends), the backend was very low level.
There was nothing in between.
We cherry-picked the state of the art in compilers and research, and built a production-quality IR and optimizer out of it.
This research has not really changed that much in about 10-15 years. Most of the research these days focuses not on straight compiler opts, but on things like serious loop transforms, and helping runtimes (GPU, GC, etc), or dynamic languages.
This covers only until the branch was merged. At that point, it was "not a piece of crap", but this was before people added all the stuff on top of this architecture.
On top of that architecture, it took another few years to get good, and a few years after that to get really good.
Bringing us to today.
LLVM was started around the same time, but had fewer contributors back then.
Essentially, you could view it as: instead of building something in between two really old parts, what could we do if we just redid it all? People thought it was a waste of time for the most part, but Chris Lattner persevered, found a bunch of crazy people to help him over the years, and here we are.
Because, you see, it turns out compiler technology has not really changed at all. So, algorithmically, LLVM and GCC implement the same optimization techniques in the middle of their compilers, because there is nothing better to do. Just slightly different engineering tradeoffs. To put it another way: outside of loop transforms, compilers for static languages targeting CPU architectures are essentially solved. We know how to do everything we want to do, and do it well. It just has to be implemented.
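For a concrete sense of what "the same techniques" means, both middle ends run the same textbook passes over code like this (illustrative snippet, mine, not from the thread):

    int classic(int x) {
        int a = x * 2;        // strength reduction: x << 1
        int b = a + a;        // reassociation / value numbering
        if (x < 0)
            return b;         // both arms return b, so simplification
        return b;             //   removes the branch entirely
    }

At -O2, GCC and Clang each reduce this to a couple of instructions; the interesting differences are engineering, not algorithms.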
So given enough time and effort, LLVM and GCC will produce code as good as each other's there. The question becomes "will they keep up with each other as engineering/tuning happens" and "who can generate great code faster".
The problem for GCC on this front is threefold:
1. The backend, despite being pretty heroic at this point, really needs a complete rewrite, but people value portability over fast code.
LLVM, having started completely from scratch, has a modern, usable backend. They are not afraid to throw stuff away.
2. For any given thing you can implement, it's a lot easier to do it in LLVM than in GCC (see the pass skeleton after this list), so, given time, LLVM will produce faster code, because it takes less work to make it do so than it does to make GCC do so.
3. Because it was architected differently and more modernly, clang/LLVM are significantly faster at compiling than GCC. GCC can remove most if not all of the middle end time (and does), but it's still slow in other places, and that's really really hard to fix without fundamental changes (See #1)
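On point 2, a hedged illustration: a complete (if era-specific) LLVM function pass is about this much code, using the legacy pass-manager API; the pass name "countblocks" is made up for the example.

    // Minimal LLVM analysis pass: counts basic blocks per function.
    #include "llvm/Pass.h"
    #include "llvm/IR/Function.h"
    #include "llvm/Support/raw_ostream.h"

    namespace {
    struct CountBlocks : public llvm::FunctionPass {
        static char ID;
        CountBlocks() : llvm::FunctionPass(ID) {}
        bool runOnFunction(llvm::Function& F) override {
            llvm::errs() << F.getName() << ": " << F.size() << " blocks\n";
            return false; // analysis only; the IR is not modified
        }
    };
    }
    char CountBlocks::ID = 0;
    static llvm::RegisterPass<CountBlocks>
        X("countblocks", "Count basic blocks per function");

Doing the equivalent in GCC roughly means wiring a pass into the global pass pipeline and working against internals that were never designed as a public API.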
There are still plenty of open problems in compilers. For instance, writing a program to effectively use all four of my CPU cores is pretty tedious. It would be awfully nice if my compiler could automatically parallelize operations, do effective register allocation across cores, distribute data for best use of L1 cache, etc.
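To illustrate what's being asked for (example mine, not the poster's): even an embarrassingly parallel loop like the one below still needs an explicit annotation today. GCC's -ftree-parallelize-loops=N can sometimes split such loops across threads unaided, but only in simple cases.

    #include <cstddef>

    // No cross-iteration dependences, so this could in principle be
    // distributed across cores automatically. In practice we annotate:
    void saxpy(float a, const float* x, float* y, std::size_t n) {
        #pragma omp parallel for          // build with -fopenmp
        for (long i = 0; i < (long)n; ++i)
            y[i] = a * x[i] + y[i];
    }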
Certainly researchers who are working on this sort of thing today are doing it in LLVM or some custom framework. I can't imagine GCC has any significant traction at least.
The only open problem here is the one I stated: "serious loop transforms". Parallelization is not even "hard"; it's just hard for languages like C++. Fortran compilers have been parallelizing for 20+ years.
For any interesting type of vectorization we want to do, the problem is not "can we figure out what to vectorize" it's "how long do we want to spend vectorizing code" :)
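For reference, the vectorizer's easy case looks like this (illustration mine); at -O3 both GCC and Clang turn it into SIMD code, and the cost question above is about how much compile time to spend proving the same safety on messier loops.

    // Independent iterations plus restrict-qualified pointers make the
    // "can we vectorize?" question trivial; messier loops make it expensive.
    void add_arrays(float* __restrict a, const float* __restrict b, int n) {
        for (int i = 0; i < n; ++i)
            a[i] += b[i];
    }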
Making your own tool suck/break/be difficult when license-compatible patches are submitted to improve it seems counterproductive for everyone.
I tend to agree with RMS, at least on his philosophy of free software. That being said, I don't think adding LLDB support to emacs is a bad thing, and I don't think the rise of clang/LLVM will be a big hit to free software.
As long as there are people who care about freedom, people will maintain GCC. Even if the worst case happens, and there is extreme fragmentation of proprietary patch-sets to LLVM, we can always still use GCC, or even still use the free parts of LLVM/clang.
While there is some issue with fragmentation of the developer community, I see this as a non-issue. These things generally work themselves out through the natural ebb and flow of chaotic systems (just like the economy is largely self-regulating).
We can always still use GCC. I don't really see where the issue is, am I missing something?
Isn't RMS' main issue with LLVM that people can create proprietary plugins and those plugins benefit when LLVM benefits? How exactly is this different than if a company were to make proprietary extensions to Emacs and sell them? Does the Emacs license forbid non-free extensions? Ultimately, it's not LLVM's problem if people create non-free extensions. You could argue all of GNU/Linux is bad because some proprietary software runs on it. I don't think that's a route we want to take.
Has anyone considered creating a new libre compiler suite in Rust? That would be pretty cool. I have wanted to do this, but I have so many other projects to work on.
> If the clang/LLVM people decide they want to eat GCC's lunch, they will do it
They have about 5-10 architectures and about 40-50 architecture variants to catch up to GCC. It's doable, but it will take about the time it took GCC to get there, and the result will be that LLVM becomes the kind of unmaintainable mess that GCC is considered to be now.
One would assume that the author of "The Cathedral and the Bazaar" would know a bit or two about the lifecycle of open source software.
Not all of those architectures and variants are important going forward (I'd actually be very interested in lists of them).
GCC was deliberately designed, and has been thoroughly maintained, to be less maintainable (that being one of the major points of these recent prominent debates).
C++ is more maintainable than C. (I don't know that I buy this at all, in fact, I'm about to dive into LLVM's source code to see if it could possibly prompt me to revoke my oath to never program in C++ and Perl again unless absolutely necessary :-).
The last two, plus LLVM's Bazaar model of development, in part enabled by those technological differences, mean it won't become an unmaintainable mess if and when it grows out like that.
I have no idea if this will be true. I'd like to hear from seasoned developers who are also seriously familiar with the LLVM architecture, development model and code base (per the above, I rather hope I won't become one of the latter).
GCC is a technically inferior compiler that has been deliberately neutered to enforce its license. When software has technical limitations to enforce its license that is DRM, ergo GCC is defective by design.