GCC is moving to git (gcc.gnu.org)
295 points by ingve on Aug 21, 2015 | 133 comments


Vim is moving to git, GCC is moving to git. What's with all the change?


> Vim is moving to git, GCC is moving to git. What's with all the change?

Git does something very well: it has great support for rewriting version histories. It nearly flawlessly supports almost every conceivable scenario of slicing, dicing, mixing and combining.

It also has great support for working at the sub-file granularity. For instance, this morning I moved a fix from an experimental branch to stable. The fix was part of a huge commit, and consisted of just a few "hunks" out of many in one file. It was obvious that the particular fix, though needed by the experimental changes, could be promoted to stable (it was a change of the kind where some code is altered in such a way that it no longer interacts with the experimental changes, and is improved at the same time).

With git I easily rewrote that big commit so that just those specific changes in that specific file were removed from it. This was done without touching anything in the working copy, not so much as altering a modification timestamp! ("make" didn't even see that anything needed to be rebuilt after the commit rewrite.) In git we can say "keep my working copy as it is, but alter the history behind it, so those alterations then appear as staged, uncommitted changes". Then we can easily migrate the changes somewhere else, like turning them into their own commit, perhaps on another branch. After I migrated those changes to master, I went back to that branch and did a "git rebase". Lo and behold, the branch picked up the changes, and now it looks as if those changes were written on master all along, before the experimental changes were developed.
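
For anyone who hasn't done this kind of surgery, here's a rough sketch of one way it can look. Branch and file names are made up, and this isn't necessarily the exact sequence described above; the "split a commit" part is the recipe from the git-rebase documentation:

  git checkout experimental
  git rebase -i stable                 # mark the big commit as "edit"
  git reset HEAD^                      # un-commit it; the working tree stays untouched
  git add -p driver.c                  # stage only the hunks that make up the fix
  git commit -m "Fix foo handling"
  git commit -a -c ORIG_HEAD           # re-commit the rest, reusing the original message
  git rebase --continue
  git checkout stable
  git cherry-pick <sha-of-the-fix>     # promote the fix to stable
  git checkout experimental
  git rebase stable                    # the now-duplicate fix commit is dropped automatically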

Programmers need this kind of thing because we are obsessed about making the perfect set of changes. Since the history of a code base is a digital artifact, we want to be able to hack that artifact to perfection too, not just the instantaneous/current view of the code itself.

Git is also like an adventure game in which you solve quests and acquire higher and higher status. This creates allure. The Novice Adventurer cannot understand all the reasons why the Grandmaster Wizards of GCC have switched to Git, but is mystified and fascinated.


Maybe this is a not-so-frequent thing, but I know it's not just me: As I see it, the big advantage of Git over other attempts at DVCS is that its internal model of the world is very simple and just right.

Early on when Git was still new, I read about the different types of objects (files, trees, commit objects) and how they interact, and everything just clicked. It's the correct model for version control, and I like to believe that that's why it's winning.

In fact, all your praise for Git is something that, as far as I see it, flows directly from Git having the correct model for version control.


I don't think git's model is particularly special in any way - if it were, it wouldn't be possible to make (near) transparent proxies to the other DVCS's.

The fundamental concepts in all the DVCS's are almost indistinguishable; they're just packaged slightly differently.

It's tempting to seek a technological explanation for git's dominance; but all the obvious signs point to it simply being first-mover advantage. Where the other DVCS's do have technical advantages, they're rarely very significant for most people.


Funny you should say that: Git's fast-import format was adopted by most DVCSes for easy transfer of commits from a foreign repository. Transparent-ish proxies do exist, like git-remote-hg. Those others have different models for history (Bazaar uses folder-type branches, Hg's branching is also different), but Git's internal model is a common subset.
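
To make that concrete, a minimal sketch of the stream format in action, git to git (repository names are made up; other DVCSes consume a similar stream through their own conversion tools):

  # dump an entire repository's history as one fast-import stream
  cd original-repo
  git fast-export --all > ../history.fi

  # replay that stream into a fresh repository
  cd .. && git init copy && cd copy
  git fast-import < ../history.fi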

But Git was not a first mover. That simple model was pioneered by Monotone. Git won because Linus' use case required high performance. DVCS flows from that (no waiting on the network), as does the index, and it just made for a huge difference in user experience.


I'd forgotten about monotone - my focus was on today's choices, and monotone isn't exactly alive nowadays ;-).

git and hg share the same model for history: a DAG of commits. Sure, there are differences, but they're rather superficial. One such difference is in how you, as a human, categorize commits in that DAG - what git calls a branch isn't what hg calls a branch. But so what?

In day-to-day use I can't imagine that the vast majority of git users would really notice a difference if they used hg instead; and I suspect the same holds for bzr too. There's no meaningful performance difference. But the point is that the reverse is mostly true too - why not choose git if it's good enough?

And then git's real advantage shines through - it's by far the most common, and that lets you cooperate with a large number of other developers simply because lots of people know git. Oh, and github is popular too and git-only. Then there's the fact that bzr seems dead, as does monotone. The advantage is largely social - though if the current pace of development continues it may well become technical.


Git definitely wasn't a first-mover. Being used by the Linux kernel probably helped, though. (Though the Linux kernel actually used a different DVCS before Git...)


Rewriting public branch history is not generally used by any major project except in extreme circumstances. This reads like an attempt at satire by someone who doesn't understand the actual utility of history rewriting in git, which is generally for extending version control to your development changes (e.g. what is more formally codified in Mercurial as the draft phase).


Re-read the post. At no point did the author claim he rewrote public history. He had a certain changeset in the experimental branch, an assumed private branch. He then broke that changeset into two, merged the ready-for-production code into the stable branch and then rewrote history of the experimental branch so that it sat on top of the updated stable branch. As simple as that.


In fact, I'm one "hop" away from public. Upstream from me is a staging repo, and public is the one upstream from that. The staging repo is needed so I can bounce changes among platforms. Its branches get rewritten regularly.


Which you can do in Mercurial, Bazaar, or Darcs, too. It's not really something Git-specific.


I've never managed to do this in mercurial. It's always a huge pain point.

Is there some extension that helps?


All history editing in Mercurial needs to be enabled via extensions (so that you don't shoot yourself in the foot by accident). That said, the term extension is a bit of a misnomer in this context, since most "extensions" that are being used for this are parts of core Mercurial and are simply enabled by an option (the one prominent exception I can think of is "evolve", since it's unfinished).
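
Enabling them is just a matter of listing them in your configuration, e.g. in ~/.hgrc (this is the set the recipes below rely on):

  [extensions]
  histedit =
  rebase =
  shelve =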

Here's how you do it:

  hg histedit -r .
  # change "pick" to "edit" in the edit plan.
  hg commit -i              # interactively commit just the hunks you want split out
  hg commit                 # commit the rest
  hg histedit --continue
Alternatively (and preferably, because you can then test intermediate commits):

  hg histedit -r .          # mark the commit as "edit" in the plan
  hg shelve -i              # interactively shelve the hunks for the second commit
  # test that it works
  hg commit                 # commit the first part
  hg unshelve               # bring the shelved hunks back
  hg commit                 # commit the second part
  hg histedit --continue
If you don't want to affect the files in your current checkout, use hg share to create a separate checkout.

That splits the commit in two; use hg graft or hg rebase to move the commits to different branches.
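
For example (revision numbers and branch names are placeholders; rebase, like histedit, ships with Mercurial but has to be enabled):

  hg update stable
  hg graft -r 1234              # copy the split-out fix onto stable
  hg rebase -s 1235 -d stable   # put the remaining work on top of it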

> It's always a huge pain point.

Not sure why; Mercurial's commands for history editing are conceptually pretty similar to Git's. I suspect that it's mostly a difference in familiarity with the tools.


I'm sure part of it is familiarity, but also a difference in documentation. Git's documentation makes it EXPLICITLY clear that one thing Git is intended to be awesome at is easy branching and merging, and rebasing. When I worked on a Mercurial codebase, we pretty much didn't do feature branches (or the hg equivalent), and therefore "rewrite history" was considered not just heresy, but likely to Break Everything.

It's not that it's impossible, but that it was not clearly described how to do it right, in contrast to the Git documentation. Now that I've been using Git for nearly three years, and have used it to save my bacon, I'm sure I could find some way to do similar with Mercurial ... but only after having learned it with git.


> I've never managed to do this in mercurial.

`hg record` might be somewhat more cumbersome than `git add -p`. But `hg rebase` should be about as nice as `git rebase`?


hg crecord is awesome for fine grained patch splitting.


> It's always a huge pain point.

And that is a nonstarter, once Git has opened your eyes to light-weight rewriting, done casually and frequently.


I think I was Poe's Law'd by his last paragraph.


Even public history can be rewritten, and it's not too bad to deal with. For several months, earlier this year, I was battling some tough bugs in a kernel USB driver, along with a few other developers. I set up a "wild repository" (not an official repository, but one for experimental changes---though public in the sense of being shared). I rewrote the testing branch regularly. I even adopted a scheme, whereby I would rename a branch through the names "name.1" and "name.2" (deleting the old name.2), similarly to rotating logs. I'd send out a "heads up" e-mail whenever I did this.

A rewritten branch isn't hard to pick up. Git tells you that the history has diverged (N commits here, M over there). If none of the local history is yours, you can just throw it away with a "git reset --hard <origin>/<branch>". Or else you can rebase those changes which are yours.
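
Roughly, with made-up remote and branch names:

  git fetch origin
  # none of the local commits are mine: just adopt the rewritten branch
  git reset --hard origin/testing

  # or, keep my own commits and replay them onto the rewritten history
  git rebase --onto origin/testing <old-tip-of-origin/testing> my-work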

It's only bad to rewrite "really public" branches that are distributed to the world. Even then, it can be okay to clean up some oops: like seconds after pushing a release, some last minute change needs to be made (typo in the release notes file or whatever).


In the case of the GNU projects, it was pointed out that it was futile to continue maintaining GNU Bazaar when there were at least two more widely used DVCSes (git and hg).


git has more or less completely taken over the mindshare from all other version control systems.


I think sometimes we forget that network effects outweigh most normal values of technical superiority. Let's say, for the purposes of the argument, that "hg/Mercurial is 20% better than git".

This doesn't matter: hg/Mercurial has to be 500% better than git to be able to replace it.

Look at the past changes: they've all been huge shifts in how we do version control, not minor improvements.

The DVCSs were 500% better than Subversion (wow, distributed!)

Subversion was 500% better than CVS (wow, renames work! Branching is easy! http support!)

CVS was 500% better than RCS (wow, we can check out files concurrently)


You are right. But it is often referred to as the 10x rule.

http://www.minidisc.org/econ113-paper.htm

A technology needs to be 10x better than a previous one for widespread adoption. It has successfully explained many failed technologies and products.


> A technology needs to be 10x better than a previous one for widespread adoption. It has successfully explained many failed technologies and products.

A hypothesis about failure, on the other hand, has to account for the failures and successes. This one doesn't. While it can plausibly explain failed technologies and products, it doesn't explain the historic success of junk that is worse than its predecessors.

A technology can be hyped into widespread adoption. It can be dumped on the market for widespread adoption. It can ride on top of cheap hardware for widespread adoption. It can be bundled with something else, achieving widespread adoption. A technology can be falsely evaluated as 10X better by a large number of complete idiots, resulting in widespread adoption among idiots, resulting in pressure for non-idiots to adopt.

All these effects can overcome the barrier of having to be actually, objectively 10X better. The "10X better adoption barrier" is only faced by honestly promoted technologies whose campaign consists of "try this because it's better for these objective reasons".


"Better" can mean a lot of things including "cheaper", "easier to use", "better marketed", etc. Explains why sometimes "worse is better", I think :)


Yes; "worse is better" rings true when the "worse" on the left-hand side is the antonym of a different sense of "better" than the one meant on the right-hand side; i.e. an equivocation on the word is going on (not to deceive, but to create a play on words).


That's not really a "network effect", but more of an "activation potential barrier". If it cost almost nothing to migrate to a new system, then 5% better might be a good enough reason. If I'm going to do back-end setup, data migration and re-train myself and others, then, damn it, I demand a 500% improvement to show for it. :)


It's a network effect.

If you're going to attract new contributors to a system, having the toolset that everyone else uses is a huge win.

I've been at this long enough to have learned (somewhat) each of RCS, CVS, SVN, hg, and git. In the beginning there were (mostly) just two: RCS and SCCS, and nobody used SCCS. With time, I've touched several others and had to wrap my head around them, as additional challenges to getting my primary job done, which generally involved solving problems in other domains (or learning other tools).

And ... some of that's knowledge which has stuck, some isn't. The time invested in learning is sunk cost, though there's the problem that it all leaves traces on your brain.

This is something that 20-somethings don't get: it's not learning things so much as unlearning. It's one thing to launch into a field with the hot new technology under your belt. I landed with some awareness of Unix when that was fairly unusual, and parlayed that and a set of tools for 25 years. During which I've not stood still...

... but the rate at which new tools are introduced, and having to sort out which actually will win is a considerable overhead. After a while it ... just gets kind of old. Especially since you realize that all of it is a constructed complexity, largely arbitrary.

How many operating systems (7, not counting Unix variants), shells (9), editors (10+), scripting languages (8), database implementations (7), firewall systems (4), webservers / application engines (6), mailservers (6), document markup languages (6+), etc., etc., etc., do you want to learn over your lifetime? Some are pretty cool. Others ... are just makework.

This spills over to larger highly integrated stuff which previous experience shows is almost always going to be a massive clusterfuck, of which I'll not mention systemd by name.

Which means you want to consider what projects you work on, where you work, etc. Google have a challenge in this regard with their NIH model -- work at Google long enough and you'll learn some really cool, advanced technology. Which nobody, anywhere else, uses. And, no, they're not the only ones. But they definitely come to mind.


The unlearning point is definitely true. After using CVS for many years, Arch and then git were conceptually difficult to get into, and I was an early adopter of both. Not because they are actually difficult, but because your years of understanding of "how version control works" simply doesn't map onto the new way of doing things.

The last few weeks, after using git for a decade now I guess, I had to use CVS to check out and contribute some changes to a project still using CVS. Wow, was that painful! One does get used to the simplicity and features of git, and going backward to older tools makes one really appreciate what they give you.


How I share your thoughts.

Been in the industry since the late 80's. Eventually one gets tired of listening to every single fad.


> ... but the rate at which new tools are introduced, and having to sort out which actually will win is a considerable overhead. After a while it ... just gets kind of old. Especially since you realize that all of it is a constructed complexity, largely arbitrary.

One of the most important parts of my job as a sysadmin is the immediate sniff test on trendy technology of the week and being able to spot the obvious pain points early. Bitter experience, it's useful!


Quite. Though it's less the sniff tests and more the banishment from workplaces / sites / tools I use that get to be draining. Again: systemd. Which risks fucking over an entire massive (and otherwise highly useful) resource.


The systemd battle's a lost cause I think. Pity, 'cos upstart was a reasonable implementation of the concept and vastly superior to init shell scripts. At least it's not Windows Server.

(TODO: pointy-clicky interface to systemd)



Could have sworn I have already seen an early version of such an interface for either KDE or Gnome...


Excellent! (cough) Now they just need to make it mandatory in GnomeOS ...


And why MS products stick around (and people bellyache when changes are made to the UI).


I've become exceptionally leery of GUIs for anything remotely complex. Scripting provides the kick-in-the-ass incentive not to arbitrarily and utterly bollox interactions, since countless millions of scripts would have to be rewritten. You end up with stuff like Sendmail-compatible commands and switches for mailers long after the former's heyday (qmail being a notable exception). Or SSH and rsh being largely compatible.

On the rare cases syntax does change people raise holy hell. It can kill tools. Especially if done more than once.

GUIs have none of these features and change capriciously. This tends to keep user proficiency at a low general level over time.

Web apps only extend this.


I use hg as a front end to git, so I get the advantages of git's mindshare without having to deal with git's UI. Now I get the best of both worlds.


If I ever am forced to use git I'm hoping I can do this. Does it work well? I'm being forced to use svn right now at work and hgsubversion is a little rough around the edges.


Really well. The one stumbling block is that git branches are mapped on to hg bookmarks, and it's not quite seamless --- I haven't figured out all the subtleties yet, but it doesn't seem to get bookmark deletion right, for example (the bookmark will reappear the next time you do an hg pull).
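
For reference, a minimal sketch of the setup, assuming the hg-git extension is installed (the repository URL is made up):

  # ~/.hgrc
  [extensions]
  hggit =

  # git branches come across as bookmarks, and bookmarks push back as branches
  hg clone git+https://github.com/example/project.git
  hg bookmarks
  hg push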

I use github as a backend, so I tend to do branch manipulation using the github UI, so this isn't an issue for me.

Everything else Just Works. Highly recommended.


(wow, distributed!)

With the ultimate irony that people largely use it in a centralized fashion.


Distributed doesn't mean decentralized; they are not synonyms.

DNS is distributed, yet it has "root" servers.

Even if there is a master repository in a given git scenario, it is still distributed, because developers have their own repositories, which are independent and contain replicas of the upstream objects.


The point is that at least for a VCS, centralization does largely subvert the distributed aspect. At that point, you no longer really have multiple repositories so much as multiple working copies, similar to a CVCS. Your work model revolves around a blatant SPOF which you must synchronize all changes to.

The Linux kernel is a proper example of distributed workflow.


According to this text by Joel Spolsky, in the SVN days developers spent days and days massaging code but not committing it, because committing meant pushing to the central repository, visible to everyone. With git/hg, developers can use version control in their incremental daily work too, and finally merge or rebase or squash or whatever, to massage their work into a form that is good for pushing to the master.

http://hginit.com/00.html


That depends on what you want out of the "distributed" aspect. If all you care about is offline access to the repo, and like the fact that everyone's checkout is effectively a backup of what's on the central server, then centralization doesn't really take away from the distributed nature.

Even if you consider the SPOF-y nature of something like GitHub, more savvy git users will realize that if GitHub was down for an extended period of time, they could push their repo somewhere else that's public, like Bitbucket, or their own server, and keep working and allowing other people to collaborate. And for shorter downtime of the central server (where you might feel like setting up an alternate is too much effort), everyone can still get work done, they just can't collaborate as effectively until things are restored.
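
e.g. something as simple as this, with a made-up remote name and URL:

  # stand up a temporary "central" copy elsewhere and keep collaborating
  git remote add fallback git@bitbucket.org:example/project.git
  git push fallback --all     # every branch
  git push fallback --tags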


That works for the repo, but not issues, pull requests, wiki pages, permissions, or any other services GitHub provides. None of those work offline, and while there is an API that can help with migration, doing so still requires setting up a totally different system, and there is no natural backup on every developer's machine like with the repo - you have to do it manually while the origin is still up. (Not that GitHub is likely to disappear anytime soon, but all things will end someday.) Which is unfortunate because it seems like a missed opportunity: it's certainly possible to implement all those things in a decentralized manner, and some projects have tried, but so far with little success...


A central origin sure. It is nice knowing I can work offline which is my general workflow. I can also experiment with local branches and not push them to a remote.


Do people work on the same files/folders at the same time? Then the workflow is already way more distributed than svn ever was :)


We did that in svn all the time with few issues.


Might changeset evolution [1] (e.g. safe collaborative rewriting of public history) be that 500% bonus?

[1] https://mercurial.selenic.com/wiki/ChangesetEvolution


Changeset evolution is conceptually brilliant [1], but I suspect that it's really something that only a few percent of VCS users will actually use (at least beyond the level of features that Mercurial and Git already support).

[1] ... in that it provides not just safe mutable history, but shared mutable history.


It's like the coefficient of friction.

The amount of force required to start moving an object at rest is greater than the amount of force required to keep it moving (or something like that).


> Branching is easy

when svn first came out (and up until a few years ago) branching was definitely not easy!


What changed a few years ago?


Subversion 1.5 added support for tracking merges: https://subversion.apache.org/docs/release-notes/1.5.html#me...

Before this, merging branches was a huge pain, because you had to manually keep track of the commits that needed to be merged.
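
Roughly, the difference looks like this (URLs and revision numbers are made up):

  # Subversion 1.5+: mergeinfo remembers what has already been merged
  cd branch-working-copy
  svn merge http://svn.example.org/repo/trunk   # pulls in only the not-yet-merged revisions
  svn commit -m "Sync branch with trunk"

  # before 1.5 you had to track the revision range yourself, e.g.:
  svn merge -r 1234:1300 http://svn.example.org/repo/trunk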


Bazaar is dead, it's no longer maintained and hasn't had a release in 3 years.


So are you saying it doesn't work? What issues are you having with Bazaar that require a new release?

I couldn't care less if Git stopped making releases; what I'm using isn't the latest anyway and does everything I want. (Though I'd like any newly discovered security holes to be fixed in any pieces of Git that face the Internet; e.g. via embedding in CGIT).


Google Code shutdown?


GCC didn't use Google Code. Like all GNU projects, they're hosted on Savannah, and that's not changing.


But Vim did, didn't it? Google Code shutting down may still have contributed to the sea change.


Eh, so many projects have been moving to DVCS for the last few years. Google Code shutting down is more recent than the trend towards DVCS.


I think this is coincidental timing more than anything.


irssi moved to git (github even) to encourage contributions.


LMMS http://lmms.io was on svn on Sourceforge. It was more or less moribund. Moved to git on github, immediate influx of contributors and a buggy 0.4.13 was advanced to a really very nice 1.0 in about six months :-)

Unless your project is weighty enough to have its own gravitational pull, github is a ridiculously easy place to contribute and make life easy for contributors.


Will they keep the repo in GitHub? :)


GNU projects don't use GitHub.


I don't think any of the "big" free-software projects do, though some do have mirrors on GitHub. The Linux kernel, the FreeBSD project, LLVM, Firefox, etc. all host their own version-control infrastructure (which makes sense to me).


All the projects you mention predate github. A lot of it has to do with the fact that infrastructure for these projects was put in place before Github was a thing.


FreeBSD now does merges from github so they can accept pull requests.


As in "haven't" or are barred by FSF or other policy?


They haven't, and they shouldn't want to. Relying upon GitHub, a centralized, proprietary service, would be working against the mission of GNU.


It's more the fact that Github keeps their own code (server and client) as proprietary and so the service is not a free software offering.


As github is a closed source proprietary service, it's a bad fit for FSF.


Complementary answer from r/emacs (courtesy of parent)

https://www.reddit.com/r/emacs/comments/3hsab9/could_emacs_b...


GNU Radio is on Github. Although it's a mirror, it's up to date and devs use it for the majority of pull requests. https://github.com/gnuradio/gnuradio


Lots of GNU projects (and projects from SF, etc.) are mirrored on GitHub.

None of the GNU projects (that I've ever seen) are officially hosted on GitHub.


Probably only as a mirror.


Case in point. Emacs moved from bzr to git, and they did it like that.

Official Git repo: https://savannah.gnu.org/git/?group=emacs

Unofficial github mirror repo: https://github.com/emacs-mirror/emacs


I don't think they set up the mirror, I think github just mirrored their repo.


GCC already has a GitHub mirror:

https://github.com/gcc-mirror/gcc

It seems out of sync though.


Worth pointing out though, it's far easier to keep a github mirror of a git repository up to date and correct than from other VCSes -- so even though people wanting to send GCC PRs on GitHub will be disappointed, it's still a win for discovery and searching.
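
For a git upstream, keeping such a mirror current is little more than a cron job; a rough sketch with made-up URLs:

  # one-time setup
  git clone --mirror https://example.org/upstream/project.git
  cd project.git
  git remote add github git@github.com:example/project-mirror.git

  # periodically (e.g. from cron)
  git fetch -p origin
  git push --mirror github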


It disturbs me that you're getting downvoted. Somehow we eschew extremism in every other facet of life, but when it comes to free software it's so expected that asking a question (even tongue in cheek) gets a reaction where someone thinks you're not contributing to the conversation.

I have more goodwill towards github than I do for RMS, with his gcc shenanigans.


From my experience, it's the opposite. When I try to promote free software, I face backlash from people who see no ethical concerns with proprietary software, or who are apologetic to "open source" instead. Usually it's in the form of some smartass who will redefine the word "freedom" to mean something different from the context at hand, which refers to the Four Freedoms comprising the Free Software Definition.

Either way, GitHub is proprietary, so it is against GNU's principles.


That's the problem with all this stuff seeming black and white, "who see no ethical concerns with proprietary software" – isn't that quite a blanket statement? some proprietary software or all proprietary software? What's the blanket ethical concern?

Just like I don't need to see the exact farm where my tomatoes came from, or which mine the metal came from for my car engine, I don't need to see every single line of code for my Netflix player.

Seriously, where's the ethical concern there?

Where do you draw the line from "I don't trust anyone, I need to verify everything", to "I trust them enough to not worry too much", to "I don't care"?


You're only proving my point. The only thing you can think of is access to the source code. That has never been the main concern of free software. It's a precondition: you need the preferred form of the work in order to properly exercise freedoms 1 and 3.

Not that your analogy is even correct. An end product like metal would be more like a circuit, not amenable to modification in and of itself. The Netflix player is general-purpose software running on your property which you are forbidden from studying, using for any purpose, modifying or redistributing. The ethical concerns are enormous, and no metaphor to physical objects is applicable when the stakes are so different.


No, there are no ethical concerns whatsoever here. If you don't want software you can't tinker with, then don't use Netflix. If Netflix were required by every citizen, then there would be ethical concerns. You are NOT required to use any proprietary software. This is the exact extremism that I'm talking about, where you feel I should have a problem with this even though my opinion doesn't affect your choices at all. It's the same logic as religion, who feels I shouldn't be allowed to marry someone of the same sex even though it doesn't affect them in any way. It's extremism, and it's quite frankly ignorant.


And again. You're talking about tinkering. What about the freedom to run for any purpose? What about the fact that one is renting a proprietary service with a de facto universal backdoor and spyware capabilities? How is that not an ethical concern?

Voluntary association does tend to work in this case, but it doesn't change the fact that a party is engaging in unethical behavior. Furthermore, the ultimate end result of "Don't like it, don't use it" is the erosion of choice.

You are NOT required to use any proprietary software.

Oh, hell yes, you are. Government agencies, who have a duty to serve the public, run proprietary software all the time. Meaning they are deprived of computational sovereignty and cannot guarantee they are performing their duties unfettered. If you've been to public school in most countries, you are taught to use proprietary software in IT classes. An institution devoted to learning compels students to use software that explicitly forbids being learned from and studied. Even more harmful are proprietary formats, which are like a lock and time bomb on your data, yet are ubiquitous.

Network services are a big ethical dilemma of their own and yet practically unavoidable.

Software is essential to contemporary civilization, and you will be using lots of proprietary programs. You cannot reasonably expect people to become hermits over it.

It's the same logic as religion, who feels I shouldn't be allowed to marry someone of the same sex even though it doesn't affect them in any way.

What a load of nonsense. It is the proprietary software vendor who deprives users of freedom. Your analogy is backwards and completely inappropriate. Free software mandates you have the freedom to run the program for any purpose, so gay marriage would be unrestricted. Proprietary vendors would have terms that may or may not restrict it.


I applaud you taking the time to respond carefully like this. It's always nice to be reminded of the simple, yet powerful foundation at the heart of Free software: the four freedoms:

The freedom to run the program as you wish, for any purpose (freedom 0).

The freedom to study how the program works, and change it so it does your computing as you wish (freedom 1). Access to the source code is a precondition for this.

The freedom to redistribute copies so you can help your neighbor (freedom 2).

The freedom to distribute copies of your modified versions to others (freedom 3). By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this.

Note that "access to the source code" is not one of the four freedoms -- it's merely a precondition to be able to satisfy them.

ed: It might be interesting to note that most of these freedoms exist for physical objects: you are generally allowed to fix broken physical things, even those that are under patent protection. Or to make sure stuff fits where you need it. And you generally can sell/give away stuff you're not using. These rights are actually protected by law.

Free software could be viewed as a rather conservative measure towards ensuring that we get these "everyday" rights wrt software as well.


Actually, it's exactly the first freedom that's missing from physical objects. You are free to analyze Coke with a microscope, but you won't get the recipe. You are free to 3D-scan a fork and analyze its alloy, but you won't be given the original CAD project and the production plans for the material.

I'm not sure if you were implying that the 4F are replicating what happens with physical objects; I would disagree with this. The 4F expect from software much more than we expect from any other good or service we buy/rent/consume.


Ah, but there are multiple levels: You can take a chair apart, or split your pants along the seams -- and if you can find a material with comparable strength, or a fabric you like -- you are free to make as many "copies" as you want.

You are free to attach a metal head to a shaft and call it a hammer, and look at other hammers for inspiration, if your hammer doesn't work quite right.

You are of course right that it only goes so far -- and this is perhaps most scary in the field of bioengineering -- with the possibility of a) sterile produce (you wouldn't be able to plant a potato any more), and b) patented/"copyrighted" gene sequences -- making it illegal to plant certain "proprietary" produce without a license grant.

I do agree that the analogy is flawed -- is the shape of a fork more like the api or the source code? I suppose the main difference is that many useful physical things are trivial conceptually (once you've had/seen the idea).


I'm talking about tinkering because that was the example I was responding to. I think your lack of logic in this thread is unethical. Saying so doesn't make it true though, does it? Actually, nothing you said is logical. You don't have to take a Windows programming course; you can choose from the many *nix-based courses. Those are choices, along with my choice to use proprietary or open licences. You want to remove my ability to make that choice. FOSS should exist, and so should proprietary. Anyone dictating that only one or the other should exist is an extremist.

Your arguments are framed as if to make them valid, but they are still fallacies. The MIT licence exists for a reason, and that's the reason it's the most commonly used. It's the non-extremist licence.

Anyway, I'm done. Arguing with extremists is a fool's errand. I believe there is a place for all models.


Insisting people pass on the freedoms that they benefited from to other users, is not extremism. You have a chip on your shoulder.

You say that copyleft removes your freedom to choose to distribute a program as proprietary software. This is correct, but you portray it highly disingenuously, like it's an equally weighted choice.

Generally, freedom has the most influence when its recipients are those at a lower position of power. The power of the software vendor is to have full control over the usage of the program, ostensibly either to provide a better UX (which is dubious), to exercise their IP (an ethical landmine), to benefit from unsustainable per-unit business models that ignore software's intrinsically anti-rivalrous, non-scarce and intangible nature, or other reasons.

Let us see how the user is affected. And keep in mind, all programmers are users. Programmers will be using other people's software at dramatically higher quantities than they will ever write anything individually. As such, user freedom is of paramount importance to them.

Now, for a certain amount of convenience on the part of the vendor, the user is put in a position where they cannot a) run the program for all purposes that they may desire to use it for, b) study and change it to adjust it to their needs, c) share it with friends, neighbors and acquaintances and d) enrich their community by sharing modified copies. In other words, they get software without most of the benefits that actually come with having things be software in the first place.

As such, the users lose individual and collective control for the benefit of the vendor, but at a deadweight loss for everyone else. Because, when the software leaves the vendor into another person's machine, how it's used is in no natural sense of any interest to them. Not anymore than a chef is interested in how other people use their recipe. This has been accepted as the first-sale doctrine by courts across the world.

I therefore conclude that user freedom should be given at all times where the software is user-facing.


MIT

It solves your problems, and mine. Middle ground. Compromise.

What is wrong with MIT?


You can take something like LLVM, add a backend for a new processor or a new programming language and never release it back to the LLVM developers.

Thus they get the work invested into LLVM for free, can sell the modified version, and the LLVM developers don't see a dime from it.


You mean to say that the original developer who chose to release via MIT, gets exactly what they agreed to? God forbid, you're right, it's a whole moral catastrophe.


Many choose a given license without understanding its full consequences.


This is textbook nanny state drivel. You don't need to protect people from themselves. There are thousands of articles online, people can do the research. You aren't publishing anything without access to the internet, and the internet has more than enough articles and even wizards to help you choose.


I guess those posts I see about someone "stealing" their code are a mirage then.


No, those are absolutely real. It's not up to you to protect or stop it. It's a life lesson. It's the same as copyright. Every kid starts some project using someone else's IP. We see it daily. Then someone explains that you can't use Homer Simpson in your app. You don't abolish copyright because some kid gets a C&D.



Here's a link to one person's answer: http://www.gnu.org/philosophy/why-free.en.html

The person to whom you replied seems to quite literally be talking about "people who see no ethical concerns with proprietary software" as such. Some people do.

You want to know why they do? There are many resources available online that you can use to read up on why they do. If after reading them you still don't need to see every line of code on your Netflix player, you will still be disingenuous if you pose "problems with proprietary software as such" as some kind of ending of a reductio ad absurdum. Disagreement is fine; you don't need to be confused by it.


I see an ethical problem with people who would sue a fellow human being because they cared to share a piece of unmodified software which was useful.

I also find people unethical who would go out and punish, or actively prevent, a fellow human being who seeks to understand software which is currently running on a machine that they own.

I also admire people who fix broken software on their own machines and then go on to share the improvements. I find ethical concerns when people try to prevent those improvements from being run or being shared.

Proprietary software which isn't covered by the above concerns is unlikely to be an ethical concern to me, but then what is the definition of proprietary software?


I keep trying to provoke a reasoned argument with a pro-FSF person, this seems like as good of an opportunity as any. You may want to move this to email (my email is in my profile).

> I face backlash from people who see no ethical concerns with proprietary software

I think this could be clearer. RMS himself has stated [0] that selling exceptions is perfectly fine, so even free software zealots do not object to proprietary licensing in all of its forms. I think most people object to proprietary licensing in some forms. This is a case where speaking in riddles and platitudes is not effective policy.

> or who are apologetic to "open source" instead

Again this is a platitude turn of phrase. I support some kinds of open source and not others, as do I expect most thinking people. Does that make me "apologetic"? Am I the problem? No clue.

> Usually it's in the form of some smartass who will redefine the word "freedom" to mean something different from the context at hand

Quite frankly, one could accuse the entire Free Software movement of this, as that is essentially its purpose: to redefine the loaded term "freedom" in a nonintuitive way.

But accepting for the sake of argument the FSF's vision of software "freedom" as the correct definition, I am not convinced it is an actual view instead of a post-hoc rationalization. For example, the FSF claims [1] the RPL is non-free for 3 reasons:

1. It puts limits on prices charged for an initial copy.

2. It requires notification of the original developer for publication of a modified version.

3. It requires publication of any modified version that an organization uses, even privately.

Now my question is this: which of the four freedoms, specifically, give rise to these objections? Because from my POV the RPL not only protects the four freedoms, but does so strongly, for exactly the reasons here that are presented as so-called "objections".

So, the only conclusion that I can draw, is that the 4Fs are a lot of nonsense, a "just-so" story to explain why GPL License is Best License. Freedom is allowed to go so far, but no further, until GPL v4, and then we can go further.

I am sorry if some of this seems trollish or offputting. I promise it is not. I have read probably hundreds of books and essays on this and I cannot find anything that satisfies me that "Free Software" is an intellectually defensible philosophy. This may be because the people who are best-positioned to defend it have better things to do than defend it to me.

I want to like the free software community, and we do collaborate, in a sense, but it is always strategic and not ideological. It just so happens, in cases, the GPL and my objectives are compatible. But the FSF's objectives and my objectives are not really compatible at all. So there is the same funny taste in my mouth from working at cross-purposes that I get when writing proprietary software. It has the same artificiality.

[0] https://www.fsf.org/blogs/rms/selling-exceptions

[1] http://www.gnu.org/licenses/license-list.en.html#RPL


I sent you an email.


I'd like to send you one. What's your email?


I feel the downvotes are actually due to the smiley.


git is moving to GCC


Where is Linus when you need him?


Linus hasn't been involved in Git for ages now. Junio Hamano has been running the show for the past decade.


Feels like pressure from clang?


Because clang is hosted with... subversion?


Precisely. It's not unreasonable to suspect that using Distributed VCS would give GCC at least one advantage over Clang, which uses a Centralized VCS. It's a popular opinion that Clang is superior to GCC, so it would make sense that the GCC devs would want any advantage in terms of development time, and DVCS offers efficiency over CVCS. So it would be reasonable to suspect that the move might be "[due to] pressure from Clang", no?

... unless you chose to interpret that as "oh, hey, Clang is using Git, so we (GCC) should too!", which is both incorrect (Clang uses Subversion) and a fairly silly motivation regardless. If that interpretation is both incorrect and silly, why not try for a more reasonable/charitable interpretation like mine? Did it not occur to you?

I'm having a hard time understanding why jussij was down-voted... I suppose it's because people are quick to interpret others in ways that put them down, or because the down-voters lack the intelligence to make the more likely interpretation. Either way, it's disappointing.


A better argument is that the kind of people you probably want to attract to your open source project are most likely already using Git and so making them learn another DVCS to contribute to your project merely increases the barrier to entry at no benefit to the project.


> So it would be reasonable to suspect that the move might be "[due to] pressure from Clang", no?

No. The pressure comes from gcc developers who prefer git over subversion.


Because distributed version control can help increase the pace of development and increase the usage of code reviews.


Still waiting for the obligatory RMS "Anti-Torvalds" rant. Also, this doesn't seem to be the announcement of "We are moving to git as of this date" but rather a "Why don't we move over to git" proposal.


RMS doesn't have a problem with git because Linus made it. He had a problem with it because it separates pushes from commits. :-)

https://news.ycombinator.com/item?id=9292209

But otherwise, rms uses git quite alright. Let's not portray him as any crazier than he actually is.


What other synonyms would you prefer, given RMS's obsessive-compulsive need to denounce every use of "Linux" and insist on "GNU/Linux" instead?


Well, he only obsesses over that when you refer to the OS. If you're just referring to the kernel, he thinks it's ok to call it Linux. And he doesn't think Android is GNU/Linux.

I call that a bit crazy, but also crazy reasonable, because GNU really is the foundation of the OS, along with Linux.



The point is not so much the size (which has changed over the years) but how foundational it is. You can remove without replacement almost everything else from that pie chart except Linux and GNU and still have a functional OS, but without any coreutils or Linux or a replacement for either you don't have much of an OS left.


*BSD guys seem to do pretty good without a GNU coreutils.


*BSD guys seem to do pretty good without a Linux.


If anything it shows how silly it is to call the operating system "Linux". The terms distribution and operating system have become synonymous, and distributions dwarf both Linux and GNU.

The operating system running on my laptop and servers is called Debian. The kernel is just one of many packages available in the repository and can be replaced with a single command. It's not even the most significant package, since libc has a higher impact during upgrades and on package dependencies.


For better or worse, operating systems are typically named for their kernel. I don't see Stallman making a similar argument over the naming of various proprietary Unices, or of Windows (NTOS kernel + Win32 API, for a time, at least as of ~2000, I don't stay current).

And yes, I understand RMS's interest in GNU and keeping it in currency. And refer to "GNU/Linux" quite frequently myself.


Historically, operating systems were named after their kernel because each individual user put together the programs that they wanted in their system. The term actually used was "Linux-based operating systems", which comes from users starting with a Linux kernel and then building everything else on top of it.

GNU is a project to create the individual parts which users need to make an operating system, which is why Stallman calls them GNU-based operating systems. This is also attached to the historical context of users building their own systems by assembling parts to compile and run.

While we should recognize their historical and current contributions, Debian is based more on their community and policies than on any specific package.


Agreeing with all that, the distinction isn't made in general speech or writing.

MVS, CMS, VMS, OpenVMS, UNIX, Ultrix, Solaris, AIX, HPUX, DOS, CPM, BSD, MacOS, Windows, BeOS, iOS, Android.

GNU/Linux.

There are times you might specify the user environment. MVS TSO/ISPF, say. Or hardware: VAX/VMS, SolarisX86. But not elsewhere the libraries.

I get RMS's rationale. I largely support and practice it. But it runs against convention and practice through the rest of the industry, for decades.

And mentioning Debian, it's also available without Linux.

Though I agree it's also really packages & policy.


Git is free software, I don't see why RMS should have a problem with it. Is he still a committer anyway?


He doesn't have a problem with git per se, at least not overtly. I think the confusion is that Emacs went with Bazaar over Git in 2008 essentially because it was a GNU project (and Arch had faded into obscurity by that point):

>We should use Bzr because that is becoming a GNU package. GNU packages should show loyalty to each other when possible, and in this case it is possible.[1]

But now its main repository is managed with git[2], Savannah as a whole makes extensive use of git,[3] and Stallman tacitly supports git by linking to git repos from his personal website.[4]

1. http://lwn.net/Articles/272853/ 2. http://savannah.gnu.org/projects/emacs/ 3. http://savannah.gnu.org/maintenance/UsingGit/ 4. https://stallman.org/stallman-computing.html


Probably because the FSF doesn't own the copyright for it. Without the copyright, you can't do fun things like forcing everyone to upgrade to the GPL3.


I thought HN submissions that were many years old were supposed to have the year in the title... Oh, wait, my mistake...


Stupid comment - I apologize.



