Hacker News
Bzr is dying; Emacs needs to move (lists.gnu.org)
401 points by __david__ on Jan 2, 2014 | 302 comments



I went back and looked at the older discussion, and it doesn't paint Stallman very well as the head of a project. He pins the question of whether to keep using bzr not on whether it is good or whether the Emacs developers want it, but on whether it's being "maintained".

But then he seems to define maintenance as having fixed a specific bug that's been around for over a year, blocking a point release.

He admits that he can't follow the developers list to see if they're genuinely doing active maintenance (reasonable enough: he has a lot on his plate), but also won't accept the testimony of Emacs developers that the mailing list is dead and there's no evidence of real maintenance.

When questioned, he says that there's too much at stake to abandon bzr if it can be avoided at all. But the proposed replacement is GPL software. This is just madness.

Refs: http://lists.gnu.org/archive/html/emacs-devel/2013-03/msg009....

http://lists.gnu.org/archive/html/emacs-devel/2013-03/msg008...

(and surrounding posts).


I think this is more inertia than anything else. It is pretty obvious that Stallman isn't the easiest person to talk to, but he has consistently put a huge amount of effort into GNU for many years, and whatever he did as maintainer seems to have worked out well enough in the end. If there were anyone as active in the Emacs community today as Stallman was in his prime, they could easily push through changes despite Stallman's disagreement, via a fork or otherwise, but there isn't such a person, and that is that.

Consider that it has been almost 30 years since GNU Emacs started; for most of that time RMS has been the maintainer, and GNU Emacs continues to this day to be the most advanced, popular, and active Emacs there is, while the various forks like XEmacs and SXEmacs lost steam pretty quickly. So it certainly hasn't been all bad on his side.


Over those 30 years the emacs codebase must have moved through several revision control systems. Why is moving to another one such a big deal?


Because it's important to have a cohesive set of packages that are part of the GNU project. It's about culture and cohesion, not just licensing.


The popularity of a specific fork of Emacs is a moot point; it's the popularity of Emacs as an editor (or should I say, as a platform?) that is at issue. Stallman was active; now that he's not active, he needs to either defer to people who are active or watch as fewer and fewer people take advantage of his work.


Stallman doesn't have much influence on Emacs development itself anymore, but Emacs is a GNU project, Stallman is still the head of GNU, and a move to another VCS would be a major organizational change. I too wish he would be less hard-headed about this, but let's not blow this issue out of proportion.


In replies, he says he doesn't oppose the move to git. Most devs who've replied support git. There's already a mirror. Let's not make mountains out of molehills...


As he said, 'more than Emacs is at stake here.' [1]

I presume this refers to Bazaar's status as part of the GNU project, and that RMS did not want to write off part of GNU without being certain he needed to.

Regardless, he has since OK'd the switch from bzr:

I don't insist that Emacs should stay with bzr. I chose to support bzr because it was still a contender at the time. [2]

[1] https://lists.gnu.org/archive/html/emacs-devel/2013-03/msg00...

[2] https://lists.gnu.org/archive/html/emacs-devel/2014-01/msg00...

edit: formatting


I think Stallman is being reasonable here.



I think the subsequent OK'ing is unfortunate. Attracting younger hackers shouldn't be a more important goal than integrity.


What does this have to do with integrity?


I mean, not fixing what ain't broke is certainly a matter of integrity. Switching to the latest, hippest new fad just to "attract young hackers" detracts from integrity. Making the editor an awesome, badass, attractive editor should be enough. Otherwise you'll only be attracting groupthink-prone douchebags anyway.


For better or worse, having a vision is a very different skillset from having the management skillset to make things happen. Sometimes people deride management when talking about the importance of vision and leadership. There is also the current trend of companies doing away with managers entirely.

RMS is a prime example of why management is an important skill. It is inefficient for him to do everything himself. It's also inefficient for him to chase people down for all the details. If he had a solid COO who understood his vision, his organization would be much more effective.


> it doesn't paint Stallman very well as the head of a project

Is this news to you?

Cf. e.g. http://www.jwz.org/doc/lemacs.html


I'm not sure that conversation is a good example of what you are trying to describe.

Let's see if someone did the following things to the python project:

#1: Hire away the package maintainer. Then, rather than continuing and finishing any current work, effectively remove that person from the community project.

#2: Redesign the underlying structure of the project (like, say, PyPy), but don't discuss any changes with the community. No PEPs, no discussion on mailing lists, no communication whatsoever.

#3: Ignore the current list of new features being worked on. Community goals are unimportant.

#4: Introduce code regressions! Do not care about maintaining performance.

#5: Demand that the changes get implemented immediately in the next official release.

Would anyone expect that to actually work today? Sure, Stallman could have been more diplomatic and sought (and succeeded with) a middle-ground solution, but the above steps are not how you join an ongoing software project.


I tried to read that conversation in a neutral light, because I think RMS and JWZ are both kind of ... polarizing personalities, but RMS really came off as a stubborn, ineffective whiner in that whole thread. In particular "you hired away my maintainer" is a pathetic excuse for not releasing something for so long. Anybody could have hired away your maintainer, and you'd have to soldier on; it has no relevance that your maintainer went to a "competing" (in your territorial view) project.

I also read the argument about the redesign of the event system and was pretty flabbergasted. The argument seems to reduce to "lucid emacs decided to design a proper event datatype because having an event be entirely represented by a simple integer keycode both lost information and made it impossible to represent certain keystrokes" versus "but ints are simple and backward compatible!"


Poaching the lead dev to work on your own fork is a hostile move, and in my book, it provides a reasonable excuse for the delays, since nobody was able to take his succession immediately.

The guys at Lucid also barely communicated for long periods of time, making collaboration impossible.


> Poaching the lead dev to work on your own fork is a hostile move

That's an argument one can use when asked to spend more time with kids.

> it provides a reasonable excuse for the delays, since nobody was able to take his succession immediately

Huh?!? Open Source model not working or what?


>> it provides a reasonable excuse for the delays, since nobody was able to take his succession immediately

> Huh?!? Open Source model not working or what?

I guess it's only an illusion, then, and crazy lawyers who like to add anti-poaching clauses to employment contracts. If a company can't handle losing its top engineers, clearly it's the proprietary model that is not working.


Funny, I had the opposite view. I guess your opinion of the personalities really does change everything.


There is one (and only one) other possibility, which is that you and I both read the conversation with flawless objectivity, and that you are wrong. :D


> I couldn't change the plans, so I had to make the best of them. I suggested a design to him, one oriented toward editing formatted text--a feature I wanted Emacs to have, eventually.

Hah, 20 years in the making!


Not to mention his insistence on GNU Hurd being based on Mach, which pretty much ended up killing the project.


Back when the choice was made, microkernels were all the rage in both academia and commercial ventures, and Stallman chose Mach because he thought it would speed up development. He was hardly alone in choosing Mach at the time; Apple (MkLinux, NeXTSTEP) and IBM (Workplace OS), amongst others, did the same.

He fully acknowledged that he made a mistake in going with Mach, and as soon as Linux took off the FSF focused on providing the software necessary to combine with Linux into an operating system, placing Hurd on 'life support', where it's been ever since.


IIRC, ARPA at some point was more interested in funding projects based on Mach than based on any other kernel.


That's a simplistic view. First, the GNU project is very much alive: the GNU tools are used in a huge number of operating systems and are installed on a staggering number of devices. I would bet that the system you are writing this comment from is running thanks to GNU software.

Second, it is debatable whether sticking to Hurd was a good or a bad idea technically. Imagine if Stallman and co. had managed to convince a good number of developers that it was a good idea, and the kernel had become competitive with Linux, the BSDs, etc. If you believe you have a technically superior vision for your product, should you compromise on it just because people who do not share in your vision will not join you?

In the end, I think what killed the pure GNU/Hurd OS was bad PR and absolutism. Hurd as a technical question was just a small part of that. Remember, the debate between the Free and the Open Source guys was pretty fierce. Today we use terms like FOSS to describe all open software, but when Linux and Hurd were young these were different camps with opposing philosophies, and the one that appealed to more developers won out. In simplistic terms, you can think of this as the VHS vs Betamax debate. Can you blame the Betamax backers for continuing to try to push it and "killing" it as a result?


Wait. You misunderstood me. I didn't say the entire GNU Project is dead. Hell no. When I said "project", I was referring specifically to GNU Hurd.

Also my statement was going by the words of the Hurd's former project leader Thomas Bushnell:

"RMS was a very strong believer -- wrongly, I think -- in a very greedy-algorithm approach to code reuse issues. My first choice was to take the BSD 4.4-Lite release and make a kernel. I knew the code, I knew how to do it. It is now perfectly obvious to me that this would have succeeded splendidly and the world would be a very different place today.

RMS wanted to work together with people from Berkeley on such an effort. Some of them were interested, but some seem to have been deliberately dragging their feet: and the reason now seems to be that they had the goal of spinning off BSDI. A GNU based on 4.4-Lite would undercut BSDI.

So RMS said to himself, "Mach is a working kernel, 4.4-Lite is only partial, we will go with Mach." It was a decision which I strongly opposed. But ultimately it was not my decision to make, and I made the best go I could at working with Mach and doing something new from that standpoint.

This was all way before Linux; we're talking 1991 or so." [1]

[1] http://www.groklaw.net/article.php?story=20050727225542530


Note, in regard to Bushnell's quote, that "1991 or so" and "way before Linux" are contradictory. (Or, at least, require a strained definition of "way before"; Linux was first released in 1991.)


>the GNU tools are used in a huge number of operating systems

You mean linux? That isn't a huge number.

>and are installed on a staggering number of devices

The staggering number of devices you refer to almost exclusively run busybox or one of the similar projects. GNU software is hugely bloated and not a good choice for embedded systems.

>Second, it is debatable whether sticking to Hurd was a good or a bad idea technically

No, it was fine technically. It had no developers, and so nothing happened. Minix exists; obviously microkernels are possible.


GNU tools were often used on other OSes too, including Solaris and other commercial Unixes, but also Windows under Cygwin: particularly GCC, make, etc., but also command-line tools such as grep, where the default platform versions were often not as feature-rich. I didn't use those platforms, so others will remember and know better, but I don't think "huge" was an obviously wrong description.


There is a big difference between "some people optionally could install GNU stuff in addition to their existing tools" and "those OSes use GNU tools".


>the GNU tools are used in a huge number of operating systems

Fair enough; I read "in" as a synonym for "on", but read less casually the difference could be important.

Even in the "in" case, I think many BSDs used GCC as the default compiler (Clang seems to be taking over now).


Last I heard a lot of users of Solaris and other Unixes often also use the GNU tools, though those are generally dying out in favour of Linux anyway.


Last you heard wrong then. Occasionally we begrudgingly install some GNU bloatware because some poorly written software requires it. That's about it. Every other OS already comes with its own versions of all the unix tools.


Apple still insists on building their kernel atop Mach, and it doesn't seem to stop their momentum. (Even though it's kind of a strange choice.)


From what I understand, the Mach kernel now used in XNU is not the Mach microkernel (3.0 and later) but is based upon the pre-microkernel 2.5 version of Mach.

I'm not sure where I read this originally but I just googled this source which seems to back it up:

http://www.roughlydrafted.com/0506.mk3.html


I am not intimately familiar with the history of Mach, but what I do see is that in the late 80s and early 90s these two groups (NeXT and GNU) both saw Mach as the future (a position that makes no sense at a later time), and had vastly different outcomes.

I don't know much about how these people work, but I always figured Mach at Apple is just about momentum and familiarity of contributors, rather than technology. NeXT hired Tevanian who worked on Mach at CMU, they spent roughly a decade hacking on Mach, then Apple did the same. I'd imagine they employ people who know Mach well and haven't seen it as worthwhile to replace it.

I even remember they had this goofy project "MkLinux", which sought to put Linux in the position that BSD carries with XNU, on top of Mach... Just goofy stuff, unless you figure they had Mach hackers on staff.


According to the Apple docs XNU is based on Mach 3.0.

https://developer.apple.com/library/mac/documentation/Darwin...


That's misleading. Apple merged in a bunch of Mach 3 code but still maintains the architecture of Mach 2.5 (i.e., Mach + BSD both running in supervisor mode in one big monolith; no BSD server; xnu is not a microkernel…).


It is odd; I predicted about five years back that Apple would gradually converge the API to FreeBSD and then switch. They would still be better off doing that, I think, but they show no signs of it.


Why would they do that? Mach allows for some cool stuff, like kernel extensions being isolated.


Because building and maintaining a competitive OS is a huge effort, and FreeBSD has lots of cool stuff which they can't use...


It's a huge effort that seems to be paying off quite handily. XNU seems like a competitive edge over the monolith FreeBSD, as far as desktop and mobile OS is concerned. FreeBSD has plenty of cool stuff, but in a completely different domain.


> XNU seems like a competitive edge over the monolith FreeBSD, as far as desktop and mobile OS is concerned.

This seems a bit delusional. I don't think it's controversial to say that Apple's biggest differentiators exist at higher levels than kernel space. I'd go so far as to say that anyone who claims that Apple's success is rooted in XNU and that the same could not have been done with Linux or *BSD at the lowest layer and all other pieces being equal does not understand what a kernel is.


I don't know anyone saying Apple's success is rooted in any one thing. However, IOKit is very important: "common code for device drivers, this framework also provides power management, driver stacking, automatic configuration, and dynamic loading of drivers". Even if the entirety of OSX could have been implemented on top of Linux or FreeBSD, reusing those kernels (and it does reuse FreeBSD for low-level POSIX APIs!), how productive would that have been? I honestly don't know, but I choose to trust what I read from the original developers.


Those other kernels have driver frameworks too. And if they find them inadequate for some reason, they are of course able to make modifications.

But this is not a suggestion for them to scrap it, necessarily. As I said in some other comments on this thread, I think the real reason is that they had people that knew their existing kernel well, and don't see a need to replace it.


FreeBSD does not have Mach ports (the IPC system). There is a ton of what OSX does that needs this.

See my comment up thread about Jordan Hubbard's plans.


Needs mach ports, meaning can't be simulated with Unix domain sockets or something else? I doubt it.


I was assuming they would port iokit as part of it...


> XNU seems like a competitive edge over the monolith FreeBSD,

xnu is also a monolithic kernel, just with some nice message-passing primitives. Think Mach 2.5.


Darwin + Aqua also has lots of cool stuff which *BSD can't use. I can see a competitive advantage in making sure that remains the case.


xnu kexts are not isolated and probably never will be, although IOKit exposes some things to userspace.


They're isolated enough that drivers can crash and just be reloaded, without a kernel panic.


No, they are not. A crash in a kext causes a kernel panic.


You are right; an actual crash in a kext causes a kernel panic. I think I read that this was possible, but I can't find how, and when, right now. The common thing I'm thinking of is just voluntary error (exception) handling in kexts, and reloading of kexts on their own. There is some isolation, and benefit to it.

Edit: ok, I just figured out my source of confusion. IOKit allows userspace drivers, which can crash without resulting in panic.


IOKit allows communication between kexts and userland. You could call that "userland drivers", but it's not like you can write only userland code to implement a driver.


Amazingly, jkh is talking about bringing more parts of Mach to FreeBSD, to make it more like OSX.


The exchange made Lucid seem like a bunch of assholes.


Care to explain? I thought that the Lucid guys made reasonable technical arguments (especially in light of, y'know, history, over the last twenty years) and that RMS was attempting to both grandstand and emotionally manipulate people into adopting his preferred position.


*shrug* More confirmation.


I thought RMS had resigned as Emacs’ maintainer?

http://lists.gnu.org/archive/html/emacs-devel/2008-02/msg021...


One of the reasons I chose GNU Emacs over XEmacs was a feeling that GNU Emacs would be maintained, even advanced, for as long as RMS has the strength to type. It's his baby.

(There were other reasons, the big one being momentum. XEmacs didn't run on the platform I was on for a long time, so it would have been a switch.)


An excellent point; for example, although I don't have any reason to think RMS himself did this, Emacs 24.4 will have file change notification support across all platforms where it compiles and where such notifications are available.

I grant that's a somewhat overdue feature for Emacs to have (e.g., I've wanted it ever since I set up Dropbox to synchronize my org-mode files across all my boxes), but it's definitely evidence that Emacs isn't lacking maintenance and improvement.

I wonder: If Emacs rarely gains new features in core, is it because there aren't enough developers doing enough to improve it, or rather because people are having a hard time thinking up new features to add which Emacs doesn't already have?


A Lisp-systems legacy, I suppose: the core is tiny, and everything else is a library.


Sorry, I was unclear; by "core" I mean "the Emacs distribution itself", as opposed to libraries you find on Github, EmacsWiki, or wherever else that isn't part of the standard Lisp library you get when you download the Emacs tarball.


RMS has RSI, so he doesn't type, he dictates. Oh, shit -- that didn't come out the way I meant it to. ;(


I wonder what software package he uses.


Maybe--but the tenor of the discussion strongly suggests, if not implies, that he makes the call about whether they can leave bzr.


    "When questioned, he says that there's too much at stake 
    to abandon bzr if it can be avoided at all."
The big difference between then and now is that this time you have a very competent developer with an exceptional reputation offering to lead the migration. With esr leading things, there is a lot less at risk.


>git won the mindshare war. I regret this - I would have preferred Mercurial, but it too is not looking real healthy these days

I confess that my perception of Mercurial is the diametric opposite of the author's. Recently I believe I have seen a modest resurgence of interest in Hg and increased uptake. Am I just seeing this through some peculiar VCS-warped glasses?

I believe that much of the popularity of git stems from github making it very easy to adopt, something that bitbucket doesn't seem to have pulled off as well.


Yep, I don't understand the author's assertions about Mercurial either.

Mercurial remains a better choice for a few use cases where git simply falls flat.

Among game developers, in particular, because of their need to have revision control for large assets, Mercurial seems to be more popular than git due to the large files extension.
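
For the curious, a minimal sketch of that setup (file names are hypothetical): the largefiles extension ships with Mercurial and only needs enabling in your config.

    # In ~/.hgrc, enable the bundled extension:
    #   [extensions]
    #   largefiles =
    #
    # Then track big binary assets explicitly; only a small "standin" hash
    # is recorded in each changeset, while the blob lives in a separate store:
    hg add --large assets/level01.psd
    hg commit -m "add level art as a largefile"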

And realistically, perforce or other solutions appear to be even more popular among that particular developer segment.

Personally, I use Mercurial wherever possible, but that's not because I believe Mercurial to be technically superior, it's just because I hate git's UI.

Perhaps among the general FOSS community git is more popular, but both git and Mercurial have yet to meet the needs of many developers.


Git now has the git-annex extension for large files; I use it myself for all my data.

http://git-annex.branchable.com/
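
A minimal git-annex session might look like this (the path and repo description are illustrative):

    git init bigdata && cd bigdata
    git annex init "my laptop"         # prepare the repo for annexed files
    git annex add survey-data.tar.gz   # content moves to the annex; a symlink gets committed
    git commit -m "add survey data"
    git annex get survey-data.tar.gz   # later, fetch the content from a remote that has it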


It's not the same thing. That only stores the latest revision. You can't easily go back in history and check out earlier revisions of the large files. You can in hg.


Yes you can.


> Among game developers, in particular, because of their need to have revision control for large assets, Mercurial seems to be more popular than git due to the large files extension.

Most game devs -- I'd say even most devs in grown-up, professional shops -- use p4.


  Most game devs -- I'd say even most devs in grown-up, professional shops -- use p4.
Hence why I said "realistically, perforce or other solutions appear to be even more popular among that particular developer segment." My comment you quoted was comparing the popularity of Mercurial to Git, not p4.

Also, I'd disagree with your assertion regarding "most devs in grown-up, professional shops". Microsoft's SourceSafe has a large following among the corporate world. And many of the largest tech companies I'm aware of don't use p4 primarily; they use git, mercurial, svn, cvs, SourceSafe, home-grown solutions, etc.


Are that many people really using SourceSafe still? TFS is now Microsoft's preferred solution, although I don't know what they use internally.


Disclaimer: I'm not a Microsoftie, just collecting some links and adding my own opinion.

Microsoft use TFS heavily [1].

Right now I imagine most MS projects are TFS but it doesn't appear to be mandated. Maybe for the big, internal-only stuff. ASP.NET is hosted on CodePlex as a Git repo [2] and MEF is Hg [3].

They've just added Git support to TFS and that probably means a lot of MS projects will migrate to Git over time.

[1] http://blogs.msdn.com/b/visualstudioalm/archive/2013/08/20/t...

[2] http://aspnetwebstack.codeplex.com/

[3] https://mef.codeplex.com/SourceControl/latest#.hgignore


You're right; I had forgotten about TFS.


Guess you need to tell Facebook, with their repo "many times larger than the Linux kernel", to grow up, because they're using Mercurial and have made lots of improvements to it: https://code.facebook.com/posts/218678814984400/scaling-merc...


I've noticed this, oddly, while my workplace is transitioning to git from Mercurial.

A lot of developers using .NET tend to go for Mercurial because a while back it felt a lot nicer to use on Windows. It's why I always preferred using Mercurial. A few .NET shops that use TFS/VSO are moving towards Git for the Visual Studio support, but I've noticed a few Python and PHP developers making the switch to Mercurial.

To be honest, I rarely need to do anything more than the basics, so neither has a huge benefit for me. Neither feels particularly faster than the other, and both have comparable GUI tools. Aside from when I am pushing stuff to GitHub, I tend to use whichever one pops into my head first on a project. I reckon a lot of developers are probably the same.


> A lot of developers using .NET tend to go for Mercurial because a while back it felt a lot nicer to use on Windows.

In my opinion it is more elegant. Git only works on Windows because it installs hacked-up Linux utilities. In practice it might not matter, but I feel dirty when I'm using "inelegant" solutions.

I guess it's a matter of taste or opinion but Mercurial is easier to use too. Though if you're just working on something solo the SCM doesn't really matter at all, you just commit and commit (and I mainly work solo).

Personally I also write mostly in Python so I'm naturally drawn to Mercurial.

Then again I've also been looking at and using Fossil for my projects because it's a single binary with no installer which makes it pretty cool in my opinion. It too works well and it's used as SQLite's SCM so I'm confident it won't screw up my projects. The other nice thing (although I haven't used them extensively) is that Fossil also includes an embedded web server that has a wiki and ticketing system so everything's integrated.
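
A sketch of that Fossil workflow, going by its documentation (names are illustrative):

    fossil new project.fossil        # the whole repository is this one SQLite file
    mkdir project && cd project
    fossil open ../project.fossil    # check out a working tree
    fossil add main.c
    fossil commit -m "initial import"
    fossil ui                        # serve the built-in web UI: timeline, wiki, tickets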


"Then again I've also been looking at and using Fossil for my projects because it's a single binary with no installer which makes it pretty cool in my opinion."

It actually relies on some configuration files in the user's directory. I had trouble even launching it on a heavily modified OS. I assumed it was a pretty simple and straightforward CLI tool that could work on a bare-bones operating system. I was wrong!


It seems that most .NET devs prefer something baked straight into Visual Studio and my experience has been that this is part of the problem in getting those teams to migrate to Git, which is most powerful from the command line.

Personally, I greatly prefer my source control system to be separate from my development environment or IDE.


Git's pretty well baked straight into Visual Studio now that it's built into the latest version of Team Foundation Server.

Mercurial's the one where you really want to be using it from the command line. It's had VS plugins, but they're all kind of janky by comparison.


Or use TortoiseHg or SourceTree, if you're not comfortable with the CLI. They both work great. Also, if you need to host 'your own github', try RhodeCode.


Microsoft released a git plugin for VS.


I hate when people say "git won the war", it sounds like an excuse to close minds and stop progress.

What if "Microsoft won the war"? Or vim? Or IBM? Or Java? Or Taco Bell?


Microsoft did win the war; very little progress has been made by its competitors from the time; what's had success has been new OSes - OSX, iOS and Android (which while built of GNU/Linux pieces, is radically different from traditional GNU/Linux - enough to qualify as a different OS IMO, since the API is different).

Vim did win the war; there's still nothing better.

IBM did win the war, and then shot themselves in the foot with pricing on their new generation (which generation was a pretty radical shift). There's not much chance of git doing that.

Java did win the war; its competitors from the time are largely dying (Objective-C has had a kind of zombie revival due to iOS, but I don't expect it to last). You could argue that Ruby has overtaken it, but again the changes over the last ten years of ruby - and the influence of rails - have been enormous.

I don't think we should stop trying to make a better VCS. But I do think we should accept that Git has won against bzr and hg in their current form; neither of those will displace git without radical changes that they are probably unsuited to make. Most likely the successor to git will be a new program entirely.


> Vim did win the war; there's still nothing better

I'll just put this here for you:

https://code.google.com/p/vim/source/browse/src/eval.c

Yes, that's nearly 25,000 lines of pre-C89 C, full of mixed spaces and tabs, with 492 occurrences of #ifdef, many appearing in the middle of a function definition. I recently ran vim compiled with debug symbols and it was nice enough to dump a 4GB regular-expression log file in my project directory. The way to turn that off is to find some ifdefs and comment them out. If vim won, then, well, I'm not sure what winning means. I've switched to emacs with evil, which in my opinion is better than vim in a lot of ways.

> (Objective-C has had a kind of zombie revival due to iOS, but I don't expect it to last).

Yeah ok, "zombie-revival" sure, your credibility gets a score of 0 here. This isn't an argument, it's a prediction, and a stupid one. Nobody will come back to check your comment in 5 or 10 years and call you out on it. This is just the certain kind of asshat thing you can say and not worry about it coming true or not because you're some anonymous commenter making the internet richer with your irresponsible use of a keyboard.


> Yes, that's nearly 25,000 lines of pre-C89 C, full of mixed spaces and tabs, with 492 occurrences of #ifdef, many appearing in the middle of a function definition. I recently ran vim compiled with debug symbols and it was nice enough to dump a 4GB regular-expression log file in my project directory. The way to turn that off is to find some ifdefs and comment them out. If vim won, then, well, I'm not sure what winning means.

Winning means the user experience, not the code. And sure, I was lazy, it would be more accurate to say vim and emacs won between them (and are still fighting it out).

> Yeah ok, "zombie-revival" sure, your credibility gets a score of 0 here.

Do you disagree that a) Objective-C was more or less dead prior to the release of iOS b) almost all people currently using Objective-C are doing so solely because it's the language you can write iOS apps in c) absent huge, radical changes, Objective-C will never threaten Java's popularity the way that post-Java languages (C#, GHC Haskell (very different from the language that was standardized in 1990), Go, Scala) are?


No, Objective-C was not dead before iOS. A thriving Apple was supporting Objective-C in every way possible, and moving from Carbon to Cocoa.

Objective-C is used to build applications for Apple platforms. It's not a threat to Java, but that doesn't mean Objective-C is dead. Objective-C will be around for a long time to come. It's a modern language that powers all of Apple's most recent technology. They have no reason to change, and there are no signs that Apple is on the verge of disappearing into the aether.

Vim is shitty software. I like the UI, but the thing is single-threaded and everything runs on the UI thread. There's no hope for async, or an event loop, or even a setTimeout-like feature. The code is full of globals, and trying to add new features to the thing is going to result in inexplicable, unfathomable segfaults. Vim uses shitty regular expressions in the UI thread to do syntax highlighting, which is why that's slow for big files and why the syntax highlighting breaks.

So the code matters. There will never be powerful IDE-like features as long as it's this single-threaded thing that only ever does anything as a response to user input. Given the state of the code, changing this does not ever seem possible.


> So the code matters. There will never be powerful IDE-like features as long as it's this single-threaded thing that only ever does anything as a response to user input.

Run VIM in a sub-process, and communicate with it through a fake terminal. Basically, quarantine the madness.


Say what you like about Emacs; its source, both in C and in Emacs Lisp, is generally quite readable, and the former I've found to be especially well commented.


Making predictions isn't allowed on the internet anymore?


Of course it's allowed. So is judging someone's credibility based on the predictions he chooses to make, whether by the accuracy of said predictions over time, or the plausibility of said predictions in advance of proof's arrival.


> Java did win the war... ...You could argue that Ruby has overtaken it

For what? A specific niche of web applications?


> Vim did win the war; there's still nothing better

Least substantiated claim of 2014 so far.


Haha, but the race is long :)


> Android (which while built of GNU/Linux pieces, is radically different from traditional GNU/Linux - enough to qualify as a different OS IMO, since the API is different).

The "GNU/Linux" vs "Linux" discussion is a long one but I'm pretty sure there's (almost?) no GNU in Android.


I don't normally say "GNU/Linux", but I felt this was a case where the distinction is particularly important, because Android does run the Linux kernel, but is (IMO) a different OS from GNU/Linux.


Right. The reason I made my comment was because you said "which while built of GNU/Linux pieces" referring to Android, which doesn't make sense.


There's GNU in android.


I don't think there is. What are you thinking of? None of the userland is GPL-licensed; I don't even think any of it is LGPL, so I don't think there is any GNU there.


In the FSF's writing about it:

  Android is very different from the GNU/Linux operating system
  because it contains very little of GNU. Indeed, just about
  the only component in common between Android and 
  GNU/Linux is Linux, the kernel. 
http://www.gnu.org/philosophy/android-and-users-freedom.en.h...


> Vim did win the war; there's still nothing better.

GNU Emacs is much better. It's easier to use and easier to extend.


>Vim did win the war; there's still nothing better.

Haha, good one. Many of vim's predecessors are even better; nvi, for example, is far nicer to use than vim.


As an avid git user, I believe that git's victory against current tools does nothing to stop someone from creating a better DVCS in the future. They'll just have to identify why git won and address those points, if they want to dethrone git.


The success of git is, apart from the speed, related to its property of being the "stupid content tracker".

Git's architecture is a simple bottom-up engineering approach. The user interface (porcelain) builds upon a conceptually simple core (plumbing). Other VCSes defined a nice UI first, which was then implemented by a core that depends on the UI. This top-down approach means that the core components can suddenly become quite complicated, and in the end it is hard for the user to get a deeper understanding of the system.

The funny aspect of this is that a lot of people complain about Git's bad user interface. It turns out, however, that Git is really easy to grasp.
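
To make the plumbing/porcelain point concrete, here is a sketch of building a commit out of low-level plumbing commands that the porcelain normally drives for you (the hashes are placeholders):

    # Store a file's contents as a blob object; prints its SHA-1:
    echo 'hello' | git hash-object -w --stdin
    # Inspect any object by its hash:
    git cat-file -p <blob-sha>
    # Put the blob into the index under a filename, then snapshot the index as a tree:
    git update-index --add --cacheinfo 100644 <blob-sha> hello.txt
    git write-tree
    # Wrap the tree in a commit object:
    echo 'initial commit' | git commit-tree <tree-sha>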


Exactly. The git data model is intuitive and easy to grasp after a short amount of time. The porcelain is poorly designed, but part of that is due to the flexibility of primitives underneath, and the desire to support arbitrary workflows.

Contrast with svn, which attempts a very clean porcelain interface with a completely muddled data model underneath. The conflation of repositories, directories, and branches in svn makes it impossible for it to ever achieve 20% of git's functionality, simply because things are so poorly defined.

After using git for 6 months I understood it better than I had understood svn in the previous 5 years. I would prefer a better porcelain, but given that software development is my full-time job and that I can use git for all development regardless of the language, I'm happy to commit a bit of muscle memory to git's idiosyncrasies.


> the desire to support arbitrary workflows.

Now I am laughing, and laughing bitterly. git supports one workflow: the massively decentralized one. To this day you can't have a simple workflow with git, the one that cvs/svn supported and that practically all small projects would benefit from, the one that bzr calls a bound branch.

git, I believe, is the textbook case of the opposite of a user-friendly UI. Commands have switches which change the command so fundamentally it should be another command, which no one wants because there are already something like 140 commands. Switches that do the same thing across commands are named differently. The same command does wildly disparate things without any indication of what's happening: try git checkout file, and guess what the state of file will be. It might come from the staging area, or it might come from HEAD if it wasn't staged. Nuts.
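
For anyone who hasn't hit it, a quick demonstration of that checkout ambiguity (the file name is hypothetical):

    echo v1 > file && git add file && git commit -m v1   # HEAD has v1
    echo v2 > file && git add file                       # index (staging area) has v2
    echo v3 > file                                       # working tree has v3
    git checkout -- file   # file is now v2, restored from the index;
                           # had nothing been staged, it would have come from HEAD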


> "To this day you can't have a simple workflow with git, the one that cvs/svn supported and practically all small project would benefit from"

What? Of course you can. I've worked on teams that did it. You are talking total rubbish.


I'm a happy git user, but tell me how this is supposed to work: there's me and one other guy working on a project. There are three logical branches: trunk, his branch, and my branch. In SVN there would be five source trees: those three on the server, his working copy and my working copy. But in git we end up with fifteen: the three on the server, my branch on my machine, his branch on my machine, master on my machine, the remote-tracking copies of those three on my machine, and the same again on his machine. All of which could be different.

How do I reduce that complexity? I always pull and never fetch, which helps slightly, but only slightly; pull seems to fetch other branches, so it's still possible to have my copy of a branch end up behind my remote-tracking copy of that branch. There's no command analogous to pull for "add and commit and push", so that's always a second step to possibly forget (I don't want to rely on an alias as I work on a number of different machines). Most problematic at the moment is that there's no way to tell the difference between an up-to-date branch and a non-remote-tracking branch, so I sometimes delete branches that I haven't fully pushed, because I forgot to make them remote-tracking, so they didn't show up as behind when I "git status"ed.
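
For what it's worth, git can at least surface the tracking state behind that last complaint, though the discoverability is poor; a small sketch:

    # Push a branch and set it to track its remote copy in one step:
    git push -u origin my-branch
    # List local branches with their upstreams and ahead/behind counts;
    # branches with no [origin/...] annotation are not remote-tracking:
    git branch -vv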


Set up a repo on a server somewhere and declare it to be the "central repo". You and the other developer pull/push from that repo only. In other words, ignore some of git's capabilities and treat git like SVN. Just because you can pull from your teammates does not mean that you have to.

I've been on a team that transitioned from SVN to Perforce, then again to Git, keeping the same workflow all the way through.

It isn't the way that I prefer using git, but it works perfectly well.
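
Concretely, that SVN-like discipline comes down to about four commands (the server URL is illustrative):

    git clone ssh://server/project.git   # the one "central" repo
    # ...hack hack hack...
    git add -A && git commit -m "fix the thing"
    git pull --rebase                    # stay linear, like `svn update`
    git push                             # publish, like `svn commit`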


> Set up a repo on a server somewhere and declare it to be the "central repo"

That's the situation I described. We still have the problems I mentioned: sometimes we commit without pushing (particularly because we sometimes didn't set a branch to be remote-tracking and didn't realize this), and sometimes our branches are behind our copy-of-the-remote-branch because we pulled different branches (which leads to bogus merges in the history).


Committing without pushing is not a problem (unless you intended to push but forgot to?) It is not a concept that necessarily exists in traditional centralized version control systems, but the fundamental problem still exists. In SVN "forgetting to push" is just called "forgetting to commit".

If your team member forgot to push and you put out new changes, that is a problem for him to resolve. If you forgot to push, and your team member put out new changes, that is a problem for you to resolve. Workflow wise, this all works the same as it does with any other centralized workflow.

If you forget to check for updates... well that is something that happens in other centralized schemes as well. You figure it out when you go to push and it fails, you correct it, then you are good to go.


> Committing without pushing is not a problem (unless you intended to push but forgot to?) It is not a concept that necessarily exists in traditional centralized version control systems, but the fundamental problem still exists. In SVN "forgetting to push" is just called "forgetting to commit".

Sure, but you hit the problem twice as often in git, because you have to do twice as many things.

> If you forget to check for updates... well that is something that happens in other centralized schemes as well. You figure it out when you go to push and it fails, you correct it, then you are good to go.

In SVN that doesn't show up as a merge in the history.


> "Sure, but you hit the problem twice as often in git, because you have to do twice as many things."

Well no, I don't...

If this really is a frequent problem for you, then you might want to consider adding a note to the end of git-commit's output to remind you to push, or even just aliasing git-commit to push by default. I would recommend that you instead learn how to use git, but failing that...

> "In SVN that doesn't show up as a merge in the history."

If you don't want to resolve those situations with a merge, then don't resolve those situations with a merge... Rebasing exists for a reason.
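
For instance, a hypothetical one-step commit-and-push alias (not a git built-in):

    # Define `git cp` to add, commit, and push in one go:
    git config --global alias.cp '!f() { git add -A && git commit -m "$1" && git push; }; f'
    # Usage:
    git cp "fix the frobnicator"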


In SVN, that shows up as a conflict which has to be merged by hand. 'git rebase' and/or 'git mergetool' are nicer.


> In SVN that doesn't show up as a merge in the history.

You might like "git pull --rebase"
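
Or make it the default, so a plain "git pull" always rebases (via the pull.rebase option, available since git 1.7.9):

    git config --global pull.rebase true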


Don't pull origin/master directly into local feature/test branch. Pull into local master then rebase local branch onto it.


This goes against all the advice I've seen elsewhere; we push our feature branches all the time, and sometimes pull from each other's if we're working in overlapping areas, so we need to not rebase them.


If that's what you want to do, that is fine, but you are no longer using a centralized process if you do it that way.

Git allows you to follow a centralized process perfectly fine. However if you choose to not follow a centralized process, it will not force you to.


Is there a reason you aren't using disposable feature branches?

I don't track my other developers' feature branches locally unless I need to view them.


You're right, thinking about it - most of the time I wouldn't have a copy of my coworker's branch (though we do sometimes pull changes from each other's feature branches - indeed that's supposed to be the big advantage of git, no?)


> It turns out however that Git is really easy to grasp.

Please write something to explain it. I must be pretty dumb, because I don't get it, even after reading about ten explanatory texts.


Since you have already read ten things about it and don't understand it, it would help if you told us what you do not understand so that we do not waste our time rehashing things that you have already not understood.


That's a very good question. I hadn't really tried to pinpoint what exactly I don't understand about it until now, but here's what some thinking uncovered:

Branching/merging/committing is pretty straightforward. The problem is that some commands seem to be very convoluted. For example, why does reset do four different things depending on whether it's soft or hard or plain? I keep having to Google to find how I can revert my latest commit.

Another thing I have trouble with is obscure failures. Obviously this isn't something I can just learn, but there are times when git fails for a reason I don't understand...
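
For reference, the reset variants and the usual "undo my last commit" incantations, as a small cheat sheet (one of several ways to do this):

    git reset --soft HEAD~1   # drop the commit, keep the changes staged
    git reset HEAD~1          # --mixed (the default): drop the commit, unstage, keep files
    git reset --hard HEAD~1   # drop the commit AND discard the changes
    git revert HEAD           # safest on shared branches: a new commit that undoes the last one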


Ah yeah. The "why" with a lot of the porcelain stuff probably tends to be an unsatisfying "Because somebody didn't think it through very much a few years ago." git-reset would probably be better if it defaulted to --soft, and possibly left the other options to other commands.


Here: http://www.sbf5.com/~cduan/technical/git/ . This is the only way to understand git.


That's a great explanation, thank you!


Same here, whenever I attempt to give it another chance, sooner or later I break down and revert to hg-git. I recently discovered EasyGit [1] and it seems promising from the docs. There's also gitless [2] that was covered in HN a few days ago.

[1] https://people.gnome.org/~newren/eg/

[2] http://people.csail.mit.edu/sperezde/gitless/


And we all know what runs through porcelain before it runs through plumbing.


1. Linux

2. GitHub

That's it really.


As far as I can tell, Mercurial is being actively developed (i.e., updates being pushed out on a roughly monthly basis), and it's also actively being used. It isn't used as much in the open source community (probably the GitHub effect), but that doesn't mean it's "not looking real healthy" (anymore than OS X does compared to Windows).

Bazaar is a different story. Technically speaking, Bazaar isn't going away anytime soon. Canonical's development depends too much on it. The primary problem with Bazaar is that updates on Canonical's side are pretty much limited to dealing with issues that Canonical has, and there is little activity with respect to getting other bugs/issues fixed/addressed.

This is unfortunate, because Bazaar does do a few things better than either Git or Mercurial.


Yes, Mercurial is being actively developed; Facebook has a great post about why Mercurial and not Git with data: https://code.facebook.com/posts/218678814984400/scaling-merc...


Yes, there are plenty of local maxima, but this is a pretty nice look at the global picture - http://redmonk.com/sogrady/2013/12/19/dvcs-and-git-2013/ - and git definitely won the mindshare war by those numbers.


Bitbucket is nice. I use bitbucket for all of my private git repos, while using github for all of my public repos (and three or four private repos).

Bitbucket provides a good service.


I used Mercurial with a hg-git plugin back in the day as we were a Mac-only shop and I was the only schlub with a Windows machine (I was in charge of fixing IE issues). Mercurial was very good back then, when Git performance was subpar on Windows.

Git on Windows is now blazing fast enough that msysgit will get you by. I still favor the Git workflow even when I am using other SCMs (like right now, where we have a Subversion dependency)


Yes, Mercurial is looking quite healthy—Facebook details why Git doesn't work for them and all of the improvements they've made to Mercurial for their repo which is several times the size of the Linux kernel: https://code.facebook.com/posts/218678814984400/scaling-merc...


FWIW, just a few days ago I was browsing through the Emacs bzr repository. After a full bzr clone, which took ridiculously long as well, a simple bzr blame takes 40-60 seconds to execute locally, and I have an SSD, a four-core Intel i7, and 8GB of RAM. I have never seen this kind of slowness with Git, with any repository size.


Oh yeah, doing anything with bzr and Emacs is just painful. For fun, check your CPU usage while you're doing that bzr blame. I did one recently and it pegged at 100% for the whole time. Git is way more efficient.


Does this make the issue a critique of RMS's management style, or of the FSF licensing that is unable to back out of a failed project?


If you look at the thread, it seems to be neither, just inertia. The licensing is fine; it's GPL, no different from bzr.


Inertia seems to fit with bad management style. Good managers should be fighting it when it becomes cumbersome. Stop making excuses for FOSS celebrities and start demanding better outcomes.


Well, it looks like it will happen.[1]

In light of my other comment, good for Stallman. Seems he wasn't actually as hardheaded as it appeared.

[1] https://lists.gnu.org/archive/html/emacs-devel/2014-01/msg00...


The important take-away here isn't the relative merits of each DVCS, but that bzr is not used by anybody any more, and it is impeding the uptake of new contributors to Emacs.


Compared to the decision (and ability) to contribute to Emacs, the choice of DVCS seems to be rather unimportant.


It's about lowering the threshold so that when I need to patch your project, I can trivially clone the repo with software I already have installed and know, make my change, commit, and create a patch or submit a pull request on Github or whatever, with so little extra hassle that I feel compelled to do so rather than just making the change in my local tarball and never upstreaming it unless/until there's something large enough to be a pain to maintain separately.

Frankly, that ability is more important than the choice of DVCS: there's more value in most people standardising than in picking the "optimal" DVCS, precisely because of the lowered barriers to participation.
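
That round-trip really is only a handful of commands with git (the repository URL and commit message are hypothetical):

    git clone https://example.org/project.git
    cd project
    # ...edit...
    git commit -am "Fix off-by-one in frobnicate()"
    git format-patch origin/master   # writes 0001-*.patch files, ready to mail or attach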


The impression I have is that bzr is just another roadblock between a potentially interested novice and a patch accepted into the Emacs source.

(It's not by far the largest one, though, and while I think esr has a point, I also think it'd be of help for some of the current Emacs developers to publish a "How to start hacking Emacs" document, for the benefit of people like me who would love to contribute but who have absolutely no idea where or how to start.)


> help for some of the current Emacs developers to publish a "How to start hacking Emacs" document

1. Find thing you don't like

2. M-x find-function RET function-to-fix RET

3. Hack hack hack (use C-M-x or M-x eval-buffer liberally; also, read about edebug)

4. Make diff relative to Emacs base code

5. Send diff to bug-gnu-emacs@gnu.org

What I love about hacking on Emacs is that it's so easy to find the bit of code responsible for a given feature and hack on that code in a running editor. There's nothing like it. If I'm using Firefox and don't like the address bar, I need to go dig through XUL, find the thing I don't like, and restart Firefox. Emacs? You just find the function and edit it right in the editor.


Thank you, yes, it's not so much finding the code I need to modify that's the problem, as understanding how any given single thing fits into the Emacs C source as a whole. Hacking the Lisp side I don't really have a problem with -- but if I want to, for example, extend the MESSAGE primitive so that it can check a list of messages not to emit in the minibuffer, things get very hairy, very fast. A general overview of what's what, and where, in the C source, would be extremely useful, and I haven't had much luck finding anything like that in the source distribution or online.

(And, yes, I have read etc/CONTRIBUTE, etc/DEBUG, and (Info)Elisp/GNU Emacs Internals/*. And I'm pretty sure doing what I'm talking about doing to MESSAGE would be a bad idea, because it'd require a call up into Lisp space every time the primitive gets called, which can be extremely frequent, especially during initialization. But I know somebody who'd like to have that functionality available, and it seemed like a relatively simple place to start, until I tried actually doing it.)


> There's nothing like it.

Smalltalk. I once crashed my Squeak environment by making "true := false".


Ten (edit: even five!) years ago I'd agree with you. Back then a source control system was a piece of software you used for keeping a versioned history of your work and to enable you to collaborate with coworkers or friends. It was a tool.

Now? It's A Big Deal to a lot of younger developers. It is almost totemic. If it's not Git and (ideally) Github then.. it isn't worth hacking on?


>Now? It's A Big Deal to a lot of younger developers. It is almost totemic. If it's not Git and (ideally) Github then.. it isn't worth hacking on?

Do you really want those guys on your project?


I wonder if any of those guys would ever contribute to emacs anyway, unless we rewrite it in javascript.


You made me spill my coffee on the keyboard. It does seem Javascript people have taken over the obnoxious hipsterism from Ruby folks.


SNARK: No need to rewrite it in Javascript. Just call it "Sublime Text 5" and charge them seventy bucks a copy.


If the Emacs/Guile thing ever gets off the ground properly, ECMAScript/Emacs may become a possibility.


But remember: it's not because it's possible that it's a good idea. ;-)


Absolutely. Stigma over a (d)vcs !== (in)ability to contribute to a project.


Well, since they're constantly told on HN that unless they're visible on Github they aren't getting a job, why are you blaming them?


Long answer: yes, with a but. They will eventually take up the mantle from devs who age out or die. We might have something totally different from Git at that point, but it's not good to dismiss them either. That is how projects die.


For core work, it's probably not that important. For trivial patches, it's probably kind of a pain in the neck. But a lot of people start with a small patch, so it's best to encourage them by making it easy.


With a repository the size of Emacs, it matters. Look at some of the other comments on this article here on HN, where people note that trivial bzr commands on the Emacs repo take way too long to run.


For me, I know subversion and git to a degree that I am really comfortable with (of the two, I strongly prefer git). Another VCS means that I have to take a few leaps: new commands, subtle-to-extreme differences in workflow, as well as different names for the same things. So a VCS that is well known by people will (on average) make contributing more convenient for the average potential contributor.

After diving into Emacs' codebase, changing or tweaking a few things that bug me, there are a few walls to climb when actually contributing those changes. I.e. cleaning up the code, creating a patch/pull request, outlining changes and intentions, etc. An unfamiliar VCS adds another burden to the contributor. Remember that we are not talking about people who are paid for diving into their employer's VCS but about people who primarily work on other projects.


I never would have guessed. Pretty much all of the interaction I have with Emacs contributors is through packages on github. Emacs lisp is so ubiquitous and useful that it doesn't really make much sense to include most things in Emacs itself.


Many Emacs contributors are already using Git and simply publishing everything in Github. Most of the things in my .emacs comes from Github. Simply they're not part of the core Emacs.

I think the issue is not only bzr vs Git. It's also, if I understand things correctly, the super restrictive licensing that core Emacs has, making every developer sign papers and send them in (by snail mail!? or are scans allowed!?)... And if you have several other contributors helping you, you must have them all sign these papers.

I've seen at least one prolific .el Emacs author (I think the mulled/multi-line-edit author) complain about that: he said that out of the 10 people who helped him, he managed to have nine of them sign the papers and send them to him, but couldn't contact the last one...

And eventually he simply decided to abandon getting all the signatures and went his own way (i.e. Github / non-core Emacs, like many do).

I'm not well versed in licenses / GPL things, but I'm a long-time Emacs user and I'm definitely seeing change: now most of the newer (and great) .el files I add to my Emacs are coming from Github.


The Tcl guys are in a similar situation - they use Fossil. Which by all accounts looks pretty cool, but at this point it's "not git".

http://www.fossil-scm.org/index.html/doc/tip/www/index.wiki


Fossil is also used for SQLite's SCM.


Yes, they're written by the same super bright and productive guy.


Something I've wanted to see for a while is a fossil-like wrapper system for git. The idea of keeping bug tracking and wiki as part of the repo makes a lot of sense.


If Tcl Core wanted to move to git, Fossil exports repos to that format, so they really lost nothing. They have a small team of committers as well, so Fossil works just fine for them.
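
That escape hatch is essentially a one-liner, per Fossil's export documentation (repository names illustrative):

    # Mirror a Fossil repository into a fresh git repository:
    mkdir tcl-git && cd tcl-git && git init
    fossil export --git ../tcl.fossil | git fast-import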


Well, there's also the "social and signaling effects" of using something that's non-git, that Eric S. Raymond articulates well: "we cannot afford to make or adhere to choices that further cast the project as crusty, insular, and backward-looking."


Well, there's also the "social and signaling effects" of using something that's non-git

The "not a field" of Computer Programming, to appropriate Alan Kay's quip, is so broken that "social and signaling effects" swamp actual facts and information to a degree that makes it look like Astrology. I've been watching this for decades now -- literally.

Dynamic languages were for years afterwards still tarred with being "slow" when both Moore's Law and progress in JIT VMs and generational GC had made them perfectly fine for tons of applications. If the half-life of patently false misinformation is literally about a decade, and what passes as fact between practitioners is less fact than school rumors, what does that say about our "field?"


What are the actual facts in this case?

There are tons of people who use and know git. It's fast, it works pretty well. There's some value in the fact that it's widely known and used (network effects), probably enough that whatever takes its place will need to be not just a bit better, but a lot better, in some way. bzr does not strike me, offhand, as being a lot better. Is fossil?

So in this case, I think that the network effects are an important fact.


What are the actual facts in this case?

I'm talking in general. I think it's good they're going to git.

bzr does not strike me, offhand, as being a lot better.

I never said it was better or worse. My comment is about the "field" and how accurate its "information" is in general. Sometimes social signalling and network effects are good. What disturbs me is that so many of us use this as a substitute for facts and objective analysis.

Taking social signalling and network effects into account is okay. Only going that far and stopping is just dim laziness. (It's also behind the worst examples of sexism and ageism in our "field.")


> What disturbs me is that so many of us use this as a substitute for facts and objective analysis.

I think there's something to this. At a guess, people use a heuristic because facts and objective analysis are hard. I don't mean that sarcastically— I mean that it's difficult and complex even if you are not lazy. When people opt for what everyone else is using, they receive the benefits of treading a well-worn path. This isn't an excuse, but I am sympathetic. Some people are just trying to get work done.

On the other hand, that is a poor justification for being too lazy to do the job right. Often a problem isn't as hard or complex as it looks, and you might just learn something while looking into it. You get the idea.

Also, +1 to your comment re: sexism and ageism.


But that was more about bzr's trajectory than its popularity. My (uninformed) impression is that Fossil is niche but actively maintained.


It's more about attracting people with an easy barrier to getting involved. Granted, it's not really that big a deal (IIRC there are github mirrors), but it is an obstacle.


A very smart move by Fossil--it makes it safe to try. There's a path out.


"most of Canonical's in-house projects have abandoned bzr for git".

Nope. Unity, Mir, Upstart, all of the new phone/tablet apps and platform tools are on bzr on launchpad.

Not disputing the fact that git has won the war, just nitpicking that point.


Here is a previous discussion on emacs-devel from Mar'13 http://lists.gnu.org/archive/html/emacs-devel/2013-03/thread...

Stallman's opinion on this subject - http://lists.gnu.org/archive/html/emacs-devel/2013-03/msg009...

TLDR Stallman doesn't want Emacs to give up on bzr (also a GNU project) yet. This opinion might change now though.


Just read the link to Stallman's opinion - side question: he signs his name with a Dr. prefix. But did he finish graduate school at MIT?


According to http://en.wikipedia.org/wiki/Richard_Stallman, he has 14 honorary doctorates and professorships.


It's considered seriously naff to title yourself Dr. on the strength of an honorary doctorate, though — at the very least, you should add "honoris causa". (Even with an earned doctorate you're supposed to avoid referring to yourself as Dr. Smith, in the same way that you shouldn't introduce yourself as Mr. or Mrs. Smith.) Not that I'd really grudge the title to RMS though, who's done more technical hard work and innovation than many people who are running around with earned PhDs.


Well, he did finish a physics degree at Harvard, which awards Doctor of Science titles



For those wondering: Vim is currently on Mercurial @ Google Code [1]

[1] http://www.vim.org/sources.php


And happily for git users, the git-hg plugin makes maintaining a git mirror of the vim hg repo very convenient:

https://github.com/cosmin/git-hg

Submitting patches upstream is also not a problem as Bram Moolenaar only accepts patch files on the dev mailing list.
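
Roughly, assuming git-hg is installed and with placeholder branch/patch names:

    git hg clone https://vim.googlecode.com/hg/ vim   # one-time git mirror of the hg repo
    cd vim && git checkout -b fix-foo
    # hack, commit, then produce a plain patch file for the mailing list
    git format-patch master --stdout > fix-foo.patch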


I'm not convinced Bram actually uses hg in his normal workflow, but provides the repo as a convenience.


Oddly, Bazaar, Git and Mercurial were all created around March/April 2005. Why the sudden appearance of popular DVCSs around that time, and why did Bazaar fall behind the other two in popularity?


All of them emerged due to the end of the free-license agreement for bitkeeper for the linux kernel. Git won due mainly to Linus's personality and the rise of "social coding" via github. Bazaar failed because, at the beginning, it was painfully slow compared to git and mercurial. Its speed has increased over time, but a bad reputation is hard to get rid of.


I'd say git is popular not just because of Linus, but also because it is oriented towards "just getting stuff done", rather than towards theoretical concepts.

You can rewrite history, fix your mistakes, and generally do whatever you want. When merging, git isn't picky, either: if the code looks the same, it is the same.

In-place branching is hugely useful, just switch your tree in an instant (your editor should update the contents of your files automatically). So is the stash. Overall, it's just a useful tool that doesn't try to teach you "how things should theoretically be done", and never says "well in order to get X, you should have done this a long time ago".
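
For example, pausing half-finished work to make a quick fix elsewhere:

    git stash                      # shelve uncommitted changes
    git checkout -b hotfix master  # in-place branch switch, same directory
    # ...edit and commit the fix...
    git checkout -                 # "-" jumps back to the previous branch
    git stash pop                  # resume exactly where you left off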


Except bzr is conceptually easier to learn and has a more consistent command syntax than git.


What are some of the conceptual differences that make bzr easier to learn? (I agree that git's command syntax is inconsistent.)


Command names are closer to SVN/CVS ones, which lowers the learning curve for similar concepts. The documentation is really at another level of didacticism compared to git, and no, pro-git is not enough either. Changes are tracked automatically, no need to explicitly add the files you changed to a particular commit (whether or not that is a good idea is another discussion). Empty directories can be versioned, no need to create a .gitignore in them to fool the system (same remark as above).
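
For comparison, the usual git-side workarounds (the .gitkeep name is only a convention, not a feature):

    mkdir logs && touch logs/.gitkeep   # git cannot track an empty directory
    git add logs/.gitkeep
    git add -A                          # new and changed files must be staged explicitly
    git commit -m "track the logs directory"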


The syntax is part of it being easier. Also, though it's been a while since I tried using any official git documentation, bzr's website had handy tutorials, references, and cheatsheets available. Great layout, assumed no VCS experience.


That's not conceptually simpler, that is just easier to learn. Git's concepts actually are simple. You can explain the concepts and guts of git to developers with a whiteboard in a few minutes.
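
The whole model is even inspectable from the command line, e.g.:

    git cat-file -t HEAD             # "commit"
    git cat-file -p HEAD             # tree id, parent id, author, message
    git cat-file -p 'HEAD^{tree}'    # one (mode, type, sha, name) entry per file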

The standard CLI UI is admittedly a weak-point, but it does not appear to have slowed adoption...


I get what you're saying, but it depends on how you learn it. I tried learning both of them through their official tutorials and /their CLI commands/. With git's more confusing command set, I had a harder time learning.


All of them emerged due to the end of license agreement for a free license of bitkeeper for the linux kernel.

Bet bitkeeper regretted that decision...


If I recall, the zeitgeist then was that bitkeeper was a necessary evil. My guess is that git would have come into being eventually in any circumstances short of bitkeeper becoming free software, and maybe even then. Distributed change management is such a critical component of the kernel development process (especially for Linus and the other maintainers) that relying on someone else's software seems suboptimal.


The reason Bitkeeper dropped the kernel was that certain kernel devs had started writing free software to interact with the kernel repository (they wanted to be able to perform certain tasks that bitkeeper couldn't). Had this continued, we would probably have ended up seeing a free version of Bitkeeper.


They don't seem to have updated their website since then: http://www.bitkeeper.com/


Git was by far the fastest, this was a big factor when evaluating the move from svn (or similar) to a new system.


Not at all, early git was way slower than early hg. For some operations, e.g. cloning, git is still way slower.


But most people were comparing it to CVS or SVN, and it was much faster than those.


>> Bazaar failed because, at the beginning, it was painfully slow compare to git and mercurial. It speed has increase over time, but bad reputation is hard to get rid of.

Is this an example that goes against the common advice to launch an MVP fast to test the market (and then keep on improving)? It seems that the advice is valid only when there is nothing for the customer to compare the to-be-launched product to. If competing products end up launching at around the same time as yours, the advice may turn against you.


Git was also early to the market, but had a fast core and terrible user interface. Git was used for the Linux kernel only two months after Linus had started coding.


This is a good point. Is there more to this? As a big believer in the MVP, this is something I have to look more into.


Well, as this article implies, a lot of people don't consider bzr's speed minimally viable.


Adding a bit of history to the other comments: Bazaar is actually a successor to an earlier DVCS, called Gnu Arch (or Tom Lord's Arch, TLA, at some point). It started out in 2001 and was, I believe, the first of the DVCS crowd. It had some idiosyncrasies, but was a huge step up from CVS in terms of its principles.

tla was forked into baz (previously Bazaar), and bzr (previously Bazaar-NG) was a rewrite taking into account lessons learned from tla/baz. Darcs is yet another DVCS inspired partly by Gnu Arch.

So while the explosion of new DVCS around 2005 can definitely be traced back to the Bitkeeper incident, I believe the seed for modern DVCS was laid a bit earlier, in 2001, with Tom Lord's Arch. I think Gnu Arch/tla is to be credited with originally introducing many of the concepts of distributed version control.

Of course, if anyone knows of earlier history on distributed VCS, or of a VCS that isn't in some way a spiritual successor to TLA, I would be quite interested in knowing that.


Here [1] is a good but necessarily incomplete summary of version control tool history, written by ESR.

Tom Lord does deserve much credit for DVCS concepts, but so do Larry McVoy (Bitkeeper) and Graydon Hoare (Monotone).

[1] http://www.catb.org/esr/writings/version-control/version-con...


I'd give Larry McVoy most credit of the three. He worked on TeamWare at Sun, which was the first DVCS, at least the first I have heard of. I don't know how much credit goes to him specifically, compared to the other people who worked on TeamWare though.

http://en.wikipedia.org/wiki/Sun_WorkShop_TeamWare


One thing where git wins for me is interoperability. git's format-patch, send-email, apply and am subcommands make it very easy to interoperate with others using plain-text patches on mailing lists, for code review, etc.

At this stage, it isn't a "bzr vs. git vs. hg" question at all. It's just "look, it's a patch".

I think this ease and ability to work losslessly with plain text patches gives git a clear advantage over bzr.

This functionality makes it really easy for people who don't know git to interoperate with people who know it.

Lossless interoperability with plain text is a key Unix principle (see TAOUP) and something that bzr lacks.

With bzr, everybody involved with a project must use bzr. OTOH, a project that uses git can work more easily with a whole spectrum of people since sending a simple patch to a mailing list provides exactly the same ease of workflow to git-using developers as a complex multi-stage set of changes that is heavily reviewed and modified before being committed.

(I don't know hg well enough to understand how it fits in here)
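
The whole round trip fits in a few commands (the list address is made up):

    # sender: turn the last two commits into mailable patches
    git format-patch -2 -o outgoing/
    git send-email --to=dev@example.org outgoing/*.patch
    # receiver: apply a saved series, preserving authorship and messages
    git am --3way series.mbox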


I'm entirely ignorant of bzr, but I've got to say that this surprises me. We use svn, which certainly works well with patch.


To the extent that svn works with patch, so does bzr, and so does git.

What I'm talking about is lossless integration with entire patch sets in the form that git does it with format-patch, send-email and am.


git's patch support is more feature-complete: atomic application by default, and it can use parent information for a three-way merge when applying patches, if requested.


Git won the popularity war because the Linux kernel used it and then Github pushed it mainstream. Frankly, Bzr never got as popular as Git and Hg because it just isn't as good. It's slow, has a bizarre (sorry) branching model, and came out of Baz and Arch which were downright terrible (to be fair, Bzr shares no code and was designed as a rewrite to jettison all the stupidness of Baz and Arch—nonetheless it is tainted by being related to them).


Actually I think Github got popular because it was Git, not the other way around. At the time it appeared, people just wanted to have git repos; Github was there, so they used it.


There wasn't a demand for Git repos any more than there was a demand for Hg repos. For instance, open source projects moving away from SVN seemed to be pretty evenly distributed between Git and Hg.

The real demand was for a way to publish code and collaborate, and GitHub provided a truly innovative approach (the social aspects, encouraging forking and pull requests) that was miles better than the alternatives (SourceForge was already in decline, Google Code didn't have the social/forking/pull request aspects). I think they would have been successful with any distributed VCS. I always preferred Git, so I'm happy they chose Git, but I think that if they had chosen Hg, we'd be talking about how Hg won the popularity war instead of Git.


Confirming that, Bitbucket added git support because they saw Mercurial as not competitive with Git.


But only after Git became wildly popular because of GitHub.


This is what happened, basically a huge "I told you so, proprietary software blows." for the Stallmans of this world: http://www.theregister.co.uk/2005/04/14/torvalds_attacks_tri...


This is excellent trolling material because you can argue in both directions: It proves proprietary software can't be trusted; and at the same time it proves Open Source is all about free knockoffs of innovations made in proprietary software.


I think it was because the free version of bitkeeper (which the Linux kernel used for vcs) was taken off the market. So the Linux devs would either have to pay for a DVCS or make their own... Several made their own, Git was chosen for the kernel, and a lot of people found uses for the others. I could be wrong, but I think that's the case.


There never was a free version.

BitMover Inc provided free BitKeeper licences to the Linux kernel devs. This was unpopular because of the non-FOSS nature of Bitkeeper. Andrew Tridgell [1] at OSDL became frustrated with some aspects of the software and reverse-engineered its network protocol to develop his own limited client. This caused BitMover to revoke the licences used by OSDL, which in turn prompted Linus Torvalds (who was at OSDL) to develop the early version of Git. [2]

[1] http://en.wikipedia.org/wiki/Andrew_Tridgell

[2] http://en.wikipedia.org/wiki/BitKeeper#Pricing_change


In 2005, there were already some free, open-source DVCSs. Linus considered using Monotone for the Linux kernel project before deciding to start Git.


git is used by a bigger project (the kernel), that's mainly why. they're all ok otherwise, even with their technical differences.

github also contributed to amplifying the adoption, as it makes git really easy to use (even though people generally use git in a non-distributed way with github)

(i do prefer git in usage, tho, but that's subjective i suppose)


for me it's the easy branching and rebasing that make git the winner. That's in spite of the inconsistent cli.

If Hg were as good at branching and rebasing, I'd reconsider it.

btw, anyone know of a python command line client for git? I just keep finding stuff for programmatically using git and that's not what I want.


What do you mean, a python command line client? If you're using it from the command line and not programmatically, why does it matter what language it's written in?

(gitless may possibly be what you want)


It matters if you want to hack on it.


portability



Thanks!


And Xorg, and Samba, and Wine, and a bunch of other projects. In retrospect, people overestimate github's impact, I think. "Fast enough for big projects like the kernel" was a big deal.


Cambrian explosion followed by consolidation.

Git and Mercurial were Linux affiliated.

In the end, when apps are similar and don't have fatal flaws, it comes down to a popularity contest.


> Cambrian explosion followed by consolidation.

The term is "adaptive radiation", of which the Cambrian explosion is a prominent example. (Unrelated but awesome note: search for "Ediacaran biota" to look at some body plans that might have won, but didn't.)


If you read the article included in the post, it says that git, mercurial and bazaar all came about because Bitkeeper stopped being free. Why bazaar didn't get used as much, even with Canonical's backing, is a mystery to me.


I'm surprised he didn't mention GitHub. You can't ignore the luck that the guys that made GitHub used Git.


people need just one DVCS. it was slow and when it finally got fast, it didn't matter, because everybody was happily using the alternatives.


Canonical mismanagement, contributor license agreement, missing manpower.


The article and the quite detailed bzr history it links to do not appear to suggest this at all. Can you provide any justification for your three claims?


License Agreement: the first paragraph in the section "Hard to land patches" describes the CLA as one reason why it's hard.

Missing manpower: just glance at the mailing list archives.


Actually, sources for all three statements can be found within 2 paragraphs (moving forward and backward) of "Decline and focus on Ubuntu and UDD"


Bzr certainly isn't dead.

git has won the mindshare war for mainstream developers, but I've found bzr to be a useful FOSS alternative for developers forced to use a Windows environment [for business-related reasons, don't laugh..it pays the bills]. Its UI is a bit more intuitive than git's for new users.

Now, if you're a hardcore Linux-stack career dev...get onto git ASAP... but for lesser folks...bzr works just fine...


> but I've found bzr to be a useful FOSS alternative for developers forced to use a Windows environment [for business-related reasons, don't laugh..it pays the bills]. Its UI is a bit more intuitive than git's for new users.

However, the same also applies to Mercurial, and even though Mercurial is less popular than Git, it still has a lot more devs who use it than Bzr.


I'm using bzr on a large scale project and have not experienced any problems. A full fresh branch does take a while to complete, but I find it acceptable as we're not branching the full repo from the server that often. Usually it's local branches. When using git on a similar-scale project, we found that we spent a lot more time managing the source control system than we ought to. Source control should be something you rarely worry about; it should require no 'management' other than regular usage. Git was not that. Git required effort.


I had to interact with bzr last year for my GSoC project (Mailman, hosted on Launchpad). It wasn't a pleasant experience, and got a whole lot more complicated when I had to move a Git repo to LP.

Bazaar is bad. :(


Yeah, Mercurial is a viable choice still for just about all projects, and most of the time the simpler one.


the most annoying thing about bzr in my opinion is its stupid branching model. luckily a migration to git is a one-liner:

    git init ; bzr fast-export --plain . | git fast-import


luckily a migration to git is a oneliner.

I wish. Unfortunately, not all repos we use are owned by us, and using two VCSs is worse than sticking with bzr.


Isn't it possible to track the upstream just like with svn? We use git internally but interact with third party subversion repositories. We do it by dedicating a branch to tracking the svn repo, use git-svn to branch and to push the commits (after rebasing). It's not perfect but after a few hurdles at the beginning it is now problem free.
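
For the record, the cycle we use looks roughly like this (URL made up; -s assumes the standard trunk/branches/tags layout):

    git svn clone -s https://svn.example.org/repo
    cd repo && git checkout -b feature
    # ...commit locally as usual...
    git svn rebase     # fetch new svn revisions and replay local work on top
    git svn dcommit    # push each local commit back upstream as an svn revision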



Curiously, the URL seems exactly identical in both submissions, which are almost consecutive ( ...94, ...96).

That probably means the duplicate detector has some delay on getting its past submissions data. The rate of submissions these days seems to average one a minute, but the delta on this pair may be different, and it is not apparent by now.


I can't tell now that the other one is dead, but I believe its link was http: while this one's is https:. I use the "https everywhere" browser extension so when I copied the URL I probably got the upgraded one. I think that explains why the dup detector didn't catch it. I don't know why this story took off and the other died. Maybe the headline?


You're probably right, I only looked at the path.

Thanks for the followup.


TLS FTW. (or sometimes TLS MITM FTW.)


The entire GNU project is like a house that looked so mod in 1971, but now has peeling paint, stained concrete, shag carpeting that smells pretty funky, and a crazy old relative that haunts the place. "Renovation" is changing the light bulbs and occasionally washing the stucco.


It's a bummer to think of all of the real-life dollars that have been sunk into Bzr's development, which I'm sure makes a decision like this harder. That said, isn't this what forking is for? The people who want Git support should code and maintain it themselves.


For those who want to learn BZR and help revive it by adopting it in your projects, there's a nice book here: http://www.foxebook.net/bazaar-version-control/


I wish people would use software based on merits, not on popularity. That being said, Bzr is slow; it was slow from day 1 -- and slowness is part of the UI experience.


Popularity means developers, which means bugfixes and more features in the future which projects may want to use. Switching DVCS has a cost to a project. Thus software popularity is important.

This may be unfortunate, but this is how it is. DVCSs are by no means complete today, and cross-pollination of features continues. An unpopular project that has fewer developers working on it will fall behind. A project that doesn't want to keep switching DVCS has a reasonable interest in the DVCS project's future, since future features will come "for free".


But popularity is one of the merits a software package can possess. Popularity offers:

* A wider availability of resources -- tutorials, books, documentation of any kind, a community around the software that can offer help and support to new users, related tools and add-ons.

* Evidence of the potential for longevity in the software -- the more popular it is, the more likely it is to continue to receive new features and bugfixes.

* Portability of the knowledge of the workings of the software -- for users, this means that investing time and energy into learning the software now is a better investment, because that learning has a higher likelihood of being useful down the road. For people running projects, it means a larger pool of people who already have knowledge useful to contributing to your project.

Your statement basically translates to, "I wish people would use software based on hypotheticals, rather than an evaluation of what it's like to actually use that software."


> I wish people would use software based on merits, not on popularity.

This is “your favorite band sucks” for software. For professional tools, popularity is usually directly tied to merits: notice how frequently people say they started with bzr/hg/darcs/etc. and switched to Git because it was faster/safer/better supported/added features they liked? Most of the competitors seemed to think they'd solved one big problem well enough that people would put up with the “minor” warts but in practice those are the things which you notice on a daily basis. Git was the third or fourth DVCS I tried and even fairly early on it was obvious that considerably more care was going into making basic day to day work easier and safer.


> I wish people would use software based on merits, not on popularity.

this. so much. but "social and signaling effects" and "mindshare war" instead. sigh. what /is/ true, though, is the quote from the linked article:

"All problems in computer science can be solved by another level of indirection... Except for the problem of too many layers of indirection."

anyway, in an attempt to return from the hopelessly abstract and say something cogent about hip technologies and the article at hand, umm... i wonder how bzr runs on pypy?


part of something's merits is its popularity.


Do people really decide whether to contribute to an open source project based on the version control system it uses, rather than the importance of the project itself?


If Emacs moves to git, can we call it by its proper name: git/Emacs?


"git won the mindshare war"

Sad, but true...


Git had two things going for it:

* Linus was using it for the kernel, so it became mature slightly faster (because it had more people's attention.)

* Github happened at the same time Sourceforge stopped being cool (this is the same time that Digg and Reddit started beating Slashdot.)


Github became popular because people wanted free Git hosting, not the other way around. Most other Github features just get in the way.

Also, instant branching is a really convenient feature. I remember from when I was using Darcs that I got lost pretty quickly with 10 copies in 10 directories.


I disagree wrt Github. A lot of projects I was involved in wound up switching from git to mercurial because it was too difficult to manage git master repos on a server compared to Mercurial. Github solved this neatly, and the issue system has allowed my last two companies to switch from Mercurial+FogBugz|Trac to Github. That's useful consolidation and lower admin overhead.


I think there was a co-evolution of the two. A lot of people started using git because Github was where the cool projects were being hosted. At the time, I recall hoping for bzr to "win". I like git these days, but coming from an "svn is king" world, bzr was a bit gentler for the transition.


The article also mentions the bad state of Mercurial; any idea what he could have had in mind? What's wrong with Mercurial these days, besides it being less widely used than Git?


I'm pretty sure Atlassian is the only major supporter of it, and even they co-host Git on BitBucket (I suspect to stay competitive with GitHub).

And in my very biased opinion, Atlassian markets themselves with an aggressively old-school mentality (closed source everything, monolithic software, questionable terms of use, strong force in the enterprise market, etc.), which makes Mercurial look bad by association.


facebook hired most of the mercurial hackers; to me they are the biggest current supporter.

(and I would have cited quite a few other companies/org besides atlassian, who probably contributed as much as atlassian to the development of mercurial)


What's so sad about it? IMHO git is awesome.


I only use git, but:

- it has a lot of non-orthogonalities and non-closure operations (ie options on one command are different or not present on other commands)

- if the network drops, fetches and pushes have to start from scratch. For fetches, it would be nice to save partial packs and resume them. For pushes over crappy links, doing similar could be a potential DDoS, so setting up some sort of rsync box is probably better.
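
One workaround for crappy links (paths made up): a bundle is a plain file, so it can ride any resumable transport:

    git bundle create repo.bundle --all   # run on the far side
    rsync --partial server:repo.bundle .  # rsync resumes where the network died
    git clone repo.bundle repo            # a bundle acts as a read-only remote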


An unpalatable command line that has no linear mapping between commands and their arguments and what you want to do.

It's a great tool, no doubt; but the user interface is terrible. I use Magit for Emacs which has eliminated my need for the git command line.


wow.

please, please, you Emacs/LISP gurus out there: make a working modern package manager and integrate the browser like lighttable does. and perhaps rewrite emacs from scratch so that the source code makes sense in today's world, not the 1980s'.

unfortunately lighttable is staying closed for much too long, but something like it is desperately needed.


>make a working modern package manager

package.el works great.

>integrate the browser

That may actually be possible in the future, see http://www.emacswiki.org/emacs/EmacsXWidgets

>perhaps rewrite emacs from scratch

That is just silly. The current Elisp interpreter might be replaced with an Elisp implementation on top of the Guile VM. That would be quite the big upgrade, technically speaking.

>unfortunately lighttable is staying closed for much too long

I agree, I don't have much faith in the "it will be free eventually, just trust us" development model. If the project really wanted to be community friendly then the source code would have been free from the beginning.


This XWidgets project for Emacs has a rather unfortunate name, given that there is already a well-known GUI toolkit named wxWidgets.


> make a working modern package manager

    M-x package-list-packages
> and integrate the browser like lighttable does.

    M-x eww
> and perhaps rewrite emacs from scratch

Would you like a pony, too?

> so that the source code makes sense in today's world not in 1980's world.

I've actually found that Emacs's source is very readable (even the C lisp engine stuff). Do you object to the lisp or to the C?


Emacs has a package manager, it works fine. I particularly like that I can script it to install my packages on first run.


works fine = works fine for me after I have spent x hours looking around the net, writing elisp myself. that's the whole problem with lisp. it is not communal, because it does not enforce standards.


(package-initialize)

Wow, now you have access to the package manager.

Emacs is meant to be customized heavily by its users and the language to do so is Elisp. If you are afraid of that then I don't know what to tell you.

You are quick to condemn all Lisps with an assertion that doesn't make sense to me.


I don't understand, what's wrong with package-list-packages? It has concise descriptions, MELPA is kept very well updated, and installing plugins is a breeze; no elisp configuration is necessary for like 90% of the packages you install.


M-x package-refresh-contents

M-x package-install

<package name>

That's with a blank .emacs file on Emacs 24 on a brand new user account I created just to test that for you.

EDIT: (ofc the auto-installing my favourite packages stuff is elisp I wrote)


The command preceding this outburst was:

$ sudo braindump bachback.core|tail

;-)


haha. I've actually spent time trying to read emacs's source code. I guess the downvoters haven't.


Rumor has it that LightTable will be open sourced very soon, but I doubt it will ever take the place of Emacs.


I doubt it will too, but I hope it gets a nice "marketshare" so people try to experiment and find new ideas. Emacs will probably benefit a lot from cross-pollination too (it excels at absorbing new ideas).


LightTable is truly terrible. Adobe beat that team with Brackets without even really trying, and all of the gimmicks it promised have been available in vim and subl for a while. They should do something else with their time.


not true at all. it has this thing called the browser in it, as well as completely live hooks and a modular system. this here http://www.youtube.com/watch?v=gtXpOD6jFls is impossible in any other editor.


> rewrite emacs from scratch

You obviously have no idea how huge an undertaking this would be. You are free to write your new editor and call it Fnbdt.


I agree - Emacs has to be rewritten on top of a modern rendering engine!


They did. It is called 'Eclipse'.


Just not the same. Adding functionality to Eclipse is a "project". In Emacs, it's just some code and an eval-last-sexp away.


Then do it.


How do you understand something like Webkit? It's so huge.



