I went back and looked at the older discussion, and it doesn't paint Stallman in a very good light as the head of a project. He pins the question of whether to keep using bzr not on whether it is good or whether the Emacs developers want it, but on whether it's being "maintained".
But then he seems to define maintenance as having fixed a specific bug that's been around for over a year, blocking a point release.
He admits that he can't follow the developers list to see if they're genuinely doing active maintenance (reasonable enough: he has a lot on his plate), but also won't accept the testimony of Emacs developers that the mailing list is dead and there's no evidence of real maintenance.
When questioned, he says that there's too much at stake to abandon bzr if it can be avoided at all. But the proposed replacement is GPL software. This is just madness.
I think this is more inertia than anything else. It is pretty obvious that Stallman isn't the easiest person to talk to, but he has consistently put a huge amount of effort into GNU for many years, and whatever he did as the maintainer seems to have worked out well enough in the end. If there was anyone as active in the Emacs community today as Stallman was in his prime, they could easily push through changes despite Stallman's disagreement, via a fork or otherwise, but there isn't such a person, and that is that.
Consider that it has been almost 30 years since GNU Emacs started, that for most of this time RMS has been the maintainer, and that GNU Emacs continues to this day to be the most advanced, popular and active Emacs there is, while the various forks like XEmacs and SXEmacs lost steam pretty quickly. So it certainly hasn't been all bad on his side.
The popularity of a specific fork of Emacs is beside the point; it's the popularity of Emacs as an editor (or should I say, as a platform?) that is at issue. Stallman was active; now that he's not active, he needs to either defer to people who are active or watch as fewer and fewer people take advantage of his work.
Stallman doesn't have much influence on Emacs development itself anymore, but Emacs is a GNU project, Stallman is still the head of GNU, and a move to another VCS would be a major organizational change. I too wish he would be less hard-headed about this, but let's not blow this issue out of proportion.
In replies, he says he doesn't oppose the move to git. Most devs who've replied support git. There's already a mirror. Let's not make mountains out of molehills...
As he said, 'more than Emacs is at stake here.' [1]
I presume this refers to Bazaar's status as part of the GNU project, and that RMS did not want to write off part of GNU without being certain he needed to.
Regardless, he has since OKd the switch from bzr:
I don't insist that Emacs should stay with bzr. I chose to support bzr because it was still a contender at the time. [2]
I mean, not fixing what ain't broke is certainly a matter of integrity. Chasing the latest, hippest new fad just to "attract young hackers" detracts from integrity. Making the editor an awesome badass attractive editor should be enough. Otherwise you'll only be attracting groupthink-prone douchebags, anyway.
For better or worse, having a vision is a very different skillset from having the management skillset to make things happen. Sometimes people deride management when talking about the importance of vision and leadership. There is also the current trend of companies giving up managers entirely.
RMS is a prime example of why management is an important skill. It is inefficient for him to do everything himself. It's also inefficient for him to chase people down for all the details. If he had a solid COO who understood his vision, his organization would be much more effective.
I'm not sure that conversation is a good example of what you are trying to describe.
Let's see what would happen if someone did the following things to the Python project:
#1: Hire away the package maintainer. Then, rather than continue and finish any current work, effectively remove that person from the community project.
#2: Redesign the underlying structure of the project (like, say, PyPy), but don't discuss any changes with the community. No PEPs, no discussion on mailing lists, no communication whatsoever.
#3: Ignore the current list of new features being worked on. Community goals are unimportant.
#4: Introduce regressions! Do not care about maintaining performance.
#5: Demand that the changes be implemented immediately in the next official release.
Would anyone expect that to actually work today? Sure, Stallman could have been more diplomatic and found (and succeeded with) a middle-ground solution, but the above steps are not how you join an ongoing software project.
I tried to read that conversation in a neutral light, because I think RMS and JWZ are both kind of ... polarizing personalities, but RMS really came off as a stubborn, ineffective whiner in that whole thread. In particular "you hired away my maintainer" is a pathetic excuse for not releasing something for so long. Anybody could have hired away your maintainer, and you'd have to soldier on; it has no relevance that your maintainer went to a "competing" (in your territorial view) project.
I also read the argument about the redesign of the event system and was pretty flabbergasted. The argument seems to reduce to "lucid emacs decided to design a proper event datatype because having an event be entirely represented by a simple integer keycode both lost information and made it impossible to represent certain keystrokes" versus "but ints are simple and backward compatible!"
Poaching the lead dev to work on your own fork is a hostile move, and in my book, it provides a reasonable excuse for the delays, since nobody was able to take over from him immediately.
The guys at Lucid also barely communicated for long periods of time, making collaboration impossible.
>> it provides a reasonable excuse for the delays, since nobody was able to take over from him immediately
> Huh?!? Open Source model not working or what?
I guess it's only an illusion, and crazy lawyers who like to add anti-poaching clauses to employment contracts. If a company can't handle losing its top engineers, clearly it's the proprietary model that is not working.
There is one (and only one) other possibility, which is that you and I both read the conversation with flawless objectivity, and that you are wrong. :D
> I couldn't change the plans, so I had to make the best of them. I suggested a design to him, one oriented toward editing formatted text--a feature I wanted Emacs to have, eventually.
Back when the choice was made, microkernels were all the rage in both academia and commercial ventures, and Stallman chose Mach since he thought it would speed up development. He was hardly alone in choosing Mach at the time: Apple (MkLinux, NeXTSTEP) and IBM (Workplace OS), amongst others, did the same.
He fully acknowledged that he made a mistake in going with Mach and as soon as Linux took off FSF focused on providing the necessary software to combine with Linux into an operating system and placed Hurd on 'life support', where it's been ever since.
That's a simplistic view. First, the GNU project is very much alive: the GNU tools are used in a huge number of operating systems and are installed on a staggering number of devices. I would bet that the system you are writing this comment from is running thanks to GNU software.
Second, it is debatable whether sticking to Hurd was a good or a bad idea technically. Imagine if Stallman and co. managed to convince a good number of developers that it was a good idea and the kernel was competitive with Linux, BSD's, etc. If you believe you have a technically superior vision for your product, should you compromise on it just because people who do not share in your vision will not join you?
In the end, I think what killed the pure GNU/Hurd OS was bad PR and absolutism. Hurd as a technical question was just a small part of that. Remember, the debate between the Free and the Open Source guys was pretty fierce. Today we use terms like FOSS to describe all open software, but when Linux and Hurd were young these were different camps with opposing philosophies, and the one that appealed to more developers won out. In simplistic terms, you can think of this as the VHS vs Betamax debate. Can you blame the Betamax backers for continuing to try to push it and "killing" it as a result?
Wait. You misunderstood me. I didn't say the entire GNU Project is dead. Hell no. When I said "project", I was referring specifically to GNU Hurd.
Also, my statement was going by the words of Hurd's former project leader, Thomas Bushnell:
"RMS was a very strong believer -- wrongly, I think -- in a very greedy-algorithm approach to code reuse issues. My first choice was to take the BSD 4.4-Lite release and make a kernel. I knew the code, I knew how to do it. It is now perfectly obvious to me that this would have succeeded splendidly and the world would be a very different place today.
RMS wanted to work together with people from Berkeley on such an effort. Some of them were interested, but some seem to have been deliberately dragging their feet: and the reason now seems to be that they had the goal of spinning off BSDI. A GNU based on 4.4-Lite would undercut BSDI.
So RMS said to himself, "Mach is a working kernel, 4.4-Lite is only partial, we will go with Mach." It was a decision which I strongly opposed. But ultimately it was not my decision to make, and I made the best go I could at working with Mach and doing something new from that standpoint.
This was all way before Linux; we're talking 1991 or so." [1]
Note, in regard to Bushnell's quote, that "1991 or so" and "way before Linux" are contradictory. (Or, at least, require a strained definition of "way before"; Linux was first released in 1991.)
>the GNU tools are used in a huge number of operating systems
You mean linux? That isn't a huge number.
>and are installed on a staggering number of devices
The staggering number of devices you refer to almost exclusively run busybox or one of the similar projects. GNU software is hugely bloated and not a good choice for embedded systems.
>Second, it is debatable whether sticking to Hurd was a good or a bad idea technically
No, it was fine technically. It had no developers and so nothing happened. Minix exists, obviously microkernels are possible.
GNU tools were often used on other OSes too, including Solaris and other commercial Unixes, and even Windows under Cygwin: particularly GCC and make, but also command-line tools such as grep, where the default platform versions were often not as feature-rich. I didn't use those platforms, so others will remember and know better, but I don't think huge was an obviously wrong description.
Last you heard wrong then. Occasionally we begrudgingly install some GNU bloatware because some poorly written software requires it. That's about it. Every other OS already comes with its own versions of all the unix tools.
From what I understand, the Mach kernel which is now used in XNU is not the Mach microkernel (version 3.0 and later) but is based upon the pre-microkernel 2.5 version of Mach.
I'm not sure where I read this originally but I just googled this source which seems to back it up:
I am not intimately familiar with the history of Mach, but what I do see is that in the late 80s and early 90s these two groups (NeXT and GNU) both saw Mach as the future (a position that makes no sense at a later time), and had vastly different outcomes.
I don't know much about how these people work, but I always figured Mach at Apple is just about momentum and familiarity of contributors, rather than technology. NeXT hired Tevanian who worked on Mach at CMU, they spent roughly a decade hacking on Mach, then Apple did the same. I'd imagine they employ people who know Mach well and haven't seen it as worthwhile to replace it.
I even remember they had this goofy project "MkLinux", which sought to put Linux in the position that BSD carries with XNU, on top of Mach... Just goofy stuff, unless you figure they had Mach hackers on staff.
That's misleading. Apple merged in a bunch of Mach 3 code but still maintains the architecture of Mach 2.5 (i.e., Mach + BSD both running in supervisor mode in one big monolith; no BSD server; xnu is not a microkernel…).
It is odd, I predicted about 5 years back that Apple would gradually converge the API to FreeBSD and then switch. They would still be better off doing that I think, but they show no signs of doing this.
It's a huge effort, but one that seems to be paying off quite handily. XNU seems like a competitive edge over the monolith FreeBSD, as far as desktop and mobile OS is concerned. FreeBSD has plenty of cool stuff, but in a completely different domain.
> XNU seems like a competitive edge over the monolith FreeBSD, as far as desktop and mobile OS is concerned.
This seems a bit delusional. I don't think it's controversial to say that Apple's biggest differentiators exist at higher levels than kernel space. I'd go so far as to say that anyone who claims that Apple's success is rooted in XNU and that the same could not have been done with Linux or *BSD at the lowest layer and all other pieces being equal does not understand what a kernel is.
I don't know anyone saying Apple's success is rooted in any one thing. However, IOKit is very important: "common code for device drivers, this framework also provides power management, driver stacking, automatic configuration, and dynamic loading of drivers". Even if the entire OS X could have been implemented on top of Linux or FreeBSD reusing those kernels (and it does reuse FreeBSD for low-level POSIX APIs!), how productive would that have been? I honestly don't know, but I choose to trust what I read from the original developers.
Those other kernels have driver frameworks too. And if they find them inadequate for some reason, they are of course able to make modifications.
But this is not a suggestion for them to scrap it, necessarily. As I said in some other comments on this thread, I think the real reason is that they had people that knew their existing kernel well, and don't see a need to replace it.
You are right. An actual crash in a kext causes a kernel panic. I think I read that this was possible, but I can't find how, and when, right now. The common thing that I'm thinking of is just voluntary error (exception) handling in kexts and reloading of kexts on their own. There is some isolation and benefit to it.
Edit: ok, I just figured out my source of confusion. IOKit allows userspace drivers, which can crash without resulting in panic.
IOKit allows communication between kexts and userland. You could call that "userland drivers", but it's not like you can write only userland code to implement a driver.
Care to explain? I thought that the Lucid guys made reasonable technical arguments (especially in light of, y'know, history, over the last twenty years) and that RMS was attempting to both grandstand and emotionally manipulate people into adopting his preferred position.
One of the reasons I chose GNU Emacs over XEmacs was a feeling that GNU Emacs will be maintained, even advanced, for as long as RMS has the strength to type. It's his baby.
(There were other reasons, the big one being momentum. XEmacs didn't run on the platform I was on for a long time, so it would have been a switch.)
An excellent point; for example, although I don't have any reason to think RMS himself did this, Emacs 24.4 will have file change notification support across all platforms where it compiles and where such notifications are available.
I grant that's a somewhat overdue feature for Emacs to have (e.g., I've wanted it ever since I set up Dropbox to synchronize my org-mode files across all my boxes), but it's definitely evidence that Emacs isn't lacking maintenance and improvement.
I wonder: If Emacs rarely gains new features in core, is it because there aren't enough developers doing enough to improve it, or rather because people are having a hard time thinking up new features to add which Emacs doesn't already have?
Sorry, I was unclear; by "core" I mean "the Emacs distribution itself", as opposed to libraries you find on Github, EmacsWiki, or wherever else that isn't part of the standard Lisp library you get when you download the Emacs tarball.
"When questioned, he says that there's too much at stake
to abandon bzr if it can be avoided at all."
The big difference between then and now is that this time you have a very competent developer with an exceptional reputation offering to lead the migration. With esr leading things, there is a lot less at risk.
>git won the mindshare war. I regret this - I would have preferred Mercurial, but it too is not looking real healthy these days
I confess that my perception of Mercurial is the diametric opposite of the author's. Recently I believe I have seen a modest resurgence of interest in Hg and increased uptake. Am I just seeing this through some peculiar VCS-warped glasses?
I believe that much of the popularity of git stems from github making it very easy to adopt, something that bitbucket doesn't seem to have pulled off as well.
Yep, I don't understand the author's assertions about Mercurial either.
Mercurial remains a better choice for a few use cases where git simply falls flat.
Among game developers, in particular, because of their need to have revision control for large assets, Mercurial seems to be more popular than git due to the large files extension.
And realistically, perforce or other solutions appear to be even more popular among that particular developer segment.
Personally, I use Mercurial wherever possible, but that's not because I believe Mercurial to be technically superior, it's just because I hate git's UI.
Perhaps among the general FOSS community git is more popular, but both git and Mercurial have yet to meet the needs of many developers.
It's not the same thing. That only stores the latest revision. You can't easily go back in history and check out earlier revisions of the large files. You can in hg.
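For anyone curious, enabling that is roughly this; the largefiles extension ships with Mercurial, but the asset path and revision number here are made up:

    printf '[extensions]\nlargefiles =\n' >> ~/.hgrc   # enable the bundled largefiles extension
    hg add --large assets/terrain.psd                  # track the file as a largefile
    hg commit -m "add terrain asset"
    hg update -r 42                                    # older revisions of the asset check out fine too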
Among game developers, in particular, because of their need to have revision control for large assets, Mercurial seems to be more popular than git due to the large files extension.
Most game devs -- I'd say even most devs in grown-up, professional shops -- use p4.
Most game devs -- I'd say even most devs in grown-up, professional shops -- use p4.
Hence why I said "realistically, perforce or other solutions appear to be even more popular among that particular developer segment." My comment you quoted was comparing the popularity of Mercurial to Git, not p4.
Also, I'd disagree with your assertion regarding "most devs in grown-up, professional shops". Microsoft's SourceSafe has a large following among the corporate world. And many of the largest tech companies I'm aware of don't use p4 primarily; they use git, mercurial, svn, cvs, SourceSafe, home-grown solutions, etc.
Disclaimer: I'm not a Microsoftie, just collecting some links and adding my own opinion.
Microsoft use TFS heavily [1].
Right now I imagine most MS projects are TFS but it doesn't appear to be mandated. Maybe for the big, internal-only stuff. ASP.NET is hosted on CodePlex as a Git repo [2] and MEF is Hg [3].
They've just added Git support to TFS and that probably means a lot of MS projects will migrate to Git over time.
I've noticed this, oddly, while my workplace is transitioning to git from Mercurial.
A lot of developers using .NET tend to go for Mercurial because a while back it felt a lot nicer to use on Windows. It's why I always preferred using Mercurial. A few .NET shops that use TFS/VSO are moving towards Git for the Visual Studio support, but I've noticed a few Python and PHP developers making the switch to Mercurial.
To be honest, I rarely need to do anything more than the basics so neither has a huge benefit for me. Neither feels particularly faster than the other, and both have comparable GUI tools. Aside from when I am pushing stuff to GitHub I tend to use whichever one pops to my head first on a project. I reckon a lot of developers are probably the same.
> A lot of developers using .NET tend to go for Mercurial because a while back it felt a lot nicer to use on Windows.
In my opinion it is more elegant. Git only works because it installs hacked up Linux utilities on Windows. In practice it might not matter but I feel dirty when I'm using "inelegant" solutions.
I guess it's a matter of taste or opinion but Mercurial is easier to use too. Though if you're just working on something solo the SCM doesn't really matter at all, you just commit and commit (and I mainly work solo).
Personally I also write mostly in Python so I'm naturally drawn to Mercurial.
Then again I've also been looking at and using Fossil for my projects because it's a single binary with no installer which makes it pretty cool in my opinion. It too works well and it's used as SQLite's SCM so I'm confident it won't screw up my projects. The other nice thing (although I haven't used them extensively) is that Fossil also includes an embedded web server that has a wiki and ticketing system so everything's integrated.
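Getting started is about this simple (a rough sketch from memory, so double-check the docs; the repo name is made up):

    fossil init project.fossil      # the whole repository is a single SQLite file
    mkdir project && cd project
    fossil open ../project.fossil   # check out a working copy
    fossil ui                       # built-in web server: timeline, wiki, tickets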
"Then again I've also been looking at and using Fossil for my projects because it's a single binary with no installer which makes it pretty cool in my opinion."
It actually relies on some configuration files in the user's directory. I had trouble even launching it on a heavily modified OS. I assumed it was a pretty simple and straightforward CLI tool that could work on a bare-bones operating system. I was wrong!
It seems that most .NET devs prefer something baked straight into Visual Studio and my experience has been that this is part of the problem in getting those teams to migrate to Git, which is most powerful from the command line.
Personally, I greatly prefer my source control system to be separate from my development environment or IDE.
Or use TortoiseHg or SourceTree, if you're not comfortable with the CLI. They both work great. Also, if you need to host 'your own github', try RhodeCode.
Microsoft did win the war; very little progress has been made by its competitors from the time; what's had success has been new OSes - OSX, iOS and Android (which while built of GNU/Linux pieces, is radically different from traditional GNU/Linux - enough to qualify as a different OS IMO, since the API is different).
Vim did win the war; there's still nothing better.
IBM did win the war, and then shot themselves in the foot with pricing on their new generation (which generation was a pretty radical shift). There's not much chance of git doing that.
Java did win the war; its competitors from the time are largely dying (Objective-C has had a kind of zombie revival due to iOS, but I don't expect it to last). You could argue that Ruby has overtaken it, but again the changes over the last ten years of ruby - and the influence of rails - have been enormous.
I don't think we should stop trying to make a better VCS. But I do think we should accept that Git has won against bzr and hg in their current form; neither of those will displace git without radical changes that they are probably unsuited to make. Most likely the successor to git will be a new program entirely.
Yes, that's nearly 25,000 lines of mixed-spaces-and-tabs pre-C89 C with 492 occurrences of #ifdef, many appearing in the middle of a function definition. I recently ran vim with debug symbols compiled and it was nice enough to dump a nice 4GB regular expression log file in my project directory. The way to turn that off is to find some ifdefs and comment them out. If vim won then, well, I'm not sure what winning means. I've switched to emacs with evil, which in my opinion is better than vim in a lot of ways.
> (Objective-C has had a kind of zombie revival due to iOS, but I don't expect it to last).
Yeah ok, "zombie-revival" sure, your credibility gets a score of 0 here. This isn't an argument, it's a prediction, and a stupid one. Nobody will come back to check your comment in 5 or 10 years and call you out on it. This is just the certain kind of asshat thing you can say and not worry about it coming true or not because you're some anonymous commenter making the internet richer with your irresponsible use of a keyboard.
> Yes, that's nearly 25,000 lines of mixed-spaces-and-tabs pre-C89 C with 492 occurrences of #ifdef, many appearing in the middle of a function definition. I recently ran vim with debug symbols compiled and it was nice enough to dump a nice 4GB regular expression log file in my project directory. The way to turn that off is to find some ifdefs and comment them out. If vim won then, well, I'm not sure what winning means.
Winning means the user experience, not the code. And sure, I was lazy, it would be more accurate to say vim and emacs won between them (and are still fighting it out).
> Yeah ok, "zombie-revival" sure, your credibility gets a score of 0 here.
Do you disagree that a) Objective-C was more or less dead prior to the release of iOS b) almost all people currently using Objective-C are doing so solely because it's the language you can write iOS apps in c) absent huge, radical changes, Objective-C will never threaten Java's popularity the way that post-Java languages (C#, GHC Haskell (very different from the language that was standardized in 1990), Go, Scala) are?
No Objective-C was not dead before iOS. A thriving Apple was supporting Objective-C in every way possible, and moving from Carbon to Cocoa.
Objective-C is used to build applications for Apple software. It's not a threat to Java, but that doesn't mean Objective-C is dead. Objective-C will be around for a long time to come. It's a modern language that powers all of Apple's most recent technology. They have no reasons to change, and there are no signs that Apple is on the verge of disappearing into the aether.
Vim is shitty software. I like the UI, but the thing is single-threaded and everything runs on the UI thread. There's no hope for async, or an event loop, or even a setTimeout-like feature. The code is full of globals and trying to add new features to the thing is going to result in inexplicable, unfathomable segfaults. Vim uses shitty regular expressions in the UI thread to do syntax highlighting, which is why that's slow for big files and why the syntax highlighting breaks.
So the code matters. There will never be powerful IDE like features as long it's this single threaded thing that only ever does anything as a response to user input. Given the state of the code, changing this does not seem ever possible.
> So the code matters. There will never be powerful IDE like features as long it's this single threaded thing that only ever does anything as a response to user input.
Run VIM in a sub-process, and communicate with it through a fake terminal. Basically, quarantine the madness.
Say what you like about Emacs; its source, both in C and in Emacs Lisp, is generally quite readable, and the former I've found to be especially well commented.
Of course it's allowed. So is judging someone's credibility based on the predictions he chooses to make, whether by the accuracy of said predictions over time, or the plausibility of said predictions in advance of proof's arrival.
> Android (which while built of GNU/Linux pieces, is radically different from traditional GNU/Linux - enough to qualify as a different OS IMO, since the API is different).
The "GNU/Linux" vs "Linux" discussion is a long one but I'm pretty sure there's (almost?) no GNU in Android.
I don't normally say "GNU/Linux", but I felt this was a case where the distinction is particularly important, because Android does run the Linux kernel, but is (IMO) a different OS from GNU/Linux.
I don't think there is. What are you thinking of? All of userland is not GPL licensed, I don't even think any is LGPL, so I don't think there is any GNU there.
Android is very different from the GNU/Linux operating system because it contains very little of GNU. Indeed, just about the only component in common between Android and GNU/Linux is Linux, the kernel.
As an avid git user, I believe that git's victory against current tools does nothing to stop someone from creating a better DVCS in the future. They'll just have to identify why git won and address those points, if they want to dethrone git.
The success of git is - apart from the speed - related to its property of being the "stupid content tracker".
Git's architecture is a simple bottom-up engineering approach. The user interface (porcelain) is built upon a conceptually simple core (plumbing). Other VCSs have defined a nice UI first, which was then implemented by a core that depends on the UI. This top-down approach means that the core components can suddenly become quite complicated, and in the end it is hard for the user to get a deeper understanding of the system.
The funny aspect of this is that a lot of people complain about Git's bad user interface. It turns out, however, that Git itself is really easy to grasp.
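To illustrate what I mean by a simple core, a sketch only; the hash is a placeholder for whatever the first command prints:

    echo 'hello' | git hash-object -w --stdin   # store a blob, prints its hash
    git cat-file -t <hash>                      # -> blob
    git cat-file -p HEAD                        # a commit object: tree, parent, author, message
    git ls-tree HEAD                            # the tree object: file names mapped to object hashes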
Exactly. The git data model is intuitive and easy to grasp after a short amount of time. The porcelain is poorly designed, but part of that is due to the flexibility of primitives underneath, and the desire to support arbitrary workflows.
Contrast with svn, which attempts a very clean porcelain interface with a completely muddled data model underneath. The conflation of repositories, directories and branches in svn makes it impossible for it to ever achieve 20% of git's functionality, simply because things are so poorly defined.
After using git for 6 months I understood it better than I understood svn after the previous 5 years. I would prefer a better porcelain, but given that software development is my full-time job and that I can use git for all software development regardless of the language, I'm happy to commit a bit of muscle memory to git's idiosyncrasies.
Now I am laughing and laughing bitterly. git supports one workflow -- the massively decentralized one. To this day you can't have a simple workflow with git, the one that cvs/svn supported and practically all small projects would benefit from, the one that bzr calls a bound branch.
git, I believe, is the textbook case of what the opposite of a user-friendly UI is. Commands have switches which change the command so fundamentally it should be another command. Which no one wants 'cos there are like 140 commands already. Switches which across commands do the same thing but are named differently. The same command doing wildly disparate things without any indication of what's happening -- try git checkout file, and guess what the state of file will be. It might come from the staging area or it might come from HEAD if it wasn't staged. Nuts.
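To spell out that checkout example (file name made up):

    echo one >> file.txt && git add file.txt   # staged copy now differs from HEAD
    echo two >> file.txt                       # a further, unstaged edit
    git checkout -- file.txt                   # restores the *staged* copy, not the commit
    git checkout HEAD -- file.txt              # this is what restores the version from HEAD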
I'm a happy git user, but tell me how this is supposed to work: there's me and one other guy working on a project. There are three logical branches: trunk, his branch, and my branch. In SVN there would be five source trees: those three on the server, his working copy and my working copy. But in git we end up with fifteen: the three on the server, my branch on my machine, his branch on my machine, master on my machine, the remote-tracking copies of those three on my machine, and the same again on his machine. All of which could be different.
How do I reduce that complexity? I always pull and never fetch, which helps slightly, but only slightly; pull seems to fetch other branches, so it's still possible to have my copy of a branch end up behind my remote-tracking copy of that branch. There's no command analogous to pull for "add and commit and push", so that's always a second step to possibly forget (I don't want to rely on an alias as I work on a number of different machines). Most problematic at the moment is that there's no way to tell the difference between an up-to-date branch and a non-remote-tracking branch, so I sometimes delete branches that I haven't fully pushed, because I forgot to make them remote-tracking, so they didn't show up as behind when I "git status"ed.
Set up a repo on a server somewhere and declare it to be the "central repo". You and the other developer pull/push from that repo only. In other words, ignore some of git's capabilities and treat git like SVN. Just because you can pull from your teammates does not mean that you have to.
I've been on a team that transitioned from SVN to Perforce, then again to Git, keeping the same workflow all the way through.
It isn't the way that I prefer using git, but it works perfectly well.
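The day-to-day loop is then roughly this (a sketch only; the server path is made up and the remote/branch names are just the defaults):

    git clone ssh://server/path/central.git   # everyone clones the one blessed repo
    # ... edit files ...
    git commit -am "describe the change"
    git pull --rebase origin master           # pick up teammates' work without merge bubbles
    git push origin master                    # publish; if this fails, pull again and retry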
> Set up a repo on a server somewhere and declare it to be the "central repo"
That's the situation I described. We still have the problems I mentioned: sometimes we commit without pushing (particularly because we sometimes didn't set a branch to be remote-tracking and didn't realize this), and sometimes our branches are behind our copy-of-the-remote-branch because we pulled different branches (which leads to bogus merges in the history).
Committing without pushing is not a problem (unless you intended to push but forgot to?) It is not a concept that necessarily exists in traditional centralized version control systems, but the fundamental problem still exists. In SVN "forgetting to push" is just called "forgetting to commit".
If your team member forgot to push and you put out new changes, that is a problem for him to resolve. If you forgot to push, and your team member put out new changes, that is a problem for you to resolve. Workflow wise, this all works the same as it does with any other centralized workflow.
If you forget to check for updates... well that is something that happens in other centralized schemes as well. You figure it out when you go to push and it fails, you correct it, then you are good to go.
> Committing without pushing is not a problem (unless you intended to push but forgot to?) It is not a concept that necessarily exists in traditional centralized version control systems, but the fundamental problem still exists. In SVN "forgetting to push" is just called "forgetting to commit".
Sure, but you hit the problem twice as often in git, because you have to do twice as many things.
> If you forget to check for updates... well that is something that happens in other centralized schemes as well. You figure it out when you go to push and it fails, you correct it, then you are good to go.
In SVN that doesn't show up as a merge in the history.
> "Sure, but you hit the problem twice as often in git, because you have to do twice as many things."
Well no, I don't...
If this really is a frequent problem for you, then you might want to consider adding a note to the end of git-commit's output to remind you to push, or even just aliasing git-commit to push by default. I would recommend that you instead learn how to use git, but failing that...
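If you go the alias route, something like this works (the alias name here is arbitrary):

    git config --global alias.cap '!f() { git commit -a -m "$1" && git push; }; f'
    git cap "fix the thing"   # commit all tracked changes with that message, then push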
> "In SVN that doesn't show up as a merge in the history."
If you don't want to resolve those situations with a merge, then don't resolve those situations with a merge... Rebasing exists for a reason.
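Concretely, that's just (sketch):

    git pull --rebase origin master        # replay your local commits on top of upstream instead of merging
    git config --global pull.rebase true   # or make rebasing the default behaviour of pull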
This goes against all the advice I've seen elsewhere; we push our feature branches all the time, and sometimes pull from each other's if we're working in overlapping areas, so we need to not rebase them.
You're right, thinking about it - most of the time I wouldn't have a copy of my coworker's branch (though we do sometimes pull changes from each other's feature branches - indeed that's supposed to be the big advantage of git, no?)
Since you have already read ten things about it and don't understand it, it would help if you told us what you do not understand so that we do not waste our time rehashing things that you have already not understood.
That's a very good question. I hadn't really tried to pinpoint what exactly I don't understand about it until now, but here's what some thinking uncovered:
Branching/merging/committing is pretty straightforward. The problem is that some commands seem to be very convoluted. For example, why does reset do four different things depending on whether it's soft or hard or plain? I keep having to Google to find how I can revert my latest commit.
Another thing I have trouble with is obscure failures. Obviously this isn't something I can just learn, but there are times when git fails for a reason I don't understand...
Ah yeah. The "why" with a lot of the porcelain stuff probably tends to be an unsatisfying "Because somebody didn't think it through very much a few years ago." git-reset would probably be better if it defaulted to --soft, and possibly left the other options to other commands.
Same here, whenever I attempt to give it another chance, sooner or later I break down and revert to hg-git. I recently discovered EasyGit [1] and it seems promising from the docs. There's also gitless [2] that was covered in HN a few days ago.
As far as I can tell, Mercurial is being actively developed (i.e., updates being pushed out on a roughly monthly basis), and it's also actively being used. It isn't used as much in the open source community (probably the GitHub effect), but that doesn't mean it's "not looking real healthy" (anymore than OS X does compared to Windows).
Bazaar is a different story. Technically speaking, Bazaar isn't going away anytime soon. Canonical's development depends too much on it. The primary problem with Bazaar is that updates on Canonical's side are pretty much limited to dealing with issues that Canonical has, and there is little activity with respect to getting other bugs/issues fixed/addressed.
This is unfortunate, because Bazaar does do a few things better than either Git or Mercurial.
I used Mercurial with a hg-git plugin back in the day as we were a Mac-only shop and I was the only schlub with a Windows machine (I was in charge of fixing IE issues). Mercurial was very good back then, when Git performance was subpar on Windows.
Git on Windows is now blazing fast enough that msysgit will get you by. I still favor the Git workflow even when I am using other SCMs (like right now, where we have a Subversion dependency)
Yes, Mercurial is looking quite healthy—Facebook details why Git doesn't work for them and all of the improvements they've made to Mercurial for their repo which is several times the size of the Linux kernel: https://code.facebook.com/posts/218678814984400/scaling-merc...
FWIW, just a few days ago I was browsing through the Emacs Bzr repository - after a full bzr clone, which took ridiculously long as well, a simple bzr blame takes 40-60 seconds to execute locally, and I have an SSD drive, a four-core Intel i7 and 8GB of RAM. I have never seen this kind of slowness with Git, with any repository size.
Oh yeah, doing anything with bzr and Emacs is just painful. For fun, check your CPU usage while you're doing that bzr blame. I did one recently and it pegs at 100% for the whole time. Git is way more efficient.
Inertia seems to fit with bad management style. Good managers should be fighting it when it becomes cumbersome. Stop making excuses for FOSS celebrities and start demanding better outcomes.
The important take-away here isn't the relative merits of each DVCS, but that bzr is not used by anybody any more, and it is impeding the uptake of new contributors to Emacs.
It's about lowering the threshold so that when I need to patch your project, I can trivially clone the repo with software I already have installed and know, make my change, commit, and create a patch or submit a pull request to Github or whatever, with so little extra hassle that I feel compelled to do so rather than just making the change to my local tarball and never upstreaming the changes unless/until there's something large enough to be a pain to maintain separately.
Frankly, that ability is more important than the choice of DVCS: because of the lowered barriers to participation, there's more value in most people standardising than in picking the "optimal" DVCS.
The impression I have is that bzr is just another roadblock between a potentially interested novice and a patch accepted into the Emacs source.
(It's not by far the largest one, though, and while I think esr has a point, I also think it'd be of help for some of the current Emacs developers to publish a "How to start hacking Emacs" document, for the benefit of people like me who would love to contribute but who have absolutely no idea where or how to start.)
> help for some of the current Emacs developers to publish a "How to start hacking Emacs" document
1. Find thing you don't like
2. M-x find-function RET function-to-fix RET
3. Hack hack hack (use C-M-x or M-x eval-buffer liberally; also, read about edebug)
4. Make diff relative to Emacs base code
5. Send diff to bug-gnu-emacs@gnu.org (a rough sketch of steps 4-5 follows this list)
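For steps 4 and 5, roughly something like this (the paths and version are made up; adapt to wherever your Emacs source lives):

    cp -r emacs-24.3 emacs-24.3.orig    # keep a pristine copy to diff against
    $EDITOR emacs-24.3/lisp/simple.el   # hack hack hack
    diff -u emacs-24.3.orig/lisp/simple.el emacs-24.3/lisp/simple.el > my-fix.patch
    # attach my-fix.patch to a mail to bug-gnu-emacs@gnu.org, or use M-x report-emacs-bug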
What I love about hacking on Emacs is that it's so easy to find the bit of code responsible for a given feature and hack on that code in a running editor. There's nothing like it. If I'm using Firefox and don't like the address bar, I need to go dig through XUL, find the thing I don't like, and restart Firefox. Emacs? You just find the function and edit it right in the editor.
Thank you, yes, it's not so much finding the code I need to modify that's the problem, as understanding how any given single thing fits into the Emacs C source as a whole. Hacking the Lisp side I don't really have a problem with -- but if I want to, for example, extend the MESSAGE primitive so that it can check a list of messages not to emit in the minibuffer, things get very hairy, very fast.
A general overview of what's what, and where, in the C source would be extremely useful, and I haven't had much luck finding anything like that in the source distribution or online. (And, yes, I have read etc/CONTRIBUTE, etc/DEBUG, and (Info)Elisp/GNU Emacs Internals/*. And I'm pretty sure doing what I'm talking about doing to MESSAGE would be a bad idea, because it'd require a call up into Lisp space every time the primitive gets called, which can be extremely frequent especially during initialization. But I know somebody who'd like to have that functionality available, and it seemed like a relatively simple place to start, until I tried actually doing it.)
Ten (edit: even five!) years ago I'd agree with you. Back then a source control system was a piece of software you used for keeping a versioned history of your work and to enable you to collaborate with coworkers or friends. It was a tool.
Now? It's A Big Deal to a lot of younger developers. It is almost totemic. If it's not Git and (ideally) Github then.. it isn't worth hacking on?
Long answer yes with a but. They will eventually take up the mantle from devs who age out or die. We might have something totally different than Git at that point but it's not good to dismiss them either. That is how projects die.
For core work, it's probably not that important. For trivial patches, it's probably kind of a pain in the neck. But a lot of people start with a small patch, so it's best to encourage them by making it easy.
With a repository the size of Emacs, it matters. Look at some of the other comments on this article here on HN, where people note that trivial bzr commands on the Emacs repo take way too long to run.
For me, I know how to use subversion and git to a degree that I am really comfortable with (of the two, I strongly prefer git). Another VCS means that I have to take a few leaps: new commands, subtle to extreme differences in workflow, as well as different names for the same things. So a VCS that is well-known by people will (on average) make contributing more convenient for the average potential contributor.
After diving into Emacs' codebase, changing or tweaking a few things that bug me, there are a few walls to climb when actually contributing those changes. I.e. cleaning up the code, creating a patch/pull request, outlining changes and intentions, etc. An unfamiliar VCS adds another burden to the contributor. Remember that we are not talking about people who are paid for diving into their employer's VCS but about people who primarily work on other projects.
I never would have guessed. Pretty much all of the interaction I have with Emacs contributors is through packages on github. Emacs lisp is so ubiquitous and useful that it doesn't really make much sense to include most things in Emacs itself.
Many Emacs contributors are already using Git and simply publishing everything on Github. Most of the things in my .emacs come from Github. They're simply not part of the core Emacs.
I think the issue is not only bzr vs Git. It's also, if I understand things correctly, the super restrictive license that the core Emacs has, making every developer sign papers and send them (by snailmail!? or are scans allowed!?)... And if you have several other contributors helping you, you must have them all sign these papers.
I've seen at least one prolific .el Emacs author (I think the mulled/multi-line-edit author) complain about that: saying that out of 10 people who helped him, he managed to get nine of them to sign the papers and send them to him, but couldn't contact the last one...
And eventually he simply decided to abandon getting all the signatures and went his own way (i.e. Github / non-core Emacs, like many do).
I'm not well versed in licenses / GPL things but I'm a long-time Emacs user and I'm definitely seeing change: now most of the newer (and great) .el files I add to my Emacs are coming from Github.
Something I've wanted to see for a while is a fossil-like wrapper system for git. The idea of keeping bug tracking and wiki as part of the repo makes a lot of sense.
If Tcl Core wanted to move to git, Fossil exports repos to that format. So they really lost nothing. They have a small team of committers as well, so Fossil works just fine for them.
Well, there's also the "social and signaling effects" of using something that's non-git, that Eric S. Raymond articulates well: "we cannot afford to make or adhere to choices that further cast the project as crusty, insular, and backward-looking."
Well, there's also the "social and signaling effects" of using something that's non-git
The "not a field" of Computer Programming, to appropriate Alan Kay's quip, is so broken that "social and signaling effects" swamp actual facts and information to a degree that makes it look like Astrology. I've been watching this for decades now -- literally.
Dynamic languages were for years after still tarred with being "slow" when both Moore's Law and progress in JIT VMs and generational GC had made them perfectly fine for tons of applications. If the half-life of patently false misinformation is literally about a decade, and what passes as fact between practitioners is less fact than school rumors, what does that say about our "field?"
There are tons of people who use and know git. It's fast, it works pretty well. There's some value in the fact that it's widely known and used (network effects), probably enough that whatever takes its place will probably be not just a bit better, but a lot better, in some way. bzr does not strike me, offhand, as being a lot better. Is fossil?
So in this case, I think that the network effects are an important fact.
I'm talking in general. I think it's good they're going to git.
bzr does not strike me, offhand, as being a lot better.
I never said it was better or worse. My comment is about the "field" and how accurate its "information" is in general. Sometimes social signalling and network effects are good. What disturbs me is that so many of us use this as a substitute for facts and objective analysis.
Taking social signalling and network effects into account is okay. Only going that far and stopping is just dim laziness. (It's also behind the worst examples of sexism and ageism in our "field.")
> What disturbs me is that so many of us use this as a substitute for facts and objective analysis.
I think there's something to this. At a guess, people use a heuristic because facts and objective analysis are hard. I don't mean that sarcastically— I mean that it's difficult and complex even if you are not lazy. When people opt for what everyone else is using, they receive the benefits of treading a well-worn path. This isn't an excuse, but I am sympathetic. Some people are just trying to get work done.
On the other hand, that is a poor justification for being too lazy to do the job right. Often a problem isn't as hard or complex as it looks, and you might just learn something while looking into it. You get the idea.
It's more about attracting people with an easy barrier to getting involved. Granted, it's not really that big a deal (IIRC there are github mirrors), but it is an obstacle.
It's considered seriously naff to title yourself Dr. on the strength of an honorary doctorate, though — at the very least, you should add "honoris causa". (Even with an earned doctorate you're supposed to avoid referring to yourself as Dr. Smith, in the same way that you shouldn't introduce yourself as Mr. or Mrs. Smith.) Not that I'd really grudge the title to RMS though, who's done more technical hard work and innovation than many people who are running around with earned PhDs.
Oddly, Bazaar, Git and Mercurial were all created around March/April 2005. Why the sudden appearance of popular DVCSs around that time, and why did Bazaar fall behind the other two in popularity?
All of them emerged due to the end of the free BitKeeper license agreement for the Linux kernel.
Git won mainly due to Linus's personality and the rise of "social coding" via github.
Bazaar failed because, at the beginning, it was painfully slow compared to git and mercurial. Its speed has increased over time, but a bad reputation is hard to get rid of.
I'd say git is popular not just because of Linus, but also because it is oriented towards "just getting stuff done", rather than towards theoretical concepts.
You can rewrite history, fix your mistakes, and generally do whatever you want. When merging, git isn't picky, either: if the code looks the same, it is the same.
In-place branching is hugely useful, just switch your tree in an instant (your editor should update the contents of your files automatically). So is the stash. Overall, it's just a useful tool that doesn't try to teach you "how things should theoretically be done", and never says "well in order to get X, you should have done this a long time ago".
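A few of the moves I have in mind, in sketch form (the branch name is made up):

    git checkout -b quick-fix   # branch in place; the working tree switches instantly
    git stash                   # shelve half-done work; bring it back later with stash pop
    git commit --amend          # fix up the commit you just made
    git rebase -i HEAD~3        # rewrite the last three commits before anyone sees them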
Command names are closer to SVN/CVS ones, which lowers the learning curve for similar concepts.
The documentation really is at another level of didactics compared to git's, and no, Pro Git is not enough either.
Changes are tracked automatically, no need to explicitly add the files you changed to a particular commit (Whether or not it is a good idea is another discussion).
Empty directories can be versioned, no need to create a .gitignore in it to fool the system (Same remark as above).
The syntax is part of it being easier. Also, though it's been a while since I tried using any official git documentation, bzr's website had handy tutorials, references, and cheatsheets available. Great layout, assumed no VCS experience.
That's not conceptually simpler, that is just easier to learn. Git's concepts actually are simple. You can explain the concepts and guts of git to developers with a whiteboard in a few minutes.
The standard CLI UI is admittedly a weak-point, but it does not appear to have slowed adoption...
I get what you're saying, but it depends on how you learn it. I tried learning both of them through their official tutorials and /their CLI commands/. With git's more confusing command set, I had a harder time learning.
If I recall, the zeitgeist then was that bitkeeper was a necessary evil. My guess is that git would have come into being eventually in any circumstances short of bitkeeper becoming free software, and maybe even then. Distributed change management is such a critical component of the kernel development process (especially for Linus and the other maintainers) that relying on someone else's software seems suboptimal.
The reason Bitkeeper dropped the kernel was that certain kernel devs had started writing free software to interact with the kernel repository (they wanted to be able to perform certain tasks that bitkeeper couldn't). Had this continued, we would probably have ended up seeing a free version of Bitkeeper.
>> Bazaar failed because, at the beginning, it was painfully slow compared to git and mercurial. Its speed has increased over time, but a bad reputation is hard to get rid of.
Is this an example that goes against the common advice to launch an MVP fast to test the market (and then keep on improving)? It seems that the advice is valid only when there is nothing for the customer to compare the to-be-launched product to. If competing products end up launching at around the same time as yours, the advice may turn on you.
Git was also early to the market, but had a fast core and terrible user interface. Git was used for the Linux kernel only two months after Linus had started coding.
Adding a bit of history to the other comments: Bazaar is actually a successor to an earlier DVCS, called Gnu Arch (or Tom Lord's Arch, TLA, at some point). It started out in 2001 and was, I believe, the first of the DVCS crowd. It had some idiosyncrasies, but was a huge step up from CVS in terms of its principles.
tla was forked into baz (previously Bazaar), and bzr (previously Bazaar-NG) was a rewrite taking into account lessons learned from tla/baz. Darcs is yet another DVCS inspired partly by Gnu Arch.
So while the explosion of new DVCS around 2005 can definitely be traced back to the Bitkeeper incident, I believe the seed for modern DVCS was laid a bit earlier, in 2001, with Tom Lord's Arch. I think Gnu Arch/tla is to be credited with originally introducing many of the concepts of distributed version control.
Of course, if anyone knows of earlier history on distributed VCS, or a VCS that isn't in some way a spiritual successor to TLA, I would be quite interested in knowing that.
I'd give Larry McVoy the most credit of the three. He worked on TeamWare, which was the first DVCS, at least the first I have heard of. I don't know how much credit goes to him specifically, compared to the other people who worked on TeamWare, though.
One thing that git wins for me is interoperability. git's format-patch, send-email, apply and am subcommands make it very easy to interoperate with others using plain text patches on mailing lists for code review, etc.
At this stage, it isn't a "bzr vs. git vs. hg" question at all. It's just "look, it's a patch".
I think this ease and ability to work losslessly with plain text patches gives git a clear advantage over bzr.
This functionality makes it really easy for people who don't know git to interoperate with people who know it.
Lossless interoperability with plain text is a key Unix principle (see TAOUP) and something that bzr lacks.
With bzr, everybody involved with a project must use bzr. OTOH, a project that uses git can work more easily with a whole spectrum of people since sending a simple patch to a mailing list provides exactly the same ease of workflow to git-using developers as a complex multi-stage set of changes that is heavily reviewed and modified before being committed.
(I don't know hg well enough to understand how it fits in here)
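The round trip is roughly this (the list address is made up):

    git format-patch -1 HEAD                                    # write the latest commit as 0001-*.patch
    git send-email --to=project-dev@example.org 0001-*.patch    # mail it as plain text
    # on the receiving end:
    git am 0001-*.patch                                         # apply it, keeping author and message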
git's patch support is more feature-complete: atomic application by default, and, if requested, using parent information for a three-way merge when applying patches.
Git won the popularity war because the Linux kernel used it and then Github pushed it mainstream. Frankly, Bzr never got as popular as Git and Hg because it just isn't as good. It's slow, has a bizarre (sorry) branching model, and came out of Baz and Arch which were downright terrible (to be fair, Bzr shares no code and was designed as a rewrite to jettison all the stupidness of Baz and Arch—nonetheless it is tainted by being related to them).
Actually I think Github got popular because it was Git, not the other way around. At the time it appeared people just wanted to have git repos, Github was there, so they used it.
There wasn't a demand for Git repos any more than there was a demand for Hg repos. For instance, open source projects moving away from SVN seemed to be pretty evenly distributed between Git and Hg.
The real demand was for a way to publish code and collaborate, and GitHub provided a truly innovative approach (the social aspects, encouraging forking and pull requests) that was miles better than the alternatives (SourceForge was already in decline, Google Code didn't have the social/forking/pull request aspects). I think they would have been successful with any distributed VCS. I always preferred Git, so I'm happy they chose Git, but I think that if they had chosen Hg, we'd be talking about how Hg won the popularity war instead of Git.
This is excellent trolling material because you can argue in both directions: It proves proprietary software can't be trusted; and at the same time it proves Open Source is all about free knockoffs of innovations made in proprietary software.
I think it was because the free version of BitKeeper (which the Linux kernel used for version control) was taken off the market. So the Linux devs would either have to pay for a DVCS or make their own... Several made their own, Git was chosen for the kernel, and a lot of people found uses for the others. I could be wrong, but I think that's the case.
BitMover Inc provided free BitKeeper licences to the Linux kernel devs. This was unpopular because of the non-FOSS nature of BitKeeper. Andrew Tridgell [1] at OSDL became frustrated with some aspects of the software and reverse-engineered its network protocol to develop his own limited client. This caused BitMover to revoke the licences used by OSDL, which in turn provoked Linus Torvalds (who was at OSDL) to develop the early version of Git. [2]
git is used by a bigger project (the kernel); that's mainly why. they're all ok otherwise, even with their technical differences.
github also helped amplify adoption, as it makes git really easy to use (even though people generally use git in a non-distributed way with github)
(i do prefer git in use, though, but that's subjective i suppose)
What do you mean, a python command line client? If you're using it from the command line and not programmatically, why does it matter what language it's written in?
And Xorg, and Samba, and Wine, and a bunch of other projects. In retrospect, people overestimate github's impact, I think. "Fast enough for big projects like the kernel" was a big deal.
The term is "adaptive radiation", of which the Cambrian explosion is a prominent example. (Unrelated but awesome note: search for "Ediacaran biota" to look at some body plans that might have won, but didn't.)
If you read the article included in the post, it says that git, mercurial and bazaar all came about because Bitkeeper stopped being free. Now, why bazaar didn't get used so much even with Canonical's backing is a mystery to me.
The article and the quite detailed bzr history it links to do not appear to suggest this at all. Can you provide any justification for your three claims?
git has won the mindshare war for mainstream developers, but I've found bzr to be a useful FOSS alternative for developers forced to use a Windows environment [for business-related reasons, don't laugh..it pays the bills]. Its UI is a bit more intuitive than git's for new users.
Now, if you're a hardcore Linux-stack career dev...get onto git ASAP... but for lesser folks...bzr works just fine...
> but I've found bzr to be a useful FOSS alternative for developers forced to use a Windows environment [for business-related reasons, don't laugh..it pays the bills]. Its UI is a bit more intuitive than git's for new users.
However, the same also applies to Mercurial, and even though Mercurial is less popular than Git, it still has a lot more devs who use it than Bzr.
I'm using bzr on a large scale project and have not experienced any problems. A full fresh branch does take a while to complete, but I find it acceptable as we're not branching the full repo from the server that often. Usually it's local branches.
When using git on a similar scale project, we found that we spent a lot more time managing the source control system than we ought to. Source control should be something you rarely worry about and that requires no 'management' other than regular usage. Git was not that. Git required effort.
I had to interact with bzr last year for my GSoC project (Mailman, hosted on Launchpad). It wasn't a pleasant experience, and got a whole lot more complicated when I had to move a Git repo to LP.
Isn't it possible to track the upstream just like with svn? We use git internally but interact with third party subversion repositories. We do it by dedicating a branch to tracking the svn repo, use git-svn to branch and to push the commits (after rebasing). It's not perfect but after a few hurdles at the beginning it is now problem free.
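For what it's worth, a minimal sketch of that kind of setup looks roughly like the following; the repository URL and branch names are made up for illustration:

    # one-time import of the svn repository
    git svn clone https://svn.example.org/project/trunk project
    cd project

    # do normal git work on a local topic branch
    git checkout -b some-feature
    # ... commit as usual ...

    # later: pull new svn revisions into the tracking branch and rebase the local work
    git checkout master && git svn rebase
    git checkout some-feature && git rebase master

    # push the rebased commits back to svn as individual revisions
    git checkout master && git merge --ff-only some-feature
    git svn dcommit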
Curiously, the URL seems exactly identical in both submissions, which are almost consecutive ( ...94, ...96).
That probably means the duplicate detector has some delay in getting its past-submissions data. The rate of submissions these days seems to average about one a minute, but the gap between this pair may have been different; it isn't apparent now.
I can't tell now that the other one is dead, but I believe its link was http: while this one's is https:. I use the "https everywhere" browser extension, so when I copied the URL I probably got the upgraded one. I think that explains why the dup detector didn't catch it. I don't know why this story took off and the other died. Maybe the headline?
The entire GNU project is like a house that looked so mod in 1971, but now has peeling paint, stained concrete, shag carpeting that smells pretty funky, and a crazy old relative that haunts the place. "Renovation" is changing the light bulbs and occasionally washing the stucco.
It's a bummer to think of all of the real-life dollars that have been sunk into Bzr's development, which I'm sure makes a decision like this harder. That said, isn't this what forking is for? The people who want Git support should code and maintain it themselves.
I wish people would use software based on merits, not on popularity. That being said, Bzr is slow, it was always slow from day 1 -- and slowness is part of the UI experience.
Popularity means developers, which means bugfixes and more features in the future which projects may want to use. Switching DVCS has a cost to a project. Thus software popularity is important.
This may be unfortunate, but this is how it is. DVCSs are by no means complete today, and cross-pollination of features continues. An unpopular project that has fewer developers working on it will fall behind. A project that doesn't want to keep switching DVCS has a reasonable interest in the DVCS project's future, since future features will come "for free".
But popularity is one of the merits a software package can possess. Popularity offers:
* A wider availability of resources -- tutorials, books, documentation of any kind, a community around the software that can offer help and support to new users, related tools and add-ons.
* Evidence of the potential for longevity in the software -- the more popular it is, the more likely it is to continue to receive new features and bugfixes.
* Portability of the knowledge of the workings of the software -- for users, this means that investing time and energy into learning the software now is a better investment, because that learning has a higher likelihood of being useful down the road. For people running projects, it means a larger pool of people who already have knowledge useful to contributing to your project.
Your statement basically translates to, "I wish people would use software based on hypotheticals, rather than an evaluation of what it's like to actually use that software."
> I wish people would use software based on merits, not on popularity.
This is “your favorite band sucks” for software. For professional tools, popularity is usually directly tied to merits: notice how frequently people say they started with bzr/hg/darcs/etc. and switched to Git because it was faster/safer/better supported/added features they liked? Most of the competitors seemed to think they'd solved one big problem well enough that people would put up with the “minor” warts but in practice those are the things which you notice on a daily basis. Git was the third or fourth DVCS I tried and even fairly early on it was obvious that considerably more care was going into making basic day to day work easier and safer.
> I wish people would use software based on merits, not on popularity.
this. so much. but "social and signaling effects" and "mindshare war" instead. sigh. what /is/ true, though, is the quote from the linked article:
"All problems in computer science can be solved by another level of indirection... Except for the problem of too many layers of indirection."
anyway, in an attempt to return from the hopelessly abstract and say something cogent about hip technologies and the article at hand, umm... i wonder how bzr runs on pypy?
Do people really decide whether to contribute to an open source project based on the version control system it uses, rather than the importance of the project itself?
Github became popular because people wanted free Git hosting, not the other way around. Most other Github features just get in the way.
Also, instant branching is a really convenient feature. I remember when I was using Darcs, I got lost pretty quickly with 10 copies in 10 directories.
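For contrast, a branch in git is just a pointer, so trying something out never needs a second working copy; the branch name below is only an example:

    git checkout -b experiment   # new branch in the same working tree, effectively instant
    # ... hack, commit ...
    git checkout master          # switch back; no second checkout, no copied directory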
I disagree wrt Github. A lot of projects I was involved in wound up switching from git to mercurial because it was too difficult to manage git master repos on a server compared to Mercurial. Github solved this neatly, and the issue system has allowed my last two companies to switch from Mercurial+FogBugz|Trac to Github. That's useful consolidation and lower admin overhead.
I think there was a co-evolution of the two. A lot of people started using git because Github was where the cool projects were being hosted. At the time, I recall hoping for bzr to "win". I like git these days, but coming from an "svn is king" world, bzr was a bit gentler for the transition.
The article also mentions the bad state of Mercurial; any idea what he could have had in mind? What's wrong with Mercurial these days, besides being less widely used than Git?
I'm pretty sure Atlassian is the only major supporter of it, and even they co-host Git on BitBucket (I suspect to stay competitive with GitHub).
And in my very biased opinion, Atlassian markets themselves with an aggressively old-school mentality (closed source everything, monolithic software, questionable terms of use, strong force in the enterprise market, etc.), which makes Mercurial look bad by association.
facebook hired most of the mercurial hackers; to me they are the biggest current supporter.
(and I would have cited quite a few other companies/orgs besides atlassian that probably contributed as much as atlassian to the development of mercurial)
- it has a lot of non-orthogonalities and non-closure operations (ie options on one command are different or not present on other commands)
- if the network drops, fetches and pushes have to start from scratch. For fetches, it would be nice to save partial packs and resume them. For pushes over crappy links, doing similar could be a potential DDoS, so setting up some sort of rsync box is probably better.
please, please, you Emacs/LISP gurus out there: make a working modern package manager and integrate the browser like lighttable does. and perhaps rewrite emacs from scratch so that the source code makes sense in today's world, not the 1980's.
unfortunately lighttable is staying closed for much too long, but something like it is desperately needed.
That is just silly. The current Elisp interpreter might be replaced with an Elisp implementation on top of the Guile VM. That would be quite the big upgrade, technically speaking.
>unfortunately lighttable is staying closed for much too long
I agree, I don't have much faith in the "it will be free eventually, just trust us" development model. If the project really wanted to be community friendly then the source code would have been free from the beginning.
works fine = works fine for me after I have spent x hours looking around the net, writing elisp myself. that's the whole problem with lisp. it is not communal, because it does not enforce standards.
I don't understand, what's wrong with package-list-packages? It has concise descriptions, MELPA is kept very well updated, and installing plugins is a breeze; no elisp configuration is necessary for like 90% of the packages you install.
I doubt it will too, but I hope it gets a nice "marketshare" so people try to experiment and find new ideas. Emacs will probably benefit a lot from cross-pollination too (it excels at absorbing new ideas).
LightTable is truly terrible. Adobe beat that team with Brackets without even really trying, and all of the gimmicks it promised have been available in vim and subl for a while. They should do something else with their time.
not true at all. it has this thing called the browser in it, as well as completely live hooks and a modular system. this here http://www.youtube.com/watch?v=gtXpOD6jFls is impossible in any other editor.