Not to take away from git, but would git have been as successful if it hadn't been made by him? Mercurial came out around the same time, no?



Mercurial, darcs, monotone, others. It was a popular area for a while. I think it's less that Linus wrote it than that Linus used it (for the Linux kernel), though. If somebody else had written it and he'd used it, the outcome would have been the same.


> If somebody else had written it and he'd used it, the outcome would have been the same.

IIRC, the mercurial project had already started when Linus started working on Git. (In fact, very early versions of Xen used BitKeeper, following suit with Linux; when the license changed, Xen moved over to mercurial because git wasn't ready yet, and stuck with it for a number of years.)

The main reason Linus wasn't happy with mercurial was performance -- Linux just has far, far more commits and files to deal with than nearly any other project on the planet, and even at the time, operations in mercurial took just a bit too long for Linus.


> Linux just has far far more commits and files to deal with

Not really. I work with Mercurial every day, in a FAANG company's monorepo that is many, many times the size of the Linux kernel. It's not always pleasant, but it's not clear git would be any better. Mercurial's performance issues are solvable, and have largely been solved, but IIRC that wasn't the case when git started. It's a shame, really. The fragmentation is annoying sometimes, even if it's the result of historical accident rather than any bad decisions made at any particular moment in time.


Don't you guys (which is definitely Facebook) use some sort of extensions that makes it not the same as a vanilla Mercurial repository?


At least for darcs there was a technical/theoretical reason why it didn't succeed.


What was it? I used darcs quite a bit back when I used to write more haskell, but have lost touch over the last decade or so. What happened? I was always under the impression that one of the things that set darcs apart was that it had a theoretical basis. Did something go wrong with that?


Darcs has the problem of exponential merges[0]: certain merges with many conflicts take time exponential in the number of conflicts, which in practice can render some patchsets unmergeable.

The authors of Pijul[1] found a solution to the exponential merge problem, and from what I've heard there have been discussions about darcs switching to Pijul's algorithm in the future.

[0]: http://darcs.net/FAQ/Performance#is-the-exponential-merge-pr...

[1]: https://pijul.org


The irony here is that none of the large git projects I've worked on use merges at all. They cherry-pick and/or rebase their patches around.
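
Roughly the flow I mean, with made-up branch names:

    # keep the feature branch as a straight line on top of main
    # instead of merging main into it
    git fetch origin
    git rebase origin/main my-feature

    # land it with a fast-forward only, so no merge commit is created
    git checkout main
    git merge --ff-only my-feature

History stays linear; no merge commits ever show up.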


Cherry-picks can still lead to conflicts though, no? It's essentially just a particular kind of merge.
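
You can see this in how git actually performs it (sha and branch name made up):

    git checkout release-1.2
    git cherry-pick 1a2b3c4
    # git replays the commit as a three-way merge, with the
    # commit's parent as the merge base and HEAD as "ours" --
    # if release-1.2 changed the same lines, it stops with the
    # usual <<<<<<< conflict markers, exactly like a merge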


My mental model is that a cherry-pick is a by-value copy of one or more patches, while a merge is a by-reference copy of a specific series of patches.

It just so happens that what I usually want in practice is to make by-value copies. There are other integration workflows that make by-reference copies with dedicated merge commits. But my team's most common use case for long-lived branches is tracking bugfixes for deployment to remote hardware separately from mainline development, so by-value copies of small changes are much more common.
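
As a sketch of that workflow (branch names and sha are hypothetical):

    # the fix lands on mainline first
    git checkout main
    # ...edit, git add, commit; say the fix is commit abc1234

    # then copy it by value onto the long-lived deployment branch;
    # the result is a new commit with a new sha but the same diff
    git checkout deploy/site-a
    git cherry-pick -x abc1234
    # -x appends "(cherry picked from commit abc1234...)" to the
    # message, which helps track which fixes each branch carries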


Yeah, I never merge. No idea why you would want to.


Thanks! Interesting stuff!


The most important determinant of a product's success is who uses it. Given that git was guaranteed to be used for Linux, its success was assured no matter who the author was or how its technical features compared.



