We are all using git only because Linus wrote it. The cargo cult is real and very much alive in our industry, I think precisely because we are all here to write specialist software. Too busy in our domain to worry about version control nuances so we just go with what is popular and don't think about it too much. It's not just version control, it's libraries, frameworks, languages, all of it. If it's not popular it's doomed to failure.
You assume that the creator of one of the largest (if not the largest) open source projects on the planet might not understand the issue at hand better than literally anyone?
That seems arrogant. I imagine more thought & care went into git than you can fathom.
If "the issue at hand" is running a globe-spanning open-source project with hundreds of contributors, a bunch of targets, and a code base that goes back 30 years, sure, I'm happy to listen to Torvalds.
If "the issue at hand" is the more typical modern work environment for developers, then no, I don't think he has much special insight. Indeed, the fact that he's wrangling something so large and important means he's unlikely to have the time and attention to devote a lot of thought to how best to serve a pretty different group of people.
An obvious consequence of this is git's terrible interface. Its premise is "if you totally understand what Git is up to under the hood, the interface is great!" Fine for a small in-group of kernel developers whose lives are distributed patch sets, but terrible for the average developer. It has taken 15 years to get some reasonably named commands for common things like "restore a file". That's a great sign that we shouldn't listen to Torvalds on topics outside the realm of his admittedly impressive expertise.
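For concreteness, a minimal sketch of the naming improvement mentioned above, assuming Git ≥ 2.23 (where `git restore` landed); the repo and file names here are made up:

```shell
# Throwaway repo to demonstrate the old vs. new spelling of "restore a file".
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name "Demo"
echo "v1" > notes.txt
git add notes.txt
git commit -qm "initial"
echo "scratch edit" >> notes.txt

# Old, overloaded spelling: `checkout` doubles as branch switching AND file restore:
#   git checkout -- notes.txt
# New, reasonably named command (Git 2.23+):
git restore notes.txt
cat notes.txt   # prints: v1
```

Same internals either way; the only thing that changed after 15 years is that the command finally says what it does.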
This isn't about "understanding the issue at hand," this is about UX development.
Linux is, very intentionally, a piece of software which does not have "easy to understand for non-experts" in its design goals. You generally interface through Linux with system call wrappers provided by a libc (or another specialist library for other interfaces like libfuse or libnetfilter or whatever), not directly. Linus is, quite obviously, good at many things; it's silly to assume that means he's an expert at everything.
Linus developed Git as a low-level tool. He put a lot of thought and care into getting the implementation details right, but intentionally did not build an easy-to-use interface. The Git command line people use today ultimately derives from Cogito, a toolkit written by someone other than Linus that sat on top of Git. Eventually Git (which Linus had long since handed off to someone else) adopted most of Cogito's conventions and created the "porcelain"/"plumbing" split. If you think it was arrogant to develop Cogito, I suspect you disagree with Linus.
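That plumbing/porcelain split is still visible in any modern Git; a quick sketch in a throwaway repo (file names made up):

```shell
# Porcelain commands (commit, log) sit on top of stable plumbing
# (rev-parse, cat-file) that scripts can rely on directly.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name "Demo"
echo hello > file.txt
git add file.txt
git commit -qm "first"          # porcelain: friendly, human-facing

oid=$(git rev-parse HEAD)       # plumbing: resolve HEAD to a raw object id
git cat-file -t "$oid"          # plumbing: prints the object type: commit
git cat-file -p "$oid"          # plumbing: raw commit object (tree, author, message)
```

The plumbing interface is the part Linus actually designed; the porcelain on top is the Cogito-derived layer everyone argues about.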
Finally, Linus develops Linux with a very particular model, with e-mail-based patch reviews, merges from subsystem maintainers, etc. Most projects (even most open-source projects, but certainly almost all proprietary projects) do not work this way. The Linux kernel does not use a GitHub-style workflow for development, which is by far the most common way people outside the kernel community use Git. It may well be the case that a lot of thought was put into making Git work really well for Linus's use case but it does not match how other people do development.
The difference between the kernel development model, and some putative "typical other" development model merely changes which git commands/tools get used to handle getting stuff into a particular branch on the canonical repo.
It has no impact on the underlying concepts that make git scale well, cover 100% local and 100% remote cases equally well, and provide deep under-the-hood machinery that can be deployed in exceptional circumstances.
Correct, it has no impact on the internals of Git, which was my point. Git's internals are used by both the kernel development workflow and the GitHub-style workflow, and Linus designed it well.
The conversation is about the user experience of using Git - its CLI design, etc. It is entirely about "merely" what commands are being used. TFA is about new Git commands, not about any changed internals. Or, in the case of Sourcetree, it's about not using any of "Linus's" interface to Git (which, again, wasn't written by Linus) but interacting with the same Git internals.
I think the conversation is about whether or not the Sourcetree interface to git can be useful when carrying out tasks that involve, for example, filtering the reflog.
If you have never done this before, and have only ever used git via the ST interface, switching to the CLI to get this done is going to be quite a shock. Maybe that's OK because realistically such tasks should be rare. But sometimes they are a critical task in development, and finding that the entire dev team is completely intimidated by it can be an issue.
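To make the "shock" concrete, here is a hedged sketch of what filtering the reflog can look like on the CLI, in a throwaway repo with made-up commit messages:

```shell
# Lose a commit with `reset --hard`, then find it again via the reflog.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name "Demo"
echo one > f.txt; git add f.txt; git commit -qm "first"
echo two > f.txt; git commit -qam "second"
git reset -q --hard HEAD~1        # "second" is now unreachable from any branch

git reflog                        # every movement of HEAD, newest first
# Filter: walk the reflog (-g) keeping only entries whose message matches.
git log -g --grep-reflog=reset --oneline
# The "lost" commit is still there, recoverable by hash:
git log -g --oneline | grep second
```

None of this changes Git's internals; it's exactly the kind of rare-but-critical task where a team that only knows the GUI can get stuck.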
Isn't that the "appeal to authority" fallacy? Mercurial demonstrates that you can have a VCS with fewer and less painful gotchas, and the grottiness of git's submodule mechanics, for example, certainly doesn't show as much thought and care as one would hope.
There are at least two different commercial hosting sites that implement a pull request model. If you want ephemeral branches like in git, hg branches are indeed not the right choice. But that doesn't mean they don't have their place. Try topics or bookmarks if you want gitish behavior. I have absolutely no clue what you mean by trunk-based development...
Funny, I rarely use branches (or topics) in Mercurial unless it is a complicated long-term project. The modern "everything on a branch" style was invented by git users because the UI forces naming things. Mercurial lets you share code and still linearize history safely with rebase, so there's much less need for merges. I've always found pull requests useful only for the passing-by contribution. For anything else they are a pretty awful interface, driven more by GitHub's internals than by anything else...
> Pull Requests were never going to happen with Mercurial
TBF, AIUI they're not, strictly speaking, happening with git either: They're an external addition, invented by GitHub or some such, and not actually part of git itself.
How does that work; is it something technical built into it? Or do you mean just because after that one knows that it's incorporated upstream, so not needed as a separate entity any more? Because that would also go for a "pull request" by, say, e-mail or whatever.
I’d add another reason to that: we’re using Git because BitKeeper wasn’t free (as in beer) at the time for general purpose use. Had it been, we’d all be using BitKeeper instead.
Probably not. The free (as in speech) part of git is what made it usable for entities like Google, Microsoft, GitHub, etc.
If git had been released with the same license model as BitKeeper it NEVER would have taken off.
True, a lot of people use git without knowing how it works, mainly because they do everything through a GUI and never learned what's underneath; if something strange happens that can't be solved with the GUI, they just delete the repo and clone it again.
At that point I ask these people: why even bother with git? Just use what I call ".zip versioning": archive the source code, call it "project-vX.Y.Z.zip", and put it on the company fileserver.
Or better, learn how to use git, and that means learning the command line and throwing out nearly every GUI (well, not all of them; for example I do commit and push/pull with VSCode, but when I have to do serious stuff like merging I use the command line). In my experience, GUIs always cause problems that corrupt the history of the repository.
But he didn't write it. He started writing it in bash, and then (because he is Linus Torvalds) some very talented Linux hackers jumped in to help him. Just like with the kernel, really. It was a tool written by extremely competent Linux hackers for Linux hackers, and that's why it is so successful.