My git productivity hack is `git diff --color-words`. Instead of showing the full line-by-line diff, it highlights only the words that changed. Especially useful if you have long sentences where only a comma or some other typo changed. With plain git diff you see both the old and the new line in full; with --color-words, only the changed token is highlighted. The option --color-words also works with git show. I even made aliases for them: git cshow and git cdiff.
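If anyone wants the same setup, the aliases are just one-liners (name them whatever you like):
git config --global alias.cdiff 'diff --color-words'
git config --global alias.cshow 'show --color-words'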
Other than that, I recommend that people learn to use git properly. In my work, I often have problems with people overwriting their commits, and with merge requests where one commit message is "did some updates" and the other is "some fixes". Getting to know git for an hour might have prevented both issues. But I am biased, since I have been using git since my bachelor's thesis.
> Other than that, I recommend that people learn to use git properly.
Sorry to be harsh here, but that is completely useless advice. It's a tautology. Of course people should learn to use git "properly". What's the alternative, that they should learn to use it improperly? Everyone should learn to use everything properly. It's like telling someone dealing with a crisis that they should "take appropriate action", as if taking inappropriate action was something that someone would actually seriously consider absent this advice.
The problem is that no one knows what "properly" means when it comes to git. Git itself provides no clue, and everyone and their second cousin has an opinion. That makes the advice to use git "properly" utterly vacuous. Figuring out what "properly" means is the whole problem with git.
I've always found it interesting that Git gets a pass for its horrible UX by so many devs. The programming community wants to provide too many options for _everything_. If there's a tool you don't like, there are probably 10 other versions that do similar enough things that you can just switch. Devs are harshly critical of tools. And yet, with Git, the response just seems to be "if you don't like it you must just not _get it_." Which, I guess, is fair, but I don't particularly understand why everyone has to "get it"? Why can't we expect a tool that's used by so many to be intuitive? Or at least, _more_ intuitive? It feels a bit like hazing at this point. The post recently about `git undo` was great, I think, because of the frequency with which users encounter surprising and unintended behavior.
It continues to be kind of frustrating to me. It feels like a tool that should be wrapped in something else and never even brought up to 99% of users - a half-finished idea on productivity and team-centric version control (to get ahead of comments: I know it's not designed to be that, but that's how it's very frequently used).
>And yet, with Git, the response just seems to be "if you don't like it you must just not _get it_." Which, I guess, is fair, but I don't particularly understand why everyone has to "get it"? Why can't we expect a tool that's used by so many to be intuitive?
There are lots of UIs for git - many people on this forum will advocate for using one. My personal experience is that a UI manages to over-simplify the git workflow. It never quite allows you to perform all of the useful tasks you want, unless your use of git is very basic.
In a more general sense, git can also be considered quite conceptually simple; "getting it" is not the real problem most users have. The confusion about what commands do comes from the fact that users learn the tool backwards. No job bothers to teach a junior developer about the DAG and what commits, branches, and refs really are. Instead, they are forced to start with git's large and crufty CLI - so it's no surprise that it's hard to pick up any intuition about what's going on.
And that's a problem with lots of technical teaching, not just git. Since the focus is on getting immediate observable results, and not on conceptual understanding, users tend to learn how to use tools instead of how they work. That creates a culture of "I don't know what this is doing, and I'm too scared to mess with it" - basically the anti-hacker mentality.
> And that's a problem with lots of technical teaching, not just git. Since the focus is on getting immediate observable results, and not on conceptual understanding, users tend to learn how to use tools instead of how they work. That creates a culture of "I don't know what this is doing, and I'm too scared to mess with it" - basically the anti-hacker mentality.
Yeah, I guess I don't really align with the "Developers must learn the internals of all of the technology they are required to touch" view, and I don't really agree with the sentiment that that decision disqualifies one from the "hacker mentality".
>I don't really align with the "Developers must learn the internals of all of the technology they are required to touch"
The DAG, commits, refs, etc. are not the "internals" of git - they are the basic building blocks of its mental model. Without them, there is no way to understand what git is doing - so it's no surprise many people think git is unintuitive, because they aren't taught what those concepts are.
The thing I'm saying is counter to the hacker mentality is using a tool as a black box, without understanding what it does and what it is trying to do. You don't have to read the code and know all the technical details.
Lots of users treat git as a small set of known commands that they execute in sequence until the repo is in the state they want; and any deviation from that happy path is met with a repo wipe and re-clone. A hacker should be happy to play around with various man pages and commands to build an understanding of what each command really does - and then be able to fearlessly mix and match commands, because they know how they are affecting the repo at each step.
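A few read-only commands go a long way toward building that picture - none of them change anything, so there's nothing to be scared of (the branch name main is just an example here):
git log --oneline --graph --all   # the DAG you are actually working with
git rev-parse main                # a branch is just a human-readable name for one commit hash
git cat-file -p HEAD              # a commit is a small object pointing at a tree and its parent(s)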
> The thing I'm saying is counter to the hacker mentality is using a tool as a black box, without understanding what it does and what it is trying to do. You don't have to read the code and know all the technical details.
And I'm saying that this statement is just hustle culture nonsense.
> A hacker should be happy to play around with various man pages and commands to build an understanding of what each command really does - and then be able to fearlessly mix and match commands, because they know how they are affecting the repo at each step.
No true Scotsman would use git without reading books about it!
This is just nonsense. Gatekeeping holier than thou nonsense.
I use SmartGit (there are many other visual tools), as I have no interest in untangling the utterly hostile Git command-line interface. Using the terminal to stage chunks of a file? No thanks.
I feel no embarrassment about it either. You may pride yourself in using Vim, while working on a multi-thousand-file project. You are a power user, good for you. Now go collect other time-wasting useless accolades, like Pokemons.
It's not like there aren't more than ten different tools that do approximately what git does, but a little differently. There are also dozens of UIs on top of git. So the response to git's UI is not uniformly "RTFM", many people tried to provide alternatives.
Don't use it blindly. Learn it enough that you understand those mystical incantations and aren't summoning Cthulhu by accident, only on purpose
Don't cookbook it, understand what the thing you're typing means. Don't use a GUI to abstract it away, be familiar with the CLI and what it's doing. Don't default to rm-rf when you get stuck, check git reflog and see if you can unfuck yourself first. Ask someone who doesn't have your problems how they're using it - it's possible you're "holding it wrong" in some obvious way, but in a way that masks the true problem that's biting you.
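That reflog step alone gets you out of most "I just lost my work" situations. A minimal sketch - HEAD@{2} is just an example entry, pick whichever one matches the state you want back:
git reflog                        # every place HEAD has recently pointed
git branch rescue HEAD@{2}        # pin that state onto a new branch; nothing gets overwritten
git reset --hard HEAD@{2}         # or move the current branch straight back there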
Some of my coworkers regularly have problems with git - I don't, and some other coworkers also don't. We all use roughly the same workflow, it's the people familiar with "what does pull --rebase really do?" that don't get into trouble and/or can get themselves back out of trouble, and the ones who always type the same command regardless of the situation that have problems.
In my eyes all you did with this comment is join the people your parent poster is arguing against; you kind of just reiterated the stance they don't like. Not sure how constructive such a comment is.
> it's the people familiar with "what does pull --rebase really do?" that don't get into trouble and/or can get themselves back out of trouble
Absolutely not my experience. No small amount of people who seem to be doing more with GIT than me regularly get into trouble. Has been the case in at least 7 companies so far. Check your selection bias.
> and the ones who always type the same command regardless of the situation that have problems.
People like you forget that we use tools to make our lives easier. Tools. Not a whole damned ecosystem of dyslexic scripts that can't make up their mind even on a common CLI switch convention...
> Learn your tools, inside and out.
I am not paid to know GIT inside out. I am paid to deliver and fix code and to not step on other people's feet. I am aware that the second part is what many deem to be an ideal case for knowing GIT inside out but not to me and not to almost all devs I ever worked with. I do indeed want to issue one command and be done with it.
GIT does a poor job of bringing remote changes into your branch, for example. As other commenters have pointed out, it can do much better, like detect concurrent identical changes -- which is something that happens often in big teams, people just swing by a module and fix a trivial bug and include it in a bigger PR. You can argue until the end of days that's not a good dev team practice but in the end these things still do happen and this supposedly amazing tool is supposed to handle it. Guess it isn't designed for that?
> It's possible you're "holding it wrong" in some obvious way
"Obvious", sure. As if I care what an index, staging area, reflog etc. are. I don't. But GIT's team has been stubborn. This article seem to show some desire to improve UX, which might go contrary to what you feel what GIT users should do. ;) So I'd say even their team is starting to recognize some problems and are working to address them.
GIT's problem is super classical in all dev tooling. The creator(s) directly exposed the underlying data structures and are putting the onus on the user to learn them inside and out. As opposed to actually making a good UX.
Stuff like, say, "git sync" (which should do "fetch" + try to merge/rebase the main branch with yours) should have been a no-brainer right from the start.
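You can fake something like that today with an alias - a rough sketch, assuming the remote is origin and the main branch is called main:
git config --global alias.sync '!git fetch origin && git rebase origin/main'
git sync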
> I am not paid to know GIT inside out. I am paid to deliver and fix code and to not step on other people's feet.
Your argument is still the same as the other git detractors, and is still wrong.
It _is_ part of your job, it _is_ part of delivering and fixing code. It's the same as anything else you're using. It may be a larger surface area that you're exposed to, but it's no different from CI/CD, build systems or something like a package manager. You don't need to understand how pip or npm or maven works to deliver software effectively, but if you get into trouble you have no alternative to rm-rf.
> As if I care what an index, staging area, reflog etc. are. I don't.
If you don't want to learn it then stop complaining about how difficult it is to use. That's your problem, not git's.
All of those things are important to the power of git. If you don't want / refuse to learn about them then you're trying to build your software with a car stuck in first gear and complaining about how slow it is. STFU and RTFM.
None of this detracts from your final point, which is valid! Just because the UX is mediocre (at best) doesn't mean you can't learn it.
The issue here is that GIT is way more complicated to learn, because it has 10,000 options that might do what you want to do or do something else entirely.
Imagine your deployment pipeline required you to manually craft TCP packets to send to your machines to deploy code... would you still say "it _is_ part of your job, it _is_ part of delivering and fixing code", or would you say "that is stupid"?
With GIT, you are spending more time on learning and battling the tooling required to deliver code, than actually writing and testing the code.
> Figuring out what "properly" means is the whole problem with git.
I think you've missed OPs point by focusing too much on a single word ('properly'). Yes, there's no clear-cut way on how to use git, no silver bullet, but the main problem with git is that most devs simply panic when they have to do anything that goes beyond the bog-standard commit/pull/push/merge. Rebase? Squash? Reset? Rebase interactively? I think OP was referring to this cluelessness with 'not using properly', rather than which approach to git is the best.
I work on a monorepo with 40-something other devs, and we've recently switched to enforced linear history because the history got to the point of being completely useless - it was an unreadable spiderweb. The problem was not that folks didn't see that what they were doing was not-so-good (often introducing more merge commits with every PR than non-merge commits), it was that they had no clue how to avoid it.
It took us quite a bit of time to get everyone up to speed but pretty much everyone got around to it after a while. It's not rocket science after all.
I've tried the "rebase into a feature branch" workflow, which I think you are alluding to. Unfortunately, it always results in scary conflicts. So I go back to merging the main branch into my feature branch workflow, which works every time. I then hit squash in gitlab for my merge request, and no one is the wiser.
Should I be doing something different? As I said, rebase in that situation is disastrous. Many folks recommend it vigorously but are not around when things inevitably go wrong.
> Unfortunately, it always results in scary conflicts.
I think you're referring to what I call "conflict cascades"? E.g., you're 4 commits ahead of "master", and want to rebase on top of it - but you have a conflict with your first commit, and after you resolve it, this results in conflicts in the second one, etc.?
It's a very interesting question because it's one of the main pain points of rebase that simply does not exist with merge.
What I've found for myself is that it's very important to create "atomic commits", which do not rely on other commits to make sense or "be complete". For example, a very common thing that I see is "fix formatting", "fix tests" commits, because some build was complaining about something being broken. And it's often these kind of commits that create weird conflicts (especially the formatting ones) because just a lot of code is moved/refactored/added/removed etc.
But do these commits make sense on their own? For example, if you have a history as follows:
<commit-1> implement feature X (formatting wrong, tests broken)
<commit-2> fix formatting
<commit-3> fix tests
Does it make sense to ever check out <commit-1>? No, right? Because that commit is broken - wrong formatting, broken tests. What you want is always <commit-3>. So why not just add those changes directly to commit-1, by using commit --amend when committing or by using a fixup in an interactive rebase?
This was of course just an example, but in my experience, if people follow these kinds of trains of thoughts - what can be in a separate commit, what should be together, etc., there is an inherently smaller conflict-cascade-potential, which pays off a lot when rebasing. On top of that, it also makes history generally more readable and individual commits more meaningful and easier to cherry-pick.
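The mechanics for folding such a fix back into the commit it belongs to are short - a sketch, assuming the broken commit's hash is abc123 and the branch was cut from main:
git commit --fixup abc123           # records a "fixup! ..." commit targeting abc123
git rebase -i --autosquash main     # reorders and squashes it into abc123 automatically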
I've never understood - or used - rebase as a standard part of a workflow. And I've been using git for nine years.
What is wrong with a master branch whose history reads "Merged feature foo" after "Merged feature bar"? If you need more detail then check out the feature branch and git bisect to your heart's content.
I do wish that we could "archive" branches from the output of `git branch -a` but really it's not a big deal when branch names are prefixed by Jira ticket identifiers.
I had this opinion for the longest of times as well, but me and all the other devs I know who started using a rebase-based approach just don't want to go back.
I think it's also a bit a question of team size. If your project has just a handful of devs working on it, merge-commits really don't matter all that much. If there are a few dozen with a good amount of juniors on the repo (a bit over 40 devs in my case), the commit-history becomes an absolutely unreadable spiderweb, that's at least my experience.
At work, we want to move towards trunk-based development (merge to master goes straight to prod), but whenever master was broken, a look at the commit history didn't really tell you all that much about the who, how and why without checking it out and digging into it. So we've recently started to enforce linear history, in which case it is immediately obvious, without a shred of doubt, who's responsible. Analyzing the build became an absolute breeze, at the cost of having a bit of a harder time at the "insertion" point (which is perfectly fine, given the fact that after merge, it should eventually go straight to prod).
It took a while to get everyone on board, but it was really worth the effort and I can only recommend it (I most definitely will prefer a workplace that is open to such practices in the future). In general, most devs were also quite happy to be guided through the process, as it is clear that git is probably one of the longer-lasting constants in our industry.
Also, I can't repeat it enough; speaking from experience, most devs severely overestimate the complexity of cherry-pick/rebase/reset/reflog etc. Whenever I had sessions explaining it or helping someone out with a problem, they had their first "aha!"-moments after a few minutes, and after that with some do-it-yourself-experience most of them get there rather fast. Obviously, there are always a few outliers who need a little bit of extra-nudging and extra-help to stay in line, but those are usually the ones who often also need that sort of assistance in other areas so that's okay.
> I think it's also a bit a question of team size. If your project has just a handful of devs working on it, merge-commits really don't matter all that much. If there are a few dozen with a good amount of juniors on the repo (a bit over 40 devs in my case), the commit-history becomes an absolutely unreadable spiderweb, that's at least my experience.
git log --first-parent gives you a nice linear history of (generally) your top-level merge commits. If you follow a PR workflow, it's a PR-by-PR breakdown of activity. The DAG gives you the power to "explore the spiderweb" if you want/need to, but also to pull back and say "give me the high-level overview" (--first-parent). I still think a lot of the emphasis on rebase-only workflows would disappear if more of the graphical UI tools (including and especially GitHub) had a better --first-parent experience by default. With the GitHub PR flow it has always surprised me that the main commit log isn't just a --first-parent view with an optional drilldown experience.
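Roughly, the two views side by side:
git log --oneline --graph           # the full spiderweb, every commit on every merged branch
git log --oneline --first-parent    # one entry per merge into the branch - effectively PR-by-PR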
Sorry to beat a dead horse, but I'd love to be convinced. Could you give me an actual example where you were able to review a commit log more easily because it had been rebased rather than merged? Or an actual example where a merged commit log was a pain and would have been easier had it been rebased?
Maybe the advantage is for people who use graphical tools? I'm still in the stone age using Git in bash.
> I've tried the "rebase into a feature branch" workflow, which I think you are alluding to. Unfortunately, it always results in scary conflicts.
You never rebase something into something else, you rebase onto something. I suppose you meant you want to rebase feature branch onto master.
If you have conflicts, you'll have them anyway, regardless if you're merging feature branch into master, merging master into feature branch or rebasing feature branch onto master. Conflicts are not a consequence of rebasing.
I'm working in a feature branch, and after a day or three changes have landed in the main branch. Which I need to incorporate or my merge request will be behind.
When I pull from main (git pull origin develop) there are rarely conflicts. When there are they are easy to understand and fix. Clicking squash on my request hides all intermediate commits so history stays clear in any case.
When I try to rebase from main (git pull --rebase origin develop) often almost every file in the project is in conflict, and the conflicts themselves are incomprehensible. I then run rebase --abort and go back to the merge strategy.
I've tried this about four times in the last few years; no one is able to explain what went wrong. If I had to guess it may have something to do with me pushing the branch when creating it.
git checkout master
git pull => fetch and merge the new things on master
git checkout develop => go back to your work branch
git rebase master => simple rebase, no squash, no nothing. The conflicts are the same as you'd get with git merge
do not forget to force push the changes of your branch
git push origin --force develop => to update your branch on the server
You can skip the "checkout master + pull" part by using "git fetch origin master:master", or by using just "git fetch origin" and then "git rebase origin/master". Other than that, that's pretty much my workflow too :)
I don't know, most people I've seen "scared" by merge/rebase conflicts have simply been scared by the UI provided to resolve them. It's just a matter of spending a bit of time familiarising yourself with what you're seeing and understanding what the tools are doing.
There's a bunch of resources on that and once you wrap your head around what git is doing it shouldn't be too hard to figure out how to navigate conflicts.
Merge conflicts in a rebase can propagate through every commit in your feature branch (suppose that you modify a boilerplate line in 10 commits, and you get a conflict on that line in the rebase, you now have to solve that conflict 10 times possibly with some interference with nearby conflicts)
In many cases there are also problems with git being line-oriented rather than token-oriented. IMHO I would have expected language-aware diffs (for the definition of "words", and maybe parentheses) to be commonplace for common languages by now (maybe even with language server support).
In my team we are lucky, as most conflicts are a matter of simply choosing a side and overwriting the other.
>Merge conflicts in a rebase can propagate through every commit in your feature branch (suppose that you modify a boilerplate line in 10 commits, and you get a conflict on that line in the rebase, you now have to solve that conflict 10 times possibly with some interference with nearby conflicts)
Yes, this is true. But in a lot of cases this is a relatively trivial fix. That being said, I do think git could do better here. But I have not had it happen very often (and usually I know when it's about to happen so I know what to expect).
> In many cases there are also problems with git being line-oriented rather than token-oriented. IMHO I would have expected language-aware diffs (for the definition of "words", and maybe parentheses) to be commonplace for common languages by now (maybe even with language server support)
Language aware diffs would be a neat idea indeed, but I guess the issue may be that now git would need to have metadata to tell it which language server to use for which file.
I wonder if it would be possible to give git a plugin system for diffs, just have an option to offload to a diff program which could handle the metadata separately.
> [...] in a lot of cases this is a relatively trivial fix.
> [...] I have not had it happen very often
Often it is simple, but it is tricky and hard to practice. Part of the problem is not git's fault; it is just that thinking in diffs is not easy.
> metadata to tell it which language server to use for which file
My understanding is that git already does this (via filename or custom helpers) for binary/text files, I believe that it is used only for newline conversion, but for most usecases filename-based rules in a gitignore derived syntax would be enough.
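For what it's worth, something along those lines exists already: you can map file patterns to a diff driver in .gitattributes and then configure that driver. A sketch - the driver name "prose" is made up, and wordRegex only changes what --color-words treats as a word:
# .gitattributes
*.md diff=prose
# .gitconfig (or .git/config)
[diff "prose"]
    wordRegex = "[^[:space:]]+"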
Currently, for binary files, git has heuristics described in gitattributes(5), and these only go as deep as text encoding autodetection and the ability to specify the text encoding of a file or mark a file as binary. It's not sophisticated enough to know what kind of file it's looking at beyond just that.
Regarding your first point (and I think some of the commenters in that thread also address this already): as long as you don't have uncommitted changes, you're safe. You're right that a lot of git commands mess with uncommitted changes in ways that are hard to recover from, but once changes are committed, there's almost no way to mess anything up; you can always go back to the previous state (git reflog <branch> telling you which commit that was).
Regarding the second point - if you have a rebase-based work-flow, you will encounter this problem less often (or not at all).
For your third point - what helped me a lot is to actively distinguish between commits and the working tree, and making myself aware that branches really are just "pointers" to commits (obviously, but somehow, actively thinking about it made the intent of git commands a lot easier to understand). Also, most git commands are just combinations of other commands. git reset is just moving the pointer (--soft without touching the working tree, --hard will do so), rebase is essentially a hard-reset with consecutive cherry-picks afterwards, and so on.
Also, an eye opener for me personally has been that e.g. an interactive rebase with squash towards a branch that is strictly ahead of you except for the commits you want to rebase is essentially just a "git reset --soft <target>", and then recommitting everything with a fancy commit-message. This and similar mental gymnastics with git-commands has helped me a lot; most of them really are quite sane (except for all those crazy options), "git checkout" is really the odd-one-out of the bunch (and "git pull" a combination of things that should never have been combined in the first place).
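To spell that equivalence out (a sketch, assuming <target> is an ancestor of your current branch):
git rebase -i <target>            # then mark every commit after the first as "squash"
# ends up in the same place as:
git reset --soft <target>         # move the branch pointer back, keep all changes staged
git commit -m "one tidy commit"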
I really am not sure if you were being sarcastic or are really that unaware of what a mish-mash of specialized jargon you just threw at the guy.
I don't think the proponents actually understand the problem. It's not that programmers are unwilling to learn (something that many elitist people love to pretend, so they look smarter); it's something called tool fatigue. You just want your tools to do their job and get out of the way, which GIT absolutely does not do.
I know that one day I'll get completely sick of GIT and will research and learn it to the bone. Sure, it's bound to happen. But the fact that GIT imposes its internal data model on you and doesn't attempt to solve more problems for you from the get go is what many of us are ticked off about.
> I really am not sure if you were being sarcastic
I was not :P But the commenter also said in the linked comment that they "understand how git works", so I took the liberty to make a few assumptions ;)
> But the fact that GIT imposes its internal data model on you
It does, to some degree, but once you get a grip on the "everyday commands" that go beyond commit/merge (cherry-pick, rebase, reset, reflog), it's actually almost surprising how little you have to know about git's internals. It does help to have an idea about them, though.
My main complaint about git is that most commands have too many options that make it do too many different things. And that resolving conflicts via command line is absolutely atrocious (something I actively avoid and discourage).
> I know that one day I'll get completely sick of GIT and will research and learn it to the bone
I can only recommend it! It's really not as hard as many people suspect (as I've mentioned, I have a bit of a habit of teaching people git at work). If you had any exposure to computer science in the past, the puzzle pieces probably start to come together after some hours.
And it's probably one of the longer-lasting constants in our industry, so in my opinion, it's really worth it to know how to make good use of it.
Honestly, if you were less busy sulking about it, you might have had some capacity left to be receptive instead... And then it might actually have helped clarify a lot.
Also: In a discussion where one party is being helpful and the other sullenly sarcastic, it's not the helpful one that comes off as an asshole.
By "properly" - I mean not messing up your own or others work in unpredictable ways.
My dev team uses git properly. They are not masters who know each command by heart. They often use visual tools or what is built into the IDE. We just have general guidelines, and everyone knows how to do basic moves like "get code from remote" and "merge others' work into your local changes".
We had people who always had problems like "GIT ate my homework" - well, they don't work with us anymore, so maybe it is selection bias.
But my team members did NOT study git for months to get to that level; it just came along as they went. Visual tools help a lot, really.
In the end I expect someone with any abstract thinking capabilities to be able to use GIT "properly" after a week of working within the team. Like pull new changes every day, create new branch, create a pull request, merge new changes into your working branch.
The alternative is trying to use git while doing your best to avoid learning anything more about it than you absolutely have to, which is what most git users do. This is a great strategy for many tools, but git is not one of them.
> Of course people should learn to use git "properly". What's the alternative, that they should learn to use it improperly? Everyone should learn to use everything properly
No, it's not. The alternative is not to.
Time is not free. Learning a tool properly is an investment. Sometimes that's worth it and sometimes it's not. E.g. I would never advise anyone to take the time to learn dc properly.
> What's the alternative, that they should learn to use it improperly?
Yes. Or at least, avoid learning anything if they can help it, treating it as a black box that can never be understood. "Learn git properly" just means "actually make an effort to learn the tool rather than clinging to learned helplessness".
A viable alternative my coworkers seem to have adopted is, “Execute the git commands blindly as provided by me and get upset with git when something doesn’t work right.”
I think the point is that it's worth learning it, subjectively speaking. People tell me to learn vim properly, but I really don't believe I'll gain much, given the cost already sunk into what I use now.
Looking at diffs on the command line is cute and all, but for anything substantial I doubt this will ever be as good as using a proper GUI. I like "meld".
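You have to install it and then point git at it - roughly like this, from memory, so adjust to your setup:
git config --global diff.tool meld
git difftool                      # opens each changed file in meld instead of printing a diff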
If one guy says "This shit doesn't work!" and the other says "This is how it works", then it seems more like knowing how it works, well, properly than "shaming".
> with git checkout you can create and switch to the new branch in one command using the -b flag:
git checkout -b new_branch
> You can do the same with the new one, but the flag is -c:
git switch -c new_branch
It's like they had a design meeting where they discussed this and said "so I propose switch -b newbranch to create and switch to a new branch" and the objection was "nah that would make it consistent with checkout, which is against the project policy"
The purpose of `git switch` is to switch branches, and its normal argument is a branch ref. Thus -b for "branch" would be redundant and confusing. -c is for "create", i.e. create the branch that I want to switch to.
The whole point of adding git switch and git restore is to come up with more user friendly porcelain. Otherwise we might as well stick with git checkout. git checkout is maximally consistent with git checkout!
How does -b mean “create” in checkout? I get that it means “branch” but why is “branch” what I use to create a new one? The problem here is obviously the choice made for checkout, not switch. But consistency is more important so it’s better to keep consistency than “correctness” here imo. This problem already happened in the past with -d (delete).
> How does -b mean “create” in checkout? I get that it means “branch” but why is “branch” what I use to create a new one?
Because you can "checkout" other stuff too, not just branches.
> But consistency is more important so it’s better to keep consistency than “correctness” here imo.
Huh? I thought the whole point of these new commands was to try and fix the old illogical command line interface. Since one of the main problems -- probably the main problem -- is precisely the lack of consistency, the new ones logically just cannot be consistent with all the old ones, because then they would be just as inconsistent among themselves as the old ones. Something somewhere has to give. (And IMO this "-b" seems as good a candidate for the chop as any.)
Yes - if this is meant to be a whole new set of commands which are internally consistent and can be used without using the others, then it makes sense to do it right.
But is that the case? Won’t this just be used in addition to the old commands (except checkout)?
The worst case would be adding commands that attempt to fix an UX but which don’t replace it, creating two separate UXes you must use at the same time. Anyone who uses the Windows control panel knows.
Dunno. (Elsewhere in this thread, the guy who built them reported that he isn't on the git dev team any more, so perhaps not too promising.)
> Won’t this just be used in addition to the old commands (except checkout)?
That's still a massive improvement, since checkout was arguably the main culprit. Also, it was never the case that you'd have to replace all git commands to achieve consistency: Many (hopefully most) already had consistent naming of arguments and options. So make your new ones compatible with the largest set of internally-consistent ones already present, and you only need to replace a minority. (Worst case, if all N were incompatible with each other, you'd have to replace N-1 -- but you could still keep one. :-)
Then there's also usage frequency to consider. Given that checkout is the command most inconsistent with others, often inconsistent even with itself, and among the very most used ones overall -- almost certainly, in my estimation, the most used of the candidates for replacement -- I'd say even if nothing more comes of this, it's a huge net win.
Also note that it's not a case of "two separate UXes you must use at the same time": git checkout is still there; AFAIK no piece of the "old control panel" has been disabled or removed (or has any?).
I completely agree, I’m not asking for a completely new set of consistent tools, but an end result that forms a consistent set possibly together with some old ones. If it’s the case that you can now work in a consistent set and never use checkout then that is a big improvement.
Which is better depends on what you want to achieve. In this case, the whole point is to make git easier to use as a tool. Carrying over ideas we already know are confusing would therefore be wrong in my opinion.
Keep in mind this is your opinion. Unless you've done a poll of users (my comment is getting a lot of upvotes), you have no idea what most people prefer. I can come up with more reasons why -c is better, why consistency in this case is not an issue, and why -b makes sense for checkout, but it's not worth the time debating opinions on a done deal.
The only reason I bothered commenting is your comment pretty much called them stupid because their decision didn't line up with what you would have done.
I don't have as much issue with this particular switch as I do with the taped-together ball of aggregated mud that is the existing set of git commands. Those are the reason I found this particular one amusing. I would have preferred consistency over intuitiveness (or allowing both!) but, as you say, I have no idea what a majority of users would prefer. I'm pretty sure whoever chose this doesn't either (another problem with relying on software that can't pay thousands of dollars for UX testing).
The fundamental problem with this UX is that it (like so many Unix tools) doesn’t draw a hard line between interactive use and scripting use, so making large changes is out of the question since the interactive UX is already scripted in thousands of aliases and CI scripts. That’s a different topic though.
> I don’t have as much issue with this particular switch as I do with the taped together ball of aggregated mud that is the existing set of git commands.
You're expecting the impossible. The current set of commands isn't great, everyone agrees on that, but they're too ingrained in the minds of millions of developers to change. Thus, to solve the problem, the git developers come up with a new set of commands. By definition they won't be consistent with the current set of commands, as they're meant to solve the problems the current set has.
You have an issue with the new commands not being compatible, but you also have an issue with the old commands, that the new commands would replicate if they were compatible. What do you actually expect the developers to do?
I think the idea of a new set is great, but then I think it needs to have a bigger scope than this pair of commands. Perhaps a whole new suite that is consistent and can replace most if not all typical tasks. If this pair of commands is part of a larger suite of porcelain that is internally consistent, that's great. Or at least part of a design or plan for such a "new toolset" where more commands will follow - that's also a good solution. The important bit is the design process and long-term plans, so it isn't just more slow aggregation, which is how git got into its current state in the first place.
If there is more to this retooling project than meets the eye (in this article) then great. My fear is that it's more inconsistent aggregation, again - but it's not obviously the case of course.
You don't find it interesting that you are armchair quarterbacking the most successful VCS software in existence, disregarding that it evolved and that its API reflects that? You assume there is no design process. You're making a lot of assumptions and simplifications. Concern, or worry, even "fear", are fine, but your language has been more judgmental than concerned.
There are plenty of git wrappers that offer a cleaner API; you are totally free to use any of those, write your own, or to ditch Git altogether. Or is your beef with all of programmerdom, for choosing git as the platform for open source dev?
> You don't find it interesting you are armchair quarterbacking
This is hacker news. It’s basically a few thousand armchairs. Welcome.
> You assume there is no design process.
I’m not saying that. I was just a few minutes ago explaining how I have tried to see traces of it. From the software itself there aren’t any obvious signs of (long term) UX design.
> the most successful VCS software in existence
Lots of stuff is wildly successful despite being an aggregated ball of mud (php, js, unix, c++…). Good (intuitive, consistent) UX is in no way required for popularity or success.
> There are plenty of git wrappers that offer a cleaner API; you are totally free to use any of those, write your own, or to ditch Git altogether. Or is your beef with all of programmerdom, for choosing git as the platform for open source dev?
The fact that none of them are popular I think is due to the problem with such extensions, that soon enough you have to write something portable (a CI script for example) that runs on vanilla git.
> Or is your beef with all of programmerdom, for choosing git as the platform for open source dev?
I like git, and I hate git. Git is simultaneously hideous and awesome. I really, really hope git isn't the end of VCS, even though it's now one of the better ones.
> your language has been more judgmental than concern
Fair point. I do enjoy pissing on things that look or feel Unix-y
> I do enjoy pissing on things that look or feel Unix-y
Uh... Git was created by the guy who created Linux.
Initially to help manage distributed Linux dev. Not forced on anyone other than Linux devs. Not dominant through nefarious business practices like MS Windows, but because it spread *organically* overtaking other "well designed" systems. People voted with their feet.
Exactly. Just like a lot of other things created in the same spirit: technically good, but designed in a bazaar rather than an ivory tower. That’s why people complain about UX and not that it’s pretty but crashes a lot.
> I don't have as much issue with this particular switch as I do with the taped-together ball of aggregated mud that is the existing set of git commands. Those are the reason I found this particular one amusing. I would have preferred consistency over intuitiveness
Seems unreasonable:
* The problem with the "taped together ball of aggregated mud" is that commands and switches are illogical and inconsistent.
* These new commands are an attempt to fix that by introducing a set of (AIUI) parallel commands with better internal consistency.
* And now your complaint is that the new ones aren't consistent with the old ones?
No, of fucking course they aren't consistent with the old inconsistent ones. That's the whole point: If they were consistent with the inconsistent ones, they would also be inconsistent.
I think they had a design meeting where they said "let's add some new commands to make git easier for newbies".
Carrying over flags whose abbreviated forms don't make any sense in the new context doesn't make anything easier for anybody. If you want to keep using the commands you have memorized, you can do that - just don't use switch or restore. The new commands are different; that's the point.
Probably. But choosing the wrong-but-consistent flag would have been easier to learn and remember. I don't think anyone will consider different git commands as old/new, or core/porcelain, or whatever it is. I just want the semantic "delete" to have the same switch name everywhere, for example.
Sure. But you gotta see the benefit of a "-b" flag creating a branch in every Git command consistently (where it makes sense). That would be easy to remember!
That is probably what happened, with the observation that being consistent with checkout is a terrible idea for everything, as checkout is a clusterfuck of a command
You don't need "--" as much in the "git restore" example. With git checkout it may be necessary to separate the branch and the paths with "--", but since "git restore" does not take a branch (except with -s), leaving it out is totally fine.
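To make that concrete, suppose a file happens to share its name with a branch (my-topic here is made up):
git checkout -- my-topic    # "--" needed: otherwise my-topic could be taken as the branch
git restore my-topic        # unambiguous: restore only ever takes paths here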
Not typing it with "checkout" gets in the way of good tab completion. At work we have at least tens of thousands of branches and if I hit tab after "git checkout file_path_prefix", my shell is going to freeze for a while, while tab completion is going over remote branches.
How do you end up with that many branches? It sounds like you keep every feature branch around forever. Keeping one around for a few months I get (although personally the sooner they're gone after rebasing or cherry-picking them the better), but this sounds like a full on history of every branch ever.
If someone pushed a branch to remote, then there is a chance that they made a CI build for a customer based on that branch. Later you may need that branch to look at the source code, when you get a coredump, or logs or something.
(Every non-official build is made from a separate branch.)
You may not have much choice in the matter, but this level of complexity is where I would strongly recommend a (slightly) more complex architecture.
It's apparently a business requirement to keep every branch around forever, and I'll just take your word for that. At that point, you can have an `origin` remote where work happens, and the CI can include a push to an `archive` remote which is append-only.
Lets you have a development environment where the existence of a branch on origin means that it's in-play, and everything exists on archive if it proves needful.
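A sketch of what that could look like (the remote name and URL are made up):
git remote add archive ssh://git@archive.example.com/project.git
# in CI, after a customer build from $BRANCH:
git push archive "$BRANCH:refs/heads/$BRANCH"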
Deployed versions that need to be referred to later usually get git tagged, but those are not that much different from branches of course (they'll both show up in autocomplete as a committish object). But even then: tens of thousands of tags/branches? At that point it might be worth considering simply baking the git commit hash into the build artefact for future reference, instead of tagging every CI build with a branch/tag.
When I say "non-official", I say that it has some commits that aren't on the main branch. You need to push them somewhere, or else they're going to be lost.
From the article, I was thinking that it was again a stupid, confusing design for the CLI to require the "--" even with a dedicated command.
One main issue with git is that it is not consistent and logical across its commands. It always uses a different way or option abbreviation for each command.
For example having a space or a slash between repo and a branch in a command.
-- is typically to end argument processing and to treat all further ones as files. With git-restore, it's probably only relevant if you happen to have files in your repo that begin with hyphens. A fairly unusual situation, granted, but not forbidden.
Still does not really make sense, because you can still quote the "-" if you ever had such a file, and most Unix tools leave "--" optional, only for when you actually need it. Not systematic!
> For example having a space or a slash between repo and a branch in a command.
Actually, this one makes sense, once you understand the underlying model of how git works with remote repositories, which (imo) is fairly fundamental to a distributed VCS.
They're different arguments (in your words, having a space) when you're accessing remote repositories and have to specify the location git should access. You can always substitute a URL for a repo in this case, and they fail if you cannot connect to the remote repository.
In all other cases (having a slash), you're referencing a ref in your local repository. This is a local copy of the remote repository. That means that it always works offline, but also that it doesn't sync with the remote repository. You can also substitute another ref, such as a "local" branch.
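Concretely (origin and main are just the usual example names):
git fetch origin main     # space: contact the remote repository called origin
git log origin/main       # slash: read your local remote-tracking ref; works offline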
I have taken git apart and learned it three times now, and it all makes sense to me. The commands however, never clicked. The terminology never felt intuitive, nor predictably applied.
So I can explain how git works in great detail, but ask me how to perform an action I haven't performed in a month, and it's a lot like figuring out a tar command.
When you checkout a specific commit and are now in detached HEAD state, you are by default given the message
If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -c with the switch command. Example:
git switch -c <new-branch-name>
Or undo this operation with:
git switch -
It's not the correct solution because it requires knowledge of the intent of the project leaders, which is not easily available to us.
A much better response would be, "Thanks for pointing that out! I have filed an issue." One can't simultaneously throw shade for not being community-spirited while expecting other people to do all the work.
Submitting a patch would start with filing an issue, right? And I assume in that venue the maintainers would clarify their intent. If help were indeed needed, then I'm sure the maintainers would be relieved to get some help fixing the documentation.
In any case, the suggestion of submitting a patch when you notice a problem doesn’t deserve downvotes. That’s not throwing shade, at all.
There are ways to suggest submitting a patch that don't come across as dismissive, as a way to shut down a legitimate complaint. But this wasn't one of them.
I have trouble imagining you can't figure that out on your own. If that's really the case, then take a little time and explain exactly what impact you think that comment would have had on the person they were replying to. Then explore some other ways to actually encourage submitting a patch. Write a couple of them out.
If you do that work, I'm glad to try to help you see where their approach falls short in terms of sincerely encouraging contribution.
> I have trouble imagining you can't figure that out on your own.
I'm not being coy, as if I really knew what you think and were asking anyway just to be frustrating. I mean, I have an idea, but I also didn't want to make assumptions about your intent when it's just as easy to ask you to elaborate. After all, it's your point that it was incorrect, so I just wanted to know why.
Anyway, you said there are "legitimate ways to suggest submitting a patch, but this isn't one" (I'm paraphrasing). To me that didn't really make sense, because it implies that one of the wrong ways to suggest submitting a patch is to suggest submitting a patch.
> If that's really the case, then take a little time and explain exactly what impact you think that comment would have had on the person they were replying to
In my mind, it seems like the effect that it would have on them is to suggest to them to submit a patch. Alternatively, it might also have no effect, because they might not care enough to do anything about it, or just be busy, which is also ok. They don't have to listen to any advice, and it was given in good faith, without malice.
I can also imagine a situation where it is technically possible for someone to attribute all sorts of weird malice to "so submit a patch" that wasn't actually there. After all, people can feel however they want. However, in my opinion it's quite rude to assume, without knowing someone, that they're such a fragile creature as to invent malice where there is, in my eyes, obviously none.
So going back to when I said "I have an idea": I meant that yeah, I can imagine, after this back and forth, that maybe you would assume something like this about someone else - that they can't hear "so submit a patch" without taking it in every wrong way possible - but I didn't want to make that assumption without at least giving you the opportunity to explain yourself.
Ok, on to exploring other ways to encourage them to submit a patch. You could say, "Do you know you can submit a patch?", "Feel free to submit a patch", "This looks like a documentation bug, would you submit a patch?". All of these also seem fine to me.
> If you do that work, I'm glad to try to help you see where their approach falls short in terms of sincerely encouraging contribution.
What I don't like about this, personally, is that you're convinced their suggestion wasn't sincere, as if there has to be an ulterior motive. As if believing that a programmer on a startup forum is capable of fixing a documentation bug is so crazy, there can be no other explanation than really they were trying to humiliate the other person.
Anyway, I do believe that you're trying to be helpful, and it's not really that big of a deal, so no worries. Hope you can believe me when I say I'm being sincere as well. Basically, a lot of people have different ways of encouraging each other, and there are a lot of different styles of encouragement that people appreciate. It doesn't mean that one you don't agree with is wrong. When I read "so submit a patch", my mind didn't immediately go negative, and I hope the recipient of that didn't either.
I'm not convinced of anything based on one terse reply. But given the history of replies like that and given the user's history, it's my best guess that they were not intending to be helpful. I read them as being dismissive in a way that shuts down a complaint.
If I wanted to actually invite somebody to contribute to an open-source project, I'd start with positive reinforcement. "Great point! That's definitely an issue with the docs." I'd then first tell them how to let the right people know about the problem: "You can file an issue here. Be sure to categorize it as X, mentioning Y and Z." Then I would explain that if they wanted to get involved in the project, they could try submitting a patch, giving them at least a few sentences of instruction on what files to look at, where to find the contributor guidelines, etc. In sum, if I'm asking somebody to do work, I'd encourage them and do a little work myself to show that I'm trying to support them.
So if that reply truly seems fine to you, please understand that if it ever was, it has been ruined by a long history of people acting like that to be dismissive or jerky. That rudeness is something I can almost excuse in someone actually working on an open-source project, as they can get a raw deal. But I think there's no place at all for it in a forum like this.
> "Thanks for pointing that out! I have filed an issue."
This means, here's my problem, someone else fix it for me. Since it has not been fixed for YEARS, maybe not the best alternative if you want to get it fixed.
If it truly hasn't been fixed for years, then either the people involved haven't noticed or don't care. If the former, filing an issue is helpful. If not, filing a patch may not get anything fixed either.
It is more likely that "the people involved" are working on other things or just don't share your concern. Filing a ticket can be helpful, but a pull request is usually preferred times 10,000. You are right, it is also possible that a change may not get merged... in which case you've added the feature, so the software is more useful to you anyway. Everyone wins.
Everyone wins if I have to maintain my own branch forever just to fix some documentation that I don't need because I figured out what was going on well enough to write a documentation update?
I used to know a lot of terminal commands, but I'm seriously falling behind due to the JetBrains integration, which covers 99% of my daily use cases. Together with Local History, I haven't "lost" work in years.
I'd never heard of switch/restore and will probably have forgotten about them the next time I'm in the terminal.
Rebase and interactive rebase are so well integrated for my use cases that I feel much more productive without having to switch context.
Reflog and bisect are where I still switch to the terminal nowadays; I hadn't found an equivalent last time I checked.
> but I'm seriously falling behind due to jetbrains integration
This IntelliJ integration is the source of quite a lot of git problems in teams I worked with.
I'm quite flabbergasted by this - devs claim to know git on their CV, come in and know what "commit" is and how to use the IntelliJ UI, but don't even understand what it's doing. And everyone is acting like it's OK, and learning git is a "hard thing I'll never need", and we should all use Sourcetree or JetBrains. Or people just get so used to it and never understand what exists below it. They lose all sense of what they're doing and just think "the machine knows what I want". Then a vaguely worded dialog appears - or something similar - and they cause a clusterfuck on their branch - or sometimes even on other people's remote branches.
How do we allow our culture to be so lazy that people resist using one of the basic tools because "oh its hard I gotta remember 5 commands" and we find it OK?
No wonder the plane is burning.
It's good that you still know that reflog exists, because a lot of "IntelliJ is my git client" users don't even know about it. Though I'm still wondering: isn't it faster/easier to open the IntelliJ terminal and type a command or two than having to open a whole new window and click around in it?
(also sorry if this sounds like an attack on you, it isn't! just really wondering!)
Also, re: OP: so basically 2 new commands were added that do what other commands already do, but people don't read the docs, so we should add new commands in the hope that maybe people will read the docs for those?
> How do we allow our culture to be so lazy that people resist using one of the basic tools because "oh its hard I gotta remember 5 commands" and we find it OK? No wonder the plane is burning.
It's probably a rhetorical question, but I think it's worth answering anyway.
Experienced developers had the luxury of learning git, say, over a 10 year period. I certainly know a lot more git than 10 years ago.
If you are a new developer thrown into your first real project and IntelliJ handles git for you so that you can concentrate on being productive, then learning command-line git takes a back seat.
I see this in many aspects of programming. One thing I am struggling with currently is that JHipster generates applications in a user-friendly way. JHipster is highly productive and seems like the future of programming, BUT it means that there will be a generation of programmers who do not understand the megabytes of Spring Java code that is casually puked out by the code generator.
It's depressing to say it, but I think not understanding the tools is only natural and is probably the new normal in this age of complexity.
>Experienced developers had the luxury of learning git, say, over a 10 year period. I certainly know a lot more git than 10 years ago.
The depressing thing is I'm not talking just about off-the-shelf newbies, I'm also talking about experienced devs.
>It's depressing to say it, but I think not understanding the tools is only natural and is probably the new normal in this age of complexity.
Yes, unfortunately, this is becoming the new normal.
But what is the next "new normal" after this? More complexity and obscurity? How long can we keep building that house of cards before it collapses upon us?
I don't know about most people, but I have to relearn stuff if I don't use it.
I use the basic git commands a lot but the wider set not very often.
So maybe I should learn Git from first principles every year? Like a cop practicing at the firing range?
But I'm doing that on my own time. And then what about the 1000 other things, from Dockerfiles, to relational database query plan optimisation, to the latest Azure cloud offerings, to the newest React library for managing state, etc. etc.?
The solution, I believe, would be to focus on creating simpler tools - and by that I mean tools at least as useful and powerful as the ones we have now, but with much less cognitive burden, with better interoperability and composability through simple, standard data interchange formats.
If you really believe this, then I don't believe that you understand what the job of a powerful revision control system is.
The "cognitive burden" of git comes as much from the tasks that it is occasionally required to make possible as from anything else.
If the only git commands you ever use are push, pull, commit and checkout, then it's a very simple system with very little cognitive burden. But one day you may need to perform a truly complex task with git that a simpler system with much less cognitive burden would just not allow.
I do not understand how "data interchange formats" are an issue here. What do you want to inter-operate with?
I generally agree with you that much of git's conceptual complexity is irreducible if you want to retain git's power. But there's plenty of examples of poor-quality and inconsistent porcelain that provide no extra power. Indeed, these "UI smells" mostly offer no benefit at all.
Note that we're specifically discussing the theoretical reason for git's usability woes. The costs of actually fixing them in git aren't trivial, of course.
Most of the experienced developers you mention probably also don't really know assembly - unless they are specialists - something the generation before them thinks shows a lack of understanding. Assembly used to be canon; now it's a speciality. I don't know a single developer in my company who is able to write it.
We'll see if JHipster and its ilk are really the new building blocks. A lot of the older RAD development tools have also gone the way of the dinosaur. For sure the new primitives will be higher level and allow us to build more complex applications; it's just a matter of time before we find out which ones have the most expressive power and ergonomics, and which turn out to be giant hairballs.
This attitude mystifies me. The whole point of computers is to solve problems well enough that most people don't have to understand the machinery.
It is literally impossible for us to understand how everything we use works. We expect a compiler to just compile things and work; nobody is ranting about how modern developers are lazy because they can't hand-verify the compiler is outputting the right opcodes. Nobody is ranting about how web developers are lazy because they can't debug a browser runtime and its interface to the OS's rendering primitives. Can some people do those things? Sure. Do we imply character flaws for the 98% of developers who have focused on other things? No.
If this old-man-yells-at-clouds, good-enough-for-grandpa-so-it's-good-enough-for-me style were the dominant approach in our industry, we'd still be working with punch cards and fanfold printer output. Thank goodness it isn't. Instead of having a million developers adapt to git, git should adapt to the million developers and free up their time for doing something that actually matters to a user.
>Instead of having a million developers adapt to git, git should adapt to the million developers and free up their time for doing something that actually matters to a user.
How should git adapt?
Can you get a simpler model than the one it already has?
Is a basic structure like a tree really so complicated that millions of developers are having a hard time with it?
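The whole model more or less fits in one command's output - commits forming a DAG, branches as movable labels pointing into it:

```
git log --oneline --graph --all --decorate
```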
Nobody is advocating for "don't use new stuff, old is better". Nobody is advocating for people to hand-check compiler outputs (unless your analogy pertains to people who work in a GUI tool that shows them pretty pictures for opcodes and they check if the pictures match).
You've missed the point.
The point is "try to understand the basics of the tool you are using".
If me wanting people to learn and understand more makes me an asshole, so be it.
I'd rather be an asshole and help people learn more and understand more than coddle them in the safety of the pretty buttons and say "it's ok, you don't have to understand".
If we don't understand our tools and the problems they solve, how will we make progress?
How will we make better tools?
How will we make better software?
How do you learn and grow if not by understanding what you don't?
One way it could adapt is by doing exactly what this article is talking about: Giving commands for common operations where name and behavior matches user intent.
That you can't tell the difference between "let's make software more usable" and "nobody should learn anything ever again and just soften into blobs of undifferentiated protoplasm" seems like a you problem.
This is an unnecessarily antagonistic rant, which boils down to “I know git from scratch, I’m so smart, why isn’t everyone else doing this”.
Git is a very complicated, bloated command line. Frankly, most of the day-to-day can easily be done with a high level understanding and a GUI. There is no shame in not knowing the minutiae of git.
The goal of being a programmer is to produce cool things, not to duel with your tooling.
The irony with git is that the underlying data model is far simpler than the user interface implies. People assume git is complicated because the user interface is complicated, but it really is very simple under the hood.
So much so that you can write your own basic version of git in a couple of hours: https://wyag.thb.lt/
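You don't even need special tooling to see it - the plumbing that ships with git will happily print the objects (output sketched from memory, hashes omitted):

```
git cat-file -p HEAD              # a commit is a tiny text record:
#   tree   <hash of the snapshot>
#   parent <hash of the previous commit>
#   author / committer / message

git cat-file -p 'HEAD^{tree}'     # the tree: one line per file (blob) or directory (tree)
```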
The problem is that git leaks the underlying data model all over the place, even where it's unnecessary to actually controlling versions. It's a nice data model, but the user view shouldn't expose it. The data model is an implementation detail, but git treats it like a show & tell.
You have it exactly backwards, and I think it gets to the core of why people get so unnecessarily confused about git. The underlying data model is both much simpler than the interface and crucial to understand. If you treat git as a collection of memorized command recipes the same way you did with SVN or CVS, you're going to have a bad time. The underlying DAG needs to be top of mind when interacting with git, as it best represents the inherent complexity of the problem that code repositories are meant to solve.
But this isn’t a problem because the underlying data model is almost perfect for version control of text files. In some cases the data model is an implementation detail but in this case the porcelain is the fungible detail that can change depending on surface things like people's preferences.
I don't know git from scratch; I've never written it or dived into the source code itself.
Yes, you can write C without understanding pointers, you can paint without understanding colors and how they mix, draw without knowing the difference between 3H and 2B, use a drill without knowing which type of head is for what.
But try doing it long term and you either gotta learn those things or your output will be limited and clear expression of your vision will be harder.
I'm not advocating for people to learn git from scratch.
I'm advocating for understanding - not low level, at least high level.
And is it better to go through a "how to use tool X in GUI Y" tutorial, or to learn how to use tool X and then immediately understand what Y does? What if you switch GUIs? Which knowledge will stick?
C is a great example here. If you write a lot of C, you really have to understand pointers very well. And quite a lot else. That makes it a bad tool for many purposes, so most developers do not use it, and fewer use it every year. It's just a bad tool for modern purposes.
The same applies to the git CLI. And really, to git. The right tool for a small group of C-using kernel developers 15 years ago may not be the right tool for different people doing different things today. Let's hope it goes the way of C.
Except you don't need to know the git CLI except when you've screwed something up. Even the reflog is available via TortoiseHg. Bisect, okay, maybe you have to drop to the command line, but otherwise there's no reason to futz about with it when you can checkout, merge, rebase, cherry-pick, create new tags, etc. using the GUI.
My experience is that knowing the DAG in git is equivalent to needing to know pointers to write effective C.
Knowing the internals is more like needing to know assembly in C. Yes, it can be really useful for debugging certain classes of problems, but not really necessary in day-to-day work.
I've used git professionally for 4 years now and haven't once run into a corrupted repo or a problem googling for help couldn't solve. Day to day driver is TortoiseGit, which is frankly just a shitty TortoiseSVN skin on top of git. But I like being able to visually see the graph since at the end of the day I'm doing graph operations when manipulating the repo.
> I'm advocating for understanding - not low level, at least high level.
It sounds a lot like you mean low-level, not high-level, though?
There’s more than enough to be achieved by just knowing the tree concept of git, and using a GUI to visualize it while running the basics - checkout, commit, branch, rebase, etc.
>There’s more than enough to be achieved by just knowing the tree concept of git, and using a GUI to visualize it while running the basics - checkout, commit, branch, rebase, etc.
Oh, that's exactly what I meant. At least a basic, high-level understanding of the git model and basic operations on it. But most people never even get to rebasing or try to understand what lies behind the magical buttons in the GUI.
I think you're approaching git and software dev from a bottom-up perspective. You learn the tools, understand why they exist, and then use the tools to solve higher-level problems. Unfortunately, due to time constraints, interest levels, and simple ease, people go top-down. They need to switch branch, so they follow the least-effort principle and use a UI. Barely understanding many fundamental tools is common these days for devs, and I honestly think it's a sign that our field has grown massively in terms of the tools we need to use, the processes we use to deploy, and the products we use to develop.
I have to deliver products and deliverables, when and what do I focus on wrt gaps in my knowledge? Git? Unix commands? OWASP Security principles? Cache busting? global state management? ORM integrations with popular DBs? Kubernetes configs?
It's hard to see the gaps someone else has and wonder why they can't know what you know, but they may have deep knowledge in a domain you are only superficially knowledgeable in.
I agree we need to nail the basics, but this isn't 2002 anymore, and we don't ship Gold CDs to customers by running a build command on a single PC in the office. Our jobs are so much more complex and multi-faceted, and the oldbeard assumption that things are 'bare essentials' is eroded by the pragmatic realisation that we only have so many hours in the day.
You can use top-down knowledge to inform yourself on what concepts you need to learn, and you can ignore the parts of the domain you need right now. But you still have to learn them bottom up. Getting confused by the concepts you're using is a gigantic waste of time.
And the harder the concepts, the easier it is to get confused, so there's no trade-off.
> I think you're approaching git and software dev from a bottom up perspective. You learn the tools, understand why they exist, and then use the tools to solve higher level problems
Actually, I'm trying to look at it more from a "shallow sea" perspective - you have a problem and are given a tool to solve it.
But do you not dive in just a bit to see "hey what is this tool" after you solve the problem?
Or after using it for a while?
UI is also something you have to learn how to use.
The Sourcetree UI is as complicated as the terminal for someone who has never used it.
>I have to deliver products and deliverables, when and what do I focus on wrt gaps in my knowledge? Git? Unix commands? OWASP Security principles? Cache busting? global state management? ORM integrations with popular DBs? Kubernetes configs?
With that mentality, what do you ever learn?
Do you just keep on chugging year after year with "duct-tape the tools"?
Where is the joy in that?
Where is the growth?
Are you always in a rush to deliver software without a moment to think?
You learn about the level you're using - like, are you using git daily?
Just reading a tutorial or two and spending 5 mins a day with it in the terminal is going to do wonders for understanding it long-term.
>Barely understanding many fundamental tools is common these days for devs
It's common for someone starting to develop or starting to use a tool - hell, 8 years ago I was as confused about "wtf is this git" as anyone.
Gradle was magic to me. Terminal was a dark and scary place.
That doesn't mean that it is okay to stay at that level.
If we accept "not trying to understand" as the new common, then we accept failure and ignorance as the new common.
We accept the world of broken software because people don't understand what they're doing
- and we're telling them "you don't need to understand so don't even bother".
And hell, maybe that's gatekeeping, but fuck it, I'd rather be St. Peter at the gates than accept a world where learning and understanding are something "we don't have time for".
>>I have to deliver products and deliverables, when and what do I focus on wrt gaps in my knowledge? Git? Unix commands? OWASP Security principles? Cache busting? global state management? ORM integrations with popular DBs? Kubernetes configs?
> With that mentality, what do you ever learn?
The way I understand it, this comment is saying you have to prioritize and work on the highest-impact areas. Of course you grow, but until a certain point you don't benefit much from knowing the guts of git more than Framework X.
I have never experienced any joy of creativity from using git, so it's not something I'm putting high on my list. Heck, learning Rust was 10x as good for my development as git, so I don't regret prioritizing that way.
I expect (hope?) that most people understand the git they use in their daily workflow... but that doesn't mean they have to have mastered every dark corner. Find a specialist (Stack Overflow :) when an unusual situation arises. If it's happening consistently, then I agree some more learning should occur.
My conflict resolution skills have atrophied due to jetbrains. It’s just so much easier than anything else I’ve tried. Even when I don’t already have a project opened in idea, I’ll open it just to resolve a merge conflict.
The rest I do on the command line because I don’t want to forget — except commits because I’m not at risk of forgetting “git commit -m”.
I currently use a third-party git GUI (GitKraken atm) but I also use jetbrains products, am I missing out by not using the integrated git functionality? If so, is there a tutorial or guide you could recommend, or is it all fairly self-explanatory?
The Jetbrains git interface is really quite good. For me the magic is the general combination of git and local history. The search is good and the diffing and jump-to-source from those. The changelist handling is pretty good too.
I think a quick skim of the available features in the docs would probably give a good overview. Then poking around.
Jetbrains Git integration is quite self-explanatory - if you're already using GitKraken, you're not missing out much except being able to do it from your IDE instead of another tool. Maybe conflicts are a tad easier due to same syntax highlighting style and ability to edit on the go.
I think Sublime Merge is pretty close to the optimal git GUI: https://www.sublimemerge.com/. It uses standard git commands and terminology and allows you to add custom git commands to the GUI. Unlike Sourcetree, it is cross-platform (Windows/Linux/Mac) and includes a merge tool. I think command-line git and Sublime Merge translate back and forth pretty easily.
I don't like Sublime Merge that much. Probably because I tried GitKraken first and loved it. It helped me understand stash and rebase better than the command line ever did. At some point they decided to not allow gratis use on private personal repos so I switched to VSCode with the Git Graph extension which gives me most of what I liked from GitKraken.
For those wondering this is a separate application from Sublime Text, built by the same folks, and with the same free evaluation and $99 lifetime license (not sure how "required" it is).
I haven't felt a need for a git gui but I might give this a try anyhow.
I tried to use it and ended up back with Fork for some reason. I think basic tasks like merging were unnecessarily cumbersome and reliability wasn’t perfect.
A lot of comments along the lines of ‘why do people use <ide> instead of learning the commands’.
For me, I used to use terminal git, and I still do occasionally. But I use Sourcetree now for most things because I make fewer mistakes when I can see the tree visually all the time.
My job isn’t to use git, it’s to write specialist software. If I get the software written and the customer is happy, it doesn’t matter whether I use <ide> or not. Imagine having 100 complex things bouncing around your head and having to make that 101 when you forget the order of arguments to merge.
The guy who knows every command of git backwards is welcome to apply for a job managing a git repo or something if such a thing exists? But I could harp on the same way about his missing MATLAB or firmware skills.
I wouldn't say I'm an expert, but I've got about 10 years' experience using git via the CLI, and whenever a noob does something weird and he's using an IDE, I'm like... sorry, I have zero idea what this is trying to do and cannot help you.
I've had similar experiences with git or other tools - not being able to help a junior dev because the GUI they're using is obfuscating whatever it is the underlying tool is trying to do. I think the irony is that we've got this insanely complex version control system that actually could have several valid use cases for what is likely a common path for users in a GUI.
I'm also not sure referring to people as "noobs" is going to help you empathize with their difficulty. ;)
Eh, that term seems to have fallen out of favor compared to when I was a growing up but I don't think it necessarily has a negative connotation.
I remember being in programming and software related IRCs at 10-11 years old having no earthly clue what in the fuck I was doing, asking adults questions and getting called a "noob".
Well, yeah, it was true. (I also made damn sure they had no idea I was a child.)
They could have said "inexperienced person" but that's not got quite the same ring and people are lazy, aye? Haha
My personal experience around "noob" is covered by the ones I found in ~~most~~ edit: the highest search-ranked online explanations: mostly negatively connoted, a derogatory version of newbie. Often associated with people not sufficiently able to learn, or at least not learning on their own.
I've been around long enough and in enough circles that I have seen 'noob' go from mildly chiding to full-on insult. It used to be that in some communities, being referred to as a noob was a sort of "hey check out the new guy" ribbing. A noob was just someone starting out, a neophyte. It was the impatient and rude who later decided that being a noob was a bad thing.
I often see people self-describe as noobs when asking technical questions, for example in programming Reddits. I couldn't say that I understand the reason for it, though.
“Hey can you help me? I stood up a kratr pod and it’s lined to fundle but for some reason when I try to press the bin tree to the overlay layer the reznik instance on my laptop says ‘out of tokens’. Have you ever encountered this?” No, no I have not.
Having been troubleshooting computers since I was literally 8 years old (like I'm sure many of us here have) I feel reasonably comfortable I could at least be helpful solving this problem as well.
Every problem I have to troubleshoot is almost by definition one I've never encountered before.
This is exactly what motivated me to write Learn Git The Hard Way - I was surrounded by people who depended utterly on IDEs and got themselves into terrible states, based on mental models that were refractions of what the (universally available) CLI gives you.
The best example of this was when I worked with a project with 12 dev teams that didn't rebase. They asked me where a commit came from, and when I ran 'git log --oneline --graph' the output was all pipes across my (maximized) terminal.
I've never seriously gotten into the 'plumbing' of git, so can't claim to have a 'true' vision of git via the CLI, but I've never yet been in a situation where it's been required.
Any developer demonstrating that level of daily incompetence with their basic tools of the trade should never be allowed to write code. The fact that this level of incompetence seems the rule rather than the exception speaks to the rather terrifyingly pathetic state of software, where badly engineered systems end up killing people. Source: firmware engineering consultant that cleans up messes like these for some of the biggest corporations in the world, who build things that will kill people when they fail. I would love to blow the whistle, if I thought that would have any positive outcome whatsoever.
Who defines which tools are "basic tools of the trade" and which are just bland infrastructure?
And who decides how much of each tool one should know to move beyond "daily incompetence"?
If I'm getting the job done and produce more net value than my peers for my salary, why should there be requirements on how well I know some specific tool?
I'm willing to bet a ton of value has been generated by folks who don't know the first thing about git beyond "commit, push, pull request". Who cares?
Now, of course, whatever my core job role is (perhaps it is firmware development, as in your example), I should know that role inside and out if my output is going into safety-critical places. But that's a different comment altogether.
I'll admit that as a 20-year C++ developer, I don't know what rebase is or when to use it. I've only been using git for a few years, CVS before that. I commit and push often, then do a merge request via a web portal GUI (Bitbucket or GitLab), then merge it, squashing commits, again using the GUI.
I prefer merge/fast-forward because I want all history preserved. People can rebase their changes in their branch, but rebasing is never done on the main or development branches. Most of the worst merge conflicts I've had to deal with were caused by someone rebasing changes they didn't make. I reject all pull requests that change history.
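(For anyone following along, the two styles being contrasted look roughly like this - the branch names are placeholders:)

```
# merge style: the fork point stays in history, main is never rewritten
git checkout main
git merge --no-ff feature/foo

# rebase style: replay your own branch on top of main first, rewriting
# only your unshared commits, then merge/fast-forward as usual
git checkout feature/foo
git rebase main
```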
We ask candidates the most ridiculous algorithm and data structure design questions when we should be asking them to describe the git data structures. Let’s fix interviews and kill two birds with one stone.
Huh, how did it get so bad that "blow[ing] the whistle" about "things that will kill people when they fail" "would have [no] positive outcome whatsoever" ?!
I disagree; the point is to get code written that solves problems, and the tools are a means to that end and not an end in themselves. Our tools should make that task easier and more accessible to more people.
I feel you. I was recently hired to help with an SVN-to-git migration and Java upgrades from 5-7 to 8 (due to my previous experience with such migrations). When I joined, they already had a plan. They told me no one would be using the command line. They were discussing which GUI to use; everyone would have this GUI installed by default, the git command would be discouraged as a non-standard approach, and Windows users would not have git-bash installed. I was asked for a recommendation of GUI tooling for git and I had absolutely nothing to say; the only one I had ever used was Kraken, and that was in 2014(?). When I offered to train and assist everyone with the git command line to make everyone more efficient with industry-standard tooling, they told me that wasn't in the plan. My job was done there without me having to do anything; I was free to leave that job!
Sublime Merge would have been a good suggestion in this scenario. Its UI uses standard git terminology and concepts, so you'd have been sneakily training them in the command line while they used the UI!
It was a very old-fashioned and regulated working environment where developers and testers aren't allowed to install whatever they want. Software installers can be whitelisted after a review is done, which included things like: licence (OSS vs enterprise), support, community, or even more banal things like stars on GitHub. Adding new software to the list could take literally months - as in the case of the git GUI client. Sublime was frequently requested, but managers responded "you can already install Eclipse and IntelliJ, you also have Notepad++, you don't need more editors. More editors installed is a security risk to this company!" (some hackers found a way to run the VSCode portable edition, which didn't offer updates ;)). That was the end of the discussion.
This is also why there was so much fuss about the git GUI client; they were looking for a golden hammer, a GUI that would do everything and anything.
The git GUI in Intellij and related products is really quite good. The diff view that shows three adjacent panes for conflict resolution (|theirs ->|merge|<-yours|) is so much better than the normal conflict resolution flow (not least because you still get the normal code highlighting in it), and separate checkboxes for each changed section of the diff makes it so easy to chunk large changes into logical commits. I wish they offered it as a standalone product.
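(If you want something close to that three-way context on the CLI, plain git can at least include the common ancestor in the conflict markers - I believe `zdiff3` needs a reasonably recent git:)

```
git config --global merge.conflictstyle diff3   # or zdiff3 on newer versions
```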
>What was their justification for being like that?
The more tools people are allowed to install and use, the less standardised software development. Meanwhile we didn't even have common source code formatting pattern and single Java class could be formatted with tabs, 2 spaces and 4 spaces.
Not necessarily. On my team at work we have people using Sublime Text, VS Code and vim, but all 3 (+ our CI) plug into ESLint and TypeScript for ensuring style coherence.
I have 10 years of git CLI and I use GitHub Desktop or Sourcetree most of the time. I started using a git IDE to help support the team members who weren't experienced in git and chose to use one. I work in games, and a lot of game developers mostly know Perforce.
I would never brush off a team member with “cannot help you”. I’m a git expert and I will figure out what’s wrong and fix it.
Every IDE I know shows you a log of the raw commands it's running. I use the IDE first, then if something goes wrong (which very, very rarely happens; git isn't exactly complicated) I check the log and see what it's trying to do.
On top of that, the IDE will let you run git directly in an authenticated session if you really desire. If a simple/dumb GUI wrapper around git is enough to scare off an experienced developer, then I have my doubts about their competence.
It's usually a mix of both IDE and git issues. And yes, I've become an expert in using Sourcetree and GitHub Desktop. I actually like GitHub Desktop a lot and use it for most dev now.
- Want a diff between two commits from two different branches? Click one commit and Ctrl+click the other. All the diffs show up immediately.
- Messed something up badly and you need to see the reflog? Click the Recyclable Commits checkbox, and everything in the reflog shows up just like any normal commit. And you can use the diff trick above on them.
- Wonder what's in your stashes that you forgot about? Click the Stashes checkbox and they all show up as normal commits. Because that's what a stash really is.
- Committed something to the wrong branch? Drag the branch markers to where you want them.
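(Rough CLI equivalents of the above, for the terminal-inclined - the commits and branch names are placeholders:)

```
git diff <commitA> <commitB>          # diff between any two commits
git reflog                            # the "recyclable commits"
git stash list                        # what's sitting in your stashes
git stash show -p 'stash@{0}'         # ...and what one of them contains
git branch -f somebranch <commit>     # move a branch marker you're not currently on
git reset --soft <commit>             # move the branch you ARE on, keeping the changes staged
```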
This is my main gripe about git, it’s literally a tool created to suit the guy managing the linux kernel development via mailing lists… What proportion of junior developers are even tangentially working on anything resembling that?
Even if it’s exceptionally wonky, I have absolutely no problem introducing git to new devs, as long as it’s alongside a graphical representation (sourcetree or some ide-extension), and as long as I am able to enforce limitations on what commands can be run on the git CLI (or more realistically, disallow push to master, and force every merge/rebase through a review)
I’ve personally found that it’s trivial to understand how IDEs integrate git and then adapt the workflow around that, but it’s a complete clusterfuck to let self-proclaimed git wizards loose on a repo without any structure..
> This is my main gripe about git, it’s literally a tool created to suit the guy managing the linux kernel development via mailing lists… What proportion of junior developers are even tangentially working on anything resembling that?
It always did give me a chuckle how it took off the way it did for projects that don't come even remotely close to the scalability (like tree sizes) it provides.
The facts that it's free, fast, reliable, good for offline operation, and can be used for huge and tiny projects alike are nice, though. I just never expected to see the day when web designers would become religious about using it.
On a more serious note, I wonder if GP ever worked on projects outside of large enterprises and is just gatekeeping it. Git is useful on solo projects, for God's sake.
HG is simpler to understand, has a more consistent CLI, and way better error messaging. Just a pity that it's been so sorely overshadowed by the Swiss Army Chainsaw of VCS.
Git can certainly do almost anything/everything anybody might want from a VCS, but, just like a chainsaw, it'll cut your leg off just as soon as it'll cut off the branch you're pruning if you're not fully expert in using it.
I understand that some GUIs sort of become "black boxes" while trying to abstract away the complexity, but I know at least one (Sublime Merge) that always shows you the low-level commands it's trying to run.
Same for me. I switched from SVN to git professionally in 2013 and used git via Sourcetree on personal smaller projects. I had to switch to the git CLI because I had used SVN on the CLI only and could not transfer the documentation to the buttons of Sourcetree. Whenever I help out other coworkers with git, I do it via the command line; doesn't matter if it's Windows or Unix. What drives me nuts with programs like Sourcetree is the fact that they apply their own default behaviors to make it „easier" to use git - like initializing submodules by default. I had a project with a sparse-checkout scheme that could not be cloned via Sourcetree because of this. Our submodules were an optional compiler for a 3rd-party platform.
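(For anyone hitting the same thing, the plain CLI keeps both of those choices explicit - the URL and paths below are placeholders:)

```
# a plain clone does NOT touch submodules; opting in is a separate step
git clone <url> repo && cd repo
git submodule update --init --recursive   # only if/when you actually want them

# sparse checkout: only materialize the parts of the tree you need
git sparse-checkout init --cone
git sparse-checkout set src docs
```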
I completely agree. At work we still use SVN, so I'm used to TortoiseSVN, which is OK imo. Now for git there's TortoiseGit, but it was such a bad experience. The TortoiseSVN background made it even worse, because things seemed similar but did totally different things.
So weirdly, for SVN I still use Tortoise, but for git I prefer the CLI client.
Especially when the IDE doesn't use standard names and/or introduces new terms.
Visual/IDE tools for git are welcome and I use them occasionally. But the CLI should be the common denominator.
Isn't that just being incredibly lazy. Surely a reasonably experienced person like yourself should be able to read the error message and at least get a rough idea of what the problem is. Then from there, explain how you would fix it on the CLI and try to map that onto the GUI. Most Git GUIs map fairly closely to the underlying git model.
Why not just say the truth "I could help you, but it's not my job, so I'm not going to"
> Most Git GUIs map fairly closely to the underlying git model.
Maybe, but most Git GUIs don't provide clear error messages. This is the case with VSCode, which my teammates keep using. When you use the Git CLI you get the original error message and know what's wrong.
I was also burned by sourcetree some years ago where it lost part of my code while doing a merge I didn't even understand.
I am not a Git expert. But I can remember the 5 commands needed to do my job every day: `commit`, `push`, `pull`, `rebase`, `checkout`. (You can also add `add` and `status` to the list if you want.)
They're straightforward, except for `checkout`, whose main usage is now addressed by `switch`.
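The whole daily loop fits on a sticky note (the branch name is a placeholder):

```
git status                  # where am I, what changed
git add -p                  # stage hunks interactively
git commit -m "message"
git pull --rebase           # update from the remote, replaying my commits on top
git push
git switch other-branch     # the newer spelling of checkout for branches
```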
If you use it wrong, the Git CLI will tell you in most cases, and even tell you how to do what you intended to do. Git GUIs will probably tell you something went wrong and leave you at that, or try to be more intelligent than you are and do something wrong.
> Isn't that just being incredibly lazy.
-> this is what I'd answer to do those Git GUIs.
> Isn't that just being incredibly lazy.
-> this is what I'd say to people who won't write these 5 commands on a sticky note or something and keep it for a month, before realizing it wasn't hard to remember once you _commit_ to it, and that Git is not hard - minus exceptional problems, which Git GUIs absolutely won't help you with anyway.
If you are using a command line tool for 10 years and don't consider yourself an expert then either you are not using the tool very often or the tool is not well designed.
In case of git I would say it is the latter.
I mostly use git via the Tortoise interface and that works well 95% of the time.
The other 5% is split between git mv, git branch -D, and trying various commands to make git do something simple that can't be done, and then giving up.
At this stage I see git as necessary evil rather than something that helps me with source version control.
First of all, using "noob" like that is offensive. Everyone's a "noob" at some point. And if that's how you talk and treat people, I probably wouldn't want your help.
And second, what exactly mystifies you about what e.g. SourceTree menu commands do? They map clearly and intuitively to CLI commands.
If you have "no idea" what it's trying to do then you're not even trying to be helpful. You're just being condescending.
I get that you disagree with the GP's post, but “noob” isn't offensive. What's offensive is their lack of support for colleagues (which reads like a potential issue with their own ego: where they didn't want to be shown up in an unfamiliar UI). But their post reads exactly the same if you substituted “noob” for “newbie”, “junior”, “inexperienced”, etc.
What we need to do is get past this ridiculous mindset some have that not knowing something is a bad thing. We all have to start off somewhere, and “noob” is just a common term for describing that. Case in point: I'll readily post “noob question guys, how do I…?” on Slack to give context that I'm asking a potentially basic question and basically don't really know what I'm doing. For reference, I am the most senior on my team and yet I have zero issue highlighting stuff I don't know when asking for help. And that's exactly the way it should be.
There's no shame in being new at something. What there should be is shame in wanting to mock newbies, and shame on those who don't offer up their help to others. Being a noob should be celebrated as someone new joining the team rather than added to our dictionary of inappropriate terms.
Anyway, I’ve been the “resident git expert” before, and even just supporting competent CLI users is miserable once the team grows beyond 30 or so people. I can’t imagine trying to then reverse engineer and debug a half dozen crappy GUIs.
Optimizing everything around developers that can’t figure out the CLI sounds like a great way to attract bozos (both by admitting bozos and by chasing non-bozos out).
> which reads like a potential issue with their own ego: where they didn't want to be shown up in an unfamiliar UI
> What there should be is shame ... those who don’t offer up their help to others
I read it more as, I don't want to be become L1 Tech Support for something I don't know either.
What happens when the GP runs into an issue themselves. Do their colleagues help him or return the same cold shoulder?
Computers are sufficiently advanced that there will always be blind spots in your team. Sometimes that means working together as a team to figure them out. Which is the kind of behaviour a good manager should encourage and the sort of attitude a good senior engineer should have already learned.
The issue isn't the technical complexity, it's the office politics.
If they fix it, do they then become responsible for fixing it in future? Is this now a "responsibility" the "company" (coworkers) expects of them, with no compensation? In addition to their existing responsibilities?
I introduced git to the company I'm at (previously on SVN running on an old beige box in the corner).
It made collaboration easier. I accepted the fact I'll get pinged whenever someone makes git do a weird thing, but I'm lucky that for the most part coworkers learn from these incidents rather than coming back each time with the same issues.
Again, it shouldn’t be a matter of office politics. If you have a healthy team then everyone will chip in their individual strengths. Yours might be git but someone else might be stronger with Bash, package management, or whatever. Eventually you end up with everyone helping each other. So while you might get interrupted occasionally for git queries, you’d save hours on another issue that someone else helped you with.
There should also be schemes in place to help people level up their own skills. Whether it’s organised training days or an acknowledgment that x number of hours a sprint is spent on personal development (A Cloud Guru or whatever).
If a company doesn’t set up those two pillars then they risk creating toxic teams and that’s ultimately more harmful for productivity than any lost time in a given day helping a peer with version control.
Source: been a hiring manager at several companies. Seen what works and what doesn't. Ultimately a closer team almost always outperforms one where individuals are only looking out for themselves.
> First of all, using "noob" like that is offensive. Everyone's a "noob" at some point. And if that's how you talk and treat people, I probably wouldn't want your help.
Unfortunately, a word needs to exist that stands in for this:
"Person who isn't so different from myself when I started, with little experience, who is mistake-prone due to that lack of experience, and whose mistakes create disruption and often great expense at the most inconvenient times."
Being offended by "noob" just makes communication more difficult, and does not really change anything. We've had all kinds of much more offensive names for the same thing (many other names imply race, social status and so on). "Noob" seems to be about as offensive as "rookie", but one syllable shorter.
> Contrary to the belief of many, a noob/n00b and a newbie/newb are not the same thing. Newbs are those who are new to some task* and are very beginner at it, possibly a little overconfident about it, but they are willing to learn and fix their errors to move out of that stage. n00bs, on the other hand, know little and have no will to learn any more. They expect people to do the work for them and then expect to get praised about it
The specifics of the definition probably vary depending on who you ask but that's roughly it.
Indeed, and it should be clear that "noob" is intentionally/cleverly spelled like "boob" in the sense of "idiot". That's the whole point of the word -- that's the joke.
Now it's fine to joke around and call yourself a noob, that's just self-deprecating.
But if you're talking about other people and don't want to inadvertently offend, stick to "newbie" in speech or "newbie/newb" in writing, which have a connotation entirely of "beginner" as opposed to "idiot".
(Of course, if you're among friends where you enjoy making fun of each other, say whatever you want!)
I've been using the term noob for decades. Reading your comment is the first time I've ever seen anyone compare it with boob. There is a big world out there, with lots of different people using words to mean things.
I agree with your last paragraph but "noob" is offensive? That's just silly... it's like you said, we've all been there once and it's a useful term for quickly describing where one's at. There's no inherent malice unless intentionally added by the speaker/poster.
Just as a data point, I unselfconsciously describe myself as a noob (or more likely n00b) when I'm inexperienced with something, and still prone to making embarrassing mistakes because I don't know what I'm doing.
> My job isn’t to use git, it’s to write specialist software.
I’m a big fan of automation, but there are certain fundamental tools I think one needs to understand to do the job. Both because you should have some idea what the automation is there to accomplish and also to get yourself out of a pickle when something goes wrong (or to even recognize when that happens!).
So, as I think most people would agree: you certainly don’t need to understand the obscure corners of your programming language, but you should have a solid understanding of the fundamentals and a decent overview of the rest.
In the case of source control, and git in particular, IMHO you should have a decent fundamental understanding (which isn’t even particularly complex at a conceptual level) so even if you don’t remember the command for ‘X’ , you’ll know to look for it when you do need it.
Given how you started your comment, perhaps you don't even agree with the implication of the sentence I quoted.
I'd go even farther than not completely agreeing with that and just say that I completely disagree. Our job as programmers is not just to write a bunch of code in a vacuum, it is to create that code and communicate it to the machines and people who will be consuming and manipulating it.
Things like version control should be first class tools that we all learn in detail. They are literally the most fundamentally important tools we use every day if we work in a team. You aren't doing your job if you don't care about how your code interacts with your team and your deployment. Like an architect who doesn't know how to use a drafting table.
It's incredibly frustrating that people let their egos stoke them into this idea that tooling is beneath them. It's almost certainly the cause of an incredible amount of bad software, even though much of its code is, I'm sure, quite clever in a vacuum.
And I'd rather work with a hundred programmers who know how to use their tools than one programmer who looks down on them for it.
I don't understand how "using a GUI to drive this tool" instead of "using a CLI to drive this tool" means someone doesn't understand version control, doesn't care about how their code interacts with their team, or doesn't know how the tool works.
I don't understand how I'm supposed to reply to this given that it claims I said something I didn't even come close to saying and has nothing to do with what either me or the GP post were replying to.
If you want to try again, the central thesis of my post was:
"Using git (or some kind of version control) is, in fact, your job."
With a digression that amounts to:
"The disdain OP's post shows towards people who take the time to understand the tools fundamental to performing their job is extremely unappealing to me in a coworker" (ie. the "they should get a job managing a git repo or something" part)
How should one go about achieving a 'decent fundamental understanding' (specific pointers, if you have them). And, how much time must one devote to being a 'good enough git guru'? (and, it surprises me that gitless is not more popular: https://gitless.com/ )
> My job isn’t to use git, it’s to write specialist software. If I get the software written and the customer is happy, it doesn’t matter whether I use <ide> or not. Imagine having 100 complex things bouncing around your head and having to make that 101 when you forget the order of arguments to merge.
Imagine if you knew a cabinet builder who said:
"My job isn't to use a table saw, it's to build beautiful cabinets. If I get the cabinets built and the customer is happy, it doesn't matter whether I use a Japanese handsaw or a CNC-controlled laser. Imagine having 100 different pieces bouncing around in your head and having to then remember the assembly order."
Now, you might argue that this supports your point, by claiming that it actually doesn't matter whether the carpenter uses a japanese handsaw, table saw or CNC laser cutter. But I'd argue the opposite: it does matter whether the carpenter knows the tools they use as well as possible, because this will affect both the quality & speed of their work, but also the range of possibilities they can even consider. It doesn't matter much which tool they use, as long as they know it intimately and in depth.
I would argue that the same is true of the tools we use as software developers. Pretending that all of the skill lies only in the domain of creating actual lines of code is misleading. If you're using a revision control system, you owe it to yourself, your customers and your colleagues (if any) to be a master of that in the same way that you're a master of the specialist software you're creating.
> But I'd argue the opposite: it does matter whether the carpenter knows the tools they use as well as possible, because this will affect both the quality & speed of their work
You chose a strange analogy, because CNC woodworking is far and away the superior choice over hand tools for producing cabinets. CNC cut and drilled wood is going to be orders of magnitude cheaper, more accurate, and allow the cabinet professional to focus on what really matters (installing them properly) rather than wasting huge amounts of time trying to do everything the manual way.
Ironically, this might be a great analogy to support the OP's point: Someone who gets so caught up in the tools and methods and doing everything the manual way for the sake of flexing their knowledge is at risk of wasting a lot of time and energy.
Personally, I have a lot of fun doing things the hard, manual way when I'm working on a hobby project. But when it's time to get work done, I choose the efficient abstractions and tools that let me focus on the core work without wasting time.
If you have a CNC system, then I'd say you should absolutely be using that in preference to other things. But you should also come to as deep an understanding of the CNC system as possible, so as to be able to grasp what you can do with it, and how you could use it to carry out specific tasks that might arise in the course of your work.
What you should not do, IMO, is to say "Oh, I have a CNC system, but I never interact with it directly, I let CNC-Foo on my computer control it whenever I need to do anything".
> What you should not do, IMO, is to say "Oh, I have a CNC system, but I never interact with it directly, I let CNC-Foo on my computer control it whenever I need to do anything".
I have experience with CNC machines, and I would still not recommend that anyone try to control their CNC machine by typing in G-Code directly. Understanding G-code can be helpful, but you really need to learn to use the higher-level tools properly to get anything done.
HN is really strange on the topic of knowing the underlying details of complex systems:
- When the topic of CS interviews comes up, the comment section is irate that companies are testing for low-level knowledge that isn't used in day to day programming tasks. Asking someone to reverse a linked list is blasphemy because we have libraries for that.
- When the topic of Git comes up, the comment section is insistent that the only possible way to use tools is to have intimate working knowledge of the underlying low-level concepts. Insisting that people know the ins and outs of every git command instead of using a GUI is the only acceptable option.
I suspect there's a lot of overlap between the two positions, which boils down to: "I do things a certain way and I'm convinced my way is the correct and only way."
1) I don't think it ever makes much sense to have a concept such as "HN is really strange on ...". This place is a collection of thousands of individuals, and you're going to see a wide range of opinions, many of them contradictory. Since not everyone comments on every article, you'll see patterns that are not reflections of a single position held by any particular person. If you see contradictions in things that I've written, I'd love to have them pointed out, but saying that "one set of people on HN say X and another set of people on HN say !X or X'" just doesn't really convey anything that seems actionable or even that interesting. "People have different opinions! News at 11!!"
2) After I posted the comment you're replying to, I was absolutely certain that the question of which level of control was appropriate would come up (since that's really what we're discussing in the context of git). It doesn't seem wise to carry the analogy too far, but for both systems there are obviously different levels that one can think about. I haven't used a CNC system, but I would imagine that there are very high-level control systems that maybe start from some sort of design data and take it from there, some mid-level control systems that let you specify what you need and then program the machine to create it, and low-level control that would be rarely used unless the higher-level tools just couldn't do what was needed.
If indeed this is an accurate description, then it seems to match the case with git quite well, and I'd still argue that even if you often/sometimes use the very high level control tool, you should understand and be aware of the possibilities of the lower level ones too.
> 1) I don't think it ever makes much sense to have a concept such as "HN is really strange on ...".
I enjoy HN, but I've also been here long enough to accept that HN is an echo chamber. Yes, we have downvotes and a variety of opinions, but the reality is that if you come into certain threads expressing certain unpopular opinions you're going to get hit with a wave of downvotes. Eventually you learn to just stop posting those opinions, which results in an echo chamber effect.
No social media site is free of echo chamber effects. HN is not an exception.
> I haven't used a CNC system, but...
Friendly suggestion: If you're not familiar with how something works, maybe it's not the best subject for an analogy?
"Haven't used" isn't the same as "have never been in a shop with a CNC system installed, never worked around people using them, and have no close friends who use them regularly".
As for the echo chamber, you won't be surprised that I disagree with you on this point.
It would be perfectly reasonable for someone to just say "no, I'm not going to imagine my job designing software as analogous to cabinetry," because argument from analogy shifts some of the burden of proof to the analogy itself, and the analogy must be defensible. In this case, you're drawing an equivalence between OP's 100 complex things and 100 cabinet components that must be assembled, yet also drawing an equivalence between the 101st thing and a central tool of the cabinet maker's trade. This is an internal inconsistency. You must either acknowledge the 100 things as 100 tools of cabinetry, each more important than the 101st, or you must regard the 101st thing as yet another fixture to be added to the cabinet. A domain expert is telling you that version control is a peripheral concern in their trade, the 101st tool, and you're responding with an idealism of mastery that stems from notions of integrity and identity and establishes no reasonable boundaries. You're welcome to this ethic and idealism, it's not a bad thing to have, but you're second guessing someone else's mastery based on arbitrary reference to your own mastery. I might as well be chiding you for looking up man pages without understanding the command-line capabilities of `troff`.
> A domain expert is telling you that version control is a peripheral concern in their trade
1) I have no way of knowing if they're a domain expert.
2) With 35 years of software development under my belt, and a half dozen version control systems too, I'd consider myself likely to be as much of a domain expert as the OP, and I'm saying the opposite.
3) I am not a master woodworker. I have built cabinets and many other things. Philosophically, I consider the disjunction between our culture's attitude towards physical tools (where there's an assumption that one will have to take time to learn how to use the tool before actually being able to make anything good with it) and digital tools (where there's an assumption that apprenticeship should be kept to a minimum, preferably zero) extremely interesting.
4) When I'm working on a task (I also build very specialized, highly complex software), the distinction between the cognitive load caused by the task and the cognitive load caused by the tools I have available to work with is not very big. It doesn't make much difference whether the "101st" thing concerns a tool or a component; it's just another part of the big picture that I have to struggle to keep in my head as I move forward.
This is an excellent reply, more interesting than my comment, and I certainly cannot gainsay you. However, '100' might as well be 'n'. Your reply suggests that there is a struggle to maintain contextual awareness of 'n'. In this model, for everyone, there will be an n+1, a necessary factor where the diminishing returns of mastery do not justify the cognitive load. I do not think the heart of your dispute truly lies so much his model of peripheral necessities as with his ranking of git as an n+1 tool.
Now I'm curious: why? Not that I don't believe you, but I'm wondering what is the reason? It's not intuitively clear why gloves would make it more dangerous, even knowing that it is.
If a glove comes into contact with a spinning blade, there is a good chance it will pull the rest of the glove, and the hand within it, right into the path of the blade.
Speaking as a person who lost half a thumb to a table saw ...
the difference in your analogy emerges when someone wants/needs you to carve out something more intricate, and you say "it's not my job to be able to do this without gloves".
To the OP's point, using git is not the act of coding or, in this example, of making cabinets. I think it would be more akin to the workflow a cabinet maker uses, and I think people would be less likely to look down on one that has an idiosyncratic workflow. It may not be the most optimal workflow, but they can get the job done well following their methods.
I don't subscribe to that thinking and I try to optimize and improve wherever I can, but developers are an idiosyncratic bunch. You can lead them to water or show them how to fish but lots of good developers just like to do things their own way.
It's about tools. Git (and revision/version control more generally) is a tool available for you to use when developing software. The questions for me are: do you use the tool? do you understand the tool?
I think it's fine if someone says "I don't use that tool". I personally think that they are making their life harder and their software probably worse, but that seems like a valid choice. What I have a difficult time with is someone who says "I use that tool, but I limit my understanding of it to be as little as I can, because understanding my tools is not part of my job". That's what I felt the OP was doing.
Surely some tools have to be more important than others, though, and not knowing some tools impacts some jobs less. Is git a saw for woodworking? Or is it a tool chest that has a drawer for a saw? If you need to use a thousand different tools, the tool chest becomes more important (the analogy doesn't map well there, but it seems similar to working on a bunch of repos that have a bunch of branches versus one repo with one branch). If you only need a handful, it seems like it would impact your job less.
There are more things that reasoning applies to than you can learn, so it’s not really helpful in deciding between all the things. Everything depends on your goals and there’s no one size fits all answer.
I know how to file documents in filing cabinets. I don't know how the bearing mechanism for sliding out the drawers work, or how to replace the user-serviceable locks – but I know that there's a mechanism, and that the locks are user-serviceable, and I could look up the relevant information if I ever needed to do anything with that.
You don't need much knowledge of your version-control system in your head to be an effective programmer.
Assuming a binary search over version history is a valid way of finding the bug. If you have a better idea of the probability distribution (e.g. it might've started being buggy any time in the past year, but I'm fairly sure it happened about nine months ago when the formatting of the report broke a bit) you can do a better job manually than `git bisect` – and if the bug is actually multiple bugs, or other things changed that affected the bug's presence or absence (making it appear or disappear in non-trivial patterns between versions), `git bisect` will mislead you.
`git bisect` is pretty cool, but “knowing your tools” isn't the same thing as “memorising the man pages”.
if you are confident that it happened about nine months ago, then just start the bisect bounded by age-appropriate commits. just a case of knowing your tools ;)
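A minimal sketch of what that looks like (the date and branch name are just placeholders):

    # bound the search instead of letting bisect scan the whole history
    git bisect start HEAD $(git rev-list -1 --before="9 months ago" main)
    # build/test, then mark each step until bisect names the culprit
    git bisect good        # or: git bisect bad
    git bisect reset       # return to where you started when done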
yes, if the bug is a complex interaction of several different code changes, and comes and goes, then a binary search will not find it. however, it may be the fastest way to realize that it is this sort of bug. recognizing patterns in what happens when you use git bisect is another aspect of knowing your tools, and in this case, will help you move more rapidly toward a more appropriate approach.
I interpreted what GP said as "the version control system is the place I store my files, so it's not just a tool, but it's an essential and necessary environment, the space where I organize my stuff"
That's more what I was going for. It's also part of the background, as opposed to something I deliberately interact with, so I don't use it like a tool in my day-to-day.
All things I know are possible. I understand Git's architecture, just like I know what an inode is. I could re-implement my own (bad) version control system, if I wanted to, just like I could make my own (bad) filesystem.
But that doesn't mean I need to have memorised the CLI invocations. There's a button in Git GUI for all the things I need to do (including the things you've listed), and in the rare case where I need to do something else (e.g. submodules), I can look it up, just as I do in the rare case where I need to make a symbolic link.
All the visual git tools I've used suck and wound up eventually corrupting the repo. Also, I've noticed that all of my colleagues who learned git using these visual tools didn't actually learn git, and have no idea how to do anything other than add/commit/push.
I say "just rebase your branch" and I can see the panic grow in their eyes.
> I say "just rebase your branch" and I can see the panic grow in their eyes.
The irony of that is that resolving conflicts in a complicated multi-commit rebase is much more easily/efficiently done in a good GUI than on the command line. Not all GUIs support it though (I think SourceTree gives up if there's a conflict), in fact it's a bit of an acid test for a Git GUI. The Jetbrains IDEs (PyCharm, CLion, IDEA, etc.) work very well. (So does TortoiseHg but obviously not for Git!)
Magit[1] is also excellent. And like all good git tools it exposes a log of what it did. Unfortunately it doesn't work so great with megarepos, but I understand that's being worked on. One area where this tooling is vastly superior to the command line is adding hunks instead of files to a commit, making it much easier to have a sensible history while allowing programming in a more natural style.
My experience also supports this. I think resolving complicated merge conflicts is the best argument for using an IDE with Git. My experience of having to maintain a fork of a code base where there was frequently a lot of conflicts showed that I would reach for IntelliJ to resolve complicated conflicts even if it wasn’t Java files. The 3 window visualization they use is just great for seeing conflicts in large files that span hundreds of lines of code.
I use the terminal commands for everything else though.
> The irony of that is that resolving conflicts in a complicated multi-commit rebase is much more easily/efficiently done in a good GUI than on the command line.
This is an argument for a good GUI diff/merge tool, which is not necessarily the same thing as an argument for a GUI Git client; I think the two uses are being conflated a lot throughout this comment thread. I have Kaleidoscope set as my default mergetool and difftool, but I'm still working with the git CLI nearly all the time. (I use Gitup on the occasions I want to stage changes to individual sets of lines in the same file as multiple commits, and exceedingly rarely if I'm trying to do something frightening with a local as-yet-unshared feature branch. There are times being able to undo with Cmd-Z is really helpful.)
You're right that there are two different parts to this, but I think they both benefit from graphical tools.
I'm talking about a VCS tool telling me that the rebase has got to commit n of m and a conflict has been encountered, which files have had conflicts resolved (either automatically or manually by me so far in this tool), and which other files have conflicts outstanding. Then I can invoke the three-way merge tool from there (which is indeed hugely better than a command line tool). Something like this (except this is for Hg):
SourceTree doesn’t give up. It tells you that you have conflicts to resolve. Once you do, “continue rebase” does what it says, either until the rebase completes or until it hits another conflict.
Does it show a GUI, analogous to [1], or tell me to sort it out myself on the command line? If it's the latter then that's very much giving up. On the other hand, it's possible I'm mixing it up with some other GUI I tried.
If a rebase encounters conflicts, it tells you as much, puts every piece of the commit that it could auto-merge into the index, and adds conflict markers to what it couldn’t in the working tree. Then it’s up to you to merge them in an IDE or, optionally (I think), choose yours/theirs for each hunk. Once everything is resolved and moved to the index, “continue rebase” goes on to the next commit.
I think it’s had a rebase GUI since I first started using it in 2013. It may not have had one back then for interactive rebase, or else it didn’t work for me, but it does now.
It’s not as nice as that one. I described it in my other comment. The workflow is okay, but that visual context of to/from branches in your screenshot is something I wish SourceTree had.
I actually agree, using a nice GUI to handle your merges is a godsend. If you know how to use it properly, and still know how to do more advanced operations as well.
For these people, I think a manual backup is what they actually want. “If I fuck up so bad I have to roll back.” I don’t think it’s wrong necessarily either, and GitHub actually encourages this behavior with the easy Web file uploads. Many repositories are now 99% automatically created commits by drag and drop.
I think these people would be better served by a backup system where you can pin snapshots. They don’t really want to use VCSs like they’re meant to be.
How are version control systems meant to be used, if not as a history of the work? If not as a remote backup of work in progress?
I get the whole “we should have a neat history of feature commits” argument, but that’s really only one facet of a good source control system.
The fact that these goals appear to conflict shows me there’s some sort of lack in git. For all its (many, many) problems, uber-complex source control system ClearCase at least allowed you to specify a view, so you could see both types of information depending on your use-case.
They’re not really in conflict at all. The workflow I’ve been using for ~11 years is:
- Make a branch for my work.
- Commit early, often and with often meaningless commit messages like “WIP” or “try x” or “nope x doesn’t work, do y instead” - in other words, what the work was.
- Push this branch to a remote (either personal or shared depending on policy) largely to synchronise between machines, but also as a backup.
- When ready to integrate, interactively rebase into a set of cohesive units which are independently buildable and have detailed commit messages which explain why the work was done, not what the work was.
- Push _this_ as a pull request, Gerrit change set, or email patch, depending on policy.
This approach gives you the best of both worlds: fast, easy backup and easy unwind when doing work, and a clean history for the benefit of future developers on the project.
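A rough sketch of that flow in plain commands (the branch and remote names are only examples):

    git switch -c wip/feature-x            # branch for the work
    git commit -am "WIP: try x"            # commit early and often
    git push -u origin wip/feature-x       # backup / sync between machines
    git rebase -i origin/main              # when ready, squash into cohesive, buildable commits
    git push --force-with-lease            # update the branch backing the PR / change set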
You still lose your real commit history though, and I’ve found it useful to have that in the past. A dead end or experimental avenue that turned out not to have use at the time turns out to be a real timesaver later.
In fact I’ve found that far more useful than a well curated history.
Having both would be a good thing. Having both in a not-ridiculously overcomplex system even better… I wonder if there even is a sweet spot.
I agree that having both would be preferable - and to some extent GitHub gives you that (via the "squash and merge" button). Unfortunately it doesn't appear that workflow is usable in a lot of cases since the commit message cannot be reviewed independently in this model, unlike in Gerrit.
You can always keep your working branches on a personal remote, or interesting sets of changes in gists, however.
I will agree that visually seeing the tree is such a useful tool to have access to. I know that's not the true desire of your use case, but in case it's useful, I will add what is obviously the best git alias, 'git lg': https://coderwall.com/p/euwpig/a-better-git-log
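For anyone who doesn't want to click through, a simplified version of that kind of alias (not necessarily identical to the one at the link) is:

    git config --global alias.lg "log --graph --oneline --decorate"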
git lg --all is probably my most used command in terminals, and I think it gives me a better view of how projects are flowing on the whole.
No apparently that means you don’t know what you’re doing, and you’ll corrupt the repo, and you’re not a real software developer so we’ll come to your desk and rip up your I’m A Very Serious Professional card.
It's funny, because most devs that are pro-CLI for these kinds of things are actually unable to operate vi, or to install any server/container without installing a complete desktop environment.
And that's totally valid. gitk is my main experience with visual git tools and using it gives a ton of information that is not only visually nice to look at, but also pretty easy to navigate. But gitk isn't always the most available and sometimes I just need a quick look at the history. They're all just tools in the toolbox, but even knowing the tool exists is a first step.
and the ultimate option for git lg is --reflog. Seeing all branches, even the old ones that do not exist anymore, is an eye-opening event in discovering the true nature of git: it never changes a commit, ever.
I exclusively use git via the CLI because GUIs are too confusing for me to keep track of what is going on. I also like to use tig to browse the commit history like above.
Seriously, seeing the commit tree laid out with colored lines is essential to me. A glance at the interface lets me know exactly what state the repository is in. Just like you say, it means fewer mistakes. Which is precisely one of the benefits of good UX.
Going from SourceTree back to the command line would be a huge step backwards for me. I still use the command line sometimes because there's advanced stuff SourceTree can't do. But for most of my basic everyday operations, the command line is just inviting me to make little accidental mistakes every so often because the state of the repository and branches isn't obvious at a glance.
I only see upside to using an IDE, zero downside. (I've never had SourceTree "corrupt" my repository, and all its commands do exactly what I expect -- it's just running the git commands I'd be typing out anyways.)
Yeah, viewing a changeset and staging only some files or just parts of files is really important to my workflow. Sometimes I leave myself comments or skip tests locally and I have no intention of committing those changes. Using a tool like sourcetree to review, add, and commit only the lines I want is very helpful and saves me time.
I do use the command line for everything else, though. Well except interactive rebasing, I suppose. I pop back in to vscode for that. But even that gets started in the terminal.
This is also part of my workflow, except that I use git commit -p instead. It seems weird to use a separate tool for just that part when git's built-in workflow is perfectly capable.
Probably SourceTree makes this workflow better (git add -p is a forward-only workflow, so you can't glance up and down a file, for example), but it's so ingrained in my fingers that I'd have a hard time switching things up.
Just in case you didn't know, there is git add -p (or --patch) which does precisely what you want: It splits your changes into small parts ("hunks") and allows you to specify whether to stage every hunk. I don't know your exact workflow in Sourcetree, but most likely git add -p is the CLI equivalent of the Sourcetree interaction you describe.
Bonus commands: -p/--patch also works for git stash (allowing you to stash only certain changes) and for git checkout (allowing you to discard only certain changes). Since I'm an Old School Git user I actually don't really know the restore/switch commands, but apparently git restore supports -p/--patch as well.
Another neat git add flag is -u/--update: The manpage is a little confusing on this flag, but essentially it makes git add ignore untracked files (it will only stage files that are already part of the repository). If you're like me, you have tons of files laying around in the project folder (e.g. benchmark results or local test input files) that you don't want to commit and yet don't want to add to the .gitignore file (since the files are really just temporary files, other users have no use for the gitignore entries). By using git add -u, you prevent adding them by mistake in a command like git add src/ and realizing a few weeks later that you accidentally added 10 MB of cat pictures to a bugfix commit. If you can identify with this story then git add -u is made for you.
Another bonus fact: If the temporary testing files mentioned in the last paragraph ever reach the status of permanent testing files, and they're still only useful to you personally (so adding them to gitignore doesn't make sense), Git has a little-known feature: You can add local ignore patterns (same syntax as gitignore) to .git/info/exclude (go ahead and check, this file most likely already exists in your Git repository). These patterns are not part of the repository itself (you don't commit them); rather, they act as local configuration. The idea is that you put exclude patterns that are valid for every user of the project (e.g. target/ for a Rust project) in .gitignore, and local exclude patterns for your IDE/editor configuration (.vscode/, .idea/ and friends) and similar files in .git/info/exclude.
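For instance, a local exclude file might look like this (the file names are made up):

    # .git/info/exclude -- local-only ignore patterns, never committed
    bench-results/
    scratch-notes.md
    *.local.json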
In conclusion, I only ever run a) git add -u $file (when I want to add all changes made to an existing file), b) git add -p $file (when I want to add only certain changes made to an existing file), or c) git add $new_file (when I consciously want to add a previously untracked file).
These three commands are all you need if you're in the camp of Git users that at least try to make every commit a good package (single, reasonably-scoped and atomic change). If you're in the git commit -a/"squash all intermediate commits into one single monstrous commit" camp then.. have fun with your cat pictures, I guess.
I hope that was at least a little bit helpful to someone.
May I ask why you drop into VS Code for interactive rebasing? Is it about resolving the merge conflicts, or editing the rebase command list?
I'm just going to drop two more nice Git features here, but I'll stop myself now before I write too much: git commit --fixup= together with git rebase --autosquash, and git rerere (not a typo).
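In their shortest form, roughly (the commit id and branch name are placeholders):

    git commit --fixup=abc1234                 # record a fix destined for commit abc1234
    git rebase -i --autosquash main            # the rebase reorders and squashes the fixup for you
    git config --global rerere.enabled true    # reuse recorded conflict resolutions in future rebases/merges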
I completely agree. It always baffles me how something so simple (yes, simple!) as version control can end up as the abomination that is git. After controlling for popularity, you don’t see nearly as many posts explaining subversion in detail, and subversion being centralized is neither the only nor the biggest reason for that.
No, version control is not simple, never was. git was the first vcs that didn't suck because it was the first tool that understood the nature of the problem. All others were ass backward and instead of solving the problem stood in the way and actively made things worse.
We are all using git only because Linus wrote it. The cargo cult is real and very much alive in our industry, I think precisely because we are all here to write specialist software. Too busy in our domain to worry about version control nuances so we just go with what is popular and don't think about it too much. It's not just version control, it's libraries, frameworks, languages, all of it. If it's not popular it's doomed to failure.
You assume that the creator of one of the.. if not the largest open source projects on the planet might not understand the issue at hand better than literally anyone?
That seems arrogant. I imagine more thought & care went into git than you can fathom.
If "the issue at hand" is running a globe-spanning open-source project with hundreds of contributors, a bunch of targets, and a code base that goes back 30 years, sure, I'm happy to listen to Torvalds.
if "the issue at hand" is the more typical modern work environment for devleopers, then no, I don't think he has much special insight. Indeed, the fact that he's wrangling something so large and important means he's unlikely to have the time and attention to devote a lot of thought to how best to serve a pretty different group of people.
An obvious consequence of this is git's terrible interface. Its premise is "if you totally understand what Git is up to under the hood, the interface is great!" Fine for a small in-group of kernel developers whose lives are distributed patch sets, but terrible for the average developer. It has taken 15 years to get some reasonably named commands for common things like "restore a file". That's a great sign that we shouldn't listen to Torvalds on topics outside the realm of his admittedly impressive expertise.
This isn't about "understanding the issue at hand," this is about UX development.
Linux is, very intentionally, a piece of software which does not have "easy to understand for non-experts" in its design goals. You generally interface with Linux through system call wrappers provided by a libc (or another specialist library for other interfaces like libfuse or libnetfilter or whatever), not directly. Linus is, quite obviously, good at many things; it's silly to assume that means he's an expert at everything.
Linus developed Git as a low-level tool. Linus put a lot of thought and care into getting the implementation details right, but intentionally did not build an easy-to-use interface. The Git command line that people use today ultimately derives from Cogito, a toolkit written by someone not Linus that sat on top of Git. Eventually Git (which Linus had long since handed off to someone else) adopted most of the conventions of Cogito and created a "porcelain"/"plumbing" split. If you think it was arrogant to develop Cogito, I suspect you disagree with Linus.
Finally, Linus develops Linux with a very particular model, with e-mail-based patch reviews, merges from subsystem maintainers, etc. Most projects (even most open-source projects, but certainly almost all proprietary projects) do not work this way. The Linux kernel does not use a GitHub-style workflow for development, which is by far the most common way people outside the kernel community use Git. It may well be the case that a lot of thought was put into making Git work really well for Linus's use case but it does not match how other people do development.
The difference between the kernel development model, and some putative "typical other" development model merely changes which git commands/tools get used to handle getting stuff into a particular branch on the canonical repo.
It has no impact on the underlying concepts that make git scale well, cover 100% local and 100% remote cases equally well, and provide deep under-the-hood concepts that can be deployed in exceptional circumstances.
Correct, it has no impact on the internals of Git, which was my point. Git's internals are used by both the kernel development workflow and the GitHub-style workflow, and Linus designed it well.
The conversation is about the user experience of using Git - its CLI design, etc. It is entirely about "merely" what commands are being used. TFA is about new Git commands, not about any changed internals. Or, in the case of Sourcetree, it's about not using any of "Linus's" interface to Git (which, again, wasn't written by Linus) but interacting with the same Git internals.
I think the conversation is about whether or not the Sourcetree interface to git can be useful when carrying out tasks that involve, for example, filtering the reflog.
If you have never done this before, and have only ever used git via the ST interface, switching to the CLI to get this done is going to be quite a shock. Maybe that's OK because realistically such tasks should be rare. But sometimes they are a critical task in development, and finding that the entire dev team is completely intimidated by it can be an issue.
Isn't that the "appeal to authority" fallacy? Mercurial demonstrates that you can have a VCS with fewer and less painful gotchas, and certainly the grottiness of the git submodule mechanics, for example, doesn't show as much thought and care as one would hope.
There are at least two different commercial hosting sites that implement a pull request model. If you want ephemeral branches like in git, hg branches are indeed not the right choice. But that doesn't mean that they don't have their place. Try topics or bookmarks if you want git-ish behavior. I have absolutely no clue what you mean by trunk-based development...
Funny, I rarely use branches (or topics) in Mercurial unless it is a complicated long-term project. Modern "everything on a branch" was invented by git users because the UI forces naming things. Mercurial allows sharing code and still linearizing history with rebase safely, so much less need for merges. I've always found pull requests to be only useful for the passing-by contribution. They are a pretty awful interface for anything else driven by GitHub internals more than anything else...
> Pull Requests were never going to happen with Mercurial
TBF, AIUI they're not, strictly speaking, happening with git either: They're an external addition, invented by GitHub or some such, and not actually part of git itself.
How does that work; is it something technical built into it? Or do you mean just because after that one knows that it's incorporated upstream, so not needed as a separate entity any more? Because that would also go for a "pull request" by, say, e-mail or whatever.
I’d add another reason to that: we’re using Git because BitKeeper wasn’t free (as in beer) at the time for general purpose use. Had it been, we’d all be using BitKeeper instead.
Probably not. The free (as in speech) part of git is what made it usable for entities like Google, Microsoft, GitHub etc.
If git had been released with the same license model as BitKeeper it NEVER would have taken off.
True, a lot of people use git without knowing how to use it, mainly because they do everything with a GUI and never learned how it works, and if something strange happens that can't be solved with the GUI they just delete the repo and clone it again.
At that point I say to these people, why do you even bother with git? Just use what I call ".zip versioning", that is, archive the source code and call it "project-vX.Y.Z.zip" and put it on the company fileserver.
Or better, learn how to use git, and that means learning the command line and throwing out every GUI (well, not all of them; for example I do commit and push/pull with VSCode, but when I have to do serious stuff like merging I do it with the command line). In my experience GUIs always cause problems that corrupt the history of the repository.
But he didn't write it by himself. He started writing it in bash, and then (because he is Linus Torvalds) some very talented Linux hackers jumped in to help him. Just like with the kernel, really. It was a tool written by extremely competent Linux hackers for Linux hackers, and that's why it is so successful.
I expect other programmers to learn and master tools they use 50 times per day. You should be getting more efficient, more productive, and make fewer mistakes with languages, frameworks and tools you use daily. If you prefer SourceTree, so be it, but I expect you to use it efficiently and not make mistakes that other people using git, zsh and ohmyzsh (which contains hundreds of handy shortcuts) wouldn't make.
For me I’ve used visual git tools in the past and it ends up doing something unintended/unexpected. So now I only use the terminal commands. Perhaps the old git tools I’ve used have improved significantly though.
> My job isn’t to use git, it’s to write specialist software.
This is true on so many other levels too. My job isn't to be an AWS expert, VIM master, Visual Studio ninja, Unix professor, et al.
My job is to make the customer happy. That is it. If the customer is happy, my project managers are happy, the executives are happy, the investors are happy. When all of your bosses are happy, you can get away with absolute murder. No one gives you shit about anything. Production went down because you fucked up? No big deal - that was like the first time in 18 months we had any problems, and the customer can't even see these things through all the magical features they get to play with day-to-day. Need to take the entire afternoon to play Overwatch because [arbitrary fuck you reason abc]? No one cares as long as you didn't have a scheduled meeting. In this realm, your mind is free to explore side projects without fear of reproach or guilt-trip. Tasks are executed with confidence and calm. Innovations are more frequent and valuable. People are actually relaxing in their time off and enjoy working for their employer.
When the customer is pissed off, it is like entering into Doom Eternal as a non-player character. At every turn you begin to anticipate a heated conversation about missed target XYZ and incident ABC. Each ding of your Outlook bumps your blood pressure by 20-30% before you even see the subject line. Your executives start taking damage from your customer's executives. Investors begin executing difficult queries regarding long-term viability. No one is sleeping anymore. Side projects? Are you fucking kidding me? Not in this hell.
So, when someone in my organization starts giving me the run-around about [pedantic greybeard doctrine which adds 10x overhead to a process], and also has no business value to show for said run-around, I begin to shut things down pretty quickly. If you want to play nuclear release authorization simulator every time you need to check in source code, please do this on your own time. Even the most elite hacker rockstars like to use GUI tools so they can see what the fuck is going on without making their eyes bleed every 10-15 minutes due to terminal character display restrictions.
So, putting aside your argument, which I completely disagree with but where a lot of people have already voiced my concerns:
> The guy who knows every command of git backwards is welcome to apply for a job managing a git repo or something if such a thing exists?
Yes, this is the job of a maintainer in fact. They exist in a variety of organisations but maybe not enough. The best example is the linux kernel. Developers are expected to maintain their own local tree. When it comes to contributing code to the kernel, the patches are sent in a standardised manner to a mailing list and a maintainer then handles dealing with branches, rebases and merges. This means that developers don't need to know any more git than they really want to learn, aside from how to use git-format-patch and git-send-email which are really quite simple tools with an incredibly vast number of tutorials out there explaining them.
This means that people who insist that it's "not their job" to learn git can achieve the requirements of "patches which build at every step and contain isolated step by step changes" using a GUI or doing something really stupid like copying the code aside, deleting and re-cloning the repository and then pasting and committing each step. It also means that people who actually know how to use git can get the job done in a fraction of the time.
It also means that a carpenter^Wdeveloper's insistence to not learn how to use a claw hammer^W^W^Wgit will not affect their fellow coworkers/cocontributors.
Absolutely. I can’t wait until something with better ux comes along and gets enough traction to make git a distant memory. I do not want to know the detailed inner workings of my VCS data model or 100 incongruent commands to make it work.
> My job isn’t to use git, it’s to write specialist software.
Part of the job is to know and understand the tools that you need to use in order to perform the duties of that job. Saying that it isn't your job to use git is like a surgeon saying it's not their job to learn how to tie sutures when closing up the surgical site after completing the operation.
And then optionally adding whatever else, `--all` most frequently. (Obviously not writing it all out every time - with git config aliases, and actually `gitl` as a shell alias for that even. That's probably up there in my top.. 5? shell commands.)
GUIs work too of course. Just pointing out you don't have to abandon the CLI for a tree. There's fancier third-party tools than native git log that are still CLI even.
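For the record, that kind of setup is tiny (the alias names here are just what I happen to use):

    git config --global alias.l "log --graph --oneline --decorate"
    # and in ~/.bashrc or ~/.zshrc:
    alias gitl='git l'
    # then: gitl --all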
You don't even need git at that point. I don't understand why you'd use git if you want a GUI... at that point, put the source code on the company fileserver and adopt "zip versioning", i.e. when you do a new release create an archive named "project-X.Y.Z.zip" and archive it on the fileserver. If you need to work on another branch, copy the source code directory. Why bother at this point?
I don't understand people that want to use git but want to do so with a GUI that abstracts away everything git was created to address, and who limit themselves to writing some code, committing and pushing. You are not gaining any benefit from using git this way; to me you are only wasting time.
If you choose to adopt git, you learn how to use it, and so you learn the commands (it's not that much effort). In my experience GUIs have always created problems, especially if someone in a team uses a GUI that creates junk in the repository (like 100 useless merge commits created automatically for things that shouldn't really have been a merge, which make git log unreadable...).
Also, when people that use a GUI have a problem the GUI doesn't know how to solve (because GUIs typically implement only the basic things, and if something goes wrong they can't help you), they just delete the whole repo and clone it again, or worse they try to fix it by pressing random buttons in the GUI and put the repo in a shitty state, so another coworker that knows how to use git has to waste his time cleaning up the crap that the fantastic git GUI made.
And I'm not saying that you should avoid GUIs 100%. I use the one in VSCode for doing simple things like creating commits, switching branches, and stuff like that. For advanced features like merge, cherry-pick, rebase, whatever, I use the CLI; I find it more practical.
I was gonna say cherry-pick is fairly frequent for me. Also I seem to learn new variations on common commands once in a blue moon. E.g. recently I learned about the --cherry option for git-log and it changed my life.
>> My job isn’t to use git, it’s to write specialist software.
It's like a plumber complaining that his job isn't driving with a car and that he wants customers to pick him up or wait for him until he comes on foot or via public transport.
A fairer analogy is that GP is saying, "I don't want to use a van to bring my stuff, but prefer a pickup truck." From the customer's POV, it makes no difference.
Having been through two Perforce -> git transitions of medium sized repos, with a few dozen people contributing, and being the person with the most git knowledge in the group who gets called in when people new to git mess things up: these GUI git clients are OK if you know what you are doing and what the consequences of checking various checkboxes are. They are not conducive to people learning how git works and how to use it to solve real world problems. The command line is a great way to learn git, and that fundamental understanding can then be used to reverse engineer what GUIs do under the hood.
At the end, it comes down to a personal preference of what you're most comfortable with. Some will prefer to use GUIs, others will prefer the command line.
Personally, I really enjoy using both the command line and the GitHub app. The GitHub app is super simple and straightforward; it's great for just committing (parts of) files. Anything more than that and I prefer using the command line for "direct control".
If you understand how git works, its data structure essentially, then it's far easier to do anything with an IDE/GUI instead of the CLI. They are more intuitive, shorten the work, and are less prone to mistakes.
For me, as a dentist, I used to use the dental drill, but now I trust my janitor to handle it, this way I make less mistakes myself
My job as a dentist isn't to use dental drill, it's to fix teeth in general. If I managed to fix a tooth and customer is happy, it doesn't matter whether I use drill myself or janitor does. Imagine having 100 complex things bouncing around your head and having to make that 101 when you forget the order of drill bits you need for a root canal.
The guy who knows dental drilling backwards is welcome to apply for a job managing dental drills or something if such a thing exists? But I could harp on the same way about his missing medical or braces-training skills.
Endless opportunity for analogies here. The gist is that you are delegating all sensitive versioning and version-history management operations to a 3rd party with extremely limited capabilities, a 3rd party you know nothing about (effectively a black box).
We thrive on abstractions, but unfortunately in the case of versioning, and git in particular, GUI apps are the wrong one.
The janitor knows nothing about dentistry. The git GUI knows plenty about git, and the devs make it their job to know it too. A janitor is not an abstraction, he's a liability. If a GUI abstraction helps me get the job done faster, I really don't see the problem. Plus almost every one of them fully states the commands being used to perform every action and has logs you can parse. I used to use Sublime Merge and now use Fork, and both have this.
You are contradicting yourself and making my point for me.
A janitor is an abstraction (you trust a janitor to operate a professional tool for you), and you are completely right: git GUIs (just like a janitor) are a liability.
Your analogy is awful because there literally is a better analogue: a dental assistant. Someone who actually knows dentistry and can help with some of the simpler tasks, leaving the dentist to concentrate on the actual surgery. You basically tried to fit a square peg in a round hole with your "analogy" to make your terrible point.
It's not really new anymore, but still way underused, so it could certainly do with more attention. Git's UI has become better, but they can't really remove the old UI and tutorials using those, so people keep sticking to that.
These new commands make a lot more sense, but the weird thing is they don’t bring anything else to the table.
They behave exactly like the existing ones, so much that anyone that really cared could have just aliased them.
So is there any incentive to switch for the people who went through the trauma of burning the old ones into their soul? (I often heard that knowing how it works internally makes git commands feel natural. I was lied to.)
They don't behave exactly like the old ones, because they do less - which means it's harder to accidentally do the wrong thing, and it's easier to guess what its arguments do.
If you know the old ones already and don't make mistakes, it's fine not to use these, really. I consciously got the new commands into my muscle memory in case I'm pair programming or someone less familiar with Git is looking along.
And yeah, knowing how Git works makes it a lot easier to understand, but it doesn't make the commands more natural. (Except perhaps knowing when you can use a commit id instead of a branch name.)
I had git-checkout syntax burned into my soul. I switched to switch/restore a year or so ago and am happy to be mostly unburdened of git-checkout. I say mostly because I still use it in scripts so they work with old git versions.
We're commenting on one :) But really there's not much to it - `git checkout` is the main wart of Git's UI given how overloaded it is, and with `git switch` and `git restore`, you'll have covered pretty much all of its regular use. And given how much more straightforward those are (i.e. they have more guessable arguments), there's not as much need for tutorials anymore.
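For anyone keeping score, the mapping is roughly this (restore only touches the working tree by default, so it's not a perfect one-to-one):

    git checkout <branch>              ->  git switch <branch>
    git checkout -b <branch>           ->  git switch -c <branch>
    git checkout -- <path>             ->  git restore <path>
    git checkout <commit> -- <path>    ->  git restore --source=<commit> <path>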
I can't reasonably start using such functionality until the PC with the oldest software that I still use has updated or I will have to deal with 2 ways of doing things all the time.
Currently, that's an ubuntu 18.04 machine at work and that doesn't have `git restore`, yet.
Analogy still applies, though. If a room is smelly to you, you aren't going to hang around. So the people who spend time there are going to be ones comfortable with the funk.
To be fair, if you mean that these changes are “bloat” and that git should be kept “simple” (not easy but non-complex), I don’t think this has much to do with Linus, because as far as I know he’s no longer involved in the development of git.
Git was written to meet the version control requirements of the Linux kernel. It works well for that project's needs which are an outlier for most development needs unless you are working at FAANG scale.
Git is a perfect match for our project's needs, which are so far from FAANG scale that it would be a joke to even compare them.
In our case, fully distributed development (no developers live or work within 1000 miles of each other), public repository, welcoming 3rd party PRs, strong use of topic branches, fully rebase-not-merge workflow. 600k lines of C++, 21 year history, on the order of 100 contributors, 2-3 core developers at any point in time.
There were other solutions, much less complex to deal with than git, but hey, they lacked the luxury of being a hard requirement for dealing with the Linux kernel and its related ecosystem.
I've used, over the years, RCS, SCCS, CVS, SVN, Bitkeeper and Perforce. I would not trade any of them for git at this point in time, primarily due to the way that git allows for both net-connected and disconnected development without any change in the workflow.
Kind of ironic to say "FAANG scale" here, since Google notoriously uses an enormous monorepo, and Facebook uses mercurial and has done considerable work to scale it.
(Myself, I've been heard complaining that git is overly complex, but the source code control systems that I used to use before git include Subversion, CVS and various Rational products and I have no desire to go back to any of them.)
I can't recommend pijul, but I can recommend keeping an eye on it. Pierre-Étienne Meunier is a ferociously smart guy, and he's convinced me that patches are the correct way to build a VCS, rather than snapshots.
I catch up on a forum a few times a year, asking myself if it's ready for me to switch a repo or two over and see how it goes. So far I have to answer no, but I'm hoping it's just a matter of time.
As a helpful aside, in my experience, there are only about a dozen or so Git commands you need to do ninety percent of your work. You don't need to become a git zen master right away.
1. git init: start a new repository
2. git status: check the current state of the working tree and index
3. git add -A: stage all changes, including new and deleted files
4. git commit -am "message": commit all changes to tracked files, with a message
5. git switch -c [branch name]: create a branch and switch to it
(git checkout -b will do the same thing)
6. git switch [branch name]: switch between named branches
7. git merge [branch]: merge the named branch into the current branch
8. git branch -D [branch name]: force-delete a branch, even if it hasn't been merged
9. git log --pretty=oneline: show the commit history, one commit per line (add --graph for a graph)
10. git push: send your commits to the remote
11. git clone [repo]: copy a project onto your local computer
I would add `git rebase -i`, because I usually develop on a local branch and rebase it onto the updated one. With git, things can get messy when you want to do something outside of the basic stuff. What I hate the most is resolving 3-way merges.
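For the curious, the usual shape of that (assuming the upstream branch is origin/main):

    git fetch origin
    git rebase -i origin/main    # replay and tidy up local commits on top of the updated branch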
I'm mildly amused that your set of commands can't actually commit anything other than a brand-new file!
I will admit that I'm a lazy git user, and do most of my commits with `git commit -a`, rather than `-am` since I do try and give a short paragraph explaining the reasoning behind whatever the title message claims is the purpose of the commit.
I do run `git diff` first to see what I've changed, and if the diff has unrelated changes in different files I'll usually break it up into separate commits.
Decent introductory list, though. It won't surprise you that I think diff should be learned immediately; whether or not you need rebase depends on the conventions of the codebase, and if someone can learn it later they should, it can get tricky.
I’m interested in how many people double-check their changes before committing, as I’ve done this since the days of SourceSafe, and a decent number of bugs I’ve seen came from people just not checking what they committed (some stray change got in that they never intended).
Yes! I find that if I don't comment my PRs, my coworkers avoid them. But then in commenting them, I find quite a few ... not always errors, but cleanups at least.
Merging rocks! Never had an issue with merging and it’s less work. I’d only rebase if there is a “story” I want to tell in the commit history, that would be otherwise lost. This is rare, probably if someone else did some major refactor or move around
It only makes a difference if everything is fast forwarded onto master. If branches are squash merged in (as is common), keeping those merged in branches up to date via merges or rebasing doesn't make much difference to others.
If you are chucking all your commits onto master - power to you. That's probably a good pattern (I prefer as much historical data as possible to a 'neat' commit history), but I never see it done, because usually master is linked to CI and the idea is that every commit to master should be a valid build.
Master or any branch you're aiming to integrate into.
Squashing everything is bad because it makes code review and backports harder. Maybe you don't care about backports, but if you care about code reviews...
Honestly, I just diff the whole thing from where they branched to the tip anyway. I rarely find the individual commits useful, as they usually reflect the process of discovery, with backtracks etc., and that’s a waste of time to review.
For example I could review a new class that they heavily refactored in the next commit.
I’ve not had an issue with code reviews or backporting. I suspect this is because our units of work are smaller than usual (no long feature branches).
But maybe you are onto something here and I’m missing out on a better way, but I’ve not experienced enough pain in squashing on merge to master to contemplate switching to a rebase workflow (which would mean convincing team members too).
If you want to learn Git from the inside out, I wrote a two-parter that explains it by focusing on the data structure Git uses:
Finally, _if_ you have an O'Reilly subscription, I am currently writing Head First Git (the first four chapters are in early release). If you are not familiar with the Head First series, it's a rather unique format that uses a lot of pictures to explain ideas, and traditionally the books move a lot slower than most technical books. Ideas/concepts are cemented using puzzles, quizzes, and crosswords.
For some time I was pretty annoyed that git was showing the new suggestions, but for some reason my git autocomplete did not know about them and thus couldn't tab-complete. (This was on Arch Linux with zsh using the grml zsh config.)
After a few months the autocomplete got updated as well and I could actually use the new interface without too much frustration.
Once upon a time, someone decided to overload `checkout` with a bunch of semi-related actions, apparently for convenience. The day these patches were accepted was a sad day.
It's great to see that someone took time to restore sanity. I'll switch to these commands now.
restore is a bad name for this action IMO. It is extra pandering to beginners, by targeting what the maintainers maybe believe is the most common use case.
That’s why checkout is such a great name.
Maybe ‘apply’ would be good.
My git-fu is often lacking, but I don’t blame git for being hard, I blame myself for not taking the time to be amazing at a tool basically everyone uses all day every day.
It’s silly not to know git. It might be the most used development tool in the world.
It might take five years for a change like this to show up on every box you log into, depending on how much your org relies on LTS releases to avoid randomly breaking stuff.
I remember learning `git checkout` checked out files from a specific branch and the default behavior was to checkout all the files of the current branch and it made sense except for one thing. If I check out one file from another branch, HEAD is still pointing to my current branch, if I check out all but one file from another branch it's the same, but if I check out all files, then HEAD points to the other branch and that seems inconsistent. I always thought there should be one command that switched branches and then checkout changed the files.
But in your terms, a branch is the files it contains. Or rather: Changes gathered in a commit can be changes to several files. A branch is just a chain of commits, each based on the previous ones. So what a branch "is", is a bunch of (bunches of) changes to one or more files. Therefore, "checking out a branch" is checking out (a bunch of changes to) one or more files. And now you want to "check out one file from another branch"... Is it really any wonder that doesn't make much sense?
I think you're doing yourself a disservice by even thinking in terms of "checking out a file" as separate from checking out a branch. The units git deals in are commits and branches, not really individual files as such. If you want to use it, better get used to thinking in the same units it does.
It's easy to become a periodic newcomer with service tools like Git, especially after an extensive dive into some complex new contexts. Simply speaking, I may forget the exact wording of Git (or other VCS) commands or switches.
However, what really sticks in my memory are the concepts Git implemented. This helps to refresh the operational knowledge rather quickly, well, notes help too.
In my view, 'check-out' as a concept is very much central to VCS as such. So, having a dedicated 'checkout' command which works on both commits/branches and files is quite reasonable - it keeps you in the same conceptual context.
I don't mind the specialized commands, such as 'switch' and 'restore'. But sure enough, I'll forget these wordings or maybe even mix them up with 'reset' or 'revert' (or 'undo'?), yet the 'checkout' command would likely present itself on my command line as it's directly tied to the concept.
I use restore quite a lot but it kind of terrifies me that it can erase any amount of uncommitted work if I type something wrong. "git restore ." is basically "delete everything that I don't have a backup of".
I see command line switches as an api, and it feels wrong to me to keep aggregating new things without a clean break.
I feel like these things should be versioned in some fashion. Perhaps come up with a clean slate api and enable it via calling git2 or git3 instead of this terrible mish-mash where new commands are constrained by previous bad decisions.
Of course this is not a novel idea, and people attempt this by making their own veneer with a different name… but this has to come from the top for it to be effective. People won’t en masse bother with git2 or git3 if it is not built into the mainline distribution.
If you’re a Git CLI user then I can highly recommend using SCM Breeze [1] which makes things a bit prettier and more convenient. It gives you a shorthand for various commands and improves default output formatting.
For example, ‘git status’ becomes ‘gs’ and it also gives you a numbered list of files. You can then substitute these numbers in place of file names in subsequent commands. No more manually typing names / paths.
One of the most common footgun mistakes I see in Git is people checking out a detached head state without knowing it, doing some work, then trying to push and immediately going to hell. I like that switch prevents this behavior by default. Also all git installations should come with a default shell prompt update so that it will show you immediately that you've gone into a detached head instead of a branch name. All git onboarding tutorials should include that instruction right after installing git.
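One way to get such a prompt, assuming bash and that git's contrib git-prompt.sh is available (the path varies by distro):

    # in ~/.bashrc
    source /usr/share/git/completion/git-prompt.sh   # location is distro-dependent
    PS1='\w$(__git_ps1 " (%s)")\$ '
    # shows the current branch name; a detached HEAD is rendered differently so it stands out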
I like how these new commands are basically the same as the Subversion commands "svn switch" and "svn revert" ("git revert" obviously wasn't a possibility for this command as it was already taken).
Obviously Git and Subversion are different beasts but, if you only needed the functionality provided by Subversion, I always found the Subversion commands better named and easier to use. I'm glad that Git is slowly re-inventing these commands that Subversion had from the beginning.
And I think arguably modifying working-directory files vs. modifying .git/HEAD are distinct things (though at this point I also find using checkout for everything to be pretty intuitive).
> Yea it never bothered me and I never understood why people kept complaining about it online. It seemed very superficial complaint.
Consistency may be superficial to you, but that's a personal preference that not everyone shares.
> The way you use git is by first understanding its model. If you understand the git model, everything makes sense
Doesn't follow. I think I've got at least an acceptable handle on git's model, but I can't see how that should mean I'd have to accept that wildly inconsistent command switches "make sense". Care to explain how one leads to the other?
switch being underloaded to checkout is the same as checkout being overloaded to switch.
In the checkout model, branches are just named aliases for their current hash. This seems trivial to me. Do we really need to use up another vocabulary spot in our heads for a command that is strictly a more-restricted checkout?
I used to use Git Legit: https://frostming.github.io/legit/
But the official addition of git switch conflicted with their git switch (which is better as it auto-stashes changes before switching). Legit also had the issue of not properly handling branches with "/" in them.
I love git even with all of its inconsistencies and complexity. May I recommend you an interactive tutorial (not mine)?
https://learngitbranching.js.org/
I've been using Git for a long time, but I still go back to Magit for most tasks, except very primitive ones... A good UI helps a lot, especially for things like selective discard or commit.
Yea, all of us who have used git and learned to live with its quirks (and probably have Stockholm Syndrome from it) can chuckle a bit, but much of the git cli is a dumpster fire from the "principle of least astonishment" user perspective. The affordances in some cases are terrible.
Ever drive a car that had every function ever on one stalk? The one that's the turn signal also has the cruise control, radio, lights, wipers, and blinker fluid. Much of git is like that – context-sensitive.
I can't really fault the programmers; they (he) had a certain mental model and translated that concretely to commands. Except that the internal abstractions of how git works are orthogonal to the tasks and use cases of users.
Well, I'm him :) I actually left Git. And it looks like nobody has picked it up since. So it's going to be forever "experimental" [1] until either someone starts doing something, or deletes the whole thing
[1] The experimental status is not because it's unstable but rather to allow us (or now, them) to change the UI design based on feedback if we got it wrong (again!).
git reset --hard HEAD, or if it doesn't want to do it, I check out another branch, delete the old one, check it out again, and then use the local history from the JetBrains IDE. That's my way to go. Probably not best practice, but fast and efficient.
I don’t know if it’s the article or me being a curmudgeon, but I’m not convinced to switch. I grok the checkout context-aware semantics. I’m not switching.
Most of those are plumbing. They're only needed if you're building tools on top of git (integration with IDEs or custom GUI, for example), or doing very advanced scripting/broken repository repair.
TL;DR because 'checkout' was found to be confusing, as of git 2.23 the switch command can switch to branches or commits (git switch master, git switch 0c38cf) and the restore command restores files (git restore pufferfish.txt).
I don't find switch or restore confusing? And also I never found checkout confusing, but that might only have been because I never realized it had multiple functions. I've been using it for both things (switching and restoring) for years, but I remember starting to read the article and thinking: how did I never notice this is the same command for two completely different things?! Maybe I used to know and now it's just not something I think about anymore? Either way, I can see how checkout can be seen as odd, but why switch or restore?
The two are added with some extra protections. Something that cannot be done with git-checkout without breaking scripts.
I believe git-checkout could silently overwrite data in one case (can't remember the details). And git-switch will stop you from moving the branch when you're in the middle of a rebase or other multi-command operation. It also tries to avoid entering detached HEAD mode by default.
It's definitely geared towards newcomers. But even I'm glad it catches me from doing stupid things from time to time.
The problem is that this breaks down when you specify both arguments. Doing `git checkout branch file` checks out file from branch to the working tree, but doesn't change HEAD.
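Which is exactly where the newer command is less surprising, e.g.:

    git checkout other-branch                       # moves HEAD to other-branch
    git checkout other-branch -- file.txt           # copies file.txt from other-branch; HEAD stays put
    git restore --source=other-branch file.txt      # roughly the same effect (working tree only by default), but clearly not a branch switch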
I wish git had some tree-traversal-related commands instead of all the cryptic commands that nobody understands without long tutorials. I've got a feeling that this is a case where going down to the metal is better than all the abstractions.
-b in checkout is short for "branch" while -c in switch is short for "create".
IMO the UI of git switch is much more intuitive, since the argument is always a branch and the default behavior is to switch to an existing branch. For slightly different behavior (like creating the branch first) there are flags.
So I think it's good that the flag for switch is a different one than for checkout, since the interface of git checkout was quite unintuitive IMO.
The complaint has been, for ages, that checkout got that wrong. As designers, these developers are improving: They got it more right on the second try.