A Better Model for Stacked Pull Requests (timothyandrew.dev)
110 points by tta on April 23, 2021 | 63 comments



While working on PyTorch, I also wrote an equivalent tool (funnily named nearly the same thing) for doing stack diffs (https://github.com/ezyang/ghstack/), which most of our team uses for more complicated PRs. The UX for working on commits is a bit different than this tool though; instead of pushing branches individually, you just run "ghstack" on a stack of commits and it will create a PR per commit in the chain (amending each commit so that its commit message records what PR it corresponds to). To update the PRs, just amend or interactive rebase the original commits. Personally, I find this a lot easier to handle than finagling tons of branches.
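A minimal sketch of that day-to-day loop, assuming ghstack is installed and configured (commit messages are illustrative):

  # build up a stack of commits on top of origin/master
  git commit -am "Refactor the serializer"
  git commit -am "Add a new op using the refactor"
  ghstack                    # creates/updates one PR per commit in the stack
  # to revise: amend or interactively rebase the commits, then resubmit
  git rebase -i origin/master
  ghstack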


This sounds pretty great, I wish I'd seen this a few months ago! One quick question: does this support adding commits to open PRs, or are you effectively locked in to one-commit-per-PR?


When you amend a commit and resubmit it to the PR, ghstack translates that into pushing a new commit (not force pushing) to the PR. So yes-ish.


ghstack is amazing. When I started working on PyTorch in GitHub I desperately missed the stacked diff workflow of Phabricator, and ghstack basically made me whole again :-).


In my opinion git-ps provides a much nicer patch workflow similar to Phabricator while still supporting GitHub, Bitbucket, or GitLab for peer reviews.

* https://github.com/uptech/git-ps
* a detailed walkthrough: https://upte.ch/blog/how-we-should-be-using-git/


I definitely miss hg's affordances like hg next/prev and restack. It doesn't look like git-ps lets you submit multiple patches for review, though; that was the raison d'être for ghstack.


We are working on support for requesting review of a series of patches right now. But generally you want to avoid that and get your individual patches reviewed as early as possible anyway.


Doesn't help when it can take several days to land! (e.g., waiting for CI.) But I will admit this is a problem that is somewhat specific to the nature of our project :)


I was wondering how your tool related to this article, and lo and behold I find you remarking on this very topic! :)


I don't like this workflow; I see it as needlessly complicated. If you really can't cherry-pick your big PR into simple branches to merge directly into master (which I have rarely seen over the years), create a longer lived branch and make small PRs to that. Then merge longer lived branch into master. You should never be going off and work on huge changes without keeping in sync with the base branch. Stacked PRs make all this more difficult.


Depends on the team you're on. "Small PRs" to some people could mean a few lines. But if those few lines are required for the next thing I do, on the same day, and I don't want to mess around with refactors or merge conflicts if somebody merges out of order, and I'm not going to get enough eyes on the PR quickly enough, this is exactly the strategy I've used before, minus the tools.

Just stack up some very, very small MRs in the order they depend on each other, and you can easily have people understand the changes across them without wasting time solving merge conflicts you already knew you'd otherwise have to deal with.


In that case I just use git stash. Stay on the branch you need and keep working; stash when you need to make changes to the open PR.
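A hedged sketch of that flow, staying on the PR branch the whole time:

  # review feedback arrives while follow-up work is in progress
  git stash                          # park the uncommitted follow-up work
  # ...make the requested change...
  git commit -am "Address review feedback"
  git push
  git stash pop                      # resume the follow-up work on top of it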


I have a couple of comments here.

> You should never be going off and work on huge changes without keeping in sync with the base branch.

I agree, but the stacked PR approach doesn't preclude this. If you periodically `autorebase` the stack against `origin/master` right after a `git fetch origin`, your stack stays in sync with `master`.
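Without the tool, that periodic re-sync is roughly the following (a hedged sketch; f1..f3 are illustrative branch names, and rebase normally skips the already-replayed commits by patch-id):

  git fetch origin
  git rebase origin/master f1
  git rebase f1 f2
  git rebase f2 f3
  git push --force-with-lease origin f1 f2 f3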

> create a longer lived branch and make small PRs to that. Then merge longer lived branch into master

Large features don't necessarily imply long-running feature branches, though. I personally prefer feature flags (or some other mechanism for conditional execution) on the base branch for features that are "open" for more than a week or so, say.

"Large" is also subjective. You could have changes that are large in terms of lines of code added, the number of services that are being touched, or the number of different+unrelated concerns involved.

As an example, this week I worked on a semi-greenfield Kafka service. Stacking PRs helped me keep track of the review process a lot more easily than "one big PR" would allow, and the stack was merged in 4-ish days. I structured the stack like this:

  - Add a `proto` file covering messages that this Kafka consumer reads off the incoming topic
  - Pull in a gradle library for protobuf code generation
  - Add in a simple Kafka consumer and producer
  - Add a `Processor` that works on incoming messages
  - Hook everything up; the service consumes off Kafka, "processes" the data, and produces back to Kafka

These PRs aren't similar in terms of size or complexity, but the various concerns that comprise this service are nicely split up.

> Stacked PRs make all this more difficult.

I'm in agreement. Stacked PRs involve more overhead, but there are times when this is a reasonable tradeoff.


I agree. I will regularly have a patch that lives for more than a month and I am rarely more than a couple days out of date. If the area you're working on is rarely touched, it isn't an issue. If you're constantly having merge conflicts, you might need to sync up with the team a bit more.


Horses for courses. These tools originated with linux kernel maintainers. I remember sending a patch to improve Andrew Morton's patch scripts, and he told me it was "slow". I was using a dumb algorithm for something, and so I asked him how many patches he had in his stack, and I don't remember the precise number, but I remember that it was in the thousands! My algorithm worked fine with 10 or 100 patches, but thousands? Hadn't occurred to me someone would be dealing with such a pile.

I haven't had a pile that big myself, but I have had a pile that was in the low 100s.


The other use case that a stack of patches workflow really fits is for maintainers of open source linux drivers (what I used to do). In that case, you're maintaining the driver in the upstream kernel, but you also need to get your code into Redhat's various kernels, SuSE's, Ubuntu's, and whoever else is the flavor of the day. All those guys have several kernels they support and they're all a little bit different. Porting over a pile of small patches is really the only sane way to go for that job, and for that, these kind of tools are a godsend.


> create a longer lived branch and make small PRs to that.

I follow this pattern a lot; we call it an epic branch and usually name it something like `epic/foobar`.


Yeah, that works as long as only one person does this, and they regularly merge into and out of master. Otherwise you still get all the communication and feedback problems long lived branches have.


The article claims that GitHub doesn't support this workflow, but it actually does a pretty good job natively. In particular, if PR2 targets PR1, merging PR1 into master will automatically change the base branch of PR2 to be master, and PR2 will then be able to merge cleanly. I suspect this might not work as cleanly if you rebase or squash as the merge method, since you're destroying the historical information that git would have used to figure out what's going on.


I thought that was what the article was talking about as well, but I think it's actually the other way around - merge the last feature branch (the one that depends on the others) into the second to last, and keep collapsing down until you've got a PR targeting master with all the changes. That way, each individual changeset can be reviewed more easily, but the whole thing is still one "unit" that will be merged into master at once. (I think that's the gist.)


Exactly - that’s what he’s arguing for - LIFO instead of FIFO because the first patch may actually be “broken” or incomplete without the subsequent.


I've never really understood the hate for a merge-based workflow for that exact reason. Having a true history is more important than having a clean history. (And "git log --first-parent" gives you a clean history, too.) Rebasing for a pull request means there is no way to tell that my local branch has been merged in, because the commit I made has disappeared into the aether.


I feel there’s a desire to organize everything in perfect little buckets based on one hard time dealing with a smorgasbord of various commits - but normally nobody even looks at them and it should be very easy to find the ones with substantial comments.


I prefer a situational policy:

* Branch can be reduced to a single, coherent commit: squash + rebase

* Branch can be reduced to a single commit plus one or two unrelated fixup commits (like formatting): rebase

* Multiple commits: merge

But that requires value judgement and can lead to discussions, so it's not ideal for many teams.

Of course, there are also a lot of companies that use Jira as their primary source of history, where most commits are just "Fix ABC-123". For those it really doesn't matter much what you do to the Git history anyway.


> I suspect this might not work as cleanly if you rebase or squash as the merge method

It does not. After PR1 is merged, PR2 gets retargeted onto the target branch of PR1, but all the commits it was based on now show up in PR2, sometimes causing conflicts. GitHub won't automagically rebase your PR2, which makes sense.


I never understood why GitHub insists on creating a new commit for "Rebase and merge" when the branch is already ahead of master.


I follow something very similar, though I do merges starting with F1 -> mainline, then rebase F2 on mainline and merge, then rebase F3 on mainline and merge.

This is because the earlier feature branches are usually more "done" and I'm still working on the later ones, so don't want to wait until they are all done to submit PRs and merge them from the tail.

I also squash my feature branches before posting PRs, so there's only a single commit, which avoids the rebase issue when branches get new commits. If there's PR feedback for an earlier branch, I'll make the changes, squash (or amend) the commit, then force push that branch. I'll then rebase the later branches to propagate the change. If you have rerere enabled, the later rebases are simple.
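A hedged sketch of one round of that forward flow, after F1's PR merges (branch names are illustrative; the last line enables the conflict-resolution reuse mentioned above):

  git checkout master && git pull           # F1 is now part of mainline
  git rebase master f2                      # replay the single squashed F2 commit
  git push --force-with-lease origin f2
  git config rerere.enabled true            # record/reuse conflict resolutions for later rebases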

This works well for me, usually I don't go past F2 or F3 in practice, which is small enough # of branches to manage all the rebasing by hand.

Overall this is a great doc/explanation, and I love the visuals.


I tend to squash at the point that the PR is merged in so that the incremental commits themselves can be referenced on the PR historically, but mainline stays clean with a single squashed commit added per PR.

I've tried to do squashing into intermediate branches before, but in those cases I'm not a fan of the enormous commit message I need to make when squashing the combined feature into mainline in order to keep things clear to contributors on what it actually adds


> I'm not a fan of the enormous commit message I need to make when squashing the combined feature into mainline

Agreed. I keep each feature/significant change in its own branch/PR as a single commit, and just create more branches/PRs for each separate commit.

Sometimes my features require multiple commits over time to deliver, they each get their own branch/PR and those commits are preserved in mainline.


This was really well written, and I loved the visualizations too - I wouldn't have grasped it nearly as quickly without them.

This very much reminds me of merge-based rebasing[1], aka psycho-rebasing[2] (which itself became part of git-extras[3]). Have you seen the concept before?

--

1: https://tech.people-doc.com/merging-the-right-way.html

2: https://tech.people-doc.com/psycho-rebasing.html

3: https://github.com/tj/git-extras/blob/1894f453f6967201117230...


Maybe I am just too dumb, but I always found pull requests hard to square with making small commits. Somehow PRs always ended up being on the bigger end. We even tried these stacked PRs, but it was quite confusing.

In the end we moved to Gerrit and never looked back. We are not super experienced yet but I'd say our commit quality went up by quite a bit.

This is a good introduction to the differences of stacked diffs vs PRs: https://jg.gg/2018/09/29/stacked-diffs-versus-pull-requests/


Having worked with both Gerrit and GitHub for a couple of years and operated a CI service for both, I have to say that Gerrit is far ahead of GitHub. However, I stick to GitHub for my own projects since I don't want to spend time hosting my own Gerrit.


This article is really well written, and does a great job of introducing the problem all the way through to a really simple solution.

I feel something like this might be good as a first class citizen in git (it feels like a common problem, and I don't see how it's solved by some sort of Github feature)


There is an even simpler approach that doesn't involve cherry-pick or rebase.

Instead of doing this:

  git checkout f2
  git rebase f1
  git push -fu origin f2
  git checkout f3
  git rebase f2
  git push -fu origin f3

Do this instead:

  git checkout f2
  git merge f1
  git push
  git checkout f3
  git merge f2
  git push

You would follow this same approach if someone merged a PR ahead of yours into master:

  git checkout master
  git pull
  git checkout f1
  git merge master
  git push

I stack PRs every day because of the points mentioned in the article, but haven't found a tool which will automatically roll up merges yet (haven't looked either).

(edit: formatting)


I dislike the “merge from top to bottom of the stack” flow proposed as the “standard way”.

I find that the bottom PRs get unwieldy; you can end up merging a multi-thousand line behemoth f1 containing f2, f3, etc. which is hard to verify as correct.

Instead, merge f1 including any changes f1’ you may have force-pushed to that branch. Then do a `git rebase <sha> f2 --onto master` where <sha> is the old f1 commit that you originally targeted your f2 PR against.

You’ll need to resolve merge conflicts f1..f1’ into your f2. You can now force push that f2 and retarget your f2 PR onto master.

Repeat for f3, f4, ..., though I generally find more than 2-3 stacked PRs to be risky; the further away from master you get, the more likely something is going to change drastically in a PR that you’re branching off.
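A hedged sketch of one round of that (branch names are illustrative; <old-f1-sha> is the commit the f2 PR originally targeted):

  git checkout master && git pull                   # f1 (possibly force-pushed as f1') has just merged
  git rebase --onto master <old-f1-sha> f2          # carry over only f2's own commits
  # resolve any f1..f1' conflicts, then:
  git push --force-with-lease origin f2
  # retarget the f2 PR onto master in the web UI; repeat for f3, f4, ...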

Anyway, this requires a bit more rebase-fu, but I think it leaves a better paper trail and is easier to work with if you do grok rebase fully.


--onto seems to be the right way to me. I'm not sure why the article proposes a sequence of cherry-picks. That sounds more frustrating.


I've considered building systems that would speculatively run builds using forward projections of all pending pull requests, as well as run multiple parallel scenarios on merge order + rebase such that merge conflicts are minimized. Maybe sprinkle in some monte carlo if the numbers get a little intense.

In practice, I feel it is much easier to assign work and organize your codebase so that logically-different business functions can be managed in isolation.

We rarely have merge conflicts these days, and we are also on 1 big monorepo. It mostly boils down to discipline and architecture.


This is pretty much done with https://zuul-ci.org/

You simply create a change and in the footer say

Depends-On: <pull request URL/gerrit URL/gitlab URL/etc>
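For example, a commit message carrying that footer might look like this (a hedged sketch; the URL is illustrative):

  Add retry handling to the consumer

  Requires the new client API from the linked change.

  Depends-On: https://github.com/example/project/pull/123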

Zuul will pull everything together, merge it all, and provide your tests with repos on disk that reflect the entire dependency chain. You write your test to install/run/etc. from these repos that have been prepared for you. The test might have only your change, or it might have 20 other changes; Zuul handles all this. This way you test everything together.

You don't need to spend resources speculatively running every possible combination; most changes probably don't affect each other (when you know they do, set up dependencies). But the trick is, you don't commit, Zuul does. It puts all approved changes in a queue, merges them, and runs the "gate" tests. It does this in a smart way to try and batch as much as possible. When they pass, it commits the change. When something fails -- either it won't merge with HEAD, or its tests fail against HEAD -- it gets rejected and you fix it up and go through the cycle again. It's impossible to commit anything broken to HEAD, because everything that is committed has been tested.


You might enjoy Uber’s paper on SubmitQueue, their system to run CI on what they guess will be the sequence of commits that test successfully: https://eng.uber.com/research/keeping-master-green-at-scale/


This. Not the speculative builds thing, but the organising your codebase so that you don’t have this problem.

Our code structure needs to support both the functionality required by our customers, and our needs as developers.


Mergify just introduced a “speculative merge queues[1]” feature that does something very similar.

[1] https://blog.mergify.io/announcing-speculative-merge-queues


First, never use --force; use --force-with-lease instead -- this will fail to push if someone has made remote changes. Useful if you are collaborating on branches (or developing from more than one computer).
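For example:

  git push --force-with-lease origin f2   # refuses to push if the remote moved since your last fetch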

> It can't pause when a conflict occurs, so you have to fix the conflict and (somehow) re-run it from the point it stopped at, which is fiddly at best.

There is a better way:

1. I use `git rebase -i` from the top of the stack -- it opens vim with the list of changes it's going to make.

2. I have a script (https://github.com/mic47/git-tools/blob/master/GitStackTodo....) that processes this TODO list and inserts commands to back up each branch and move it to its new location. At this point I can even modify the TODO list to rip out commits I don't want, or squash commits I want. Or you can reorder commits (I usually do code review fixes at the top of the stack and then reorder commits -- at least if I am reasonably sure there won't be conflicts for those fixes).

3. At this point, you can insert more things, like running tests after each branch or commit (and pausing the rebase on failure, so you can fix it).

4. When I close this file, the rebase starts. In case of conflict, the rebase pauses, lets you fix it, and when you continue, it finishes the TODO file.

5. Afterwards, I have a script that removes the backup branches.

6. I have a script that runs a command on each branch, so at the end I push my changes with `git stack-foreach master git push --force-with-lease`.

What if you can't resolve a conflict and you are in the middle of the rebase? You can run `git rebase --abort` and it will restore the tip of your stack. The only drawback is that branches that were already rebased are not restored, which is why my script also creates backup branches, so I can fix that manually and move the branches back.


"force" is harmful not only because you might push over other people's changes, but because you're also introducing additional work for people who are using the branch you're force pushing to. No longer can those people simply run `git pull` to get a updated branch, since older references also changed.

Solution is simply to never use force. Let the history be the history, don't overwrite it.


But for the vast majority of pull request branches, people are not checking them out. And if they are, they are not modifying them, so dropping the local branch and checking out a fresh one is 20s of additional work.

Additionally, technically speaking --force does not overwrite history, it just moves the branch pointer. The old commits are still there.


If they checkout `origin/branchname` rather than just `branchname` then `branchname` doesn't get created locally so they don't even have to drop the old one!
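For example (branch name illustrative):

  git fetch origin
  git checkout origin/feature-x   # detached HEAD; no local branch to clean up later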


I highly suggest reading my article on How we should be using Git. It covers a Git Patch Stack workflow, where it originated from and the tooling we built around it.

It has important ties to how the Linux Kernel and Git dev teams work as well as breaks down the benefits in relation to CI as a methodology.

https://upte.ch/blog/how-we-should-be-using-git/


How does git-ps compare to stgit?

I've been using stgit for a very long time, and before that, I used quilt, and before that, Andrew Morton's patch scripts. If I am not mistaken, Andrew Morton's patch scripts were the inspiration for quilt.

* https://stacked-git.github.io/
* https://linux.die.net/man/1/quilt
* https://lkml.org/lkml/2002/10/20/149


git-ps is an extension on top of Git that basically lets you locally manage a stack of patches. Once you are ready to have one of those patches peer reviewed, you do a `git ps rr <patch-index>` and it creates a branch based on your patch stack's base, cherry-picks that patch into the branch, and pushes the branch up.

This allows you to use the local Patch Stack style workflow similar to the Linux Kernel team but while still using GitHub, Bitbucket, or GitLab to do the peer review process.
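So the day-to-day loop looks roughly like this (a sketch based on the commands discussed elsewhere in this thread):

  git ps pull      # rebase the local patch stack onto the latest upstream
  git ps ls        # list the patches in the stack by index
  git ps rr 2      # request review: branch, cherry-pick, and push patch index 2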

Everything else I have tried, including quilt and the various other tools that have attempted to do this, has felt too complicated and like too much work.

No offense to the authors of stgit and the work they have put in, but personally the workflow with it has felt too complicated and not natural to me.

git-ps is the first way I have found where managing the stack of patches feels easy and natural while still allowing me to use GitHub, Bitbucket, or GitLab for peer review.

Check out my article on it here to understand the workflow better, https://upte.ch/blog/how-we-should-be-using-git/

I am happy to answer any questions I can.


It's hard for me to tell what git-ps is actually doing behind the scenes and how it works with GitHub, GitLab, Bitbucket, and others. It looks like in some cases the commands are basically shortcuts for normal git commands (which is not a bad thing in itself), and some commands do more, but I don't know what.

    git ps pull
similar to: git pull --rebase origin master

    git ps rebase
similar to git rebase -i origin/master

    git ps ls 
similar to: git log --oneline origin/master..HEAD, but displays more info (shows incremental commit numbers instead of hashes; shows this "rr" status, but when is the "rr" status shown? When there is a remote branch for this commit/patch?)

    git ps rr 0
This seems to do more, like pushing the commit to some remote branch, plus some additional code to create a pull request? What is the created branch name, actually? (You need a branch name to create a PR in GitHub and the others, right?)

    git ps pub 0
This merges the branch created for commit "0" into master and pushes it to origin? How can that work with GitHub or Bitbucket, where you often have a protected master branch and must merge the PR in the web UI?

>git-ps is the first way I have found where managing the stack of patches feels easy and natural while still allowing me to use GitHub, Bitbucket, or GitLab for peer review.

Maybe it would be good to describe more of what happens behind the scenes when you run these "git ps" commands. At the end of the article you mention that you need to know Git. I know it, but I don't know what is happening behind the scenes. (I could probably test it myself by actually using git-ps, but I've just read the article.)


This. I was about to say, the kernel has solved this issue with patch series and tools like patchwork [1], and there is already a lot of experience with those workflows. What would be really nice is to have these kernel workflows in GitHub/GitLab (and, god forbid, Stash/Bitbucket), with UIs for mere mortals...

This approach is interesting but sounds more complicated than patch series, and therefore more error prone.

[1] https://patchwork.kernel.org/


This guy! ^


Don't love it. Would much rather:

  git push origin f1:f1
  git checkout f2
  git merge f1
  git push origin f2:f2
  git checkout f3
  git merge f2
  git push origin f3:f3
Merging generally takes less effort to handle conflicts, too. I see cherry-pick as a last-resort option for fixing certain complicated scenarios.


Great visuals and nice write up!

As others have noted already, I like the git-ps approach (https://upte.ch/blog/how-we-should-be-using-git/)


This seems like a very involved way to avoid tackling the underlying human problem.


I'm still reading the article, but it looks like there's a typo where it refers to:

    Merge in the stack from top to bottom - in this case we're
    going to merge Feature 3's PR, Feature 2's PR, and Feature
    1's PR, in that order, using GitHub's UI.
But the "top to bottom" would be merging F1, then F2, then F3 (and you'd have to rebase & re-upstream each as you go).


This is good feedback, thank you! I intend "top" to mean "the PR furthest from `master`" and "bottom" to mean "the PR closest to `master`". I'll add a section clarifying this nomenclature.


My confusion comes from the associated image, where the top of the list is F1 and the bottom of the list is F3.

Maybe a new image of the branches with the list order reversed?


Nice article. Reminds me of a similar approach and tool called git-ps - https://github.com/uptech/git-ps .

Write-up: https://upte.ch/blog/how-we-should-be-using-git/


I really like seeing efforts being made in this direction. For the last few years I've been using git-town.com, which works quite well but I often end up having to work around it (especially when I have diamond-shaped branch dependencies, e.g. to refactor two independent things before implementing a small feature on top). I'll have to try this instead and compare!


> Changes that build on the original either go in the same PR, or have to wait until the PR is merged so a second PR can be created.

For feature branches feature-2 that's based on feature-1, why not create a PR from feature-2 onto feature-1 to get the review started? Once feature-1 is merged, simply rebase feature-2 and update the PR. Pipeline that shit, baby!
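A hedged sketch of that pipeline (branch names are illustrative; the base-retargeting behaviour is the one discussed upthread):

  git push -u origin feature-2                   # open its PR with base = feature-1
  # after feature-1's PR merges:
  git fetch origin
  git rebase origin/master feature-2
  git push --force-with-lease origin feature-2   # update the PR (retarget its base to master if GitHub hasn't already)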


This is exactly right! The quoted section assumes stacked PRs aren't an option, as a way to introduce the problems that the stacking workflow addresses.


I've been using stgit[1] for this type of workflow for the last 10 years. It lets you think in terms of atomic patches and transparently rebases all your changes for you. It also plays nicely with Gerrit.

[1] https://stacked-git.github.io/
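A minimal sketch of the stgit flow described above (patch name and message are illustrative):

  stg new fix-edge-case -m "Fix parser edge case"   # start a new patch on the stack
  # ...edit files...
  stg refresh                                       # fold working-tree changes into the patch
  stg rebase origin/master                          # transparently rebase the whole stack
  stg pop && stg push                               # move down/up the stack to edit other patches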




