Could you elaborate on how code review after merge is fine or better?

Some major downsides to review after merge:

- cost of context switching (the author has moved on to something new, which now has to be paused to "go back", so there's no agility benefit to just merging if it works)

- increased risk of unnecessary conflicts (what happens when someone merges something, you have feedback on it, and then someone else merges on top? A PR helps separate code that's done from code that could be improved, because it forces a communication moment between the authors)

- tooling (reviewing a PR or diff is well supported by existing tools. How are you discussing feedback when everyone can just merge on top without review? I am assuming there's no point to a PR if everyone can just merge)

- decreased shared learning and understanding (I might think the code follows our standards, but there may still be feedback from my team that could improve it. Why put it on the main branch before getting that feedback? It seems like it would be hard to keep track.)

I can't imagine my team performing well under those circumstances, and I think we have a very healthy code review / quality culture. If I'm not giving or receiving feedback, that sounds more like code slinging than thinking and humility. Even the most experienced architects I've worked with welcome feedback, so it's not a matter of trust.



I haven't used this workflow but I imagine your concerns could be addressed.

> addressing merge conflicts

I actually think this workflow handles that better from the pov of the change author. Other code changes have to resolve conflicts with yours, not the other way around, and the follow-up changes from review feedback can begin with conflicts already addressed for you.

> tooling

I don't know how other platforms handle this, but on GitHub at least there is nothing stopping you from reviewing a merged PR. You can prevent pushing straight to the trunk while still allowing authors to merge their own PRs at will.
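
For what it's worth, here's a rough sketch of that setup via GitHub's branch protection API (owner, repo, token, and the "ci" check name are all placeholders, and the exact field names are worth double-checking against the current docs):

    import requests

    # Hypothetical values; a real token needs repo admin scope.
    OWNER, REPO, BRANCH, TOKEN = "acme", "widgets", "main", "ghp_..."

    resp = requests.put(
        f"https://api.github.com/repos/{OWNER}/{REPO}/branches/{BRANCH}/protection",
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Accept": "application/vnd.github+json",
        },
        json={
            # Require a PR (no direct pushes to trunk), but zero required
            # approvals, so authors can merge at will and review happens after.
            "required_pull_request_reviews": {"required_approving_review_count": 0},
            # Keep CI as the pre-merge gate ("ci" is a made-up check name).
            "required_status_checks": {"strict": True, "contexts": ["ci"]},
            "enforce_admins": True,
            "restrictions": None,
        },
        timeout=30,
    )
    resp.raise_for_status()

With that in place, pushes to main are blocked, but an author can open a PR and merge it immediately, and review comments land on the merged PR.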

I do think your other points are clear drawbacks but on its face the practice doesn't seem without merit. Seems like the "show" point on the ship/show/ask spectrum.


The biggest benefits are:

- Breaking the framing that a review should only address code that has changed in a PR, and encouraging a more holistic view of the codebase. You don't have to review every merge every time, just review code at a granularity that makes sense when it makes sense to do it.

- Removing a delaying step between code being functionally complete and being delivered, where value invested is sitting on the shelf waiting for a reviewer.

- Forcing high confidence in the test pipeline. If you're relying on code reviews to catch functional problems rather than stylistic or structural ones, yeah, you won't have the maturity to do this. Saying "we'll review after merge" is saying "we have enough confidence in our automated quality gates not to rely on a human before then."
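
A minimal sketch of what such a gate can look like, assuming a Python project (the specific commands are illustrative, not prescriptive):

    import subprocess
    import sys

    # Minimal quality-gate sketch: run each check in order and block the
    # merge pipeline on the first failure.
    CHECKS = [
        ["pytest", "-q"],        # functional tests
        ["ruff", "check", "."],  # lint/style
    ]

    for cmd in CHECKS:
        if subprocess.run(cmd).returncode != 0:
            sys.exit(f"quality gate failed: {' '.join(cmd)}")
    print("quality gate passed; no human needed pre-merge")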

To address your worries:

- Cost of context switching is removed, not made worse. If you insist on a review before merge, the author has to wait for a review, and unless you have a culture of reviewers context switching to do reviews immediately when they are requested, they will probably pick up something else. Now when the review comes back, what do they do? Do they leave the review waiting, increasing the likelihood of a merge conflict as it gathers dust? Or do they context switch to deal with it and realize the value invested? Contrast this to review-after-merge: the value is delivered, and there's no merge conflict potential; the reviewer can get to it when convenient; and the result of the review can just be another set of tickets on the board to be picked up in the normal sequence. No context switch required.

- Disconnect the review from the PR and the problem of conflicts goes away. Instead review a module at a time (where "module" can be whatever scope makes sense: file, import, header, whatever), so that you're always looking at the whole context rather than the few lines that happen to have changed; the first sketch after this list shows one way to gather that context. That also minimises the human tendency to find as many issues in a 50 line diff as in a 2000 line diff.

- The tooling issue is to some extent an availability bias. Just because tooling for a specific process is well-known does not make that process good. It can entrench harmful practices and make them hard to change, or even hard to see how harmful they are. And yes, if you're doing something close to trunk-based development then either PRs get very small or they go away entirely - branch management is orthogonal to when reviews happen, but you can't usefully move in that direction without addressing the delays inherent in review-before-merge.

- Learning and understanding would only decrease if reviews stopped entirely, rather than just moving to a different point in time. The reviewer always has `git blame` to direct feedback to the right place (see the second sketch below), and by expanding the scope of a review from just-the-diff to all the code around it, the reviewer has more of a learning opportunity, not less.
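
To make the module-at-a-time idea concrete, a sketch, assuming a made-up convention where the team moves a lightweight "last-review" tag forward after each module-level review:

    import subprocess

    def commits_since_review(module: str, marker: str = "last-review") -> list[str]:
        # Everything that touched `module` since the review marker;
        # the tag-based convention is hypothetical.
        out = subprocess.run(
            ["git", "log", "--oneline", f"{marker}..HEAD", "--", module],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.splitlines()

    # Review src/billing/ as a whole, then: git tag -f last-review HEAD
    print("\n".join(commits_since_review("src/billing/")))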
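
And for directing feedback to the right person, a `git blame` sketch (path and line range are illustrative):

    import subprocess

    def last_authors(path: str, start: int, end: int) -> set[str]:
        # `git blame --line-porcelain` emits an "author " line per source
        # line; collect the distinct names for the range.
        out = subprocess.run(
            ["git", "blame", "--line-porcelain", "-L", f"{start},{end}", path],
            capture_output=True, text=True, check=True,
        )
        return {line[len("author "):] for line in out.stdout.splitlines()
                if line.startswith("author ")}

    # Who should this review comment go to?
    print(last_authors("src/billing/invoice.py", 40, 60))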

It's possible you do have a healthy review culture, but the question I would have is this: how long do PRs sit on the shelf waiting for a review, or for review feedback to be addressed? Do you track that number? And are you relying on humans to catch problems that should be embedded in automated quality gates? Moving the review out of the commit delivery cycle removes a potential process bottleneck, and it's a very easy one to be completely blind to.
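
If you wanted to put a number on that, something like this sketch against the GitHub API would do (owner/repo are placeholders, and open-to-merge time is only a rough proxy for review wait):

    from datetime import datetime
    import requests

    OWNER, REPO = "acme", "widgets"  # placeholders

    prs = requests.get(
        f"https://api.github.com/repos/{OWNER}/{REPO}/pulls",
        params={"state": "closed", "per_page": 100},
        headers={"Accept": "application/vnd.github+json"},
        timeout=30,
    ).json()

    def hours_between(a: str, b: str) -> float:
        fmt = "%Y-%m-%dT%H:%M:%SZ"
        return (datetime.strptime(b, fmt) - datetime.strptime(a, fmt)).total_seconds() / 3600

    waits = sorted(hours_between(pr["created_at"], pr["merged_at"])
                   for pr in prs if pr.get("merged_at"))
    if waits:
        print(f"median open-to-merge: {waits[len(waits) // 2]:.1f}h across {len(waits)} PRs")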


No amount of automated tooling can account for subtle security issues, wrong understanding of the spec (something that happens easily, especially to people new to the project, but also whenever you work with an area of the code you're not familiar with), gotchas previously encountered, etc. There are things that humans (especially ones with a lot of experience) are good at catching that machines aren't.

If you do "review after merge" and you deploy after every merge, I think that's highly irresponsible. If you don't, then your first point still applies - there is a delay between the function being "complete" and being delivered.


That's just blame diffusion though. No review is guaranteed to catch issues like that; all it does is let you say "well, we followed The Process, guess it sucks that one got through" when something bad inevitably makes it to production because the reviewer didn't have the specific knowledge to catch the specific problem. That's more likely to happen when the scope of a review is limited to a diff - either implicitly, because that's all the tooling shows you, or explicitly, because of "why should I fix code I didn't write on this ticket" reactions to review feedback.

That's not to say blame diffusion is without value! You might need it to avoid toxicity around the team. But that's a different problem.


It's not "blame diffusion", it's risk management. Two sets of eyes are more likely to catch issues than one, and if I specifically know that some part of the code is tricky and person X is familiar with that part of the code, I might even ask that specific person to review.

Honestly, it sounds like you have a very cavalier attitude towards breaking production. That might work in some settings, but definitely not all of them.



