I agree with the middle ground comment. That is how we tend to do things where I work.
We have a modified GitFlow:
main: is the source of truth and is what is in production.
develop: is the constantly moving branch we make PRs against. You can commit directly here; it's discouraged, but it isn't a hard rule.
ticket: is a branch for each JIRA ticket not each feature.
release: We don't make these and just use tags on main. Each "release" is a merge from develop to main and that gets deployed.
hotfix: These are made against main and merged back to develop when we use them. It is rare enough that I have to look up our "official" procedure.
With that we can easily use PRs, release code in small hidden chunks, do code reviews, etc.
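For anyone picturing the mechanics, a rough sketch of what our "release" and hotfix steps boil down to in plain git (the branch, tag and hotfix names here are just illustrative):

    # "release": merge develop into main, tag it, deploy from main
    git checkout main
    git merge --no-ff develop
    git tag -a v1.42 -m "release 1.42"
    git push origin main --tags

    # hotfix: branch from main, fix, merge to main, then back to develop
    git checkout -b hotfix/login-error main
    # ...commit the fix...
    git checkout main && git merge --no-ff hotfix/login-error
    git checkout develop && git merge --no-ff hotfix/login-error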
Seems like the big win they got was releasing small hidden chunks of a feature and deploying them to staging. They also gave up some nice things, though, like code review before merging.
> ticket: is a branch for each JIRA ticket not each feature.
This kind of thing really grosses me out. Why can't you just include the issue number in relevant commit messages? Why does it matter what your branches are named?
Naming branches relevant to what they actually represent is incredibly important to me, personally. I don't care what you do but I refuse to play by this rule in particular, when it's a hard rule.
I've worked with lots of variants, and by far I find using both works best: feature/PROJ-124-user-edit or bug/PROJ-234-startup-crash
This unambiguously lets you trace back to the ticket (as either the author or owner), but keeps the branch readable (you don't have to go to the ticket to see what it is). It also makes the merge message (containing the branch name) much more useful when looking back months later.
It works with multiple branches per ticket (which I often do to make PRs easier): feature/PROJ-456-refactor-config,
feature/PROJ-456-config-ui
It avoids having to worry about the text name. There's no worry about duplicates, either current or historically. It can also be short: just descriptive enough so someone looking at the branch list can see what's what, and you can find yours without memorizing ticket numbers.
The ticket is also very useful when you're cleaning up old branches: maybe there was a different fix and this was abandoned, or maybe it was blocked and then forgotten? The ticket can answer that.
It's very low effort: you naturally know the ticket when creating the branch. After that, you just work on the code, and when you're done the ticket number is right there for you - no searching, sticky notes or kanban board necessary.
Edit: the bug/ or feature/ prefix is optional, but keeps the display way nicer: most UIs will treat it like a folder and let you collapse it, keeping the top level organized and tiny. The classification is also easy and useful - at a glance you can see if it's mostly fixes or new features happening (without looking at your issue tracker). For products with released versions, release/1.0, release/1.1 etc. works well for the same reason.
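In practice the whole convention costs a couple of commands at branch-creation time (ticket numbers and descriptions below are made up for illustration; the base branch is whatever your integration branch is):

    git checkout -b feature/PROJ-124-user-edit
    git checkout -b bug/PROJ-234-startup-crash

    # the prefix also makes plain-git filtering easy, not just the web UI
    git branch --list 'feature/*'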
Sorry, I didn't mean to imply that the branch is named for the JIRA ticket. I personally like to have the ticket number in the branch name along with a description though. It was more that each ticket branch is for a small chunk of work that can be merged and deployed when a single ticket is complete, not when the whole feature is complete.
Weird. In a GitHub-flow model I don’t care about branch names at all except inasmuch as that they need to be unique.
Branches pushed to origin are just a backup of the commit log that leads to a pull request - they’re ephemeral, belong to the pull request author(s) and are nobody else’s business but theirs.
I don't know about Jira, but Gitlab has a pretty cool thing whereby it can detect the relevant issue when you git-push a branch to the repo, by looking at the branch name (e.g. 123-some-issue is linked to issue #123).
Not gp, but I (almost) never read commit messages. And usually tickets are so amorphous that it's hard to come up with a good name for the branch, so I always use the ticket number.
Of course, I have no problem with well named branches, but when not doing it by ticket number, you tend to get nearly meaningless branch names: search, fix_color, tooltip, search2.
That said I'd never mandate it, it just seems easier.
But now the development is pretty much tied to the issue tracker forever. To read the code history and understand the reasoning, you need access to the issue tracker. Which to me sounds like a bad idea for the future.
> We have a modified GitFlow: main: is the source of truth and is what is in production. develop: is the constantly moving branch we make PRs against. You can commit directly here which is discouraged but it isn't a hard rule. ticket: is a branch for each JIRA ticket not each feature. release: We don't make these and just use tags on main. Each "release" is a merge from develop to main and that gets deployed.
If you're a huge team with a slow release process then I guess you need that develop/master split, but it's costly. When I've worked in a small team we've had a single master branch and every feature branch gets released and deployed immediately after merge (with a "lock" so that you don't merge your feature until the previous person has signed off their deploy), with each feature branch ideally representing a user-visible agile feature (i.e. up to 2 weeks' work) - IMO you don't gain a lot by merging something that doesn't have a user-facing deliverable (how can you be sure the code you're merging is right or not?).
The develop/main split is not costly for us; I'm not sure what overhead others are incurring there. We run deploys on demand throughout each day. We don't have the same sign-off process or block other deploys. We just send develop to staging and once it is confirmed good we merge it to main and then out to production.
> IMO you don't gain a lot by merging something that doesn't have a user-facing deliverable (how can you be sure the code you're merging is right or not?).
I disagree but to each his own. I think you can release small parts with testing around it. I often release half of a back end feature, then the other half, then the front end all in separate branches and releases. All I really need is to have the parts broken down into logical testable chunks.
> The develop/main split for us is not costly I'm not sure what overhead others are incurring there.
Mainly the mental overhead and the risk of confusion or mistakes. Presumably you still need some co-ordination to make sure two people don't try to release at once. (What do you do if someone else merges a feature to develop while testing on staging is ongoing?)
> we merge it to main and then out to production.
Hmm, so what you deploy to production is a different artifact from what you tested on staging? I'd find that worrying.
> I think you can release small parts with testing around it. I often release half of a back end feature, then the other half, then the front end all in separate branches and releases. All I really need is to have the parts broken down into logical testable chunks.
It's sometimes unavoidable, but my thinking is: yes you can unit test, but how can you possibly know that you're testing the right thing if your change isn't user-visible? You can confirm that your code works the way you think it works, but you can't confirm that it actually delivers the functionality you want. IMO it's only worth putting something in the shared branches once you know you're not going to significantly rework it (otherwise you're causing as many conflicts as you're avoiding), and you can only know that when you've actually tested it end-to-end.
Having develop and main seems like more of a pain than PRs to main and using release branches. This model is also very limiting if you need to support multiple releases in parallel. Maybe this does not apply to your team.
We really only have staging/production so it works great for us, we don't have to support multiple releases at the same time. I agree it gets more difficult if that is a concern.
I don’t see what it gains you even with that limit, though. You’re essentially using your main branch just to hold tags. If you rename “develop” to “main”, your big merges to main go away entirely and turn into zero effort branching.
One of my biggest concerns about long-lived branches is that they drift. Tiny merge issues end up accumulating over time, or people forget to merge back a hotfix, and you can end up with your dev branch behaving slightly differently from your release branch. This model can work but it is more complex and more brittle in a world of cheap branching.
I have gone through the big move from long lived branches to “trunk based development” twice in two very large code bases. In both cases the move showed us many places the codebase had unintentionally diverged over the years because we had to reconcile all of it to establish the new “main”.
For us, whether you commit directly to develop or create a branch is more of a decision about whether QA needs to be involved. Once you are done, you merge with squash, so develop has no merges and later it's not even visible whether you committed directly into develop or worked in a branch. Obviously this might not work if your work involves large changes, but we work in tiny bits.
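For what it's worth, the squash step itself is tiny (branch name and ticket invented):

    git checkout develop
    git merge --squash ticket/PROJ-789-null-check
    git commit -m "PROJ-789: fix null check in importer"
    # develop gets one ordinary commit and no merge commit, so later you
    # can't tell whether the work happened on a branch or directly here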
Yeah, feature flags are increasingly mainstream / worthwhile, but I hate the idea of giving up on not just CR before merging, but also "preview deploys" (from feature branches to ephemeral deploy envs).
The main branch is always stable, releasable. Feature branches are branches off main, which are then merged back into main. No develop or release branches.
It is like trunk-based development, especially if branches are kept short and regularly rebased onto origin/main, but with a natural point to run PR checks before merging.
The term “GitHub flow” existed prior to the Microsoft acquisition. The GitHub documentation history would be definitive, but you can find articles dating back to 2014 discussing GitHub Flow vs GitFlow. The acquisition was in 2018.
I agree that the name isn’t the best. It is however at least specific. “Trunk based development” encompasses a class of development processes. Although I don’t think I’ve ever actually described a real world process as “GitHub Flow” because almost no one knows what it specifically is anyway.
> Still just a cringey excuse to attach a trademark to a common practice, only sans the "Microsoft".
I don't understand any of this sentence. Of course it's "sans Microsoft". It predated the acquisition by years.
But also I don't see the "cringe" here. "GitHub flow" was introduced as "this is what we do at GitHub". It is (or at least was?) the GitHub flow. https://githubflow.github.io/
> OK, "git flow" was in a way even worse; that was Atlassian trying to usurp the generic "git" name for their own particular flow.
> I don't understand this mindset that starts with assuming everyone is essentially a bad actor.
It's pretty hilarious how some people use "I don't understand..." to imply that whatever it is they don't understand is bad, apparently completely oblivious to how it actually speaks more to their own powers of comprehension.
I often say "I don't understand..." because I'm willing to concede that sometimes my viewpoint is incorrect and I'm interested in correcting my views. Also because sometimes saying "I don't understand" is enough to get someone who holds an invalid/incorrect/unhelpful belief to restate their viewpoint clearly enough that they can see the problem with it (but of course that requires the other party to actually engage in a constructive way). And sometimes I say "I don't understand" because it's just more polite than insulting the misinformed person.
For example, when I said "I don't understand" to your "Still just a cringey excuse..." comment, what I was really saying was "this sentence is poorly written and hard to understand, and you also clearly don't understand the context of where the term actually came from".
When I said "I don't understand" about assuming people are bad actors, I was really saying "you seem to be using the assumption of bad intent to mask your ignorance about the things you're talking about".
So now, I'll say I don't understand why you're nitpicking my use of the phrase "I don't understand" instead of responding to anything of substance I said.
By this of course I mean that it's clearly easier for you to attack my intellect than for you to self-reflect.
I've found that to be a great middle ground on most teams.
If you want to stretch it a little more, you could selectively do post-merge reviews for things that might be low risk (i.e. a UI change that's behind a feature flag that only your team sees), and keep riskier changes (like a big refactor, a data migration, etc.) on the pre-merge review flow.
We have pre-merge review - but for trivial stuff, text/label changes, etc., put a "tiny" tag on the PR, and if urgent paste it into the Slack channel asking for a glance-and-nod review...
At least having another pair of eyes check text-only changes has caught so many typos, etc, and only takes a couple minutes.
1. Put all new code behind feature flags that are off by default in production.
2. Make rolling back easy.
3. Have extensive unit and integration tests.
Some of the deployment steps could be automated even further -- maybe the CI server automatically deploys to staging after a successful build.
See the books Accelerate: The Science of Lean Software and DevOps, and The Toyota Way for more.
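For point 1, a minimal sketch of the idea - not any particular feature-flag product, just an off-by-default toggle read at startup (the flag and app names are invented):

    # production config: new code paths ship disabled unless explicitly flipped on
    : "${ENABLE_NEW_CHECKOUT_FLOW:=false}"

    if [ "$ENABLE_NEW_CHECKOUT_FLOW" = "true" ]; then
        exec ./app --new-checkout-flow
    else
        exec ./app
    fi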
How is that even possible? A comma change in a feature can break things, are you going to put that change behind a feature flag?
"New code" isn't just "new files/functions", so it's not always feasible to keep it behind flags, unless you use a "copy on write" methodology to all code.
Parent commenter probably meant putting new features behind a flag. I work for a major feature management company and we heavily use our own platform. Yet we don't put "all code" behind feature flags but we do with features. It's nearly impossible to put "all code" behind feature flags.
Right, basically this. If you're shipping something new that could affect production, put it behind a feature flag so the code paths that are already live are unaffected. Continuously ship small changesets so that it's easy to roll back if necessary.
> If you're shipping something new that could affect production…
Which, again, is everything. I’m all for feature flags, but they cover very specific cases. There are many changes that feature flags cannot cover. The addition of the feature flags itself can introduce bugs. They are a great feature but only one small piece of protecting production.
What makes you think that just because you code review after it hits trunk, you deploy before review?
I think the problem people have with trunk dev is they don't grok that some projects don't have the same deployment strategy as them. There is a thing called a code freeze. This is a common practice. Not everyone does it.
Just because you do trunk dev does not mean you can't also have a feature branch to try stuff out, or a release branch, or any number of other branches. What trunk dev means is get your code out there to other devs quick. Not necessarily get your code out to production or to QA or the customer quick. Those decisions can be independent of branch strategy.
It's easy to confuse continuous integration and continuous deployment because they're so often mentioned in the same breath. Aren't they collectively called "CI/CD"?
(Confession: At least I *think* this is an example of getting them confused. They are different things, right...? The same difference you mentioned?)
What is the cost of a bug getting into the wild vs what is the cost of keeping the bug rate very low?
If you're NASA or a high frequency trading company I'm guessing the cost of bugs can be very high. If you're making internal tools to automate admin tasks the cost of bugs is often very low.
I'm not trying to say it's binary. That you're either NASA or your quality doesn't matter. NASA is only the furthest on one end of the scale that I could think of.
There is a scale and you have to know where you are. There is always a trade off between the amount of work you can get done and the amount of QA process you have in place. With infinite resources it wouldn't matter but us programmers are expensive.
This is the way it's done at many major tech companies. Each individual commit is reviewed then merged directly into main/master.
I've never used them but the thought of feature branches seems absurd versus simply merging small changes. Very rarely can a "feature" not be broken down into small self-contained changes.
Allowing direct push to master is fine so long as you 1) have a small team so it's not very congested 2) have a short enough build/test cycle that you can enforce running tests locally.
As soon as your test suite grows to the point that users aren't likely to run ALL tests before pushing new changes, you can't.
Also, if you want to have code reviews at all, you want them pre-merge, especially for irreversible changes. For example: you have code that writes a serialized format, and by the time the change is reviewed people have already used the code, even if only in testing/staging - you now have data serialized in the bad wire format that you may need to be able to correct, and the correction code added after review will need to be maintained forever unless you accept the loss of the data.
I think: commit directly to master to scaffold and iterate quickly on a greenfield project with 1-3 devs. Then start doing feature branches and PR's to master.
Exactly this. One particularly haughty CEO I worked for came to this exact same revelation. "But what about bugs?" "Just tell the developers not to add any bugs!" I'm not even kidding. That's what he said. Continuous release is fine for applications which aren't mission critical AND you have users who are accepting of an increase in bugs. Back in the real world, it's just not a good idea. We're all trying to calibrate the right balance between QA and output, and there are many ways to find the right balance. I remain unconvinced that "YOLO" is ever the right process.
> As soon as your test suite grows to the point that users aren't likely to run ALL tests before pushing new changes, you can't.
Why?
I have run trunk for years, naturally you don't tag and deploy to prod if tests fail. But the world keeps turning if trunk has a test fail. Or a build breakage. Or a code style breakage.
You just fix it. On trunk, everyone can see it and anyone can fix it.
> the world keeps turning if trunk has a test fail. Or a build breakage.
The world stops for the other developers that were unfortunate enough to pull/rebase their work when there was a build breakage. Having people shout “don’t pull, master is broken” is more overhead than is lost on PRs.
A test fail isn’t as serious as no one is completely blocked, but a CI build that has 5 test failures can easily get another 5 before 3 of the first 5 are fixed. Before the end of day 1 you have a dozen failing tests of 3-4 different ages, some of which aren’t attributed to those who broke them. Finding who should stop what they are doing and fix the test is a job that needs to be done.
_Anyone_ can fix it: you don't have to find _who_ fixes it.
That does presume you are OK with collective responsibility for the codebase, but I have never worked without that. If you live in that world, I feel bad for you son ;)
Given collective responsibility: if the same issue is on a branch it's still there, it's just hidden and lives longer.
In my experience the _faster_ you find issues the cheaper they are to fix.
If the committer is not there, and the issue is on a branch, it's the same concern when someone _else_ has to merge the branch to develop. It's just been sat there longer. It likely did not show up in CI tools so fast. You came across it out of context.
If you insist on one dev per branch and that dev merges, that's playing blame games. I don't find that helps team interactions. I find it better to write your code and your bugs together and fix them in plain sight.
We all know breakage is inconvenient so we all try not to do it on branches or trunk.
No, you can sometimes pull master with a test fail and knowing which tests are ok to fail is a skill in itself.
> That does presume you are OK with collective responsibility for the codebase
That only works on small projects and on "simpler" projects. On my last project with 100+ developers it was likely that a bunch of the failures could only be fixed by 1-2 people. Asking a tools programmer to fix a failing networking test, or a gameplay/network programmer to fix a material bug, is a stupid waste of time.
> If you insist one dev per branch and that dev merges, thats playing blame games
That's a strawman. Nobody is talking about one dev branch per person except you. There are other branching models (git flow, task branches or even feature branches) with well defined criteria for merging back to main available that allow for multiple team members and stability in main.
> We all know breakage is inconvenient so we all try not to do it on branches or trunk.
This is a very naive view and is the equivalent of "just make sure it works before you submit", and doesn't scale into even tens of developers working on a project in my experience.
> _Anyone_ can fix it: you don't have to find _who_ fixes it.
You can always "fix" the broken test by reverting the offending commit. But again, if someone has already committed on it, your history will be a mess. Actually fixing something takes developer B longer than it will take the dev who knows the context. So switching that responsibility - unless it's a typo - would be a massive waste of time. A nice "collective responsibility" perhaps, but a huge waste of time.
> That does presume you are OK with collective responsibility for the codebase, but I have never worked without that. If you live in that world, I feel bad for you son ;)
I do like having collective responsibility. But one of the most important responsibilities is having a clean history, a buildable branch to start from etc.
"Fixing other people's typos" is a nice team excercise but it's pretty far down on my list of collective responsibilities.
> If the same issue is on a branch its still there, its hidden and lives longer.
Test failures when we did commit-to-master lived for days or weeks. Working with PR's doesn't slow me down at all. It's not like feature branches have to live for days. They can live 20 minutes. Of course, if you have a 1.5h CI and then wait for a code review, then it's going to be a few hours, but I wouldn't want to go back to churning on master with 30 others. I did that for 15 years before. With all kinds of teams.
> If you insist one dev per branch and that dev merges, thats playing blame games.
I don't care who merges. And whether the dev merges their own code or whether code review is required is a tangential question. The more important point is that the branch lets build+test be done while the dev does something else, and that the target branch is green in the meantime. Again, if you can build+test in minutes, then perhaps doing master dev is fine because you can all just agree that "Team, let's try to compile and run all tests before committing to master ok thanks". But if you have a lengthy CI then what? Do you hope every dev runs the 3 hour test suite before committing? That also sounds like a massive waste of time where the developer could be working on the next thing already.
I honestly don't encounter non-trivial merge conflicts in practice on a team of 5 developers. Our repos are scoped roughly to be team-sized so the velocity is low enough to know what everyone is working on.
I guess some of this advice applies better to repos where a large number of people are working on it.
I couldn't imagine giving up the quality gate factor of PRs. Carving out the time to dissect changes catches so many bugs (although it can be received harshly sometimes compared to face to face).
Also pushing to master vs. long-lived feature branches is a false dichotomy. You can have small PRs on short-lived branches that may not be a complete feature but can be merged without making the main branch unreleasable.
There is also the political factor to consider in companies where product and sales people control the selected work items. Once something is in a working state there is pressure to move on to the next thing. Fighting for quality before it is in a publishable state is a dev's best defence against later rework.
The CD community is overly obsessed with velocity. Of course removing obstacles can lead to a smoother, faster workflow. Take it to the extreme, though, and it becomes a dopamine-hit activity: the goal is to merge changes fast, and we become unable to take the time to think deeply and reflect, since it is clear that we are valued for our rate of commits over smart decisions.
"Once something is in a working state there is pressure to move on to the next thing. Fighting for quality before it is in a publishable state is a devs best defence against later rework."
This insight is true and depressing. It's not the best way to focus effort. You fight for quality on a new feature that may or may not get use. By the time you know how well received the feature is it's too late to allocate resources to improving the code quality. So if you don't get it right the first time, this code might cost you months of wasted time when trying to make changes to it in the future.
It would be far better if time was allocated to going back and figuring out what parts of the code are causing problems and going back and fixing them. Spending time removing the features that don't get used.
How do you get the political buy in to do this? I have no idea.
Weirdly enough this came up with my life coach the other day. She said that there's a natural progression of stages for any creative work. She quoted some book or other, but she said the stages were something like, inception (having the idea), discovery (figuring out how you'll do it), doing it, then finishing it - practically and emotionally.
She thinks modern society is so obsessed with doing the work that we skip over the step of finishing things properly. The result is that we don't emotionally or practically close the loop on our work. Everything is left in an "oh, maybe I'll come back to that" stage.
Finishing something should involve a moment of reflection where we notice and accept that we're no longer its steward. It's both a time for celebration (Yay! We did it!) and, often, a bit of mourning ("Oh, that period in my life is over forever now. Huh.").
In the circles she moves in people think skipping over that step of closure is what causes burnout. For a dozen reasons we're just too keen to start the next thing, so we don't appreciate the work we've done. We don't celebrate. We don't move on. We don't clean up our code, even when we know we should. We end up feeling like we're juggling a dozen balls, because we're not really putting any of them down.
I don't know the solution to this in the workplace. But for my own work, I'm trying to find stopping points where I can take half a day off, go out for dinner and reflect on the passing of what I've accomplished. It feels really wholesome.
I feel bad about past projects where nobody else has stepped up to maintain them, which slowly code-rot as people discover issues. It's as bad as the unfinished projects I never had the ideas or motivation to flesh out into something useful.
On the flip side, if you’ve ever been on the manager side this can drive you crazy. Something passes all the tests and you get arguments from a reviewer about why it is suboptimal for future usage. Often it’s something we don’t know we’ll ever do, and complete bike-shedding. I favor getting the code in and then refactoring later based on YAGNI. Can that come back to bite you? Absolutely, but not as many times as you’d have been warned, so the cost-benefit works out.
Developers absolutely miss the big picture because they're mired in the tiny details. There is a tendency towards perfectionism.
That said many of us spend much of our time maintaining the half baked features of devs long gone. It's time consuming. It's reverse engineering, support cases, meetings, bug fixing, digging through vague logs, workarounds, knowledge transfers and there's never the political will to re-do it plus it's more than twice the work because you have to migrate existing customer workflows. It is work that is neither rewarding nor rewarded.
With that in mind there are categories of things that should be tackled upfront. Besides glaring bugs there are security holes, traceability, auditing, over-engineering, real-world performance, documentation, preventing bloat and my pet peeve - changes that slow down development by requiring duplicated work or affecting the ability to run locally.
The business pressure encourages developers to skip these because usually someone else will end up dealing with the mess.
I have been the pedantic reviewer. It took time to learn good code review practices and grow them into the work culture, to let go of matter-of-style issues (where I can't argue an objective benefit for the change).
Asynchronous text-based code reviews suffer from an empathy problem also. A lot of the pedantic or ego attitudes drop away when you have to discuss it face to face. Nowadays I'll often opt for a call with the team member if I think there will be a lot of comments. Better yet is to do some pair programming in advance of the code review to avoid major disagreements and discarded work.
As a developer I always insist on being in the planning meetings, and I ask lots of questions about possible future plans. We then build with those in mind. That way we can add flexibility where it’s likely to be needed and avoid the cost where it isn’t.
Oh yeah, I'm with you on that. The ideal time to get something optimal is when it's been around long enough to prove itself. If it's good to go, get it out the door. You don't make money on features you haven't shipped.
My problem is, 12 months down the track, getting people to give you the resources to refactor when they get no new features out of the deal. Only the next few will take half the time to develop if you can do the clean-up first. But the demand for the new feature will outweigh everything else.
> The CD community is overly obsessed with velocity.
I think CD is about minimising the amount of code released in one go, which allows you to catch issues much faster and revert issues much quicker. Compare that to something most banks do, release once a quarter, and you'll get stuff like that UK bank that went down for days (can't remember which one it was).
I've yet to meet anyone saying you have to finish your features faster.
Velocity is also highly valuable because if you can get your code in front of users worldwide rapidly, you can more quickly “page out” that code and move on to the next work unburdened. Once you know your code is released fully and working, the cognitive overhead is greatly reduced because multitasking is reduced.
The tradeoff is that if you are pushing your code worldwide in an hour and you shipped a critical bug, your high velocity also creates outsized negative impact.
I, for one, will never understand how Trunk Based Development (TBD) is considered "sane default" these days. The power of version control isn't just in a record of history, it's also in branching; and most often, I've noticed developers move to TBD because they don't understand the intricacies of their version control system and how to leverage it for a proper async parallel development workflow. You don't need to adopt GitFlow or another workflow verbatim, understand how you want to deliver software and work within the team so that you can adapt it to your requirements.
The points made by the author are confusing to me.
> Quality Assurance was under-resourced. They had a huge job of checking and re-checking every feature to verify that there were no regressions. After merging a feature into develop, they had to check again to see if there were any new issues that were introduced by bad merges or conflicting feature requirements.
If this was the case and they were fine with QA testing just the `master` branch after moving to TBD, maybe QA shouldn't have been testing their feature branches in the original workflow. Just use branches for proper code review and then QA only steps in after the branch is merged?
> The threshold of conflict was amplified by the time that passed between when a branch was cut from develop to the time when it was merged back.
> For bigger features, a branch's life could last one or even two weeks. The more time that passed, the greater divergence there would be from the other code.
Feature branches should be short-lived, as atomic as possible. And if you're working on a big feature, you have to update your branch frequently with upstream changes. Merges of Doom only happen if you're not following version control best practices.
This also requires a little bit of planning upfront (especially if you're working in parallel on a single feature), but forcing that thought is a good thing.
It also seems like they attributed moving to Kanban as only being possible due to the move to TBD, but it's not like it's impossible with a proper branching workflow.
So, the author made the switch to TBD and attributed it to increased velocity and better _overall morale_, but I think they're just enjoying the seemingly greener grass across the fence for a while.
I agree with almost all of your post. I would only offer that you consider that the most important task of a developer in most organizations is to eliminate complexity beyond a bare minimum. Having a simple, safe, and predictable version control workflow is within that purview, even if it means most people do not use or remain ignorant of the full power of the tools at their disposal. All other points stand, and the simple workflow doesn’t have to be TBD.
> most often, I've noticed developers move to TBD because they don't understand the intricacies of their version control system and how to leverage it for a proper async parallel development workflow.
On what do you base this assertion? I would counter that people are switching because they’ve experienced significant difficulties and that your claims they “don’t understand” version control are unfounded.
You didn’t actually explain how these developers were misusing their version control. You just asserted that they are ignorant because they hold a different opinion.
> And if you're working on a big feature, you have to update your branch frequently with upstream changes. Merges of Doom only happen if you're not following version control best practices.
The problem with this glib statement is that it ignores the cost of these frequent merges. If you’re working on a small team, or your feature branch is only changing code that no one else is changing in the trunk, sure. Merge constantly. It’s probably pretty easy. If you’re working on a large team and others are making changes to the same code in the trunk, this rapidly becomes a massive tax. Merges become more and more difficult as others continue iterating on changes in the trunk unaware of the burden they are placing on you to manually merge on top of your conflicting changes.
I have never seen a long lived feature branch merged successfully back into the trunk without major friction. The typical resolution involves locking everyone else out of the trunk to get the merge in (or doing it over the weekend, for the same effect), and turning off half the tests because they all broke in the giant clusterfuck merge.
The validation risk to long lived feature branches is very high. Tests begin to break as you make deep changes in the feature branch. For simple tests, they might be easy to fix. For more complex tests, you might need the test owner to assist, but they don’t want to, because it’s not their problem. You broke it. Except of course you didn’t. The weeks of conflicting changes broke it.
The cost of divergence is high and grows rapidly with age of the branch. Developers begin cutting corners and plan to “deal with it later” as the merge tax rises, because dealing with the merges slows down the work, making the feature branch even longer lived.
Well, because if you're using Trunk Based Development, you might as well just take backup snapshots of your code on each commit, right? That's not version control, it's just versions.
For the merge pain part of your argument, I would tell you to have a look at any successful open source project. Does it use TBD? Does it still support a big community of developers, mostly working asynchronously and shipping working software?
Now, if you're working on an internal project, please add in a bit of planning (quick note: a developer's job should not include only writing code), you will be able to ship high quality code without it slowing you down.
I'm irked by TBD because it is a shortcut. It trades a deliberate loss of collaboration for a gain in velocity – if that is the tradeoff your team wants to make, maybe you should consider it. But I do not get why experienced engineering leaders suggest using it for all teams.
> Well, because if you're using Trunk Based Development, you might as well just take backup snapshots of your code on each commit, right? That's not version control, it's just versions.
I don’t know what you’re attempting to say here. Versions are literally what version control systems manage.
> For the merge pain part of your argument, I would tell you to have a look at any successful open source project. Does it use TBD? Does it still support a big community of developers, mostly working asynchronously and shipping working software?
I would expect that a huge number of open source projects are using trunk based development. GitHub itself encourages roughly this model and hosts a huge number of projects.
Google supports tens of thousands of engineers on trunk based development in their mono repo. Their justification for this is unsurprisingly to avoid merge pain. So yeah, it can work successfully. Complaints about the way Google manages their checkin/release processes is not what I hear from my friends who work there.
I find it interesting that if you go read about the Linux model (which uses its own model that’s definitely not trunk based development), there’s a big emphasis about getting code integrated into mainline (vs maintaining a custom fork) because the cost of maintaining a custom fork is immense due to constant merges. It’s the exact same issue.
> Now, if you're working on an internal project, please add in a bit of planning (quick note: a developer's job should not include only writing code), you will be able to ship high quality code without it slowing you down.
Please say more here. This is the second time you’ve hand waved and said “better planning”. What exactly does that entail? What planning can an engineer do to make merge conflicts go away? Are you proposing that they plan to just not make conflicting changes in their long lived branches?
> I'm irked by TBD because is a shortcut. It trades off deliberate collaboration loss for velocity gain – if that is the tradeoff your team wants to make, maybe you should consider it. But I do not get why experienced engineering leaders suggest using it for all teams.
The way you say “shortcut” here sounds like you agree it works better in practice but you just don’t like it for some reason. Honestly, the vibe I get from your arguments is that long lived branches would work great if everyone else just wasn’t an idiot.
In my experience the real trade off between a trunk based development model and a long lived branch model is mostly short term pain (felt in the trunk based development model) and long term pain (felt in the long lived branch model). Everyone checking into the trunk causes immediate pain if something bad gets in. On the other hand, it tends to be much quicker to resolve vs long lived branches which can hide bad changes for as long as the branch lives until the final merge to trunk. It’s the exact same trade off for requiring build pass before checkin, or gating on tests for checkin. Deal with this shit now or deal with it later. With long lived branches it’s just much later and the impact of dealing with it so late can be so much higher. (It is also the same trade off faced by “micro repos”. They’re great up until the point that they need to take dependencies on other repos or vice versa, at which point it’s essentially just another way to trade immediate vs delayed-but-increased merge pain.)
Edit: Amusingly, the guy who first proposed GitFlow now also suggests using something simpler if you are doing service development with continuous delivery.
> This all came together for me when I was catching up on YouTube and stumbled across Dave Farley's video Continuous Integration vs Feature Branch Workflow
I really wish he didn’t overload the term "Continuous Integration” to also mean "workflow without feature branches". It will surely cause a lot of confusion to those who aren’t fully down with the concepts already.
I can already foresee a small startup where the CTO-by-confidence/coincidence and one of the "senior" devs are having extremely heated circular arguments about the pros and cons of CI, not even talking about the same thing.
OPs "trunk-based development" seems like a more suitable term for what they’re describing.
I'm not sure what distinction you're drawing. Continuous integration is entirely synonymous with trunk-based development.
It's true that as an industry we've overloaded the term Continuous Integration to mean build servers running automated tests (I just did so in a comment in this thread!), but that's where the overload is.
The overload is the absence of feature-branches and PR/MRs with review. A git-flow-like model and CI are fully compatible. OPs model is explicitly incompatible.
1) Consider main branch deployable any time, so don’t push changes that can’t be deployed
2) You can commit to main/master if it’s a reasonably small change not needing review
3) PRs used for more complex changes, easier to review
4) Deploys off main branch
5) Tests run on all branches
Works well 99% of the time, can fall down when a large chain is queued for deploy (just merged) and someone wants to push a minor change but now it’s got to be everything (unless you revert temporarily).
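The escape hatch for that last case is usually just a revert on main (the SHAs below are placeholders):

    git revert --no-edit <sha-of-the-big-change>    # add -m 1 if it was a merge commit
    git push origin main                            # now the minor change can ship alone
    # afterwards, restore the big change by reverting the revert
    git revert --no-edit <sha-of-the-revert-commit>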
Yep, if it's a one-liner that's important enough that it is worth shipping immediately on its own, it's probably got enough gunpowder behind it to blow off your foot.
I pushed a one-liner with no code review a few minutes ago.
Prior massive change of mine went through pre-merge CI tests just fine, but the post-merge build process blew up because a very large list of possible arches was missing aarch64. For various reasons the pre-merge CI tests (which take minutes) are necessarily more limited than the post-merge (which take hours).
A massive list of labels gets one more label, so there are no style cleanup concerns, and our CI is already red so it's not getting any redder.
The prior real PR was reviewed and everyone missed the label that wasn't there.
Turns out the fairly massive CI test that I wrote caught the bug that everyone's human eyeballs entirely failed to catch (and I doubt anyone who reviewed it actually went through the code in the CI test in the same level of detail that I went through in order to create it).
"does the code work functionally" is a fraction of the overall purpose of a code review. If you rely solely on "does the code work", you end up with the average Rails app after 1 year: 10k line controllers, deeply entangled code, half uncommented code and half deeply over-commented code. Hundreds of comments inevitably end up out of date. Whole blocks of code might be commented out. jQuery is instantiated alongside React and 2 different versions of Angular.
I'll repeat that there's not a single code change I can think of, including comments and documentation, that can safely be merged without a review from a second set of eyes. This is exponentially true the more engineers work on the codebase. I can maybe understand why someone on a team of 3 or less doesn't see the benefit in having the overhead of mandatory code reviews.
If you push into master and forget to run tests (which happens, people are human and make mistakes, especially with small changes), then your broken tests are now breaking everyone's attempts to run the build though. You don't get the feedback that your tests are broken until after it's pushed into master.
You need the two-person rule on changes to master simply to avoid compromised developer credentials being equal to a full compromise of production systems and databases.
A minimum of two sets of eyeballs on every change. CI cannot detect intentional backdoors being introduced.
CI also cannot detect the downstream effects of some small changes.
I've seen plenty of subtle bugs get introduced by someone who has an overly simplistic view of some part of a system. And they expose a simple method to share their simplified view of some part to the world. "I believe you when you say that in all of your tests this array has a length of 1. This is a failure of your test cases. Don't add a getter method which returns arr[0]. Come with me and lets chat in front of a whiteboard."
The log4j bug might have been caught with more eyeballs. "Here's a small patch which adds JNDI support in log messages" -> "Whoa hold on - what are the implications of that? JNDI is complex". But of course, most opensource code can't afford to spend developer time on code review by multiple people.
But if you push a bad commit to main without making a branch or PR first, then CI tells you about your mistake... after main is already broken and not "deployable any time"!
That seems inherently contradictory? Particularly if you put some cost on a) not being able to merge and build on the feature in another branch, b) disturbing someone to do something perfunctory
It is an inherent contradiction. Having good testing and fast flow is a contradiction. The advantage of a second set of eyes is simply a sanity check... from a second pair of eyes. It's all a balancing act, but it does prevent a large set of failure classes that are basically "this dude went crazy".
All you can ever say when you push up a git commit to master is ‘I’m pretty sure the sequence of commands I ran locally will lead to pushing up a single one line commit that has no significant risk’
If you’ve never found yourself in the position where you slipped up and accidentally included one of the following in a commit that you didn’t notice until after you pushed, I envy your attention to detail:
- a line of code commented out to bypass it when running locally
- a non passing test
- a disabled test
- a config change to point to local host instead of a real server
- a developer credential
- a change to a package lock file
- some build output that should have been gitignored
Yes, you should diligently check what you will push before you push; but since git won’t stop you pushing any of these things, and when you are making a ‘quick one line change’ is precisely when your guard is down because you don’t think you could possibly be accidentally about to ship one of these things, these things will get pushed to master if you allow pushes to master.
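One cheap habit that catches most of that list before it leaves your machine (plain git, nothing tool-specific):

    git log --stat origin/master..HEAD    # every commit that would be pushed
    git diff origin/master...HEAD         # the full diff of those commits, nothing else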
But that's the commit that will accidentally include something else that blows up everything. It's not only about checking the desired change; it's also about checking that it was the only thing committed.
I’ve been reading Dave Farley’s new book, “Modern Software Engineering”. He has a few rants related to GitFlow vs trunk development. Many of his points agree with OP in that merging is a big pain point in GitFlow.
I’ve used both strategies on many different projects. Regardless of the development strategy, I’ve seen nasty merge parties. The way to avoid those merges is to reduce your batch size and keep your un-integrated changes to a narrow scope.
If you need to make changes outside of that scope: stash your work; create a new branch; make the change; let your teammates know what you did; then go back to your other branch. You can then integrate that fix back into your local copy. But the important thing is that your teammates can also sync that one-off change to their local copy too.
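In git terms that side trip is roughly (branch names invented; base it on whatever branch your team integrates to):

    git stash                                    # park the in-progress work
    git checkout -b fix/duplicate-invoice master
    # ...commit the one-off fix, push it, tell the team...
    git checkout feature/big-refactor
    git stash pop
    git cherry-pick <sha-of-the-fix>             # bring the fix into your branch too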
The worst thing is when two developers find the same bug and fix it simultaneously in different commits. Trunk-based and GitFlow both have this problem. Stick to the scope of work that was coordinated in your standup meeting for the day and let your coworkers know if you need to go outside of that scope. Be conscientious.
(Complete aside: try to do trunk-based development in a Perforce code base and you will learn a lot about reducing the batch size of your commits and communicating the scope of code changes. Perforce requires you to be team oriented when developing.)
I have been doing this for years. I have my team push changes directly to master (they have the option to use a feature branch and code review if they feel it is necessary, of course). Once per release cycle we meet, I put up the diff of all changes since the last release, and we review every change going out as a group: we talk about what is being changed and why, and people can explain their changes. Sometimes something needs to be fixed, and we can fix it right there.
- The quality of these reviews is better than any code review I’ve seen on feature branch reviews
- We review the whole collection of changes going out, on rare occasions when two changes conflict, you catch these
- Everyone keeps up to date with what is changing in the code base and why
- If for whatever reason there’s an issue after the deployment, it’s easier to fix because everyone has fresh in their head what has changed in this release
I have the feeling this was more common when people did "releases", the age when subversion was still a thing.
Even suggesting that waiting days or even weeks to put something in production makes you feel like a caveman in an age when we are benchmarked by the time from edit to production (or heaven forbid, by the number of merges to production per day).
When I get a dump on my screen and have to communicate asynchronously and with in-line emojis (sigh) that leads to protectiveness. When we sit as a group it is much easier to get a common understanding of why and how we do things (it's not personal that you put your semicolons so differently from us). There's also at least the feeling that we have the possibility to learn from one another.
I still think it is a superior way of doing review. Simultaneously, as a group. Of all the things we do, review is what gains the most from high-bandwidth, in-person communication.
Yes, actually there are two small additional repos, but we only very occasionally change and review. I think it would be a bit more difficult with many repos. Perhaps you could make it work, but it’s quite convenient to pull up everything in a single diff in an IDE. On the other hand, I usually treat a repository as a “deployable unit“ more or less, so I guess releases would be scoped to a repository as well.
Also, I’ve done this with smaller teams, 5-6 people. If you had three or four times as many that might make for a long meeting.
Usually takes about two hours every two weeks, but I figure we’d have lost that time in offline code reviews anyway. I’ve had people point out that I have a small team. They have a point, I’m not sure how this would scale to a large team—but maybe if a team is too large to review their code together they could be split into smaller groups.
Also, I think these help replace what might have been other meetings as well. Sometimes product people like to sit in and listen so they know exactly what's being released--I don't think they'd ever participate or get much value from a normal code review.
I absolutely believe in this after working both ways. With feature branches I would waste time rebasing, resolving merge conflicts, to then come up with the perfect pile of commit messages before sharing with the team. Committing to master forces early collaboration & tighter feedback loops.
Edit: I don't know about post merge code reviews, that seems like a risky idea to me at scale
Early collaboration and tighter feedback loops are key. But you can make that all happen as a matter of culture, without needing committing to master as a forcing function.
1. For solo devs/founders, push directly to master. Iterate quickly to serve customers, focus on growth and being "not-dead" by default.[0]
2. The moment you have a 2nd dev working (most likely because you have some sense of product-market fit and some revenue growth), then create feature PRs off of master. Review apps on each feature branch (Heroku supports this easily).
3. 3-5 devs: Have a "develop" branch and PRs go into "develop". "develop" deploys to a staging app, which is tested, and if all is well, "develop" can be merged into master which deploys to prod.
4. > 5 devs: Then you can use the full Gitflow model, with develop/releases/master splits in your branches.
I find that the above works well to find the nice balance between productivity and risk-management. This also works nicely whether you are a consultant/services company working project by project with a client, or whether you're building a product startup. Doing the full Gitflow model as a solo dev is unproductive, and committing to master with a larger team is asking for disaster, especially if your app is critical to your customer business needs.
My favourite workflow is feature branches for about 1-2 days' work. If the feature takes longer, then split it up into multiple code / code review / merge-to-master iterations. Use a feature flag if required.
This keeps merge conflicts low and keeps screwing up master to a minimum.
That sounds nice, where I work there's a deeply ingrained problem of long lived feature branches. In that situation every little nudge towards earlier merges is valuable, so we call that merge target branch "develop", only to be less intimidating. Master formally exists, but it's just a dead bookmark pointing at the tag of the latest release, existing only to make develop appear more inviting.
This is what we do also. Monster PRs get glancing reviews and are risky and disruptive to everyone else's feature branches when they merge latest. So the smaller the better. Quicker integrations. Less mess. Less risk.
I think GitFlow is helpful for mobile app development. We cannot get into production every time we want; we are at the mercy of the app store review process. Also, if there is a bug in production it is very difficult to get a fix to all users once a version is deployed. I wonder if anyone successfully uses trunk-based development for mobile app development and could share their experience versus GitFlow (pros and cons).
Generally the app store review process is much less of a problem than it used to be, with happy path being 24-48 hours rather than 1 week+.
Of course everything is context dependent (team size, codebase size, testing maturity, frequency of release), but I guess we found GitFlow seemed to be better aligned with modelling the "versioned software" approach - i.e. with releases determined by features, everybody knows/cares about what is in version 2.1 etc, you might have multiple releases all on the go at the same time with development happening for 2.1.1 bug fixes, 2.2 minor features, and maybe master has moved on to stuff that will ship in 3.x etc.
Definitely with less frequent releases (App Store Review taking 1+ weeks and being unpredictable probably had greater influence on release frequency) and a large team, we ended up with releases feeling quite painful - there was a lot of pressure to land features close to the deadline because the next release might not be for a while, meaning there was more churn on release branches as people tried to stabilise things that were borderline "ready".
Git Flow's release branching model added overhead - people might forget to merge back to develop, deal with conflicts or semantic brokenness if you had different targeted fixes on the release branch to mainline development (e.g. let's just disable this functionality for now on release branch and push it to the next version, fix forward on develop, merge release branch back to develop -> now it's disabled there too, have to unwind).
We switched many years ago (probably after reading https://barro.github.io/2016/02/a-succesful-git-branching-mo... ) to a time-based continuous release model - periodically cut a release from master (there are lots of teams shipping mobile apps on a monthly, 2-weekly or even weekly cadence these days), with trunk-based development and cactus branching for releases. If we need to hotfix, we just go to the most recent release tag/branch and ship a fix based off that point - no merging back between branches. If you need to fix a bug in the release, it's on you to make sure an appropriate fix is applied in both places, e.g. through cherry-picking; there's no need to worry about merging anything back anywhere.
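A minimal sketch of that cactus-style hotfix path, assuming a release branch cut from main (branch names and the commit message are illustrative, not from the poster):

    # the fix lands on main first (trunk)
    git checkout main
    git commit -am "fix: guard against nil config"
    git push origin main

    # apply the same fix to the current release line; nothing ever merges back
    git checkout release/2.4
    git cherry-pick <sha-of-the-fix-on-main>
    git push origin release/2.4   # the release pipeline builds the hotfix from here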
Together with some other actions (cultural focus on reducing post-branch churn and investment in testing capability to gain confidence in release candidates faster, set in the context of a goal to increase release cadence; increased usage of feature flags; etc), we ended up significantly improving release frequency & predictability, which reduced the time for stakeholders to get changes shipped and visible to users from when they were "done" (especially valuable for small changes, of course).
Nobody ever really understood the utility of the "production" branch from GitFlow.
tl;dr GitFlow seemed to add overhead with little value; we switched to trunk based development and didn't look back.
>If we need to hotfix just need to go to the most recent release tag/branch and ship a fix based off that point, no merging back between branches
Thanks for sharing. Just out of curiosity, how do you tag the release that is in production? Is it automatic? I don't see how you can automate this step when you merge everything only to master.
The process we follow is actually very similar, but with the master/develop branches we can automate things more easily: if a commit is applied to master we know it needs to be tagged, since it should come from a release branch; if a commit lands on develop we create an alpha release; if a commit lands on an rc branch, a production release is submitted to the TestFlight/Play Store beta track for final tests before publishing. We automate all of this with a Jenkins pipeline. I am curious to see how this is done in a trunk-based branching model. Care to share?
I'm not sure I entirely follow the question. We don't attempt to have any tag in the repo itself representing what is "in production" - which itself is sort of a misnomer because of the iOS App Store 7 day phased release functionality anyway, plus of course lag in customers downloading updates onto their devices.
So if you need to patch something after a release branch point, there is inevitably a discussion dependent on where any in-flight release is. Has it started rolling? If so, how far along is it? Is it already completely rolled out? Are we halting the release or just following it immediately with a hotfix? Do we want the hotfix to go through phased release? We just have a separate dashboard tracking release status.
Jenkins-pipeline-wise it's very similar - land on main and it goes into the next nightly build. If you need to patch a release, submit your patch to the relevant release branch, and the corresponding beta/production pipelines will trigger. (Release branches are protected and require special approval to merge to.)
So when you need to do a hotfix, how do you know which commit to branch from if you do not tag the releases that were deployed to production?
To me the only additional complication of GitFlow over trunk-based development is the existence of the main branch, because you also create a release branch when you want to make a release. Getting rid of the main branch usually happens through tagging, but as I mentioned, the tagging is not easy to automate when you don't have an additional branch to drive your pipeline.
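For what it's worth, one hedged sketch of how tagging could hang off the release branches themselves rather than a master branch (the BRANCH_NAME variable and the release/* naming are assumptions about the CI setup, not something described in the thread):

    # runs in the pipeline triggered by pushes to release/* branches
    VERSION="${BRANCH_NAME#release/}"            # e.g. release/2.4 -> 2.4
    git tag -a "v${VERSION}" -m "Submitted to the stores"
    git push origin "v${VERSION}"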
> The more time that passes from the point a branch is broken off from develop to the time it is merged back in, the more opportunity there is for other branches to diverge in drastic ways.
When a release is made you should merge those changes into other branches too. Then merge-conflicts or behavioural changes are found, and fixed, earlier.
This can be tricky. I worked somewhere that did this and it ends up forcing some devs to deal with multiple conflicts a week while they're working on a branch, when they could instead keep an eye on what is going on in main and deal with the conflicts on their own schedule.
The problem here seems to be devs working on a branch for a whole week. If you break down tasks smaller than this and merge them separately then you end up with far fewer merge conflicts.
I'd put that under the 'Too Bad' category. If a branch is so far behind reality/master then any testing done on it is quite likely to be invalid, and it won't have any hotfixes that were deployed in the meantime.
I think there is something wrong with the basic idea of git and similar systems: anybody can change anything, and because everyone works in parallel, the changes they make can conflict. Therefore we have conflict resolution - but shouldn't we try to make it less likely for conflicts to arise in the first place?
Instead I think we need module-ownership and tools supporting that. At any given time every module should be assigned to an owner-programmer or small owner-team. Only they can change their modules. Others can request changes, or create their own copy of that module to modify, but not modify code owned by someone else willy-nilly.
If programmers cannot modify modules owned by others there will be no merge-conflicts, right?
I wonder why this kind of code-ownership approach isn't more widely practiced and why there doesn't seem to be much tool-support for it?
In the 90s, pretty much everywhere I worked used a locking source control system, where you'd have to lock a file before you could edit it. This meant no merge conflicts, but it did mean that you had to work quickly if you wanted to make changes to a file that was frequently edited.
ClearCase had elaborate rules to define ownership and permissions. I guess this feature creep was imposed by marketing the product to lawyers and paralegals first.
Every ClearCase install I know of is an awful legacy system superseded in practice by git, even in environments that work on a need-to-know basis.
Lawyers and paralegals just ditched version control altogether, surviving on Office365 document sharing and distributed editing.
My experience with VSS was pretty good but I'm thinking of something more radical. VSS etc. allow you to temporarily lock a file so no-one else can edit it while you have it locked out. I'm thinking that instead of temporary lock-outs we should have persistent module ownership. Only the owners can modify the code of their modules, and perhaps temporarily grant commit-rights for their module to others. Preferably the owner should have a deputy or two who would take over if the owner gets sick.
Super-user can grant and take away ownership to any module. Super-user should have a deputy or two as well.
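There is at least some tool support in this direction already - for example, GitHub's CODEOWNERS file can require an owner's review before changes to "their" paths are merged. A hedged sketch (paths, users and teams are made up):

    # .github/CODEOWNERS - the last matching pattern wins, so the catch-all goes first
    *             @acme/platform-leads
    /billing/     @alice @acme/billing-team   # module owner plus a deputy team
    /parser/      @bob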
Here's the metaphor: In New York City and all cities you have traffic lights and there are parking rules, and pedestrian crossings and bike-lanes and some streets are one-directional. What would happen if all the rules and traffic-lights were taken out? You could still get from A to B, but probably on average much slower because you would get stuck in traffic-jams much more often.
Perhaps counter-intuitively, creating rules that restrict how you can drive and park your car does not make you move slower; it makes traffic more efficient. A traffic jam is like a merge conflict: two cars merging onto the same narrow street from opposite directions. One of them has to back out. And then so do all the cars behind it. Not fun.
Git etc. are a bit like a city without traffic lights, one-directional streets or parking restrictions. Anybody can do whatever they want: branch and branch and merge and resolve conflicts. It gives you the impression of great flexibility and freedom, but so would a city without traffic rules. Yet all cities have realized they need to restrict what people do on their streets, to eliminate "traffic merge conflicts".
Now naturally you can say that your project has rules in place as to who can modify what code and when (do you?). But I think there should be tool-support for that and persistent module-ownership instead of "module communism" where everybody owns everything. When everybody owns everything no-one is responsible.
I'm not a big rules-guy but I think traffic lights do more good than bad.
We made something quite similar work rather well at $work. However, there is a somewhat subtle trick.
The staging/prod candidate is built from the master/main branch as it was 21 days ago, plus potentially a few cherry-picks.
This lets us commit to master fearlessly, and then the "T-21" has 21 days of completely predictable future that can only be changed by cherry-picks.
So we can run intense testing on "T-0", aka main, find any issues, and queue the fixes to be cherry-picked into T-21.
The bonus is that when the fixes "arrive" at T-21, the cherry-picks become redundant and stop being applied.
Thus, absent the bugs, the staging/prod code automatically converges to main/master over time.
And yes, we do the reviews of the commits that go into master - but from the T-21 point of view they are 21 days in the future! So there is ample time for any reaction.
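A hedged sketch of how the T-21 candidate might be assembled - the scripting below is my guess at the mechanics, not the poster's actual pipeline:

    # find the tip of main as it was 21 days ago and start the candidate there
    BASE=$(git rev-list -1 --before="21 days ago" origin/main)
    git checkout -B prod-candidate "$BASE"

    # layer on the approved fixes found on today's main
    git cherry-pick <sha-of-approved-fix>   # one per approved fix
    # once a fix is older than 21 days it is already in BASE, so its pick falls away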
Would folks be interested in a more detailed write-up?
We have 50 devs (including inexperienced juniors) and 1-3 releases per day from different teams. Pushing untested changes to master would be problematic because one team (maybe with a less important feature) could break or delay everything for the others, and then go figure out who broke what (it used to happen with trunk-based development, hence the switch to Git Flow). Sometimes there is a problem and we need to make a hotfix and redeploy - but how do you do that when 5 teams just pushed random untested crap to main?

Our feature branches don't diverge much: each team/release has its own staging environment (around 15 right now, IIRC), and it's a rule to refresh them with new changes from master every day. Yes, sometimes there are conflicts when two features are to be released on the same day (and it wasn't caught during one of the refreshes), but it doesn't happen often, because during PI planning we discuss possible interdependencies between teams/releases in advance to resolve problems long before the merge, so those conflicts tend to be trivial. All features are required to be split into smaller subreleases which shouldn't take more than a week or two, so there isn't enough time for branches to diverge anyway.

And what happens when business requirements change and a feature is cancelled - do you unmerge all of that? Sometimes priorities change and a very important client needs a feature to be released sooner, so we change the release plan accordingly - how do you do that when everyone has already pushed their untested, possibly broken stuff to master? It probably also depends on business needs; in our case we deal with statistics that drive our clients' decisions (who to fire or promote), so a lack of testing/review and unsupervised merges to master would be a disaster for our business.
Well, here is the workflow we use at our company:
Branch feature from master/main.
Keep the feature branch in sync with master/main (merging from master/main every day). This minimizes conflicts when we finally get to merge the feature branch to master.
Any refactor made in the feature branch may be cherry-picked to master at any time. This reduces the difference between the feature branch and master/main, resulting in less code to review.
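A hedged sketch of that daily routine (branch names are placeholders):

    # daily: pull master into the feature branch so the final merge stays small
    git checkout feature/my-feature
    git fetch origin
    git merge origin/master

    # a standalone refactor commit can go to master early to shrink the eventual diff
    git checkout master
    git cherry-pick <sha-of-refactor-commit>
    git push origin master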
1. Do all your development and testing on one branch (develop). Vet its HEAD commit.
2. Once that's good to go, merge that into a different branch (master), and deploy a completely different commit to production!
The vast majority of devs I have spoken to believe that, after merging develop into master, develop == master - and they have no controls in place to actually guarantee that.
& in case you're thinking "I thought they were?"
    * merge to master (master) # what shipped
    |\
    | * PR #184 (develop) # what was used by devs
Different commits == potentially different trees. (And, in a large enough company with enough commits and time, "potentially" drifts towards certainty.)
A good shop will deploy master somewhere sane like a QA env first… but still. It's brain-dead. Trunk-based development is simpler, less often crashes the minds of devs who can't be arsed to learn git, and what you test/dev == what you deploy.
As I understand it, in Git Flow, the result of the merge into main should be identical to the tree that was in develop, meaning there's no difference, it's just a matter of maintaining the appropriate history.
Under normal conditions this should be trivially true. Develop forked off from main, no commits on main, therefore the merge back into main has no changes on the first-parent side and is equal to the second parent. The two ways to screw this up are:
1. You did a hotfix release and didn't merge the hotfix back into develop properly, or
2. You made changes on a release branch and forgot to merge that back into develop.
In both cases, the fact that the results of merging develop into main produced a different tree than what was in develop (or rather, in the current release branch) should be a signal that you screwed up somewhere.
Having said all that, I'm willing to bet that 98% of places that do Git Flow don't actually have any checks to ensure that the tree on main is identical to the tree on the release branch. And this is because the model is actually rather complex, most people don't understand Git properly, and the accessible documentation about Git Flow doesn't even bother to mention the possibility that the merge into main could produce a different tree.
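If you did want such a check, a hedged one-liner sketch would be to compare the tree objects rather than the commits (branch names assumed):

    # after merging, the trees (not the commits) should match exactly
    [ "$(git rev-parse 'main^{tree}')" = "$(git rev-parse 'release/1.2^{tree}')" ] \
        && echo "trees identical" \
        || echo "main drifted from the release branch"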
Ah, so what they really learned is that Gitflow is garbage. Yeah, long lived working branches are an enormous pain in the ass. I’m not convinced they are ever worth the hassle. Certainly for small teams they are not.
I am highly doubtful of the “commit first, test and code review later” model, though. Maybe this works for a very small, tight team who are all very highly skilled and care a ton about engineering quality. But this model can fall apart quickly. Bad code blocks the whole team and everyone pays the cost. You eventually end up with someone babysitting the build and test process to keep it moving and then some bright mind asks why you don’t put this stuff before checkin.
Maybe it’s fine to put code reviews later in the process but I think I’d be worried about the frequency of changes that break the build. Is there some way to avoid that? Debugging failures only to discover that the build was broken (in the sense of a test failure, but also could be a JavaScript syntax error I suppose) underneath you is surely bad for velocity.
Certainly I think there are times when having multiple people pushing straight to a feature branch is good but I’d worry about a codebase where features touch so much of the same stuff that merges are so painful. But maybe OP’s environment is just alien to me and there are reasons for both of these things.
If you're using something that supports it (like GitHub), a good middle ground would be auto-merging PRs. Branch from main -> code -> PR w/ auto-merge -> move on to the next task. This gives you speedy merges to the main branch with the protection of the per-PR build checks.
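For example, a hedged sketch using the GitHub CLI (assumes the repo has auto-merge enabled and required checks configured):

    git checkout -b small-change main
    git commit -am "small change"
    git push -u origin small-change
    gh pr create --fill            # open the PR from the pushed branch
    gh pr merge --auto --squash    # it merges itself once required checks pass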
Obviously I don’t know much about the normal git process (the one I use is a little unusual) but what is the alternative to auto-merging PRs? Do people need to manually hit merge once tests are automatically run and code reviewed or is there some smaller set of people with the ability to merge? Do people ever do code review after a merge to make sure it worked properly?
In my experience you can't use feature flags for everything.
Also, if you are having so much trouble merging because of lots of changes and commits, that's a smell of bad design, or of a project so big that it must be broken up.
> With Trunk-Based Development (or Continuous Integration), developers are encouraged to push their code to the main branch frequently. Not just when a feature is finished, but every time there is new meaningful working code.
I'm a big fan of checking in small amounts of meaningful code and utilizing feature flags when necessary. Like others have mentioned tight feedback loops are key. I have never felt like I needed to ditch branches though. It honestly sounds like a nightmare to commit directly to main in a team setting. On the other hand, maybe it would force developers to think twice about what they're changing.
This is new to me as well. I am very curious what situations people are in where they don't/can't wait for a PR into master.
If you're reading this and you work this way, I'd be interested in hearing what the benefit of pushing straight to master is for you. Is it purely for velocity? And if so, why is velocity that important?
Coming back to my comment having finished the article in full.
I wonder how the author's code base will develop using a weekly reflection on the code committed instead of PRs. I'm not sure how well their process will handle someone's poor design decisions. My experience has been that poor decisions tend to stay in code once they are there, and especially as more things become dependent on those decisions. Not that reviews will catch everything, but that second set of eyes can go a long way.
I'm curious to know how your team avoids shooting itself in the foot with unreviewed code. As in, is there some process that keeps unmaintainable elements from sneaking in?
Fair enough. My experience with reviews has been they are quick quality checks, and at least for teams I have been on, were beneficial. I can see how they could become more of a ritual process than a beneficial one though, especially after seeing some other comments in the thread.
Surprising. We do a PR and review for nearly every block of work we commit. Most pass without comment. The ones that have comments usually have good suggestions, or actually catch an error.
I really feel like most of the processes we have in software development are unnecessary if not detrimental.
CI/CD is the best and most impactful thing in years and optimizing for it is the best you can do IMO. Anything that gets in the way of CI/CD you want to avoid. Anything that helps with CI/CD you want more of it.
Using that as a guide post, trunk-based development is better than feature branches. Kanban is better than Scrum. Monoliths vs Microservices is less clear cut, depends on how costly the monolith is to build vs how annoying all the services are to deploy.
Am I the only one who has worked on a project with multiple feature branches in parallel by different subteams, with non-trivial code merge conflicts (and data model conflicts), and upper management that constantly reshuffles the delivery dates for these features?
Plus maintenance branches to support previous versions, with a hotfix branch (and also backporting some newer features to older versions).
That might be unusual, but even GitFlow handles those kinds of deliveries very poorly.
What is wrapping all new features in a #ifdef NEW_FEATURE //code #endif called?
My process over the last 7 years working in game dev has slowly evolved to this: I'll ask a contractor to implement a feature on the main branch, but make sure that it is completely toggleable via a #define NEW_FEATURE.
I thought I was pretty clever until I read an article on HN a few years ago that this is exactly what Google does lol
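In other words, build-time feature flags: the same source builds with or without the feature depending on whether the guard is defined. A hedged sketch (the compiler invocation and file names are made up, matching the #ifdef NEW_FEATURE guard above):

    # feature compiled out by default
    cc -o game src/*.c
    # feature compiled in for an internal/staging build
    cc -DNEW_FEATURE -o game_beta src/*.c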
This approach works surprisingly well for most features, but care should be taken to remove those 'ifs' once the feature is stable/complete. Or, when implemented cleanly, they could also be left in place and used as long lived feature toggles that can be driven by configs (or user settings, ...).
The hairy part is when the new feature or change cuts across different parts of the code; given enough time, those 'ifs' pollute the codebase. The solution here is to be mindful of how the system grows, keep things isolated, and (perhaps counter-intuitively) favor duplication over shared logic/libraries - regardless of whether it's one big thing (i.e. a monolith) or multiple small things (different projects, even).
Yeah, and let me tell you, it's an absolute nightmare to get to work when you need two features that weren't written to compile at the same time.
I'm not saying it's necessarily a bad idea. Just that it isn't a replacement for abstraction. If you have more than a couple of #ifdef blocks per feature, you might be setting yourself up for future pain.
I think you are describing build-time feature flags or feature toggles. Similarly, this type of thing can be managed at runtime via integrations, or by implementing a database-backed version yourself.
All developers develop features; we then cherry-pick which features we want in the next release, create a release branch, merge in the features, test (and, if necessary, fix), then merge into main.
Well now you have to tell us why and how 600 people are sharing a repo! That’s a lot. Granted at Microsoft thousands of people worked in the same codebase. But in my recent experience we’ve generally been working on a repo per project, which typically maps to a small team.
> Well now you have to tell us why and how 600 people are sharing a repo
Any answer you could possibly get to this question will eventually boil down to project repos vs monorepo. There are pros and cons to each, which are more or less meaningless depending on the number of developers working in parallel.
I figured as much, and just wondered about the challenges of trunk based development with a large contributor pool. If so many teams are working in different subtrees of the monorepo, it seems like trunk based development could be just fine.
The idea of post-merge code review is horrifying to me. I guess this is the "move fast and break things" attitude in action.
Deploy now, catch bugs...sometime, maybe (TM)!