Hacker News new | past | comments | ask | show | jobs | submit login
Code review review is the manager’s job (hecate.co)
182 points by kiyanwang on Aug 15, 2018 | hide | past | favorite | 103 comments



The manager can do code review if they're a player-manager. If they're a people manager, it's a lot harder to review meaningfully. Managers are usually hired in to manage after the team grows (and splits), rather than being converted from developers, so they may not have a good idea about the code at all.

I'm more inclined to think that review is the responsibility of senior technical staff, since they'll have a lot more knowledge of the codebase, its evolution and future direction, potential gotchas, and various cultural dos and don'ts.

For sure, it's important to keep an eye on the code review culture, though. It's easy for code review to degenerate into nitpicking small things like whitespace, naming, coding conventions - these should be handled by tools instead. And of course if people are being assholes, they need to be pulled up on it.


The double "review" in the title is not an error. The manager's job isn't to do the code reviews. Their job is to review the code reviews and make sure that they are being done well and that review feedback is being given constructively.


You're right about the double review, but I think the argument still loosely applies even when removed one step.

For instance, I've worked somewhere with a rule that x% of code should be unit tested. In fact, that was the only rule applied in code reviews; anything else was a free-for-all between projects, including any other form of test coverage. And the rule was applied regardless of the size of the project, whether it was UI code, an I/O library, or a few lines of Tcl running on an F5.

If I was running a project and we didn't make the mark for unit testing it was usually something I could justify to a player (or ex-player) manager. A couple of others would call a full stop to development, pull everyone they could into a meeting and try and berate us for being substandard. Eventually someone with a technical background further up the chain would calm them down until it happened again.

I guess that's partly a function of them being poor managers, and I'm sure some less technical managers know where the line is, but I've come to consider anything about the tools and process to be the responsibility of the technical lead and their team, and outside the remit of management. If what the team produces is substandard, they need to fire everyone, have someone technical look at who to fire first, or make some smaller tweak, rather than have people who've never done the job pass judgement on the efficacy of the approach.

Of course, there isn't a binary divide between technical and non-technical and all projects are different so YMMV.


My first reaction based on the title was probably the same as a bunch of people here, but then I reread the title. I agree, it absolutely is the manager's job to make sure the process is positive.


It's also easy for code review to devolve into "this isn't quite the way I'd implement it; scrap all this work and start over"


In a lot of cases, I see variations of this when there was not enough up front thought put into the design (before writing code).

Slight tangent, but a good way to tackle that is to outline what you want to do (sketch the API you're going to fill in, etc.), and run that by a reviewer.
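Such an outline can be nothing more than signatures, docstrings, and stub bodies. A hypothetical Python sketch (all names here are invented for illustration):

```python
# Hypothetical API sketch circulated for review before implementation.
# All names (WeatherClient, fetch_forecast, is_dry) are invented.

class WeatherClient:
    """Thin client for some weather service; details deferred."""

    def __init__(self, base_url, timeout=5.0):
        self.base_url = base_url
        self.timeout = timeout

    def fetch_forecast(self, city, days=3):
        """Return a list of dicts, one per day: {'date', 'high', 'low'}."""
        raise NotImplementedError("to be filled in after review")

    def is_dry(self, day):
        """True if the given day's forecast has no precipitation."""
        raise NotImplementedError("to be filled in after review")
```

The reviewer can then argue about the shape of the interface while the cost of changing it is still near zero.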


In a dysfunctional enough culture there's always someone you didn't include who has a show-stopping objection.


That's not a code review. That's either a suggestion that the initial commit is 'wrong', or the reviewer is a jerk. Either way, this is the PM's fault: either there wasn't enough information to finish it right, or the reviewer is too nitpicky about useless stuff.


There's a really good code review guide on the Plaid blog that addresses this: https://blog.plaid.com/building-an-inclusive-code-review-cul....

Money quote: "Nothing should be surprising to a reviewer in a code review: discuss any significant design decisions before code is written. Code reviews are a time to iron out implementation details, not discuss major design or architecture decisions."


What if you use pull requests to prototype architectural change? My take is your pull request should include the context of the change and the sort of feedback you want.


I think that explicitly noting “this isn’t ready, but I’d like to discuss x,y,z” ought to cover it.


It was pretty common at my last job; seemed like every third PR got at least one request for a total or near-total rewrite, usually for completely non-functional changes without obvious benefits over the solution implemented.


I've been asked to make my solution more complicated and to add usage of things like the friend keyword to unit tests instead of testing a public static function.


And that's just one of the differences between a good coder and a good senior engineer.


> For sure, it's important to keep an eye on the code review culture, though. It's easy for code review to degenerate into nitpicking small things like whitespace, naming, coding conventions - these should be handled by tools instead.

I agree with this, but think there needs to be some balance. My experience has been that people generally add rules over time while rarely removing them, usually to quell disagreements without weighing the benefit of standardising on an answer against the cost of more rules.


At a place where I've worked, the people who make all the strategic decisions about what to work on don't (or can't) work on the code base. The steady stream of senior developers exiting the company suggests that this strategy isn't the best one. I think player-managers should arguably become the norm.


"Architect who doesn't code" is a terrible thing. I've designed projects and then turned them over to other skilled developers more than once, while I help them out with reviews and such. My knowledge is noticeably rusty in under a month and by month three I'm easily wrong at least as often as I'm right about what some other bit of code is doing. To even help them out on occasion requires a bit of re-orientation; I can't imagine trying to hand them fresh architecture requirements.

In fact, I've noticed it's kind of concerning when this doesn't happen; it means the engineers who took it over lack the confidence for some reason to make non-trivial changes to the code base and are just pecking at the sides. Unless the project is deliberately in maintenance mode, that's a problem. Could be people, could be me, could be them, could be technical issues, could be lots of things, but there's certainly some sort of problem there.


If your team has pure “people manager”, run!


Recent experience has taught me to think of that type of person as a delivery manager rather than a people manager, and so long as they understand the limits of their knowledge they can be invaluable.

I've worked with two people who were excellent in that role. They work with the rest of the business to clearly define priorities, and to ensure it understands what we can and can't deliver. When it comes to hard technical decisions they delegate to technical leads and developers, but they have enough engineering experience to intuit whether what they're hearing makes sense.

I've also worked with someone in the same position, but without the self-awareness to know where his knowledge ended and it was time to delegate. That was possibly the worst six months of work I've done, but thankfully he was eventually forced out by the board.


One of the best managers I ever worked for was a “people manager”. He knew his limitations and worked within them. He was reliant on the tech lead for technical knowledge, but between the two of them they made a pretty good team.

Some of the worst managers I’ve worked for have been technical people who didn’t really want to manage but got Peter Principled into management.


Not a pure people manager. Managers with a coding background are important: they won't swallow BS, can understand tradeoffs (e.g. refactoring vs. accepting tech debt to get a feature finished quickly), and can judge the reliability, or otherwise, of dev estimates.

But few strong developers are also strong people managers. The two attributes seldom overlap. And I think being good with people is more important in a manager than being good with code.


Of the companies I've worked at, my favorite setup was:

* people manager dealt with people issues

* technical reviews performed by senior technical people (under the same people manager, or across teams if no one else is at the same or higher level)

* technical review is a part of the people manager's management responsibility: is it done on time, is the reviewee satisfied with the content and criticism of the review, etc.

This prevented a lot of personal-opinion blight from influencing technical reviews; i.e., it kept subjective criticism out of the review score and focused instead on more objective metrics.

The time and place for subjective criticism and discussion is in planning and in-situ reviews (aka pull / merge requests), not formal reviews.

That's my experience at least; the particular managers I had at that company (3 over 7 years) also happened to be excellent people on the whole. A bad people manager is, of course, just as bad as a bad technical manager.


Management skills are orthogonal to technical skills.

People who are good technically are not necessarily good at management and should not be put in that position. They should however be put into a code-reviewing position.


Where I work, we separate the roles of Product Manager and Engineering Manager. The former focuses on roadmaps and priorities and works with the engineers directly while the latter focuses on people’s growth and career goals, and has almost no say on what people are working on or the handing out of tasks.

It works well. It’s nice to have someone to talk to and work with who isn’t invested in what you are working on today or riddled with conflicts of interest.


The author tried to be clever with the title, but failed to take into account how human perception works. The second "review" gets filtered out like that gorilla in the basketball video.


Oh my gosh...I didn't even realize the word repeats...


holy shit - got me too!


FWIW, I did catch it - at first I thought it was a typo, and after a few seconds realized it was intentional. It is a fitting title though, if a bit... smart.


Ya got me


Yeah, I see more projects with bad code review culture than good. There are many things that managers can do about it.

* Assign enough time for code reviews - otherwise developers will just care about their own code submissions and let reviews rot.

* Establish productive code review rules. E.g. always go from big picture to details, otherwise you'll waste time on details that you end up throwing away.

* Make reviewers co-responsible for the quality of code reviewed, otherwise instant acceptance for everything becomes attractive.


I just left a job with a truly toxic code review culture.

- managers would swoop in and leave incorrect criticisms, asserting plainly false things and, in one case, reviewing Python code as though it were JavaScript.

- devs would make conflicting requirements for improvement. Dev A demands change A. Dev B wants change B. A and B are incompatible and neither dev will back down. Management of course refused to help settle the argument.

- change requests completely unrelated to the PR being reviewed would get crammed into the PR, so every small code change could balloon into weeks of refactoring work.

- Oftentimes code review comments were made without ever reading the code under review, such as a comment of “Why aren’t you using library X?” sandwiched between invocations of library X.

The culture was clearly rewarding people for shitting on others' work instead of doing any work themselves. Getting even the simplest code changes through required endless meetings explaining the most basic facts about computers.

So glad to be gone from there.


>devs would make conflicting requirements for improvement. Dev A demands change A. Dev B wants change B. A and B are incompatible and neither dev will back down. Management of course refused to help settle the argument.

This was a problem in my previous job. Options I thought of:

1. Get a moderator. This is what we did. He did not just settle disputes - he did more than that. But it's good to have at least this role.

2. Have a total of 4 reviewers review the disputed section. If there is a majority, change it. If it's a 2-2 split, just leave it as is. (Team did not accept this).

3. Disputed code (whether changed or not) should have comments saying this vs the alternative was discussed in a code review. Why? Because if there isn't consensus, over time, people are going to keep changing those lines of code back and forth (some developer who was not involved in the review will decide to refactor - rinse and repeat).
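Option 3 might look like the following in practice. A Python sketch, where both the function and the dispute are invented for illustration:

```python
# Hypothetical example of option 3: leave a comment at the disputed
# lines recording what was decided, so the next developer to touch
# this code doesn't silently revert the outcome of the review.

def chunked(items, size):
    # Code review note: an itertools-based version was proposed and
    # rejected in favour of this explicit loop for readability.
    # Please revisit that discussion before rewriting.
    result = []
    for i in range(0, len(items), size):
        result.append(items[i:i + size])
    return result
```

The comment costs a couple of lines but saves the back-and-forth churn the parent describes.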


Some of these are really something else; others I have seen minor cases of. "While you are changing this code, why don't you also..." is sometimes okay. Conflicting requests from reviewers I have seen happen due to neglect from the reviewer most in charge of the code: if they cared, they'd have used their technical competence, or rank if necessary, to set the direction.


Not sure if management is right for #2. There should be some kind of tech lead in place to resolve technical conflicts; design by consensus doesn't work well.


Some of this stuff is bananas, and you're totally within your rights to say, "Please read this PR; I am using this library / this is Python." But if someone's asking me for a refactor, I say "let's make a separate ticket for that", and if multiple people ask me for conflicting things, it's totally fine to say "please figure this out and update this PR, or I'm ignoring both of you". I think you made a good call switching jobs, but sometimes a great job is worth dealing with some troublesome processes.


Has this become the norm in the assembly-line culture of agile development? At the last two gigs I heard about, this was baked into the process: projects were delayed weeks or months and missed deadlines.


I've mostly battled with 1 and 3, like they're two extremes on the spectrum.

In the first instance, I suspect there is blame culture or otherwise the developers previously had that experience and thus carried the baggage to the new job. I believe this is how you can get pull requests that either never get approved or merged (nobody wants their name attached to it in case they get called out). When it comes to not having time to review at all, I suspect that's the classic "I'm too busy" excuse, where if the whole team is too busy working independently then they can't ever expect to get anything to production without a serious effort each time.

In that instance the code review and lack of productivity is just a symptom of a lack of team dynamic.

The latter point is weird because I've had that and I didn't enjoy it at all. I could never tell if the approval was because the work was good or because it was a rubber stamp operation. Especially based on other feedback it felt like I was being put on a pedestal and I hated that I could basically do no wrong, even when I did because we merged and then I had a gut feeling that I fucked something up.

I'm not sure accountability would fix that particular manifestation of the problem.


> Establish productive code review rules. E.g. always go from big picture to details, otherwise you'll waste time on details that you end up throwing away.

Well, that's a nice rule. Another one I heard people say is that code reviews are not for arguing about architecture or design decisions. If we take those two rules together, code reviews are good neither for discussing the details nor for discussing the high-level picture. Which in my experience is about right.


Right. Code Review should be about catching mistakes. The high level design is already known, and the low level details are checked by a linter. Code Review just makes sure that you didn't accidentally introduce a bug and that the code has appropriate tests.


> Code Review should be about catching mistakes.

Overly-complicated code is a mistake. Maintainability matters almost as much as a passing unit test. You want your code reviews to catch meaningful things, but they often end up being about things like this (fixing inefficient small bits of code).

e.g.

    function isDry(weather) {
        if (typesUtil.isSunny(weather) ||
            (!typesUtil.isSunny(weather) && typeof weather.hasSnow === 'boolean' && !weather.hasSnow)) {
            return true;
        }
        return false;
    }
vs

    function isDry(weather) {
        return typesUtil.isSunny(weather) ||
            (typeof weather.hasSnow === 'boolean' && !weather.hasSnow);
    }
vs

    function isDry(weather) {
        var hasSnow = false;
        if (typeof weather.hasSnow === 'boolean') {
            hasSnow = weather.hasSnow;
        }
        return typesUtil.isSunny(weather) || !hasSnow;
    }


When you are dealing with an open source project, people will just show up with small and sometimes large changes. In that case, you may need to review design.

There is also "minor" stuff a linter won't catch, such as bad naming. IMO naming is actually quite important.


Those are all really good points.

I have a recollection of seeing a study once that found the best balance point of review time vs dev time but haven't found it again. I recall it being somewhere around 20-30% but now that I can't find it I'm questioning my recall.


I definitely overstated here, sorry. I found the excerpt I was remembering in my copy of Code Complete: "On a project that uses inspections for design and code, the inspections will take up about 10-15 percent of project budget and will typically reduce overall project cost."

So assuming the same payoff benefit (big if, but let's go with it) between PR style reviewing and more formal inspections, every engineer should be spending around four or five hours a week performing reviews to hit the sweet spot.


The company where I work has senior engineering staff whose main responsibility is to provide technical guidance. This includes doing code reviews and creating a good review culture by example. We're judged by the overall project outcome in the same way that the managers are.

Even with that structure I found it challenging to develop a good reviewing style that picked up the important stuff without getting bogged down in silly details that lead to arguments. Reading John Ousterhout's "A Philosophy of Software Design" was really helpful. He focuses on module decomposition, quality of public interfaces, and documentation among other things.


So often an ivory-tower group with gate-keeping authority over the code base can derail a project and frustrate developers to the point that they quit. Comments about module decomposition seem pointless to someone under the gun to deliver by a demo date. The poor developer is reviewed badly and their career suffers; they were essentially powerless to deliver on time because their null-pointer checking didn't appeal to some obstructionist reviewer.

So I won't submit to a committee review before my code is accepted. It's a recipe for anguish and failure.


What I'm describing is not a committee review any more than having Linus Torvalds review Linux kernel changes or Ben Pfaff review Open vSwitch changes. I can write code just fine myself, but it's more effective to focus on making others productive because the whole project gets more work done that way.

Module decomposition is usually something I try to pick off in design, but when accepting pull requests from outside it's a critical consideration. We're not blocking creativity; we're trying to ensure both that the code base remains high quality and that the design morphs appropriately over time as we meet new requirements. By making this a review point you can help create a culture where engineers think about it before submitting. That in turn makes it easier to justify dates to managers.


Before jumping in with your thoughts on how managers should or should not be involved in code review, note that this is about code review review. Reviewing the code review process, and reviewing the code reviews.


I think you've said the key thing here. Much of the HN conversation has drifted out of focus.

The best management I've seen was tapped into the pulse of the team and knew how people interacted and the problems we were struggling with. Keeping an ear to code reviews is no different than observing how your team interacts in person or in emails.


Hoping there will be a thought leader who will show us how to conduct code review, review, review.


His name was Andy Grove.


10 Agiles to Gryffindor!


I agree with this in principle. Gets a lot tougher in practice, though.

"Manager" really shouldn't be a job in tech, at least when it comes to individual teams. Management should be a skill, just like all the other dozens of skills tech teams manage in various ad-hoc ways.

Groups of teams play out differently, since you're really starting to work at the meta level. Even then, though, management is just one of a bunch of necessary skills. (At that point, however, it may be so primary a skill as to warrant a separate role.)


> Management should be a skill, just like all the other dozens of skills tech teams manage in various ad-hoc ways.

I would phrase it as "like all the other dozens of skills tech teams half-ass in various broken ways".


I think an important point in modern technology development is that to make anything of value happen, you constantly deal with dozens of various silos, each of which can be a lifelong specialty area. You have to learn to skim most of the time and be able to dive deep -- a bit -- as necessary.

We may be saying the same thing, but I like my way better :)


I agree that everyone needs some management skills, but I'm not convinced that most teams can be run without a dedicated manager.


Agreed. The same as most teams cannot be run without a dedicated DBA. The question is whether or not the DBA does other things, not whether that skill is needed or not.

If you've got six or seven people that need constant management, you're working at a burger joint, not a tech shop.

If I remember correctly, in the book "In the Plex", Google scaled out to thousands of developers without having a dedicated management slot. (This is a good time to mention that all generalizations are false, including this one)

In general, in the tech world I find the biggest supporters of "X should be its own job" are either the people who do X -- or train people to do X. Customers don't care one way or another. Neither should we. If we need an X, we get one.


> In general, in the tech world I find the biggest supporters of "X should be its own job" are either the people who do X -- or train people to do X

That strikes me as a characterization with more cynicism implied than necessary.

I, personally, routinely support job specialization, even outside my own specialty. Partly, this is bias/solidarity, but, partly, it's that, even though I'm happy doing Y, I don't want to end up spending 20% of my time doing X (and neither does almost anyone else, which they may not admit).

> Customers don't care one way or another. Neither should we. If we need an X, we get one.

Except that, no, "we" don't get one, not if we've trained ourselves, in the entire industry, to think X isn't its own job. Instead, we get a Z-that-can-also-do-some-X (but may, as I alluded to above, actually not want to do that X, or, as other comments in the thread suggest, can't do all of X).

This is especially noticeable among startup job postings over the last 5 years where Z is programmer and X is either manager or ops (including the sub-specialty of DBA).

How does someone even know they need an X if no X exists in their world? How we talk about things, especially to newer entrants into the industry, matters. We should care.


I apologize. I believe you may have mistaken brevity for cynicism. These are systemic problems; there are no bad actors and everybody means well.

I came to HN because it was supposed to be a place to learn about how startups work.

I'm still here years later. One of the things I learned is that you let the market pull your startup; you can't push it. In other words, when you're trying to make something people want, you don't know where it's going to lead you. Instead, you interact with the folks you're trying to help, and that interaction drives the things you do.

There is much wisdom here.

The other major thing I've learned in this area, from my own work with tech groups instead of HN, is that people like doing what people like doing. That is, given their druthers, people will always drift back to doing certain types of things. Like making reports? I'll bet you we can take you out of this job, put you somewhere else in a completely new surrounding -- and in a few years you'll be doing reports. It's what you like doing.

We tech folks are really smart people, so we've managed to take everything we've touched, split it into smaller pieces, and make them complicated. It's what we like doing. Over many years, this means that any significant tech effort contains dozens of skillsets, all of which can be very complex. This number continues increasing.

So with limited resources, as a small org or startup, you're faced with either letting the job drive out what skills you have to learn/use -- or not getting the work done. In larger companies, you can have the illusion that somehow you just need to grow enough experts. Small efforts can't do this.

>>How do someone even know they need an X if no X exists in their world?

Because X is a skill, not a role. If you transition from "jobs require these roles" to "jobs require these skills", and you let the problem drive your learning and adaptation, it all works out. If not, overhead and friction increase as the required number of people increases, quickly leading to very slow projects. Everybody gets a role where they do what they want, but it comes at the cost of focusing on the employees instead of the value.

We naturally tend to view tech development backwards from what it actually is. So good people, doing their best to help folks, end up creating interwoven problematic nomenclatures, policies, and systems. Hopefully that sounds less cynical.


> I apologize. I believe you may have mistaken brevity for cynicism. These are systemic problems. There are no bad actors and everybody means well.

I, too, apologize, as I didn't mean to suggest you were implying malice. I meant cynicism in its fairly strict/traditional sense of presuming action based primarily or exclusively on self-interest. Being self-interested doesn't automatically make someone a bad actor.

Thinking further, I would object even to the brevity, since it appears to dismiss (or oversimplify) the supporters' motivations.

> The other major thing I've learned in this area, from my own work with tech groups instead of HN, is that people like doing what people like doing. That is, given their druthers, people will always drift back to doing certain types of things.

I absolutely agree, and I also think that, more importantly, the converse applies, hence my remark about people not wanting to do X.

> we've managed to take everything we've touched, split it into smaller pieces, and make them complicated. It's what we like doing. Over many years, this means that any significant tech effort contains dozens of skillsets, all of which can be very complex. This number continues increasing.

I disagree. There seems to be little evidence of it. We may have constantly shifting sub-specialties based on currently popular tools, but, for example, programmers are still programmers.

I don't believe there was ever a time of Renaissance Man tech folks, where the same person could build a computer from silicon and then program it equally competently.

> Because X is a skill, not a role. If you transition from "jobs require these roles" to "jobs require these skills", and you let the problem drive your learning and adaptation, it all works out.

I don't get it, I'm afraid. Is there more to "skill, not a role" than semantics? Even the statement "jobs require these roles" sounds awkward, because it's just too many words: the job is the role, so no transition is needed. I'm pretty sure it's already understood that the job/role requires those skills but, more importantly, is primarily (or even exclusively) about those skills. For people who want to do X (as well as for those who want to avoid X, even if they possess some of the skill), that can be important.


My understanding of what a good manager's job is consists of largely managing the interface between the team doing the tech work, and the rest of the company: ensuring there's no external blockers, making sure that requirements are accurate, that realistic expectations are being communicated in both directions, etc.

I don't imagine it as someone whose job it is to ask me once an hour, "so how's it going, can you code any faster?"


Yep. Clearing obstacles and managing externals -- although once you give somebody the title there seems to be a lot more involved.

The good managers I've seen don't talk a lot. The best managers are developers who have a natural skill for the work and got voted into the job. They do that obstacle-clearing and coordination on an as-needed basis, working alongside everyone else the rest of the time.

The core of the problem is that tech development involves way more skills than there are people, no matter how you want to slice them. When you go for training or start putting folks in roles, the assumption becomes that your role, not the value-creation stream, is at the center of the org. There's an oversimplification and false focus that occurs that's antithetical to getting things done. So yes, that's the manager's job, but it rarely stays like that.


Management should be a skill, just like all the other dozens of skills tech teams manage in various ad-hoc ways.

You mean like how system administration has been downgraded to a skill that is tacked onto developers via DevOps?


>Code review, whether via a pull request or other means, still is one of the best bang for buck methods for finding defects in code.

I don't find that to be true. It's a good way of keeping code quality in check, which has an indirectly positive effect on catching defects (as well as a number of other positive effects), but it's not a good way of catching defects itself.


Author here. The post is largely written from a point of personal experience and philosophy but that's the one statement I'm comfortable saying has empirical backing.

I link to https://blog.codinghorror.com/code-reviews-just-do-it/ in the post which has an excerpt from code complete:

> … software testing alone has limited effectiveness – the average defect detection rate is only 25 percent for unit testing, 35 percent for function testing, and 45 percent for integration testing. In contrast, the average effectiveness of design and code inspections are 55 and 60 percent.


As a firm believer in TDD and pair programming, I would caution you that since the ultimate goal, programming productivity, is unmeasurable, defect rate could also just be a measure of how much slower those practices are.

I don't think that's the case, but really we have no way to prove it. Just a word of caution if you are ever trying to convince someone: it's best to be up front about the possible flaws of those studies.


My experience is that having a really great design eliminates a lot of bugs (60% is perhaps not unreasonable, although pretty hard to measure), but having an awesome design is not even close to being the same thing as doing regular code reviews.

I'm often thrown in to a project with bad to mediocre design and I am not able to just change the design to become "good" just by reviewing PRs. I can only attempt to push the design into a slightly better direction over a long period of time.

Similarly, just because it's reviewed doesn't mean that the design is great.

I am probably able to hit something approaching 45% defect detection rate with integration testing on most projects though.

With code reviews I'd say it's about 1 or 2%. Bug-wise, I usually only spot fairly obvious "language" gotchas (e.g. initializing an empty list in a method signature in python).


How do these benefits compare to what you get from pair programming? If the pair works well together I think pairing provides more learning and better quality.


Code review is (from my point of view) a way to have a second or third person look at the code from a distance, in his own time, with less context than the author. When I write code I'm far into it and stop seeing the bigger picture. When you pair program, I think the same thing happens. Finally there's the end product, which can be looked at from a distance by a reviewer.

While yes, pair programming is preferred over just pushing to master, I think code reviews (and moreover formal code reviews) are better than pair programming for the final OK.


How much time do you take for a code review? If I did a full review that really tries to understand the code it would take me close to the time it took to write the code. But I don't get that time. So I pretty much nitpick on some style issues but rarely find any real problems.


That's just it, code review done right takes time. If the author spent four hours on a change, I might spend an hour just reviewing it. You have to understand the problem and the solution just as well as the author.


I'm not super across the evidence on pair programming so I'd hate to try and have a definitive answer. https://www.cs.utexas.edu/users/mckinley/305j/pair-hcs-2006.... suggests it's pretty good for quality and so-so for utilisation.

My anecdotal experience with my own teams is relying too much on pairing as a training mechanism, especially between senior and junior, has a couple of issues. The first is the tradeoff between training quality and utilisation - it may shorten the training cycle but the payoff isn't high enough for the output constraint on the more senior member. The second is I've sometimes had trouble weaning the junior off always having guidance on tap and it takes them longer to develop confidence to break through certain problems alone.


You get different things from pair programming compared to post-completion code review. They share some benefits and to some extent can compensate for the lack of each other, but ideally you want both.

If you're reviewing someone's code at the code review stage, you can only do so much. If it's working, tested code then asking for major changes is either pointless or very costly; there's little point in suggesting a better way of approaching a problem if it requires major surgery. Unless you spot a potential bug or something that will cause major problems, you're realistically restricted to suggesting minor changes like style suggestions or better names for things.

With pair programming you have two people collaborating on a design from the start where suggestions for better ways of doing things are much easier to incorporate. If pair programming for a complete feature is either not feasible or undesirable for some reason it is at least often worth pairing to start a feature off.

Code review on the other hand also has some benefit in getting a fresh pair of eyes on a piece of code to spot obvious mistakes that people elbow-deep in it have glossed over or find hard to see, such as left-in debugging code or user credentials.


I think code review is the async way to share thoughts about the code. Managers (in my experience) tend to be busy with many meetings, which makes 1-on-1 code review more difficult, so in this case a pull request might not be the best option, but it is a good one.


Are there any field studies that demonstrate the impact and efficiency of these practices? Is there a body of evidence to support the assertions that are made in this post?


Hi - author here.

The quality assertions are the most defensible empirically. I wrote the post at the office away from my bookshelf which was the main thing keeping me from including them up front (flicking through my old copy of code complete now). Capers Jones' "Software Defect-Removal Efficiency" (https://ieeexplore.ieee.org/document/488361/) would be the main one on the effectiveness rate of code inspections. I've collected a few other relevant papers over at https://github.com/joho/awesome-code-review

The second two themes around education and culture come more from my personal perspective and experience - largely from running the engineering teams at Envato and 99designs respectively. My feeling is the educational and safety elements of code review have come up online a lot more over the past couple of years - but that's just my anecdotal view.


Some good points - many thanks.

I think that there is a market failure / exec education issue around these points though. The impact of the improvements measured/evidenced is in the maintenance cost of the delivered artefact and the improvement of the process for future delivery. These are "over the fence" issues for the owners of the delivery shop. We need to get it into the bosses' minds that they are paying for things that will be either good in the long term or expensive in the long term - I think that there is a lot of hiding and hedging and shifting of burden (to users) that goes on around that.


I agree, and trying to cover that education gap is what I'm trying to achieve with the post (and the product behind it).


Unfortunately, productivity is always context dependent.


Here's the thing, there's no way to measure programming productivity at all, so, no, there's no valid studies about anything ever. The studies that try always suffer from this inherent flaw. They count bugs, lines of code, function points, all metrics that aren't actual business value.

Pair programming vs code review, Nodejs vs Ruby, React vs Vue, Vim vs Atom, etc etc etc are all 100% emotional and subjective decisions. Therefore they are subject to fashion, being cool, and herd mentality.

Accepting this is an essential step on the path to become a well-rounded engineer.


That's not really true. A better formulation is that there's no way to measure programming productivity cheaply and/or non-intrusively.

I agree that the bulk of X vs Y comparisons are emotive and subject to what Alan Kay calls the "pop culture" but that in no way demonstrates that measurement is out of reach. It's just in most contexts we don't care to do the work to measure it out of personal preference or economic reality.


Good point, productivity measurement is probably possible, but so vastly expensive that it's not happened yet.

I really like this post and the one about managers programming btw. Your common sense explanation to why most new managers end up coming back and reporting that they shouldn't code is exactly what I've seen as well.


There was an old Tom DeMarco article I'm a fan of: http://www.systemsguild.com/pdfs/DeMarcoNov2011.pdf

In it he has a pretty interesting insight about late projects. "If a project offered a value of 10 times its estimated cost, no one would care if the actual cost to get it done were double the estimate. On the other hand, if expected value were only 10 percent greater than expected cost, lateness would be a disaster."

I imagine as our field matures measuring productivity and hitting estimates will get more important once software has finished eating the world. The unlimited (or close to) upside dev projects will be largely behind us and we'll need to hit tighter economic targets. I'm just glad I'm working in the era where I don't need to measure it :D

Glad you like the posts. Thank you!


You are the first person I've talked to who also thinks the "eating the world" will come to a natural end! I think the natural conclusion is that in X years there will be only a fraction of the number of working programmers as today. The margins will get tighter, and the gains smaller.

It is sad in a way, but also exciting as you say to be in the golden age.


Mind you, the maintenance burden and the dependency of the users on the legacy will be ever larger. Also - as things are digitized the opportunity for more and more digitization grows.


I think that DeMarco is wrong about that already in the corporate world. The budgets and schedules that I work with are organised around knowing what is going to arrive when. Crudely, 10x -> 5x means that 5x is not available for the next round of projects, which means that 5 projects get canned. Any project running 2x will result in a big change of "staff responsibility", basically because no one trusts anyone involved any more.


This is a really interesting read. Thanks. I have found in my own experience that developers tend to include the same people over and over again in their PRs and what ends up happening is people tend to just "trust" what the submitter has committed. This often leads to PRs that are almost blindly approved. By involving management to keep the PRs as a tool for discussion (social and technical), I think this could be avoided. It's probably also a good idea to mix up the reviewers (a healthy mix of people familiar with the code and those not familiar with it).
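One lightweight way to enforce that mix, sketched in Python — the function, parameters, and team members are all hypothetical:

```python
import random

def pick_reviewers(author, familiar, unfamiliar, n_familiar=1, n_unfamiliar=1):
    """Pick a mix of reviewers who know the code and those who don't,
    always excluding the PR author."""
    familiar = [p for p in familiar if p != author]
    unfamiliar = [p for p in unfamiliar if p != author]
    return random.sample(familiar, n_familiar) + random.sample(unfamiliar, n_unfamiliar)

# Example: rotate one code owner and one fresh pair of eyes onto each PR.
pick_reviewers("alice", familiar=["alice", "bob"], unfamiliar=["carol", "dave"])
```

In practice you'd pull the "familiar" set from ownership metadata (e.g. a CODEOWNERS-style file) rather than a hard-coded list, but the point is that the rotation is random, so the same reviewer pairs stop forming out of habit.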


But is the code review review review the manager’s manager’s job?


Of course it is it is


-1 Moving the evolution of a group's code into the hands of the person who works with it the least is an anti-pattern. Driving change from within a team can be facilitated by a manager who reads the group's code, but any manager who thinks they are going to be the judge of what the team is achieving is deluded and overly enthusiastic about their authority.


Post author here - logging off to go to bed because it's bedtime down under. Will pop by again in the morning.

Thanks for reading and all your comments.


A manager sitting in meetings all day will be poor at responding to review requests. It's the same reason they make crappy on-calls.


Please ignore the advice in that article. It's hinged on the false idea that a manager always knows best.

Imagine for a second that the manager is wrong in his/her judgement. Now the developer has to choose between arguing with the manager and risk being viewed as insubordinate or just going along with the feedback and letting the software suffer the consequences.


> It's hinged on the false idea that a manager always knows best.

No it's not. It's hinged on the idea that the manager's job is to teach his people with the knowledge and experience he's acquired. None of this suggests he's infallible, nor does the article say as much at any point.

> Now the developer has to choose between arguing with the manager and risk being viewed as insubordinate or just going along with the feedback and letting the software suffer the consequences.

If the culture is such a rigid hierarchy, they're not going to be able to deal with a senior engineer any better.

And when you have subordinates this cultural problem is easy to fix. They'll invariably think you're wrong, and if they're reluctant to say so, you just tell them, "if you think I'm wrong, it's your job to say so, so out with it."


How is an un-self-aware manager ever going to be effective at anything?

The author notes your objection, and addresses it:

"There’s a fine line to be walked between managing the pull request process and micro-managing it. What happens within any specific pull request is the domain of the team so you should resist the urge to intervene frequently within the process itself. The behavioural trends in the process are your domain and if you’re running a meta-review process well your work will be predominantly behind the scenes."


> meta review

Is "meta review" really a value add for software companies? Is this advice for Managers of peace-time companies who can't find anything more valuable to do with their time?


I've seen pull request interactions where a moderating authority would have made all the difference.

We mostly accept the need these days to moderate other online social behavior, like on HN, to reduce potential damage from ego/politics. Hopefully the need is lower in a professional environment like SD, but I could still see it being useful when things go off the rails.


I suspect manager here does not mean high level manager, but someone who has at most a dozen or two people under them.

And yes, effective culture around PRs seems valuable. My current team seems to have very inconsistent expectations, and would benefit from someone with authority steering people in the right direction.


This is a very good article.


On my team we're all managers. We manage ourselves and our output, and we manage our team, the team's output, and each other by supporting and assisting each other. So from that standpoint I agree. I don't see the need for dedicated managers, though I do value our dedicated scrum/agile coach, who encourages all of that.


This is a terrible idea because it will lead to a more hierarchical corporate environment. My manager has been coding for 15 years in the embedded space, but he is rarely on my PRs (unless we happen to be working on an overlapping feature).

At my company, PRs will automerge if at least one reviewer with write access approves and CI checks pass. This is the best way IMO. It’s fast, too.
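That merge rule can be stated as a tiny predicate. This is a hypothetical sketch of the policy described, not any particular platform's API:

```python
def should_automerge(pr):
    """Hypothetical sketch of the rule described above: auto-merge when at
    least one reviewer with write access approves and all CI checks pass."""
    approvals = [r for r in pr["reviews"]
                 if r["state"] == "approved" and r["write_access"]]
    return len(approvals) >= 1 and all(c == "passed" for c in pr["checks"])

pr = {"reviews": [{"state": "approved", "write_access": True}],
      "checks": ["passed", "passed"]}
should_automerge(pr)  # True
```

The manager's leverage here is in setting the two thresholds (who has write access, which checks are required), not in sitting on individual PRs.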


No, it's not. Code review is the team's job, including the manager.


It says code review review, not code review.


This is fine if the manager is also shipping code. If the manager is not shipping code then it's really not cool to do code review. You won't earn respect that way.



