
A conundrum for me is how to get other people to review code the way I want to be reviewed. Particularly, I noticed code reviewers on my team are pretty pedantic, obsessed with correctness, and need to have it explained to them why each change is okay. These are people that regularly write good quality code themselves, but there is a high amount of distrust. Why doesn't a team of talented programmers trust each other?

(In case it needs to be stated: to date, I have reviewed about as much code as I have written, upwards of 100k lines, as have most of the other people on my team. We aren't amateurs, but it often seems like we're treated like babies.)




>obsessed with correctness, and need to have it explained to them why each change is okay.

Isn't this the point of code review? When I click accept on a code review, I am saying that I have looked over the change and believe that it is correct and okay. If I just arrive at that conclusion by saying "Joe wrote this, and I trust Joe", then there is no point in me reviewing it.


It continues with "and need to have it explained to them why each change is okay". If the reviewer suspects there is a bug, the reviewer should check for it instead of having the reviewee explain or defend every detail. Done this way, it is micromanagement - not just something similar to or kind of like micromanagement, but literally micromanagement. If every five-line change in a run-of-the-mill frontend requires a two-hour negotiation, then something is wrong.

When the code is unreadable and it is hard to see whether it works, that is a different thing, but then the complaint in the review should be some specific variant of "hard to read".

I review mostly for architectural compliance and "bad idioms" or code smells. There is a difference between "not how I would write it" and "a badly written mess", and many programmers confuse the two.


>If the reviewer suspects there is a bug, the reviewer should check for it instead of having the reviewee explain or defend every detail.

It is not a matter of whether the reviewer "suspects there is a bug", but rather a matter of whether the reviewer is convinced that there is not a bug.

If the reviewer needs to have someone explain why every minor code change is correct, they are either lazy or incompetent (or the code is badly written). Further, it is generally a bad idea to have the person who wrote the code explain why it is correct, as you are then much more likely to make the same mistake the original author made. However, being "okay" often goes beyond correctness, and into business decisions. Oftentimes, these business decisions are not documented (or if they are documented, it is in some management document that is not linked to from the code), so in order to review the change, the reviewer has to ask the author what business consideration led to it.


And other times, the reviewer is doing that to look detail-oriented and responsible - so that everybody sees how great and attentive he is. That is the opposite of laziness, but it is still unfair to the reviewee.

It is similar with being petty in code review - sometimes people do it because they think it makes them look like better coders.

As for checking the business case, yes, sometimes you have to do it. But there is also such a thing as doing it too often and second-guessing every requirement you see during code review. This has a trickle-down effect on analysts, managers, or even customers (one got pissed after a while of this - true story), who then have to litigate and defend every detail and resolve programmers' conflicts over insignificant details.


> However, being "okay" often goes beyond correctness, and into business decisions.

Code reviews occur far too late in the development process for an uninvolved developer to provide a good, timely review of “bigger picture” concerns like business consideration (or software design) simply because the reviewing developer needs time to come up to speed on why decisions were made. In these cases it’s better to have the reviewing developer involved in a review of business requirements before the code is written. The code review can then stay focused on code.


Imo, business reasonableness should be reviewed after the coder has finished the work, but not so much by a programmer. It should be done by an analyst, project manager, or tester who knows a lot about the business - basically, by the person who communicated with the customer or is responsible for the overall vision.


> I review mostly for architectural compliance and "bad idioms" or code smells. There is a difference between "not how I would write it" and "a badly written mess", and many programmers confuse the two.

This ^ I just had this discussion last week in a PR: if a review comment is a matter of opinion, the whole team should agree and put a rule in the linter or style guide. If not, everyone is free to have their own preferences.


Proving correctness is not the point of a code review. In fact it would be difficult to make such a proof in sufficiently complex software. Functional correctness is typically "proven" by tests.

A code review ensures that the non-functional quality of the code is high: i.e. that the code is understandable and maintainable by someone other than the programmer himself, that there are no anti-patterns or dangerous-but-correct usages of language features, that the implementation fits the intent of the specification, etc.


Disagree. A code review is not purely checking for non-functional properties of the code; it's checking for overall code quality, and that includes bugs.

Occasionally a reviewer will spot a bug. Occasionally there will be a false positive that turns out not to be a bug.

You really want to deliberately discard this bug-finding opportunity? Why?

Even if it's a false positive, doesn't that indicate that something bears commenting?


> Occasionally a reviewer will spot a bug. Occasionally there will be a false positive that turns out not to be a bug.

And most often of all, while explaining what they've done, the reviewee will say "and this does... oh crap, wait" and fix the mistake themselves. Among all their other virtues, a code reviewer is a level 2 debugging duck. (Although all of my code review experience is with buddy check-in style reviewing rather than sending it off to be reviewed. I can understand endless nitpicky questioning being annoying if you're doing it offline.)


This "oh crap" effect is one of the reasons I like "self review" as the first step in a code review process.

That is, go over the diff and add commentary explaining to the reviewer why you made particular changes (not explaining what the changes do; that should be in comments in the source).


GitHub supports posting comments to a commit. Might be a good fit.


> Occasionally a reviewer will spot a bug

Isn't the code review generally pretty late, i.e. just prior to release? At that point, the code should be passing all unit tests and I'd expect obvious bugs to be pretty unlikely. Non-obvious bugs generally won't be spotted in a code review setting.


It's not impossible that the reviewer might spot a bug. There's value in more eyeballs. They're useful for spotting dangerous constructs, or things like UB in C/C++ that appear to work fine... for now.
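
A minimal sketch of the kind of thing I mean (a hypothetical example): a signed-overflow check that appears to work on every build you try, but is undefined behaviour and can legally be optimized away later.

    #include <limits.h>

    // This "wraparound check" relies on signed overflow, which is
    // undefined behaviour in C and C++. It often appears to work at -O0,
    // but an optimizer may assume x + 1 never overflows and fold the
    // whole test to false.
    bool will_overflow(int x) {
        return x + 1 < x;   // UB when x == INT_MAX
    }

    // A well-defined alternative a reviewer might suggest instead:
    bool will_overflow_safe(int x) {
        return x == INT_MAX;
    }

This is exactly the sort of bug tests rarely catch, because it keeps "passing" right up until a compiler or flag change.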

Also, lots of software houses don't have formal testing of all new code. Not unusual with GUI code, for instance.


Not sure what you mean by 'late'. I guess it all depends on the process defined for your project. Most SCM systems have some way to make code visible before it is published to the live code base (via branches, etc.).

Also, passing tests is meaningless if the tests are worthless or incomplete. So those alone can't be used as a metric of code sanity, meaning obvious bugs are still possible.

Tests and reviews are just there to reduce the number of released bugs and hopefully increase maintainability.

Also, what's obvious to one developer might not be to another.


> Proving correctness is not the point of a code review.

> A code review ensures that [...] the implementation fits the intent of the specification.

I don't see how both of these can be true.

> Functional correctness is typically "proven" by tests.

You can (and, in my view, should) review tests, too.


I'm an overly pedantic reviewer because I feel the review is one of the few places where I can counteract the accumulation of technical debt. Once the less than ideal code is in the codebase it's there to stay. I also ask a lot of pretty dumb questions during a review because I want to make sure that my understanding of the requirements matches the understanding the other engineer had.


>I'm an overly pedantic reviewer because I feel the review is one of the few places where I can counteract the accumulation of technical debt.

Being pedantic is the wrong approach to technical debt, though. Almost every aspect of technical debt is about trade-offs.

If what you're doing appears pedantic, it should come accompanied by an explanation of why the trade-off the developer took is not ideal.

>I also ask a lot of pretty dumb questions during a review because I want to make sure that my understanding of the requirements matches the understanding the other engineer had.

Your code should really come accompanied by tests which you can read and use to understand what the precise interpretation of those requirements was.
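
A made-up example of what I mean (names and the requirement are invented): the boundary assertions tell the reviewer exactly which reading of the requirement the author chose, without anyone having to ask.

    #include <cassert>

    // Hypothetical requirement: "orders of 100.00 or more get a 10% discount".
    // Amounts are in integer cents. The tests pin down the interpretation
    // that the threshold itself qualifies for the discount.
    long discounted_total_cents(long total_cents) {
        return total_cents >= 10000 ? total_cents * 9 / 10 : total_cents;
    }

    int main() {
        assert(discounted_total_cents(9999)  == 9999);  // just below threshold: no discount
        assert(discounted_total_cents(10000) == 9000);  // at threshold: discount applies
        return 0;
    }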


> Once the less than ideal code is in the codebase it's there to stay.

That is not an objective fact across projects. Where it is true, it means either the process or the culture is bad.


I've never seen a project where you go back to fix something just because its implementation is not ideal. If it works it stays, because there is always something more important to do.


I do that regularly. Not really when it is merely less than ideal - I don't think less than ideal necessarily constitutes technical debt - but when it was hard to read and I just had to read it, or hard to modify and I had to modify it, or when I was modifying something nearby and the fix came for free. We also have occasional actions like "remove this particular code smell everywhere you find it" and explicit "refactor that part" issues that are routinely assigned as part of sprints.

Obviously, we do weigh how much time it takes to fix against how much of a debt it actually is. We would not have that much independence in these decisions if we did not.


This, this and this.

You get told to quickly write something as a proof of concept. Then you get told it works, so put it in the code and release it; we will rewrite it later.

You never get to rewrite it later.

Over time the code base goes to hell. Every single time.


I have a hard time believing you never refactor a function you come across. I see how refactoring the whole architecture or something major gets avoided, but when it comes to smaller functions and pieces of code, refactoring as you go takes around the same amount of time as refactoring during that detailed code review (i.e. often very little).

The really big messes tend to be emergent - when the features and the code base have grown too much for the original architecture over time. You can't prevent that one with detailed code review, and it takes a lot of effort to fix.


At my last job, if you had unnecessary changes for the work item (e.g. refactoring), you failed the code review.

But then, we had 5% unit test coverage on a 5-million+ LOC code base; they were terrified that refactoring (with no tests to back it up) might break things.

We were writing medical software and they were very risk averse (obviously), so if I refactored a method I would have to raise that, which meant test cases would have to be written and manually run to ensure I hadn't broken anything.

So refactoring hardly ever happened (the code was an absolute mess). When I left, they were working on a project to fix it, but the timescales for it were 5 to 10 years.


Not exactly. In a lot of fast paced environments it’s rare to get a chance to rewrite something that functions properly just because it could be labeled as technical debt. It’s not about culture, it’s about moving on to the next thing. I code review hard to make sure we have the smallest amount of technical debt possible because it’s usually there to stay for a considerable amount of time.


That would fall under "culture".


Exactly, just sounds like good review practices to me.


If you can't see why a line of code is okay just by looking at it, doesn't that mean that it might need a comment or something explaining why it is?

As a corollary, when I ask a question in code review, I usually don't want it to be answered there - I'd prefer it to be answered in the code, if need be using a comment.
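
A small hypothetical sketch of the difference (the service, helper, and numbers are made up): the comment answers the "why retry three times?" question a reviewer would otherwise have to ask, and keeps the answer next to the code.

    #include <chrono>
    #include <thread>

    bool try_activate_license();  // hypothetical helper, defined elsewhere

    void activate_with_retry() {
        // Why, not what: the upstream licensing service drops a small
        // percentage of requests under load, and its team asked us to
        // retry rather than fail the checkout. Three attempts with a
        // short pause was their recommendation.
        for (int attempt = 0; attempt < 3; ++attempt) {
            if (try_activate_license()) return;
            std::this_thread::sleep_for(std::chrono::milliseconds(200));
        }
    }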


Exactly, because you don't want every future reader of the code to have the same question.

This is why I hate implicit assumptions. If I need some special knowledge or some special assumption that is nowhere in the immediate vicinity of the code, then maybe your approach to the problem is flawed. Sure, it'll work, but good luck to future coders working with it.


I mostly agree, with a small twist: sometimes the question is about the code; sometimes it is about the change. The code should be described in the file; the change should be described in the review.

This discussion makes me wonder about more tightly connecting the code with past reviews. What if, when reading some code, you had access not only to the revision history, but also to the review history around the code in question? Something along the lines of storing reviews in revision control metadata.


it took me a couple of teams and some patient mentoring to get the picture. if everyone is playing along, it's a total joy.

when posting the review, you try to eliminate as much of the cruft as possible.

when reviewing code, sit down, you're actually going to read it for understanding.

your mutual goal is to improve code and product quality. if after a few passes something seems wrong or unclear, you raise it. now the two of you get to have a pleasant discussion about what the right thing to do is. maybe some rework. maybe just a comment about unhandled error conditions. maybe nothing.

if someone raises an issue that's not important either way, just change it to save time. and don't raise something unless you think it's important. so, if you both think it's important, you can have a discussion and reach consensus.

i think that's the real problem: somewhere along the way the consensus culture got lost. that was a really important thing.


Too many companies have "standards" that dictate exactly how things like code formatting should look, and are much weaker about good design. So reviewers look for problems where the light is: standards violations are easy to see and point out, whereas more important problems tend to be more nebulous and subjective.

I would take the position that written code standards enforced by humans should not be allowed. Either a given standard is important, in which case you enforce it automatically, or it isn't, in which case you drop it. That frees up code review to focus on more important issues.


>A conundrum for me is how to get other people to review code the way I want to be reviewed.

Write a document which describes how you want code to be reviewed, share it with the team and then lobby for everybody to agree upon it / sign it during retrospectives?

I think it's better to develop a strong rapport with members on your team and only join teams with strong shared values about what constitutes good code, but in lieu of that you could do worse than to draw up a shared written agreement about what constitutes good code (and iterate upon it).


I've had this pedantic-ness in the past. I've noticed the obsessive-compulsive behaviour of being attached to code. People I had to work with cared more about having every for loop contain ++i instead of i++, when most compilers optimize that away anyway. They ignored the fact that I pointed out a problem in their architecture and basically told me to get on with it. If someone is attached to their code and takes offence because you gave constructive criticism, they shouldn't be in the job.
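
For what it's worth, here is the kind of thing being argued over (a minimal sketch): with optimizations enabled, both forms generate the same machine code for a built-in index, because the value of the increment expression is discarded. The distinction only starts to matter for iterators with non-trivial copy semantics, and even there the optimizer usually removes the difference.

    // sum_pre and sum_post compile to identical code under any modern
    // optimizing compiler; pre- vs post-increment of an int in a loop
    // header is a pure style question.
    int sum_pre(const int* a, int n) {
        int s = 0;
        for (int i = 0; i < n; ++i) s += a[i];
        return s;
    }

    int sum_post(const int* a, int n) {
        int s = 0;
        for (int i = 0; i < n; i++) s += a[i];
        return s;
    }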


>> they shouldn't be in the job

Or maybe you should find a job where constructive criticism is part of the culture and leave the current one! It is relatively easier to find a place where that culture already exists than to invest the effort to bring about the change.


You probably need to get everyone on the same page on the coding idioms and style.

Personally I find that non-idiomatic code can be a bit distracting when I'm trying to evaluate the code for its overall approach and architecture.


And if people are writing 150-character-plus lines, those are really, no joke, a lot harder to read. If the line is going to break anyway, why not just break it yourself at a spot that makes sense?


> These are people that regularly write good quality code themselves, but there is a high amount of distrust. Why doesn't a team of talented programmers trust each other?

This has nothing to do with trust or any other personal matter. Everybody makes mistakes, everybody might misinterpret code they use, and not every programmer has a complete understanding of the scope of the changes they are making. This happens to both senior and junior engineers. (Obviously one would expect it to happen much less with seniors.)

The only way to minimise the number of future issues is to have thorough, rigorous, and pedantic code reviews. Another major benefit of this approach to code reviews is having more people on the team deeply understand the component you coded.


> Particularly, I noticed code reviewers on my team are pretty pedantic, obsessed with correctness, and need to have it explained to them why each change is okay.

That’s funny; I thought you were going the exact opposite direction with that. I wish my code reviewers were pedantic and obsessed with correctness!

The few times I've had the pleasure of working with reviewers like that (usually open source) were the times I've learned the most about programming in the shortest time, by far. Especially about readability and quality of comments.


> Why doesn't a team of talented programmers trust each other?

If it's not immediately clear to a reviewer that your change is correct, then either your code isn't clear, you're not writing tests, your colleagues don't have sufficient understanding of the code base, or some (non-trivial) linear combination of the above.

The first two cases are something you can work on. If people don't understand your code, then try to write code that is more clear. If your colleagues are as talented as you say, then they should be able to understand your code without explanation. If you're not writing tests, then well, maybe there is just no hope.

If your team does not have sufficient understanding of the code base, then do communal code-review sessions where you pick parts of the system and read through it together. Or organize knowledge share sessions.

Ultimately, as Lenin said, 'trust is good, but control is better'. The whole point of code reviews is that you want to get extra eyes on your code. Trying to bypass that by asking your colleagues to blindly trust you is undermining that process and ultimately undermining the quality of your system.


One thing we did to get past the pickiest review comments was to institute some static analysis with agreed-upon rules, run automatically during build checks of pull requests (Sonar and Checkstyle). If someone brings up an issue along these lines that you believe is unrelated to the task at hand, you can ask them to propose a change to the rules instead.

That said, in the end, correctness is important, and justifying and communicating your change to others is the purpose of a change request. If you aren't following defined standards and idioms, then you should get on board. If they aren't defined, then they should be defined and reviewed to get everyone on the same page. If everyone is using code reviews to grandstand or show off their knowledge, that's a cultural problem that needs to be addressed.


It's probably not that they think you are wrong or can't be trusted. Think of it like defending a thesis. A typical code review process for me is to just ask the author a bunch of questions. If they have satisfying answers then great, the code goes in and we now have recorded answers for a bunch of questions about it (helpful if issues arise later on). But it shouldn't be surprising if there is something they haven't considered, despite being someone who writes quality code. Perhaps your team is guessing about correctness rather than asking about it and it comes across badly?



