
No amount of automated tooling can account for subtle security issues, a wrong understanding of the spec (something that happens easily, especially to people new to the project, but also any time you work in an area of the code you're not familiar with), gotchas previously encountered, etc. There are things that humans (especially ones with a lot of experience) are good at catching that machines aren't.

If you do "review after merge" and you deploy after every merge, I think that's highly irresponsible. If you don't, then your first point still applies - there is a delay between the function being "complete" and being delivered.



That's just blame diffusion though. No review is guaranteed to catch issues like that; all it does is let you say "well, we followed The Process, guess it sucks that one got through" when something bad inevitably makes it to production because the reviewer didn't have the specific knowledge to catch the specific problem. That's more likely to happen when the scope of a review is limited to a diff - either implicitly, because that's all the tooling shows you, or explicitly, because of "why should I fix code I didn't write on this ticket" reactions to review feedback.

That's not to say blame diffusion is without value! You might need it to avoid toxicity within the team. But that's a different problem.


It's not "blame diffusion", it's risk management. Two sets of eyes are more likely to catch issues than one, and if I specifically know that some part of the code is tricky and person X is familiar with that part of the code, I might even ask that specific person to review.

Honestly, it sounds like you have a very cavalier attitude towards breaking production. That might work in some settings, but definitely not all of them.



