Cracking the code review (Part 1): Smaller is better – Andy G's Blog

source link: https://gieseanw.wordpress.com/2020/06/25/cracking-the-code-review-part-1-smaller-is-better/


(This is the first part in a series about working effectively to pass code review quickly and without missed defects. In this series I will use Git terminology with respect to source control and branch management.)

Photo by Jon Tyson on Unsplash

Let me tell you a code reviewer’s horror story. It’s the start of a sprint and Taylor is assigned to develop a spiffy new feature. They excitedly get to work. They carefully review all the requirements, spend a few hours on a whiteboard designing a solution, and then start by writing unit tests. After just 1.5 weeks of the two-week sprint, Taylor has a robust, performant implementation with 100% test coverage.

(shines flashlight under face) Then… they create a pull request into master for the code to be reviewed (frightened screams erupt from code reviewers around the campfire).

It’s tempting to think that one feature equates to one code review, but falling into this trap makes reviews take longer and raises the risk of missed defects.

The number one thing you can do to get your code accepted quickly is to split your changes up into smaller reviews.

Why reviews at all?

Because bugs are expensive.

Catching a bug during acceptance testing can cost twice as much as if it were found during review. Worse, if the bug isn’t caught until afterwards, it could be as high as 100 times as expensive (!) [1].

Source: Dawson, Maurice, et al. 2010

> "I’ll just write really good unit tests"

If only it were that easy. Automated testing appears to catch at most 45% of defects, whereas code reviews can catch 60% [2]. Sure, you write better tests than everyone else. The best tests. Nobody writes better tests than you. People aren’t going to believe the kind of tests you write… but if you were *that* quality-oriented, you would take advantage of the battle-tested tool that is peer review.

"So we need code reviews"

Right. Specifically small reviews. Research has consistently shown that the size of the review is the most important factor in:

  • whether the changes will be accepted at all, and
  • how quickly the changes will be incorporated [3] [4] [5]

Having small reviews is so important that Microsoft invested in creating a tool for automatically partitioning a review [6]. MIT found that automated partitioning decreased the number of errors made by reviewers [7]. Plus, getting changes in faster shortens the time to deploy new features, leading to happy customers and fat wallets.

What you can do to make your reviews small*

(*a non-comprehensive list)

Use a feature branch, dammit

A gigantic code review that could have been reviewed piecemeal in a feature branch is a slap in the face to code reviewers.

It says, "My time is worth more than yours." Instead of spending their own time dealing with merge conflicts, the developer decided to plop 2000 lines of code to review on reviewers’ plates first thing Monday morning. This review isn’t going to be completed in a day, or even two days. Heck, just looking at it makes me tab away.

Often nobody knows when a task is finished until there are a couple of pieces working together. That’s fine — the project doesn’t need to compile to be reviewed. Have your changes reviewed and re-reviewed along the way in your branch. When they finally come to master, reviewers only need to check a few last things before they give the thumbs-up.

I’ve seen this approach work to great effect even for branches that a single person is working on. Often the final review is simply the merge into the master branch, and so long as the tests all pass, the reviewers just need to rubber stamp it because they’ve already reviewed everything.
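The workflow above can be sketched with plain git. This is a hypothetical, minimal example: the branch and file names (`feature/spiffy-widget`, `widget.h`, and so on) are invented for illustration, and the pull requests themselves would live on your code host — here each reviewed slice is simply merged into the feature branch, and the final merge into master carries only already-reviewed work.

```shell
set -e
# Throwaway repo so the sketch is self-contained.
repo=$(mktemp -d) && cd "$repo"
git init -q -b master
git config user.name "Example" && git config user.email "example@example.com"
git commit -q --allow-empty -m "initial commit"

# Long-lived feature branch: reviews happen against *this* branch, not master.
git checkout -q -b feature/spiffy-widget

# First small slice: reviewed as its own pull request into the feature branch.
git checkout -q -b spiffy-widget/data-model
echo "struct widget { int id; };" > widget.h
git add widget.h && git commit -q -m "Add widget data model"
git checkout -q feature/spiffy-widget
git merge -q --no-ff spiffy-widget/data-model -m "Merge reviewed data model"

# Second small slice: another focused review.
git checkout -q -b spiffy-widget/api
echo "int widget_id(void);" > api.h
git add api.h && git commit -q -m "Add widget API surface"
git checkout -q feature/spiffy-widget
git merge -q --no-ff spiffy-widget/api -m "Merge reviewed API surface"

# Final merge into master is the rubber stamp: everything was already reviewed.
git checkout -q master
git merge -q --no-ff feature/spiffy-widget -m "Merge feature/spiffy-widget"
git log --oneline master
```

The `--no-ff` merges preserve each reviewed slice as its own merge commit, so the history records which review covered which change.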

Split along language boundaries

For example, if you are writing some low-level C code that you’ll eventually wrap into a snazzy Python API that all the teens will talk about, break the review along that wrapper boundary. Reviewers are usually more expert in one language than another, so the parts of the review in the "other" language are mostly just noise.

One review for the C code, and one review for the wrapper code will suffice. The set of reviewers may be completely different between the two (and that’s fine).

Refactorings first, behavioral changes second

Refactor your heart out! It’s one of my favorite activities. Just don’t mix functional changes into the subsequent review. That way, reviewers can focus more on "did these changes subtly break anything?", and less on "is this entire thing broken in the first place?"

(Take it a step further and break up refactorings on the type of refactor. e.g., one review for fixing warnings, and another for building class mocks).
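One way to keep the two reviews separate is to stack branches: a refactor-only branch off master, and a behavior-change branch off the refactor branch, so the second review's diff contains only the functional change. A minimal sketch, with invented branch and function names:

```shell
set -e
# Throwaway repo so the sketch is self-contained.
repo=$(mktemp -d) && cd "$repo"
git init -q -b master
git config user.name "Example" && git config user.email "example@example.com"
printf 'int answer() { return 41; }\n' > calc.c
git add calc.c && git commit -q -m "initial"

# Review 1: pure refactor, no behavior change (a rename only).
git checkout -q -b refactor/rename-answer
printf 'int the_answer() { return 41; }\n' > calc.c
git commit -q -am "Rename answer() to the_answer() (no behavior change)"

# Review 2: the behavioral change, stacked on the refactor branch so its
# diff shows *only* the functional difference.
git checkout -q -b feature/fix-answer
printf 'int the_answer() { return 42; }\n' > calc.c
git commit -q -am "Fix the_answer() to return 42"

# The second review's diff is one line of behavior, nothing else:
git diff refactor/rename-answer feature/fix-answer
```

Reviewers of the first branch only ask "did the rename break anything?"; reviewers of the second only ask "is the new behavior correct?"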

Why is smaller better though?

Software engineering research has consistently shown that small reviews are better, largely independent of any other factor. But why? What makes small better?

Later entries in this series will dig deeper into answering that question.

Next entry: Make them seem small


  1. Dawson, Maurice, et al. "Integrating software assurance into the software development life cycle (SDLC)." Journal of Information Systems Technology and Planning 3.6 (2010): pp 49-53.

  2. McConnell, Steve. "Code complete: a practical handbook of software construction." (1993): pp 574.

  3. Baysal, Olga, et al. "The influence of non-technical factors on code review." 2013 20th Working Conference on Reverse Engineering (WCRE). IEEE, 2013.

  4. Weißgerber, Peter, Daniel Neu, and Stephan Diehl. "Small patches get in!" Proceedings of the 2008 international working conference on Mining software repositories. 2008.

  5. Jiang, Yujuan, Bram Adams, and Daniel M. German. "Will my patch make it? and how fast? case study on the linux kernel." 2013 10th Working Conference on Mining Software Repositories (MSR). IEEE, 2013.

  6. Barnett, Mike, et al. "Helping developers help themselves: Automatic decomposition of code review changesets." 2015 IEEE/ACM 37th IEEE International Conference on Software Engineering. Vol. 1. IEEE, 2015.

  7. Tao, Yida, and Sunghun Kim. "Partitioning composite code changes to facilitate code review." 2015 IEEE/ACM 12th Working Conference on Mining Software Repositories. IEEE, 2015.

