How to Have Successful Commit Reviews

Apr 7, 2021 | Best Practices


Most development and DevOps teams recognize check-in reviews as a necessary part of their workflow. However, approaching these commit reviews incorrectly can make them more difficult, more time-consuming and less rewarding than necessary.

Here are five ideas to put into practice so that commit reviews pay off for your team.

Enable continuous testing

Do reviewers find reviews dreary, boring or tedious? They’re probably reacting to having to over-invest in routine aspects of the review. For instance, does someone have to twist people’s arms to get reviewers to test new code adequately? That problem has a solution: continuous testing (CT).

Combine policies and tooling so that every review arrives with its own built-in tests, which the CT platform executes and reports automatically. That way, every review starts with routine testing already complete. If some of the testing is not routine, so much the better: with well-designed tests, human reviewers can concentrate on the tricky aspects while leaving all the customary regression confirmation to CT. The result: more reliable tests and less boredom.
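As a concrete illustration, here is a minimal sketch of such a setup, assuming a hypothetical GitHub Actions workflow for a Python project (the workflow name, dependency file, and test command are all illustrative, not prescriptions):

```yaml
# .github/workflows/review-tests.yml (hypothetical example)
name: review-tests
on:
  pull_request:          # run the suite on every review request
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt   # assumed dependency file
      - run: pytest --tb=short                 # assumed test command
```

With a configuration along these lines, every review opens with its regression results already attached, and reviewers can spend their attention on the parts no script can judge.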

Automate checks

It’s not just conventional unit tests that can be automated in this way. Does your team have a recurring issue with, say, use of tabs? Does someone submit source files separated by DOS-like CRLFs in place of Unix’s newlines, or vice versa? Don’t depend on error-prone humans to detect these anomalies. Write a dozen-line script that performs those stylistic checks automatically.
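A sketch of such a detector in Python, assuming the team’s policy bans tab characters and CRLF line endings (the policy itself is an example, not a recommendation):

```python
from pathlib import Path

# Hypothetical style policy: no tab characters, no CRLF line endings.
def whitespace_problems(path: Path) -> list[str]:
    """Return a list of style violations found in one file."""
    problems = []
    for lineno, line in enumerate(path.read_bytes().splitlines(keepends=True), start=1):
        if b"\t" in line:
            problems.append(f"{path}:{lineno}: tab character")
        if line.endswith(b"\r\n"):
            problems.append(f"{path}:{lineno}: CRLF line ending")
    return problems
```

Wired into the CT pipeline, a wrapper would run this over the files a commit touches and fail the check when the list is non-empty, so no human reviewer ever has to hunt for stray tabs again.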

Any aspect of reviewing that reviewers shun is a candidate for scripting as an automatic detector: Are parentheses balanced? Have all unused variables been removed? Are color values chosen from an approved palette?
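The color-palette check, for instance, could be scripted as a minimal detector like the following sketch (the approved palette and the six-digit hex convention are made-up examples):

```python
import re

# Hypothetical approved palette; a real team would load this from a design-system file.
APPROVED_PALETTE = {"#1a2b3c", "#ffffff", "#e63946"}

HEX_COLOR = re.compile(r"#[0-9a-fA-F]{6}\b")

def off_palette_colors(text: str) -> list[str]:
    """Return hex color literals in `text` that are not in the approved palette."""
    return [c for c in HEX_COLOR.findall(text)
            if c.lower() not in APPROVED_PALETTE]
```

Each detector like this removes one dull question from the human reviewer’s checklist for good.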

Take smaller steps

Do reviews exhaust reviewers? Make more of ’em — but make ’em smaller.

Here’s an example: A single functional requirement probably involves several distinct implementation steps, including new tests, refactored class definitions, perhaps an updated user-interface layout, and changes in a few function names to “make room” for the new functionality. What looks to business analysts like one enhancement involves almost 200 lines of source spread over 13 source files. That is daunting to review!

Suppose, though, that the commit is rewritten as five rounds of commits. The first three leave functionality invariant; they’re just refactorings to build a better foundation for the real enhancement. The fourth and fifth commits focus strictly on implementing and testing the new functionality, clearly separated from everything the existing application already does. At that point, each of the five rounds involves no more than four dozen lines of source change (just a couple of screens) and is easy to understand. Five smaller reviews become easier to digest than the one large original review.
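In the project’s history, the hypothetical sequence above might read something like this (commit subjects invented for illustration):

```
1. Refactor: extract form validation into its own class (no behavior change)
2. Refactor: rename submit() to submitSingle() to make room (no behavior change)
3. Refactor: split the form layout into reusable components (no behavior change)
4. Feature: implement bulk submission on top of the new validation class
5. Test: add unit and UI coverage for bulk submission
```

A reviewer can approve rounds 1 through 3 almost at a glance, then give the genuinely new work in rounds 4 and 5 undivided attention.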

Decomposition of reviews into smaller units this way both takes and rewards practice. The goal is to make each review so easy to understand that reviewers can clearly and immediately see the correctness of the commit.

Smaller steps of this sort also help localize any confusion. When an implementer misunderstands the requirements but is practiced at writing multiple small commits, the misunderstanding generally affects only a minority of them. Those confused-but-small commits are readily corrected, while the majority of correct-and-small commits can be reviewed and approved without change.

Eventually, each review should take only a few minutes to scan and grasp at a high level, something that can be practiced at least a couple of times a day. Even when a commit is deep and takes hours to understand fully, it should be possible to view it in relative isolation from all the name changes, packaging and other “housekeeping” distractions that too often obscure the real ideas behind an implementation.

Contrast this with the too-frequent anti-pattern where commits are so large that reviewers find them difficult and put them off until a deadline. That combination of attitude and scope sets everyone up for failure.

Let reviewers set their own schedules

Another way to make reviews easier is to free them from a fixed calendar. As the development world increasingly shifts to remote work, allow most reviewers to set their own schedules for most reviews.

Especially with small, atomic commits, there’s no advantage to having everyone review at the same time and in the same place. Instead, train reviewers to turn around reviews swiftly but conveniently.

Resolve conflicts

For best results, submit conflict-free review requests.

It’s common in some teams for a developer to start an implementation, finish it in good shape, submit a proposed branch or commit or request (depending on the local workflow), begin to receive reviews … and only then notice that the commit conflicts with the current head, master or mainline branch.

Save everyone extra work and resolve any such conflicts as soon as they’re detected. The alternative, where approvals are collected, then the conflicts are resolved, and then the resolution itself must be re-reviewed, just makes for extra work. Arrange for reviewers to see only conflict-free requests.

Most version-control and continuous integration (CI) systems can be configured to detect conflicts automatically. With such a configuration, it can become a point of pride to keep requests conflict-free and in good shape for reviewers.


Reviews are more likely to be successful when reviewers expect them to be successful. The five shifts above will help cultivate that positive attitude.
