Find the Right TDD Approach for Your Testing Situation

Aug 16, 2018 | Best Practices, Test Automation Insights

While the test-driven development (TDD) cycle is simple — write a test, get it to pass, refactor — developers have found numerous ways to tweak the programming technique. In other words, no one true way to practice TDD exists. That means your oddball approach to TDD is probably OK, but it also means you can find a lot to learn from exploring some of these TDD “alternatives.”

One Practice, Multiple Definitions

I described the TDD cycle simply as “write a test, get it to pass, refactor.” You might have also heard the mantra “red, green, refactor” as a summary of this cycle.

This simple description might be enough to help you hit the ground running, but here’s a more involved description of what doing TDD really means:

  • You write a unit test to describe a small bit of behavior that does not yet exist. The test consists of statements that first put the system under test into a known state, then exercise the desired behavior. The unit test needs at least one assertion — a statement that verifies whether or not some expected condition holds true.
  • You run the unit test using a tool specific to your programming language. The tool reports that the test passed if all its assertions held true, or that it failed otherwise. With TDD, you want to see the test fail at this point: the failure demonstrates that the test is exercising behavior that does not yet exist.
  • You write the minimal amount of code needed to make the test pass.
  • Once you get the test to pass, you clean up any deficiencies in the code — things that will make it hard to understand and maintain in the future.
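To make the cycle concrete, here is a minimal sketch of one pass through it in Java, using a plain assertion rather than a test framework. The `PriceCalculator` class and `totalCents` method are hypothetical names for illustration, not from any particular codebase:

```java
// A minimal red-green pass (sketch): the test exists before the code it tests.
// PriceCalculator and totalCents are hypothetical names for illustration.

class PriceCalculator {
    // Step 3: the minimal production code needed to make the test pass.
    int totalCents(int unitCents, int quantity) {
        return unitCents * quantity;
    }
}

class TddCycleSketch {
    // Step 1: put the system into a known state, exercise the behavior,
    // and assert on the expected outcome.
    static void totalsMultipleUnits() {
        PriceCalculator calculator = new PriceCalculator();  // known state
        int total = calculator.totalCents(250, 3);           // exercise
        if (total != 750) {                                  // assertion
            throw new AssertionError("expected 750, got " + total);
        }
    }

    public static void main(String[] args) {
        // Step 2: run the test. Before totalCents existed, this wouldn't
        // even compile -- under TDD, that first failure is the point.
        totalsMultipleUnits();
        System.out.println("totalsMultipleUnits passed");
    }
}
```

The refactoring step follows once the test passes: clean up names, duplication, and structure while the passing test guards the behavior.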

Robert (“Uncle Bob”) Martin presents TDD by specifying three rules you must follow:

  • Write no production code unless it is to make a failing unit test pass.
  • Write no more of a unit test than is needed to fail. Compilation failures count as failures.
  • Write no more production code than is needed to pass the one failing unit test.

So, what’s the difference? For one, Uncle Bob’s second rule, which implies that you must stop writing the test as soon as you receive compilation failures, is considerably more prescriptive about how to write a test — let’s call this “incremental test writing.” The stepwise description of TDD does not delve into an approach for writing the unit test, leaving that choice up to you.

I’ve done a mixture of both incremental test writing and wholesale “just-slam-the-whole-thing-out” test writing, and I found each to be useful. Often, it’s easiest for me to follow a stream of consciousness and flesh out an entire test. But I believe Uncle Bob includes the incremental-test-writing rule because there’s value in taking smaller steps that provide feedback sooner.

Your language of choice might help you decide which approach works best for you. If you’re in C++, where one compilation error triggers dozens more, it might be most effective to do incremental test writing. As soon as you receive a compilation error (nowadays indicated dynamically in a good IDE without the need for an explicit compile step), fix it. Taking such small steps will help you better correlate a given compilation error to its cause.

As with all the alternative approaches that follow, I highly recommend experimenting with this form of incremental test writing. You might find the results illuminating enough to improve your practice of TDD.

One marked difference between the two descriptions of TDD is that the three rules don’t mention refactoring. The rules don’t say not to refactor, either, and I’m sure Uncle Bob believes it’s critical to success. Still, I prefer the stepwise description and its explicit inclusion of the refactoring step, because I believe the ability to continually address code cleanliness through refactoring is the best reason to adopt TDD.

Assert First

In the book “Test-Driven Development: By Example,” Kent Beck tells us to try writing the assertions first. This prescriptive suggestion, which has you essentially working backward, can help you think more about the outcome (the “what”) rather than the implementation details (the “how”).

During my very long history with TDD, I’ve grown too accustomed to not writing the assertions first — in other words, I write the test more or less top to bottom. But occasionally writing assertions first makes the most sense for the challenge at hand, most typically when I have a lot of unanswered questions about the codebase and how the new behavior will impact it.

One side effect the assert-first approach seems to have is that the focus on outcome leads me to use programming by intention: because I don’t yet know what the details need to be in the rest of the test, I start by writing the name of a yet-to-be-implemented helper method. My test rises in its level of abstraction — the focus is on what to do, less on how to do it. I then flesh out the helper methods.

Here’s a unit test that I coded top to bottom many years ago:

public void returnsHoldingToBranchOnCheckIn() {
     service.checkOut(patronId, bookHoldingBarcode, new Date());

     service.checkIn(bookHoldingBarcode, DateUtil.tomorrow(), branchScanCode);

     Holding holding = service.find(bookHoldingBarcode);
     assertThat(holding.getBranch().getScanCode(), equalTo(branchScanCode));
}

The test reads procedurally well, but the closing lines of retrieval and assertion are a bit of a mess. I got there by knowing I could retrieve a holding using the service, then asking some questions of it. With an assert-first approach, I would start with a single line to express the expected outcome:

assertThat(service.find(bookHoldingBarcode), is(availableAt(branchScanCode)));

The matcher method availableAt doesn’t exist yet. At this point in writing the test, I don’t yet know exactly the steps I’ll use to implement it; nor do I care. Meanwhile, I’ve been able to craft a very literary assertion that declares the outcome rather than making the reader work stepwise through it.
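For illustration, here is one way a matcher like availableAt might eventually be fleshed out. In real Hamcrest code it would likely extend TypeSafeMatcher&lt;Holding&gt;; this self-contained sketch substitutes a minimal matcher interface, and the Branch and Holding classes below are simplified stand-ins for the ones the test uses:

```java
// A sketch of a custom matcher in plain Java. In Hamcrest this would extend
// TypeSafeMatcher<Holding>; here a minimal interface stands in, and
// Branch/Holding are simplified stand-ins for the classes in the test.

class Branch {
    private final String scanCode;
    Branch(String scanCode) { this.scanCode = scanCode; }
    String getScanCode() { return scanCode; }
}

class Holding {
    private final Branch branch;
    Holding(Branch branch) { this.branch = branch; }
    Branch getBranch() { return branch; }
}

interface HoldingMatcher {
    boolean matches(Holding holding);
    String describe();  // used in the failure message
}

class Matchers {
    // The factory method that lets the assertion read as a sentence:
    // assertThat(service.find(barcode), is(availableAt(branchScanCode)))
    static HoldingMatcher availableAt(String scanCode) {
        return new HoldingMatcher() {
            public boolean matches(Holding holding) {
                return holding.getBranch() != null
                    && scanCode.equals(holding.getBranch().getScanCode());
            }
            public String describe() {
                return "a holding available at branch " + scanCode;
            }
        };
    }
}
```

The stepwise navigation through getBranch() and getScanCode() moves into the matcher, so the assertion stays at the level of intent and readers need not work through the details.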

TDD itself is a programming-by-intention technique.

A Single Assert Per Test

Dave Astels promoted the controversial notion of one assert per test almost 15 years ago. His advice isn’t quite as controversial when it comes to preventing run-on tests that work through multiple cases:

public void bankAccount() {
    var account = new BankAccount();

    // balance is zero when created
    assertThat(account.balance(), is(equalTo(0)));

    // deposits
    account.deposit(100);
    assertThat(account.balance(), is(equalTo(100)));

    account.deposit(200);
    assertThat(account.balance(), is(equalTo(300)));

    // withdrawals
    account.withdraw(50);
    assertThat(account.balance(), is(equalTo(250)));

    // ...
}

It’s a little easier to slap together a run-on test. Often the various cases (indicated by the comments in the above example) depend on a bit of setup context. Creating a separate test method for each case would require some redundancy in the setup for each individual case, and perhaps that’s why some people balk at the idea.

It’s easy to factor out such redundancies, however, using setup hooks and helper methods. Some folks perhaps are concerned about the execution redundancy, but if we’re writing isolated unit tests that have no dependencies on slow collaborators, adding new sub-millisecond tests is a non-issue.

Here’s what a single-assert-per-test approach looks like:

private BankAccount account;

public void createAccount() {
    account = new BankAccount();
}

public void hasZeroBalanceWhenCreated() {
    assertThat(account.balance(), is(equalTo(0)));
}

public void increasesBalanceOnDeposit() {
    account.deposit(100);
    account.deposit(200);
    assertThat(account.balance(), is(equalTo(300)));
}

public void decreasesBalanceOnWithdraw() {
    account.deposit(300);
    account.withdraw(50);
    assertThat(account.balance(), is(equalTo(250)));
}

Each test describes one behavior, which provides a few advantages:

  • The isolated nature of each test can make it much easier for readers to understand the intended behavior
  • The test name concisely summarizes the behavior, making it possible for the list of test names to help maintainers understand where their changes need to go
  • On test failure, it’s much easier to uncover the source of the failure — side-effect errors created by one case do not generate errors in subsequent cases

Does “one assert per test” always make sense? What if you’re verifying that a dozen fields were shuttled over from a cursor into a domain object?

Perhaps it’s better to think of “one assert per test” as “one behavior per test.” You might consider that copying a bunch of related columns into associated fields is a singular behavior. How do you know?

My take: Start with a single assert. If you can’t think of a meaningful way to name the next test with a unique behavioral description, you’re probably OK with combining the asserts into a single test. Otherwise, stick with a single assert per test.

Always consider that odd coding challenges like this one might represent a smell. Does the compulsion to combine multiple asserts into a single test indicate something suspicious about the design of your production code? In the case of data shuttling, a data dictionary approach might be the right cure that simplifies your system overall and allows you to stick to one assert per test.
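As a sketch of the “one behavior per test” reading, value equality lets a single assert cover every shuttled field: compare the whole expected object rather than asserting field by field. The Patron class and fromRow mapping below are hypothetical illustrations, not from the article’s codebase:

```java
// One behavior, one assert: instead of a dozen field-by-field asserts,
// compare the entire shuttled object against an expected value.
// Patron and fromRow are hypothetical illustrations.

import java.util.Map;
import java.util.Objects;

class Patron {
    final String id;
    final String name;

    Patron(String id, String name) { this.id = id; this.name = name; }

    // Value equality lets one assert cover every shuttled field.
    @Override public boolean equals(Object other) {
        if (!(other instanceof Patron)) return false;
        Patron that = (Patron) other;
        return id.equals(that.id) && name.equals(that.name);
    }

    @Override public int hashCode() { return Objects.hash(id, name); }

    // The "shuttling" behavior under test: copy columns into fields.
    static Patron fromRow(Map<String, String> row) {
        return new Patron(row.get("id"), row.get("name"));
    }
}

class PatronMappingTest {
    public static void main(String[] args) {
        Patron actual = Patron.fromRow(Map.of("id", "p1", "name", "Ada"));
        Patron expected = new Patron("p1", "Ada");
        if (!expected.equals(actual)) {   // a single assert, one behavior
            throw new AssertionError("mapping mismatch");
        }
        System.out.println("PatronMappingTest passed");
    }
}
```

With equality defined on the domain object, adding a thirteenth field changes the constructor and equals method, not the shape of the test.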

Test Naming

In TDD, there are various approaches to naming your tests. You might use the form DoesSomethingWhenSomeContextExists. You might also go with WhenSomeContextExistsSomethingHappens, or you might even use GivenSomeContextWhenSomeEventOccursThenSomeConditionHoldsTrue.

For a few years, I’ve promoted an alternative: I name my test classes or fixtures starting with the article “A” or “An.” The test class combined with each test name completes a sentence:

TEST_F(AnAutomobileWithEngineStarted, HasLowIdleSpeed) {
   /* … */
}


class ACheckedOutHolding {
   [Test] public void IsAvailableAfterReturnToBranch() {
      /* … */
   }
}
Naming is one of the most important things you do! Choose whichever naming form is most appealing to you. It won’t matter as long as you’re consistent across the tests and the test names clearly describe intended behavior.

Nameless Tests

When test driving, there are many ways to skin a cat. I rarely believe there is an absolute one right way to do any given thing. That means it’s up to you and your team to discuss and settle on a technique that works best for your situation.

With most of the above choices I’ve described, I settled on one approach because I found value through employing it. My choice doesn’t imply that the other approaches are wrong; if your team takes an alternate approach, I’m happy to go along with it. Only in rare cases have I recoiled in horror upon seeing an alternate approach.

All of my tests are named. That’s to support their value of documentation. If you’re going to invest this much effort in writing tests, they should pay off in multiple ways. Describing the intended behaviors of the system is one such way.

I’ve heard at least one person espouse the notion of nameless test cases, however. Their contention:

  • Test names are comments, and as such could be lies that inaccurately describe the test code contained within.
  • Tests should be written as highly readable examples, meaning they should not need a summary.

I played with this idea of nameless tests with an open mind for a day or two. As with any of the earlier variants I described, I always recommend experimenting with the ones you’re not comfortable with before making a decision.

In this case, I firmly came down on the side of “no way.” First, I don’t want to waste time reading through dozens of lines of examples in order to find the ones that pertain to what I need to change in the code. Sub-section headings exist in textbooks for a very similar reason. Second, an example of behavior, no matter how well you name the variables and functions it employs, doesn’t always concisely express the real intent. Nameless tests are an interesting idea, but one I think is ultimately damaging. I tried it fairly, I didn’t care for it, and I won’t employ or recommend it.

Consider Your Feedback

As a TDD practitioner, part of your job is to gain feedback from short-cycled experiments (test cycles) and adjust accordingly. Similarly, consider it your job to continuously seek improvement: Treat each of the above variants from your normal practice as a possible experiment. Run the experiment fairly and see if the variant adds value to your TDD repertoire. If you hate it after a fair shake, drop it — that’s fine, too!
