Different scales in continuous testing

It’s important that tests be automatic and thorough. Continuous testing is an approach to automated testing that gives fast feedback by testing early and often. But “continuous” doesn’t mean every test needs to run all the time. Here are four tips for organizing your continuous testing system.

1. Know yourself

Continuous testing (CT) is valuable to essentially all developers. But teams differ enough that the value varies a great deal. Run CT for the specific benefits it brings to your situation, not just because CT is a hot topic right now.

Programmers compiling Java for million-line embedded automotive modules have a different focus from freelance mobile app developers, and both their needs are distant from those of a data scientist writing Python to turn around proteomic results quickly. What are the metrics of your own situation?

How often do your programmers commit updates? How long do your unit tests take to run? How long is your build process? How long do your automated integration tests take to run? How often can your quality assurance (QA) crew test a release? How many committers are active?

Before you launch a CT initiative, measure all these aspects of your process, at least crudely. Your plans will be better for being based on reality, rather than your impression of what you and your team do.
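Some of these metrics can be pulled straight from version control. As a minimal sketch, the helper below computes average commits per day from commit timestamps; the function name and the sample data are illustrative, not part of any standard tool:

```python
from datetime import datetime

def commits_per_day(timestamps):
    """Given ISO-8601 commit timestamps, return the average number of
    commits per day over the span they cover (at least one day)."""
    if not timestamps:
        return 0.0
    times = sorted(datetime.fromisoformat(t) for t in timestamps)
    span_days = max((times[-1] - times[0]).days, 1)
    return len(times) / span_days

# Feed it timestamps from, for example:
#   git log --since="30 days ago" --format=%cI
sample = [
    "2024-03-01T09:00:00",
    "2024-03-01T15:30:00",
    "2024-03-11T10:00:00",
]
print(commits_per_day(sample))  # 3 commits over 10 days -> 0.3
```

Even a crude number like this beats an impression; the same pattern extends to measuring build times or test durations from CI logs.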

2. Everybody runs unit tests

Unit tests need to be automatic, cheap, and easy. If they take 40 minutes to run, they’re not unit tests. If they can only be run on certain hardware, they’re probably not unit tests. Tests that demand human oversight—such as “Is that click in the right place on the screen?”—are not unit tests. If a supervisor has to enter a secret passphrase at a front-line contributor’s desk, you’re definitely not dealing with unit tests.

That doesn’t mean those other tests aren’t valuable; they might even become unit tests some day. One of the first steps in implementing a useful CT plan, though, is to identify the kernel of your tests that are automatic, cheap, easy, and ready right now. Segregate those as unit tests.
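One way to make that triage concrete is to record a few facts about each test and filter mechanically. The sketch below is a hypothetical encoding of the criteria above; the `TestInfo` fields and the one-second threshold are assumptions, not a standard API:

```python
from dataclasses import dataclass

@dataclass
class TestInfo:
    name: str
    seconds: float          # typical runtime
    automated: bool         # runs with no human judgment needed
    special_hardware: bool  # requires specific devices to run

def is_unit_test(t, max_seconds=1.0):
    """Automatic, cheap, and easy -- and nothing else qualifies."""
    return t.automated and not t.special_hardware and t.seconds <= max_seconds

suite = [
    TestInfo("parse_config", 0.02, True, False),
    TestInfo("full_build", 2400.0, True, False),    # 40 minutes: too slow
    TestInfo("click_position", 0.5, False, False),  # needs a human eye
]
kernel = [t.name for t in suite if is_unit_test(t)]
print(kernel)  # ['parse_config']
```

The excluded tests stay in the inventory; they simply wait in other categories until they can be automated or sped up.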

Make sure any commit that fails a unit test raises an alarm, and make sure all your programmers know how to run the unit tests for themselves. With those two elements in place, your CT will begin to pay off, even if the unit tests cover only a small fraction of all the tests you eventually configure.

3. Select which tests work for continuous testing

Next up, identify tests that are feasible for your developers but not your CT system, or vice versa.

How can that be? Here are examples:

  • An important test takes eleven hours to run. That’s impractical for humans, and it shouldn’t be allowed to stall results from your CT’s examination of individual commits; instead, your CT can execute such long-running tests in the background and report results at least a couple of times daily.
  • For a variety of reasons, your CT might be on a network that is distant from developers and lacks certain resources. Maybe security restrictions mean that the CT can’t access testing databases, at least for several months, but developers and QA can run through a whole suite of database-dependent tests in 30 seconds. In such a case, establish a policy that humans are responsible for running these integration tests, and create a plan for a proxy, a new resource, or something else that will eventually make the tests possible for CT.
  • Certain tests might appear to be available in any environment, but licensing restrictions effectively confine them to a mechanical schedule. Run such tests under CT. Not being able to run them directly might mildly frustrate programmers, but they’ll appreciate the deterministic results from CT and find ways to fix the errors that turn up.

Not all tests deserve to be in the same category. Manage tests with enough care that the whole team understands which are unit tests, which are integration tests only run by CT, which are integration tests for any environment, and so on.
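The categories described above amount to a policy table: each class of test maps to the environments allowed to run it. A minimal sketch, with invented category and runner names that roughly mirror the examples in this section:

```python
# Hypothetical category policy: which runner(s) may execute each class
# of test. Names are illustrative, not from any particular CI product.
POLICY = {
    "unit":            {"developer", "ct-per-commit"},
    "integration-ct":  {"ct-nightly"},           # e.g. the eleven-hour test
    "integration-any": {"developer", "qa", "ct-nightly"},
    "licensed":        {"ct-nightly"},           # license confines it to CT
}

def may_run(category, runner):
    """Return True if this runner is allowed to execute this category."""
    return runner in POLICY.get(category, set())

print(may_run("unit", "developer"))            # True
print(may_run("integration-ct", "developer"))  # False
```

Publishing a table like this—in whatever form your team prefers—is what keeps everyone agreed on which tests belong where.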

4. Plan for the future you want

Does your testing plan still have gaps? Are there error-handling segments of source that are hard to test, or tests that require human intervention and judgment, or errors that show up only for end-users?

Like anything in life worth doing, your CT is unlikely to be perfect at the beginning. Document your gaps, and figure out which ones will pay off soonest when corrected. Go slow on tests that promise little return, and learn to recognize tests that cost more than they’re worth.

Whatever you do, uphold the integrity of your CT: Software can’t be released unless it passes tests. Don’t let testing become a spectator sport or advisory pursuit; cultivate a culture that understands tests must “go green” before software merges or releases. Do these things, and CT will pay off for you.
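A “must go green” gate can be stated in a few lines. This is a hypothetical sketch—the result format and suite names are assumptions—but it captures the essential rule: every required suite must have passed, and a missing result blocks release just as a failure does:

```python
def can_release(results, required=("unit", "integration")):
    """results maps suite name -> 'pass' / 'fail' / 'skipped'.
    Release only when every required suite explicitly passed."""
    return all(results.get(suite) == "pass" for suite in required)

print(can_release({"unit": "pass", "integration": "pass"}))  # True
print(can_release({"unit": "pass", "integration": "fail"}))  # False
print(can_release({"unit": "pass"}))                         # False
```

Treating an absent or skipped result as a blocker is the design choice that keeps testing from drifting into an advisory pursuit.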

About the Author

Cameron Laird is an award-winning software developer and author. Cameron participates in several industry support and standards organizations, including voting membership in the Python Software Foundation. A long-time resident of the Texas Gulf Coast, Cameron's favorite applications are for farm automation.