Many of the limits of test automation under DevOps are really just limits of test automation in general:
- Tests are not proofs of correctness
- Tests are incomplete
- Requirements are often erroneous or ambiguous
- Tests are expensive
- Organizational politics can lead to misuse of test results
These limitations cause the majority of testing challenges in DevOps organizations but are already well-documented elsewhere. It’s more interesting to consider test limitations specific to DevOps, or at least more prominent in DevOps:
- A focus on automated tests to the exclusion of others
- Particularly rapid cycling
- Sensitivity to faults in production
- Absolute dependence on strong testing
- Greater transparency and less isolation between functions
Let’s look into the causes of these test limitations prevalent in DevOps and explore some of their countermeasures.
An obsession with automation
An absolute attitude toward automation, such as requiring QA to automate all their test cases, is typical of DevOps culture. However, the reality is a bit more nuanced.
Sophisticated testing practitioners can help even the most aggressive DevOps organizations moderate their attitude toward automation by serving as a source of expertise in automation techniques; reminding the whole team about the place of manual testing in DevOps; and modeling active management of individual tests, particularly their progression from being manually executed to automated.
The last of these is particularly important. When wise testing professionals arrive in a DevOps team, they shouldn’t discard every test that isn’t already automated. Instead, they should categorize all test assets.
Some tests are easily automated and readily fit into the company’s continuous integration framework. Some tests might be automated but are so slow or demand such expensive platforms that they’re inappropriate for routine workflow. Some tests haven’t been or can’t be automated. It’s important to keep all these assets in good shape and to review them actively. As the organization matures, risk profiles, technology or even availability may change. A specific test might start out as manual only, be rewritten as a fully automated version executed with every source commit, and eventually “graduate” to automated operation on a slower schedule decoupled from developer milestones. An incremental, strategic approach has a lot to offer the testing world.
At the implementation level, this kind of categorization means that, in my own work, I have multiple directories of tests. Jenkins directly accesses some of them: they’re fully automated and fit to be executed on each check-in. Others involve human intervention of one sort or another, or are run less often simply to reduce licensing fees.
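As a sketch of that kind of categorization, test metadata can drive which assets run for a given event. The category and trigger names below are my own illustrative labels, not part of Jenkins or any particular CI tool:

```python
# Sketch: route test assets by category. The categories ("fast", "slow",
# "manual") and triggers ("commit", "nightly") are illustrative labels.
TESTS = [
    {"name": "login_api", "category": "fast"},    # runs on every check-in
    {"name": "load_soak", "category": "slow"},    # automated but expensive
    {"name": "ux_review", "category": "manual"},  # still needs a human
]

def select_tests(tests, trigger):
    """Pick the test assets appropriate to an event."""
    if trigger == "commit":      # keep check-in feedback fast
        return [t for t in tests if t["category"] == "fast"]
    if trigger == "nightly":     # add the slow, automated collection
        return [t for t in tests if t["category"] in ("fast", "slow")]
    return list(tests)           # e.g. a release review: everything
```

A test “graduating” from manual to automated is then just a change to its metadata, which keeps the whole inventory visible and actively managed.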
Another way to think about DevOps is as a shift in risk-reward boundaries. Suppose a non-DevOps organization depends on a set of partly automated GUI tests. When that same organization adopts DevOps, it might decide that the GUI tests are too expensive to automate immediately, but it can introduce enough hooks to create corresponding functional tests that mimic each of the results of the GUI tests. At that point, DevOps has automated all the functional aspects of the original GUI testing and reduced its exposure on each check-in purely to the correspondence between GUI and hooks. That correspondence presumably can be tested outside the DevOps framework.
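A minimal sketch of that shift, with a hypothetical `apply_discount` function standing in for application logic that was previously reachable only by driving the GUI:

```python
# Sketch: a business rule exercised through a hypothetical hook rather
# than through the GUI. apply_discount is a stand-in for application
# code that the new hooks expose directly to tests.
def apply_discount(total, code):
    """Stand-in for the hooked logic behind a discount form."""
    return round(total * 0.9, 2) if code == "SAVE10" else total

def test_discount_via_hook():
    # Formerly a GUI test: type the code, click Apply, read the new total.
    assert apply_discount(100.0, "SAVE10") == 90.0
    assert apply_discount(100.0, "BOGUS") == 100.0
```

The functional behavior runs on every check-in; only the thin correspondence between GUI widgets and hooks remains to be verified outside the pipeline.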
Timescale skew
DevOps is all about rapid: rapid development, rapid results, rapid failure and so on. One specific consequence is that certain tests require special handling even though they’re fully automated.
Suppose, for instance, that a certain DevOps team commits a new source change every hour on average, and that its continuous integration is configured to run 3,000 tests, each of which averages 100 milliseconds to complete. That means the whole automated testing suite finishes in five minutes. That’s a good situation: while detection of a new error isn’t immediate, it happens soon enough after check-in that a developer can reasonably respond to and remedy failure reports.
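The arithmetic behind that five-minute figure:

```python
# 3,000 tests at an average of 100 ms each.
total_ms = 3000 * 100              # 300,000 ms per full suite run
total_minutes = total_ms / 1000 / 60
# 300,000 ms = 300 s = 5 minutes of latency per check-in
```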
Imagine now that one new test is fully automated, but it takes five hours to complete. That test should not run alongside all the other tests as part of the validation of each commit. Five minutes of latency during validation is fine; five hours, though, unacceptably degrades the experience for developers. Don’t let this happen in your organization.
One easy alternative is to have each commit spawn two test suites. One is the usual “fast” one, which gives a preliminary result in the few minutes before human developers lose their focus on the current software. The second is a “slow” collection, which might take hours or even days to complete. When a failure comes back from the slow side, it typically is a challenge to diagnose; still, it’s far better to find out about an error this way than from a customer.
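One way to sketch that split, using an in-process background thread as a stand-in for whatever job mechanism the CI system actually provides:

```python
import threading

def validate_commit(fast_suite, slow_suite):
    """Run the fast suite inline; start the slow suite in the background.

    fast_suite and slow_suite are callables returning a result. In a real
    CI system these would be separate jobs, not in-process threads; the
    thread here is only a sketch of the fast/slow split.
    """
    results = {}
    slow = threading.Thread(
        target=lambda: results.__setitem__("slow", slow_suite()))
    slow.start()                     # slow results arrive whenever they do
    results["fast"] = fast_suite()   # developers wait only for this part
    return results, slow

# Usage: the commit is provisionally good as soon as "fast" passes.
results, slow_job = validate_commit(lambda: "pass", lambda: "pass")
slow_job.join()  # in practice, a later notification rather than a join
```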
An alternative tactic to managing such situations is to run the slow collection on a fixed, periodic schedule, rather than with each commit. While this approach makes it even harder to identify the specific source change that produced the error, it slashes the number of long-running tests performed each day, and that might better fit the resources available in the organization for testing.
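Using the earlier illustrative figures (one commit per hour, a five-hour slow suite), the resource difference between the two tactics is stark:

```python
commits_per_day = 24      # one commit per hour, as in the earlier example
slow_suite_hours = 5      # the long-running test from above

per_commit_load = commits_per_day * slow_suite_hours  # machine-hours/day
nightly_load = 1 * slow_suite_hours
# 120 machine-hours per day versus 5, at the cost of coarser
# attribution of failures to individual source changes
```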
Testing in production
The third peculiarity of testing for DevOps teams has to do with platform dependence. One of the advantages of DevOps is its impatience with the “it works on my machine” fallacy.
Healthy DevOps teams generally share the recognition that tests need to migrate smoothly through all environments, from development desktops to integration hosts to staging and production. Construction of portable tests is likely to challenge some team members, and that’s a good place for a test professional to lead the team forward.
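One common pattern for portable tests is to parameterize the target environment rather than hard-coding it. This is a sketch; the variable name `TEST_BASE_URL` is hypothetical:

```python
import os

def target_base_url():
    """Resolve the system under test from the environment, so the same
    test runs unchanged on a development desktop, an integration host,
    staging or production. Defaults to a local developer instance."""
    return os.environ.get("TEST_BASE_URL", "http://localhost:8000")
```

Each environment then supplies the variable in its own configuration; the test source itself never changes as it migrates.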
An extreme form of this need for portability arises with testing in production. While testing in production is a big enough topic to merit separate attention, the point, for now, is that smart testers make explicit all requirements and expectations about the portability of tests.
Testing is key to DevOps. However, not all testing professionals have experienced this revelation. Some organizations train their testers to expect cyclical loads: No one pays attention to them until releases, then work must be rushed through. DevOps generally needs testing help at all phases of its cycles.
More sharing
DevOps emphasizes transparency, sharing, and cooperation, whereas more traditional teams are organized in terms of ownership. In typical DevOps, anyone might update the source of a test; tests are no longer the private assets of testers. Assumptions probably have to be documented more explicitly than is necessary when testers are the only readers.
Testers in DevOps will definitely experience changes from the existence they’re used to, and some of it might be uncomfortable initially. Leadership continues to pay off, though, and testing insight will always benefit DevOps teams.