In testing professionals’ language, “error” often plays the role of villain. After all, the purpose of testing is to cleanse software of errors.
As convenient as this simplification is, it’s ultimately misleading. The real aim of testing is more positive: to validate that a product meets its specifications or, more broadly, to report any findings that degrade customers’ experience. The role errors play in these broader business goals is more subtle.
Here are five distinct types of error that you may encounter during testing, each of which needs to be managed with care.
1. “Real” errors
When someone in a testing department speaks of an “error” or “bug” with no other qualification, most often the focus is on a specific divergence between requirements and observed behavior. If calls to a 911 call center that arrive within ten seconds of the hour are always sent to voicemail, that’s an error. If employees who reside in Canada and whose start date was in August are never included in company reports on vacation usage, that’s an error.
But even this simple category is not clear-cut. If a specification calls for color names to be written out, is sometimes using “grey” and sometimes using “gray” an error? If a user interface mishandles customer orders over $1,000,000 but no customer has ever ordered more than $4,300, is that an error?
The short answer: yes. Testing professionals generally should report all their findings, even the unrealistic ones.
This sort of pedantic legalism generally brings at least a couple of benefits. For one, an error that appears only with unrealistic inputs is frequently a symptom of a more pervasive error that simply hasn’t been observed yet. If all dollar values greater than a million are mishandled, perhaps $311.78 is, too. Errors on wild values give programmers a chance to focus on fragile code segments that might be improved.
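The idea of probing wild values can be sketched as a boundary-value check. Everything here is illustrative: `format_order_total` is a hypothetical function with a deliberately planted defect above the million-dollar mark, purely to show how probing both realistic and unrealistic inputs exposes fragility.

```python
# A minimal sketch of boundary-value probing. format_order_total() is a
# hypothetical function with a deliberately planted defect, not code from
# any real product.

def format_order_total(cents: int) -> str:
    dollars = cents / 100
    if dollars > 1_000_000:        # the planted defect: wild values mishandled
        return "ERR"
    return f"${dollars:,.2f}"

def probe(values):
    """Report how each value fares, realistic or not."""
    return {cents: format_order_total(cents) for cents in values}

# Probe a realistic value, the boundary itself, and one step beyond it.
for cents, shown in probe([31_178, 100_000_000, 100_000_001]).items():
    print(f"{cents} -> {shown}")
```

A failure on the wild value is worth reporting even though no real order comes close: the same fragile code path may eventually mishandle a realistic one.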
At the very least, such errors suggest flaws in specification. “Grey” vs. “gray” represents a chance for specialists in requirements to think more carefully about what “color name” means and how named colors contribute to users’ experience with software. Sometimes the spelling difference is a crucial part of branding or function; sometimes it doesn’t matter at all. While testers aren’t authorized to make such decisions, they can be great at raising questions.
2. Tactical testing errors
Also common in testers’ daily experience are testing errors, or cases where a test fails but the tested software isn’t at fault. Specialists sometimes call these “false positives” or “Type I” errors. Teammates outside testing don’t want to hear about these, and, for the most part, they shouldn’t.
Testing errors can also be instructive, though. A performance evaluation that frequently fails because of differences in hardware, or an automated GUI checker that often complains about discrepancies a human reader considers inconsequential, is a symptom of fragile tests. There are probably ways to make the tests more intelligent, or to hook into the software at a level that makes it easier to isolate invariants.
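One common way to harden a hardware-sensitive performance test is to replace an absolute wall-clock threshold with a comparison against a baseline measured on the same machine in the same run. The sketch below assumes hypothetical operations; the names and the ratio threshold are illustrative.

```python
# A sketch of hardening a timing-sensitive test: instead of an absolute
# threshold ("must finish in 200 ms"), which varies with hardware, compare
# the operation under test to a baseline measured in the same run.

import time

def baseline_op():
    sum(range(10_000))            # illustrative cheap reference operation

def op_under_test():
    sum(range(50_000))            # illustrative operation being checked

def elapsed(fn, repeats=50):
    start = time.perf_counter()
    for _ in range(repeats):
        fn()
    return time.perf_counter() - start

def test_not_pathologically_slow(max_ratio=100):
    # Fails only if the operation is wildly slower than the baseline,
    # which is far less hardware-dependent than a fixed deadline.
    ratio = elapsed(op_under_test) / elapsed(baseline_op)
    assert ratio < max_ratio, f"ratio {ratio:.1f} exceeds {max_ratio}"

test_not_pathologically_slow()
```

The relative measure still catches genuine performance regressions while no longer flagging every slow build machine as a product defect.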
Testing errors generally shouldn’t be reported outside the testing team. There might well be calls to action within the testing team, though.
False negatives, or Type II errors, also happen. But for complicated historical and organizational reasons, the cases where tests don’t report defects that are present are often not labeled as errors.
3. Strategic testing errors
Testers also may encounter “mistakes” that those outside testing don’t recognize as errors. For instance, a test suite might work perfectly but may have accumulated so much technical debt that it’s nearly impossible to keep it “alive” when circumstances change. That fragility is an error but at a different level than interests the clients of the testing department—at least in the short run.
Testers need to be sensitive to these errors, but they should use them to update and correct their own practice, rather than report them as “real” errors.
4. Operational errors
What does the service or product do when a file system fills up, or DNS goes offline, or the organization’s security certificate expires?
Sometimes requirements explicitly specify these kinds of error-handling; in those cases, of course, testers are obliged to verify that the software correctly responds to resource starvation. Sometimes organizations decide that such events have their own pathways, and it’s OK for a few customers to see “Mysterious failure 17” because a special team is supposed to handle failure 17 and other surprises. Sometimes organizations lose track of their dependence on correctly configured firewalls, license servers, and all the other easily forgotten pieces that cooperate behind the scenes of modern software.
While testers should feel a professional responsibility to understand these faults, operational errors are at best secondary for nearly all testing teams. For the most part, it’s enough to report, “If the software encounters an otherwise undiagnosed error, then the end user sees standard error screen G,” or even, “… then what the end user sees is unpredictable.”
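When operational faults do need to be exercised, injecting the fault is usually more practical than producing it for real (nobody wants to fill a production disk). The sketch below uses Python's standard `unittest.mock` to simulate a full file system; `save_report` and its error message are hypothetical stand-ins for a real code path.

```python
# A sketch of fault injection: simulate resource starvation by making
# open() raise OSError, rather than actually exhausting the disk.
# save_report() and "standard error screen G" are illustrative.

from unittest import mock

def save_report(path: str, data: str) -> str:
    try:
        with open(path, "w") as f:
            f.write(data)
        return "saved"
    except OSError:
        # The behavior under test: a predictable, documented response.
        return "standard error screen G"

def test_disk_full():
    with mock.patch("builtins.open", side_effect=OSError("No space left on device")):
        assert save_report("report.txt", "...") == "standard error screen G"

test_disk_full()
print("ok")
```

The same pattern extends to DNS failures, expired certificates, and other environmental faults: patch the dependency, trigger the failure, and check that the user-facing response is the documented one.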
Notice that “operational” errors, in the sense used here, cannot be exhibited through automation of a GUI. They result from a change in the environment, rather than the inputs that are the usual focus of testing.
5. User errors
Finally, one of the kinds of errors that most deserves testers’ attention is user error. End users will upload a PDF rather than a JPEG whenever they can, just as their attempts to update personal settings frequently leave them with invisible or untypeable names and attributes.
If requirements explicitly specify what the user should see in such cases, so much the better. As a practical matter, testers almost always can find user errors that the product team hadn’t considered. Mishandled potential user errors are particularly important to report. While few organizations are as grateful for such reports as they ought to be, conscientious testing professionals recognize how much handling errors correctly improves end users’ experience. Make time to describe user errors.
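The PDF-instead-of-JPEG mistake mentioned above can be caught with a simple content check rather than trusting the filename, since users routinely rename files. This is a minimal sketch under that assumption; the function names and error messages are illustrative, not from any real product.

```python
# A sketch of defending against a common user error: uploading a PDF where
# a JPEG is expected. Checking the file's leading "magic bytes" catches the
# mistake even when the filename lies. All names here are illustrative.

def looks_like_jpeg(payload: bytes) -> bool:
    # JPEG files begin with the bytes FF D8 FF.
    return payload[:3] == b"\xff\xd8\xff"

def validate_upload(payload: bytes) -> str:
    if payload.startswith(b"%PDF-"):          # PDF files begin with "%PDF-"
        return "error: PDF uploaded; please supply a JPEG"
    if not looks_like_jpeg(payload):
        return "error: unrecognized image format"
    return "accepted"

print(validate_upload(b"%PDF-1.7 ..."))       # the user-error case
print(validate_upload(b"\xff\xd8\xff\xe0"))   # a plausible JPEG header
```

A tester who feeds such wrong-but-likely inputs to an upload form is exercising exactly the user errors the product team may not have considered.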
Testing efforts inevitably encounter more than just “real” product errors. Recognize ahead of time that testers’ effort and deliverables need to manage all five kinds of errors. It’s far better to make a deliberate decision to report only on “real” errors than to have it happen accidentally.