Managing 5 Common Types of Errors in Software Testing

Mar 1, 2022 | Test Automation Insights

In testing professionals’ language, “error” often plays the role of villain. After all, the purpose of testing is to cleanse software of errors.

As convenient as this simplification is, it’s ultimately misleading. The real aim of testing is more positive: to validate that a product meets its specifications, or, more broadly, to report on any findings that degrade customers’ experience. The role of an error in these higher business targets is more subtle.

Here are five distinct types of error that you may encounter during testing. Each of them deserves thoughtful, nuanced, and specific management.

1. “Real” errors

When someone in a testing department speaks of an “error” or “bug” with no other qualification, most often the focus is on a specific divergence between requirements and observed behavior. If calls to a 911 call center that arrive within ten seconds of the hour are always sent to voicemail, that’s an error. If employees who reside in Canada and whose start date was in August are never included in company reports on vacation usage, that’s an error.
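To make that concrete, here is a minimal sketch, in pytest style, of how the vacation-report divergence above might be pinned down as an automated check. The Employee fields and generate_vacation_report() are hypothetical stand-ins for whatever the product actually exposes; the trivial stub is included only so the example runs.

```python
# A minimal sketch; Employee and generate_vacation_report() are hypothetical
# stand-ins for the product's real data model and reporting API.
from dataclasses import dataclass
from datetime import date


@dataclass
class Employee:
    name: str
    country: str
    start_date: date
    vacation_days_used: int


def generate_vacation_report(employees):
    """Stand-in for the product's real report generator (hypothetical)."""
    return [{"name": e.name, "days_used": e.vacation_days_used} for e in employees]


def test_canadian_august_hires_appear_in_vacation_report():
    # Requirement: every employee appears in the vacation-usage report,
    # regardless of country of residence or start date.
    staff = [
        Employee("A. Tremblay", "Canada", date(2021, 8, 9), 5),
        Employee("B. Ortiz", "USA", date(2021, 3, 1), 7),
    ]
    reported_names = {row["name"] for row in generate_vacation_report(staff)}
    assert "A. Tremblay" in reported_names, (
        "Canadian employees with an August start date must appear "
        "in the vacation-usage report"
    )
```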

But even this simple category is not clear-cut. If a specification calls for color names to be written out, is sometimes using “grey” and sometimes using “gray” an error? If a user interface would mishandle customer orders in excess of $1,000,000, but no customer has ever ordered more than $4,300, is that an error?

The short answer: yes. Testing professionals generally should report all their findings in these categories, even the unrealistic ones.

The pedantic legalism that reports everything generally brings at least a couple of benefits. For one, an error that appears only with unrealistic inputs is frequently a symptom of a more pervasive error that simply hasn’t been observed yet. If all dollar values greater than a million are mishandled, perhaps a particular value such as $311.78 is also subject to the error. Errors on wild values draw programmers’ attention to fragile code segments that admit improvement.
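One way to act on that advice is to let the wild values ride along with everyday ones in the same parameterized check, as in this pytest-style sketch. format_order_total() is a hypothetical stand-in, with a trivial stub so the example runs, not any particular product’s API.

```python
# A sketch of keeping "unrealistic" boundary values in the same check as
# realistic ones; format_order_total() is hypothetical.
import pytest


def format_order_total(amount: float) -> str:
    """Stand-in for the product's formatting routine (hypothetical)."""
    return "${:,.2f}".format(amount)


@pytest.mark.parametrize(
    "amount, expected",
    [
        (311.78, "$311.78"),              # an everyday value
        (4_300.00, "$4,300.00"),          # the largest order ever actually placed
        (999_999.99, "$999,999.99"),      # just below the suspect boundary
        (1_000_000.00, "$1,000,000.00"),  # the boundary itself
        (1_234_567.89, "$1,234,567.89"),  # far beyond anything seen in production
    ],
)
def test_order_totals_format_consistently(amount, expected):
    # The requirement is identical for every value: a dollar sign,
    # thousands separators, and exactly two decimal places.
    assert format_order_total(amount) == expected
```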

At the very least, such errors suggest flaws in specification. “Grey” vs. “gray” represents a chance for specialists in requirements to think more carefully about what “color name” means and how named colors contribute to users’ experience with software. Sometimes the spelling difference is a crucial part of branding or function; sometimes it doesn’t matter at all. While testers aren’t authorized to make such decisions, they can be great at raising questions.

2. Tactical testing errors

Also common in testers’ daily experience are testing errors, or cases where a test fails but the tested software isn’t at fault. Specialists sometimes call these “false positives” or “Type I” errors. Teammates outside testing don’t want to hear about these, and, for the most part, they shouldn’t.

Testing errors can also be instructive, though. A performance evaluation that frequently fails because of differences in hardware, or an automated GUI (graphical user interface) checker that often complains about discrepancies a human reader sees as inconsequential, is a symptom of a fragile test. There probably are ways to make the tests more intelligent or to hook into the software at a level that makes it easier to isolate appropriate invariants.
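Here is a sketch of one such improvement, under the assumption that the noisy discrepancies are cosmetic text differences: normalize what a human reader would ignore before asserting, rather than demanding byte-for-byte equality. The normalization rules are illustrative, not a prescription.

```python
# A sketch of tolerating differences a human reader shrugs off; the specific
# normalization rules are assumptions for illustration.
import re


def normalize_label(text: str) -> str:
    """Collapse differences a human reader would not notice or care about."""
    text = text.replace("\u00a0", " ")        # treat non-breaking spaces as spaces
    text = re.sub(r"\s+", " ", text).strip()  # collapse and trim whitespace
    return text.casefold()                    # ignore letter case


def assert_labels_match(expected: str, actual: str) -> None:
    assert normalize_label(expected) == normalize_label(actual), (
        f"labels differ in a way a human would notice: {expected!r} vs {actual!r}"
    )


# Passes: the differences are exactly the kind a human reader ignores.
assert_labels_match("Submit Order", "  submit\u00a0order ")
```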

Testing errors generally shouldn’t be reported outside the testing team. There might well be calls to action within the testing team, though.

False negatives, or Type II errors, also happen. But for complicated historical and organizational reasons, the cases where tests don’t report defects that are present are often not labeled as errors. Discovery of an error through Customer Support illustrates one such complication: it often happens that a Support department receives a report of a defect that was unknown until a user reported it. This is certainly a Type II error.

Support departments are generally trained to verify such reports, and communicate them to product decision-makers. Few organizations, though, systematically communicate these findings to Testing or Quality Control (QC) departments. A channel for communication from Support to Product exists in typical organizations, but not from Support to QC. One result is that, even though Type II errors are known and managed by an organization as a whole, Testing might have no systematic record of any of them at all.

3. Strategic testing errors

Testers also may encounter “mistakes” that those outside testing don’t recognize as errors. For instance, a test suite might work perfectly but may have accumulated so much technical debt that it’s nearly impossible to keep it “alive” when circumstances change. That fragility is an error, but at a different level from the one that interests the clients of the testing department, at least in the short run.

A variation on this theme has to do with tooling. Think of a test suite that functions well, and even possesses so little technical debt that it can swiftly follow enhancements. This becomes a strategic error, however, if the tests depend on tools that can no longer be licensed, or whose licenses don’t cover the new environments into which the product is migrating.
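One defensive habit, sketched below, is to probe for the external tool before the suite runs and skip loudly rather than fail mysteriously when it is missing or unlicensed. The tool name some-gui-driver and its --version probe are made up for illustration.

```python
# A sketch of a preflight check for an external, licensed test tool.
# "some-gui-driver" and its --version flag are hypothetical.
import shutil
import subprocess

import pytest


def licensed_tool_available(tool: str = "some-gui-driver") -> bool:
    """Return True if the (hypothetical) external tool can actually be invoked."""
    if shutil.which(tool) is None:
        return False
    probe = subprocess.run([tool, "--version"], capture_output=True)
    return probe.returncode == 0


requires_gui_driver = pytest.mark.skipif(
    not licensed_tool_available(),
    reason="GUI driver missing or unlicensed in this environment",
)


@requires_gui_driver
def test_checkout_flow_via_gui():
    ...  # the GUI-driven scenario would live here
```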

Testers need to be sensitive to these errors, but they should use them to update and correct their own practice, rather than report them as “real” errors.

4. Operational errors

What does the service or product do when a file system fills up, or DNS goes offline, or the organization’s security certificate expires?

Sometimes requirements explicitly specify these kinds of error handling; in those cases, of course, testers have a requirement to verify that the software correctly responds to resource starvation. Sometimes organizations decide that such events have their own pathways, and it’s OK for a few customers to see “Mysterious failure 17” because a special team is supposed to handle failure 17 and other surprises. Sometimes organizations lose track of their dependence on correctly configured firewalls, license servers, and all the other easily forgotten pieces that cooperate behind the scenes of modern software.
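For teams that do need to verify this behavior, many environment failures can be simulated rather than provisioned. The pytest-style sketch below fakes a full file system; save_report() and the standard error screen are hypothetical names, with a stub standing in for the product’s own save routine.

```python
# A sketch of simulating resource starvation; save_report() and
# STANDARD_ERROR_SCREEN are hypothetical stand-ins.
import errno
from unittest import mock

STANDARD_ERROR_SCREEN = "error_screen_g"  # hypothetical generic failure page


def save_report(path: str, payload: bytes) -> str:
    """Stand-in for the product's save routine (hypothetical)."""
    try:
        with open(path, "wb") as handle:
            handle.write(payload)
    except OSError:
        return STANDARD_ERROR_SCREEN
    return "report_saved"


def test_full_disk_routes_user_to_standard_error_screen(tmp_path):
    disk_full = OSError(errno.ENOSPC, "No space left on device")
    with mock.patch("builtins.open", side_effect=disk_full):
        outcome = save_report(str(tmp_path / "report.bin"), b"data")
    assert outcome == STANDARD_ERROR_SCREEN
```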

While testers should feel a professional responsibility to understand these faults, operational errors are at best secondary for nearly all testing teams. For the most part, it’s enough to report, “If the software encounters an otherwise undiagnosed error, then the end user sees standard error screen G,” or even, “… then what the end user sees is unpredictable.”

Notice that “operational” errors, in the sense used here, cannot be exhibited through automation of a GUI. They result from a change in the environment, rather than the inputs that are the usual focus of testing.

5. User errors

Finally, one of the kinds of errors that most deserves testers’ attention is user errors. End users will upload a PDF rather than a JPEG whenever they can, just as their attempts to update personal settings frequently leave them with invisible or untypeable names and attributes.
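A sketch of what checking one such case might look like, with handle_upload() as a hypothetical stand-in, plus a trivial stub so the example runs, for the product’s real upload handler:

```python
# A sketch of verifying graceful handling of a predictable user error;
# handle_upload() is a hypothetical stand-in for the product's upload endpoint.
import os

ALLOWED_EXTENSIONS = {".jpg", ".jpeg"}


def handle_upload(filename: str) -> dict:
    """Stand-in for the product's upload handler (hypothetical)."""
    extension = os.path.splitext(filename)[1].lower()
    if extension not in ALLOWED_EXTENSIONS:
        return {
            "accepted": False,
            "message": f"Only JPEG images are supported; {extension or 'that file'} is not.",
        }
    return {"accepted": True, "message": "Upload complete."}


def test_pdf_upload_is_rejected_with_a_helpful_message():
    result = handle_upload("vacation_photo.pdf")
    assert result["accepted"] is False
    # A user error should produce guidance, not a stack trace or a silent failure.
    assert "JPEG" in result["message"]
```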

If requirements explicitly specify what the user should see in such cases, so much the better. Ideally, QC and Product cooperate during the definition of a project to specify requirements that cover not only user errors, but also documentation, testability, and other product dimensions often neglected. As a practical matter, though, testers can almost always identify, near the time of release, user errors that the product team hadn’t considered. Mishandled potential user errors are particularly important to report. While few organizations are as grateful for such reports as they ought to be, conscientious testing professionals recognize how much correct handling of errors improves end users’ experience. Make time to describe and track user errors.

Plan ahead

Testing efforts inevitably encounter more than just “real” product errors. Recognize ahead of time that testers’ effort and deliverables need to manage all five kinds of errors. It’s far healthier to make a deliberate decision to report only on “real” errors than to have it happen accidentally. A sufficiently mature test team explicitly captures all these errors, then reports to other parts of the organization only the categories that are meaningful to the different audiences.

