Test Automation Best Practice #6: Resolve Failing Test Cases

Oct 8, 2021 | Best Practices, Test Automation Insights

Introduction

Even when test cases have been carefully designed to be stable and maintainable, test failures can happen. There are several possible uses of the term “test failure,” so let’s distinguish between them:

A negative test case

This is a test case that you expect to return an error from the application under test (AUT), such as an invalid password. This type of test case succeeds when it returns the expected error message.

A test case that uncovers a defect in the application

This is actually a successful test case. After all, identifying defects is one of the main goals of software testing. A defect can be defined as any difference between how the application is intended to behave and its observed behavior in testing. The definition of a “defect” also includes unhandled user errors, such as when a web page fails to respond appropriately if the user enters invalid data or presses the wrong key. When a test case returns a “real” error or defect, it’s a best practice to add it to your regression test set to ensure the defect doesn’t return in future releases.

A test case that fails for a reason unrelated to the functionality of the application

This is the meaning of the term “failed test case” as used in this article. Causes for failed test cases can include false positives, false negatives, errors due to environment or setup issues, and failures due to fragile or flaky test automation.
How often do test failures occur in the real world? Recently, Ranorex partnered with Pulse Research to ask technology leaders about the frequency of test failures. Approximately 60% of the reported test failures were either “real defects” or “unhandled user errors,” which are not test failures for the purposes of this article.
This article focuses on the 28% of test failures that result from issues such as missing or invalid test data, problems with the test environment, a bug in the test automation code, or changes in the AUT that are not defects. If the cause of a test failure is not immediately clear, you may need to troubleshoot the test case itself before reporting a defect in the application.

It may be tempting to simply re-run a failed test case to see if it passes. But a test case that passes sometimes and fails on other occasions for no discernible reason is a “flaky,” unreliable test case. It’s important to resolve the issue that caused it to fail so that you can have confidence in the results of your automated testing.

Configure test runs to assist debugging

An earlier article in this series, “Build Maintainable Tests,” described best practices for designing test cases that make them more stable and less likely to fail. These included eliminating dependencies between test cases as much as possible, ensuring that your test environment is stable, and removing tests that you expect to fail (such as ones for unresolved defects) from the test run. It is also helpful to configure your test cases to take a screenshot when a failure occurs, as in the sketch below.
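
Ranorex Studio can capture screenshots on failure through its report settings. If you are building this behavior yourself in another stack, the following is a minimal sketch using pytest and Selenium; the fixture name “driver” and the “screenshots” folder are assumptions for illustration, not part of any specific tool.

    # conftest.py - a minimal sketch of "screenshot on failure" with pytest + Selenium
    import os
    import pytest

    @pytest.hookimpl(hookwrapper=True)
    def pytest_runtest_makereport(item, call):
        outcome = yield
        report = outcome.get_result()
        # Act only on the test body ("call" phase), and only when it failed.
        if report.when == "call" and report.failed:
            driver = item.funcargs.get("driver")  # hypothetical Selenium fixture
            if driver is not None:
                os.makedirs("screenshots", exist_ok=True)
                # Name the screenshot after the failing test for easy lookup.
                driver.save_screenshot(os.path.join("screenshots", f"{item.name}.png"))

A screenshot taken at the moment of failure is often enough to distinguish a genuine defect from a test that simply lost its place in the UI.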

In addition to these recommendations, be sure to configure the test run to handle failures appropriately. Only allow a failing test to stop the entire test run if that makes sense for the situation – for example, if the application fails to launch, or smoke tests fail. Ranorex Studio’s modular approach to test case design includes several options for continuing after a test case returns an error, including “continue with iteration,” “continue with sibling,” and “continue with parent.” You can also automatically retry a failed test case. To learn more, read the Ranorex User Guide chapter on the Ranorex Test Suite.
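
Ranorex handles automatic retries through the test suite settings described above. Outside of Ranorex, the same idea can be sketched as a small, framework-agnostic decorator; the names retry, max_attempts, and delay_seconds are illustrative only.

    # A minimal sketch of automatically retrying a failed test case.
    import functools
    import time

    def retry(max_attempts=2, delay_seconds=1.0):
        """Re-run the wrapped test up to max_attempts times before reporting failure."""
        def decorator(test_func):
            @functools.wraps(test_func)
            def wrapper(*args, **kwargs):
                last_error = None
                for _ in range(max_attempts):
                    try:
                        return test_func(*args, **kwargs)
                    except AssertionError as error:
                        last_error = error
                        time.sleep(delay_seconds)  # give the environment a moment to settle
                raise last_error  # every attempt failed: surface the original failure
            return wrapper
        return decorator

    @retry(max_attempts=2)
    def test_login_page_loads():
        ...  # the actual test steps go here

Treat automatic retries as a safety net for known transient conditions, not as a way to paper over flaky tests.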

It’s also important to manage the size of test run reports by focusing only on true errors and failures. For example, Ranorex supports multiple pre-defined report levels, including “debug,” “information,” “warning,” and “success.” In a large test run, reporting at these lower levels may result in an excessive amount of data. Consider reporting results only for the “error” and “failure” levels to make it easier to spot true problems that need to be resolved.
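
The report levels above are Ranorex-specific, but the principle applies to any framework. As a generic sketch, here is the same idea expressed with Python’s standard logging module, where the threshold is raised so that only errors reach the report; the file name and logger name are placeholders.

    # A minimal sketch of keeping test-run output focused on real problems.
    import logging

    logging.basicConfig(
        filename="test_run.log",
        level=logging.ERROR,  # suppress debug/info/warning detail in large runs
        format="%(asctime)s %(levelname)s %(name)s: %(message)s",
    )

    log = logging.getLogger("checkout_tests")
    log.info("Navigated to checkout page")              # filtered out at ERROR level
    log.error("Expected order total 59.90, got 0.00")   # kept: a true failure to investigate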

Isolate the problem

If many test cases are failing, look for a problem with the environment, test framework, or the AUT.

Environment

Issues with the environment can include required services not running, or the tests not being executed with administrative privileges when these are required.

Test Framework

Look for issues with the test framework, such as a licensing error, or a remote agent not configured properly.

Application Under Test

Verify that the AUT is prepared correctly. This can include issues such as location-specific system settings, the wrong browser version, or even a different system language. Or, there could be a pending OS update that blocks the user interface. A simple precondition check, as sketched below, can catch several of these issues before the run starts.
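
As a generic illustration of checking such preconditions, the sketch below probes a backing service and the expected browser before any test runs; the host name, port, and browser are placeholders for your own environment.

    # A minimal sketch of failing fast when the environment or AUT is not prepared.
    import shutil
    import socket

    REQUIRED_SERVICE = ("test-db.internal.example", 5432)  # hypothetical backing service
    REQUIRED_BROWSER = "chrome"                             # hypothetical expected browser

    def check_preconditions():
        problems = []
        # Environment: is the required service reachable at all?
        try:
            socket.create_connection(REQUIRED_SERVICE, timeout=3).close()
        except OSError:
            problems.append(f"{REQUIRED_SERVICE[0]}:{REQUIRED_SERVICE[1]} is not reachable")
        # AUT/host: is the expected browser installed?
        if shutil.which(REQUIRED_BROWSER) is None:
            problems.append(f"Browser '{REQUIRED_BROWSER}' not found on PATH")
        return problems

    if __name__ == "__main__":
        issues = check_preconditions()
        if issues:
            raise SystemExit("Aborting test run:\n" + "\n".join(issues))
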
If most test cases in your test run have succeeded, then suspect issues with the individual failing test case(s). There may be an error message that points to the cause. If not, don’t just assume that the test case failed “accidentally” and re-run it. All test failures happen for a reason. A test case that appears to succeed or fail for no discernible reason is a “flaky” test. To get to the root of the problem, refer to the probable-cause checklist below.

Troubleshoot failed test cases

Work through a probable-cause checklist to troubleshoot each failed test case, asking questions such as the following:

  • Is the test case up-to-date with the AUT? For example, has the test case been updated with any/all changes in UI elements?
  • Is the input data correct and available to the test?
  • Are all parameters set correctly?
  • Are the expected results valid? Does the test case expect a single valid result, but the application returns multiple valid results?
  • Does the test case have any dependencies on earlier test cases that might have caused the problem? To avoid this situation, make test cases as modular and independent of each other as possible, as described in the blog article Build Maintainable Tests.
  • Did the teardown of the most recent test run work correctly? Is the AUT in the correct state, for example, with all browser windows closed? Has all the data entered during the last test run been deleted or reset?
  • Is there a timing issue? A study of flaky tests done by the University of Illinois at Urbana-Champaign found that flaky tests are often caused by asynchronous waits: the test fails because the AUT doesn’t return the expected result fast enough. In this case, it may be necessary to add a wait time to the test case step so that it doesn’t fail unnecessarily. For more information on how this works in Ranorex, refer to the user guide chapter Waiting for UI Elements; a generic example of an explicit wait appears after this list.
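
As a generic example of the last point, the sketch below replaces a fixed delay with an explicit wait using the Selenium WebDriver Python bindings; the URL, element ID, and expected value are illustrative only.

    # A minimal sketch of waiting for a UI element instead of sleeping for a fixed time.
    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support import expected_conditions as EC
    from selenium.webdriver.support.ui import WebDriverWait

    driver = webdriver.Chrome()
    driver.get("https://example.com/orders")  # placeholder URL

    # Instead of time.sleep(5), wait up to 10 seconds for the result to actually appear.
    order_total = WebDriverWait(driver, 10).until(
        EC.visibility_of_element_located((By.ID, "order-total"))
    )
    assert order_total.text == "59.90"
    driver.quit()

An explicit wait fails only if the element never appears within the timeout, which removes the timing-related flakiness without slowing down runs where the AUT responds quickly.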

Use your debugging tools

Make use of the tools available to you that may help resolve failing test cases. For example, Ranorex Studio provides several tools to assist in troubleshooting failed test cases, including the following:

Debugger

This tool allows you to set breakpoints and step through a failed test case, examining the value of variables and expressions for each statement.

Maintenance Mode

This tool allows you to identify and repair failing test cases directly from the test run report. Learn more in the Ranorex article on maintenance mode.

Ranorex Remote

This is a great tool for troubleshooting test failures that occur on virtual machines. Use the Ranorex Remote Agent to update a run configuration to perform only the steps necessary to reach the point just before the failure occurred, so that the AUT is in the correct state. Then, connect to the virtual machine and troubleshoot the failed test case, as described in the blog article How to Reconstruct Failed Test Cases in CI Systems.

Video Reporting

In Ranorex Studio, you can enable the video reporting feature to automatically produce a video of either all of the test cases in your test suite or just the ones that failed. With video reporting enabled, you can see exactly what caused a test run to fail.
Taking the time to resolve your failed test cases, and to learn from the failures, will help make your entire test suite more reliable.
