Introduction
Even when test cases have been carefully designed to be stable and maintainable, test failures can happen. There are several possible uses of the term “test failure,” so let’s distinguish between them:
- A negative test case, which is expected to "fail" by design (for example, by verifying that the application rejects invalid input)
- A test case that uncovers a defect in the application
- A test case that fails for a reason unrelated to the functionality of the application
It may be tempting simply to re-run a failed test case to see if it passes. But a test case that sometimes passes and sometimes fails for no discernible reason is a "flaky," unreliable test case. It's important to resolve the underlying issue that caused the failure so that you can have confidence in the results of your automated testing.
Configure test runs to assist debugging
An earlier article in this series, "Build Maintainable Tests," described best practices for designing test cases that make them more stable and less likely to fail. These included eliminating dependencies between test cases as much as possible, ensuring that your test environment is stable, and removing tests that you expect to fail (such as tests for known, unresolved defects) from the test run. It is also helpful to configure your test cases to take a screenshot when a failure occurs.
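Ranorex can capture screenshots for you; if you are writing test code by hand, the following is a minimal sketch of the same idea using NUnit on Windows. The fixture name is hypothetical, and the capture relies on System.Drawing and Windows Forms, which assumes a desktop session.

```csharp
using System.Drawing;
using System.Drawing.Imaging;
using System.Windows.Forms;   // for Screen; requires a desktop session
using NUnit.Framework;
using NUnit.Framework.Interfaces;

[TestFixture]
public class CheckoutTests   // hypothetical fixture
{
    [TearDown]
    public void CaptureScreenshotOnFailure()
    {
        // Only capture when the test actually failed.
        if (TestContext.CurrentContext.Result.Outcome.Status != TestStatus.Failed)
            return;

        var bounds = Screen.PrimaryScreen.Bounds;
        using (var bitmap = new Bitmap(bounds.Width, bounds.Height))
        using (var graphics = Graphics.FromImage(bitmap))
        {
            // Grab the full primary screen at the moment of failure.
            graphics.CopyFromScreen(bounds.Location, Point.Empty, bounds.Size);
            var file = $"{TestContext.CurrentContext.Test.Name}_failure.png";
            bitmap.Save(file, ImageFormat.Png);
            TestContext.AddTestAttachment(file);   // surfaces the image in the report
        }
    }
}
```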
In addition to these recommendations, be sure to configure the test run to handle failures appropriately. Allow a failing test to stop the entire test run only when that makes sense for the situation, such as when the application fails to launch or the smoke tests fail. Ranorex Studio's modular approach to test case design includes several options for continuing after a test case returns an error, including "continue with iteration," "continue with sibling," and "continue with parent." You can also automatically retry a failed test case. To learn more, read the Ranorex User Guide chapter on the Ranorex Test Suite.
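Ranorex Studio exposes retries through its test suite settings; in hand-written .NET tests, NUnit's built-in Retry attribute provides a comparable safety net. A minimal sketch (the test name is hypothetical):

```csharp
using NUnit.Framework;

[TestFixture]
public class LoginTests
{
    // Re-run the test up to 3 times before marking it as failed.
    // Note: NUnit retries assertion failures, not unexpected exceptions.
    [Test, Retry(3)]
    public void UserCanLogIn()
    {
        // ... drive the UI and assert on the outcome ...
        Assert.Pass();
    }
}
```

Keep in mind that a test which passes only on retry is still flaky; treat the retry as a diagnostic aid, not a fix.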
It's also important to manage the size of test run reports so that true errors and failures stand out. For example, Ranorex supports multiple pre-defined report levels, including "debug," "information," "warning," and "success." In a large test run, reporting at those levels can produce an excessive amount of data. Consider reporting results only at the "error" and "failure" levels to make genuine problems easier to spot.
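How you raise the minimum report level depends on your toolchain. As one illustration outside Ranorex's own reporting, here is a sketch using the Serilog logging library (assuming the Serilog, Serilog.Sinks.Console, and Serilog.Sinks.File packages):

```csharp
using Serilog;

class ReportSetup
{
    static void Main()
    {
        // Raise the minimum level so that "debug", "information", and
        // "warning" messages are suppressed and only errors stand out.
        Log.Logger = new LoggerConfiguration()
            .MinimumLevel.Error()
            .WriteTo.Console()
            .WriteTo.File("test-run.log")
            .CreateLogger();

        Log.Information("Filtered out: never reaches the report.");
        Log.Error("Recorded: errors and fatals always reach the report.");

        Log.CloseAndFlush();
    }
}
```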
Isolate the problem
If many test cases are failing at once, look for a problem in one of these areas:
- Environment
- Test framework
- Application under test (AUT)
Troubleshoot failed test cases
Work through a probable-cause checklist to troubleshoot each failed test case, asking questions such as the following:
- Is the test case up to date with the AUT? For example, has the test case been updated to reflect all changes to UI elements?
- Is the input data correct and available to the test?
- Are all parameters set correctly?
- Are the expected results valid? For example, does the test case expect a single valid result when the application can legitimately return several?
- Does the test case depend on earlier test cases that might have caused the problem? To avoid this situation, make test cases modular and independent of one another, as described in the blog article Build Maintainable Tests.
- Did the teardown of the most recent test run work correctly? Is the AUT in the correct state, for example, with all browser windows closed? Has all the data entered during the last test run been deleted or reset?
- Is there a timing issue? A study of flaky tests done by the University of Illinois at Urbana-Champaign found that flaky tests are often caused by asynchronous waits: the test fails because the AUT doesn't return the expected result quickly enough. In this case, it may be necessary to add a wait to the test step so that it doesn't fail unnecessarily; a generic sketch of such a wait appears after this list. For more information on how this works in Ranorex, refer to the user guide chapter Waiting for UI Elements.
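Ranorex handles timing with built-in waits for UI elements, per the user guide chapter referenced above. For hand-written test code, the following is a minimal, generic sketch of a polling wait, assuming you have some way to query the UI state; FindResultLabel is a hypothetical lookup in your own UI layer, not a real API.

```csharp
using System;
using System.Diagnostics;
using System.Threading;

public static class Wait
{
    // Polls a condition until it becomes true or the timeout elapses,
    // instead of failing immediately or sleeping for a fixed interval.
    public static void Until(Func<bool> condition, TimeSpan timeout, TimeSpan? poll = null)
    {
        var interval = poll ?? TimeSpan.FromMilliseconds(250);
        var stopwatch = Stopwatch.StartNew();
        while (!condition())
        {
            if (stopwatch.Elapsed > timeout)
                throw new TimeoutException($"Condition not met within {timeout.TotalSeconds}s.");
            Thread.Sleep(interval);
        }
    }
}

// Usage: wait up to 10 seconds for a result to appear before asserting on it.
// Wait.Until(() => FindResultLabel() != null, TimeSpan.FromSeconds(10));
```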
Use your debugging tools
Take advantage of the debugging tools available to you. For example, Ranorex Studio provides several features to assist in troubleshooting failed test cases, including the following:
- Debugger
- Maintenance Mode
- Ranorex Remote
- Video Reporting