Even when test cases have been carefully designed to be stable and maintainable, test failures can happen. There are several possible uses of the term “test failure,” so let’s distinguish between them:
- A negative test case
- A test case that uncovers a defect in the application
- A test case that fails for a reason unrelated to the functionality of the application
It may be tempting to simply re-run a failed test case to see if it passes. But a test case that sometimes passes and sometimes fails for no discernible reason is a “flaky,” unreliable test case. It’s important to resolve the underlying cause of the failure so that you can have confidence in the results of your automated testing.
Configure test runs to assist debugging
An earlier article in this series, “Build Maintainable Tests,” described best practices for designing test cases that make them more stable and less likely to fail. These included eliminating dependencies between test cases as much as possible, ensuring that your test environment is stable, and removing tests that you expect to fail (such as ones for unresolved defects) from the test run. It is also helpful to configure your test cases to take a screenshot when a failure occurs.
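In Ranorex Studio, screenshot-on-failure is a configuration option, but the idea generalizes to any framework. Here is a minimal Python sketch; `take_screenshot` is a hypothetical stand-in for whatever capture API your tooling provides:

```python
def run_with_screenshot(test_fn, take_screenshot):
    """Run a test; if it raises, capture a screenshot before re-raising.

    `take_screenshot` is a placeholder for your framework's capture call.
    Taking the screenshot before re-raising preserves evidence of the
    application's state at the moment of failure.
    """
    try:
        test_fn()
    except Exception:
        take_screenshot("failure.png")
        raise
```

The key design point is that the failure is still re-raised: the screenshot aids debugging, but the test result itself is unchanged.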
In addition to these recommendations, be sure to configure the test run to handle failures appropriately. Only allow a failing test to stop the entire test run if that makes sense for the situation – for example, if the application fails to launch, or smoke tests fail. Ranorex Studio’s modular approach to test case design includes several options for continuing after a test case returns an error, including “continue with iteration,” “continue with sibling,” and “continue with parent.” You can also automatically retry a failed test case. To learn more, read the Ranorex User Guide chapter on the Ranorex Test Suite.
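Ranorex provides retry-on-failure natively through test suite settings. As a language-agnostic sketch of the same idea (the helper name and return shape here are illustrative, not a Ranorex API), an automatic retry looks like this:

```python
def run_with_retry(test_fn, max_attempts=2):
    """Run a test function, retrying on failure up to max_attempts times.

    Returns (passed, attempts). A test that only passes on a retry is
    worth flagging for investigation as potentially flaky.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            test_fn()
            return True, attempt
        except AssertionError:
            if attempt == max_attempts:
                return False, attempt
```

Recording the attempt count, rather than silently retrying, is what lets you spot flaky tests instead of masking them.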
It’s also important to manage the size of test run reports by focusing only on true errors and failures. For example, Ranorex supports multiple pre-defined report levels, including “debug,” “information,” “warning,” and “success.” In a large test run, reporting at the more verbose levels may produce an excessive amount of data. Consider reporting results only for the “error” and “failure” levels to make it easier to spot true problems that need to be resolved.
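The same level-filtering idea can be shown with Python’s standard `logging` module (the logger and handler names below are illustrative): tests may emit verbose output, but the report handler keeps only errors and above.

```python
import logging

# Illustrative setup: the test code can log at any level...
logger = logging.getLogger("test_run")
logger.setLevel(logging.DEBUG)
logger.propagate = False  # keep output confined to our report handler

# ...but the report handler only admits ERROR and above.
report = logging.StreamHandler()
report.setLevel(logging.ERROR)
logger.addHandler(report)

logger.info("step passed")                   # filtered out of the report
logger.error("element 'Submit' not found")   # appears in the report
```

Filtering at the handler (report) rather than the logger means the verbose detail is still available if you later attach a debug handler while troubleshooting.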
Isolate the problem
If many test cases are failing, look for a problem with the environment, the test framework, or the application under test (AUT).
Troubleshoot failed test cases
Work through a probable-cause checklist to troubleshoot each failed test case, asking questions such as the following:
- Is the test case up-to-date with the AUT? For example, has the test case been updated with any/all changes in UI elements?
- Is the input data correct and available to the test?
- Are all parameters set correctly?
- Are the expected results valid? Does the test case expect a single valid result, but the application returns multiple valid results?
- Does the test case have any dependencies on earlier test cases that might have caused the problem? To avoid this situation, make test cases modular and independent of each other, as described in the blog article Build Maintainable Tests.
- Did the teardown of the most recent test run work correctly? Is the AUT in the correct state, for example, with all browser windows closed? Has all the data entered during the last test run been deleted or reset?
- Is there a timing issue? A study of flaky tests done by the University of Illinois at Urbana-Champaign found that flaky tests are often caused by asynchronous waits: the test fails because the AUT doesn’t return the expected result fast enough. In this case, it may be necessary to add a wait time to the test case step so that it doesn’t fail unnecessarily. For more information on how this works in Ranorex, refer to the user guide chapter Waiting for UI Elements.
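The asynchronous-wait point in the checklist above can be sketched as a simple polling helper (a generic illustration, not Ranorex’s implementation, which is configured per UI element): instead of a fixed sleep, poll for the expected condition up to a timeout.

```python
import time

def wait_for(condition, timeout=10.0, interval=0.5):
    """Poll `condition` until it returns a truthy value or the timeout expires.

    An explicit wait like this keeps a test from failing merely because
    the AUT responds more slowly than expected, without padding every
    step with a worst-case fixed delay.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError("condition not met within %.1f seconds" % timeout)
```

A polling wait returns as soon as the condition is met, so the test runs at full speed when the AUT is fast and only slows down when it must.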
Use your debugging tools
Make use of the tools available to you that may help resolve failing test cases. For example, Ranorex Studio provides several tools to assist in troubleshooting failed test cases, including the following: