One of the top challenges in test automation is maintaining existing automated tests as the UI changes. Another challenge is identifying and improving flaky tests – tests that pass sometimes and fail at other times for reasons unrelated to the application under test (AUT). This article describes approaches to test case design, coding, and execution that help manage these challenges and reduce the time spent on test maintenance.
Design tips to minimize maintenance
Decide what to test before you decide how to test it.
A well-designed test case is less likely to need future maintenance. Begin by identifying each function to be tested, and then break down each test for that function into a sequence of simple steps with a clear definition of the expected results. Only after this is complete should you decide which tests will be done manually and which will benefit from automation. Refer to the first article in this series for suggestions on the best types of test cases to automate.
Document your test cases.
Documentation can help ensure that each test case has been well-designed with preconditions for the test, steps to execute, and expected results. It can be very helpful to use a test case template or a tool such as TestRail to organize your test case documentation.
Keep it simple.
Ideally, each test case should check a single function and should fail for only one reason. Complex test cases are more likely to be flaky. If you find that a test case requires many steps, consider dividing it into two or more test cases.
Use naming standards.
The names of UI elements and test objects should be self-explanatory. If you find that comments are necessary to document a given test case or test step, consider whether the test case is too complex and needs to be simplified.
Coding tips to minimize maintenance
Use a modular structure.
A given test case should be able to run independently of other test cases. As much as possible, a test case should not be dependent on the outcome of an earlier test. For example, a test case that verifies payment processing for a web store should not be dependent on a previous test case that puts items in a user’s shopping cart. Keeping your test cases independent will not only make your tests easier to maintain, but will also allow you to take advantage of parallel or distributed execution. However, if you do find that a test case dependency is unavoidable, then use the available features of your test automation framework to ensure that the dependent tests are executed in the proper order.
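A minimal sketch of this independence principle, using Python's standard `unittest` module. The `Store` class and its methods are purely illustrative stand-ins for the AUT; the point is that the payment test establishes its own cart precondition instead of relying on a cart test having run first.

```python
import unittest

# Hypothetical in-memory store standing in for the AUT; the class and
# method names are illustrative, not part of any real framework.
class Store:
    def __init__(self):
        self.cart = []

    def add_item(self, item):
        self.cart.append(item)

    def checkout(self):
        if not self.cart:
            raise ValueError("cart is empty")
        return {"status": "paid", "items": list(self.cart)}

class CartTests(unittest.TestCase):
    def test_add_item(self):
        store = Store()            # fresh state: no dependency on other tests
        store.add_item("book")
        self.assertEqual(store.cart, ["book"])

class PaymentTests(unittest.TestCase):
    def test_checkout(self):
        store = Store()
        store.add_item("book")     # establish the precondition locally,
        result = store.checkout()  # instead of relying on CartTests
        self.assertEqual(result["status"], "paid")
```

Because each test builds its own state, `CartTests` and `PaymentTests` can run in any order, or in parallel.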
Create automated tests that are resistant to UI changes.
To keep your automated tests working even when there are changes in the user interface, don’t rely on location coordinates to find a UI object. In addition, if you are testing a web application, avoid relying on the HTML structure of the page or dynamic IDs to locate UI elements. Ideally, the developers of your AUT will include unique IDs for the UI elements in your application. But even if they don’t, the Ranorex Spy tool automatically applies best practices in UI element identification to help make your tests more stable.
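The locator-preference idea can be sketched as a small helper that always picks the most change-resistant locator available. The element model (a plain dict) and the attribute names are assumptions for illustration only; real frameworks such as Ranorex or Selenium implement this internally.

```python
# Illustrative sketch of preferring stable locators over brittle ones.
# The dict-based element model and the "auto-" dynamic-ID prefix are
# hypothetical conventions, not a real framework API.
def best_locator(element):
    """Return the most change-resistant locator available for an element."""
    if element.get("id") and not element["id"].startswith("auto-"):  # skip dynamic IDs
        return ("id", element["id"])
    if element.get("name"):
        return ("name", element["name"])
    if element.get("text"):
        return ("text", element["text"])
    # Last resort: a structural path, the locator most likely to break
    # when the page layout changes.
    return ("xpath", element.get("path", "/"))
```

A stable, unique `id` wins; a structural XPath is used only when nothing better exists.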
Group tests by functional area.
Group your test cases according to the functional area of the application covered. This will make it easier to update related test cases when a functional area is modified, and also allow you to execute a partial regression suite for that functional area. If you are using Ranorex Studio, you can create reusable user code methods and store them in a central library. Then, you can organize the user code methods for a functional area into a collection. Both user code methods and collections can have a description that appears in the user code library, to help testers select the right user code method for a test.
Don't copy-and-paste test code.
Instead of repeating the same steps in multiple tests, create reusable modules. For example, you should only have one module that launches your application. Reuse that module in your other test cases. Then, if the process to launch the application changes, you will only need to update that one module. With Ranorex Studio’s support for keyword-driven testing, local and global parameters, and conditional test execution, you can easily build sophisticated test cases from your individual test modules.
Separate test steps from test data.
Avoid hard-coding data values into your automated tests. Instead, store the data values for your tests in an external file and pass them to your tests using variables or parameters. Read more about data-driven testing in the Ranorex User Guide.
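A minimal data-driven sketch using the standard `csv` module. In practice the CSV would be a file on disk; `io.StringIO` keeps the example self-contained. The login logic and credentials are hypothetical.

```python
import csv
import io

# Test data lives outside the test logic, one row per test iteration.
TEST_DATA = """username,password,expected
alice,correct-horse,success
bob,wrong,failure
"""

def check_login(username, password):
    # Stand-in for the real login step of the AUT (hypothetical rule).
    return "success" if password == "correct-horse" else "failure"

def run_data_driven():
    # Execute the same test steps once per data row.
    results = []
    for row in csv.DictReader(io.StringIO(TEST_DATA)):
        outcome = check_login(row["username"], row["password"])
        results.append(outcome == row["expected"])
    return results
```

Adding a new test scenario now means adding a CSV row, not writing new code.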
Use source control.
Developers use source control tools such as Git, Subversion, and Microsoft TFS to collaborate on application code and to revert to earlier versions of the application code when necessary. If possible, you should use the same source control tool that manages your application code to manage the code for your automated tests.
Execution tips to minimize maintenance
Ensure that your test environment is stable.
Unreliable servers or network connections can cause otherwise stable tests to fail. Consider using a mock server to eliminate potential points of failure that are not related to the AUT itself.
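As a sketch of the mock-server idea, the example below starts a tiny local HTTP server using only the standard library, so a test can exercise its logic against a predictable endpoint instead of a flaky backend. The `/health` path and JSON payload are illustrative.

```python
import http.server
import threading
import urllib.request

# Minimal local mock standing in for an unreliable backend service.
class MockHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b'{"status": "ok"}'
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass

def start_mock_server():
    # Port 0 asks the OS for any free port, avoiding collisions.
    server = http.server.HTTPServer(("127.0.0.1", 0), MockHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def fetch_status(server):
    url = f"http://127.0.0.1:{server.server_port}/health"
    with urllib.request.urlopen(url, timeout=5) as resp:
        return resp.status
```

The mock always answers the same way, so a failure in the test points at the AUT or the test itself, not at the network.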
Use setup and teardown processes.
Use setup processes to handle preconditions and ensure that the AUT is in the correct state for the tests to run. A setup process will typically handle launching the application, logging in, loading test data, and any other preparation necessary for the test. Use teardown processes to return the AUT to the proper state after the test run completes, including cleaning up any test data.
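A minimal sketch of this pattern with Python's `unittest`, where `setUp` runs before each test and `tearDown` after it. The `app` dict is a stand-in for launching, logging in to, and cleaning up the AUT.

```python
import unittest

# Sketch of setup/teardown; the "app" dict models the AUT's state.
class CheckoutTests(unittest.TestCase):
    def setUp(self):
        # Preconditions: launch the app, log in, load test data.
        self.app = {"running": True, "user": "tester", "records": ["order-1"]}

    def tearDown(self):
        # Return the AUT to a clean state: remove test data, close the app.
        self.app["records"].clear()
        self.app["running"] = False

    def test_has_test_data(self):
        self.assertEqual(self.app["records"], ["order-1"])

    def test_is_logged_in(self):
        self.assertEqual(self.app["user"], "tester")
```

Because setup runs fresh for every test, no test inherits leftover state from the one before it.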
Fail fast.
Another key principle of efficient test design is to “fail fast.” If there is a serious issue with the application that should stop testing, identify and report that issue immediately rather than allowing the test run to continue. Set reasonable timeout values to limit the time that your test spends searching for UI elements.
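The bounded-wait idea can be sketched as a small polling helper: it retries a condition for a limited time, then gives up and reports failure instead of letting the run hang on a missing UI element. The function name and parameters are illustrative; UI frameworks provide equivalents (e.g., explicit waits).

```python
import time

# Sketch of a bounded wait: poll a condition, but give up after a
# timeout instead of stalling the whole test run.
def wait_for(condition, timeout=3.0, interval=0.1):
    """Return True if condition() becomes truthy within timeout seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False  # fail fast: report the missing element, don't hang
```

A test would call it as `assert wait_for(lambda: page.has_element("login"))` (hypothetical page object), turning a potential hang into a quick, reportable failure.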
Fail only when necessary.
Allow your entire test run to fail only when necessary. Stopping a test run after a single error potentially wastes time, and leaves you with no way of knowing whether the other test cases in the run would have succeeded. So, in addition to giving you the ability to stop a test run after an error, Ranorex Studio offers three options for continuing after an error: continue with iteration, continue with sibling, and continue with parent. Read more about these options in the Ranorex User Guide.
Isolate expected failures.
Execute only the automated tests that you expect to succeed. If you have tests for a defect that hasn’t been resolved, remove those test cases from your main test run and execute them separately. This will make it easier to determine if there are real issues in the main test run. Likewise, remove any flaky tests from the main test run, and perform manual testing to cover that functionality.
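One way to keep a known, unresolved defect from turning the main run red is to mark its test as an expected failure, as sketched below with `unittest.expectedFailure`. The defect ID is hypothetical; the deliberately failing assertion exploits the fact that `round(2.675, 2)` yields `2.67` in Python because of binary floating-point representation.

```python
import unittest

# Sketch: a test for a known, unresolved defect is marked as an
# expected failure so it does not pollute the main run's results.
class KnownDefects(unittest.TestCase):
    @unittest.expectedFailure
    def test_discount_rounding(self):  # tracked as hypothetical BUG-1234
        # Fails today: 2.675 is stored as slightly less than 2.675,
        # so round() produces 2.67 rather than 2.68.
        self.assertEqual(round(2.675, 2), 2.68)
```

The run still records the failure (and flags it if the bug is ever fixed and the test unexpectedly passes), but the overall result stays green.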
Simplify debugging.
Configure your automated tests to capture screenshots and use your reporting mechanism to provide detailed information that will assist in troubleshooting a failed test. Ranorex Studio includes a maintenance mode that allows you to pause a test run so that you can diagnose and resolve errors directly during the test run. To see the maintenance mode in action, watch the screencast below.
Following these tips will help you build maintainable test cases, so that you only need to modify the minimum possible number of existing test cases when the application changes. Building maintainable test cases also increases stability and makes debugging easier.