Test Automation Best Practice #3: Build Maintainable Tests

May 11, 2021 | Best Practices, Test Automation Insights

One of the top challenges in test automation is maintaining existing automated tests as the UI changes. Another is identifying and improving flaky tests: tests that pass sometimes and fail at other times for reasons unrelated to the application under test (AUT). This article describes approaches to test case design, coding, and execution that help manage these challenges and reduce the time spent on test maintenance.

Design tips to minimize maintenance

Easy maintenance is no accident. The tips below help ensure you have the right tests throughout the lifetime of your application.

Decide what to test before you decide how to test it.

A well-designed test case is less likely to need future maintenance. Begin by identifying each function to be tested, and then break down each test for that function into a sequence of simple steps with a clear definition of the expected results. Only after this is complete should you decide which tests will be done manually and which will benefit from automation. Refer to the first article in this series for suggestions on the best types of test cases to automate.

Document your test cases.

Documentation can help ensure that each test case has been well-designed with preconditions for the test, steps to execute, and expected results. It can be very helpful to use a test case template or a tool such as TestRail to organize your test case documentation. 
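Where test cases live in code rather than a test management tool, the same structure can be captured inline. The examples in this article are sketches in Python with pytest; they illustrate the principles rather than Ranorex Studio specifics, and every name in them (users, SKUs, URLs) is hypothetical. Here, preconditions, steps, and expected results live in the test's docstring:

```python
def test_checkout_with_saved_card():
    """Verify that checkout succeeds with a saved credit card.

    Preconditions: user "demo_buyer" exists and has one saved card.
    Steps:
        1. Log in as demo_buyer.
        2. Add any in-stock item to the cart.
        3. Check out using the saved card.
    Expected result: the confirmation page shows an order number.
    """
    ...  # automation steps would go here
```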

Keep it simple.

Ideally, each test case should check a single function or conceptual action, and should fail for only one reason. Complex test cases are more likely to be flaky. If you find that a test case requires many steps, consider dividing it into two or more test cases.
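As an illustration in plain Python, here is one oversized check split into two focused tests, each with a single reason to fail. The Cart class is a hypothetical stand-in for the AUT:

```python
class Cart:
    """Hypothetical stand-in for the AUT, for illustration only."""
    def __init__(self):
        self.items = []

    def add(self, sku):
        self.items.append(sku)

    def total_count(self):
        return len(self.items)


# Each test checks one conceptual action and can fail for one reason.
def test_adding_an_item_stores_it():
    cart = Cart()
    cart.add("SKU-1")
    assert cart.items == ["SKU-1"]


def test_item_count_reflects_additions():
    cart = Cart()
    cart.add("SKU-1")
    cart.add("SKU-2")
    assert cart.total_count() == 2
```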

Use naming standards.

The names of UI elements and test objects should be self-explanatory. If you find that comments are necessary to document a given test case or test step, consider whether the test case is too complex and needs to be simplified.

Comments that explain why a test is the way it is are an exception: such comments are beneficial. “Product management set a requirement that this operation must complete within three seconds” is an example of a good comment, because it relates the test to a decision made outside the tests. “Calculate the result” is almost certainly a bad comment, because at best it communicates something better expressed in the test itself.
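A brief sketch of both ideas together: a self-explanatory test name, plus a comment that records only the “why.” The Storefront stub and its timing are hypothetical:

```python
import time


class Storefront:
    """Hypothetical stub for the AUT, for illustration only."""
    def search(self, term):
        time.sleep(0.1)  # stands in for the real search round trip
        return ["matching product"]


def test_search_completes_within_three_seconds():
    # Product management set a requirement that search must complete
    # within three seconds; this "why" comment records a decision made
    # outside the test, which the name and body cannot express alone.
    storefront = Storefront()
    started = time.monotonic()
    storefront.search("laptop")
    assert time.monotonic() - started < 3.0
```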

Coding tips to minimize maintenance

Specific stylistic choices make maintenance easier, whether a particular test is fully automated or not.

Use a modular structure.

A given test case should be able to run independently of other test cases. As much as possible, a test case should not be dependent on the outcome of an earlier test. For example, a test case that verifies payment processing for a web store should not be dependent on a previous test case that puts items in a user’s shopping cart. Keeping your test cases independent will not only make your tests easier to maintain, but will also allow you to take advantage of parallel or distributed execution. However, if you do find that a test case dependency is unavoidable, then use the available features of your test automation framework to ensure that the dependent tests are executed in the proper order.

At the same time, certain actions are likely to appear in many distinct cases: a successful login, for example. Part of modularization is defining common initializations or setups that multiple otherwise-separate tests share.
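Here is a sketch of both points in Python with pytest; StoreSession and all names below are hypothetical stubs. A shared login fixture gives every test a fresh, logged-in session, and the payment test prepares its own cart rather than depending on another test having run first:

```python
import pytest


class StoreSession:
    """Hypothetical driver for the web store, stubbed for illustration."""
    def __init__(self):
        self.user, self.cart = None, []

    def login(self, name, password):
        self.user = name

    def logout(self):
        self.user = None

    def add_to_cart(self, sku):
        self.cart.append(sku)

    def checkout(self, card):
        assert self.user is not None and self.cart
        return "paid"


@pytest.fixture
def logged_in_session():
    """Shared setup reused by many otherwise-independent tests."""
    session = StoreSession()
    session.login("demo_buyer", "demo-password")
    yield session
    session.logout()  # teardown keeps state from leaking between tests


def test_payment_processing(logged_in_session):
    # This test builds its own cart instead of depending on the outcome
    # of a separate shopping-cart test.
    logged_in_session.add_to_cart("SKU-1")
    assert logged_in_session.checkout(card="test-card") == "paid"
```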

Design automated tests to be immune to UI changes.

To keep your automated tests working even when there are changes in the user interface, don’t rely on location coordinates to find a UI object. In addition, if you are testing a web application, avoid relying on the HTML structure of the page or dynamic IDs to locate UI elements. Ideally, the developers of your AUT will include unique IDs for the UI elements in your application. But even if they don’t, the Ranorex Spy tool automatically applies best practices in UI element identification to help make your tests more stable. 
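For teams scripting against a browser directly rather than through Ranorex Spy, the same principle looks like this in Python with Selenium; the URL and element names are hypothetical:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://shop.example.com/login")  # hypothetical URL

# Brittle: coordinates and absolute XPaths break on any layout change.
# driver.find_element(By.XPATH, "/html/body/div[3]/div[2]/form/input[1]")

# Stable: a unique ID or a dedicated test attribute survives restyling.
driver.find_element(By.ID, "username").send_keys("demo_buyer")
driver.find_element(By.CSS_SELECTOR, "[data-testid='login-button']").click()

driver.quit()
```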

Group tests by functional area.

Group your test cases according to the functional area of the application covered. This will make it easier to update related test cases when a functional area is modified, and also allow you to execute a partial regression suite for that functional area. If you are using Ranorex Studio, you can create reusable user code methods and store them in a central library. Then, you can organize the user code methods for a functional area into a collection. Both user code methods and collections can have a description that appears in the user code library, to help testers select the right user code method for a test.
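Outside Ranorex Studio, one common way to group tests is with markers; the sketch below uses pytest, and the marker names are illustrative:

```python
import pytest

# Markers group tests by functional area. Register each marker in
# pytest.ini (e.g. "checkout: checkout flow tests") to avoid warnings.

@pytest.mark.checkout
def test_declined_card_shows_error_message():
    ...


@pytest.mark.search
def test_search_returns_matching_products():
    ...

# Partial regression suite for one functional area:
#     pytest -m checkout
```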

Don't copy-and-paste test code.

As the modular-structure tip above notes, instead of repeating the same steps in multiple tests, create reusable modules. For example, you should have only one module that launches your application. Reuse that module in your other test cases. Then, if the process to launch the application changes, you will only need to update that single module. With Ranorex Studio’s support for keyword-driven testing, local and global parameters, and conditional test execution, you can easily build sophisticated test cases from your individual test modules.
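In plain code, the same principle is a single shared helper; the module name and path below are hypothetical:

```python
# app_launcher.py -- the single module every test reuses to launch the AUT
import subprocess

APP_PATH = "/opt/acme/aut"  # hypothetical install location


def launch_app(*extra_args):
    """Start the AUT. If the launch procedure ever changes, this is
    the only place that needs updating."""
    return subprocess.Popen([APP_PATH, *extra_args])
```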

Separate test steps from test data.

Avoid hard-coding data values into your automated tests. Instead, store the data values for your tests in an external file and pass them to your tests using variables or parameters. Read more about data-driven testing in the Ranorex User Guide.
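A minimal data-driven sketch in Python with pytest, assuming a login_cases.csv file with columns username, password, and should_succeed; attempt_login is a hypothetical stand-in for driving the AUT:

```python
import csv
import pytest


def attempt_login(username, password):
    """Hypothetical stand-in for driving the AUT's login form."""
    return password == "correct-password"


def load_login_cases(path="login_cases.csv"):
    # The data lives in an external file; the test steps below never
    # change, no matter how many rows the file holds.
    with open(path, newline="") as f:
        return [(row["username"], row["password"],
                 row["should_succeed"] == "yes")
                for row in csv.DictReader(f)]


@pytest.mark.parametrize("username,password,should_succeed",
                         load_login_cases())
def test_login(username, password, should_succeed):
    assert attempt_login(username, password) == should_succeed
```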

Use source control.

Developers use source control tools such as Git, Subversion and Azure DevOps to collaborate on application code and to revert to earlier versions of the application code when necessary. If possible, you should use the same source control tool that manages your application code to manage the code for your automated tests.

The workflows for application source code almost certainly apply equally well to test source code. Does your programming team review updates to the source? Does the source conform to definite stylistic rules? The source code for tests deserves the same standards: don’t just capture tests in a source control system, but practice code review and automated validation to help ensure the quality of tests under source control.

Execution tips to minimize maintenance

The best tests don’t just require little maintenance in a general sense; they also share specific execution-time characteristics: their results are reproducible, and their resource demands for time, memory, and disk space are predictable.

Take advantage of modern virtualization techniques. Test on “clean” hosts, whether newly launched virtual machines, fresh container images, or other known-safe starting points. This kind of computing hygiene keeps leftover state and environmental accidents from contaminating test results.
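One way to get that kind of clean starting point, assuming Docker is available and using the testcontainers package for Python: each run provisions, uses, and discards its own database container.

```python
import pytest
from testcontainers.postgres import PostgresContainer  # needs Docker


@pytest.fixture
def fresh_database_url():
    """Each test gets a brand-new database in a throwaway container,
    so no leftover state can contaminate the results."""
    with PostgresContainer("postgres:16") as pg:
        yield pg.get_connection_url()


def test_runs_against_a_clean_database(fresh_database_url):
    assert fresh_database_url.startswith("postgresql")
```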

Ensure that your test environment is stable.

Unreliable servers or network connections can cause otherwise stable tests to fail. Consider using a mock server to eliminate potential points of failure that are not related to the AUT itself. Well-implemented mocks also typically improve performance, sometimes dramatically, so that tests become less expensive to run.
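As a sketch of the mocking idea in Python, using the responses library to stub an HTTP dependency; the endpoint is hypothetical:

```python
import requests
import responses


@responses.activate
def test_order_status_without_the_real_backend():
    # The flaky external service is replaced by a canned response, so
    # only the AUT's own behavior can make this test fail.
    responses.add(
        responses.GET,
        "https://api.example.com/orders/42",  # hypothetical endpoint
        json={"status": "shipped"},
        status=200,
    )
    reply = requests.get("https://api.example.com/orders/42", timeout=5)
    assert reply.json()["status"] == "shipped"
```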

Use setup and teardown processes.

Use setup processes to handle preconditions and ensure that the AUT is in the correct state for the tests to run. A setup process will typically handle launching the application, logging in, loading test data, and any other preparation necessary for the test. Use teardown processes to return the AUT to the proper state after the test run completes, including cleaning up any test data. 
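In generic Python, the same pattern is built into unittest’s setUp and tearDown methods; the cart list below stands in for real launch-and-login work:

```python
import unittest


class CheckoutTests(unittest.TestCase):
    def setUp(self):
        # Setup: put the AUT into a known state before every test
        # (launching, logging in, and loading data would happen here).
        self.cart = ["SKU-1"]

    def tearDown(self):
        # Teardown: clean up test data even when the test failed.
        self.cart.clear()

    def test_cart_contains_prepared_item(self):
        self.assertEqual(self.cart, ["SKU-1"])


if __name__ == "__main__":
    unittest.main()
```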

Fail fast.

Another key principle of efficient test design is to “fail fast.” If there is a serious issue with the application that should stop testing, identify and report that issue immediately rather than allowing the test run to continue. Set reasonable timeout values to limit the time that your test spends searching for UI elements.
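With Selenium in Python, a bounded explicit wait is the usual way to set such a timeout; the URL and element ID are hypothetical:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Chrome()
driver.get("https://shop.example.com/cart")  # hypothetical URL

# A bounded, explicit wait: give up after five seconds rather than
# letting the run hang while hunting for an element that never appears.
checkout = WebDriverWait(driver, timeout=5).until(
    EC.element_to_be_clickable((By.ID, "checkout"))
)
checkout.click()

driver.quit()
```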

Fail only when necessary.

Allow your entire test run to fail only when necessary. Stopping a test run after a single error potentially wastes time, and leaves you with no way of knowing whether the other test cases in the run would have succeeded. So, in addition to giving you the ability to stop a test run after an error, Ranorex Studio offers three options for continuing after an error: continue with iteration, continue with sibling, and continue with parent. Read more about these options in the Ranorex User Guide.
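Generic frameworks express the same spirit at two levels: a pytest run continues to the next test after a failure by default, and within a single test, unittest’s subTest reports each failing case without aborting the rest:

```python
import unittest


class PriceFormatTests(unittest.TestCase):
    def test_price_formats(self):
        cases = [(1, "$1.00"), (2.5, "$2.50"), (10, "$10.00")]
        for amount, expected in cases:
            # subTest reports a failing case without aborting the
            # remaining ones, so one error doesn't hide the rest.
            with self.subTest(amount=amount):
                self.assertEqual(f"${amount:.2f}", expected)


if __name__ == "__main__":
    unittest.main()
```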

Isolate expected failures.

Execute only the automated tests that you expect to succeed. If you have tests for a defect that hasn’t been resolved, remove those test cases from your main test run and execute them separately. This will make it easier to determine if there are real issues in the main test run. Likewise, remove any flaky tests from the main test run, and perform manual testing to cover that functionality. Eliminate distractions and noise, and thus ensure the tests you execute have the best chance of yielding meaningful, informative results.
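A sketch of how this looks with pytest markers; the ticket number and marker name are hypothetical:

```python
import pytest


# A known, unresolved defect: keep the test, but take it out of the
# signal path of the main run ("BUG-1234" is a hypothetical ticket).
@pytest.mark.xfail(reason="BUG-1234: discount not applied")
def test_discount_code_reduces_total():
    ...


# Quarantine flaky tests with a custom marker (register it in
# pytest.ini), then exclude them:  pytest -m "not quarantine"
@pytest.mark.quarantine
def test_report_export_sometimes_times_out():
    ...
```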

Take screenshots.

Configure your automated tests to capture screenshots, and use your reporting mechanism to provide detailed information that will assist in troubleshooting a failed test. Ranorex Studio also includes a maintenance mode that allows you to pause a test run so that you can diagnose and resolve errors directly during the run.
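Outside Ranorex Studio, a common pattern in Python is a conftest.py hook that saves a screenshot whenever a UI test fails; the "driver" fixture name is an assumption of this sketch:

```python
# conftest.py -- capture a screenshot whenever a UI test fails.
import pytest


@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    if report.when == "call" and report.failed:
        driver = item.funcargs.get("driver")  # assumes a "driver" fixture
        if driver is not None:
            driver.save_screenshot(f"failure_{item.name}.png")
```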

 

Following these tips will help you build maintainable test cases, so that you only need to modify the minimum possible number of existing test cases when the application changes. Building maintainable test cases also increases stability and makes debugging easier.
