Test Automation Best Practice #1: Know What to Automate

Mar 8, 2021 | Best Practices

Welcome to the first article in the series, Best Practices in Test Automation.

Efficient product development is always about trade-offs. One of the first considerations in undertaking any test automation project is where to focus your efforts. Resources are invariably limited, so which efforts will give you the greatest payoff? Which test cases will give you the highest return on the time invested? This article provides recommendations for three types of test cases: those to automate, those that will be challenging to automate, and those that shouldn’t be automated at all.

What to Automate

In principle, any software test can be automated: humans who understand the requirements for an application can create tests that express those requirements. Wise testers always ask, though, whether a particular test will cost more to develop and maintain than it will save in the effort of manual testing. To get the best return on your effort, focus your automation strategy on test cases that meet one or more of the following criteria:

Tests for stable features

Automating tests for unstable features may end up costing significant maintenance effort. To avoid this, test a feature manually for as long as the requirement remains experimental or under active development.

Note that feature stability is different from the stability of the implementation. Once the required functionality has settled, good automated tests are particularly valuable if the development team continues to experiment with alternative implementations.

Regression tests

A regression test is one that the system passed in a previous development cycle. Re-running your regression tests in subsequent release cycles helps to ensure that a new release doesn't reintroduce an old defect or introduce a new one. Since regression tests are executed often, they belong at the top of your priority list for automation. Why does frequent execution matter? Because each automated run saves the manual effort otherwise needed to perform the test; multiply that savings by hundreds of executions over the life of a project, and the overall gain is substantial.

To learn more about regression testing, refer to the Ranorex Regression Testing Guide.

High-risk features

Use risk analysis to determine which features carry the highest cost of failure, and focus on automating those tests. Then, add those tests to your regression suite. To learn more about how to prioritize test cases based on risk, see the section on risk assessment in the Ranorex GUI Testing Guide.

Smoke tests

Depending on the size of your regression suite, it may not make sense to execute the entire suite for each new build of the system. Smoke tests are a subset of your regression tests that verify you have a good build before you spend time and effort on further testing. Smoke testing typically includes checks that the application will open, allow login, and perform other high-profile functions. Include smoke tests in your Continuous Integration (CI) process and trigger them automatically with each new build of the system.

A smart test team labels and actively maintains different categories of tests. A test of specific functionality might move in and out of the smoke suite at different times during the lifecycle of the application. While a particular login method is widely used, its test deserves to be a smoke test; if that method is later deprecated in favor of another, the test can safely be moved out of the smoke suite. Similarly, a test that was once too time-consuming to serve as a smoke test can join the suite once it's accelerated enough to fit in CI/CT ("CT" abbreviates "Continuous Testing").
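
Test runners differ in how they tag and select tests. As one concrete illustration outside Ranorex Studio, here is a minimal sketch of a smoke subset in a pytest-based suite; the staging URL is an assumption, and the "smoke" marker would be registered in pytest.ini:

```python
import urllib.request

import pytest

BASE = "https://staging.example.com"  # assumed test-environment URL


@pytest.mark.smoke
def test_home_page_responds():
    # Fast sanity check: is this build even worth testing further?
    with urllib.request.urlopen(BASE + "/", timeout=10) as resp:
        assert resp.status == 200


@pytest.mark.smoke
def test_login_page_responds():
    with urllib.request.urlopen(BASE + "/login", timeout=10) as resp:
        assert resp.status == 200


def test_full_report_export():
    # Unmarked: part of the broader regression suite, too slow for every build.
    ...
```

A CI job can then run only the smoke subset with `pytest -m smoke`, while a nightly job runs the full suite with plain `pytest`.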

Data-driven tests

Any tests that will be repeated are good candidates for test automation, and chief among these are data-driven tests. Instead of manually entering multiple combinations of username and password, or email address and payment type to validate your entry fields, let an automated test do that for you. How to design good data-driven tests will be explored further in articles on parameterized and property-based tests in this series.
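
In Ranorex Studio, this is done by binding a recording to a data source; in a code-based suite, the same idea appears as a parameterized test. A minimal pytest sketch, where validate_login() is a hypothetical stand-in for the application under test:

```python
import pytest


def validate_login(username: str, password: str) -> bool:
    """Hypothetical stand-in for the application's login check."""
    return username == "alice" and password == "correct-horse"


@pytest.mark.parametrize(
    "username, password, expected",
    [
        ("alice", "correct-horse", True),             # valid credentials
        ("alice", "wrong-password", False),           # bad password
        ("", "correct-horse", False),                 # missing username
        ("alice'; DROP TABLE users;--", "x", False),  # hostile input
    ],
)
def test_login(username, password, expected):
    # One test definition, executed once per data row above.
    assert validate_login(username, password) is expected
```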

Load tests

Load tests are simply a variation on data-driven testing, where the goal is to test the response of the system to a simulated demand. Combine a data-driven test case with a tool that can execute the test in parallel or distribute it on a grid to simulate the desired load.

Load and other performance tests are often too expensive and time-consuming to execute with each commit. That doesn't rule out automating them; it just means they need their own execution schedule, one that doesn't slow the CI cycle.
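
Dedicated tools such as JMeter or Locust handle load generation at scale; purely to illustrate the idea, this sketch uses Python's standard library to fire 50 concurrent requests at an assumed staging URL and report latency percentiles:

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://staging.example.com/login"  # assumed endpoint
CONCURRENT_USERS = 50


def timed_request(_):
    start = time.monotonic()
    with urllib.request.urlopen(URL, timeout=30) as resp:
        assert resp.status == 200
    return time.monotonic() - start


with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    latencies = sorted(pool.map(timed_request, range(CONCURRENT_USERS)))

print(f"p50={latencies[len(latencies) // 2]:.2f}s  "
      f"p95={latencies[int(len(latencies) * 0.95)]:.2f}s")
```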

Cross-browser tests

Cross-browser tests help ensure that a web application performs consistently regardless of the version of the web browser used to access it. It is generally not necessary to execute your entire test suite against every combination of device and browser; instead, focus on the high-risk features and the most popular browser versions currently in use. As of October 2020, Google Chrome is the leading browser on both desktop and mobile, and the second-largest on tablets behind Safari. So, it would make sense to run your entire test suite against Chrome, and then your high-risk test cases against Safari, Firefox, Internet Explorer, and Microsoft Edge.

It's also wise to automate cross-browser and cross-device tests because humans typically do not perform well in this tedious and repetitive role. Automation has proven far better at spotting environment-specific problems such as browser incompatibilities.
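
In a code-based Selenium suite, one common pattern is a parameterized driver fixture, so that every test written against it runs once per browser. A sketch, assuming the relevant driver binaries (or a Selenium Grid) are available; the URL and element name are invented:

```python
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By


@pytest.fixture(params=["chrome", "firefox", "edge"])
def driver(request):
    factories = {
        "chrome": webdriver.Chrome,
        "firefox": webdriver.Firefox,
        "edge": webdriver.Edge,
    }
    drv = factories[request.param]()  # one instance per parameter
    yield drv
    drv.quit()


def test_login_form_renders(driver):
    # Runs three times, once per browser in the fixture's parameter list.
    driver.get("https://staging.example.com/login")
    assert driver.find_element(By.NAME, "username").is_displayed()
```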

Cross-device tests

Mobile apps must be able to perform well across a wide range of screen sizes, resolutions, and O/S versions. According to Software Testing News, in 2018 a new manual testing lab would have needed almost 50 devices just to provide 80% coverage of the possible combinations. Automating cross-device tests can reduce testing costs and save significant time.

What Is Difficult to Automate

The following types of test cases are more difficult to automate. That doesn’t mean that they shouldn’t be automated – only that these test cases will have a higher cost in terms of time and effort to automate. Whether a particular test case will be challenging to automate varies depending on the technology basis for the AUT (application under test). If you are evaluating an automation tool or doing a Proof of Concept, be sure that you understand how the tool can help you overcome these difficult-to-automate scenarios.

This last point is so important it bears repeating: a test might be worth automating even though it's difficult to automate. Sometimes a test is difficult to automate precisely because the corresponding manual test is particularly time-consuming, error-prone, or sensitive. In that case, the test is probably especially valuable to automate, or perhaps worth redefining to be less expensive.

One general response to difficult automation is to seek help, whether that's the leverage of a high-quality test framework that solves the hard problems for you, or counsel from fellow professionals who have faced similar challenges.

Mixed-technology tests

Some automated tests require a mix of technologies, such as a hybrid mobile app or a web app with backend database services. To make automating end-to-end tests in this type of environment easier, the ideal solution is to implement an automation framework that supports all of the technologies in your stack. To see whether Ranorex Studio is a good fit for your stack, visit our Supported Technologies page.

Dynamic content

There are many types of dynamic content, such as web pages built from stored user preferences, PDF documents, or rows in a database. Testing this type of content is particularly challenging given that the state of the content is not always known at the time the test runs. Learn about the issues with dynamic content and how Ranorex helps overcome them in our User Guide.
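
Whatever the tool, one general tactic is to separate the stable structure of dynamic content from the values that change on each run, and assert only on the former. A small Python illustration with an invented message format:

```python
import re

# Example confirmation text as it might be captured from the application.
page_text = "Order #004217 placed on 2021-03-08 for $59.99"


def test_order_confirmation(text=page_text):
    # The order number, date, and amount vary per run, so assert on the
    # stable structure of the message rather than on an exact string.
    pattern = r"Order #\d{6} placed on \d{4}-\d{2}-\d{2} for \$\d+\.\d{2}"
    assert re.fullmatch(pattern, text), text
```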

Waiting for events

Modern user-interface technologies make the timing-related aspects of testing particularly difficult. For technical reasons having to do with web browsers' Document Object Model (DOM), it's much easier to instruct a human tester, "when the login form pops up", than it is to communicate the corresponding condition, "when the login form finishes rendering", to an automated test. Waiting for events, especially those having to do with the completion of a visual display element, is a persistent programming challenge.

Automated tests can fail when an expected response is not received. It’s important to handle waits rigorously so that a test doesn’t fail just because the system is responding slower than normal. However, you must also ensure that a test does fail in a reasonable period of time so that the entire test suite is not stuck waiting for an event that will never happen. Issues around synchronization and waits are particularly important when comparing test frameworks. To learn how to configure waits in Ranorex automated tests, refer to the description of the Wait for action in the Ranorex User Guide.
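
For comparison, code-based frameworks such as Selenium express this with explicit waits: poll for a condition up to a timeout, then fail with a clear error rather than hanging. A sketch in which the element IDs and the post-login URL fragment are assumptions:

```python
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait


def log_in(driver, username, password):
    # Poll up to 10 seconds for the form to finish rendering; on timeout,
    # the wait raises TimeoutException instead of blocking the suite forever.
    wait = WebDriverWait(driver, timeout=10)
    form = wait.until(EC.visibility_of_element_located((By.ID, "login-form")))
    form.find_element(By.NAME, "username").send_keys(username)
    form.find_element(By.NAME, "password").send_keys(password)
    form.find_element(By.CSS_SELECTOR, "button[type=submit]").click()
    # Wait again for the post-login page before handing back control.
    wait.until(EC.url_contains("/dashboard"))
```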

Handling alerts/popups

Similar to waiting for events, automated tests can fail due to unexpected alerts or pop-ups. To make them more stable, be sure to include logic in your test to handle these special events. Ranorex Studio includes an automation helper that makes it easy to handle alerts and pop-ups.
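
In hand-written Selenium code, the corresponding defensive pattern is a short, optional wait for an alert, a sketch of which follows:

```python
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait


def dismiss_alert_if_present(driver, timeout=3):
    """Accept a JavaScript alert if one appears; carry on quietly if not."""
    try:
        alert = WebDriverWait(driver, timeout).until(EC.alert_is_present())
        text = alert.text  # capture it so unexpected popups show up in logs
        alert.accept()
        return text
    except TimeoutException:
        return None  # no alert appeared; nothing to handle
```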

Complex workflows

Automation of a workflow brings several challenges. Typically, a workflow test will consist of a set of test cases that each check steps in the workflow. When one step fails, it’s pointless to run subsequent test steps: the failure means that results which arrive afterward can’t be trusted. Because the steps must be performed in order, they can’t be split across multiple endpoints to run in parallel. Another challenge is that automating a workflow involves choosing one particular path through the application, possibly missing defects that occur if a user chooses a different path in production.

To minimize these types of issues, make your test cases as modular and independent of each other as possible, and then manage the workflow with a keyword-driven framework, as sketched below. Also measure coverage accurately, so that you know which paths through the application remain unexercised.
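
The core of a keyword-driven framework is small: independent actions keyed by name, with the workflow itself reduced to data. A minimal sketch in which the actions and the ctx object are hypothetical placeholders for your own page objects:

```python
# Each keyword maps to one small, independent action.
ACTIONS = {
    "open_app":    lambda ctx: ctx.app.start(),
    "log_in":      lambda ctx: ctx.app.login(ctx.user, ctx.password),
    "add_to_cart": lambda ctx: ctx.app.add_item("SKU-123"),
    "check_out":   lambda ctx: ctx.app.checkout(),
}

# The workflow is just data, so alternative paths are cheap to define.
PURCHASE_WORKFLOW = ["open_app", "log_in", "add_to_cart", "check_out"]


def run_workflow(ctx, steps):
    for step in steps:
        try:
            ACTIONS[step](ctx)
        except Exception as exc:
            # Stop at the first failure: later steps depend on this one,
            # so any results they produced could not be trusted anyway.
            raise RuntimeError(f"Workflow failed at step '{step}'") from exc
```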

Challenging aspects of web applications

Web applications have aspects that present unique challenges to automation. One of the primary issues is recognizing UI elements with dynamic IDs. Ranorex provides “weight rules” to tweak the RanoreXPath for specific types of elements, which helps ensure robust object recognition even on dynamic IDs. Other challenges in automating web applications include switching between multiple windows and automating iframes — especially those with cross-domain content. Ranorex Studio detects and automates objects inside cross-domain iframes, even when web security is enabled.
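
Ranorex addresses this with RanoreXPath weight rules; in hand-written Selenium, the analogous tactic is to locate elements by attributes that don't change between builds. A sketch with invented element names:

```python
from selenium.webdriver.common.by import By


def find_submit_button(driver):
    # Brittle: By.ID, "submit-btn-84721" breaks when the numeric suffix
    # is regenerated on every build. More robust alternatives:
    #
    # Prefer a dedicated test hook if the developers provide one.
    hooks = driver.find_elements(By.CSS_SELECTOR, "[data-testid='submit']")
    if hooks:
        return hooks[0]
    # Otherwise, match only the stable prefix of the generated ID.
    return driver.find_element(
        By.XPATH, "//button[starts-with(@id, 'submit-btn-')]")
```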

Challenging aspects of mobile applications

Mobile apps also can be challenging to automate. For example, you must ensure that your application responds appropriately to interruptions such as the phone ringing or a low battery message. You must further ensure that your tests provide adequate device coverage, which is a particular challenge for Android apps due to the wide variety of screen sizes, resolutions, and O/S versions found in the installed base. Finally, due to differences between iOS and Android, tests that are automated for a native app on one platform will likely require adaptation to perform as expected on the other platform. As with other difficult-to-automate tests, it’s essential to have a testing framework that supports the full technology stack for your application under test.

What You Shouldn’t Automate

There are some types of tests where automation may not be feasible or advisable. This includes any test where the time and effort required to automate the test exceeds the potential savings. Plan to perform these types of tests manually.

Single-use tests

It may take longer to automate a single-use test than to execute it manually once. Note that the definition of “single-use tests” does not include tests that will become part of a regression suite or that are data-driven.

Tests with unpredictable results

Automate a test when the result is objective and can be easily measured. For example, a login process is a good choice for automation because it is clear what should happen when a valid username and password are entered, or when an invalid username or password is entered. If your test case doesn't have clearly defined pass/fail criteria, it is better to have a tester perform it manually.

Features that resist automation

Some features are designed to resist automation, such as CAPTCHAs on web forms. Rather than attempting to automate the CAPTCHA, it is better to disable it in your test environment or have the developers create an entry point into the application that bypasses the CAPTCHA for testing purposes. If that isn't possible, another solution is to have a tester manually complete the CAPTCHA and then let the automated test proceed. Just include logic in the test that pauses until the tester has completed the CAPTCHA, and then resumes once a successful login is detected.
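
The pause itself can be an ordinary wait with a generous timeout: rather than watching the CAPTCHA, the test waits for an element that only exists after a successful login. A Selenium sketch in which the element ID and timeout are assumptions:

```python
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait


def wait_for_manual_captcha(driver, timeout=300):
    # Give the human tester up to five minutes to solve the CAPTCHA;
    # the automated steps resume as soon as the post-login element appears.
    print("Please solve the CAPTCHA in the browser window...")
    WebDriverWait(driver, timeout).until(
        EC.presence_of_element_located((By.ID, "account-menu"))
    )
```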

Unstable features

It is best to test unstable features manually. As mentioned above, invest the effort in automation once the feature has reached a stable point in development.

Native O/S features on mobile devices

Particularly on Apple iOS, non-instrumented native system apps are difficult or impossible to automate due to built-in security.

Conclusion

To ensure that you achieve your automation goals, focus your automation efforts on the right test cases. And be sure to build in time for exploratory testing and UX/usability testing – by their nature, these types of tests can’t and shouldn’t be automated.

To help determine whether or not to automate a particular test case, you can use the Test Case ROI Calculator spreadsheet. This simple spreadsheet compares the estimated time and costs to automate a test case vs. the time and costs to execute the same test case manually; it is not designed to determine the ROI of a test automation project as a whole. With a little up-front analysis, though, you can make tactical decisions about individual tests that yield the best possible results for your project: the biggest bang of completed, informative tests for the buck of the effort invested in testing.
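
The arithmetic behind such a calculator reduces to a break-even count: how many executions before the automation effort pays for itself. A simplified sketch, with invented numbers in the example:

```python
def break_even_runs(hours_to_automate, manual_hours_per_run,
                    automated_hours_per_run=0.0, maintenance_hours_per_run=0.0):
    """Executions needed before automating a test case pays for itself."""
    saved_per_run = (manual_hours_per_run
                     - automated_hours_per_run
                     - maintenance_hours_per_run)
    if saved_per_run <= 0:
        return None  # automation never pays off for this test case
    return hours_to_automate / saved_per_run


# Example: 8 hours to automate a test that takes 30 minutes manually
# and costs about 3 minutes per run in upkeep -> roughly 18 runs.
print(break_even_runs(8, 0.5, maintenance_hours_per_run=0.05))
```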
