What Can Be Automated
Welcome to the first article in the series, 10 Best Practices in Test Automation. Because testing resources are limited, one of the first considerations in launching a test automation project is where to focus your efforts. Which test cases will give you the highest return on the time and effort invested? This article provides recommendations for three types of test cases: those to automate, those that will be challenging to automate, and those that shouldn’t be automated at all.

What to Automate

In theory, any software test can be automated. The question is whether a particular test will cost more to develop and maintain than it will save in testing. To get the best return on your effort, focus your automation strategy on test cases that meet one or more of the following criteria:

Tests for stable features

Automating tests for unstable features may end up costing significant maintenance effort. To avoid this, test a feature manually as long as it is actively undergoing development.

Regression tests

A regression test is one that the system passed in a previous development cycle. Re-running your regression tests in subsequent release cycles helps to ensure that a new release doesn’t reintroduce an old defect or introduce a new one. Because regression tests are executed often, they should be at the top of your priority list for automation. To learn more about regression testing, refer to the Ranorex Regression Testing Guide.

High-risk features

Use risk analysis to determine which features carry the highest cost of failure, and focus on automating those tests. Then, add those tests to your regression suite. For more information on how to prioritize test cases based on risk, see the section on risk assessment in the Ranorex GUI Testing Guide.

Smoke tests

Depending on the size of your regression suite, it may not make sense to execute the entire suite for each new build of the system. Smoke tests are a subset of your regression tests that verify you have a good build before you spend time and effort on further testing. Smoke testing typically includes checks that the application will open, allow login, and perform other key functions. Include smoke tests in your Continuous Integration (CI) process and trigger them automatically with each new build of the system.
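For illustration, here is a minimal sketch of how a smoke subset might be tagged and selected so that CI can run it on every build. It uses pytest markers as a stand-in mechanism; the test bodies and the launch_app helper are hypothetical, and Ranorex Studio provides its own ways of organizing suites.

```python
# test_smoke.py - tagging a smoke subset with pytest markers (illustrative sketch)
import pytest

def launch_app():
    # Hypothetical stand-in for launching the application under test;
    # replace with your framework's real startup call.
    return {"running": True, "page": "login"}

@pytest.mark.smoke  # register the "smoke" marker in pytest.ini to silence warnings
def test_application_opens():
    app = launch_app()
    assert app["running"]

@pytest.mark.smoke
def test_login_page_is_shown():
    app = launch_app()
    assert app["page"] == "login"

# In the CI pipeline, trigger only the smoke subset on each new build:
#   pytest -m smoke
```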

Data-driven tests

Any tests that will be repeated are good candidates for test automation, and chief among these are data-driven tests. Instead of manually entering multiple combinations of username and password, or email address and payment type to validate your entry fields, let an automated test do that for you. How to design good data-driven tests will be explored further in another article in this series.
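As a concrete example, here is what a data-driven login check might look like as a pytest sketch. The parametrize decorator supplies the input combinations; is_valid_login is a hypothetical stand-in for the validation logic of your application under test.

```python
import pytest

def is_valid_login(username, password):
    # Hypothetical stand-in for the application's real login validation.
    return username == "alice" and password == "s3cret"

@pytest.mark.parametrize("username,password,expected", [
    ("alice",   "s3cret", True),   # valid credentials
    ("alice",   "wrong",  False),  # wrong password
    ("",        "s3cret", False),  # empty username
    ("bob';--", "x",      False),  # hostile input
])
def test_login_combinations(username, password, expected):
    assert is_valid_login(username, password) == expected
```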

Load tests

Load tests are simply a variation on data-driven testing, where the goal is to test the response of the system to a simulated demand. Combine a data-driven test case with a tool that can execute the test in parallel or distribute it on a grid to simulate the desired load.
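Here is a minimal sketch of that idea, using Python's standard thread pool and the requests library to fire the same data-driven request concurrently and record response times. The endpoint URL is hypothetical, and a dedicated load-testing tool would give far more control over ramp-up and reporting.

```python
# Fire the same request from a pool of worker threads and measure timings.
from concurrent.futures import ThreadPoolExecutor
import time
import requests

URL = "https://example.com/api/login"  # hypothetical endpoint

def one_request(payload):
    start = time.perf_counter()
    response = requests.post(URL, json=payload, timeout=10)
    return response.status_code, time.perf_counter() - start

payloads = [{"user": f"user{i}", "password": "pw"} for i in range(100)]

with ThreadPoolExecutor(max_workers=20) as pool:
    results = list(pool.map(one_request, payloads))

durations = [elapsed for _, elapsed in results]
print(f"max: {max(durations):.2f}s  avg: {sum(durations)/len(durations):.2f}s")
```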

Cross-browser tests

Cross-browser tests help ensure that a web application performs consistently regardless of the version of the web browser used to access it. It is generally not necessary to execute your entire test suite against every combination of device and browser; instead, focus on the high-risk features and the most popular browser versions currently in use. Currently, Google Chrome is the leading browser on both desktop and mobile, and the second-largest on tablets behind Safari. So, it would make sense to run your entire test suite against Chrome, and then your high-risk test cases against Safari, Firefox, Internet Explorer, and Microsoft Edge.
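One common way to express this is to parameterize the browser itself, as in the following Selenium WebDriver sketch. Selenium and pytest are used here purely for illustration; the URL is hypothetical, and the corresponding browser drivers are assumed to be installed.

```python
import pytest
from selenium import webdriver

DRIVERS = {
    "chrome": webdriver.Chrome,
    "firefox": webdriver.Firefox,
    "edge": webdriver.Edge,
}

@pytest.fixture(params=DRIVERS)
def browser(request):
    # Start a fresh session for each browser in turn.
    driver = DRIVERS[request.param]()
    yield driver
    driver.quit()

def test_homepage_title(browser):
    browser.get("https://example.com")  # hypothetical application URL
    assert "Example" in browser.title
```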

Cross-device tests

Mobile apps must be able to perform well across a wide range of screen sizes, resolutions, and O/S versions. According to Software Testing News, in 2018, a new manual testing lab would need almost 50 devices just to provide 80% coverage of the possible combinations. Automating cross-device tests can reduce testing costs and save significant time.

What is Difficult to Automate

The following types of test cases are more difficult to automate. That doesn’t mean that they shouldn’t be automated – only that these test cases will have a higher cost in terms of time and effort to automate. Whether a particular test case will be challenging to automate depends on the technology basis of the application under test (AUT). If you are evaluating an automation tool or doing a Proof of Concept, be sure that you understand how the tool can help you overcome these difficult-to-automate scenarios.

Mixed-technology tests

Some automated tests require a mix of technologies, such as a hybrid mobile app or a web app with backend database services. To make automating end-to-end tests in this type of environment easier, the ideal solution is to implement an automation framework that supports all of the technologies in your stack. To see whether Ranorex Studio is a good fit for your stack, visit our Supported Technologies page.
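To make the shape of such a test concrete, here is a sketch of an end-to-end check that drives a web UI and then verifies the backend database. Selenium and sqlite3 merely stand in for the UI and database layers; the URL, locators, database path, and schema are all hypothetical.

```python
import sqlite3
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_signup_creates_db_row():
    # UI side of the workflow.
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com/signup")            # hypothetical URL
        driver.find_element(By.ID, "email").send_keys("new@user.test")
        driver.find_element(By.ID, "submit").click()
    finally:
        driver.quit()

    # Backend side of the same workflow.
    conn = sqlite3.connect("/path/to/app.db")               # hypothetical path
    count = conn.execute(
        "SELECT COUNT(*) FROM users WHERE email = ?", ("new@user.test",)
    ).fetchone()[0]
    conn.close()
    assert count == 1
```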

Dynamic content

There are many types of dynamic content, such as web pages built from stored user preferences, PDF documents, or rows in a database. Testing this type of content is particularly challenging given that the state of the content is not always known at the time the test runs. To learn more, refer to the Ranorex blog article Automated Testing and Dynamic IDs.
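One recurring case is a UI element whose generated ID changes on every session. The Selenium sketch below contrasts a brittle locator with more robust alternatives that key on stable attributes or partial matches; the page and locators are hypothetical.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/cart")  # hypothetical page

# Brittle: breaks as soon as the generated suffix changes.
# driver.find_element(By.ID, "btn-4f2a9")

# More robust alternatives:
driver.find_element(By.CSS_SELECTOR, "[data-testid='checkout-button']")
driver.find_element(By.CSS_SELECTOR, "button[id^='btn-']")          # ID prefix
driver.find_element(By.XPATH, "//button[contains(@class, 'checkout')]")

driver.quit()
```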

Waiting for events

Automated tests can fail when an expected response is not received. It’s important to handle waits so that a test doesn’t fail just because the system is responding slower than normal. However, you must also ensure that a test does fail in a reasonable period of time so that the entire test suite is not stuck waiting for an event that will never happen. To learn how to configure waits in Ranorex automated tests, refer to the list of available actions in the Ranorex User Guide.
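As a general illustration of the pattern (outside of Ranorex), here is a Selenium explicit wait that succeeds as soon as the element appears but fails within a bounded time, so the suite never hangs indefinitely. The page and element ID are hypothetical.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException

driver = webdriver.Chrome()
driver.get("https://example.com/search")  # hypothetical page
try:
    # Returns as soon as the element is visible; raises after 15 seconds.
    results = WebDriverWait(driver, timeout=15).until(
        EC.visibility_of_element_located((By.ID, "results"))
    )
except TimeoutException:
    driver.quit()
    raise AssertionError("results did not appear within 15 seconds")
driver.quit()
```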

Handling alerts/popups

Similar to waiting for events, automated tests can fail due to unexpected alerts or pop-ups. To make your tests more stable, be sure to include logic in your test to handle these types of events. Ranorex Studio includes an automation helper that makes it easy to handle alerts and pop-ups.
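For comparison, this is what defensive pop-up handling can look like in a hand-coded Selenium test: briefly check whether an alert is present, dismiss it if so, and carry on either way. The URL is hypothetical.

```python
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException

driver = webdriver.Chrome()
driver.get("https://example.com")  # hypothetical page

try:
    WebDriverWait(driver, timeout=3).until(EC.alert_is_present())
    alert = driver.switch_to.alert
    print(f"Dismissing unexpected alert: {alert.text}")
    alert.accept()
except TimeoutException:
    pass  # no alert appeared; proceed normally

# ... remaining test steps ...
driver.quit()
```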

Complex workflows

There are several challenges to automating a workflow. Typically, a workflow test will consist of a set of test cases that each check steps in the workflow. If one step fails, subsequent test steps will not run. Because the steps must be performed in order, they can’t be split across multiple endpoints to run in parallel. Another challenge is that automating a workflow involves choosing one particular path through the application, possibly missing defects that occur if a user chooses a different path in production. To minimize these types of issues, make your test cases as modular and independent of each other as possible, and then manage the workflow with a keyword-driven framework.
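A keyword-driven design can be sketched in a few lines: each workflow step is an independent, reusable action function, and the workflow itself is just a list of keywords. The step names and actions below are invented for illustration.

```python
def login(ctx):
    ctx["user"] = "alice"          # stand-in for a real login action

def add_item_to_cart(ctx):
    ctx.setdefault("cart", []).append("widget")

def checkout(ctx):
    assert ctx.get("cart"), "cannot check out with an empty cart"
    ctx["order_placed"] = True

KEYWORDS = {
    "login": login,
    "add item to cart": add_item_to_cart,
    "checkout": checkout,
}

def run_workflow(steps):
    ctx = {}                       # shared state passed between steps
    for step in steps:
        KEYWORDS[step](ctx)        # fails fast on the first broken step
    return ctx

result = run_workflow(["login", "add item to cart", "checkout"])
assert result["order_placed"]
```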

Certain aspects of web applications

There are aspects of web applications that present unique challenges to automation. One of the primary issues is recognizing UI elements with dynamic IDs. Ranorex provides “weight rules” to tweak the RanoreXPath for specific types of elements, which helps ensure robust object recognition even for elements with dynamic IDs. Other challenges in automating web applications include switching between multiple windows and automating iframes, especially those with cross-domain content. Ranorex Studio can detect and automate objects inside cross-domain iframes, even when web security is enabled.
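In raw WebDriver terms, window and iframe handling look like the sketch below. The URLs, frame selector, and field IDs are hypothetical, and plain Selenium remains subject to the cross-domain restrictions mentioned above.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com")  # hypothetical page
original = driver.current_window_handle

# If a link opened a second window, switch to it and back again.
for handle in driver.window_handles:
    if handle != original:
        driver.switch_to.window(handle)
driver.switch_to.window(original)

# Enter an iframe, interact with its content, then return to the main page.
driver.switch_to.frame(driver.find_element(By.CSS_SELECTOR, "iframe#payment"))
driver.find_element(By.ID, "card-number").send_keys("4111111111111111")
driver.switch_to.default_content()
driver.quit()
```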

Certain aspects of mobile applications

Mobile apps can also be challenging to automate. For example, you must ensure that your application responds appropriately to interruptions such as the phone ringing or a low battery message. You must also ensure that your tests provide adequate device coverage, which is a particular challenge for Android apps due to the wide variety of screen sizes, resolutions, and O/S versions found in the installed base. Finally, due to differences between iOS and Android, tests that are automated for a native app on one platform will likely require adaptation to perform as expected on the other platform. As with other difficult-to-automate tests, it’s essential to have a testing framework that supports the full technology stack for your application under test.
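As one small example of interruption testing, the following Appium sketch backgrounds an Android app and verifies that it resumes correctly. The capabilities, package name, and local server URL are assumptions for an emulator setup, not a prescription.

```python
from appium import webdriver
from appium.options.android import UiAutomator2Options

options = UiAutomator2Options()
options.app_package = "com.example.app"    # hypothetical app
options.app_activity = ".MainActivity"

driver = webdriver.Remote("http://127.0.0.1:4723", options=options)
try:
    # Simulate an interruption by backgrounding the app for 5 seconds,
    # then check that it comes back to the expected screen.
    driver.background_app(5)
    assert driver.current_activity == ".MainActivity"
finally:
    driver.quit()
```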

What You Shouldn’t Automate

There are some types of tests where automation may not be possible or advised. This includes any test where the time and effort required to automate the test exceeds the potential savings. Plan to perform these types of tests manually.

Single-use tests

It may take longer to automate a single-use test than to execute it manually once. Note that the definition of “single-use tests” does not include tests that will become part of a regression suite or that are data-driven.

Tests with unpredictable results

Automate a test when the result is objective and can be easily measured. For example, a login process is a good choice for automation because it is clear what should happen when a valid username and password are entered, or when an invalid username or password is entered. If your test case doesn’t have clear pass/fail criteria, it would be better to have a tester perform it manually.

Features that resist automation

Some features are designed to resist automation, such as CAPTCHAs on web forms. Rather than attempting to automate the CAPTCHA, it would be better to disable the CAPTCHA in your test environment or have the developers create an entry point into the application that bypasses CAPTCHA for testing purposes. If that isn’t possible, another solution is to have a tester manually complete the CAPTCHA and then execute the automated test after passing the CAPTCHA. Just include logic in the test that pauses until the tester is able to complete the CAPTCHA, and then resumes the test once login success is returned.
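Such a semi-automated pause can be as simple as the following Selenium sketch: the script stops while a tester solves the CAPTCHA, then resumes once the post-login page appears. The URL, locators, and timeout are illustrative assumptions.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example.com/login")  # hypothetical page
driver.find_element(By.ID, "username").send_keys("testuser")
driver.find_element(By.ID, "password").send_keys("s3cret")

input("Solve the CAPTCHA in the browser, then press Enter to continue...")

# Allow up to 5 minutes; resume as soon as the dashboard element appears.
WebDriverWait(driver, timeout=300).until(
    EC.presence_of_element_located((By.ID, "dashboard"))
)
# ... automated steps continue from here ...
driver.quit()
```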

Unstable features

It is best to test unstable features manually. Invest the effort in automation once the feature has reached a stable point in development.

Native O/S features on mobile devices

Particularly on Apple iOS, non-instrumented native system apps are difficult or impossible to automate due to built-in security.

Conclusion

To ensure that you achieve your automation goals, focus your automation efforts on the right test cases. And be sure to build in time for exploratory testing and UX/usability testing – by their nature, these types of tests can’t and shouldn’t be automated. To help determine whether to automate a particular test case, you can use the Test Case ROI Calculator spreadsheet. This simple spreadsheet compares the estimated time and costs to automate a test case vs. the time and costs to execute the same test case manually; it is not designed to determine the ROI of a test automation project.
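To illustrate the kind of comparison such a calculator makes, here is a back-of-the-envelope sketch; every figure in it is an invented assumption.

```python
# After how many runs does automating one test case pay for itself?
hours_to_automate = 8.0           # one-time development cost (assumed)
hours_to_maintain_per_run = 0.05  # amortized upkeep (assumed)
hours_to_run_manually = 0.5       # per manual execution (assumed)
hours_to_run_automated = 0.02     # mostly machine time (assumed)

hours_saved_per_run = hours_to_run_manually - (
    hours_to_run_automated + hours_to_maintain_per_run
)
break_even_runs = hours_to_automate / hours_saved_per_run
print(f"Automation pays off after ~{break_even_runs:.0f} runs")
# -> Automation pays off after ~19 runs
```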