6 Steps to Find and Fix Flaky Automated Tests

May 5, 2021 | Best Practices, Test Automation Insights


Test automation is critical for continuous testing, DevOps, and CI/CD pipelines. Teams set up tests to be triggered automatically, helping find defects early in the development process. Organizations make huge investments in automation in the hopes of saving time, costs, and effort in the long run.

Unfortunately, what often happens is that teams are enthusiastic while developing automated tests, but as the suite grows, flaky tests start to appear and maintaining them consumes considerable time. As a result, automation becomes lower-priority work, and testing drifts back to being fully manual because no one wants to deal with unstable tests.

This is the reality in most companies, but it does not have to be this way. Change is possible.

Here is a six-step process to isolate and fix flaky tests before they become a burden.

1. Start small

Stick with the basics of writing automated tests. Each test should be small and have a single purpose: verifying one particular piece of functionality. Avoid writing tests that depend on one another. Small, independent tests also make it clear why a test failed without requiring you to dig into the application code.

The goal is to be able to pick any group of tests and run them in any order. If that is not possible, consider splitting the tests.
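As a sketch of what this looks like in practice, here are two small pytest tests that each verify one thing and set up their own state instead of depending on each other. The `FakeOrderApi` class is a stand-in invented for this example so the code runs on its own:

```python
import itertools
import pytest

class FakeOrderApi:
    """Stand-in for a real service client, used to keep the example runnable."""
    _ids = itertools.count(1)

    def __init__(self):
        self.orders = {}

    def create_order(self, item, qty):
        order_id = next(self._ids)
        self.orders[order_id] = {"item": item, "qty": qty, "status": "open"}
        return order_id

    def cancel_order(self, order_id):
        self.orders[order_id]["status"] = "cancelled"

    def get_status(self, order_id):
        return self.orders[order_id]["status"]

@pytest.fixture
def api():
    # Fresh client per test: no state leaks from one test into another.
    return FakeOrderApi()

def test_create_order_returns_id(api):
    # Single purpose: creating an order yields an ID.
    assert api.create_order("widget", 1) is not None

def test_cancel_order_sets_status(api):
    # Creates its own order rather than reusing one from the test above,
    # so the two tests can run in any order or in isolation.
    order_id = api.create_order("widget", 1)
    api.cancel_order(order_id)
    assert api.get_status(order_id) == "cancelled"
```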

2. Run tests regularly

Once you write tests, run them regularly — preferably daily or at least weekly. The tests should get triggered automatically when new code is checked into the branch.

After the same test has run multiple times, you will have a lot of information about it: the average time it takes to run from start to finish, how often it passes or fails, the functionality it actually exercises, and how and when it gets triggered.
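One lightweight way to collect that history, sketched below, is a pytest hook in `conftest.py` that appends each test's outcome and duration to a log file on every run. The `run_history.jsonl` filename is an arbitrary choice:

```python
# conftest.py
import json
import time

HISTORY_FILE = "run_history.jsonl"  # arbitrary location for the run log

def pytest_runtest_logreport(report):
    # The "call" phase is the test body itself (not setup or teardown).
    if report.when != "call":
        return
    record = {
        "test": report.nodeid,
        "outcome": report.outcome,   # "passed", "failed", or "skipped"
        "duration": round(report.duration, 3),
        "timestamp": time.time(),
    }
    with open(HISTORY_FILE, "a") as f:
        f.write(json.dumps(record) + "\n")
```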

3. Identify unstable tests

Based on the number of times you run a test, you will start to see whether it is flaky. A test can fail for many reasons: slow page loads, brittle assertions, bad data, environment problems, synchronization issues, and more.

Analyze why the same test passes and fails intermittently. What is happening under the hood that makes the test behave this way? What types of failures keep recurring? Are you starting to see a pattern?

You can also check the error logs to understand the failure.
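Building on the run history sketched in step 2, a short script can surface the intermittent tests automatically: any test that both passes and fails across enough runs is a flake candidate.

```python
import json
from collections import defaultdict

def find_flaky(history_file="run_history.jsonl", min_runs=5):
    """Return {test id: observed failure rate} for tests with mixed outcomes."""
    outcomes = defaultdict(list)
    with open(history_file) as f:
        for line in f:
            record = json.loads(line)
            outcomes[record["test"]].append(record["outcome"])
    flaky = {}
    for test, results in outcomes.items():
        runs = [r for r in results if r != "skipped"]
        # A test that both passes and fails across runs is a flake candidate.
        if len(runs) >= min_runs and len(set(runs)) > 1:
            flaky[test] = runs.count("failed") / len(runs)
    return flaky

if __name__ == "__main__":
    for test, rate in sorted(find_flaky().items(), key=lambda x: -x[1]):
        print(f"{rate:>6.0%}  {test}")
```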

4. Separate out flaky tests

Once the failed-test analysis is complete, separate the flaky tests from the stable test suite. A test that fails intermittently should not prevent the stable tests from running, or turn every run of the suite into a failed report.

It is important to keep the tests "green" as much as possible so that people take the results seriously and trust that automation is adding value. So separate the unstable tests from the rest as soon as you notice intermittent failures.
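In pytest, one common way to do this is a custom quarantine marker (the marker name here is an arbitrary choice): flagged tests stay in the repository but are excluded from the stable run.

```python
import pytest

@pytest.mark.quarantine  # custom marker; register it under "markers" in pytest.ini
def test_search_results_update():
    """Known-flaky: fails intermittently on slow page loads; under investigation."""
    ...  # original test body unchanged
```

The stable suite then runs with `pytest -m "not quarantine"`, and the quarantined tests can still be exercised on their own with `pytest -m quarantine`.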

5. Fix flaky tests one at a time

One common mistake testers make is trying to be efficient by pulling out many flaky tests and fixing and rerunning them all at once. This actually consumes more time, because it becomes hard to isolate the root cause of each failure and create a permanent fix.

Work on flaky tests one at a time. While doing so, check whether a test has any dependencies on other tests. Debug the root cause of the problem by commenting out sections of code, adding print and wait statements as needed, setting breakpoints, and monitoring the logs.
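Synchronization problems are among the most common root causes, and a frequent fix is replacing a fixed sleep with a polling wait that retries until a condition holds or a timeout expires. Here is a minimal sketch; the `dashboard` fixture and `widget_count` method are hypothetical:

```python
import time

def wait_until(condition, timeout=10.0, interval=0.25):
    """Poll `condition` until it returns a truthy value or `timeout` elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout:.1f}s")

# Before (flaky): time.sleep(5) followed by an assert, which fails on slow runs.
# After (stable): wait for the actual condition the test cares about.
def test_dashboard_shows_widgets(dashboard):  # `dashboard` is a hypothetical fixture
    dashboard.load()
    wait_until(lambda: dashboard.widget_count() >= 3)
```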

6. Add tests back to the stable test suite

Once you fix a flaky test, run it multiple times to ensure it passes consistently. After consistent successful runs, add the fixed test back to the stable test suite, then rerun the stable suite several times to confirm there are no unexpected outcomes.
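One simple way to check for consistent passes, sketched below, is to shell out to pytest in a loop and stop at the first failure; the test path is a hypothetical example. The pytest-repeat plugin's `--count` option provides similar behavior if you prefer a ready-made tool.

```python
import subprocess
import sys

def verify_stable(test_path, runs=20):
    """Run one test repeatedly; report the first failure, if any."""
    for i in range(1, runs + 1):
        result = subprocess.run([sys.executable, "-m", "pytest", "-q", test_path])
        if result.returncode != 0:
            print(f"Still flaky: failed on run {i} of {runs}")
            return False
    print(f"Passed {runs} consecutive runs")
    return True

if __name__ == "__main__":
    verify_stable("tests/test_dashboard.py::test_dashboard_shows_widgets")
```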

Following these six steps will help you scale your automated test suites while keeping them stable.
