Regression testing

In an application release cycle, QA personnel typically perform the following three types of tests:

  • Release-specific tests, to verify new features of the application under test (AUT)
  • Defect-fix verification tests, to ensure that a reported defect has been resolved
  • Regression tests, which are test cases that the AUT passed in a previous release cycle. Regression tests check that new defects have not been introduced and that old ones have not recurred. This includes functional regressions (failures of the system to perform as expected) and visual regressions (unanticipated changes in the appearance of the application).

The first blog post in this series, Know What To Automate, lists the key characteristics of test cases that offer the best return on your automation investment. These include test cases that will be executed repeatedly and test cases for features that are relatively stable. Regression tests meet both of these criteria, so they should be among the first test cases that you automate. Following are some best practices for automating regression test cases.

Prioritize high-value regression test cases

Not all regression test cases have equal importance. Focus your automation efforts on the following types of regression tests:

Smoke tests

These tests ensure that basic functionality works after a new build of the AUT. Smoke tests check that the application opens and performs basic tasks such as logging in, displaying the welcome screen, and fetching new data. As a best practice, automate your smoke tests and trigger them automatically for each new build of the application, so that you know you have a good ("green") build before investing resources in further testing.
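As a sketch of what an automated smoke check might look like, assuming a hypothetical `App` wrapper around the AUT (the class and its methods are illustrative stand-ins for whatever driver your automation tool provides, not a real API):

```python
# Minimal smoke-test sketch. `App` is a hypothetical stub standing in for
# a real UI-automation session against the application under test.

class App:
    """Illustrative stub of the application under test."""
    def __init__(self):
        self.screen = None

    def launch(self):
        self.screen = "welcome"   # a real driver would start the AUT
        return True

    def login(self, user, password):
        self.screen = "home"
        return bool(user and password)

    def fetch_new_data(self):
        return ["item-1", "item-2"]   # stubbed payload


def test_smoke():
    """The build is 'green' only if the basics work: launch, login, fetch."""
    app = App()
    assert app.launch()
    assert app.screen == "welcome"
    assert app.login("qa-user", "secret")
    assert app.fetch_new_data()
```

Wired into a build trigger, a failing `test_smoke` would mark the build as not worth further testing.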

Sanity tests

These tests exercise the most critical functions of your application in depth. If your AUT is a web shopping application, a sanity test would ensure that a user can log on, search for an item, add the item to a cart, and check out. Plan to include all high-priority functions, any functions or modules that have changed, and highly trafficked workflows in your sanity tests.
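The shopping workflow above can be sketched as a single end-to-end sanity test. `ShopClient` here is a hypothetical stub; in a real suite it would wrap a browser-automation driver:

```python
# Sanity-test sketch for a web shop's critical path: log on, search,
# add to cart, check out. `ShopClient` and its methods are illustrative.

class ShopClient:
    def __init__(self):
        self.user = None
        self.cart = []

    def log_on(self, user):
        self.user = user

    def search(self, term):
        catalog = {"mug": 7.50, "shirt": 19.99}   # stubbed catalog
        return {term: catalog[term]} if term in catalog else {}

    def add_to_cart(self, item, price):
        self.cart.append((item, price))

    def check_out(self):
        assert self.user, "must be logged on to check out"
        total = sum(price for _, price in self.cart)
        self.cart = []
        return total


def test_purchase_path():
    """One sanity test covering the whole critical workflow."""
    shop = ShopClient()
    shop.log_on("qa-user")
    results = shop.search("mug")
    assert results, "search must find the item"
    shop.add_to_cart("mug", results["mug"])
    assert shop.check_out() == 7.50
```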

Successful test cases

A successful test case is one that has uncovered a large number of defects in past release cycles. Include these in your regression suite to increase your odds of uncovering a new defect, or an old one that has been re-introduced.

Make your regression tests maintainable

Certain practices will make your automated tests more maintainable. These practices apply to all types of automated tests, not just regression tests:

  • Make your automated tests as modular and independent of each other as possible. Don’t copy-and-paste code between test cases, as this will multiply your maintenance requirements. Instead, if you are going to reuse code, such as a login procedure, create it as an independent module and then re-use it. If your login procedure changes, you will have only one module to update rather than dozens or more.
  • Keep the definition of your UI elements separate from the action steps in each test case.
  • Don’t hard-code your test data; instead, maintain it in a spreadsheet or database.

Look for more information on the topic of maintainability in a later article in this series.
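These three practices can be sketched together in one short example. All names here (`LoginPage`, `run_login`, the CSV layout, the `FakeDriver` stub) are illustrative assumptions, not a real framework:

```python
# Maintainability sketch: one reusable login module, UI locators kept
# separate from test steps, and test data loaded from an external source.

import csv
import io

# --- UI element definitions, separate from test logic ---
class LoginPage:
    USER_FIELD = "#username"
    PASS_FIELD = "#password"
    SUBMIT_BTN = "#login"

class FakeDriver:
    """Records actions; a real driver would interact with the UI."""
    def __init__(self):
        self.actions = []
    def type(self, locator, text):
        self.actions.append(("type", locator, text))
    def click(self, locator):
        self.actions.append(("click", locator))

# --- One shared login module: if login changes, update only this ---
def run_login(driver, user, password):
    driver.type(LoginPage.USER_FIELD, user)
    driver.type(LoginPage.PASS_FIELD, password)
    driver.click(LoginPage.SUBMIT_BTN)

# --- Test data kept external (here an inline string stands in for a
#     spreadsheet or database) rather than hard-coded in each test ---
TEST_DATA = "user,password\nalice,pw1\nbob,pw2\n"

def load_credentials(text):
    return list(csv.DictReader(io.StringIO(text)))
```

If the login screen gains a new field, only `LoginPage` and `run_login` change; every test that calls them keeps working.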

Run a full regression suite only when necessary

It is not always necessary to execute your full regression suite for each new build. For a minor release, it may make more sense to run just your smoke tests, plus regression tests for any modules that have changed. To make this easier, organize your regression test cases according to the module of the AUT covered by each test. For example, if a release includes a change to the payment types accepted by an online store, it may only be necessary to run your regression tests for the payment process, excluding regression tests for other features such as searching for items and placing them in the cart. On the other hand, it may make sense to run the complete regression suite when a release cycle includes changes to many areas of the application, such as localization for a new language. To learn more about prioritizing your regression test cases for a particular release cycle, refer to the Ranorex Regression Testing Guide.

Leverage your development toolchain

An automated regression test is code, so treat it like code, and take advantage of your existing development environment.

Use source control

The purpose of a source control system like Git or Subversion is to allow multiple developers to work on the same application at the same time without overwriting each other’s changes. Source control also makes it possible to maintain the correct version of a given regression test with the corresponding version of the application.

Integrate with a CI process

If your development team is following a continuous integration process, integrate your regression tests with your CI server. This will allow you to automatically trigger your regression tests for each system build.

Integrate with a defect-tracking application

Defect-tracking tools like JIRA and Bugzilla are essential for reporting defects and tracking them through to resolution. Configure your automated regression tests to report defects automatically. Also, use the defect-tracking process to document each defect found in manual testing and the steps to reproduce it. This documentation will provide candidates for new regression test cases in the next development cycle.
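As a sketch of automatic defect reporting, a test harness might build an issue payload from the failure details and POST it to the tracker. The field layout below follows JIRA's REST "create issue" format, but the project key is an assumption and the actual HTTP call is omitted:

```python
# Sketch: turn an automated-regression failure into a JIRA-style issue
# payload. Project key "QA" is an illustrative assumption; sending the
# payload (HTTP POST to the tracker's REST endpoint) is left out.

def build_defect_payload(test_name, error, build_id, project_key="QA"):
    return {
        "fields": {
            "project": {"key": project_key},
            "issuetype": {"name": "Bug"},
            "summary": f"Regression failure: {test_name} (build {build_id})",
            "description": (
                f"Automated regression test {test_name} failed on "
                f"build {build_id}.\n\nError: {error}"
            ),
        }
    }
```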

Manage the size of your regression suite

With each release cycle, you will likely add a number of new regression test cases. Over time, this can cause the regression suite to become large and resource-intensive to execute, and keeping a growing number of regression tests up to date with a changing application can become a burden. To prevent this, keep the size of your regression suite manageable. In each release cycle, remove test cases that no longer provide value for the testing process, such as tests for obsolete features or low-priority tests that the AUT consistently passes. Likewise, carefully review candidate test cases before adding them to the suite, favoring those that uncovered defects in the previous release cycle or that cover critical new functionality.
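This pruning review can be made systematic by scoring each test on the criteria above. The record fields and thresholds here are illustrative assumptions:

```python
# Sketch: flag low-value regression tests for removal review, using each
# test's defect-finding history and whether its feature still exists.
# Field names and thresholds are illustrative, not from a real tool.

def prune_candidates(tests, min_defects=1, keep_recent_cycles=3):
    """Return names of tests worth reviewing for removal."""
    flagged = []
    for t in tests:
        if t["feature_removed"]:
            flagged.append(t["name"])          # obsolete feature: drop
        elif (t["defects_found"] < min_defects
              and t["cycles_since_last_failure"] > keep_recent_cycles):
            flagged.append(t["name"])          # low priority, always passes
    return flagged
```

A test that found defects recently, or that covers a live high-priority feature, is never flagged and stays in the suite.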

Be aware of the limits of regression testing

Like a well-traveled path through a minefield, your regression suite may execute perfectly yet miss new defects. Remember that the purpose of regression testing is to ensure that code changes haven’t reintroduced old defects or caused new defects in previously working code. The fact that your AUT has passed all of its regression tests tells you nothing about the quality of new functionality. One of the key benefits of automating your regression tests is that manual testers have more time to focus on exploratory testing of new features and on ensuring a great user experience.