In an application release cycle, QA personnel typically perform the following three types of tests:
- Release-specific tests, to verify new features of the application under test (AUT).
- Defect-fix verification tests, to ensure that a reported defect has been resolved.
- Regression tests, which are test cases that the AUT passed in a previous release cycle. Regression tests check that new defects have not been introduced and that old ones have not recurred. This includes functional regressions (failures of the system to perform as expected) and visual regressions (unanticipated changes in the appearance of the application).
The first blog post in this series, Know What To Automate, lists the key characteristics of test cases that offer the best return on your automation investment. These include test cases that will be executed repeatedly and test cases for features that are relatively stable. Regression tests meet both of these criteria, so they should be among the first test cases that you automate. Following are some best practices for automating regression test cases.
Prioritize high-value regression test cases
Not all regression test cases have equal importance. Focus your automation efforts on the following types of regression tests:
- Successful test cases
Make your regression tests maintainable
Certain practices will make your automated tests more maintainable. These practices apply to all types of automated tests, not just regression tests:
- Set up your automated tests to be as modular and independent of each other as possible.
- Don't copy-and-paste code between test cases, as this multiplies your maintenance requirements. Instead, if you are going to reuse code, such as a login procedure, create it as an independent module and then reuse it. If your login procedure changes, you will have only one module to update rather than dozens or more.
- Keep the definition of your UI elements separate from the action steps in each test case.
- Don't hard-code your test data; instead, maintain it in a spreadsheet or database.
Look for more information on the topic of maintainability in a later article in this series.
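As a sketch, these practices might look like the following in Python. All names here (the locator mapping, the `driver` methods, the account list) are illustrative placeholders, not the API of any particular test framework:

```python
# Illustrative sketch: UI locators, reusable steps, and test data are
# each kept in one place, separate from the test cases themselves.

# UI element definitions, separate from action steps. If the login page
# changes, only this mapping needs updating.
LOGIN_LOCATORS = {
    "username": "#username",
    "password": "#password",
    "submit": "button[type=submit]",
}

def login(driver, username, password):
    """Reusable login step shared by many test cases.

    If the login procedure changes, this is the single place to update.
    """
    driver.type(LOGIN_LOCATORS["username"], username)
    driver.type(LOGIN_LOCATORS["password"], password)
    driver.click(LOGIN_LOCATORS["submit"])

# Test data kept out of the test logic; in practice this would be
# loaded from a spreadsheet or database rather than hard-coded.
TEST_ACCOUNTS = [("alice", "secret1"), ("bob", "secret2")]
```

With this layout, dozens of test cases can call `login()` without duplicating its steps, and a change to a locator or a credential touches exactly one place.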
Run a full regression suite only when necessary
It is not always necessary to execute your full regression suite for each new build. For a minor release, it may make more sense to run just your smoke tests, plus regression tests for any modules that have changed. To make this easier, organize your regression test cases according to the module of the AUT covered by each test. For example, if a release includes a change to the payment types accepted for an online store, it may only be necessary to run your regression tests for the payment process, but exclude regression tests for other features such as searching for items and placing them in the cart. On the other hand, it may make sense to run the complete regression suite when a release cycle includes changes to many areas of the application, such as localization for a new language. To learn more about prioritizing your regression test cases for a particular release cycle, refer to the Ranorex Regression Testing Guide.
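The module-based organization described above can be sketched as a simple tag-and-select scheme; the test and module names below are hypothetical:

```python
# Illustrative sketch: each regression test is tagged with the AUT module
# it covers, so a release touching only "payments" runs only those tests.
REGRESSION_TESTS = {
    "test_credit_card_payment": "payments",
    "test_gift_card_payment": "payments",
    "test_search_items": "search",
    "test_add_to_cart": "cart",
}

def select_tests(changed_modules):
    """Return the regression tests that cover any changed module."""
    return sorted(name for name, module in REGRESSION_TESTS.items()
                  if module in changed_modules)
```

Here `select_tests({"payments"})` picks only the two payment tests, while passing every module name selects the full suite. In a real project the same idea is often expressed with your framework's tagging mechanism, such as pytest markers (`pytest -m payments`).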
Leverage your development toolchain
An automated regression test is code, so treat it like code, and take advantage of your existing development environment.
Use source control
Integrate with a CI process
Integrate with a defect-tracking application
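One convention underlies CI integration: most CI systems mark a build step as failed when the process exits with a nonzero status, so a regression-suite runner only needs to honor that convention. A minimal, hypothetical runner:

```python
# Illustrative sketch: a CI step fails the build when this runner
# returns a nonzero status. The (name, passed) pairs stand in for the
# output of whatever test framework you use.

def run_suite(results):
    """Report failures and return an exit status for the CI step.

    results: list of (test_name, passed) pairs from a test run.
    Returns 0 when everything passed, 1 otherwise.
    """
    failures = [name for name, passed in results if not passed]
    for name in failures:
        print(f"FAILED: {name}")
    return 1 if failures else 0

# A CI entry-point script would end with:
#     sys.exit(run_suite(results))
# so that any regression stops the pipeline.
```

Most test runners (pytest, JUnit's console launcher, and others) already follow this exit-code convention, so wiring them into a CI job is usually just a matter of invoking them as a build step.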
Manage the size of your regression suite
With each release cycle, you will likely add a number of new regression test cases. Over time, this can cause the regression suite to become large and resource-intensive to execute, and keeping a growing number of regression tests up to date with a changing application can become a burden. To prevent this, keep the size of your regression suite manageable. Each release cycle, remove test cases that no longer provide value to the testing process, such as tests for obsolete features or low-priority tests that the AUT consistently passes. Likewise, carefully review new test cases for the value they add, favoring those that uncovered defects in the previous release cycle or that cover critical new functionality.
Be aware of the limits of regression testing
Like a well-traveled path through a minefield, your regression suite may execute perfectly yet miss new defects. Remember that the purpose of regression testing is to ensure that code changes haven't reintroduced old defects or caused new defects in previously working code. The fact that your AUT has passed all of its regression tests tells you nothing about the quality of new functionality. One of the key benefits of automating your regression tests is that manual testers gain more time to focus on exploratory testing of new features and on ensuring a great user experience.