What are the best practices in test automation?
In this master post, we’ve collected a set of test automation best practices that can help your software testing be more efficient, effective, and successful. Initially published as a 10-article blog series, these recommendations are also available as a free ebook that you can download to read offline.
Each recommendation is summarized below. Click on the individual summaries to read the entire blog article. Or, you can start with the first article in the blog series, “Know What to Automate,” and then simply click through the entire series using the navigation links at the bottom of each article.
Efficient product development is always about trade-offs. One of the first considerations in undertaking any test automation project is where to focus your efforts. Resources are invariably limited, so which efforts will give you the greatest payoff? Which test cases will give you the highest return on the time invested?
The types of test cases to focus on include tests for features that are stable, regression tests, and tests for high-risk features. Other good candidates are smoke tests, data-driven tests, load tests, and cross-browser/cross-platform tests.
On the other hand, it can be difficult to automate mixed-technology tests, dynamic content, and certain aspects of mobile applications. Avoid automating single-use tests, tests with results that cannot be predicted, features that resist automation such as CAPTCHAs, and some native OS features on mobile devices.
Regression tests help ensure that a software release doesn’t introduce new defects or cause old ones to reappear. Because regression tests are frequently repeated, they are good candidates for automation. High-value regression tests include smoke tests and sanity tests, as well as test cases that found defects in previous testing cycles. It’s good practice to manage the size of your regression suite, and run a full regression suite only when necessary.
One of the top challenges in test automation is maintaining existing automated tests when the UI changes. Another challenge is identifying and improving flaky tests – those that pass sometimes and fail at other times for reasons unrelated to the AUT. But taking the right approach to test case design, coding, and execution can help manage these challenges and reduce time spent on test maintenance.
Design tips that can help minimize maintenance include keeping each test as simple and modular as possible. A single test should validate a single function. Eliminate dependencies as much as possible: a test should succeed or fail independently of other tests. Group tests by functional area so they are easier to update if the function changes. Separate your test steps from your test data by using an external data source such as a spreadsheet or database. Use source control to manage changes to your test code.
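The modularity and independence tips above can be sketched in code. This is an illustrative example only (the function and test names are invented, and the one-line `create_account` stands in for the application under test): each test validates a single function and builds its own data, so it succeeds or fails without depending on any other test.

```python
# Hypothetical sketch of modular, independent test design.
# create_account() is a stand-in for the application under test.

def create_account(username):
    return {"user": username, "active": True}

def test_account_is_created():
    # One test, one behavior: account creation records the username.
    account = create_account("alice")
    assert account["user"] == "alice"

def test_new_account_is_active():
    # Independent of the test above: it builds its own fixture data,
    # so it can run alone or in any order.
    account = create_account("bob")
    assert account["active"] is True

test_account_is_created()
test_new_account_is_active()
print("both tests passed independently")
```

Because neither test reads state left behind by the other, a failure in one points directly at one behavior, which keeps maintenance localized.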
If not properly designed, user interface (UI) tests can be slow and prone to failure, or “fragile.” But your tests can be stable, even when the UI changes. One of the most important factors in designing a UI test for stability is the method used to identify UI elements, including text, form fields, buttons, scrollbars, and other controls. Ranorex uses a proprietary technology called RanoreXPath, which is based on the XPath query language and can identify even the most challenging dynamic UI elements.
Certain types of recognition are inherently less stable, such as coordinate-based (X,Y) recognition, image-based recognition, and dynamic IDs. The most stable path to an object is typically the shortest one.
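To illustrate the principle (not RanoreXPath itself), the sketch below uses Python's standard-library `ElementTree` and a tiny XML stand-in for a UI tree. A positional path breaks as soon as siblings are added or reordered; matching on a meaningful, unique attribute does not.

```python
import xml.etree.ElementTree as ET

# Illustrative only: a toy XML stand-in for a UI element tree.
# Real UI trees (and RanoreXPath) are far richer, but the stability
# principle is the same.
ui = ET.fromstring("""
<form>
  <div><button id="cancel">Cancel</button></div>
  <div><button id="submit">Submit</button></div>
</form>
""")

# Fragile: a positional path that depends on sibling order.
fragile = ui.find("./div[2]/button")

# More stable: match on a unique attribute instead of position.
stable = ui.find(".//button[@id='submit']")

print(fragile.text, stable.text)  # both resolve to the Submit button today
```

Today both expressions find the same button, but inserting a new `<div>` above the submit row silently redirects the positional path while the attribute-based one keeps working.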
The ability to do data-driven testing is one of the key benefits of test automation. In data-driven testing, an automated test case retrieves input values from a data source such as an Excel spreadsheet or a database file. The test case is repeated automatically for each row of data in the data source. So, instead of 10 testers having to manually execute test cases for 10 different data values, and determine whether or not each test case succeeded, an automated test can execute 100 test cases in a single test run.
Benefits of data-driven testing include faster results from testing, increased accuracy, reduced test case maintenance, and better test data storage.
28% of test failures result from issues such as missing or invalid test data, problems with the test environment, a bug in the test automation code, or changes in the AUT that are not defects. It may be tempting to simply re-run a failed test case to see if it passes. But a test case that passes sometimes and fails on other occasions for no discernable reason is a “flaky,” unreliable test case. It’s important to resolve the issue that caused it to fail so that you can have confidence in the results of your automated testing.
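One alternative to silently re-running a failed test is to log every attempt so intermittent behavior becomes visible and can be investigated. The sketch below is an assumed approach with invented names; `flaky_check` uses randomness purely to simulate a flaky result.

```python
import random

def flaky_check():
    # Simulated flaky test: passes roughly 70% of the time.
    return random.random() > 0.3

def run_with_attempt_log(check, attempts=5):
    """Run a check several times and classify the outcome.

    A reliable test passes every attempt; a mix of passes and
    failures flags the test as flaky and in need of a root cause,
    rather than being quietly re-run until it passes.
    """
    log = [check() for _ in range(attempts)]
    if all(log):
        verdict = "stable"
    elif any(log):
        verdict = "flaky"
    else:
        verdict = "failing"
    return verdict, log

verdict, log = run_with_attempt_log(flaky_check)
print(verdict, log)
```

The attempt log is the point: a "flaky" verdict with its pass/fail history gives you something concrete to investigate (timing, environment, test data) instead of a single green re-run that hides the problem.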
Whether your CI/CD/DevOps cycle is measured in weeks or days, it’s essential to integrate automated tests with your build and release pipeline. Automation makes it possible for your build to be self-testing: automatically run your smoke and sanity tests for each build to ensure basic functionality before conducting additional tests.
To be testable, a requirement must be clear, measurable, and complete, without any ambiguity. One of the best ways to do this is to follow the principle of one function per requirement. Avoid ambiguous wording such as “fast,” “intuitive,” or “user-friendly”. Also avoid including implementation details such as font size in a requirement: instead, put these in a set of standards that apply to the entire project.
The complexity of a typical end-to-end test can make it difficult to automate, and slow to execute. Yet, E2E tests are valuable for finding bugs. To get the best value from your E2E tests, do the following: keep an end-user perspective, limit exception testing, and apply risk analysis to focus on only the high-risk scenarios. It’s also important to manage your test environment to ensure it is as similar as possible to your production environment and to optimize your setup/teardown processes.
Combine functional testing with load testing to confirm that an application’s features work as expected with reliable performance even during peak use. Best practices in load testing include defining measurable metrics, using realistic scenarios, configuring a clean environment, focusing on high-risk scenarios, and starting small.
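The "start small, define measurable metrics" advice can be sketched as follows. This is a toy harness, not a real load-testing tool: `simulated_request` is an invented stand-in for a request to the application, and the ramp of 1, 5, then 10 concurrent "users" illustrates starting small while recording response times as the metric.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def simulated_request():
    # Stand-in for a request/response round trip to the AUT.
    start = time.perf_counter()
    time.sleep(0.01)
    return time.perf_counter() - start

def run_load(concurrent_users):
    # Fire one request per simulated user, all concurrently, and
    # report the measurable metrics: worst and average latency.
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        timings = list(pool.map(lambda _: simulated_request(),
                                range(concurrent_users)))
    return max(timings), sum(timings) / len(timings)

# Start small, then increase the load while watching the metrics.
for users in (1, 5, 10):
    worst, avg = run_load(users)
    print(f"{users:2d} users: avg {avg * 1000:.1f} ms, "
          f"worst {worst * 1000:.1f} ms")
```

Defining the metrics up front (here, average and worst-case latency) is what turns "the app felt slow under load" into a measurable pass/fail criterion.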
As part of our commitment to helping testing teams improve quality, we’ve compiled a comprehensive library of test automation resources.
For individual testers who want to learn automation concepts or skills, we’ve got test automation webinars, available live and on-demand. We’ve also got informative articles on test automation concepts in our testing wiki and on our test automation blog. Subscribe to our newsletter for announcements on new webinars and articles as they become available.
For testing managers and teams that are ready to begin an automation project, we’ve got a comprehensive ebook, Strategies for a Successful Test Automation Project.
If you’re ready to purchase a test automation solution, get our free Software Test Automation Buyer’s Guide.
Ranorex Studio is a comprehensive test automation solution with built-in best practices. Your automated tests can be stable, maintainable, and efficient. To learn more, get in touch with our sales team, or download a free, full-featured trial using the link below.