How Are DesignWise Tests Objectively Superior?
“Before vs After” DesignWise Comparison Guide
“Our current tests are good enough. Why invest the time & resources in adopting the DesignWise methodology if it doesn’t move the needle?”
That may be true, but we don’t have to guess. In this document, we describe the process to evaluate your existing tests, compare them directly with the tests generated by DesignWise, and draw data-driven conclusions about what is best for software testing efficiency in your organization.
The example below uses a simple banking application, but this process can be followed with any existing set of tests as long as it is converted to a parameterized data table. The order of creating the optimized DesignWise model and the one for analyzing the existing suite technically doesn’t matter (this article goes through the manual side first). It is crucial, though, that the two models are exactly the same when it comes to Parameters/Values, value expansions, and Constraints.
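As a minimal illustration of what “converted to a parameterized data table” means, here is one way an existing manual suite could be encoded. The parameter names and values below are invented for a generic banking application; they are not taken from DesignWise or from the example model in this guide.

```python
# Each existing manual test becomes one row: a mapping of parameter -> value.
# Parameter names (AccountType, Channel, Amount, Currency) are hypothetical.
existing_tests = [
    {"AccountType": "Checking", "Channel": "Web",    "Amount": "Low",  "Currency": "USD"},
    {"AccountType": "Savings",  "Channel": "Mobile", "Amount": "High", "Currency": "USD"},
]

# Every row must use the same parameters so the model stays consistent.
assert all(test.keys() == existing_tests[0].keys() for test in existing_tests)
```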
The process description assumes intermediate knowledge of DesignWise features.
Typical issues found in existing test suites include:
- Direct duplicates (inconsistent formatting; spelling errors);
- “Hidden”/Contextual duplicates (meaningful typos; same instructions written by different people with varied styles);
- Tests specifying some values while leaving others as default (when several scenario combinations could be tested in a single execution run).
Note: there is a difference between “select these 3 values and everything else should be default for this rule” and “select these 3 values and everything else should be anything because it doesn’t matter for this rule”. The second interpretation is much more common in our experience.
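The distinction in the note above can be made concrete with two different sentinel values when encoding a rule. The sentinels and parameter names here are illustrative, not DesignWise syntax:

```python
# Two different "fill in the rest" semantics for a partially specified rule.
DEFAULT = "DEFAULT"  # everything else must stay at the documented default value
ANY = None           # everything else can be anything -- it doesn't matter for this rule

# Hypothetical business rule: only AccountType and Channel are actually constrained.
rule = {
    "AccountType": "Savings",  # explicitly required by the rule
    "Channel": "Mobile",       # explicitly required by the rule
    "Amount": ANY,             # free for the tool to optimize (the more common case)
    "Currency": DEFAULT,       # must remain at the default
}
```

Marking the unconstrained values as `ANY` rather than `DEFAULT` is what later lets a tool choose them to maximize coverage.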
To generate the most precise comparison, the actual values from execution logs need to be placed in all the blanks in the requirements (i.e. the red font in the picture above). If that is not possible, assume the default value is used for each parameter.
For this example, we use 8 artificial existing tests which didn’t specify all the values in their documentation.
There are two ways to input the existing tests:
- Manually create a model with all the parameters & values, then input each of the existing tests inside the tool on the Forced Interactions tab.
- Create an empty plan with 2 dummy parameters, navigate to Forced Interactions, and use the cloud icon on the left to open the Import dialog. It contains the template into which you should copy-paste the reformatted existing tests.
Note: when working with large existing suites, making the updates in Excel and importing the file into DesignWise is generally faster, but let us know if you run into any issues.
This is how your existing test suite looks when “generated” by DesignWise. However, the algorithm believes you need 19 test cases (not the 8 we imported in this example) to thoroughly explore the potential system weaknesses. Why?
Before we dive into that, copy the model you just created, remove all forced interactions in that second version, and generate the scenarios there as well.
The answer to the central question of this guide is then found on the Analysis screen.
Comparison & Conclusions
Granted, more experienced testing organizations, with a focus on variations and some knowledge of combinatorial methodologies, will do better than this. Yet it is rare that manual selection can consistently achieve the coverage levels shown in the second picture.
Thus, this portion of the comparison tells us that the existing thoroughness is not sufficient: 4 more DesignWise-generated tests would be needed to reach 81% 2-way interaction coverage, a safe benchmark supported by research studies. You can clearly see which pairs are still missing and make concrete execution decisions based on the business risks & constraints (e.g. execute all 19 tests to reach 100%).
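To make the 2-way coverage percentage less abstract, here is a small sketch of how such a number can be computed for any suite. The model and tests are hypothetical and tiny; DesignWise’s own analysis is more sophisticated, but the underlying idea of counting covered value pairs is the same:

```python
from itertools import combinations

# Hypothetical parameter model for a simple banking application.
parameters = {
    "AccountType": ["Checking", "Savings"],
    "Channel": ["Web", "Mobile", "Branch"],
    "Amount": ["Low", "High"],
}

def pairwise_coverage(tests, parameters):
    """Fraction of all possible 2-way (parameter-value) pairs covered by `tests`."""
    names = sorted(parameters)
    # Enumerate every possible value pair across each pair of distinct parameters.
    all_pairs = {
        ((p1, v1), (p2, v2))
        for p1, p2 in combinations(names, 2)
        for v1 in parameters[p1]
        for v2 in parameters[p2]
    }
    # Collect the pairs each test actually exercises.
    covered = {
        ((p1, test[p1]), (p2, test[p2]))
        for test in tests
        for p1, p2 in combinations(names, 2)
    }
    return len(covered & all_pairs) / len(all_pairs)

tests = [
    {"AccountType": "Checking", "Channel": "Web",    "Amount": "Low"},
    {"AccountType": "Savings",  "Channel": "Mobile", "Amount": "High"},
]
print(pairwise_coverage(tests, parameters))  # 6 of 16 possible pairs covered
```

Each additional test can cover at most one new pair per parameter combination, which is why coverage climbs quickly at first and then hits diminishing returns.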
However, that is not the key conclusion. These 2 images evaluate the concept of building DesignWise tests on top of the existing ones to just close the coverage gaps. This approach ignores the potential benefits of completely remodeling the application inside DesignWise. Let’s prove the benefits of this alternate approach by looking at the plan we copied (with the forced interactions removed).
What if you let DesignWise select all the non-specified values for the 8 business rules that you had? As you go to the Analysis tab in that copied model, this is what you should notice:
DesignWise is able to scientifically detect the optimal way to select values for each test scenario and generate 26% more interaction coverage with the same number of tests. Consequently, you hit diminishing returns on coverage much sooner, and your total suite size is smaller (18 tests in this case).
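The intuition behind letting the tool pick the unspecified values can be sketched with a simple greedy strategy: for every blank, choose whichever value covers the most 2-way pairs not yet seen. This is only an illustrative toy, not DesignWise’s actual algorithm, and the parameter model is again hypothetical:

```python
from itertools import combinations

# Hypothetical banking-app model; names are illustrative, not from DesignWise.
parameters = {
    "AccountType": ["Checking", "Savings"],
    "Channel": ["Web", "Mobile", "Branch"],
    "Amount": ["Low", "High"],
}

def pairs_of(test):
    """All 2-way (parameter, value) pairs exercised by a (possibly partial) test."""
    return {((p1, test[p1]), (p2, test[p2]))
            for p1, p2 in combinations(sorted(test), 2)}

def fill_greedily(partial_tests, parameters):
    """Replace None placeholders with the value that adds the most unseen pairs."""
    covered = set()
    filled = []
    for test in partial_tests:
        test = dict(test)
        for name in sorted(parameters):
            if test[name] is None:
                def gain(value):
                    trial = {k: v for k, v in {**test, name: value}.items()
                             if v is not None}
                    return len(pairs_of(trial) - covered)
                test[name] = max(parameters[name], key=gain)
        covered |= pairs_of(test)
        filled.append(test)
    return filled

# Two business rules that only pinned down AccountType; the rest was "anything".
partial = [
    {"AccountType": "Checking", "Channel": None, "Amount": None},
    {"AccountType": "Savings",  "Channel": None, "Amount": None},
]
full = fill_greedily(partial, parameters)
```

Note how the second test avoids repeating the channel chosen for the first one, because a repeated channel would add fewer new pairs; that preference for unseen combinations is what drives the extra coverage at the same suite size.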