How to perform end-to-end testing in DesignWise
Learn about optimizing coverage, traceability, and the E2E scenario count in DesignWise, taking Guidewire implementation as an example.
- Core Testing Challenge
- DesignWise Solution & Modeling Process
- Building a Model – Step 1 – Parameter & Value Selection
- Building a Model – Step 2 – Generating Optimal Scenarios
- Building a Model – Step 3 – Coverage Results Comparison
- Building a Model – Step 4 – Scripting & Export
- Summary & Case Studies
Core Testing Challenge
With the growing scale and interconnectedness of modern systems comes greater difficulty in decision making, which leads to the major challenge in testing complex systems: “how much testing is enough?”
We will describe how these benefits come to life using a generalized example of a Guidewire suite implementation at a large insurance client (specifically, the “Auto Policy Bind and Rewrite” workflow).
DesignWise Solution & Modeling Process
Note 2: The execution/design split is almost never 100/0 in either direction, so the real decision is between, e.g., 70/30 and 30/70.
Note 3: Release history and changes to functional/integration testing can alter the approach decisions throughout the project lifecycle.
In this article, we will focus on a more “strategic” level (“Yes” for the first diamond) and will briefly touch upon the extra steps/considerations to achieve the “No -> Design” tree path.
Building a Model – Step 1 – Parameter & Value Selection
- Prioritize parameters that affect more than one system;
- Prioritize values heavily involved in business rule/integration triggers;
- Consider using value expansions for less impactful options (a brief sketch of this triage follows below).
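To make the prioritization concrete, here is a minimal Python sketch of that triage. All parameter and value names are hypothetical, and this is not DesignWise syntax; it only records which candidates deserve first-class modeling:

```python
# Hypothetical triage of candidate parameters for the Auto Policy model.
# "systems" counts how many systems a parameter touches: multi-system
# parameters are modeled first, while single-system, low-impact options
# become value-expansion candidates instead of standalone parameters.
candidates = {
    "Payment Method":      {"systems": 3, "values": ["ACH", "Credit Card", "Check"]},
    "Policy Duration":     {"systems": 2, "values": ["6 months", "12 months"]},
    "Delivery Preference": {"systems": 1, "values": ["Email", "Mail"]},
}

modeled = {name: c["values"] for name, c in candidates.items() if c["systems"] >= 2}
expansion_candidates = [name for name, c in candidates.items() if c["systems"] < 2]
print(modeled)                # parameters that earn a place in the model
print(expansion_candidates)   # folded into expansions or dropped
```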
Drop-down menus in the Policy Creation system itself can supply the majority of the variation ideas to get you started. Often, though, you will need to add further insight into the optimal values for elements such as dates, payment methods, or user/agent profiles.
Therefore, the stakeholders from each involved system should provide input on a) the integration factors that matter to them (e.g., “given our objectives in this set of tests, it is important to include policy duration and payment methods, but not delivery preferences”), and b) the appropriate level of detail for those factors (e.g., “when it comes to payment methods, it is the category of payment method that is most important to vary; be sure to include some scenarios with ACH and some with Credit Card”).
Those two points apply whether it is billing…
The “nested” values deserve a special mention:
Such nesting also lets test designers “connect” DesignWise models, since the same profile can be reused in the models responsible for parallel (creating Umbrella policies) or sequential (testing Renewals later) steps.
Value expansions can further serve as test data specification, if needed:
Repetitive parameters can be handled by appending the “extra detail” to the name:
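As an illustration only (this is not DesignWise’s internal format, and the concrete names are invented), value expansions and the “extra detail” naming for repetitive parameters might map out like this:

```python
# Abstract, categorical values mapped to execution-ready expansions that
# double as the test data specification.
value_expansions = {
    "Credit Card": ["Visa ending 4242", "Mastercard ending 4444"],
    "ACH":         ["Checking account 111000025"],
}

# Repetitive parameters handled by appending the extra detail to the name.
parameters = {
    "Driver 1 Age": ["18-25", "26-65", "65+"],
    "Driver 2 Age": ["N/A", "18-25", "26-65", "65+"],
}
```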
Building a Model – Step 2 – Generating Optimal Scenarios
Having said that, the algorithm would not have any way of knowing whether a specific combination involving, say, 10 values is important to include in a single “special” test. So the test designer, ideally collaborating with a subject matter expert, should force the inclusion of any specific “high-priority scenarios” into the generated suite.
To accomplish that, we use Forced Interactions to specify all the factors that constitute the “core” scenarios. For example, let us say we have 3 such high-priority use cases:
- Happy Path: Full Term, No change in premium, single driver, single car, monthly, etc.
- High complexity 1: New Business, Increase in premium, 2 drivers, 2 vehicles, etc.
- High complexity 2: Full Term, Decrease in premium, 1 driver, 2 vehicles, etc.
This is how they would look inside DesignWise:
Pro tip: one subtle trick for one-off testing in DesignWise is that Forced Interactions can override Constraints, and vice versa. You can use that workaround for scenarios that are deemed “very low probability” by the business but still need to be tested from an IT standpoint.
The last point at this step is to select an appropriate level of thoroughness for your needs. It is rare for E2E DesignWise models to utilize anything other than 2-way (at the “Strategic” level) or Mixed-strength (at any level); 3-way or higher coverage strengths are typically overkill (the counting sketch after the following list shows why). The dropdown settings in Mixed-strength are generally chosen based on the following parameter logic:
- Does it impact 2+ systems and have numerous rules/dependencies associated with it? -> Include using at least 2-way coverage selection.
- Does it impact 2+ systems and have few/no rules/dependencies associated with it? -> Include with 2-way selection given short value lists + value expansions or with 1-way otherwise.
- Does it impact only 1 system but have numerous rules/dependencies associated with it? -> Include with 1-way coverage selection and a fairly exhaustive list of values (because of the constraints).
- Does it impact only 1 system and have few/no rules/dependencies associated with it? -> Likely should not have been included in the model, but 1-way otherwise.
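One way to see why 3-way coverage is usually overkill is to count the interactions each strength must cover. The sketch below uses hypothetical round numbers (10 parameters, 4 values each):

```python
from math import comb

def interactions(params: int, values: int, strength: int) -> int:
    # Distinct value tuples a t-way suite must cover, assuming every
    # parameter has the same number of values.
    return comb(params, strength) * values ** strength

print(interactions(10, 4, 2))  # 720 pairs to cover at 2-way
print(interactions(10, 4, 3))  # 7680 triples at 3-way, roughly a 10x jump
```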
The resulting scenarios table could look like this:
Combining the right level of detail in Parameters with the business-relevant coverage strength in Scenarios lets the DesignWise algorithm keep your total model scope to a minimal number of tests that still cover all the important interactions (the toy example below illustrates the idea).
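To make the optimization tangible, here is a toy greedy pairwise generator. It is emphatically not DesignWise’s actual algorithm, and the parameters are an invented, simplified slice of the Auto Policy model, but it shows how a forced “Happy Path” seed plus 2-way coverage shrinks the suite well below the exhaustive count:

```python
from itertools import combinations, product

# Hypothetical, simplified slice of the Auto Policy model.
params = {
    "Term":     ["Full Term", "New Business"],
    "Premium":  ["Increase", "Decrease", "No change"],
    "Drivers":  ["1", "2"],
    "Vehicles": ["1", "2"],
}
names = list(params)
all_tests = [dict(zip(names, combo)) for combo in product(*params.values())]

def pairs_of(test):
    # Every 2-way (pairwise) interaction a single test covers.
    return {((a, test[a]), (b, test[b])) for a, b in combinations(names, 2)}

# Forced Interaction: the high-priority "Happy Path" scenario always goes in.
suite = [{"Term": "Full Term", "Premium": "No change",
          "Drivers": "1", "Vehicles": "1"}]
uncovered = set().union(*(pairs_of(t) for t in all_tests)) - pairs_of(suite[0])

while uncovered:
    # Greedily pick the test that covers the most not-yet-covered pairs.
    best = max(all_tests, key=lambda t: len(pairs_of(t) & uncovered))
    suite.append(best)
    uncovered -= pairs_of(best)

print(f"{len(suite)} tests instead of {len(all_tests)} exhaustive")
```

Even on this tiny model, the pairwise suite lands at a fraction of the 24 exhaustive combinations; on real E2E models with dozens of parameters, the gap grows dramatically.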
And next, we will discuss the last piece of the core testing “puzzle” – given the total scope, how we can use DesignWise visualizations to select the right stopping point.
Building a Model – Step 3 – Coverage Results Comparison
If we now analyze the coverage achieved across, e.g., 8 critical parameters and compare it with the typical manual solution, the results would often look like this:
Taking this analysis a step further, and given typical schedule deadlines, we can identify the exact subset of the total scope that is sufficient for the immediate testing goals, then communicate that decision clearly to management with the combination of the Mind Map and the Coverage Matrix.
Building a Model – Step 4 – Scripting & Export
We are seeing more and more teams switch to BDD, so this article covers DesignWise Automate; most of the general principles also apply to Manual Auto-Scripts.
First, the overall script structure is completely up to you. The number of steps, the length of each, the number of parameters per step, etc., depend on your guidelines for both test design and execution; DesignWise is flexible enough to support a wide range of preferences.
Second, for review and export efficiency, we will use {[]} filters to separate Full Term and New Business scenarios (assuming, for the sake of the example, that they have different validation steps).
The sequential time aspect can be accounted for with “On Day X, …” wording in the steps. System-to-system transitions can be reflected in a similar manner, as well as in commented-out lines. Parameters that did not “qualify” for model inclusion, along with static validations, can be hard-coded (i.e., you do not need the <> syntax in every line). Test data generated during execution can be captured using steps like this:
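Since the step pattern matters more than the exact wording, here is a minimal, hypothetical sketch of the capture idea in pytest-bdd style (the step text, fixture, and helper are all invented; a real suite would bind these steps to the exported Scenario blocks):

```python
import pytest
from pytest_bdd import parsers, then, when

@pytest.fixture
def context():
    # Shared scratchpad for values generated during execution.
    return {}

def bind_policy(payment_method: str) -> str:
    # Stand-in for the real PolicyCenter call that returns a generated ID.
    return f"POL-{abs(hash(payment_method)) % 100000:05d}"

@when(parsers.parse('the agent binds the policy using "{payment_method}"'))
def bind(context, payment_method):
    # Capture the policy number the system generates at execution time...
    context["policy_number"] = bind_policy(payment_method)

@then("the billing system receives the captured policy number")
def check_billing(context):
    # ...and reuse it in a later cross-system validation step.
    assert context["policy_number"].startswith("POL-")
```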
Lastly, we strongly recommend sharing the DesignWise models with automation engineers early in the process to allow them enough time to review and provide feedback on step wording, value names, etc.
Note: when value expansions are present, only they are exported in the script (the same is true for CSV), so you can keep abstract/categorical value names in the model and provide the execution-ready details in the expansions.
Once the script is finalized, you can export the scenarios in a format compatible with your test management tools and/or automation frameworks. For example, without any extra actions, you can generate the CSV for Xray alongside Java files.
This step enables accelerated, optimized automation because you can:
- Rapidly create clear, consistent steps that leverage Behavior Driven Development principles.
- Export 1 Scenario block into multiple scripts based on the references to the data table.
- Improve collaboration across business and technical teams to understand the testing scope as a group.
Building a Model – What Is Different for the “No -> Design” Decision Tree Path
First, the “hard” DesignWise limits are 200 parameters and 4000 tests per model. Highly detailed models require non-traditional DesignWise parameters (e.g., test data elements, more expected results) that can exhaust those limits fairly quickly, so prioritizing the scope and balancing design against execution becomes even more critical.
Second, the extension requires more attention to how parameters and values are organized (e.g., value vs. value expansion, nested vs. standalone) and to the mixed-strength settings. Keep in mind that even a single additional parameter, if it has a long list of values and a 2-way strength setting, will cause a disproportionately large increase in the scenario count (the sketch below makes this concrete).
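A quick back-of-envelope check: in any 2-way suite, every pairing between the two longest value lists must appear at least once, so those two lists alone set a floor on the test count (the sizes below are hypothetical):

```python
def pairwise_floor(domain_sizes: list[int]) -> int:
    # Every combination of the two largest value lists must be paired at
    # least once, so their product is a hard lower bound on the suite size.
    a, b = sorted(domain_sizes, reverse=True)[:2]
    return a * b

print(pairwise_floor([5, 4, 3, 3]))      # floor of 20 tests
print(pairwise_floor([5, 4, 3, 3, 20]))  # one 20-value parameter -> floor of 100
```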
Lastly, additional model elements may require more precise scripting (if conditional or “vague” steps are not an option). It becomes even more important to keep track of 1) which {[]} filters are used; 2) how mixed-strength affects the possible combinations of {[]} filters (you may not need to create a scenario block for each possible combo).
Summary & Case Studies
The image above should be familiar from our other educational materials, and hopefully, it underscores the notion that the process & methodology are not strongly dependent on the type of testing, type of system, industry, etc.
The goal of applying DesignWise is to address the familiar challenges of manual test creation: prolonged and error-prone scenario selection, gaps in test data coverage, tedious documentation, and excessive maintenance.