Sample Test Cases – Multi-Market Stock Trading
Learn how an optimized set of scenarios for a stock trading application is efficiently generated in DesignWise.
What are our testing objectives?
We know that testing each item in our system once is not sufficient; interactions between different things in our system (such as a particular credit rating range interacting with a specific type of property) could well cause problems. Similarly, we know that the written requirements document will be incomplete and will not identify all of those potentially troublesome interactions for us. As thoughtful test designers, we want to be smart and systematic about testing for potential problems caused by interactions without going off the deep end and trying to test every possible combination.
DesignWise makes it quick and simple for us to select an appropriate set of tests, whatever time pressure might exist on the project and whatever testing thoroughness requirements we might have. DesignWise-generated tests automatically maximize variation, maximize testing thoroughness, and minimize wasteful repetition.
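The intuition behind "testing interactions without testing every combination" can be sketched in a few lines of Python. The parameter names below are loosely borrowed from this sample plan; the value lists are illustrative stand-ins, not the actual model.

```python
from itertools import combinations, product

# Illustrative parameters (a small subset of a real trading model).
parameters = {
    "User Type": ["New", "Existing"],
    "Channel": ["Web", "Phone"],
    "Purchase or Sale?": ["Purchase", "Sale"],
    "Market or Limit Order?": ["Market", "Limit"],
}

# Exhaustive testing: every combination of every value.
exhaustive = list(product(*parameters.values()))
print(len(exhaustive))  # 2 * 2 * 2 * 2 = 16 scenarios

# Pairwise (2-way) testing only requires that every pair of values
# appears together in at least one test.
pairs = set()
for (p1, values1), (p2, values2) in combinations(parameters.items(), 2):
    for v1 in values1:
        for v2 in values2:
            pairs.add(((p1, v1), (p2, v2)))
print(len(pairs))  # 24 distinct pairs, coverable by far fewer than 16 tests
```

Even on this toy model the gap is visible; with realistic parameter counts the exhaustive number explodes while the pair count stays manageable.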
What interesting DesignWise features are highlighted in this sample plan description?
Forced Interactions – How to force certain high priority scenarios to appear in your set of tests.
Auto-Scripting – How to save time by generating detailed test scripts in the precise format you require (semi)-automatically.
Coverage Graphs – How to get fact-based insights into “how much testing is enough?”.
Matrix Charts – How to tell which exact coverage gaps would exist in our testing if we were to stop executing tests at any point before the final DesignWise-generated test.
Using DesignWise’s “Coverage Dial” – How to generate sets of thorough 2-way tests and/or extremely thorough 3-way tests in seconds.
With particular emphasis on this “test design superpower” feature:
Risk-Based Testing / Mixed-Strength Test Generation – How to focus extra testing thoroughness SELECTIVELY on the high-priority interactions we identify.
What interesting test design considerations are raised in this particular sample plan?
In this sample plan, we have a very critical parameter: Transaction Exchange (Country). This parameter determines the exchange used for the orders being placed. If one of the available exchanges is not functioning, it can cause not only headaches for the traders but potentially millions of dollars in lost profit. The testers are therefore required to test every single exchange against all of the other parameter values.
The parameter Transaction Exchange (Country) has fourteen values, and one of those values (US) even has a value expansion. Read on to see how we ensure even greater coverage of this particular parameter.
It is often useful to start by identifying a verb and a noun for the scope of our tests
Designing powerful software tests requires people to think carefully about potential inputs into the system being tested and how they might impact the behavior of the system. We strongly encourage test designers to start with a verb and a noun to frame a sensible scope for a set of tests and then ask the “newspaper reporter” questions: who? what? when? where? why? how? and how many?
Who / When
- Who trades the stocks? (e.g., what is the user type)
- When will they trade stock? (e.g., good-till-date used)
- When was the order created? (e.g., trade a previously saved order)
How / How Many
- How large of a transaction will they trade?
- How do they trade stock? (online, call in, etc.)
- How many transactions are placed per order?
What / What Kind
- What authorizations do they have to trade stock?
- What kind of changes can be made to an existing order?
- What kind of order is it? (buy or sell)
- What type of order is it? (Market or Limit order)
Where
- Where do they trade stock? (e.g., what country is the exchange located in)
Variation Ideas entered into DesignWise’s Parameters screen
Once we have decided which test conditions are important enough to include in this model (and excluded things – like “What shirt do they wear when trading stock?” – that will not impact how the system being tested operates), DesignWise makes it quick and easy to systematically create powerful scenarios that will allow us to maximize our test execution efficiency.
Once we enter our parameters into DesignWise, we simply click on the “Scenarios” link in the left navigation pane.
DesignWise helps us identify a set of high priority scenarios within seconds
DesignWise gives test designers control over how thorough they want their testing coverage to be. In this case, it allows testers to quickly generate dozens, hundreds, or thousands of tests using the “coverage dial.” If you have very little time for test execution, you would find those 87 pairwise tests to be dramatically more thorough than a similar number of tests you might select by hand. If you had much more time for testing, you could quickly generate an even more thorough set of 3-way tests (as shown in the screen shot immediately below).
Selecting “3-way interactions” generates a longer set of tests which cover every single possible “triplet” of Values
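Why is the 3-way set so much longer? Because there are far more triplets to cover than pairs. A short sketch makes the arithmetic concrete; the value counts below are illustrative, with the 14-value entry standing in for Transaction Exchange (Country).

```python
from itertools import combinations
from math import prod

# Illustrative value counts for a six-parameter model; the 14-value
# entry stands in for Transaction Exchange (Country) in this plan.
value_counts = [14, 2, 2, 2, 3, 4]

def n_way_targets(counts, n):
    """Number of distinct n-tuples of values (one value drawn from each
    of n different parameters) that an n-way test set must cover."""
    return sum(prod(combo) for combo in combinations(counts, n))

print(n_way_targets(value_counts, 2))  # 248 pairs to cover
print(n_way_targets(value_counts, 3))  # 1088 triplets -- far more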
The only defects that could sneak by this set of tests would be these two kinds:
- 1st type – Defects that are triggered by things not included in your test inputs at all (e.g., if special business rules should be applied to an applicant living in Syria, that business rule would not be tested because that test input was never included in the test model). This risk is present every time you design software tests, whether or not you use DesignWise.
This risk is, in our experience, much larger than the second type of risk:
- 2nd type – Extraordinarily unusual defects that would be triggered if and only if 4 or more specific test conditions all appeared together in the same scenario, e.g., if the only way a defect occurred was when a trade was placed (i) on the Web by a (ii) New user trading in (iii) Hong Kong and the transaction was (iv) $100,001. It is extremely rare for defects to require 4 or more specific test inputs to appear together; many testers test software for years without seeing such a defect.
If a tester spent a few days trying to select tests by hand that achieved 100% coverage of every single possible “triplet” of Values (such as (i) Existing User, (ii) Thailand exchange, and (iii) Sale), the following results would probably occur:
- It would take the tester far longer to select a similarly thorough set of tests by hand, and the attempt would accidentally leave many, many coverage gaps.
- The tester trying to select tests by hand to match this extremely high “all triples” thoroughness level would create far more than 505 tests (which is the optimized solution, shown above).
- Almost certainly, if the tester tried to achieve this coverage goal in 600 or fewer tests, there would be many, many gaps in coverage (e.g., 3-way combinations of Values that the tester accidentally forgot to include).
- Finally, unlike the DesignWise-generated tests, which systematically minimize wasteful repetition, many of the tester’s hand-selected scenarios would probably be highly repetitive from one test to the next; that repetition would result in a great deal of wasted effort during the test execution phase.
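To show the kind of optimization a tool automates, here is a toy greedy 2-way (pairwise) generator: it repeatedly picks the candidate test that covers the most still-uncovered pairs. The parameters are illustrative, and DesignWise's actual algorithm is proprietary and far more sophisticated than this sketch.

```python
from itertools import combinations, product

# Illustrative parameters loosely based on this sample plan.
parameters = {
    "Purchase or Sale?": ["Purchase", "Sale"],
    "Market or Limit Order?": ["Market", "Limit"],
    "User Type": ["New", "Existing"],
    "Channel": ["Web", "Phone", "Broker"],
}
names = list(parameters)

# Every pair of values that a pairwise test set must cover (30 here).
uncovered = set()
for i, j in combinations(range(len(names)), 2):
    for v1 in parameters[names[i]]:
        for v2 in parameters[names[j]]:
            uncovered.add((i, v1, j, v2))

# Greedy loop: pick whichever candidate covers the most uncovered pairs.
tests = []
while uncovered:
    best, best_covered = None, set()
    for cand in product(*parameters.values()):
        covered = {p for p in ((i, cand[i], j, cand[j])
                               for i, j in combinations(range(len(names)), 2))
                   if p in uncovered}
        if len(covered) > len(best_covered):
            best, best_covered = cand, covered
    tests.append(best)
    uncovered -= best_covered

# All 30 pairs get covered in a handful of tests, not 2 * 2 * 2 * 3 = 24.
print(len(tests))
```

Even this naive greedy pass lands close to the optimum on a small model; doing the same bookkeeping by hand across hundreds of pairs is where hand-selection falls apart.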
You’ll notice from the screen shots of 2-way tests and 3-way tests shown above that some of the Values in both sets of tests are bolded. Those bolded Values are the Values we “forced” DesignWise to include by using this feature.
Auto-scripting allows you to turn scenario tables (from the “Scenarios” screen) into detailed test scripts
We document a single test script in detail from beginning to end. As we do so, we indicate where our variables (such as “Channel,” “User Type,” and “Purchase or Sale?”) appear in each sentence. That’s it. As soon as we document a single test in this way, we’re ready to export every one of our tests.
From there, DesignWise automatically modifies the single template test script we created and inserts the appropriate Values into every test in our plan (whether the plan has 10 tests or 1,000).
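The idea behind auto-scripting can be sketched as simple template substitution: one templated script plus a table of generated scenarios yields one detailed script per test. The placeholder syntax, field names, and wording below are illustrative, not DesignWise's actual export format.

```python
# One templated test script; {placeholders} mark where variables go.
template = (
    "Log in as a(n) {user_type} user via the {channel} channel, "
    "then place a {transaction_type} order."
)

# Two rows from a generated scenario table (field names hypothetical).
scenarios = [
    {"user_type": "New", "channel": "Web", "transaction_type": "Purchase"},
    {"user_type": "Existing", "channel": "Phone", "transaction_type": "Sale"},
]

# Expand the single template into one detailed script per scenario.
scripts = [template.format(**row) for row in scenarios]
for s in scripts:
    print(s)
```

Because the scripts are derived from the scenario table, regenerating the table regenerates every detailed script for free.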
We can even add simple Expected Results to our detailed test scripts
It is possible to create simple rules, using the drop-down menus in this feature, that determine when a given Expected Result should appear, such as: “When ____ is ____ and when ____ is not ____, then the Expected Result would be ____.”
This Expected Results feature makes it easy to maintain test sets over time because rules-based Expected Results will automatically update and adjust as test sets get changed over time.
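A minimal sketch of such a rules-based Expected Result, following the "When ____ is ____ and when ____ is not ____" shape described above. The rule representation, field names, and result text are hypothetical, chosen only to illustrate why rule-driven results survive test-set regeneration.

```python
def expected_result(scenario, rules):
    """Return the Expected Result of the first rule whose "is" and
    "is not" conditions both hold for this scenario, else None."""
    for rule in rules:
        is_ok = all(scenario.get(f) == v for f, v in rule["is"].items())
        is_not_ok = all(scenario.get(f) != v for f, v in rule["is_not"].items())
        if is_ok and is_not_ok:
            return rule["then"]
    return None

# One illustrative rule (hypothetical wording).
rules = [
    {"is": {"Market or Limit Order?": "Limit Order"},
     "is_not": {"Purchase or Sale?": "Sale"},
     "then": "The order is queued until the limit price is reached."},
]

scenario = {"Market or Limit Order?": "Limit Order",
            "Purchase or Sale?": "Purchase"}
print(expected_result(scenario, rules))
```

Since the result is computed from each scenario's Values rather than typed into each test, any regenerated or reordered test set picks up the correct Expected Results automatically.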
Coverage charts allow teams to make fact-based decisions about “how much testing is enough?”
This graph, and the additional charts shown below, provide teams with insights into “how much testing is enough?” They clearly show that the amount of learning and coverage gained from executing the tests at the beginning of the test set is much higher than the learning and coverage gained by executing the tests toward the end of the test set. This type of diminishing marginal return is very often the case with scientifically optimized test sets such as these.
DesignWise tests are always ordered to maximize the testing coverage achieved in however much time there is available to test. Testers should generally execute the tests in the order that they are listed in DesignWise; doing this allows testers to stop testing after any test with the confidence that they have covered as much as possible in the time allowed.
We know we would achieve 80.2% coverage of the pairs in the system if we stopped testing after test number 37, but which specific coverage gaps would exist at that point? See the matrix chart below for that information.
The matrix coverage chart tells us exactly which coverage gaps would exist if we stopped executing tests before the end of the test set
For example, in the first 37 tests, there is no scenario that includes both (a) “Customer Authorization Limit – 1 – 5000” together with (b) “Transaction Exchange (Country) – Indonesia.”
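Both views (the cumulative coverage graph and the matrix-chart gap report) can be derived from an ordered test set with a little bookkeeping. The tiny three-parameter model and the six hand-written tests below are illustrative, not taken from the actual plan.

```python
from itertools import combinations

# Illustrative model.
parameters = {
    "User Type": ["New", "Existing"],
    "Channel": ["Web", "Phone"],
    "Purchase or Sale?": ["Purchase", "Sale"],
}
names = list(parameters)

# All 12 pairs of values that must eventually be covered.
all_pairs = set()
for i, j in combinations(range(len(names)), 2):
    for v1 in parameters[names[i]]:
        for v2 in parameters[names[j]]:
            all_pairs.add((i, v1, j, v2))

# An ordered test set (each row is one scenario).
tests = [
    ("New", "Web", "Purchase"),
    ("Existing", "Phone", "Sale"),
    ("New", "Phone", "Sale"),
    ("Existing", "Web", "Purchase"),
    ("New", "Web", "Sale"),
    ("Existing", "Phone", "Purchase"),
]

def pairs_of(row):
    return {(i, row[i], j, row[j])
            for i, j in combinations(range(len(names)), 2)}

# The coverage graph: cumulative % of pairs covered after each test.
covered = set()
for n, row in enumerate(tests, start=1):
    covered |= pairs_of(row)
    print(f"after test {n}: {100 * len(covered) / len(all_pairs):.1f}% of pairs")

# The matrix chart: exactly which pairs would remain uncovered if we
# stopped after test 4.
covered_at_4 = set().union(*(pairs_of(row) for row in tests[:4]))
gaps = all_pairs - covered_at_4
print(sorted(gaps))
```

On this toy set the first two tests already cover half the pairs while the last two add only one new pair each, which is exactly the diminishing-return shape the graph above illustrates.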
Risk-Based Testing Feature
In this example, we want to test every single possible combination involving (a) Purchase or Sale, (b) Transaction Exchange (Country), (c) Market or Limit Order, and the (d) Size of Transaction. That’s because each Exchange needs to have the primary transaction details tested very thoroughly. Stakeholders (and loud, bossy, hostile ones, at that) have made it extremely clear that, whatever else we do, we NEED to be VERY sure to test EVERY 4-way interaction involving these 4 “high priority” parameters. We can’t forget to test any of those 4-way combinations.
For the avoidance of doubt, one such high-priority combination would be:
Market Order or Limit Order? = Limit Order tested together with
Size of Transaction = 5001 – 100000 and also tested together with
Purchase or Sale? = Purchase and also tested together with
Transaction Exchange (Country) = Thailand
If we didn’t know any better, we might try to generate a complete set of EVERY possible 4-way combination (e.g., including not only those high-priority 4-way combinations but ALL 4-way combinations). But that would create a lot more tests than we need to achieve this particular coverage goal.
No problem. We can achieve all three of our goals with the Mixed-Strength feature:
- Far fewer than 1,961 tests (which was the number of tests required to achieve 4-way coverage of every possible test input, regardless of priority-level of the inputs).
- 100% coverage of every possible 4-way combination involving the super-high-priority variables.
- 100% coverage of all the pairs of the “normal priority” variables.
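A few lines of Python show why targeting only the high-priority 4-way combinations is so much cheaper than brute-forcing all of them. The value counts here are illustrative: the four high-priority parameters mirror the ones named above, and the "normal" parameters are stand-ins for the rest of the model.

```python
from itertools import combinations
from math import prod

# Illustrative value counts (not the actual DesignWise model).
high_priority = {"Purchase or Sale?": 2,
                 "Transaction Exchange (Country)": 14,
                 "Market or Limit Order?": 2,
                 "Size of Transaction": 3}
normal = {"User Type": 2, "Channel": 3, "Good-till-date?": 2}

# Every 4-way combination restricted to the high-priority parameters:
targeted = prod(high_priority.values())
print(targeted)  # 2 * 14 * 2 * 3 = 168 combinations that MUST appear

# ...versus every possible 4-way combination across the whole model:
counts = list(high_priority.values()) + list(normal.values())
all_4way = sum(prod(c) for c in combinations(counts, 4))
print(all_4way)  # 3896 targets -- most of them low-priority
```

Mixed-strength generation guarantees the 168 must-have combinations (plus all pairs everywhere else) without paying for the thousands of low-priority 4-way targets.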
We select “Mixed-strength interactions” from the drop down menu.
Booya! Newly-updated plan that focuses extra thoroughness in a very smart way. This new plan has: (a) less than a third as many tests as our “brute force” 4-way test solution had, (b) 100% coverage of every targeted, high-priority 4-way interaction, AND (c) – as usual – 100% coverage of every pair of Values!
Mind maps can be exported from this DesignWise plan to facilitate stakeholder discussions.
Detailed test scripts (complete with stepped-out tester instructions and rule-generated Expected Results) can be exported also:
Other possible export formats could include test data tables in either CSV or Excel format or even Gherkin-style formatting.