How to perform request/response validation in DesignWise
Learn about two ways to design data validation models in DesignWise for use cases like consumer-driven contract testing
In this article, let’s talk about data validation as it relates to communication between systems – using this guide by Pact as a reference (note, however, that these DesignWise methods are neither limited nor specific to contract testing).
We will leverage a slightly modified setup. There are still a consumer (Order Web) and its provider (the Order API), and we will still be submitting a request for product information from the Web to the API, but we will “enrich” the attributes a bit:
- Market (e.g. NA, EU)
- Product Category (e.g. A, B)
One or both factors need to be present for a successful request
(keeping in mind that we care more about structure and format compatibility than about business-logic calculations)
- Quantity (e.g. whole, partial)
- Value (e.g. whole, decimal)
- Date last sold (e.g. mm/dd/yyyy format, dd/mm/yyyy format)
- Primary vendor ID (e.g. company itself, partner, unrelated 3rd party)
- Status (200 for the scope of this article, but other non-error statuses could be included in the same model with relevant triggers)
Some products are new and have not been evaluated and/or sold yet
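To make the structure-and-format focus concrete, here is a minimal sketch of what such an enriched response and a structural validator could look like. All field names (`market`, `product_category`, `date_last_sold`, etc.) and the validation rules are illustrative assumptions, not taken from the Pact guide or from DesignWise itself.

```python
import re

# Hypothetical shape of the enriched Order API response;
# field names and values are illustrative assumptions.
SAMPLE_RESPONSE = {
    "market": "NA",                  # e.g. NA, EU
    "product_category": "A",         # e.g. A, B
    "quantity": 3,                   # whole or partial (float)
    "value": 19.99,                  # whole or decimal
    "date_last_sold": "12/31/2024",  # may be None for new products
    "primary_vendor_id": "partner",  # company itself, partner, 3rd party
    "status": 200,
}

# mm/dd/yyyy with basic range checks on month and day
DATE_MMDDYYYY = re.compile(r"^(0[1-9]|1[0-2])/(0[1-9]|[12]\d|3[01])/\d{4}$")

def validate_structure(resp: dict) -> list[str]:
    """Return a list of structural problems; an empty list means the profile is valid."""
    problems = []
    # One or both of market / product_category must be present
    if not (resp.get("market") or resp.get("product_category")):
        problems.append("market or product_category must be present")
    # Date may be absent for new products, but if present must match the format
    if resp.get("date_last_sold") is not None and not DATE_MMDDYYYY.match(resp["date_last_sold"]):
        problems.append("date_last_sold is not in mm/dd/yyyy format")
    if not isinstance(resp.get("quantity"), (int, float)):
        problems.append("quantity must be numeric")
    if resp.get("status") != 200:
        problems.append("unexpected status")
    return problems
```

Note that the validator checks only structure and format compatibility, mirroring the scoping note above; whether the values are *correct* business-wise is out of scope.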
Modeling in DesignWise
Approach 1 – Whole response profile per test case
Each row in the Scenarios table would describe all parameterizable response attributes for a given request combination.
One script with a “Then” line per attribute would cover all scenarios:
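As a hedged sketch of Approach 1: each row carries the full expected response profile for one request combination, and a single check walks every attribute – the equivalent of one “Then” line per attribute inside one test case. Row values and field names here are illustrative assumptions, not DesignWise output.

```python
# Approach 1 sketch: each row is a whole expected response profile
# for one generated request combination (values are illustrative).
ROWS = [
    {"market": "NA", "product_category": "A", "quantity": 5, "value": 10.0,
     "date_last_sold": "01/15/2024", "primary_vendor_id": "company", "status": 200},
    {"market": "EU", "product_category": "B", "quantity": 2.5, "value": 7,
     "date_last_sold": "15/01/2024", "primary_vendor_id": "partner", "status": 200},
]

def check_whole_profile(actual: dict, expected: dict) -> list[str]:
    """Validate every attribute of one profile in a single test case.

    Returns one mismatch message per failing attribute (empty list = pass),
    mirroring a script with one 'Then' line per attribute.
    """
    return [
        f"{key}: expected {exp!r}, got {actual.get(key)!r}"
        for key, exp in expected.items()
        if actual.get(key) != exp
    ]
```

Because all attributes are asserted within the same test case, a single execution can surface interaction defects between attributes, which is the main advantage listed in the pros below.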
Approach 2 – Attribute per test case
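A minimal sketch of Approach 2, under the same assumptions as above: each generated test case targets exactly one response attribute, so a failure pinpoints that attribute without re-executing the rest of the profile. The checks themselves are illustrative.

```python
# Approach 2 sketch: one test case = one attribute validation.
# The per-attribute checks below are illustrative assumptions.
ATTRIBUTE_CHECKS = {
    "market": lambda v: v in {"NA", "EU"},
    "product_category": lambda v: v in {"A", "B"},
    "quantity": lambda v: isinstance(v, (int, float)) and v >= 0,
    "status": lambda v: v == 200,
}

def run_attribute_case(resp: dict, attribute: str) -> bool:
    """Execute a single 'componentized' test case against one response attribute."""
    return ATTRIBUTE_CHECKS[attribute](resp.get(attribute))
```

If only one attribute changes in a later API version, only its test case needs re-execution – the flexibility noted in the pros below.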
Approach 1 – Pros:
- If there is any validation dependency between response attributes, this approach has a much higher chance of catching defects.
- Less vulnerable to setup costs per TC (i.e., in an absurd example, if each test requires a unique API token that costs $1000, then executing a test per response profile is much cheaper than a test per attribute).
Approach 1 – Cons:
- More complex and less flexible execution-wise (i.e. more steps to get to the end of the scenario).
- More vulnerable to test data availability (if the request is sent against a real database, or a mock built only from a production sample, the “free” combinations that the DesignWise algorithm generates may result in “record not found” too often).
Approach 2 – Pros:
- Quicker and more flexible execution of “componentized” TCs (i.e. if only one API response attribute changes, you don’t need to re-execute all the other steps just to get to that one).
- Less vulnerable to test data availability (if a valid standalone attribute value is not present in the mock/real database, that’s probably not a good sign and should be solved separately).
Approach 2 – Cons:
- Will have a much lower chance of catching any interaction defects (e.g. if Value is not retrieved correctly only when an unrelated 3rd-party vendor is involved).
- More vulnerable to setup costs per TC (higher total setup cost in the “$1000 per token” example above).
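To make the setup-cost trade-off concrete, a quick back-of-the-envelope calculation using the hypothetical “$1000 per token” figure from the pros/cons above, with assumed counts of generated request combinations and validated attributes:

```python
TOKEN_COST = 1000  # hypothetical per-test setup cost from the example above
PROFILES = 10      # assumed number of generated request combinations
ATTRIBUTES = 7     # assumed number of response attributes under validation

# Approach 1: one token per whole-profile test case
approach_1_cost = PROFILES * TOKEN_COST            # 10 * 1000 = 10,000

# Approach 2: one token per attribute-level test case
approach_2_cost = PROFILES * ATTRIBUTES * TOKEN_COST  # 10 * 7 * 1000 = 70,000
```

Under these assumed numbers, Approach 2’s setup cost scales with the attribute count, which is exactly why it is “more vulnerable to setup costs per TC.”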
Extra consideration: a shared DesignWise model can serve as another collaboration artifact between the consumer and the provider, which could help uncover mismatched expectations between the two sides much faster.