Testing is a crucial part of the software development process. It ensures that your code works correctly and that defects are found before the software lands in the hands of customers. But it can be difficult to explain these tests and their results to investors...
Who tests the software at your company? Is it people trained to test software? Maybe people who have been trained to write test cases and execute the steps as written? What if they are people who have NOT been trained to write test cases? What if they are given steps to follow but they don’t always follow them precisely?
What if “testing” is not something they ever expected to do? What if they don’t work in IT or software development? Do you still call them “testers?”
Over the last several years, a growing number of companies, large and small, established and new, have been doing something different from the usual. Instead of having testing done by people trained to test and expected to do testing-related work, they are having it done by other people.
The “State of Testing, 2020” report compiled by Tea Time with Testers shows this interesting trend. More organizations are looking to people who understand the business needs and functions to test their software than to “testers.” In some ways this makes a great deal of sense.
Here’s how you can get the most benefit from this trend.
Identify the value of business expertise
The first response from many "traditional" testers or software professionals is often something like "Well, sure. But they aren't trained to follow scripts and can't do any really technical work." Do we really want to require people to "follow scripts" all the time? Does that lead to good testing work, in and of itself?
As for the technical aspects, how many “manual” testers are comfortable running SQL queries against the database? Are they comfortable digging into system logs to find evidence of behaviors not reflected in the UI? Are they comfortable moving from one tool to another to help them examine parts of the system that might not be examined through “following a script?”
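The kind of check described above, looking behind the UI at what the database actually recorded, can be quite simple. Here is a minimal sketch using an in-memory SQLite database; the `orders` table and its columns are hypothetical, standing in for whatever schema your system uses.

```python
import sqlite3

# Hypothetical setup: stand-in for the application's real database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
conn.execute("INSERT INTO orders VALUES (1001, 'SUBMITTED')")

# The kind of query a tester might run after exercising the UI,
# to confirm the order really landed with the expected status.
row = conn.execute(
    "SELECT status FROM orders WHERE id = ?", (1001,)
).fetchone()
print(row[0])
```

A tester comfortable with a query like this can confirm behaviors the UI never shows, which is exactly the point being made above.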
Loads of people learn about things by experimentation. When you get a new mobile phone or laptop do you read the documentation available? If there is no user guide or “quick start” guide included in the packaging do you go online and look them up? Does anyone read all the tips and tricks on a device before doing anything with it?
I do not. Most people I know don’t. Instead, we apply what we already know from previous experience and look to how this device behaves in light of that. We know how to use a phone. We know how to use a laptop or computer. We have our preferences based on comfort and experience.
We start exercising the new device and comparing it against our experience with other devices and expectations. We look for the behavior of the device against the model of our experience. In effect, we are testing the new device.
People applying knowledge gained from working in a variety of roles can likewise evaluate software intended to meet their needs and expectations. They can test the software based on their understanding of the business processes which need to be achieved.
Avoid the “training” trap
A common statement about experienced business users doing testing is they “need to be trained” to use the software. A case can be made that when the new software is radically different from what they have worked on, some training might be needed. Following a “cookbook” collection of scripts does little toward actually testing the software.
A short tutorial on how the new system works and how the pieces interact, including how it differs from the old system, is often more effective training. Explaining the differences, then watching and gently guiding people as they work through broadly stated scenarios, helps them both learn the new software faster and test it more effectively.
However, a fair number of organizations try to combine “training” and “testing” into the same activity. The “testers” are instructed to follow scripts to “test” the software. The theory is they will learn to use the new system correctly.
The trap is that they are focused on what they are “supposed” to do and not what the software is supposed to do. They are not actually testing the software.
A better approach starts with a demonstration of the task, which for most adults is a good way to show how something is done. Then let people learn and experiment around what they need to do for their work. This gives them a chance to apply what they were shown in the demonstration to their actual work.
People who know the business needs and workflows can very quickly transition to actually testing the new system instead of following a step-by-step recipe in order to "learn."
For some organizations, this concept presents a challenge in managing the process of testing. There is a persistent belief that detailed scripts will always provide measurable proof of progress and efficacy of testing. After all, if these scripts find defects at specific steps, they appear to show value.
The most common objection I hear when I suggest moving away from detailed scripts to companies using business experts to test new software is that, without the scripts, they don't have a good way to measure what is being done or which areas are showing problems.
Measure progress without detailed scripts
Here is what I have done to address the need for measurement. Begin with a list of the tasks people need to do their work. This likely will require conversations with multiple business areas to build a "punch list" of high- and mid-level tasks. That gives you a list which can be worked through with a "How do I do THIS?" focus. It also builds an in-depth understanding of the work itself and allows for creation of targeted training material if it does not already exist.
The list of high and mid-level tasks also becomes the measure for progress and system readiness. The people working on testing the software are the same ones who will need to use it after it “goes live.” If they are satisfied that each function they need to be successful works as they need it to, testing for those tasks is complete.
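One lightweight way to track that punch list is a simple readiness checklist: each task is marked complete only when the business expert is satisfied it works. The task names below are hypothetical placeholders, a sketch rather than a prescribed format.

```python
# Minimal punch-list sketch: each high- or mid-level business task is
# marked True only once the business expert confirms it works as needed.
punch_list = {
    "Create a new customer account": True,   # hypothetical tasks
    "Enter and submit an order": True,
    "Issue a refund": False,
    "Run the end-of-day report": False,
}

done = sum(punch_list.values())
total = len(punch_list)
print(f"Testing progress: {done}/{total} tasks verified")
print("Still open:", [task for task, ok in punch_list.items() if not ok])
```

Even something this simple answers the measurement objection: it shows progress, readiness, and exactly which areas are still showing problems.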
Borrowing the idea of “Just In Time” and making it “Just Enough” training, have an expert in the new system available to answer questions after there have been some demonstrations and basic exercises. You can also have testing experts available to answer questions and help communicate problems found to the development team.
Apply business expertise to test automation
The business experts testing the application also provide real scenarios that can be incorporated into scenario-based automated tests. These can be built and used as models for regression testing and scenario-based smoke tests.
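A business scenario captured this way might translate into an automated check like the following sketch. The workflow ("a clerk places an order, then voids it") and all function and field names are hypothetical, standing in for a real business expert's scenario and your system's actual API.

```python
# Sketch of a scenario-based regression test built from a business
# expert's described workflow. Names are illustrative only.

def place_order(orders, order_id, amount):
    """Record a new order as placed."""
    orders[order_id] = {"amount": amount, "status": "PLACED"}

def void_order(orders, order_id):
    """Mark an existing order as voided."""
    orders[order_id]["status"] = "VOIDED"

# The scenario, step by step, as the clerk described it:
orders = {}
place_order(orders, "A-100", 25.00)
assert orders["A-100"]["status"] == "PLACED"
void_order(orders, "A-100")
assert orders["A-100"]["status"] == "VOIDED"
print("scenario passed")
```

Because the steps come from a real workflow rather than an invented test case, a passing run tells the business something meaningful: the path their people actually use still works.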
By combining business and testing expertise, organizations can build realistic test scenarios covering the most common and most critical points of interest for the business. Testers can help structure the work and identify risk areas that might not occur to someone without testing experience. However, the bulk of the testing will be done by people with an eye to what is needed for them and their customers.
This type of working environment builds a partnership between the IT and testing groups and the people who use the software every day. It helps both sides feel more connected to the challenges each faces and see how they can help each other overcome them, improving their work, their product, and their organization.