Maybe I’m reading the wrong articles. Maybe I am listening to the wrong podcasts. Maybe I am finding the wrong sources and references.
Why do I think I’m checking all the wrong sources and “experts”? Because loads of them will say things that sound great. The problem is, when it comes to the actual doing of those things the great sounding stuff suddenly gets vague.
Testing is supposed to “provide value.” Software testers are supposed to be the ones who “provide value.” Except, somehow, these same experts usually fail to say how testers are supposed to do that.
Traditional Scripted Testing
I work hard to understand testing. I also work hard to understand concepts like quality. I work hard to understand how testing is done well, and how the quality of a product can be improved.
When I look for ideas on how I can become better at these I often come across writers, speakers, pundits, and “experts” who advocate ideas I have heard over many years. These ideas might be valid in some situations. They are not all universal. They do not really tell us anything about adding value, except by inference.
What do I mean? The list often looks something like this:
- Software testers find bugs
- Software testers verify conformance to requirements
- Software testers validate functions
- Software testers improve quality
There are different versions of these ideas; they may be expressed in different ways. Some organizations embrace one or more of them, defining and directing testing based on their understanding of what the ideas mean. They insist that testing be done a specific way, and that practices which have “always been followed” must not be disturbed, because those practices supposedly ensure maximum effectiveness and the best possible results.
In some ways, I get it. There are loads of reasons to not change how things are done. People are comfortable with how the organization has worked in the past. Change is uncomfortable and can sometimes be messy.
I understand why people might like having detailed scripts that lay out precise steps to be followed. I also understand why the “expected results” are so important to many organizations. I get the comfort of standardized approaches. I really do.
For several years I was quite comfortable with them. I worked in shops where these were the norm. Detailed scripts direct people’s efforts. They cut down on distractions. They drive the focus of work to be done. But except in narrow instances — for example, those working under strict contractual requirements — I have come to firmly oppose using them.
Problems with Scripted Testing
Proponents of detailed test scripts often argue that the scripts help train new people in how the software functions, and that they make sure everyone understands what is expected to happen. Both arguments center on the “expected results” format for test cases and test steps.
My concern is that if the “expected results” state one thing, the people executing the steps usually find themselves focusing only on that. This is not a criticism of the people doing the work. It is a criticism of the belief that such a narrow focus can determine the success or failure of the work.
We may very well get the precise “expected result” to appear on the screen or in the column in the database. But did anything else happen? Were there any error messages in the application or system logs resulting from this test? Were there any odd behaviors that went unnoted because the “expected result” happened?
One obvious solution is to have people look at broader aspects than the documented “expected results.” Except this sets up a conundrum around what is and is not part of what should be looked for. Many of us would assert that the tester should track down apparently odd behavior out of professional responsibility and due diligence, and should take the time to investigate what is going on. I absolutely agree.
If the team is to reduce variation and execute the scripts as written, is that likely to happen? If tests have explicit time frames for execution, how likely is it a tester will do anything to impact those targets? What is the reasonable outcome if something odd is seen but the “expected result” is met? People tend to ignore these errors if they feel constrained from investigating them.
Of course, management can insist that testers be “professional” and investigate off-script issues. Will testers follow those instructions if their performance measurements drop? If part of their performance review and their pay/bonus is tied to those measures, can we really expect them to branch out from documented steps?
Teams relying on “click and get reports” tools for functional or UI testing are set up for a similar problem. Without careful investigation of the results in tool and application logs, problems not accounted for in the expected results will be missed. An error must be anticipated before the automation code can look for it.
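To make that last point concrete, here is a minimal sketch (in Python, with invented function and log contents purely for illustration) contrasting a check that verifies only the documented expected result with one that also scans an application log for anomalies the script never anticipated:

```python
import re

def check_expected_only(actual, expected):
    """A typical scripted check: pass/fail on the documented expected result alone."""
    return actual == expected

def check_with_log_scan(actual, expected, log_lines):
    """The same check, but it also flags log lines the script did not anticipate."""
    anomalies = [line for line in log_lines
                 if re.search(r"ERROR|WARN|Traceback", line)]
    return actual == expected, anomalies

# Hypothetical application log from the test run.
log = [
    "INFO  request received",
    "ERROR NullPointerException in ReportBuilder",
    "INFO  response sent 200 OK",
]

# The narrow check passes even though the application logged an error.
print(check_expected_only("200 OK", "200 OK"))  # True: "expected result" met

# The broader check still passes, but surfaces the anomaly for investigation.
passed, anomalies = check_with_log_scan("200 OK", "200 OK", log)
print(passed, anomalies)
```

The point of the sketch is not the pattern-matching itself but the design choice: unless the automation deliberately looks beyond the expected result, everything outside it is invisible.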
A New Path
I tried these techniques for several years and have seen their shortcomings firsthand. I’ve explored the consequences of these ideas. Early in my career I tried to make them work. Believing the problem was that I was not doing things “right,” I doubled down and worked harder. Then I began looking for ideas that might work instead.
It has become a decades-long search that has led me down many paths and through many possibilities. I don’t believe that the ideas described above work as people say they do. To be honest, I’ve found few approaches that work every time in every situation. This led me to another idea, one I have been pursuing for several years, and so far it generally holds up. It is not an academic testing definition, but it is a reasonable working definition:
Software Testing is a systematic evaluation of the behavior of a piece of software, based on some model.
Don’t look exclusively for bugs. Don’t look exclusively for proof of requirements. Instead, look at the software’s behavior.
If we have a good understanding of the intent of the software, we can develop useful models around that understanding. We can consider logical flows people using the software may use to meet their needs. Noting what the software does, we can compare that behavior against the expectations of our customers.
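As a toy illustration of what “based on some model” can mean in practice, here is a sketch (the login flow, states, and events are all hypothetical) of a tiny behavioral model: a table of transitions we believe the software should allow, and a walk that replays an observed session against it, flagging anything the model did not expect:

```python
# A toy behavioral model: transitions we expect a hypothetical login flow to allow.
MODEL = {
    "logged_out": {"login_ok": "logged_in", "login_fail": "logged_out"},
    "logged_in":  {"logout": "logged_out", "timeout": "logged_out"},
}

def walk(start, events):
    """Replay observed events against the model; collect any the model forbids."""
    state, surprises = start, []
    for event in events:
        allowed = MODEL.get(state, {})
        if event in allowed:
            state = allowed[event]
        else:
            surprises.append((state, event))  # behavior the model did not expect
    return state, surprises

# An observed session: a "logout" while already logged out is a surprise
# worth raising with the product owner.
final, surprises = walk("logged_out", ["login_ok", "logout", "logout"])
print(final, surprises)
```

The surprises are not automatically bugs; they are exactly the kind of observed behavior versus expected behavior gaps that start the conversations described next.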
These observations can serve as starting points for conversations with product owners and managers. They can incorporate documented requirements along with the product owners’ expectations and expertise. This means the project team can choose the path they wish to examine next based on its significance and the likelihood of providing information of interest to the stakeholders.
Instead of following a rote checklist, testers working with product owners, the development team, and other stakeholders can compare their understanding of the software and ask the crucial question: “Will this software meet our needs?” Comparing system behavior with the documented requirements means testers can initiate and participate in discussions around the accuracy of the requirements (do they match the expectations?) and the way those requirements are communicated. Thus, testers help reduce the chance of misunderstanding. This helps Business and Requirements Analysts do a better job writing requirements and positions the organization for conversations around how to make both the requirements and the software better.
By changing what we are looking for from specific items in a checklist to looking at overall behavior with specific touch points, we change testing from an activity that merely checks a box, to something very different.
If you, like me, have been told what “good testing” or the “role of testing” is from someone who has not done software testing or developed actual working software in 20 years, if ever, I have an invitation:
- Join me in rejecting grand pronouncements with no real meaning, like “add value.”
- Walk away from limiting definitions like “find bugs” and “verify requirements.”
Instead, make the conversation about what we can do as testers:
- We can identify behavior.
- We can find limits and characteristics of the system.
- We can compare these findings with expectations, documented requirements, and shared understanding.
- We can bring awareness of the reality of the software to the development team. We can bring awareness of the state of the software to leadership and organizational management.
- We can clarify ideas and information that are unclear now. We can shine light into areas that are murky. We can bring truth and fact.
That is what software testers do.
That is the value in software testers.