Who Tests the Testing

Mar 10, 2020 | Best Practices, Test Automation Insights


Juvenal’s Satires has a line that often gets pulled into contexts that may (or may not) have much to do with its original intent: “Quis custodiet ipsos custodes?”

It is generally translated as “who watches the watchmen” or “who guards the guardians.” Never mind that a closer translation would run “who guards the guards themselves,” but there you have it. (Four years of Latin has some benefits.)

In the Satires, the line appears in a passage where a fellow doubts the reliability of women, whether in his own family or in general. He laments that anyone set to watch them (either to control or to protect) could themselves be corrupted. The watchers would need watching.

How can we know our testing is “good”?

Software testing can be complex. I’ve found that complex things get glossed over by people who are not deeply engaged in the activity. These can be managers, developers, analysts or other testers.

We often rely on vague notions about test plans and test scripts to demonstrate that the testing is being done well. We might also rely on “automation” to demonstrate that the testing is good.

How do we really know the testing is good? How confident are we in the effectiveness of our software test efforts? Are they doing what we think and hope they are?

Even when done well, testing as an independent activity is problematic. People working in an environment that expects or requires formal test plans for each project spend loads of time preparing documentation based on their understanding of how the system functions.

If they are moving into a system area that no one has tested before, they might find no testing documentation at all. They might also have little meaningful or accurate documentation about how the pieces work, how they interact with each other, and how the system behaves with external systems.

Combine these gaps with a common attitude of “everything you need to know is in the documentation.” Then mix in the expectation that testers develop tests strictly from the specifications, or from the “story cards” if the shop practices some form of “Agile.”

It is impossible to develop meaningful tests in these conditions, particularly under time constraints such as “testing must be done in a two-week sprint” because the shop is “Agile.”

No meaningful testing work can happen this way. It becomes rote “check the box” work, neither insightful nor informative to the organization.

Is it any wonder that in situations like this, any attempt at thoughtful, deliberate testing is set aside in favor of a test automation tool? Writing code to drive the automation at least gives people the experience needed to say they have “worked in” automation.

What about test automation?

Test automation faces some of these same serious challenges. If information is limited to documented requirements and system documentation, and conversations with system or application experts are restricted, the automated tests will be like the “manual” tests described above: narrow in focus and shallow in depth.

If the intent of test automation is to speed testing and make it efficient by executing the lower-level, mundane tests, will those narrowly focused tests do what needs to be done? Will the portions of the system that need regular testing actually get tested?

Test automation comes with the same burdens as any other piece of software. Does the code do what we expect it to do? Does it do what we need it to do?

How carefully is the automation code itself tested? It is prone to the same problems (bugs) as any other code. How certain are you that it will work as needed?
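
To make that concrete, here is a minimal sketch, with hypothetical names and assuming a pytest-style runner. The first check “passes” every time because it compares the function’s output to itself; only the corrected assertion exposes the bug in the product code.

    # Hypothetical product code with a deliberate bug: it subtracts the raw
    # percent value instead of computing a percentage.
    def apply_discount(price, percent):
        return price - percent

    def test_apply_discount_broken_check():
        # Bug in the *test*: comparing the result to itself, so this
        # assertion can never fail, no matter what the product code does.
        assert apply_discount(50.0, 10) == apply_discount(50.0, 10)

    def test_apply_discount_real_check():
        # The assertion we actually meant: 10% off 50.0 is 45.0.
        # Run under pytest, this fails and exposes the product bug.
        assert apply_discount(50.0, 10) == 45.0

A test suite full of checks like the first one looks green and tells you nothing, which is exactly why the automation code deserves the same scrutiny as the product code.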

As with any code, over time, some functions will become redundant as new functionality is added or behavior changes. Will the tests that exist for those functions be maintained appropriately?

In my experience, that is not always the case. The result is failed tests that get ignored or written off as “that always fails.” People stop paying close attention to the test results, and legitimate failures get missed.
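
As a small, hypothetical illustration (again assuming pytest): once a stale check is blanket-skipped rather than maintained, a real regression in the same area produces no signal at all.

    import pytest

    # Hypothetical current product behavior: tax is now applied at checkout.
    def checkout_total(price, tax_rate):
        return round(price * (1 + tax_rate), 2)

    @pytest.mark.skip(reason="always fails -- written for the pre-tax checkout")
    def test_checkout_total_stale():
        # Stale expectation from an earlier release, skipped rather than fixed.
        assert checkout_total(10.00, 0.08) == 10.00

    def test_checkout_total_current():
        # The maintained check that should exist instead of the skip above.
        assert checkout_total(10.00, 0.08) == 10.80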

What about testing in agile environments?

Agile practices introduce their own issues and problems for testing. The point of Scrum, which is what many people mean when they say “Agile,” is to deliver a working piece of functionality at the end of each sprint. Where it gets interesting is when teams decide that the work to be done is development alone, and that testing can happen later, with “later” being a different sprint.

This means the idea of delivering a working piece of software is rather lost in translation.

Many organizations will respond that “we do automation” to test software and test it quickly. Great! Is it done in the same sprint as the development work? Are you wrapping up “stories” and handing them off tested and ready to go, confident in their correct behavior?

The times I have been told “We do TDD, then automate those tests” are too many to count. I get it. TDD has “test” in the name. Still, it is a design and development technique meant to speed development. It produces, effectively, a set of unit tests, with the expectation that if the unit tests pass, the goal is met.

Automating that is fine. At least, it is fine if someone tests the incremental changes between sprints.

Does the work of the last two or three sprints, the “incremental improvements,” all work together? Are there issues that a unit test would never find? Have you exercised the environments and configurations the software will actually run in?

How about error conditions? Have those been exercised? If not, do you really have a product you could safely deliver to customers?
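
Here is a minimal sketch of that gap, with hypothetical names and assuming pytest. The stubbed unit test keeps passing after the real collaborator changes its contract; only a test against the real thing, including its error path, catches the break.

    from unittest import mock

    import pytest

    # Hypothetical code under test: it assumes the service returns a "name" key.
    def get_username(service, user_id):
        return service.fetch(user_id)["name"]

    def test_get_username_unit():
        # TDD-style unit test: the collaborator is stubbed, so this keeps
        # passing no matter what the real service does.
        service = mock.Mock()
        service.fetch.return_value = {"name": "ada"}  # last sprint's contract
        assert get_username(service, 42) == "ada"

    class RealService:
        def fetch(self, user_id):
            if user_id < 0:
                raise ValueError("user_id must be non-negative")
            # This sprint's change: the key was renamed to "username".
            return {"username": "ada"}

    def test_get_username_integration():
        # Only a check against the real collaborator catches the break;
        # under pytest this fails with a KeyError.
        assert get_username(RealService(), 42) == "ada"

    def test_get_username_rejects_bad_id():
        # Exercising an error condition, not just the happy path.
        with pytest.raises(ValueError):
            get_username(RealService(), -1)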

Depending on how often the organization releases software, this may not be a big deal. It seems to me, however, that if a team is not doing these things, the broader organization has challenges beyond testing. It also has challenges around what “Agile” means.

In companies where I’ve seen this quasi-Agile-ish approach, they have also had challenges with how to deliver working software.

So, how do we test the testers?

And so we come to the rub of the problem, the center of the conundrum.

How do we know that testers are doing things well? If we reward them for stopping significant problems from being released into production, the best we can do is reward them for writing bug reports. This can be painfully counterproductive, as bug reports are often seen as criticism of the people who wrote the code (and testers are part of the same development team).

What if the people doing the testing are engaged with the people who wrote the piece of code they are testing? What if they can send a message to them over a chat tool, or turn around in their chair, or look to their left or right and ask a question?

When testing is seen as a step removed from making the product, testing will always be a “cost.” When testing is integrated in making the product, it is simply “making the product.”

The most effective testers I know are not the ones who raise the most interesting bugs or write the most detailed bug reports. They are the ones working with their teams, side by side as equals, making sure the bug reports don’t get written in the first place.

I’ve come to believe that this, then, is the only meaningful “test” of testing: the delivered product has no notable, identifiable problems found by its customers.

This is also the only meaningful measure of software.

The testers must test themselves.

Everyone is a tester.

