A testing department’s true goal is to spark change, not just to write lists of errors. Clear thinking about what not to test helps liberate resources for more thorough testing in areas that make a difference. Here are five kinds of tests you should reconsider.
Tests of past requirements
The value of tests is that sometimes they fail. Tests that always pass provide little information.
Most of us have seen tests that fail from time to time, but only because requirements changed and notice of the update didn’t arrive until after a test squawked. Tests like that invite everyone to go numb, and there’s no value in that.
Here’s an example: Early in a product’s history, there is a requirement for a display field that is expressed as an absolute position. The first release passes all its tests. Soon, the position shifts a few pixels to accommodate other, newer, screen elements. The test fails, is adjusted to fit the new layout, passes, the release goes out. The cycle continues.
It continues to no one’s benefit, though. While tests can sometimes be rightly written for a physical location, this is not a hard and fast rule. When it does not work, the result is repeated, tedious updates each release that bring no additional value.
In these cases, consider:
- Remove the test entirely. When maintaining it costs more than the benefit it provides, get rid of it.
- Rewrite it to be more robust. For example, the test might verify that a widget is on the screen, is visible, or has specific content. “More robust” may also mean using smart automation tools, such as Ranorex Studio, that rely on more reliable methods to identify UI elements.
Of course, you can always disable the test until product management finds a better communication channel that allows testing to track requirements with greater precision.
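As a sketch of the difference, consider a hypothetical `Widget` record (the names and fields here are illustrative, not from any particular framework): a brittle test pins a widget to exact pixel coordinates, while a more robust test checks only what the requirement actually cares about, such as presence, visibility, and content.

```python
from dataclasses import dataclass

@dataclass
class Widget:
    name: str
    x: int          # absolute position, likely to shift between releases
    y: int
    visible: bool
    text: str

def find_widget(screen, name):
    """Return the first widget with the given name, or None."""
    return next((w for w in screen if w.name == name), None)

# A layout tweak moved the "total" field a few pixels in this release.
screen = [Widget(name="total", x=108, y=52, visible=True, text="$42.00")]

def brittle_test(screen):
    # Pinned to absolute coordinates: fails on every layout change.
    w = find_widget(screen, "total")
    return w is not None and (w.x, w.y) == (100, 50)

def robust_test(screen):
    # Asserts only what the requirement cares about.
    w = find_widget(screen, "total")
    return w is not None and w.visible and w.text == "$42.00"

print(brittle_test(screen))  # False -- squawks at the pixel shift
print(robust_test(screen))   # True  -- still verifies the real requirement
```

The robust version survives cosmetic layout changes and fails only when the behavior users depend on actually breaks.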
Continuing wasteful tests isn’t a virtue; it’s a kind of theft because it steals your attention away from more rewarding efforts.
Inconsequential unit tests
It is sometimes tempting to develop unit tests for existing code that does not have any. This might seem like a good idea; however, without a thorough, deep analysis of the code before starting, it is easy to be pulled well away from your primary purpose: developing unit tests for new code.
While it may be tempting to write unit tests for the big ball of mud, don’t do it. Just don’t. Don’t create unit tests for existing, legacy code without a corresponding effort to refactor it.
One of the great values of unit tests is their ability to help developers consider implementations ahead of time. The “big ball” already exists, and its muddy architecture won’t change. Don’t pretend to write unit tests for code whose main value is not clearly understood.
This isn’t an invitation to nihilism. If you ever suspect your tests are headed toward irrelevance, don’t give up; figure out what else needs to change. Put together a proposal that addresses the true need — the one whose correction will enable your tests to become meaningful.
Test vs. inspection
Formal code inspections can be powerful tools for significantly reducing the odds of problems in a piece of software. By examining each statement and each line with great diligence, inspectors can uncover problems missed in simple unit tests.
Inspections can also strengthen function or integration tests by highlighting areas that appear to be “correct” but may not behave as expected. Idiomatic expressions, in particular, are sometimes better candidates for inspection than for some levels of unit or function testing.
In some situations, software inspection is more cost-effective than testing. Often, inspections used with various levels of testing make a potent combination to drive test quality.
Testing throwaway code
There is a school of thought that suggests “disposable programs” often survive long after their creators expect them to. There are times when code developed as a prototype can turn into the framework for a portion of the application. In these cases, the code often bears little resemblance to the original, “throwaway” code.
If a piece of code is intended for a very specific, short-term purpose, it may not need to be tested, at least not rigorously. Even so, the idea of “it’s throwaway, don’t bother testing it” can be dangerous. Code is still code; if imprecision in it can allow unusual behavior, it will need to be tested.
Something simple, like a quick report, might not need the in-depth examination given to code that generates test data. If in doubt, presume you will need the program again, and that testing it will tell you whether it does what it is expected to do.
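Even for a throwaway script, a minimal smoke test is cheap insurance. As an illustration (the report function and its data are hypothetical), a few assertions written in seconds document the expected behavior in case the “disposable” script turns out to have a second life:

```python
# A hypothetical one-off reporting helper: summarize sales by region.
def sales_report(rows):
    """Return a dict mapping region -> total amount."""
    totals = {}
    for region, amount in rows:
        totals[region] = totals.get(region, 0) + amount
    return totals

# Smoke test: quick to write, and it tells you later whether the
# script still does what it claims.
rows = [("east", 100), ("west", 50), ("east", 25)]
report = sales_report(rows)
assert report == {"east": 125, "west": 50}
assert sales_report([]) == {}
print("smoke test passed")
```

If the script really is discarded, little is lost; if it survives, the assertions above are the seed of a real test suite.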
Tests you can delegate
Unusual situations sometimes arise in which manual testing is particularly cheap. If you can pass a fraction of your testing responsibilities to a cost-effective alternative, do it, and turn your attention back to the higher-value testing you do best.
Engineers judge trade-offs. Every test has not just benefits, but also costs. You’re not a janitor, sweeping up every crumb that programmers might have accidentally dropped on the floor. You’re an investor, looking to apply your high-value testing ability to build the best possible portfolio of tests you can write. Treat your work like the precious commodity it is.