Today’s software has progressed from the long procedural routines of its early history. Gone are the days of gotos and line numbers, at least outside of embedded systems and drivers. Multiple generations of programmers now organize their software with classes, modules, objects, and design patterns; take advantage of templates; and build ever smaller microservices or larger sub-component architectures.
This trend toward software componentization, together with the sharing of open source libraries, frameworks, and other tools and dependencies, has greatly expanded what teams can build. It has enabled a myriad of multi-tenant cloud-based services and platforms, ever more complex machine learning, and Continuous Delivery. These components shrink ever smaller and are combined and reused in diverse and unexpected ways, and the results are ultimately exposed to the customer through the User Interface. That Graphical User Interface is the customer’s first point of contact, and it is often where the training wheels fall off and the bike falls over.
Regression testing is one way to make sure the bike stays upright and steady. Prior to release, companies spend a great deal of time and effort checking the software for problems, probably more than once: for example, before and after bug fixes. This end-to-end work simulates the customer using the software, and it tends to be so expensive that the company doesn’t do it very often.
Enter GUI Test Automation
It’s simple enough to write a computer program, or use a tool, to open a web page, click a link, type in some numbers, press submit, and check that the total is correct. On the surface, that is all GUI Test Automation is — running the regression-test process.
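The shape of such a check can be sketched without a real browser. The example below uses a hypothetical `CheckoutPage` page object over a `FakeDriver` stand-in; a real suite would drive an actual browser with a tool such as Selenium WebDriver, but the structure of the test — fill in values, submit, assert on the result — is the same.

```python
class FakeDriver:
    """Stands in for a browser driver: stores field values and computes a total on submit."""
    def __init__(self):
        self.fields = {}

    def type_into(self, field_id, value):
        self.fields[field_id] = value

    def click(self, button_id):
        # Simulate the application computing the order total when "submit" is clicked.
        if button_id == "submit":
            self.fields["total"] = str(int(self.fields["qty"]) * int(self.fields["price"]))

    def read(self, field_id):
        return self.fields[field_id]


class CheckoutPage:
    """Page object: wraps raw driver calls behind intent-revealing methods."""
    def __init__(self, driver):
        self.driver = driver

    def order(self, qty, price):
        self.driver.type_into("qty", str(qty))
        self.driver.type_into("price", str(price))
        self.driver.click("submit")

    def total(self):
        return self.driver.read("total")


def test_total_is_correct():
    page = CheckoutPage(FakeDriver())
    page.order(qty=3, price=5)
    assert page.total() == "15"
```

Keeping the locator details inside the page object means that when the screen layout changes, only one class needs updating, not every test.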
Human testers can be particularly good at finding interactions that “just look wrong.” The simplest of testing tools are not able to do that, but advanced tools can capture portions of the screen and use image comparison to make sure today’s results match yesterday’s last good run.
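The core of such an image comparison can be sketched in a few lines. This is a simplified model, assuming images arrive as flat lists of 0–255 grayscale pixel values; real tools compare PNG screenshots the same way, usually with masking for regions that legitimately change (dates, ads, and so on).

```python
def images_match(baseline, current, tolerance=10, max_diff_fraction=0.01):
    """Return True when 'current' is visually close enough to 'baseline'.

    A pixel counts as different only when it differs by more than 'tolerance',
    and the run passes as long as no more than 'max_diff_fraction' of pixels differ.
    """
    if len(baseline) != len(current):
        return False  # different dimensions: an obvious layout change
    differing = sum(1 for b, c in zip(baseline, current) if abs(b - c) > tolerance)
    return differing / len(baseline) <= max_diff_fraction
```

The two thresholds are what keep the check useful: without them, anti-aliasing and font-rendering noise would fail every run.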
Actually verifying a feature remains a human task. But once a path is free of blocking bugs, test automation can make sure it does not change over time. This can be incredibly powerful for guarding against regression. Specifically, it creates the capability to release more often. It can find critical, workflow-impacting problems earlier, and because GUI Automation only needs attention when an error or failure occurs, it can reduce the cost of regression testing and give a team great confidence to make changes quickly.
Reduce the Cost of Regression Testing
Many Scrum teams limit new work in progress near the end of a sprint or release cycle, and a team can easily spend the last three days of a two-week sprint on regression testing and bug fixes. That is thirty percent of the sprint’s ten working days, and it does not include unit testing, behavioral or functional testing, or security testing. Unlike functional testing, which can uncover new uses of the software and usability improvements, regression testing adds no new value. It is confirmatory, trying to ensure that nothing key to the user experience has been adversely impacted.
In “sprint” terms, regression testing is running in place: it is like the final rehearsal before a big stage performance. Because regression testing focuses so heavily on prior product experience, its repetition is tiring, and being asked for fresh test ideas at the end of an already long development cycle can make the team feel like they are spinning their tires in the mud. It does not push the product experience forward.
A solid regression-test tooling approach, combined with modern engineering approaches, can reduce the cost of regression testing from days (or weeks) to moments (or minutes). The larger the product or platform, the greater the potential value that can be gained.
So do we automate all the regression testing so we can bother the humans a lot less? Not quite, but once the key and critical regression risks are covered by automation, human regression testing can be limited and focused. And once the remaining human regression testing is small, it becomes possible to release more often.
Once regression testing is happening automatically, it no longer makes sense to queue up huge batches of changes to save costs. Instead, the team can deploy with every new feature – when it and its dependencies are ready. Web-based software is especially amenable to multiple deploys per day.
There is more to deploying frequently than bragging rights – there is direct business value. Scrum teams that can deliver in two weeks improve the customer experience faster, putting working software into the hands of customers months before waterfall teams that do “big bang” integration. Test automation first removes the pain from the two-week cycle, then enables the continuous cycle – with less risk and more understandable scope. Teams that can deploy multiple times per day get even more value into the hands of customers.
More importantly, quick releases make it possible to conduct quick experiments. Teams can try new features for a day (perhaps for a percentage of customers), continuing with the feature if it drives customer engagement. This means the company can experiment with ten features, picking the most popular two, all for about the cost of a “requirements document” under the previous system.
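Running a feature “for a percentage of customers” is usually done with deterministic bucketing. The sketch below is a minimal version, with hypothetical names; hashing the user ID together with the feature name yields a stable bucket from 0 to 99, so the same user always sees the same variant and each experiment rolls out independently of the others.

```python
import hashlib

def in_experiment(user_id, feature, percentage):
    """Deterministically decide whether a user sees an experimental feature.

    'percentage' is the rollout size (0-100). Because the bucket is derived
    from a hash rather than a random draw, the decision is stable across
    requests and servers without any shared state.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in the range 0..99
    return bucket < percentage
```

Ramping an experiment from 1% to 100% then requires only a config change, not a deploy.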
Discover Problems Earlier
Once the tooling exists and is hooked into the Continuous Integration (CI) pipeline, it can run on every build. That means many problems will be found earlier. The famous cost-of-change curve demonstrates that the earlier a defect is found in calendar time, the cheaper it is to fix.
As a simple example, consider a bug found one hour after it is created, where the creator of the bug is automatically emailed by the CI system. The creator will remember the changes they made, can look at the difference in version control, and can easily find and fix the bug.
Delay that regression test run by two weeks, and we now have hundreds of changes that could have caused the problem. The creator of the bug is now unknown and is most likely working on something else. A human tester needs to find the defect, test around it to see if it is a symptom of something larger, document the bug, and then argue over whether it should be fixed. Then the bug is assigned to a programmer who has to reproduce it, debug it, and make the fix, which in turn needs to be retested by a tester. All of that time is lost opportunity cost.
Consider instead an approach where the software is continually tested. Devs have unit and functional tests running in the background as they save changes locally. Then, when it comes time to create a pull request and bring the new feature into source control, the CI system or version control system can fire off a process that not only runs the unit and functional tests, but also runs linters to ensure that committed code meets standards. It can then run regression tests to see if any existing features are broken, and indicate whether the feature is ready for code review and peer dev testing. All of that can be bundled into the development pipeline, and failed test cases can be observed. Changes can then be made to the current solution, or to the feature that turned out to have a flaw, all as part of a single patch before it is merged into the master code line.
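The pre-merge gate above can be modeled as a sequence of stages that fail fast, so a broken unit test never wastes time in the much slower regression stage. This is a deliberately tiny sketch; the stage names and checks are illustrative, and a real pipeline would shell out to test runners and linters.

```python
def run_pipeline(stages):
    """Run (name, check) stages in order; stop at the first failure.

    'stages' is a list of (name, zero-argument callable returning bool).
    Returns (passed, report) where report lists each stage actually run.
    """
    report = []
    for name, check in stages:
        ok = check()
        report.append((name, ok))
        if not ok:
            return False, report  # fail fast: later, slower stages never run
    return True, report
```

Ordering cheap stages first (lint, unit) and expensive ones last (GUI regression) keeps feedback fast on the common failure cases.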
After the code passes these pre-check-in tests and has been peer reviewed, it can be committed, built, and merged to local environments for exploratory testing, and it will now be covered by the Continuous Integration environment with every single build. The system knows who made the change and can email that person, or alert the team on a dashboard, when errors occur, long before the product is released to the customer. What’s more, when a bug report is written, you already have a baseline of tests that must pass before the fix can be accepted as done. This can reduce the cost to find, document, fix, and retest a problem by between 50% and 95%. All of this is possible because the scope of change is reduced to ever smaller increments, and most changes will be covered by new tests as features are checked in.
Enable More Confident Change
There are a lot of words for software with problems. Older systems, legacy systems, monolith systems, and “big balls of mud” are just a few of the names. As the names imply, a change in one place can create unpredictable side effects. Programmers working in these systems are very careful to make small, surgical changes in just one place. These changes generally make the system just a little bit worse, making the next programmer even more reluctant to change it further. Over time, the system slowly degrades, increasing the need for testing and continually dragging down the pace of development.
Until the software has automated tests around it.
Once the system has enough tests to detect most failures relatively quickly, the programmers can start “merciless refactoring”, improving the design of existing code. Merciless refactoring makes future change cheaper instead of more expensive, improves morale, helps retention, and can even unlock new, emergent features. For example, once the search code is moved to a single place, it can be covered by an API and easily reused in a mobile application — instead of rewritten. Ideally, refactoring is part of the normal development workflow, but teams that are new to automation will need time to warm up and gain confidence.
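The search example above can be illustrated with a toy sketch, using hypothetical names. Before refactoring, imagine two screens each carrying their own copy of the matching logic; after consolidating it into one `search()` function, the test pins its behavior, so a mobile API endpoint can now call the same function instead of re-implementing it.

```python
def search(items, term):
    """Case-insensitive substring search: now the single shared implementation.

    Previously this logic was duplicated per screen; with one copy behind a
    stable signature, a web page, an API, and a mobile app can all reuse it.
    """
    term = term.lower()
    return [item for item in items if term in item.lower()]


def test_search_behavior_is_pinned():
    # Regression tests like this one are what make the refactoring safe.
    catalog = ["Red Bike", "Blue Car", "red wagon"]
    assert search(catalog, "red") == ["Red Bike", "red wagon"]
    assert search(catalog, "plane") == []
```

The test is the safety net: with it in place, moving the logic is a mechanical, low-risk change rather than a leap of faith.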
The Whole Picture
Test tooling has a cost. There is an investment in skills, time, energy, and effort. In economic terms, it is worth a careful look at costs, timelines, and benefits. Initial attempts at GUI Test Automation may actually slow the team down in the short run. It is important to be strategic and focused about where GUI automation is applied, and to push the coverage beneath it, at the unit and API levels, as high as possible. A good way to achieve this is to set goals around the key feature paths that need GUI automation, along with goals to increase unit and API test coverage. The GUI automation can then focus on the outward-facing aspects of the system, while the easier-to-test inward pieces are covered closer to the components in which they reside.
Here are some of the things that GUI Test Tooling can do. Automated tests can reduce the size and duration of the test-report-fix-retest loop, making it possible to release much more often. If the tests run more often, the feedback will be more relevant, clear, and localized, making fixes cheaper. Once the tests exist, programmers will be confident making changes, moving faster in the knowledge that there is an automation net to catch key problems in existing features — and regression-test costs drop dramatically as well. This combination makes it possible to have experiments running in hours instead of months, which makes it possible to run dozens of experiments for the same cost.
All of this means more software releasing to happier customers earlier, with fewer defects.
No one ever said that scaling Everest was easy. It’s a little like rebuilding a classic car: you focus on rebuilding a few bits at a time, with an eye on the final goal. Those up to the challenge tend to agree that the end result makes it worth doing.