AI and Continuous Testing

Testing professionals have succeeded: Testing is finally recognized as a strategic capability in forward-looking organizations.

One of the consequences of that achievement is that testers’ responsibilities have exploded.
Software increasingly runs the world, so faults are mission-critical and even life-critical. At the same time, we’re accumulating test-result datasets so large that they exceed humans’ ability to digest them thoughtfully.

That’s where artificial intelligence comes in. Complex applications for mobile devices and the internet of things, as well as more traditional web and desktop platforms, demand computing power to understand their test measurements. We’re well past a single dimension of “good enough to ship” vs. “hold for another round of corrections.” We need AI to help us process it all.

Scope for Automation

“Test result” can mean several different things, so it’s crucial to distinguish the meanings carefully.

First, the tooling that relates tests to business requirements is improving. While a range of traceability products have long aimed to guarantee that all requirements are tested, that tracing now benefits from machine learning (ML). This helps make it more likely that everything that deserves to be tested actually is tested.
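As a rough illustration of the idea, and not a description of any particular product, tracing can be framed as matching requirement text against test descriptions and flagging requirements with no sufficiently similar test. The requirements, test descriptions, and the 0.2 threshold below are invented for illustration.

```python
# Minimal sketch: flag requirements that have no sufficiently similar test,
# using TF-IDF cosine similarity (scikit-learn). All data and the 0.2
# threshold are invented for illustration; real tools use richer models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

requirements = [
    "User can reset a forgotten password via an email link",
    "Invoices can be exported as PDF documents",
]
test_descriptions = [
    "Verify the password reset email link signs the user in",
    "Check that the exported invoice PDF contains all line items",
]

matrix = TfidfVectorizer().fit_transform(requirements + test_descriptions)
req_vecs, test_vecs = matrix[: len(requirements)], matrix[len(requirements):]

for requirement, scores in zip(requirements, cosine_similarity(req_vecs, test_vecs)):
    best = scores.max()
    status = "covered" if best > 0.2 else "NO MATCHING TEST"
    print(f"{requirement}: best match {best:.2f} -> {status}")
```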

Next, test generation is increasingly automated. ML applies nicely to runs during development and can yield tests that work across different platforms and are only loosely coupled to a particular user interface (UI). The same ML can rank tests so that higher-value ones receive more investment.

Tests may have a higher value for several reasons (a rough scoring sketch follows this list):
● Patterns of customer usage
● A history of “hot spots” where a particular function has been delicate during development
● Clarity in suggesting related tests not in an existing test suite
● Power to reveal defects specific to a particular platform or functional area
● Detection of errors human testers often miss
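
To make the ranking idea concrete, here is a minimal sketch that scores tests against factors like those above. The weights and per-test signals are invented for illustration; a production tool would learn them from project data rather than hard-coding them.

```python
# Minimal sketch of value-based test ranking. The weights and per-test
# signals are invented; a real tool would learn them from project history.
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    usage_frequency: float   # how often the covered feature is used (0..1)
    failure_rate: float      # share of past runs that failed (0..1)
    platform_specific: bool  # reveals defects unique to one platform

WEIGHTS = {"usage": 0.5, "failures": 0.4, "platform": 0.1}

def value(test: TestCase) -> float:
    return (WEIGHTS["usage"] * test.usage_frequency
            + WEIGHTS["failures"] * test.failure_rate
            + WEIGHTS["platform"] * (1.0 if test.platform_specific else 0.0))

suite = [
    TestCase("checkout_happy_path", 0.9, 0.05, False),
    TestCase("video_scrollbar_android", 0.3, 0.40, True),
]
for test in sorted(suite, key=value, reverse=True):
    print(f"{test.name}: value {value(test):.2f}")
```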

Test maintenance offers at least a couple of distinct areas in which to apply advanced automation. “Self-healing” automated tests are available in several tools now, and they will likely become standard before long.
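The core idea is easy to sketch even without ML: record alternative attributes for each element and fall back to them when the primary locator breaks. Commercial self-healing tools layer learning on top of this, for example by re-ranking candidate locators from past runs. The Selenium selectors below are invented for illustration.

```python
# Minimal sketch of the fallback idea behind "self-healing" locators.
# The selectors are invented; real tools learn and re-rank candidates.
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.by import By

FALLBACK_LOCATORS = [
    (By.ID, "submit-order"),                       # primary locator from authoring time
    (By.NAME, "submitOrder"),                      # alternative attributes recorded as backups
    (By.XPATH, "//button[text()='Place order']"),
]

def find_with_healing(driver, locators):
    """Try each locator in turn; report when a fallback 'heals' the test."""
    for strategy, value in locators:
        try:
            element = driver.find_element(strategy, value)
            if (strategy, value) != locators[0]:
                print(f"Healed: located element via {strategy} = {value!r}")
            return element
        except NoSuchElementException:
            continue
    raise NoSuchElementException(f"No locator matched: {locators}")

# Usage (assumes a running browser session):
# driver = webdriver.Chrome()
# find_with_healing(driver, FALLBACK_LOCATORS).click()
```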

AI is also a natural candidate to handle thorny recognition problems in automated testing. A defining characteristic of AI is that it learns.

Consider that an application that appears on more than one platform — an inevitable goal for any successful application — will have a responsive UI, effectively meaning it doesn’t look the same on different platforms. Many humans migrate with difficulty between iPhone and Android smartphones, let alone more divergent user interfaces. Automated tests more and more need the AI power of image recognition and optical character recognition (OCR) just to exercise common UIs. Once AI is applied to widget recognition, though, an example test can be maintained in terms closer to “the video scrollbar” rather than “ID #371,” with all the advantages that brings.
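As a loose illustration of the difference, the sketch below locates a widget by its visible label through OCR rather than by an internal ID. The screenshot file, label, and library choice (pytesseract with Pillow) are assumptions for illustration, not how any specific product works; commercial tools layer ML image recognition on top of this basic idea.

```python
# Minimal sketch: find a widget by its visible label via OCR instead of an
# internal ID. File name, label, and libraries are assumptions for illustration.
from PIL import Image
import pytesseract

def find_label(screenshot_path: str, label: str):
    """Return the (x, y) center of the first OCR word matching `label`, or None."""
    image = Image.open(screenshot_path)
    data = pytesseract.image_to_data(image, output_type=pytesseract.Output.DICT)
    for i, word in enumerate(data["text"]):
        if word.strip().lower() == label.lower():
            x = data["left"][i] + data["width"][i] // 2
            y = data["top"][i] + data["height"][i] // 2
            return x, y
    return None

# Usage: click "Checkout" wherever the current platform happens to render it.
# position = find_label("home_screen.png", "Checkout")
```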

Finally, increasingly complex test measurements demand machine help to manage. A simple application might have a test suite whose result is as straightforward as “85 out of 85 items pass.” In such a case, it’s easy to draw the business conclusion that the application is ready for end users. Complex modern products, though, are likely to yield a result such as, “Version 5182 scored 93.4 over the 118,526 tests completed to this point.” Does that mean it’s time to deploy to production, correct errors, or run more tests? Humans can’t reliably operate at that scale, so we need help from smart tools.
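A crude sketch of how a tool might distill such a result set into a recommendation: compare the current aggregate score against recent builds and against release-blocking failures. The thresholds and figures are invented for illustration; real tools apply learned models rather than fixed cutoffs.

```python
# Crude sketch: turn aggregate scores and blocking failures into a recommendation.
# All thresholds and data are invented for illustration.
RECENT_SCORES = [93.9, 94.1, 93.8, 93.4]   # per-build aggregate scores, oldest first
CRITICAL_FAILURES = 2                      # failures tagged as release-blocking

def recommend(scores, critical_failures, floor=92.0, drop_limit=1.0):
    current = scores[-1]
    baseline = sum(scores[:-1]) / (len(scores) - 1)   # average of earlier builds
    if critical_failures > 0 or current < floor:
        return "correct errors"
    if baseline - current > drop_limit:
        return "run more tests to isolate the regression"
    return "deploy to production"

print(recommend(RECENT_SCORES, CRITICAL_FAILURES))
```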

Trends in agile methodology and the business role of software intensify the need for AI in testing. Just a few years ago, AI and testing were linked only in academic studies. Now, though, as testing and development more broadly shift left — and businesses recognize that they need good-enough software now, rather than perfected software in the indefinite future — testing must take advantage of AI to produce actionable results on time.

Delivery cycles will continue to shorten, and product expectations will only grow more and more complex. A test department that plans to rely on traditional playback and scripted test tools is planning not to keep up.

Advice for Decision-makers

Tools that rely on ML are invariably hard to evaluate. Most organizations can barely keep up with the expertise involved in the configuration of the traditional testing tools they already rely on. How can the same staff analyze the ML claims sure to appear in 2019 and decide on specific tools and technologies to adopt?

There’s no easy answer. Careful evaluation of the sorts of products already visible on the horizon is a dissertation-level undertaking, and few organizations budget that level of effort. The best result is likely to come when the organization identifies two or three challenges specific to its software development lifecycle and works with the vendor or tool community to prototype ML’s payback in those particular areas.

AI works by learning, and any fair trial of an AI-based tool must supply it substantial knowledge to learn and apply.

To explore the features of Ranorex Studio risk-free, download a free 30-day trial today; no credit card is required.

About the Author

Cameron Laird is an award-winning software developer and author. Cameron participates in several industry support and standards organizations, including voting membership in the Python Software Foundation. A long-time resident of the Texas Gulf Coast, Cameron's favorite applications are for farm automation.
