Employing AI in Continuous Testing

Sep 22, 2021 | Product Insights, Test Automation Insights


Testing professionals have succeeded: Testing is finally recognized as a strategic capability in forward-looking organizations.

One consequence of that achievement is that testers’ responsibilities have exploded.
Software increasingly runs the world, so faults are mission- and even life-critical; and we’re accumulating datasets of test results so large that they exceed humans’ ability to digest them thoughtfully.

That’s where artificial intelligence comes in. Complex applications for mobile devices and the internet of things, as well as more traditional web and desktop platforms, demand computing power to understand their test measurements. We’re well past a single dimension of “good enough to ship” vs. “hold for another round of corrections.” We need AI to help us process it all.

Scope for Automation

“Test result” means several different things, so it’s crucial to distinguish the meanings carefully.

First, the tooling that relates tests to business requirements is improving. A range of traceability products have long aimed to guarantee that all requirements are tested; now that tracing benefits from machine learning (ML), which makes it more likely that everything that deserves to be tested actually is tested.
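
To make that concrete, here is a minimal sketch of one common building block: using text similarity to flag requirements that no existing test appears to cover. The requirement texts, test descriptions, and threshold are illustrative assumptions, not any particular product’s method.

```python
# Sketch: flag requirements with no sufficiently similar test description.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

requirements = [
    "User can reset a forgotten password via email",
    "Checkout applies regional tax rates to the order total",
]
test_descriptions = [
    "verify password reset email is sent and the link works",
    "verify login rejects an unknown username",
]

vectorizer = TfidfVectorizer()
# Fit on both corpora so requirements and tests share one vocabulary.
matrix = vectorizer.fit_transform(requirements + test_descriptions)
req_vecs = matrix[: len(requirements)]
test_vecs = matrix[len(requirements):]

similarity = cosine_similarity(req_vecs, test_vecs)
THRESHOLD = 0.2  # tuning this cutoff is where human judgment comes in
for req, scores in zip(requirements, similarity):
    if scores.max() < THRESHOLD:
        print(f"Possibly untested requirement: {req!r}")
```

A production traceability tool would use far richer models, but the shape of the problem (matching requirement language to test language) is the same.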

Next, test generation is increasingly automated. ML applies nicely to test runs during development and can yield tests that work across platforms and are only loosely coupled to a particular user interface (UI). The same ML can rank tests so that higher-value ones receive more investment.

Tests may have a higher value for several reasons (a scoring sketch follows this list):

● Patterns of customer usage
● A history of “hot spots” where a particular function has been delicate during development
● Clarity in suggesting related tests not in an existing test suite
● Power to reveal defects specific to a particular platform or functional area
● Detection of errors human testers often miss
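
As a concrete illustration, the sketch below blends such signals into a single priority score. The signal names and weights are assumptions for illustration; a real tool would learn them from historical results rather than hard-code them.

```python
# Sketch: rank tests by a weighted blend of value signals.
from dataclasses import dataclass

@dataclass
class TestRecord:
    name: str
    usage_weight: float   # how often customers exercise this path (0..1)
    failure_rate: float   # historical "hot spot" score (0..1)
    platform_risk: float  # likelihood of platform-specific defects (0..1)

def priority(t: TestRecord) -> float:
    # Illustrative weights; in practice these would be learned, not fixed.
    return 0.4 * t.usage_weight + 0.4 * t.failure_rate + 0.2 * t.platform_risk

suite = [
    TestRecord("checkout_flow", usage_weight=0.9, failure_rate=0.3, platform_risk=0.2),
    TestRecord("settings_export", usage_weight=0.1, failure_rate=0.05, platform_risk=0.1),
]
for t in sorted(suite, key=priority, reverse=True):
    print(f"{t.name}: {priority(t):.2f}")
```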

UI testing has resisted automation to a considerable degree. That’s one reason ML’s potential to automate UI tests looks like “low-hanging fruit.” Even imperfect or partial ML in this area still provides a great improvement over manual approaches.

Test maintenance offers at least a couple of distinct areas in which to apply advanced automation. “Self-healing” automated tests are available in several tools now, and they will likely become standard before long.
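
In outline, a self-healing lookup falls back through alternate locators when the primary one breaks. The sketch below uses Selenium as a stand-in; the locator list is a hand-written assumption, whereas real self-healing tools learn alternates from past runs.

```python
# Sketch: element lookup that "heals" by trying alternate locators.
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.by import By

def find_with_healing(driver, locators):
    """Try each (strategy, value) pair in order; report when healing occurs."""
    for i, (by, value) in enumerate(locators):
        try:
            element = driver.find_element(by, value)
            if i > 0:
                print(f"Healed: primary locator failed; matched via {by}={value!r}")
            return element
        except NoSuchElementException:
            continue
    raise NoSuchElementException(f"No locator matched: {locators}")

# Usage (assumes an active WebDriver session named `driver`):
# submit = find_with_healing(driver, [
#     (By.ID, "submit-371"),                  # brittle auto-generated ID
#     (By.CSS_SELECTOR, "button[type=submit]"),
#     (By.XPATH, "//button[normalize-space()='Submit']"),
# ])
```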

AI is also a natural candidate to handle thorny recognition problems in automated testing. A defining characteristic of AI is that it learns.

Consider that an application that appears on more than one platform — an inevitable goal for any successful application — will have a responsive UI, effectively meaning it doesn’t look the same on different platforms. Many humans migrate with difficulty between iPhone and Android smartphones, let alone more divergent user interfaces. Automated tests increasingly need the AI power of image recognition and optical character recognition (OCR) just to exercise common UIs. Once AI is applied to widget recognition, though, an example test can be maintained in terms closer to “the video scrollbar” than to “ID #371,” with all the advantages that brings.
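
Here is a minimal sketch of that widget-level approach: finding a control by the text a user sees, via OCR, rather than by an internal ID. It assumes the Tesseract engine is installed and that “screenshot.png” is a capture of the screen under test.

```python
# Sketch: locate a widget by its visible label via OCR, not an internal ID.
# Assumes Tesseract is installed and "screenshot.png" is a screen capture.
from PIL import Image
import pytesseract

def find_label(screenshot_path: str, label: str):
    """Return the (x, y) center of the first OCR word matching `label`."""
    data = pytesseract.image_to_data(
        Image.open(screenshot_path), output_type=pytesseract.Output.DICT
    )
    for i, word in enumerate(data["text"]):
        if word.strip().lower() == label.lower():
            x = data["left"][i] + data["width"][i] // 2
            y = data["top"][i] + data["height"][i] // 2
            return (x, y)
    return None

# point = find_label("screenshot.png", "Play")  # click target for the test
```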

ML has its own slants on the problem of maintaining tests over time. ML can apply during training, during report generation, or over the longer cycles (sprints, yearly plans, and so on) that combine the two. Supervised ML is a good fit for report generation, for example: the real value of such a project comes when a human expert cooperates with the ML to improve dataset selection or test execution. Think of this as augmented intelligence more than artificial intelligence.

UI testing appeared above as a particularly favorable current opportunity for AI. Testing of application programming interfaces (APIs) is another. UI and API testing share a focus on the “interface,” of course, but at a technical level the kinds of ML that suit the two domains appear to differ. ML’s contribution in API testing centers on test generation: machines are becoming better than humans at generating test cases that exercise weaknesses in API implementations.
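
Property-based testing gives a flavor of what machine-generated API tests look like. The sketch below uses the hypothesis library to generate boundary-probing inputs; the endpoint and payload shape are hypothetical.

```python
# Sketch: machine-generated inputs probing an API for weaknesses, using
# property-based testing as a stand-in for richer ML-driven generation.
import requests
from hypothesis import given, settings, strategies as st

BASE_URL = "https://api.example.test"  # hypothetical service

@settings(max_examples=50, deadline=None)
@given(quantity=st.integers(min_value=-(2**31), max_value=2**31 - 1))
def test_order_quantity_never_causes_server_error(quantity):
    resp = requests.post(f"{BASE_URL}/orders", json={"quantity": quantity})
    # The API may reject bad input (4xx), but it should never crash (5xx).
    assert resp.status_code < 500
```

Run under pytest, hypothesis will shrink any failing input to a minimal reproduction, which is exactly the kind of weakness-hunting described above.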

Finally, increasingly complex test measurements demand machine help to manage. A simple application might have a test suite straightforward enough to produce “85 out of 85 items pass.” In such a case, it’s easy to reach the business conclusion that the application is ready for end users. Complex modern products, though, are likely to yield a result such as “Version 5182 scored 93.4 over the 118,526 tests completed to this point.” Does that mean it’s time to deploy to production, correct errors, or run more tests? Humans can’t reliably operate at that scale, so we need help from smart tools.
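
A first step toward that machine help is mechanical aggregation: reducing a huge result set to per-area pass rates and flagging regressions against a baseline. The record shape and baseline figures below are assumptions for illustration; real analytics layers add trend models and anomaly detection on top.

```python
# Sketch: reduce a large result set to per-module pass rates and flag
# regressions against a baseline. Record shape is an illustrative assumption.
from collections import defaultdict

results = [  # in practice, hundreds of thousands of records
    {"module": "checkout", "passed": True},
    {"module": "checkout", "passed": False},
    {"module": "search", "passed": True},
]
baseline = {"checkout": 0.98, "search": 0.95}  # pass rates from last release

totals, passes = defaultdict(int), defaultdict(int)
for r in results:
    totals[r["module"]] += 1
    passes[r["module"]] += r["passed"]

for module, total in totals.items():
    rate = passes[module] / total
    if rate < baseline.get(module, 1.0):
        print(f"{module}: {rate:.1%} vs. baseline {baseline[module]:.1%}; investigate")
```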

Trends in agile methodology and the business role of software intensify the need for AI in testing. Just a few years ago, AI and testing were linked only in academic studies. Now, though, as testing and development more broadly shift left — and businesses recognize that they need good-enough software now, rather than perfected software in the indefinite future — testing must take advantage of AI to produce actionable results on time.

Delivery cycles will continue to shorten, and product expectations will only grow more and more complex. A test department that plans to rely on traditional playback and scripted test tools is planning not to keep up.

Advice for Decision-makers

Tools that rely on ML are invariably hard to evaluate. Most organizations barely keep up with the expertise involved in the configuration of the traditional testing tools they already rely on. How can the same staff analyze the ML claims vendors make and decide on specific tools and technologies to adopt?

There’s no easy answer. Careful evaluation of the sorts of products already visible on the horizon is a dissertation-level undertaking, and few organizations budget that level of effort. The best result is likely to come when the organization identifies two or three challenges specific to its software development lifecycle and works with the vendor or tool community to prototype ML’s payback in those particular areas. Likely domains include UI testing, API testing, and analytics of test results over multi-sprint spans.

AI works by learning. Any realistic trial of an AI-based tool needs to plan on expert involvement to supply the tool with high-quality knowledge from which it learns. Only then might the tool usefully apply that knowledge. That’s the point at which investment in AI-based testing begins to pay off.
