Testing is a crucial part of the software development process. It helps ensure that your code works correctly and that bugs are caught before the software lands in the hands of consumers. But it can be difficult to explain these tests and their results to investors...
Originally published on December 2, 2021. Transcription has been altered slightly from the original recording. Listen to the full episode below.
Introduction to AI in Test Automation
Jackie King: Welcome to the latest episode of the Idera DevOps Tools podcast. Our goal is to educate and inform you about key topics in software development. With solutions that help almost one million users throughout every step of building, testing, and deploying applications, our experts are poised to provide enticing insights, perspectives, and information. I’m Jackie King, and with me is Ranorex product manager Jon Reynolds.
Jon Reynolds: Thanks Jackie, and hi, everyone. So, as Jackie said, I am the product manager for Ranorex. Ranorex Studio exists to eliminate the complicated details around UI test automation. So wouldn’t it be great if we could just completely eliminate the test automation step, and have an AI that could generate tests for us?
There’s certainly a lot of talk in the test automation tool market around both AI and machine learning. And when you look at what’s happening with technology like self-driving cars, the idea of self-writing tests doesn’t seem all that far-fetched.
Jackie King: And self-writing tests is probably what most people think of when they hear “AI-driven test automation.” But at least for now, I don’t think that’s the state of the art in the industry. I read recently that Pulse Opinion Research surveyed IT leaders about AI in test automation, and in that survey, only 4% said that the AI features of their test automation tools worked “Very Well.” At the same time, another 55% rated the AI features of their test automation as either “Fair” or “Poor.” So at least in this group, there definitely seems to be a gap between what users expect and what tools that claim to have AI are delivering.
Jon Reynolds: Yeah, I think that’s partly due to confusion about what these terms actually mean and the level of AI technology that currently exists in test automation. So, let’s start by discussing the meaning of these key terms. First, when we say “artificial intelligence” or AI, the most common definition is “the ability of a computer to mimic or imitate human intelligent behavior, such as thinking, reasoning, learning from experience, or making decisions.”
Some of the earliest applications of AI were chess-playing programs. Shortly after the end of World War II, Alan Turing and David Champernowne developed a program called Turochamp that could play an entire game of chess. It worked by analyzing all possible moves as well as an opponent’s next potential move, and then assigning a point score to these possibilities. The program then selected the move with the highest point score. The problem was that the computers of the time weren’t capable of running such a complex algorithm, so the program was never run on a computer during Turing’s lifetime — Turing had to simulate it by hand.
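The move-selection idea Jon describes — score every candidate move with a heuristic and pick the highest — can be sketched in a few lines. The piece values and the move structure below are illustrative assumptions, not Turing and Champernowne’s actual evaluation rules.

```python
# Turochamp-style move selection sketch: score each legal move with a
# simple heuristic and choose the highest-scoring one.
# Piece values and the move format are made-up assumptions.

PIECE_VALUES = {"P": 1, "N": 3, "B": 3.5, "R": 5, "Q": 10}

def score_move(move):
    """Score a candidate move: material captured minus material put at risk."""
    captured = PIECE_VALUES.get(move.get("captures"), 0)
    at_risk = PIECE_VALUES.get(move.get("exposes"), 0)
    return captured - at_risk

def best_move(legal_moves):
    """Select the move with the highest heuristic score."""
    return max(legal_moves, key=score_move)

moves = [
    {"name": "NxP", "captures": "P", "exposes": "N"},   # win a pawn, risk the knight
    {"name": "RxQ", "captures": "Q", "exposes": None},  # win the queen safely
    {"name": "e4",  "captures": None, "exposes": None}, # quiet move
]
print(best_move(moves)["name"])  # "RxQ" — the safe queen capture scores highest
```

The real Turochamp also considered mobility and king safety, but the principle — exhaustively score, then pick the maximum — is the same.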
A more modern example of AI would be Amazon’s new robot that can carry small items through the home, tell you dad jokes, and then return to a charging dock before running out of battery. Another common example in homes is a robot vacuum that can scan a room, identify obstacles, and then decide on the most efficient route for cleaning. And of course, self-driving cars are another common example of AI implementation.
Jackie King: So that reminds me, at a restaurant in my area, they have a robot named Rita that walks you to your table from the host stand. It’s actually pretty funny. When you get to your table, Rita says that her algorithm has identified the perfect table for you. But as far as I can tell, the table is actually selected by the person working at the host stand, so I don’t think we can really call that AI.
Machine Learning, AI, and Deep Learning
Jackie King: But now, let’s go ahead and turn our attention to machine learning. How is machine learning different from AI?
Jon Reynolds: Machine learning is just a subset of AI, where the focus is on the ability of a computer to learn and adapt through experience, usually through building models. AI is a broader concept that includes the ability of a computer to apply what it learns to solve problems and make independent decisions.
There are two main types of ML: supervised and unsupervised. In supervised machine learning, the algorithm is trained on labeled input data, while in unsupervised learning, the algorithm discovers patterns in unlabeled data on its own.
Let’s take a look at a couple of real-world examples of supervised machine learning: facial recognition and speech recognition. Think about how you train a device such as Siri or Alexa to recognize your voice, or train your phone to recognize your face or fingerprint. You provide a series of inputs, and with each input the device then learns a little bit more about you, until it builds a model that it can use in the real world.
Another example of supervised machine learning, going back to the robot vacuum, brings us to a well-known problem experienced by many Roomba owners who also have pets: the device has the potential to spread dog poo as it moves around the home. If your pet makes a mess on the floor, the robot vacuum can run right over it and spread it everywhere it goes. The Verge recently reported that Roomba-maker iRobot has been working on this problem for years, and recently announced a solution that uses a combination of machine learning and the device’s built-in vision system to identify and avoid pet messes. To do this, iRobot built a huge database of fake pet messes — made from playdough and other modeling clays — that it used to train its AI vision system.
Now, let’s talk about “unsupervised ML,” where instead of giving the computer the categories of objects, such as fake piles of dog poo, you allow the computer to discover hidden patterns of data or categories without the need for human intervention.
An example of this is a recommendation engine on a shopping site, which might look at your past purchase history and trends in what similar customers are buying in order to suggest additional items during checkout. Another example of unsupervised learning is how Google News groups articles on the same story from various online news outlets. Some real-world applications require a combination of supervised and unsupervised machine learning, and this is referred to as “semi-supervised machine learning.”
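The supervised/unsupervised distinction above can be shown concretely in a few lines of standard-library Python. The data points and labels are made up for illustration: the supervised half classifies a new point using labeled examples (a nearest-neighbor lookup), while the unsupervised half discovers the same two groups from the raw points alone.

```python
# Supervised: we provide labeled examples, and classify a new point by
# finding its nearest labeled neighbor (1-NN). Data is invented.
labeled = [((1.0, 1.0), "cat"), ((1.2, 0.9), "cat"),
           ((8.0, 8.0), "dog"), ((7.8, 8.3), "dog")]

def classify(point):
    """Return the label of the nearest labeled example."""
    dist = lambda a, b: (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(labeled, key=lambda ex: dist(ex[0], point))[1]

print(classify((1.1, 1.0)))  # "cat"

# Unsupervised: strip the labels away entirely. A simple largest-gap
# split along one axis still discovers the two clusters on its own.
points = [p for p, _ in labeled]
xs = sorted(x for x, _ in points)
gap_index = max(range(1, len(xs)), key=lambda i: xs[i] - xs[i - 1])
threshold = (xs[gap_index] + xs[gap_index - 1]) / 2
clusters = [0 if x < threshold else 1 for x, _ in points]
print(clusters)  # [0, 0, 1, 1] — two groups found without any labels
```

Real systems use far richer algorithms (k-means, neural embeddings), but the contrast is the same: supervised learning needs the labels up front; unsupervised learning finds the structure itself.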
Jackie King: OK, that makes sense. So let’s take a look at another term. What is “deep learning?”
Jon Reynolds: Well, just as machine learning is a subset of AI, deep learning is a subset of machine learning. Deep learning refers to a machine learning algorithm that uses an artificial neural network — basically a brain-like layered structure of algorithms.
It may help us to use an extremely simple example. Let’s assume that your income can be calculated based on your education level and years of experience. A software engineer would create an algorithm based on this knowledge. Now if you input some test data, the algorithm can use it to predict a person’s income.
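The hand-coded approach Jon describes can be sketched directly: an engineer writes the rule, and the program simply applies it to new inputs. The coefficients below are made-up assumptions for illustration only.

```python
# The "classic" non-ML approach: a human engineer encodes the rule
# directly, rather than learning it from data. All numbers are invented.

def predict_income(education_years, experience_years):
    """Predict annual income from education and experience (toy formula)."""
    base = 20_000
    return base + 2_500 * education_years + 1_500 * experience_years

print(predict_income(16, 5))  # 20000 + 40000 + 7500 = 67500
```

The contrast with deep learning, which Jon turns to next, is that here the relationship between inputs and output is fixed by the programmer, not discovered by the model.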
Compare that to deep learning, such as what’s necessary to help a self-driving car recognize a stop sign. First, the artificial neural network would learn to recognize for itself the features of the stop sign, such as its edges and colors. Then, the deep learning algorithm would be fed data so that it could learn from its own errors and identify where its predictions need to be adjusted, without any human intervention. The critical thing with self-driving cars is, how do they collect that data? Some of this data collection can happen in a lab, but data collection is also happening in the real world.
Jackie King: When people think about self-driving cars, they tend to think of Tesla, because that’s been in the news a lot. But recently, CNBC reported that GM’s autonomous-vehicle subsidiary Cruise plans to have at least one million self-driving cars on the road by 2030. So this is a technology that’s developing very quickly.
Moving on, what does AI or machine learning look like when it’s applied to the test automation industry?
AI in Test Automation
Jon Reynolds: Well, we’re not yet at the point where tests can write themselves. For AI or machine learning, you’ve got to have data that the computer can use to learn and make decisions. But here are some applications of machine learning that we do see in the real world.
First, we can talk about self-healing tests. When a button or other object moves or the test environment changes slightly, does your tool automatically detect this change and continue? To resolve this type of issue, the typical approach that you see in some tools is to fix the UI element’s recognition settings and then re-run the failed and unexecuted tests. And Ranorex actually does this. We have a machine-trained self-healing feature that can automatically rerun a failed test with a more robust object path – trying to find the item using some additional criteria.
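The self-healing idea can be sketched as a retry loop over progressively more robust selectors. This is a hedged illustration of the general technique, not Ranorex’s actual implementation; `find_element` and the dictionary-based UI tree are stand-ins invented for this example.

```python
# Self-healing lookup sketch: if the primary object path fails, retry
# with broader fallback criteria. All names here are illustrative.

def find_element(ui_tree, **criteria):
    """Return the first element matching all given attribute criteria."""
    for element in ui_tree:
        if all(element.get(k) == v for k, v in criteria.items()):
            return element
    return None

def self_healing_find(ui_tree, selectors):
    """Try each selector in order, from most specific to most robust."""
    for criteria in selectors:
        element = find_element(ui_tree, **criteria)
        if element is not None:
            return element, criteria
    raise LookupError("element not found with any selector")

# The button's auto-generated ID changed between builds, so the recorded
# selector fails, but the fallback on (tag, text) still locates it.
ui = [{"tag": "button", "id": "btn-1f9c", "text": "Submit"}]
selectors = [
    {"id": "btn-0a2e"},                   # recorded ID (now stale)
    {"tag": "button", "text": "Submit"},  # more robust fallback criteria
]
element, used = self_healing_find(ui, selectors)
print(used)  # {'tag': 'button', 'text': 'Submit'} — the fallback healed it
```

The machine-learned part in a real tool is deciding *which* fallback criteria are most likely to identify the same element; the retry skeleton stays the same.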
Then you have object recognition, like image validation. Can your tool compare an expected image with actual results? Part of training an algorithm involves reviewing your tool’s results and correcting errors. Over time, your tool should be more reliable and make fewer validation mistakes.
I remember watching a Ministry of Testing presentation called “The Rise of the Guardians: Testing Machine Learning Algorithms 101.” It was presented by Patrick Prill, and he gave a pretty good explanation of how machine learning algorithms have to be trained with huge data sets. Thousands or millions of samples are needed just to identify something as simple as the number 7 inside an image, to ensure it isn’t confused with similar-looking characters such as the number 1.
Another one of the biggest challenges in test automation is handling web elements with dynamic IDs. For example, fields related to a user profile might have a different ID for each new session. In early versions of Ranorex Studio, we handled this by configuring path weight rules that disregard dynamic IDs in favor of stable attributes. But a couple of years ago, we introduced a machine-trained algorithm that can detect dynamic IDs in web elements and disregard them, choosing other, more stable attributes to uniquely identify each element.
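The dynamic-ID problem can be illustrated with a simple heuristic: flag IDs that look auto-generated and build the selector from stable attributes instead. The regex and attribute names below are assumptions made for this sketch; Ranorex’s machine-trained detector is more sophisticated than a pattern match.

```python
# Dynamic-ID detection sketch. The hex-suffix heuristic is an invented
# stand-in for a trained classifier.
import re

DYNAMIC_ID = re.compile(r".*[-_][0-9a-f]{4,}$|^ember\d+$|^\d+$")

def looks_dynamic(element_id):
    """Guess whether an ID is auto-generated and likely to change."""
    return bool(DYNAMIC_ID.match(element_id))

def stable_selector(element):
    """Build a selector from stable attributes, skipping dynamic IDs."""
    if "id" in element and not looks_dynamic(element["id"]):
        return {"id": element["id"]}
    # Fall back to attributes that usually survive between sessions.
    return {k: element[k] for k in ("tag", "name", "text") if k in element}

# A profile field whose ID is regenerated every session:
profile_field = {"tag": "input", "id": "field_7f3a9c", "name": "email"}
print(stable_selector(profile_field))  # {'tag': 'input', 'name': 'email'}
```

A trained model can go further — learning, from many runs, which attributes of *this* application’s elements are actually stable — but the output is the same kind of selector rewrite.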
As for those tests writing themselves, there are potential data sources that an AI could use. For example, if you’re developing using an approach such as behavior-driven development in a language like Cucumber, then it’s possible to imagine that you could use your BDD statements as input to train an AI on the expected behavior of an application, which it could then test.
Or, if you’re using a tool that tracks how users engage with a product, you could take that data and use it to create automated regression tests. But currently, these applications of AI for test automation are really in their infancy.
Implications for Automated Testing
Jackie King: So what’s your advice to someone who might be considering buying a test automation tool that claims it does AI?
Jon Reynolds: I’m going to go off on a bit of a tangent here and bring up the Zillow problems that recently made major news. Briefly, for those who aren’t aware, Zillow’s iBuying program used a machine learning algorithm to predict home prices and make cash offers on homes, with the expectation of reselling those homes for a profit. After purchasing over 9,000 homes in the third quarter of 2021, Zillow’s executives concluded that the iBuying program put the company’s future at risk by over-valuing too many homes. Various reports and comments by executives suggested that they could tweak the algorithm and make changes for the long run, but they were not willing to bet the fate of the company on the program.
As a result, executives shut down the program at the beginning of November and laid off about 25% of the company’s workforce. Zillow is now left with a multi-billion-dollar inventory of homes, and a Business Insider report indicates that in Zillow’s five biggest markets, the majority of those homes are listed for less than their purchase price.
So, the lesson from Zillow, which can be applied to any tool, AI-based or not, is that you have to evaluate what the tool is promising and make sure you will get what you paid for. Did the iBuying program use machine learning? Of course it did. Was the algorithm good enough to meet Zillow’s needs? The results tell us “no.”
When choosing a test automation tool (or really any other tool) that offers AI, you just need to ask a lot of questions:
- What does AI imply with this tool?
- How does the application learn or adapt to your own specific use case?
- Where does this application get the data to feed the model?
There are many other questions you can ask along these lines. To me, the most important thing is to complete a thorough proof of concept. Demo videos and sales teams can make a tool look great, but what do your testers say? Do your testers think the tool delivers on its promises of zero-maintenance tests, no code, or other AI buzzwords? Will the tool allow you to increase test coverage, test faster, and have fewer flaky tests? Are there functionality gaps that will require you to maintain multiple testing frameworks? And in the end, is the tool a good investment?
At Ranorex, we’re constantly improving the algorithm for Ranorex Spy, which is used to detect UI elements within mobile, web, and desktop platforms. We have a full set of codeless automation capabilities, so you can record a test and add validations and conditions without writing code. And we have wizards for things like integrating with Jira or TestRail and for instrumenting a web browser or mobile application. But we don’t call that “AI”, yet, because a human is still doing the decision-making, and it’s really important for us to accurately represent the capabilities of our tool.
So in conclusion, we’ve discussed several terms related to AI and how they relate to test automation.
- AI is the broadest term, describing everything from very simple applications such as predictive algorithms, all the way up to self-driving cars.
- Machine learning is a subset of AI, and is most commonly what you see in test automation.
These applications require massive amounts of data to make decisions. And realistically, we are still in the early stage of machine learning applications in automated testing.
Jackie King: Thank you Jon, for discussing AI in test automation with us today, and thank you to everyone who’s listened in. Ranorex eliminates the complicated details around UI test automation while helping users to collaborate and improve software quality. If you’d like to learn more about the topic of AI in test automation, or test automation in general, check out the blogs, ebooks, and webinars that are available at the Ranorex website, www.ranorex.com.