According to 2018 research by the management consulting firm McKinsey & Company, AI has the potential to deliver about $13 trillion in additional global economic activity by 2030.
As more companies incorporate AI into their existing business systems, it becomes crucial for software testers to consider how this technology will change the way they, and their product's users, interact with these systems.
AI’s impact on end-users
AI-based systems have already influenced our lives enormously. Things we once thought impossible have become reality.
Researchers at UC San Francisco built an AI model that could detect the onset of Alzheimer’s disease an average of six years before a clinical diagnosis. They did two rounds of testing; in the first round, the machine-learning algorithm correctly identified patients who developed Alzheimer’s with 92% accuracy, and in the second round, with 98% accuracy.
But while AI-based systems have been able to comb through millions of records to find patterns and gain new insights, this practice has also caused significant problems around data privacy, security, and bias.
The lifeblood of AI-based systems is data. A large amount of user information is needed to train AI models to make the right predictions. But when consumer data is used for these models, security breaches become a likely consequence as data flows between different systems. According to one risk report, there were 5,183 data breaches in the first nine months of 2019 alone, a 33.3% increase over the same period the year before. A total of 7.9 billion records were exposed.
Another toxic byproduct of AI-based systems is the impact on race, culture, diversity and other human social aspects. Do you recall such unsettling news as Google Photos classifying Black people as gorillas; Microsoft’s Tay, an AI chatbot that quickly began spitting out racist tweets; and the Beauty.AI algorithm deeming only white people beautiful?
When AI models are being used to make decisions about humans, rather than humans using AI models as an aid to make informed decisions, we risk becoming slaves to these algorithms, whether we realize it or not.
How do testers ensure that AI is safe for human consumption, and how do we interact with these systems?
Interacting with AI-based systems
As testers, our minds are trained to think of different failure scenarios that could happen in production. We put ourselves in the shoes of an end-user and exercise the application the way they would use it. This helps to uncover a lot of critical information about the application.
The same applies to AI-based systems. We have to think about edge cases when providing different data sets to train the AI model. For example, say we are training an AI model for autonomous cars. Instead of only feeding the model clear images of stop signs, we should also supply images of stop signs covered with snow or graffiti. This tests the AI-based system under real conditions it would encounter. These are the edge cases we need to think about when interacting with these systems.
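As a minimal sketch of what such an edge-case check might look like in practice, the Python snippet below simulates occlusion (snow or graffiti) on a clean test image and asserts that the model either still recognizes the sign or reports low confidence. The `classify_sign` callable and the `stop_sign.jpg` test image are hypothetical placeholders for your own model and test data.

```python
# Edge-case test sketch: degrade a clean stop-sign image and check the model's behavior.
# `classify_sign` is assumed to return a (label, confidence) pair for a PIL image.
import numpy as np
from PIL import Image

def add_occlusion(img: Image.Image, coverage: float = 0.3) -> Image.Image:
    """Simulate snow or graffiti by whiting out a random fraction of pixels."""
    arr = np.array(img)
    h, w = arr.shape[:2]
    mask = np.random.rand(h, w) < coverage
    arr[mask] = 255  # white pixels roughly mimic snow cover
    return Image.fromarray(arr)

def test_stop_sign_under_occlusion(classify_sign):
    clean = Image.open("stop_sign.jpg")
    for coverage in (0.1, 0.3, 0.5):
        degraded = add_occlusion(clean, coverage)
        label, confidence = classify_sign(degraded)
        # The model should still recognize the sign, or at least flag low
        # confidence rather than confidently mislabel it.
        assert label == "stop" or confidence < 0.5, (
            f"Misclassified as '{label}' with confidence {confidence:.2f} "
            f"at {coverage:.0%} occlusion"
        )
```

The same pattern extends to other degradations you expect in the field, such as motion blur, glare, or partial cropping, each added as another parameterized case.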
Also, remember that an AI system often operates as a black box. We do not know how the AI model forms different relationships based on the data sets or how it makes decisions. Keeping this in mind, use more inclusive data sets to reduce bias, have an audit process to ensure the model is learning as you expect, and test for adversarial attacks. (Just like other applications, AI-based systems are also prone to attacks.)
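One way to probe robustness against such attacks is a fast gradient sign method (FGSM) check. The sketch below assumes a PyTorch image classifier `model` and a labeled batch of inputs; the function names are illustrative, not part of any particular testing framework.

```python
# Adversarial robustness sketch using FGSM, assuming a PyTorch classifier.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, images, labels, epsilon=0.01):
    """Generate adversarial examples with the fast gradient sign method."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to the valid pixel range.
    return (images + epsilon * images.grad.sign()).clamp(0, 1).detach()

def adversarial_accuracy(model, images, labels, epsilon=0.01):
    """Accuracy on perturbed inputs; a large drop versus clean accuracy signals fragility."""
    adv = fgsm_perturb(model, images, labels, epsilon)
    preds = model(adv).argmax(dim=1)
    return (preds == labels).float().mean().item()
```

Comparing clean accuracy against `adversarial_accuracy` at a few epsilon values gives a simple, repeatable signal you can track in an audit process.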
Finally, as AI continues to advance, it is essential to upgrade our skills by learning new technologies and programming languages to stay relevant in the industry. After all, being curious, continually learning, and applying critical thinking skills is the essence of what makes us human and differentiates us from algorithms and machines.