Gone are the days when “responsive web” and “mobile first” were the focus of application development and testing. We are now in the era of AI-based applications, which are far more complex than their predecessors.
AI will soon be pervasive in almost all software products and services, and our skills as engineers will have to adapt accordingly. Current roles in companies will change significantly because of AI, and software development and testing will be no exception; we need to prepare for it now.
Working with AI
The inner workings of AI are usually a black box: we cannot see how the algorithm forms relationships and makes decisions. We provide training data sets of input/output combinations, and the AI learns from them, with people constantly monitoring the learning process to ensure the system is making the right decisions.
Only recently have experiments been conducted to make AI explain its decisions. This research is still in its early stages, and it will take time before the results can be generalized to real-world systems. For the most part, we are using AI models to identify patterns across thousands of data sets.
Given that AI systems are black boxes, how should developers and testers interact with them? There are a few ways.
More inclusive datasets
We need to ensure the data sets used to train AI models are diverse and contain many different combinations. For example, if an AI model is built to detect buttons in webpage images, the data sets should contain images of buttons of all different shapes, as well as images that are not buttons. This helps the model get better at detecting buttons across a variety of images. The approach is similar to creating test data for both positive and negative testing.
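One simple way to make this concrete is a data-set balance check. The sketch below is hypothetical (the article names no specific tooling): it counts how many positive and negative examples a label list contains and flags any class that is under-represented, which is the kind of check a diversity audit might automate.

```python
from collections import Counter

# Hypothetical labels for a button-detection training set.
# "button" images are the positive examples; "not_button" images
# (icons, banners, plain text) are the negative examples the model
# needs in order to learn what a button is NOT.
labels = (
    ["button"] * 480 +       # screenshots containing buttons
    ["not_button"] * 520     # screenshots with no buttons
)

def class_balance(labels):
    """Return each class's share of the data set."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {cls: count / total for cls, count in counts.items()}

balance = class_balance(labels)

# A simple audit rule: flag the data set if any class falls below
# 30% of the total, i.e. positives/negatives are badly skewed.
skewed = [cls for cls, share in balance.items() if share < 0.30]
print(balance)   # {'button': 0.48, 'not_button': 0.52}
print(skewed)    # [] -> no class is under-represented
```

The 30% threshold is an illustrative choice; a real audit would pick thresholds appropriate to the problem and also check diversity *within* each class (button shapes, colors, page styles).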
Companies and organizations should perform regular audits to ensure AI models are trained with diversified data sets and that learning is happening as expected. This is a crucial step for developers and testers: it ensures AI-based systems do not have a negative impact on consumers once they are released to production.
Testing for adversarial attacks
AI-based systems are prone to attacks. If we have an AI-based system that detects a particular object in images, someone could change just a couple of pixels in those images to skew the model's learning. When such a system is released to production, it would not work as expected and could even cause harm to humans.
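To see why a couple of pixels can matter, here is a deliberately simplified toy (not a real model): a "classifier" that labels a tiny grayscale image as containing an object when its mean brightness crosses a threshold. Real adversarial attacks target neural networks with far subtler perturbations, but the principle is the same: a small, targeted change flips the output.

```python
def classify(image, threshold=0.5):
    """Label an image 'object' if its mean pixel intensity exceeds the threshold."""
    pixels = [p for row in image for p in row]
    return "object" if sum(pixels) / len(pixels) > threshold else "background"

# A 3x3 grayscale "image"; mean intensity is about 0.59.
image = [
    [0.6, 0.6, 0.6],
    [0.6, 0.5, 0.6],
    [0.6, 0.6, 0.6],
]
print(classify(image))  # 'object'

# An "attacker" changes just two pixels -- visually a minor change,
# but enough to push the mean below the threshold and flip the decision.
attacked = [row[:] for row in image]
attacked[0][0] = 0.1
attacked[2][2] = 0.1
print(classify(attacked))  # 'background'
```

Testers probing an AI system for robustness do essentially this at scale: generate slightly perturbed inputs and check whether the model's output changes when it should not.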
Likewise, if we are training an autonomous car, it is important to feed the AI model images of stop signs with graffiti or stickers on them, so the system learns to recognize them anyway. These are the edge cases to keep in mind when testing such systems.
Will AI affect our jobs?
It is almost a certainty that more jobs will be automated, pushing workers to upgrade their skill sets. The largest impact will be felt by workers in predictable environments, such as assembly plants and the fast-food industry.
As for developers and testers, our jobs will likely be secure. Much of our work is creative and exploratory, and after all, we still need humans to ensure AI data sets are diverse, to constantly monitor the learning process of AI models, and to analyze results classified as “unknown.” And testers will still be needed to test AI-based systems!
Even with the potential security of developer and tester jobs, we still need to take the necessary steps to sharpen our skill sets and be open to learning new technologies. Being curious, continuing to be creative, and thinking critically are the essence of what makes us human and differentiates us from algorithms and machines.
We need to keep up with the fast-paced world of new technologies springing up every day. If we do not keep this in mind, we may become obsolete — with or without the coming of AI.