As applications become more complex, it’s even more important to shift testing left in order to find defects early, fix them, and release updated features as fast as possible. Automated testing is one way to help keep up with demanding project schedules and release cycles.
Unfortunately, organizations may come to the incorrect conclusion that automation can replace their existing functional testing process, which is performed manually by experienced, skilled testers and may combine scripted test case execution, exploratory testing, and risk-based testing. Automation is not a one-size-fits-all solution or a silver bullet that solves every testing problem; it is an aid to the overall testing process.
Where and how does automation fit into the overall testing process? Let’s analyze this step by step.
How automation fits with modern agile practices
In a modern agile environment, automated testing can be done at every level, starting right from the requirements phase all the way through the user acceptance and deployment phases.
This is especially true in the realm of DevOps and continuous testing. DevOps has helped software development and operations teams better collaborate, enabling constant automation and monitoring throughout the software development lifecycle (SDLC), which includes infrastructure management as well. Continuous testing helps to ensure testing starts as early as possible in the SDLC, aligning with the shift-left paradigm.
To achieve continuous testing, automation will be needed in various phases of the development process. That means there will be many changes to what we do as part of testing:
- Starting automation at the beginning of the SDLC, to ensure nearly all test cases are automated
- Aligning all QA tasks to create a smooth CI/CD cycle
- Creating a high level of collaboration to allow for continuous monitoring in the production environment
- Standardizing all QA environments
- Changing the testing mindset from “We’ve completed testing on this module” to “What are the business risks that have been mitigated in the release candidate?”
The key to all these changes is automation. It is the glue that binds DevOps and continuous testing together, and it’s where smart people and tools can help create shorter and more dependable release cycles.
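As a hedged sketch of how the check-in-triggered testing described above can be wired up, here is a hypothetical CI pipeline expressed as a GitHub Actions workflow. The job name, Python version, and test commands are assumptions for illustration, not details from this article; any CI system with push triggers and scheduled runs would serve the same purpose.

```yaml
# Hypothetical workflow: run the automated test suite on every
# code check-in for fast feedback, plus a nightly regression run.
name: continuous-testing
on:
  push:                    # quick feedback for every check-in
  pull_request:
  schedule:
    - cron: "0 2 * * *"    # nightly regression run
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt   # assumed project layout
      - run: pytest -q                          # assumed test runner
```

The same pipeline can later grow stages for deployment and production monitoring, which is what makes automation the glue between DevOps and continuous testing.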
When to use automation
Automation can complement existing functional testing processes — when used in the right way.
Here are some use cases where automation would bring value:
- Automating mundane and repeatable tasks that are time-consuming to do manually
- Using automation right from the start of the SDLC until the release and production monitoring phase, especially in DevOps environments
- Receiving quick feedback about the system when new code is checked in (like when a new feature is implemented), as automated tests can get triggered for every code check-in
- Running regression tests daily to ensure the system's existing functionality still works as expected
- Creating test data to be used for manual exploratory testing, which may otherwise be time-consuming to create manually
- Testing different fields with hundreds of data sets using data-driven testing
- Performing load and stress testing by simulating thousands of users using the application simultaneously, which would otherwise be virtually impossible to do manually
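To make the data-driven item above concrete, here is a minimal, self-contained sketch: one validation routine run against many data sets. The field being tested (an email address) and the validation rule are invented for illustration; in practice the data sets would come from a file or database and the routine would exercise the real system under test.

```python
# Minimal data-driven testing sketch: one check, many data sets.
# The "email" field and its validation rule are hypothetical examples.
import re

def is_valid_email(value: str) -> bool:
    """Toy validation rule standing in for the system under test."""
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[a-z]{2,}", value) is not None

# Each tuple is one data set: (input value, expected result).
TEST_DATA = [
    ("alice@example.com", True),
    ("bob@test.org", True),
    ("not-an-email", False),
    ("missing@tld", False),
    ("", False),
]

def run_data_driven_tests():
    """Return the list of data sets whose actual result differs from expected."""
    return [
        (value, expected, is_valid_email(value))
        for value, expected in TEST_DATA
        if is_valid_email(value) != expected
    ]

if __name__ == "__main__":
    assert run_data_driven_tests() == []
```

With a framework such as pytest, the same idea is usually expressed with parametrized test functions, so each data set reports as its own pass or fail.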
Not everything can be automated
To get the most value from automation, it is important to consider which scenarios should be automated. Quite often, teams spend too much time automating the wrong tasks, such as those that would be easier to test manually. This leads them to the wrong conclusion that automation is not working for them.
Here are some scenarios that show how automation can be unstable and provide less value when used incorrectly.
Using automation to catch rendering issues in the application (such as look and feel) is not ideal. A few tools do visual validation, but it is very difficult to replace humans in this area. For example, in one of my testing projects, the same mobile webpage rendered white on one phone and dark gray on another. We can try to automate such checks, but humans are better at spotting these subtle differences in the look and feel of an application.
Using automation to locate elements on a page by screen position is also not ideal. Automated tests become unstable when they rely on the x, y coordinates of elements, because the webpage can be viewed in different browsers, on different devices, and on different operating systems, and those coordinates change with the screen size. The result is automated tests that are inaccurate and inconsistent.
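To see why coordinate-based lookups are fragile, here is a self-contained toy illustration (no real browser; the tiny "page" structures and element IDs are invented for this sketch). Resolving an element by a semantic identifier survives a layout change, while a stored (x, y) position does not; real UI automation tools favor locators such as IDs or accessibility attributes for the same reason.

```python
# Illustration only: two renderings of the same toy "page" on
# different screen sizes, showing why coordinate lookups break
# across layouts while identifier lookups do not.

DESKTOP_PAGE = [
    {"id": "submit-btn", "x": 400, "y": 300},
    {"id": "cancel-btn", "x": 500, "y": 300},
]
MOBILE_PAGE = [
    {"id": "submit-btn", "x": 120, "y": 540},
    {"id": "cancel-btn", "x": 120, "y": 600},
]

def find_by_id(page, element_id):
    """Stable: resolve the element by its semantic identifier."""
    return next((e for e in page if e["id"] == element_id), None)

def find_by_position(page, x, y):
    """Fragile: resolve the element by hard-coded coordinates."""
    return next((e for e in page if (e["x"], e["y"]) == (x, y)), None)

# The ID-based lookup finds the button on both layouts...
assert find_by_id(DESKTOP_PAGE, "submit-btn") is not None
assert find_by_id(MOBILE_PAGE, "submit-btn") is not None

# ...but coordinates recorded on desktop miss entirely on mobile.
assert find_by_position(DESKTOP_PAGE, 400, 300) is not None
assert find_by_position(MOBILE_PAGE, 400, 300) is None
```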
Using automation to test integrated systems involving software, hardware, web services, APIs and cloud services all communicating in real time with each other may be too complex to be worth it. For example, how would we write an automated test that verifies all the end-to-end scenarios of fitness trackers? We can try as hard as possible to simulate real human movements and mock services, but it is going to be a difficult task to automate a fitness tracker’s entire process. Instead, it would be more effective to have real humans do exploratory testing in parallel to some automated tests.
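As a hedged sketch of the mocking idea mentioned above, hardware input can be simulated so that at least the software layer of such a system is automatable. The step-counting logic and sensor interface below are invented for illustration and are not a real fitness-tracker API; the point is that a mock can stand in for the hardware feed, while end-to-end behavior still benefits from human exploratory testing.

```python
# Illustration: mock a hardware accelerometer feed so step-counting
# logic can be tested without real human movement. The threshold-based
# counter is a toy stand-in, not a real tracker algorithm.
from unittest.mock import Mock

def count_steps(sensor, samples, threshold=1.2):
    """Count one step each time the acceleration reading rises above the threshold."""
    steps = 0
    above = False
    for _ in range(samples):
        reading = sensor.read_acceleration()
        if reading > threshold and not above:
            steps += 1
            above = True
        elif reading <= threshold:
            above = False
    return steps

# Simulate a walking pattern: four spikes above the 1.2 threshold.
sensor = Mock()
sensor.read_acceleration.side_effect = [0.9, 1.5, 0.8, 1.6, 1.0, 1.4, 0.7, 1.3]
assert count_steps(sensor, samples=8) == 4
```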
Automation: A complement to manual testing
Automation and manual testing go hand in hand. Manual testing applies a tester's creativity to explore the product and consider different ways the system could fail, while automated tests excel at repeating mundane tasks and can supply faster feedback.
Each serves its own purpose, and when used appropriately, each can have a positive impact on the overall testing process.