The test automation pyramid has long been used as a reference point to help teams structure their automated tests. But with applications becoming more complex, does this model still make sense for teams to follow? Have things changed since the inception of this model? What are the things to consider in conjunction with this model in today’s automated testing world?
What is the Test Automation Pyramid?
In his book “Succeeding with Agile,” Mike Cohn came up with the test pyramid as a way to approach automated tests in projects. There are various interpretations of this model, but the basic idea is that there are three levels of automated testing that need to be performed:
Unit tests validate the smallest components of your software application. A unit could be a single function in the application code that computes a value from some inputs and is called by several other functions in the codebase. The biggest advantage of unit tests is that they run very fast, beneath the UI, giving us quick feedback about the application. They should make up the majority of your tests, which is why they form the base layer of the pyramid.
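As a quick illustration, here is a minimal unit test written with Python's standard `unittest` module. The `calculate_discount` function is a hypothetical example of the kind of small, isolated unit such tests target:

```python
import unittest


def calculate_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount.

    A hypothetical application function used to illustrate unit testing.
    """
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


class CalculateDiscountTest(unittest.TestCase):
    def test_applies_discount(self):
        # 20% off 100.0 should be 80.0
        self.assertEqual(calculate_discount(100.0, 20), 80.0)

    def test_rejects_invalid_percent(self):
        # Inputs outside 0-100 are rejected rather than silently mishandled
        with self.assertRaises(ValueError):
            calculate_discount(100.0, 150)
```

Tests like these can be run in milliseconds with `python -m unittest`, which is exactly why the pyramid puts so many of them at the base.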
Integration tests validate several components of the software system working together. The components could include databases, APIs, and third-party tools and services, along with the application itself. These tests run much faster than UI tests, as they still run under the hood, but may take a little more time than unit tests, as they have to check the communication between independent components of the system and ensure they integrate seamlessly.
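To sketch the idea, the example below tests a hypothetical `UserRepository` data-access layer against a real (in-memory) SQLite database, so the test exercises actual SQL communication between two components rather than mocking it away:

```python
import sqlite3
import unittest


class UserRepository:
    """A hypothetical thin data-access layer over a SQL database."""

    def __init__(self, conn):
        self.conn = conn
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, email TEXT UNIQUE)"
        )

    def add(self, email):
        cur = self.conn.execute("INSERT INTO users (email) VALUES (?)", (email,))
        return cur.lastrowid

    def find(self, user_id):
        row = self.conn.execute(
            "SELECT email FROM users WHERE id = ?", (user_id,)
        ).fetchone()
        return row[0] if row else None


class UserRepositoryIntegrationTest(unittest.TestCase):
    def setUp(self):
        # An in-memory database keeps the test fast while still running real SQL
        self.repo = UserRepository(sqlite3.connect(":memory:"))

    def test_round_trip(self):
        # Write through the repository, then read back through it
        user_id = self.repo.add("ada@example.com")
        self.assertEqual(self.repo.find(user_id), "ada@example.com")

    def test_missing_user(self):
        self.assertIsNone(self.repo.find(999))
```

A test against a production-grade database or a live API would be slower, but the shape is the same: set up the real components, drive one through the other, and assert on the result.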
UI tests validate the application through its user interface. They are usually written to test end-to-end flows through the application. Their biggest limitation is that they are relatively slow compared to unit- and API-level tests, as they drive the GUI of the application.
This pyramid has served as a blueprint for teams to structure automated tests for over a decade. But is it still relevant in the current era, where the complexity of both tests and applications has increased dramatically?
Rethinking the Test Automation Pyramid
I think the test automation pyramid needs to be revisited. Customers’ needs and technologies have drastically changed since the inception of this model. Teams have modified their automation test strategy to author tests quickly and find defects faster. Also, there are various challenges to building automated tests that the model does not take into consideration.
I have noticed that many organizations, especially those without an established development and release process, do not write unit tests, or do not maintain the ones that already exist. There are multiple reasons for this, but one key factor is the availability of various tools for creating API and UI tests, which draws attention away from unit testing.
As systems become more complex in the age of blockchain, cryptocurrency, AI, and microservices, the focus of testing has slowly shifted from testing individual components to more integrated testing solutions. Testers complement the automation effort with risk-based and exploratory testing.
With the increase in complexity of applications, load, performance, and security testing have become a necessity. Today's applications handle large volumes of data and carry out thousands of interactions with different systems in a matter of seconds, and the movement of data between systems needs to be secure and encrypted. Teams use different tools to simulate load, users, and data to perform these kinds of testing.
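The core mechanic of a load test, firing many concurrent requests and measuring latency and error rates, can be sketched in a few lines. This is a deliberately simplified illustration: `handle_request` is a stand-in for the system under test, where a real load test (with a tool such as JMeter, Gatling, or Locust) would hit an actual endpoint:

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor


def handle_request(payload):
    """Stand-in for the system under test; a real load test would call a live endpoint."""
    time.sleep(0.01)  # simulated processing time
    return {"status": 200, "echo": payload}


def run_load_test(n_requests, concurrency):
    """Fire n_requests at the given concurrency and collect per-request latencies."""
    latencies = []

    def timed_call(i):
        start = time.perf_counter()
        response = handle_request({"request_id": i})
        latencies.append(time.perf_counter() - start)
        return response["status"]

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        statuses = list(pool.map(timed_call, range(n_requests)))

    return {
        "requests": n_requests,
        "errors": sum(1 for s in statuses if s != 200),
        "p50_ms": statistics.median(latencies) * 1000,
        "max_ms": max(latencies) * 1000,
    }


if __name__ == "__main__":
    print(run_load_test(n_requests=100, concurrency=10))
```

Production tools add ramp-up schedules, distributed load generation, and richer percentile reporting, but the output they produce, throughput, error counts, and latency percentiles, mirrors the dictionary returned here.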
Positive user experience has become critical for products to stay relevant in the market. Organizations are making huge investments in usability testing and in ensuring seamless user experiences across different browsers, operating systems, and devices. This is also why cloud-based testing platforms are popular: teams can subscribe to test applications across many configurations without having to purchase physical hardware for testing.
Finally, there is the emergence of smarter testing using technologies such as artificial intelligence. Several solutions can now test the application at every level without requiring teams to write a large number of tests. The AI learns from user actions in production and can identify the flows that need more testing; the more tests you run, the smarter it becomes at detecting flaky tests and automatically prioritizing the most valuable ones.
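Commercial AI-based tools use rich signals such as production traffic and code-change history, but the flaky-test-detection idea can be illustrated with a deliberately simple heuristic: a test whose outcome flips frequently between pass and fail across runs is a flakiness suspect. Everything below (the class, the threshold, the test names) is a hypothetical sketch, not any vendor's actual algorithm:

```python
from collections import defaultdict


class TestHistoryAnalyzer:
    """Tracks pass/fail history per test and flags likely-flaky tests.

    Heuristic: a test whose result flips often across consecutive runs
    (pass -> fail -> pass ...) is a flakiness suspect.
    """

    def __init__(self, flip_threshold=0.3):
        self.flip_threshold = flip_threshold
        self.history = defaultdict(list)  # test name -> list of bools (True = pass)

    def record(self, test_name, passed):
        self.history[test_name].append(passed)

    def flaky_tests(self):
        suspects = []
        for name, results in self.history.items():
            if len(results) < 2:
                continue
            # Count transitions between pass and fail in consecutive runs
            flips = sum(1 for a, b in zip(results, results[1:]) if a != b)
            flip_rate = flips / (len(results) - 1)
            if flip_rate >= self.flip_threshold:
                suspects.append(name)
        return suspects
```

Real AI-assisted tools go much further, correlating failures with code changes and weighting tests by the production flows they cover, but the underlying loop is the same: learn from history, then spend test-execution time where it is most valuable.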
It is clear that automated testing needs to be planned and adapted to the current era based on the team, the application, the tools available, time, cost, and effort.
I propose the context-based hub and spoke model of planning automated testing, pictured below. There are no more “levels” of testing, as in the pyramid; in this model, the focus is more on the types of testing performed. Some types of testing could take precedence over others, based on the context.
This is just one possible reference model that could be used to make decisions about automated testing. The key is to stay relevant with the times and adopt automation practices accordingly.