Retooling for testers

We had recently hired a new software test engineer into our small group, and as part of the orientation, it fell to me to show them our automation suite. I walked through the examples I had prepared, showing what we did to create the tests, how they were integrated with our product, and the steps necessary to commit new tests and use them with our product. By the time I finished the demonstration, I hoped they'd be impressed.

The reaction I received was a little different from what I expected: “Um, I don’t want to be rude … but have you considered just starting over?”

I was a little bit taken aback by this, but truth be told, it was something I had considered as well.

We had a test automation suite that was robust, provided broad and deep coverage, and took advantage of being integrated closely with our product. In fact, we ran our tests literally within the framework of our application and were able to edit our tests directly in our application. At the time it was designed, it was seen as elegant, was strongly supported by the development team, and had stood the test of time. However, to everything there is a season, and this conversation was the beginning of my realization that our test suite’s season had come and gone.

True, there was a lot of underlying code developed to help support our tests, but that also made it difficult to upgrade or update tests. Tests were written with Perl as the underlying language because once upon a time, our team had a lot of people who worked with Perl regularly. Over time, though, turnover and new hires caused a shift in that expertise. Where we once had a solid group of people able to create modules and maintain the code base, that number gradually shrank. The tight coupling we had with our own product and the variety of techniques we used to streamline our continuous integration approach were also showing their age.

As we weighed the options of adding a number of new features, we came to the conclusion that we could either work hard and aggressively modify what we had in place to focus on the new features, or we could, as our new hire suggested, think about “starting over.”

Ultimately, we decided that we would keep the existing framework in place for the older features but start with something new for the new feature development. This was pushed to the forefront of our efforts when our company was acquired and the new management team asked us to focus on streamlining technologies.

If a product or application survives in the market for long enough, it will go through several evolutions. Few products are exactly the same as when they were first introduced. Our product had been in the market for a decade by the time we were acquired by a larger organization. What they envisioned doing with our product was different from what we had originally planned for its primary use and in how it would integrate. Specifically, we were asked to consider how we could make a test automation infrastructure that would not just look at our specific product, but also consider how it might interact effectively with the broader range of products inside its ecosystem.

Starting over does not necessarily mean starting from ground zero

Legacy applications have a user base already in place, and that user base expects key functionality to work in a reliable manner. That means that, even if an application will be redesigned or features will be added, it’s not likely to have everything redone at the same time.

A good example is a web application that is enhanced to allow for responsive design. The look and feel of the application will change based on the dimensions of the browser or the reported user agent. Depending on the level of device support, this may provide two or more views and groups of elements to work with.
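Responsive checks like these lend themselves to parametrizing one test over several viewport widths rather than copying the test per device. The sketch below is illustrative only: the breakpoint widths and layout names are hypothetical, and a real suite would resize the browser window to each width before asserting.

```python
# A minimal sketch of parametrizing a check across responsive
# breakpoints. Widths and layout names are hypothetical examples,
# not taken from any particular product.

BREAKPOINTS = (480, 768, 1200)

def layout_for(width):
    """Map a viewport width to the layout the app is expected to serve."""
    if width <= 480:
        return "mobile"
    if width <= 768:
        return "tablet"
    return "desktop"

# One loop covers every view instead of three copied tests.
results = {w: layout_for(w) for w in BREAKPOINTS}
print(results)  # {480: 'mobile', 768: 'tablet', 1200: 'desktop'}
```

In a real framework this loop would become a parametrized test case, with the resize-and-assert logic shared across all breakpoints.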

If a legacy application has existing test automation in place, odds are that automation will still be useful for the existing functionality. But it will likely not be as useful for new functionality (if it can be used at all). Rather than attempt a full rewrite of the automation code, use the new functionality or feature set to create new tests, adopting new techniques or tools if necessary.

How far down the road can you see?

What may have been a promising platform at one time may fall out of favor down the road. Additionally, with companies merging or trying to streamline product offerings, a testing team may be asked to have their infrastructure fall in line with an existing framework. There can be a variety of reasons for this. Usually, it is because there is already a level of expertise in place, but just as important is the ability to train new people to use that technology.

When companies get acquired or merge, it is often advantageous to try to blend products to work as seamlessly together as possible. While it is possible to take products that exist on different platforms and with different operating systems and supporting tools and get them to work together, having to support multiple frameworks and languages can be more of a burden in the long run than standardizing on a few choices. That can be an advantage if the technologies chosen align with what the team is already using. However, it is just as likely that an engineering group will be asked to migrate to another platform to correspond with products already being used.

If a team has visibility into this and what the desired technology stack will ultimately be, it will be easier to train individuals on the team to be effective users, developers, and testers of those technologies. There still may be considerable need for learning and adaptation to get proficient with those technologies.

Separating the tools from the problems

One of the biggest challenges with any test automation strategy is understanding what problems actually need to be overcome. It is common to treat test automation as a hammer and, thus, every problem as a nail. But there may be a variety of solutions that do not or will not fit into a standard test automation model.

It is common to consider the steps we take to run manual tests, look at what was necessary to work through those steps, and then automate them for future operations. Can those efforts be valuable? Certainly. Will they always be? Not necessarily, or at least not as valuable as we might think. Just because a test can be automated doesn’t necessarily mean that it should be, particularly when using a test tool means forcing it into that paradigm.

Consider the example of adding users to a system. If an application allows for adding individual users through a user interface, is it worth automating that process? To test a single user being added or a couple of different possible workflows, the answer is yes. Does it make sense to add hundreds of users with the same mechanism? If the goal is to stress test the front end over time with repetitive actions, then the answer could be maybe. However, if speed and volume are important, there are usually far better ways to do this type of action, such as with shell scripts or database queries to create bulk user transactions from existing data.
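To make the bulk-creation alternative concrete, here is a minimal sketch of seeding users directly at the database layer instead of driving the UI. The table and column names are hypothetical stand-ins for whatever schema a real product would use; SQLite keeps the example self-contained.

```python
import sqlite3

# A minimal sketch of bulk user creation going straight to the
# database instead of clicking through an "Add User" form.
# Table and column names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)"
)

# executemany inserts hundreds of rows in a single call -- far
# faster than repeating a UI workflow hundreds of times.
rows = [(f"user{n}", f"user{n}@example.com") for n in range(500)]
conn.executemany("INSERT INTO users (name, email) VALUES (?, ?)", rows)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
print(count)  # 500
```

The same idea applies to shell scripts that call an import utility or an API endpoint: keep the UI test for verifying the workflow itself, and use a bulk mechanism when volume is the goal.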

When in doubt, go for small and reusable

It is natural to think of tests or interactions with a system as a process of stringing together workflows. We rarely just perform atomic tasks with a system. Instead, we perform a variety of tasks so that we can accomplish a goal.

Likewise, it’s natural to consider test automation in the same way. We look to create tests that will allow us to get from point A to point Z. In some cases, having lengthy end-to-end tests is the right thing to do, but it comes at a cost. Maintaining such tests may prove to be difficult, if not impossible.

By breaking up individual tests into smaller components that can be run in a sequence, it is possible to modify workflows as needed. Additionally, if one step of a workflow changes, it is easier to make a single change in that one area than to rework a long test with multiple steps. A modular approach makes tests and test components easier to maintain.
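One way to sketch this modular approach: each step is a small function operating on a shared context, and a workflow is just a list of steps run in order. The step names and context keys below are hypothetical; the point is that changing one step means editing one function, not a long monolithic test.

```python
# A minimal sketch of composing small, reusable test steps into a
# workflow. Step names and context keys are illustrative only.

def log_in(ctx):
    ctx["user"] = "alice"  # stand-in for real authentication
    return ctx

def create_record(ctx):
    ctx.setdefault("records", []).append("record-1")
    return ctx

def log_out(ctx):
    ctx["user"] = None
    return ctx

def run_workflow(steps, ctx=None):
    """Run steps in sequence, threading a shared context through each."""
    ctx = ctx if ctx is not None else {}
    for step in steps:
        ctx = step(ctx)
    return ctx

result = run_workflow([log_in, create_record, log_out])
print(result["records"])  # ['record-1']
```

Reordering, inserting, or swapping a step is now a change to the list, not a rewrite of the test.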

There are patterns in everything

While it may feel like a daunting task to put together new test automation, in truth there are a handful of basic interactions that are repeated time and again. If the focus is on a user interface, depending on the device there may be some variation, but most users will type text into a text box, select a drop-down menu to make a choice, click on a radio button or checkbox to select an option or set of options, and click buttons or links to perform operations or navigate to a new location. Those actions are not as numerous as we might initially think.

With a little forethought, it might be possible to make simple components that can be reused in multiple places. By looking for these patterns, we can determine which areas would be easier to make into small libraries or function calls. Rather than have a number of different statements to press a variety of buttons, it may be easier to simply have one method or function that presses a button, with the value of said button a variable that uses an element ID or class. Likewise, clicking on a link or selecting from a drop-down menu will probably be similar in most cases. Leveraging the patterns of interaction can help us make code that is easier to maintain, or at least not have us repeat the same process multiple times.
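The "one method that presses a button" idea might look like the sketch below. The FakeDriver class here is a stand-in so the example is self-contained; with a real WebDriver-based tool, the helper body would instead locate the element by its ID and click it.

```python
# A minimal sketch of a single reusable helper that presses any
# button, keyed by element ID, rather than one statement per button.
# FakeDriver is a hypothetical stand-in for a real browser driver.

class FakeDriver:
    def __init__(self):
        self.clicked = []

    def click(self, element_id):
        # A real driver would find the element on the page and click it.
        self.clicked.append(element_id)

def press_button(driver, element_id):
    """Press the button identified by element_id, whatever page it is on."""
    driver.click(element_id)

driver = FakeDriver()
for button_id in ("save", "submit", "cancel"):
    press_button(driver, button_id)

print(driver.clicked)  # ['save', 'submit', 'cancel']
```

Similar one-line wrappers for typing into a text box, choosing from a drop-down, or following a link give a small library that every test can share.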

By taking the time to consider each of these aspects and how to work with them effectively, retooling test automation for a legacy system need not be a terrifying undertaking. Sometimes starting over may indeed be the best bet. Still, remember that what is new and shiny today may require the same overhaul sometime down the road, so treat starting over as a normal part of the lifecycle rather than a crisis. It may save headaches and heartaches down the line.


About the Author

Michael Larsen has, for the better part of his 20+ year career, found himself in the role of the “Army of One” or “The Lone Tester” more times than not. He has worked with a broad array of technologies and industries including virtual machine software, capacitance touch devices, video game development, and distributed database and web applications. Michael currently works with Socialtext in Palo Alto, CA.
