How to Improve the Automate Everything Approach

Dec 9, 2020 | Best Practices, Test Automation Insights

[Image: Test equipment on an assembly line]

A little over 10 years ago, I heard many, many people at a conference insist that their management expected them to “automate everything.” I heard very similar statements five years ago. I heard the same thing again this spring.

In some ways, it seems perfectly reasonable. You have some tests you work through. You exercise the function, then how the entire piece works, then how it interacts with what was already in the system or on the screen. When you are comfortable with what is going on and you understand it, you turn around and write the code to make what you just did repeatable. Simple, right?

Recently I was talking with a tester who was a bit frustrated. At her shop there are functional tests written for every story. Each test starts like this:

  • Log on to the system
  • Navigate to the screen under test
  • Locate this menu item

Then you test it. Every change to every field on every screen is exercised the same way.

Every script is executed by a person sitting at a desk who steps through it, manually verifies the results, and clicks pass or fail on every single step. The goal is to make certain everything matches the acceptance criteria in detail. So far, this makes sense.

Then all the scripts are gathered and sent to the automation team, who automate them exactly as they were written and executed by the testers working “by hand.”
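To make the picture concrete, here is a minimal sketch of what one of those as-written scripts tends to look like once automated. It assumes a Selenium-based UI test in Python; the URLs, locators, and field names are invented for illustration. Notice that the log-on-and-navigate preamble gets repeated in every single test.

```python
# Hypothetical sketch of one "automate it exactly as written" UI test.
# Every test in the suite repeats the same preamble -- log on, navigate,
# locate the menu item -- and then verifies a single field change.
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By


@pytest.fixture
def driver():
    d = webdriver.Chrome()
    yield d
    d.quit()


def test_customer_name_field_accepts_update(driver):
    # 1. Log on to the system (URL and locators are made up for illustration)
    driver.get("https://test-env.example.com/login")
    driver.find_element(By.ID, "username").send_keys("tester")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "login-button").click()

    # 2. Navigate to the screen under test
    driver.get("https://test-env.example.com/customers/edit")

    # 3. Locate the menu item, make the change, verify the acceptance criterion
    driver.find_element(By.LINK_TEXT, "Customer Details").click()
    name_field = driver.find_element(By.ID, "customer-name")
    name_field.clear()
    name_field.send_keys("New Name")
    driver.find_element(By.ID, "save").click()

    assert driver.find_element(By.ID, "status-message").text == "Saved"
```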

This is the result of taking the “automate everything” reasoning to its full, logical conclusion. But is this really the best that we can do? 

Collaborate, don’t isolate

If the developers are doing any level of unit testing, there should be conversation between the people writing (and unit testing) the code and the people exercising it. Once the code makes it to the test environment, a sanity check of those same unit tests in the new environment will likely give the first level of confirmation of behavior. If they fail, fix the failing code and test again.
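One lightweight way to get that first confirmation is a smoke check that points the same assertions at the deployed test environment. Below is a minimal sketch assuming a pytest suite; the base URL, the /health endpoint, and the customer endpoint are assumptions for illustration, not anyone's actual API.

```python
# Minimal smoke check against the deployed test environment.
# The base URL and endpoints are assumptions for illustration;
# point these at whatever behavior the unit tests already assert locally.
import os

import pytest
import requests

BASE_URL = os.environ.get("TEST_ENV_URL", "https://test-env.example.com")


@pytest.mark.smoke
def test_service_is_up():
    response = requests.get(f"{BASE_URL}/health", timeout=10)
    assert response.status_code == 200


@pytest.mark.smoke
def test_core_endpoint_behaves_as_the_unit_tests_expect():
    # Re-confirm, in situ, the same behavior the unit tests asserted.
    response = requests.get(f"{BASE_URL}/api/customers/1", timeout=10)
    assert response.status_code == 200
    assert "name" in response.json()
```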

Then, the people exercising the code can evaluate the behavior at a deeper level: check links, drop-down lists, communication with other modules, response codes, messages in the logs (application, DB, system, whatever), and the normal testing “stuff.”
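As a sketch of what one of those deeper checks might look like in code, the hypothetical test below walks the links on the screen under test and flags any that come back with an error status. The URL is invented; similar probes can cover drop-downs, downstream modules, and log output as your stack allows.

```python
# Illustrative deeper check: link health on the screen under test.
# The URL is hypothetical; some servers reject HEAD requests, so treat
# this as a sketch rather than a drop-in implementation.
import requests
from selenium import webdriver
from selenium.webdriver.common.by import By


def test_no_broken_links_on_screen_under_test():
    driver = webdriver.Chrome()
    try:
        driver.get("https://test-env.example.com/customers/edit")
        hrefs = [a.get_attribute("href")
                 for a in driver.find_elements(By.TAG_NAME, "a")]
        broken = [h for h in hrefs
                  if h and requests.head(h, timeout=10).status_code >= 400]
        assert not broken, f"Broken links: {broken}"
    finally:
        driver.quit()
```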

Check the acceptance criteria and requirements and make sure they are handled properly. Also check the exceptions that likely were not called out. How often do “requirements” and “acceptance points” describe only one path? What happens if something ELSE happens? Exercise the “something else.”
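A negative-path probe can cover exactly that “something else.” The sketch below is hypothetical (the endpoint and payloads are invented), but the idea carries: the happy path says a valid value is saved, so check that blank, oversized, and wrong-type values are rejected cleanly instead of blowing up.

```python
# Negative-path probe for the "something else" the acceptance criteria
# never mention. Endpoint and payloads are hypothetical.
import requests

BASE_URL = "https://test-env.example.com"


def test_rejects_malformed_customer_update_cleanly():
    # The happy path says a valid name is saved; what if the name is blank,
    # absurdly long, or the wrong type? Expect a clean 4xx, not a 500.
    for bad_payload in ({"name": ""}, {"name": "x" * 10_000}, {"name": 42}):
        response = requests.put(f"{BASE_URL}/api/customers/1",
                                json=bad_payload, timeout=10)
        assert 400 <= response.status_code < 500, bad_payload
```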

Focus on function and flow

You have now reasonably confirmed that the software addresses the change it was intended to address, at least to the level most people will exercise it. You have likely already done more than most would, and nothing has been automated yet. Here is where most “testing” stops and people begin to “automate.” It is the wrong place to start.

Instead, take a look at the intended usage of the software. How does it get used in the wild? What do the customers, external or internal, reasonably intend to use the software for? Can you emulate what they need to do? Can you emulate the “business flows” they will use?

Many will say “No.” I understand that. At one point in my working life, I would have agreed. A wise woman gently asked me once, “Have you tried asking anyone?” I hadn’t. That was a lesson I have never forgotten.

Intended usage often isn’t in the requirements or the acceptance criteria, and it is rarely addressed in the “justification,” “statement of business purpose,” or “problem/need” statement. Most of the time, those are not prepared by the people who use the software to do what needs to be done. Ask the people who need it for their jobs, if at all possible. It may not be possible, I get that. But someone can likely describe how the software gets used.

Talk with them.

Then, build scenarios to exercise what they describe. Review the scenarios with them. Show them what the software does to make sure you understand the need being addressed.

The scenarios you scripted, and whose results you reviewed, serve one vital purpose.

They define the main business “flows” through the software you are supposed to test. Once you have those, you have a meaningful set of test scenarios that make sense to the actual customers.
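One simple way to keep those flows front and center is to capture them as a catalog that the automation walks directly. The sketch below is purely illustrative: the flow names, endpoints, and payloads are invented, and the flows are expressed as ordered API calls, but the same shape works for UI steps.

```python
# A catalog of reviewed business flows, expressed as ordered API calls
# (every name, path, and payload is hypothetical). A single parametrized
# test walks each flow so the suite mirrors real usage.
import pytest
import requests

BASE_URL = "https://test-env.example.com"

REVIEWED_FLOWS = {
    "place first order": [
        ("POST", "/api/customers", {"name": "Flow Test"}),
        ("POST", "/api/orders", {"sku": "ABC-123", "qty": 1}),
        ("GET", "/api/orders/latest", None),
    ],
    "cancel an order": [
        ("POST", "/api/orders", {"sku": "ABC-123", "qty": 1}),
        ("DELETE", "/api/orders/latest", None),
    ],
}


@pytest.mark.parametrize("flow", REVIEWED_FLOWS)
def test_business_flow(flow):
    for method, path, payload in REVIEWED_FLOWS[flow]:
        response = requests.request(method, f"{BASE_URL}{path}",
                                    json=payload, timeout=10)
        assert response.ok, f"{flow}: {method} {path} -> {response.status_code}"
```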

Automate the right tests

Now that you have meaningful test scenarios, automate those scripts — with the following caveats:

  • Be cautious in automating scenarios that require a lot of manual intervention (e.g., a card swipe).
  • Avoid automating tests of features that resist automation, like image-based reCAPTCHA challenges.
  • Wait to automate a test until the feature is relatively stable and the pass/fail criteria are clear.

Otherwise, the effort required to maintain or execute your test automation may outweigh the benefit of automating your test in the first place.
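In a pytest-based suite, one lightweight way to honor these caveats is to tag such tests and keep them out of the unattended run. The marker names below are illustrative, not a standard.

```python
# Mark tests that need manual intervention or cover still-unstable features
# so they stay out of the unattended automated run. Marker names are
# illustrative; register custom markers in pytest.ini to avoid warnings.
import pytest


@pytest.mark.manual  # needs a physical card swipe; run attended only
def test_point_of_sale_card_payment():
    ...


@pytest.mark.skip(reason="CAPTCHA challenge resists reliable automation")
def test_signup_with_captcha():
    ...


@pytest.mark.skip(reason="feature still changing; pass/fail criteria unclear")
def test_new_reporting_dashboard_totals():
    ...
```

The unattended run can then deselect the attended tests with pytest -m "not manual", while the skipped tests stay visible in the report until the feature stabilizes.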

For more recommendations on what to automate, check out the article: 10 Best Practices in Test Automation #1: Know What to Automate. 

