When to Choose Manual over Automated Testing: Podcast Transcript

Feb 10, 2022 | Test Automation Insights


When should you automate testing? In this episode of the Idera DevOps Tools Podcast, Ranorex Product Managers Jon Reynolds and Julie Fuelberth describe when you should use test automation, and when you should test manually.

The podcast was originally published on February 10, 2021. Transcription has been altered slightly from the original recording. Listen to the full episode below.

Julie Fuelberth: Welcome to the latest episode of the Idera DevOps Tools podcast. Our goal is to educate and inform you about key topics in software development. With solutions that help almost one million users throughout every step of building, testing, and deploying applications, our experts are poised to provide enticing insights, perspectives, and information.

I’m Julie Fuelberth, and today, I’m with Jon Reynolds. We are both Product Managers for Ranorex Studio, and today we’re going to discuss when to choose manual or automated testing in your software testing strategy. Hey Jon, how are you?

Jon Reynolds: Oh, I’m doing pretty good today Julie, thanks.

Julie Fuelberth: Excellent. So recently, you addressed the process of moving from manual to automated testing in a webinar. And in that webinar, you acknowledged that you should never expect to automate 100% of your testing. And that automated and manual testing pretty much complement each other. So that said, there must be best practices, though, to get started automating software testing.

So how can you determine which test cases will give you the highest return on the time and effort invested? I know both you and I have experience in QA testing, but what are the best practices to get started?

Jon Reynolds: So in theory, any software test can be automated. But the question is, where are you investing your time correctly? Will it cost more to develop and maintain some automated tests? Or are they better suited for manual testing? And that’s what you need to focus on: identifying the tests which suit automation, and acknowledging there are tests that just don’t. So you want to get the best return on your effort. To be successful, you need to focus your strategy on test cases that meet certain criteria for automation.

Because testing resources are limited, one of the first considerations in launching a test automation project is where are you going to focus? What are you going to automate?

Julie Fuelberth: Sure, so we’ve identified how we’re going to transition. What you’re saying is the next thing we need to talk about is what we’re going to automate and focus in on those beneficial test cases that we need to start with.

Jon Reynolds: Exactly. So prioritizing your list of tests to automate will guide you through a successful test automation strategy.

Which tests are candidates for automation, and which aren’t?

Julie Fuelberth: I love it. So can you walk us through and maybe some of the questions that you would ask yourself that would help identify where to start? Because I’m sure once you’ve taken this step to say OK, we’re going to automate some of these tests, you’ve got a lot of opinions on where to start. So how do you start focusing?

Jon Reynolds: Yes, certainly. The first and most obvious question to ask is, “Is the test going to be repeated?” Automating a test is only worthwhile if you are going to rerun it, because if you only run the test once, the automation script never gets reused. So you want to look at not only whether the test is going to run more than once, but how often you’re running it. The tests that you execute the most frequently are likely the ones you’re going to automate first, because you’re going to get the greatest return on time investment from those tests.

Julie Fuelberth: Yeah, absolutely, and I would assume there are multiple benefits here. Not only are you saving time and money on those repetitive tests, but at least in my experience, eliminating them really improves morale. So you’re improving your QA team’s morale and getting them more involved in the things that are more difficult. And since a manual test is prone to mistakes, you also have to consider that removing the human factor can help increase your accuracy. It gives your testers more time to spend on more challenging tests, and by taking away the repetitive tests, you’ve taken a big chunk of work off your QA team.

So what’s next? Where else can we find great results in automating tests?

Jon Reynolds: Yeah, so two other great candidates for automation are regression tests and smoke tests. These are the ones you’re going to end up executing the most frequently, so it ties back to the frequency question, but generally these are tests that cover the entire product in some capacity. Automating them will give you a quicker way of assessing the quality and stability of your entire product.

Automating regression test suites can also help you integrate them with the build process of your DevOps pipeline, shifting quality checks into your build process. These are tests that you’re pretty much going to run with every build anyway, which makes them key candidates for starting your automation project.

Julie Fuelberth: So when you talk about regression testing, at least in my experience, some companies are really good at that and always do it. But it’s also one of the things that can get missed when you’re in a hurry to get a build or a new software update out. So this has a huge impact on making sure that regression testing actually gets done. Everyone knows regression testing makes sense and you should always do it. But when you’re in a hurry, or you’re in an agile environment and trying to roll out code quickly, sometimes it gets missed.

Jon Reynolds: Right, and adding these regression tests as part of your build process through a CI/CD pipeline can help ensure that they’re run every time. You may miss some of the manual tests that you end up doing because you don’t have the time. But at least the regression and smoke tests are going to help you identify: “is my application at least stable? Can I deploy it?” Even if you’re trying to do a canary release or something like that, that can help you at least get some initial baselines on how your application performs.

Then, when you’re looking at your automated tests, another thing that helps with your speed of delivery is determining which tests you can run in parallel. If you have tests that can run in parallel rather than in sequential order, you can save an immense amount of time by running them concurrently on different VMs, machines, or instances. That will help you speed up your delivery and execute far more tests in a shorter amount of time.
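As a rough illustration of the time savings Jon describes, here is a minimal plain-Python sketch, assuming the checks really are independent of each other. The page-check functions are hypothetical stand-ins that simulate work with a short sleep:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Three hypothetical, independent checks; each simulates ~0.2 s of work.
def check_login_page():
    time.sleep(0.2)
    return "login ok"

def check_search_page():
    time.sleep(0.2)
    return "search ok"

def check_checkout_page():
    time.sleep(0.2)
    return "checkout ok"

checks = [check_login_page, check_search_page, check_checkout_page]

# Sequential: total time is roughly the sum of all checks.
start = time.perf_counter()
sequential_results = [check() for check in checks]
sequential_time = time.perf_counter() - start

# Parallel: independent checks run concurrently, so total time
# approaches the duration of the slowest single check.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=3) as pool:
    parallel_results = list(pool.map(lambda check: check(), checks))
parallel_time = time.perf_counter() - start

print(sequential_results == parallel_results)  # same outcomes
print(parallel_time < sequential_time)         # less wall-clock time
```

In a real suite the concurrency would come from your test runner or separate machines rather than threads in one process, but the arithmetic is the same: three independent 0.2-second checks take about 0.6 seconds in sequence and about 0.2 seconds in parallel.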

With sequential tests, if the tests can only be executed in a certain order, then they may not be the best candidates for automation. You could probably automate them, but they may not use your time most efficiently, or they may just be a lower priority for what you want to automate. Especially if you’re transitioning to automation, the tests you can run in parallel, that aren’t sequential and aren’t dependent on other tests, are easier (or better) candidates for getting started.

There are also some indicators that tests can be run in parallel: are you running the tests with multiple data sets, paths, or environments? Do you have cross-browser tests? These are perfect tests to run in parallel. Do you have to run different logins with different credentials or different inputs? If you have a data source, you can probably pull that data and run those tests in parallel as well.
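The data-driven case described here can be sketched as a simple table of credential rows. In practice a framework like pytest would express this with parametrization, but a plain-Python sketch with hypothetical names shows the idea: each row is independent, so each could run on its own machine or browser instance:

```python
# Hypothetical stand-in for a real backend or UI under test.
VALID_USERS = {"alice": "s3cret", "bob": "hunter2"}

def attempt_login(username, password):
    # Placeholder for driving the real login flow.
    return VALID_USERS.get(username) == password

# Data source: (username, password, expected outcome). Pulled from a
# table like this, every row is an independent, parallelizable test.
test_data = [
    ("alice", "s3cret", True),
    ("bob", "wrong-password", False),
    ("mallory", "s3cret", False),
]

results = [(user, attempt_login(user, pw) == expected)
           for user, pw, expected in test_data]
print(results)
```

Adding another credential set means adding a row of data, not another test script, which is exactly what makes data-driven tests cheap to scale out in parallel.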

Julie Fuelberth: For sure. Now, you might think, “if I am making all these cost and time savings, do I need as many resources in QA as I have?” What I have found is you still do. You’ve freed these people up to dig into more complex test cases. Those might be lengthy, they’re tedious, and you need to perform them manually. A lot of times those are the ones that are avoided, but they affect the quality and the user experience. Skipping them also means that your test coverage and your accuracy could decrease, and there again those areas are prone to failure. So what do you think the highest priority should be when you move into the more complex features?

Jon Reynolds: So when you start looking into the more complex features, you want to ask: what are the higher-priority features? If a feature is more complex, it may be more prone to failure, because it’s integrated with other parts of the tool. So you have to prioritize the features themselves: which ones will yield a bigger return on investment for your automation? Prioritize the ones that must be tested while also being candidates for automation based on some of the other criteria we’ve already discussed.

Then look at the tests that are most likely to fail. You may be able to get some of that analysis from past reports: just look at your failure rate for certain tests, and treat those as the ones where you’ll get a higher return on investment.

Julie Fuelberth: Totally makes sense. So like you said, the areas that are more prone to failure are key for guaranteeing product stability, and that is huge when it comes to customer retention, quality, and the user experience. High-priority features could also equate to critical business paths, or main features of the product that customers have grown to expect. So I want to come back to something that you mentioned in your webinar and I mentioned earlier: you shouldn’t expect to automate 100% of your test cases. You’ve got to find the right balance between manual and automated tests, so that you can take some time and really dig in.

Which automation tool should you choose?

Julie Fuelberth: So we’ve talked about test cases. What about the tool you’re going to use? How do you know when to use which one?

Jon Reynolds: Right, so one important thing to note: not only can you not automate 100% of your test cases, you also can’t execute every possible test. You cannot fully test your application. You’ll never get 100% test coverage on your application because there are just so many variables at play. So the goal is to get as much coverage as possible.

Julie Fuelberth: That’s a great point, yeah.

Jon Reynolds: Thanks! Part of the coverage aspect is that there are limitations within the tool that you choose for automation. Some tools aren’t great for coded automation, say driving WebDriver endpoints or API endpoints, or for load testing. Each tool has its limitations. You’re not going to use a load testing tool to test your UI, and you’re not going to use an API testing tool like Postman to check that all of the buttons are present on your web interface. So when you’ve chosen where to focus your automation, your tool should align with that focus. You may have tools that do more than API testing or a single focus, but you’re still going to be limited by your tool. So you should prioritize your test cases based on the tools that you have, or the tools that you plan to use.

Julie Fuelberth: So when you think about it, because you had that discussion earlier during your webinar about when to move from manual to automated: by the time you’re choosing the tool, you should already have a good idea of what you want to automate? Is that right?

Jon Reynolds: Yes, you want to know what your goal is for automation before you start diving into tools, because you could spend months evaluating them. If you don’t have a plan in place ahead of time, then you could be evaluating tools that won’t yield the best return on investment. If you need to free up time to focus on unit tests, then a UI test automation tool may not be the right choice for you. Conversely, if you already have good unit tests built into your developer code or your build process, and you have a big web interface, then you probably want to focus on what you can automate in that web interface.

Julie Fuelberth: So evaluating your current process and setting the goals for your test automation strategy should address some of the feasibility. However, as you mentioned, automation doesn’t replace all manual tests. There are tradeoffs.

Jon Reynolds: Exactly. When you are transitioning from manual to automated tests, you’re not going to — and you’ve mentioned this a couple of times — you’re not going to eliminate your manual testers. You’re going to give them more time to do what they do best, which is not smoke testing, but exploratory testing, trying to break your product, or doing the more complex tasks that just can’t be automated.

And when you’re evaluating which tests to keep manual, are you looking at an area of your application that’s going to change over time? Or do you have, say, a product or project in flight, already in development, that’s going to overhaul your UI? If you’re trying to switch to automated testing and you’re planning to overhaul your UI, you want to bench those UI automation tests until after your UI is updated. There’s no point in automating them and then redoing the work.

So, take a deep look at the changes that are coming in your product line and other integrations or related features. And if you see those kinds of changes coming, those aren’t going to be the tests that you want to automate right now. Those are going to be the tests that you keep as manual tests until those releases are out and stable, and you can start transitioning those tests into your regression suite and test the new features that are coming out.

Julie Fuelberth: Great points, all great reasons to consider automating certain test cases and not others. I would assume there are also reasons not to automate test cases, and other areas of your test automation strategy that need to be addressed. So what else do you need to consider to be successful?

Jon Reynolds: So not only do we need to know that your application accepts the inputs that you provide, we also need to ensure that it doesn’t fall over when somebody does something unexpected. It can be difficult to mimic some of these random behaviors using automation. Like hitting the back and forward buttons in the middle of a checkout process, or just doing things that, say, a frustrated user who’s not very good with computers might do. I’m not going to throw my wife under the bus here, but, uh, so …

Julie Fuelberth: I was going to say it. Sounds like me.

Jon Reynolds: The person who gets mad and just closes the laptop mid-checkout because the connection timed out. These are the kinds of things that you’re not going to get from automation, because you’re not going to get those random acts, like unplugging your device in the middle of a process. You’re not really going to see those from automated tests, so these are exactly the things you want your manual testers to have time to experiment with.

What are the benefits of test automation?

Jon Reynolds: When you’re doing your automated tests, test automation tools will provide you with insights about the quality of your software in a few reports. But these reports shouldn’t be the only reason you’re automating. So when you are running your automated tests and you’re just getting green reports all the time, are you actually looking at the reports? Are you looking at the tests that did fail and why they failed and determining what needs to be corrected?

You shouldn’t be doing automated tests just so you can see more “green”. You want to actually look at your report, and maintain your tests and monitor them. Spot check your results to make sure that things are still working: your automation is working the way it should. So all of this should be kept in mind as you are transitioning to automation. You still need to do some maintenance. If you write your tests well, and modularize your tests, and make them as independent as possible, the maintenance requirements should be a little lower. But that’s part of your automated test strategy.
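The point about modularizing tests to keep maintenance low can be illustrated with a minimal sketch; the login flow and test names here are hypothetical stand-ins. If the login screen changes, only the shared helper needs updating, not every test that happens to log in first:

```python
# One helper that is the single place that knows how the login flow works.
def login(session, username, password):
    # Stand-in for driving a real login screen or API.
    session["user"] = username if password == "s3cret" else None
    return session["user"] is not None

def test_view_profile():
    session = {}
    assert login(session, "alice", "s3cret")  # reuse the shared step
    # ...profile-specific steps would go here...
    return "profile ok"

def test_place_order():
    session = {}
    assert login(session, "alice", "s3cret")  # reuse the shared step
    # ...order-specific steps would go here...
    return "order ok"

print(test_view_profile(), test_place_order())
```

Each test also builds its own fresh session rather than depending on another test having run first, which is the independence property that keeps automated suites cheap to maintain and safe to run in any order.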

When you are looking at your test results, you also want to make sure that you’re actually reading them, not just seeing, “OK, I’ve got 100% green, everything is great.” That’s also where the manual process comes in, to back up your automated results, so that you can say, “Not only did my automation pass, but my exploratory testing didn’t encounter any issues, or we only encountered a couple of minor items.”

Julie Fuelberth: Oh, I totally agree. When you look at those reports, I know you want to see all green, but in my opinion and my experience, you want some of that red, so that you know you’re pushing as hard as you can with your testing against your product. And I always say, everything comes down to the income statement and the ROI you receive from a business decision you make.

So Jon, we’ve identified several of the gains that you get with automated testing: time and money, mainly because of productivity improvements. However, some of the points that we’ve made today bring up other valid impacts on ROI, like quality and user satisfaction. That goes a long way for customer retention.

In my early days of doing QA testing, I was always so glad to have the time to get to a deeper, more complicated test. Like that exploratory testing you mentioned: I knew it should be done, my gut said there might be something we should worry about, but it was avoided because we just didn’t have the time. I knew those tests needed to be done and that they would have an impact on the quality of the product. But time didn’t allow it, because I was doing the repetitive, functional regression testing. I particularly wanted to do more data-driven testing and more scenario testing, to try to replicate the variety of the customer data and the use cases that our customers were using the product for. So, for lack of a better way to say it: I wanted to really put the software product to the test, but I just didn’t have the time. Have you experienced that before?

Jon Reynolds: Yeah, and hopefully this is where your automation strategy comes in. If you plan it well, and identify the tests which are high priority and highly repetitive, you are going to get your cost savings, and your return on investment is going to come very early. The more you can move those repetitive, redundant tasks into automation, the “I’ve got to click the same inputs” or enter the same values, or slightly different values, over and over again, the more you free up time for your testers to start poking around and exploring those real-life scenarios, which are more than just looking at A, B, and C inputs and expecting D, E, and F outputs.

Julie Fuelberth: I kind of like your comment, “poking around.” Like I always say, I love a good story problem, and I love to poke around and try to break things or solve things. So you’re exactly right, and that’s true of your good QA testers. This is what they enjoy doing the most: poking around, seeing where there might be issues, finding them, and solving them for the business.

So hey Jon, we always have a great discussion. We both share being product managers here on Ranorex Studio, so I appreciate your time! And I thank everyone who’s listened in. Be sure to subscribe to all of our podcasts so that you don’t miss any of our upcoming episodes.

Jon Reynolds: Yes, thank you very much Julie. This has been a great discussion, and I hope we have more soon.

Julie Fuelberth: Absolutely, thanks!

Related Posts:

Types of Automated Testing Explained

Automated testing is a crucial part of software development. This guide helps you understand and implement the right types of automated testing for your needs.

API vs GUI: What’s the Difference?

APIs and GUIs have different functions and require suitable testing. Ranorex Studio’s test automation gives you the perfect testing program for APIs and GUIs.