In this article, we’ll talk about eliminating duplication, comparing images, explicit waits, and driving input from places other than the user interface.
Don’t Repeat Yourself, or DRY, is every bit as good advice for writing test automation as it is for writing production code. The more places I have to make a change, the more likely I am to miss one while hunting down all the locations to modify. And if I had to hunt down those multiple locations this time, it’s a good bet I’m going to have to do it again. This goes back to data‑driven testing and keeping hard-coded specifics out of tests wherever possible. Hopefully this goes without saying, but any time I find myself tempted to copy and paste something from one test into another, even if it saves me time now, I’m setting myself up for tedious replacement and refactoring later.
Better to make small snippets that you reuse, some of which eventually become functions. That way, when a break happens, the “fix” only needs to happen in one place.
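As a hypothetical sketch of that progression, imagine two tests that each copied and pasted the same account-setup steps; pulling those steps into one helper means any change to the setup happens in exactly one place. The names here (`make_account` and the fields it builds) are invented for illustration:

```python
# Hypothetical sketch: duplicated setup steps from several tests,
# extracted into one reusable helper. Names invented for illustration.

def make_account(name, plan="basic", active=True):
    """The one place every test builds its account fixture."""
    return {"name": name, "plan": plan, "active": active}

def test_upgrade():
    account = make_account("alice")   # reuse the helper, don't copy-paste
    account["plan"] = "premium"
    assert account["plan"] == "premium"

def test_deactivate():
    account = make_account("bob")
    account["active"] = False
    assert not account["active"]

test_upgrade()
test_deactivate()
```

If the account shape changes, only `make_account` changes, and every test picks up the fix.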
Use Smart Image Comparison
It’s one thing to confirm that an image is on a page, and confirming the right image is there is certainly a test I might want to write. However, if there is an issue with the pixelation or the loading of that image, the higher the match percentage I demand, the more likely I’ll get a false failure. Fortunately, a variety of tools allow image comparisons with a percentage match, accepting “fuzzy matches” that are close enough, by letting me dial the precision up or down. This can eliminate false failures, but too fuzzy a match might allow a real defect to slip in. Another approach is to use a tool like Applitools Eyes, which uses AI to detect only the differences in images that are perceptible to users.
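As a rough sketch of the “percentage match” idea (not how any particular tool works internally), pixels can be compared with a per-pixel tolerance, and the images pass if enough of them match; the tolerance and threshold values below are arbitrary illustrations of the dials you can turn:

```python
# Sketch of a "fuzzy" image comparison. Images are lists of grayscale rows;
# pixel_tolerance and match_threshold are arbitrary illustrative values.

def fuzzy_match(img_a, img_b, pixel_tolerance=10, match_threshold=0.95):
    """True if at least match_threshold of pixels differ by no more
    than pixel_tolerance."""
    pixels_a = [p for row in img_a for p in row]
    pixels_b = [p for row in img_b for p in row]
    if len(pixels_a) != len(pixels_b):
        return False  # different dimensions never match
    matches = sum(
        1 for a, b in zip(pixels_a, pixels_b) if abs(a - b) <= pixel_tolerance
    )
    return matches / len(pixels_a) >= match_threshold

baseline = [[100, 100], [100, 100]]
slightly_off = [[105, 100], [98, 100]]   # compression noise: close enough
very_off = [[200, 30], [100, 100]]       # a real visual difference

print(fuzzy_match(baseline, slightly_off))  # True
print(fuzzy_match(baseline, very_off))      # False
```

Loosening `pixel_tolerance` or `match_threshold` absorbs rendering noise; loosen them too far and a genuine defect slips through, which is the trade-off described above.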
Don’t Delay Explicitly
I’m certain that I could go through my tests right now and find several places where, due to the erratic response of resources, I’ve put in arbitrary delays. I know from experience that it takes about a minute from when I spin up an AWS environment until I can actually connect via SSH, so entering a delay of 60 seconds makes a lot of sense. However, there are times when the resource is available in less time than that, and sometimes it’s not available even after 60 seconds. When it’s available sooner, I’m waiting longer than necessary. When it doesn’t respond after the sixty seconds, my test fails and I have to start again. Putting in an implicit wait, or making a routine that polls for the availability of the service, makes a lot more sense to me. That way, I only wait the required amount of time, and if the wait exceeds what I’ve decided is acceptable, I can consider the test a legitimate failure, not one caused by an arbitrary timer completing. Additionally, if I get access to the resource in 20 seconds, so much the better: I’ve saved 40 seconds in that test step. Multiply that by dozens or hundreds of tests and that’s a real time saver.
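One way to sketch that polling idea, assuming the resource is reachable over TCP as in the SSH example, is a loop that retries the connection and gives up only after a maximum wait; the host, port, and timeout values below are placeholders:

```python
import socket
import time

def wait_for_port(host, port, max_wait=60.0, poll_interval=1.0):
    """Poll until the TCP port accepts connections or max_wait elapses.
    Returns True the moment a connection succeeds, False on timeout."""
    deadline = time.monotonic() + max_wait
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=poll_interval):
                return True  # resource is up: stop waiting immediately
        except OSError:
            time.sleep(poll_interval)  # not up yet: retry until the deadline
    return False

# Usage sketch (placeholder host): fail fast instead of a fixed 60s sleep.
# if not wait_for_port("my-ec2-host", 22, max_wait=60):
#     raise TimeoutError("SSH never became available")
```

If the port opens at 20 seconds, the test proceeds at 20 seconds; if it never opens, the failure is a legitimate timeout rather than an arbitrary sleep expiring at the wrong moment.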
Change The Channel
I consider effective automation to help take out the steps that are tedious and require a lot of repetition. In many cases, testing at the UI level makes sense, but there are times when there are better ways. If I’m looking to create test data, it makes much more sense to write scripts that let me create data in a database from a known good source. In the product that I currently work with, there are often varying needs for testing, and rather than try to create that data from scratch each time or drive the UI to create it, I can import accounts or other data structures that contain all of the pieces that I need and none of the pieces that I don’t. Additionally, APIs are my friends. I can send a few parameters via Postman or cURL and confirm that I am getting back the data I expect. With practice, entire suites of tests can be created that will exercise the feature functionality of my application without my having to drive a web browser at all.
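As a minimal sketch of seeding data through a script rather than the UI, here is the idea with SQLite standing in for whatever database the product actually uses; the table, columns, and fixture rows are invented for illustration:

```python
import sqlite3

# Known-good test accounts, imported directly rather than typed into the UI.
# Schema and data here are invented for illustration.
SEED_ACCOUNTS = [
    ("alice", "premium"),
    ("bob", "basic"),
]

def seed_database(conn):
    """Create the accounts table and load the known-good fixture rows."""
    conn.execute("CREATE TABLE IF NOT EXISTS accounts (name TEXT, plan TEXT)")
    conn.execute("DELETE FROM accounts")  # fresh, predictable data every run
    conn.executemany("INSERT INTO accounts VALUES (?, ?)", SEED_ACCOUNTS)
    conn.commit()

conn = sqlite3.connect(":memory:")
seed_database(conn)
count = conn.execute("SELECT COUNT(*) FROM accounts").fetchone()[0]
print(count)  # 2
```

The same seed can be loaded before every build, giving each test run exactly the pieces it needs and none it doesn’t.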
All test suites will, at times, struggle with some or all of these issues, but if I take the time to consider these areas and implement them where possible, the odds of my test suite needing a lot of time and attention to debug or maintain will go down considerably.
Putting It All Together
Instead of this code:
wait 30
click_on(element_name)
Use an approach more like:
wait_for_element_maxtime(element_name, optional time)
click_on(element_name)
This function should end the waiting as soon as the element appears. The maximum time should be a variable that you can change once for all tests, so if fifteen seconds is acceptable instead of ten, you only need to make the change in one place.
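A minimal sketch of such a function might look like the following; the `is_present` callable stands in for however your framework actually locates the element, and `DEFAULT_MAX_WAIT` is the single variable you would tune:

```python
import time

DEFAULT_MAX_WAIT = 10.0  # change once; applies to every test that uses it

def wait_for_element_maxtime(is_present, max_wait=DEFAULT_MAX_WAIT,
                             poll_interval=0.25):
    """Return as soon as is_present() is true; raise if max_wait elapses.
    is_present is a placeholder for your framework's element check."""
    deadline = time.monotonic() + max_wait
    while time.monotonic() < deadline:
        if is_present():
            return  # element appeared: stop waiting immediately
        time.sleep(poll_interval)
    raise TimeoutError(f"element not found within {max_wait}s")

# Usage sketch: appears() simulates an element that shows up after 0.5s.
appeared_at = time.monotonic() + 0.5
def appears():
    return time.monotonic() >= appeared_at

wait_for_element_maxtime(appears, max_wait=5)  # returns in ~0.5s, not 5s
```

Raising on timeout means a missing element fails the test for a legitimate, reportable reason instead of a later step failing mysteriously after a fixed sleep.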
Expanding that “don’t repeat yourself” (DRY) principle to test automation code will make greening the tests easier. As you automate, consider how often the user interface will change, which parts of the screen might change, and by how much. Finally, look to import test data from an external source that is fresh for every build, instead of entering it through the user interface each time.
The results will be faster-running tests that produce fewer false failures and are easier to fix.
What’s not to like?