This chapter introduces and explains the preprocessing methods for image-based automation in Ranorex Studio. The application of these methods is introduced in previous chapters; here, the underlying concepts and basic functionality are explained in overview. This chapter is an advanced topic, and knowledge of these concepts is optional.
In this chapter
In computer graphics and digital imaging, image scaling refers to the resizing of a digital image. When scaling a vector graphic image, the graphics primitives that make up the image can be scaled using geometric transformations, with no loss of image quality. When scaling a raster graphics image, a new image with a higher or lower number of pixels must be generated. In the case of decreasing the pixel number (i.e. scaling down, or downsizing), this usually results in a visible loss of quality.
Application in Ranorex
The primary use of downsizing an image is to reduce preprocessing time: smaller images are quicker to compare and search for. Downsizing reduces an image to a set of distinctive attributes and characteristics without losing important information. It also brings similar, but not identical, images closer together with respect to the similarity match.
Example for downsized images (www.wikipedia.com)
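The idea of downsizing can be illustrated with a minimal sketch. The function below is a simple box-filter downsample (averaging 2x2 blocks of a grayscale image, represented as a list of rows); it is an illustration of the general technique, not the specific algorithm Ranorex uses internally.

```python
def downsample_2x(img):
    """Halve a grayscale image (list of rows) by averaging each 2x2 block."""
    h, w = len(img), len(img[0])
    return [
        [
            (img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) // 4
            for x in range(0, w - 1, 2)
        ]
        for y in range(0, h - 1, 2)
    ]

# A 4x4 image reduces to 2x2; fine per-pixel detail is averaged away,
# so two slightly different source images tend to downsample to the same result.
img = [
    [10, 10, 200, 200],
    [10, 10, 200, 200],
    [50, 50, 90, 90],
    [50, 50, 90, 90],
]
print(downsample_2x(img))  # [[10, 200], [50, 90]]
```

Because averaging smooths out small pixel-level deviations, downsized versions of similar images match each other more readily.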
Edge detection is an image processing technique for finding the boundaries of objects within images. It works by detecting discontinuities in brightness. Edge detection is used for image segmentation and data extraction in areas such as image processing, computer vision, and machine vision. Simplified, applying an edge detection algorithm to an image usually significantly reduces the amount of data to be processed and therefore filters out information that is regarded as less relevant while preserving the important structural properties of an image.
Application in Ranorex
Within Ranorex, edge detection is used to make the selection of images or sub-images easier and more robust with respect to color and brightness.
Example image without & with Edges filter (www.wikipedia.com)
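The core mechanism, detecting discontinuities in brightness, can be sketched in a few lines. The example below marks pixels where the horizontal brightness jump exceeds a threshold; real edge detectors (including the one in Ranorex) are more sophisticated, so treat this as a conceptual illustration only.

```python
def edges(img, threshold=50):
    """Mark pixels where the horizontal brightness changes abruptly (1 = edge)."""
    return [
        [1 if abs(row[x + 1] - row[x]) > threshold else 0 for x in range(len(row) - 1)]
        for row in img
    ]

# A sharp dark-to-bright transition between columns 1 and 2 is detected;
# the small variations elsewhere are filtered out.
img = [
    [10, 12, 200, 198],
    [11, 13, 205, 201],
]
print(edges(img))  # [[0, 1, 0], [0, 1, 0]]
```

Note how the output keeps only the structural boundary and discards the absolute brightness values, which is exactly why edge-filtered images are robust against color and brightness changes.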
The Sobel operator, sometimes called the Sobel–Feldman operator or Sobel filter, is used in image processing and computer vision, particularly within edge detection algorithms, where it creates an image emphasizing edges. In Ranorex, the Sobel operator is an additional option for edge detection and clarification. Together with edge detection, it makes image detection more robust against differences in color, brightness, and complexity.
Example for an image with and without filter application (www.wikipedia.com)
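The Sobel operator itself is well defined: two 3x3 convolution kernels approximate the horizontal and vertical brightness gradients, and their combined magnitude highlights edges. The sketch below computes the gradient magnitude at a single interior pixel; how Ranorex applies and post-processes the operator is not specified here.

```python
# Standard Sobel kernels for the horizontal (GX) and vertical (GY) gradient.
GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img, y, x):
    """Gradient magnitude at interior pixel (y, x) of a grayscale image."""
    gx = sum(GX[j][i] * img[y + j - 1][x + i - 1] for j in range(3) for i in range(3))
    gy = sum(GY[j][i] * img[y + j - 1][x + i - 1] for j in range(3) for i in range(3))
    return (gx * gx + gy * gy) ** 0.5

# A vertical black-to-white step between columns 1 and 2 yields a strong
# response on both sides of the boundary; flat regions yield zero.
img = [
    [0, 0, 255, 255],
    [0, 0, 255, 255],
    [0, 0, 255, 255],
]
print(round(sobel_magnitude(img, 1, 1)))  # 1020
```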
In photography, computing, and colorimetry, a grayscale (or greyscale) image is one in which the value of each pixel is a single sample representing only an amount of light; that is, it carries only intensity information. Grayscale images are typically composed of up to 256 shades of gray, varying from black at the weakest intensity to white at the strongest. Grayscaling an image is a common way of bringing images that are similar but differ in color closer together for the similarity match.
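A grayscale conversion collapses the three color channels into one intensity value. A common weighting (the Rec. 601 luma coefficients, used here as an illustrative assumption; other weightings exist) is:

```python
def to_gray(pixel):
    """Convert an (R, G, B) pixel to a single intensity value using the
    common Rec. 601 luma weights (one of several possible weightings)."""
    r, g, b = pixel
    return round(0.299 * r + 0.587 * g + 0.114 * b)

print(to_gray((255, 255, 255)))  # 255 (white stays white)
print(to_gray((255, 0, 0)))      # 76  (pure red becomes a dark gray)
print(to_gray((0, 0, 255)))      # 29  (pure blue becomes nearly black)
```

After this conversion, two images that differ only in hue map to very similar intensity images, which is why grayscaling helps the similarity match.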
Thresholding is the simplest method of image segmentation. From a grayscale image, thresholding can be used to create binary images. The simplest thresholding methods replace each pixel in an image with a black pixel if the image intensity is less than some fixed constant, or a white pixel if the image intensity is greater than that constant. In the example image on the right, this results in the dark tree becoming completely black, and the white snow becoming completely white.
Example image without and with filter application (www.wikipedia.com)
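Thresholding is simple enough to show directly. The sketch below binarizes a grayscale image against a fixed constant, exactly as described above; the threshold value 128 is an arbitrary illustrative choice.

```python
def threshold(img, t=128):
    """Binarize a grayscale image: white (255) at or above t, black (0) below."""
    return [[255 if p >= t else 0 for p in row] for row in img]

# Dark pixels (like the tree in the example) go to black,
# bright pixels (like the snow) go to white.
gray = [
    [30, 70, 200],
    [90, 150, 240],
]
print(threshold(gray))  # [[0, 0, 255], [0, 255, 255]]
```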
The similarity value can be adjusted from 0.0 to 1.0, corresponding to 0 % similarity (completely different pictures) and 100 % similarity (completely identical pictures). It may be tempting to use values like 0.8 or 0.9 to ensure the image is found even if some superficial changes occur. However, these values are only seemingly high; in practice, they are already very low.
At 0.9 similarity, an entirely white 100-pixel picture would be considered identical to a picture with 90 white and 10 black pixels. That’s quite a difference already. When you start comparing images in the magnitude of several thousand pixels, the optical deviations can be even more striking.
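The 100-pixel example can be verified with a small sketch. The similarity measure below (one minus the mean squared normalized pixel difference) is a simplified model consistent with the description in this chapter, not necessarily the exact formula Ranorex implements.

```python
def similarity(a, b):
    """Similarity of two equal-size intensity lists, computed as
    1 - mean of squared normalized per-pixel differences."""
    n = len(a)
    return 1.0 - sum(((x - y) / 255.0) ** 2 for x, y in zip(a, b)) / n

# An all-white 100-pixel image vs. one with 10 black pixels:
all_white = [255] * 100
mostly_white = [255] * 90 + [0] * 10
print(similarity(all_white, mostly_white))  # 0.9
```

Ten percent of the pixels flipped from white to black still passes a 0.9 similarity check, which illustrates why such values are lower than they appear.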
Similarity example #1
Consider the icons of Edge and Internet Explorer in the image below. They are each around 2000 pixels and markedly different from each other. A 0.9 value would not catch these differences. It would consider them a match. In fact, you would need a minimum value of 0.95 for them to be treated as different.
Program icons of Edge and Internet Explorer
The two images are found to be identical pictures at a similarity value of 0.9. For this reason, we recommend you use a similarity value of 1.0 or 0.9999. To ensure your images are found at these high values, make sure to use lossless image formats, such as .png and .bmp. The artifacts created by lossy compression make formats like .jpg unsuitable.
For large pictures in the order of several thousand pixels and more, we also recommend you turn off similarity reporting, as it can take a very long time to compute even on fast machines.
Similarity example #2
Similarity defines (as a percentage) how similar the compared images need to be in order to pass the check. The color difference for each pixel is calculated, and the mean of the squared differences is computed.
- Imagine that we compare 10x10 pixel color images
- If all pixels have the same color except for one pixel being white (RGB 255,255,255) in picture A and black (RGB 0,0,0) in picture B, then the similarity is 99%
- If all pixels have the same color except for one pixel being black in pic A and gray (RGB 128,128,128, i.e. 50% color difference) in pic B, then the similarity is 99.75% (because of the squared error)
- Simply speaking, a similarity of 99% is already quite low if you compare large images and want to find small differences
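The two bullet-point results above can be reproduced with a short sketch. Pixel differences are expressed here as fractions of the full color scale (1.0 = 100 % difference, 0.5 = 50 %); again, this mirrors the description above rather than Ranorex's exact internal formula.

```python
def similarity(diffs_a, diffs_b):
    """1 - mean of squared per-pixel differences (differences given as
    fractions of the full color scale)."""
    n = len(diffs_a)
    return 1.0 - sum((x - y) ** 2 for x, y in zip(diffs_a, diffs_b)) / n

base = [0.0] * 100  # a 10x10 image, all pixels identical to the reference

# One pixel differs by 100 % (white vs. black):
diff_full = [1.0] + [0.0] * 99
print(similarity(base, diff_full))  # 0.99

# One pixel differs by 50 % (black vs. mid-gray); squaring 0.5 gives 0.25,
# so the penalty is only a quarter as large:
diff_half = [0.5] + [0.0] * 99
print(similarity(base, diff_half))  # 0.9975
```

The squared error is what makes a single half-strength difference count for only a quarter of a full-strength one.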