@krstcs, thank you for your response. I'm going to digress a bit here, since an exchange that goes beyond a simple issue into "philosophical" matters is always interesting and likely to broaden our views on test automation.
Problem statement: the tracker does not "see" controls when they are under an empty overlay
I am very familiar with advanced C# coding techniques, and so far I have mostly written hybrid code in Ranorex, combining drag/drop repository items with custom functions where needed. For example, a script working with the SAP system to validate variant configuration values took 5 minutes to run. The best that could be done with drag/drop code (recording would have been a nightmarish experience) resembled this:
Code: Select all
Click "find"
Type the name of the characteristic to look for
Click on the search button
Validate that the found value was good
Repeat for each value (30+ of them)
The code ran in a little over 5 minutes, which made the roughly 2,000 iterations this test needs practically impossible: it would have taken ~7 days of continuous execution for this single script, which was unacceptable. So the following components were built:
- data grabber
Code: Select all
Click on grid
Click on "go to first page"
Read value in the grid
Take a picture of the value row
Store both in a global variable of type Dictionary<string,customValue> (where customValue contains the value and the picture)
Repeat Read until the end of the current page
Click on Next Page and Repeat Read
Repeat Next page until last page
- data validation
Code: Select all
Receive parameter "value name" and "expected value"
Retrieve the value from the dictionary
Validate that the values match
Mark the <string,customValue> object as "validated"
- un-validated values report
Code: Select all
Select Dictionary<string,customValue> object that was not validated
Report as warning message
Read next until all un-validated values are processed
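The three components above can be sketched in plain C#. This is only a minimal illustration under my own naming assumptions (CustomValue, VariantChecker, etc. are hypothetical); the actual grid reading, screenshot capture, and Report calls are Ranorex-specific and are replaced with comments here:

```csharp
using System.Collections.Generic;
using System.Linq;

// Holds a grabbed value, the picture of its row, and a validation flag.
public class CustomValue
{
    public string Value;
    public byte[] RowImage;   // screenshot of the value row
    public bool Validated;    // flipped by the validation step
}

public static class VariantChecker
{
    // Global store filled once by the data grabber.
    public static readonly Dictionary<string, CustomValue> Values =
        new Dictionary<string, CustomValue>();

    // Data grabber (storage side): in the real recording, name, value and
    // image come from the grid cells while paging through the results.
    public static void Store(string name, string value, byte[] rowImage)
    {
        Values[name] = new CustomValue { Value = value, RowImage = rowImage };
    }

    // Data validation: look up the grabbed value, mark it validated,
    // and compare against the expectation.
    public static bool Validate(string name, string expected)
    {
        CustomValue stored;
        if (!Values.TryGetValue(name, out stored))
        {
            // Report.Failure("Value " + name + " was never grabbed");
            return false;
        }
        stored.Validated = true;
        if (stored.Value != expected)
        {
            // Report.Failure(..., attaching stored.RowImage);
            return false;
        }
        return true;
    }

    // Un-validated values report: warn about anything grabbed but never
    // checked, so silent coverage gaps surface in the log.
    public static List<string> ReportUnvalidated()
    {
        var leftovers = Values.Where(kv => !kv.Value.Validated)
                              .Select(kv => kv.Key)
                              .ToList();
        foreach (var name in leftovers)
        {
            // Report.Warn("Value " + name + " was grabbed but not validated");
        }
        return leftovers;
    }
}
```

The validation-flag trick is the key design choice: the grabber pays the slow UI cost once, and every later check is a dictionary lookup.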
Once these components were built, they were assembled into a "recording" that looks like this:
Code: Select all
Call data grabber
Validate that value "A" is "123"
Validate that value "B" is "456"
[... more values to validate]
Report un-validated variables
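A condensed, self-contained C# sketch of how the assembled recording reads end to end (all names are my own illustration, and the Ranorex reporting calls are replaced with comments):

```csharp
using System.Collections.Generic;
using System.Linq;

public static class AssembledRecording
{
    // Grabbed values keyed by name, plus the set of names already checked.
    static readonly Dictionary<string, string> grabbed =
        new Dictionary<string, string>();
    static readonly HashSet<string> validated = new HashSet<string>();

    // Returns the number of un-validated leftovers, for illustration.
    public static int Run()
    {
        // Call data grabber (in reality: one pass over all grid pages).
        grabbed["A"] = "123";
        grabbed["B"] = "456";
        grabbed["C"] = "789";   // grabbed but never validated below

        // Validate individual values: the easy-to-map part that less
        // "tech savvy" resources maintain.
        Check("A", "123");
        Check("B", "456");

        // Report un-validated variables.
        var leftovers = grabbed.Keys.Where(k => !validated.Contains(k)).ToList();
        foreach (var name in leftovers)
        {
            // Report.Warn("Value " + name + " was never validated");
        }
        return leftovers.Count;
    }

    static void Check(string name, string expected)
    {
        validated.Add(name);
        if (grabbed[name] != expected)
        {
            // Report.Failure("Expected " + expected + ", got " + grabbed[name]);
        }
    }
}
```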
Adopting this new approach reduced the execution time to ~15 seconds per run, or around an hour for all the iterations. It hides the complexity inside user functions and allows easy mapping of values by less "tech savvy" resources, thereby reducing maintenance costs.
Recordings, on the other hand, give you "code" that is often too rough to even start mapping data fields. Consider the following example:
Code: Select all
Click on field A
Enter value "1"
Enter "{TAB}"
Enter value "2"
That sequence is too often interpreted by the recorder as:
Code: Select all
Click on field A
Enter a value "1{TAB}2"
Other tools I use (which I am not going to name here) actually convert the recording to the following, making it easier to map data later:
Code: Select all
Click on field A
Enter value "1" in field A
Enter value "2" in field B
I'm not saying one is better than the other; Ranorex captures the actual key sequences because you might genuinely be interested in testing how the GUI reacts to various inputs. But if you don't care about that, and all you want is to capture the results of firm actions, like entering data in an app and making sure it works, then Ranorex recordings need some rework/rewriting.
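The per-field form lends itself naturally to data binding. Here is a small C# sketch of the idea (FormFiller and its Log are hypothetical stand-ins; in a real recording each entry would be a repository item's setter instead of a log line):

```csharp
using System.Collections.Generic;

public static class FormFiller
{
    // Records the per-field actions so the example is verifiable without
    // a GUI; a real implementation would drive the application instead.
    public static readonly List<string> Log = new List<string>();

    // One separately data-bindable action per field, instead of a single
    // opaque keystroke sequence like "1{TAB}2".
    public static void Fill(Dictionary<string, string> data)
    {
        foreach (var kv in data)
        {
            // Real recording: set the field's value via its repository item.
            Log.Add("Enter value \"" + kv.Value + "\" in field " + kv.Key);
        }
    }
}
```

Swapping the test data then means swapping the dictionary, e.g. `FormFiller.Fill(new Dictionary<string, string> { { "A", "1" }, { "B", "2" } });`, with no edits to the recorded steps themselves.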
The plan for recordings is to:
- Capture new repository items quickly
- In user sessions, capture the steps of a process; no need to take notes
- In user sessions, capture the minimum validations that users need; no need to take notes
From these recorded sessions you get repository items which are likely to be "all over the place" unless you have carefully crafted weight rules to suit your environment. But I think it's faster to take that output as a starting point and reorganize it, with cached root folders under various grouping levels or under global features common throughout a given app, etc. You also get some draft code that serves as a reference for the process under test, making it easier to dispatch the refactoring work to resources who may not know how the business process/application works.
I hope it makes sense, but I am sure this approach will evolve as more scripts are written.
P.S. I didn't want to post the snapshot because I would have had to edit it heavily to remove some bits and protect intellectual property; I won't go into too much detail here. The reason I put up a screenshot instead is to highlight the hierarchical portion of the snapshot, which places an overlay before the other container and contributes to the recording issues I am facing.
P.P.S. Sorry for the long post; if you've been patient enough to read this far, then thank you!