Best practice in Test Suite usage

Posts: 109
Joined: Wed Mar 26, 2014 6:23 pm

Best practice in Test Suite usage

Post by gilbar16 » Wed Mar 09, 2016 6:55 pm

In a Ranorex Test Suite, what's a good number of test cases to set as your maximum before branching out to a new project/new test suite?
Is over 200 test cases too much? Or 300? Or 500?

Or is it okay to have 1,000 or more test cases in a Test Suite?

And how about the Setup/Teardown...
Would you just add them in the beginning and the end of the Test Suite?
Is it okay to have Setup/Teardown within test cases in addition to that?

Thanks in advance for any input you may have.


Ranorex Guru
Posts: 2683
Joined: Tue Feb 07, 2012 4:14 pm
Location: Austin, Texas, USA

Re: Best practice in Test Suite usage

Post by krstcs » Wed Mar 09, 2016 7:24 pm

For your first question, I think the only practical limit is the amount of RAM your system has. Logically, it really depends on how you organize your tests.

I have thousands of test cases, most nested inside other test cases, but my tests are very dynamic and data-driven (as in, most test cases only run if the data requires it). So for me, thousands is actually a good thing: it means my data can cover more scenarios, and the manual testers don't have to mess with creating tests, only with entering the data that shapes them. (I also created a Test Data Management system, a web-based front end for entering data into my database, which is what drives everything.)

One of my setups has about 8 projects: 1 core library, plus 5 test suites for vastly different business flows (purchase at each of 2 system types, fulfillment, change orders, returns, and a couple of odd-man-out type tests), each having hundreds to thousands of nested test cases, all with SQL data connectors (which are manipulated on the fly at runtime to make the returned data and test case execution fit the current test scenario).
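The "data shapes the tests" idea can be sketched in a few lines. This is plain Python for illustration only; the flow names and helpers are made up, not the real suite:

```python
# Each scenario row pulled from the test database lists the flows it needs;
# a nested "test case" (a stub function here) only runs when the data asks.

# Hypothetical flow names and stub test cases, just to show the shape:
TEST_CASES = {
    "purchase": lambda: "purchase ok",
    "return": lambda: "return ok",
    "change_order": lambda: "change_order ok",
}

def run_scenario(scenario, test_cases=TEST_CASES):
    """Execute only the test cases the scenario's data calls for."""
    return [test_cases[flow]() for flow in scenario["flows"]]
```

A scenario row like `{"flows": ["purchase", "return"]}` drives exactly two of the three cases, so coverage grows by adding rows to the database, not test cases to the suite.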

Setup/Teardown, again, depends on your needs. I use them throughout my tests, although mostly in the suite. Setup/Teardown ensure that the modules in them get run, regardless of the outcome of the test case. They are handy for cleaning up after failures, or for setting up something that has to be done before something else and won't impact external test cases (or test cases that aren't children of the test case the Setup/Teardown is in).

I don't think there is any one way to do things. There may be better ways, depending on what you are trying to do, but best practices with Ranorex can be boiled down to just a few things:

1. Always, always use a version control system. If you don't, you will lose work.
2. Structure the test suites in a way that makes sense for your organization and test developers.
3. Only automate and validate what you absolutely have to.
4. Keep the modules as small as possible as this makes them more "modular" and reusable.

Other than that... Do what feels best.
Shortcuts usually aren't...


Re: Best practice in Test Suite usage

Post by gilbar16 » Thu Mar 10, 2016 6:47 pm

Excellent tips/suggestions, krstcs !!!

I assume that when you say "organize your tests", you meant creating folders in the Test Suite then adding all Login tests in the Login folder, for example.

It would be nice if we can see a sample of your dynamic and data-driven test frame plus your Test Data Management even just a miniature version or just plain screenshots. :)



Re: Best practice in Test Suite usage

Post by krstcs » Thu Mar 10, 2016 7:33 pm

Well, I can't post that much info on here, and none of it is available externally, so I will probably just have to describe it and post screenshots.

However, before I do, let me state that I do not do test automation the way most people do. I approach it like a software development project. Most people who move from manual testing into automation try to do exactly the same thing with automation that they do with manual tests. They structure the tests the same way, they organize their data the same way (Excel, anyone?? I hate Excel, by the way, I think it is a blight on the testing process... :D ). I also very heavily (maybe overly!! :D ) embrace data-driven testing.

So, let me describe the systems I test, and then I'll lay out some info about my test architecture.

I work for The Container Store (look us up, we sell great stuff, and we're #14 on Forbes' "Top 100 Places to Work in the US"). We have approximately 90 stores across the continental US. We build all of our Point-Of-Sale (POS) and eCommerce (web) software in-house. The POS software also includes a custom closet design application ("CDC" : a semi-CAD system) and is built on Java, using a REST middle tier with Oracle (and some MS SQL Server) backend. eCommerce is standard stuff and uses the same middle tier.

I write tests mainly for the POS/CDC system, although I have been recently re-tasked with writing IE 11 web tests since Selenium seems to have issues with IE (or that is what I was told).

Our POS system allows customers to place orders. These orders can contain multiple fulfillment groups. A fulfillment group is just a collection of items that are going to be given to the customer in a certain way:

"TAKE" - customer carries the items out of the store at the time of purchase
"SHIP" - customer wants the items shipped from our distribution center (DC) directly to their home/business/etc.
"PICKUP" - like TAKE, but they will get the items at a different time, or from a different store
"DELIVER" - like SHIP, but comes from a store, using a deliver service, mainly in large metro areas (NYC, DFW, LA)
"RETURN" - customer returns an already purchased item to a store

The last 2 can be combined with an installation service depending on which items are purchased.
The last 3 can be placed over the phone, online, or in the store.

All of this sits on top of a REAL-TIME inventory tracking system (we were one of the first in the industry to do this) that tracks how much stock each store has, as well as the DC. In addition, it also tracks what is on trucks to the stores, and projects what we expect to get into the DC on any given day from our suppliers.

In addition to all of this, the POS/CDC also allows a user to go into an existing order (anything but TAKE) and change it (say the customer calls back and wants a different color of the item they originally ordered, or quantity, or ship location, or payment, or really just about anything in the order...).

ANNNDDD... The CDC allows the user to create and customize closet systems (elfa is our big one, it's great stuff!!) and purchase those fully complete designs.

Sooo.... How do you test all of that with a flat system? How do I get to point M in the system without going through A,B,C,D,E, and J first (or any other combination that may be appropriate)??

On top of that, how do I allow the manual testers to be able to define test scenarios without them having to learn SQL and Ranorex the way I know it? Even to do simple test cases with a flat structure, like most people do, I would then have thousands of test cases to manage individually. I'm not gonna do that... :D

So, I came up with a model in the DB to handle almost every situation that can occur in our system. Basically it mirrors the structure of our Oracle back-end.

Here it is (* => each parent may have many of these):


CustomerOrder has -
    - *FulfillmentGroup (SHIP, TAKE, etc.) has -
        - *Closet design (all the items that are being purchased to make that closet system)
        - *Detail (the item list) has -
            - *Quantities (this allows for changing the quantity during a test, to check that the system handles that)
    - Customer (email, name, etc.)
        - *Address
        - *Phone
    - *Store (the store or stores that this order will be purchased in; allows the same purchase shape to be placed in multiple stores so we can test different tax rates, etc.)

This isn't all of it, but you get the point. Returns are handled in a different structure, but each return is linked to a specific Order/Store combination (or to multiples, so we can return from multiple orders at the same time).
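For anyone who wants that hierarchy as code, here is a rough sketch of the same shape as plain Python dataclasses (the field names are illustrative, not the real schema):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Quantity:
    value: int  # *Quantities: lets a test change the quantity mid-order

@dataclass
class Detail:
    sku: str
    quantities: List[Quantity] = field(default_factory=list)

@dataclass
class FulfillmentGroup:
    kind: str  # "SHIP", "TAKE", "PICKUP", "DELIVER", ...
    details: List[Detail] = field(default_factory=list)

@dataclass
class Customer:
    email: str
    addresses: List[str] = field(default_factory=list)
    phones: List[str] = field(default_factory=list)

@dataclass
class CustomerOrder:
    customer: Customer
    groups: List[FulfillmentGroup] = field(default_factory=list)
    stores: List[str] = field(default_factory=list)  # same order shape, many stores
```

The `*` lists in the diagram become `List[...]` fields, so one order object can carry any mix of fulfillment groups and stores.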

Do you get the sense of how complex our system is yet?? :P

That is why I prefer to use a more complex test architecture. It better mirrors actual use and because each base test is fully-featured, I don't need to maintain as many cases.

More next...
Last edited by krstcs on Thu Mar 10, 2016 8:52 pm, edited 2 times in total.
Shortcuts usually aren't...


Re: Best practice in Test Suite usage

Post by krstcs » Thu Mar 10, 2016 8:25 pm

This is my database structure:
[Attachment: DB Diagram.png (Database Diagram)]
This is the main screen of our TDM system:
[Attachment: TDM main screen.png (TDM Main Screen)]
TDM is built on Microsoft Lightswitch HTML, which makes it pretty easy to create screens based on data structures in the backing database. You need VS 2013+ to use it.

I've also attached a screenshot of a portion of my test project and one test suite for those following along at home... :D
[Attachment: Ranorex Project and Test Suite.png]
The problem that I have with Ranorex's data connectors is that they are designed to be fairly static at runtime. This won't work for what I do, because I need to have certain test cases run on one test scenario, and others on the next. The test flow needs to be reshaped at runtime by the data that gets passed in.

So, I came up with a solution. I edit the SQL query of the data connectors before each test case in a custom module that takes whatever values I am using to limit the data as inputs, and sets the static SQL query to use those values. NOTE: I exclusively use Stored Procedures in SQL Server in order to avoid having to change SQL in my code in several places. I have close to 300 stored procedures in SQL Server (not all of them are used for test suite pathing though...).

The data manipulators (as I call them) are standard code modules (not recording modules), and their Run() method looks like this (I usually use a library call to do this, but...):


int ID = 3;
string dataCacheName = "MySqlDataConnector";

DataCache dc = DataSources.Get(dataCacheName);

((SqlDataConnector)dc.Connector).Query = "SELECT * FROM MyTable WHERE ID=" + ID.ToString();

// or you can do this with SPs:

((SqlDataConnector)dc.Connector).Query = string.Format("EXEC MyStoredProcedure {0}", ID);

dc.Load(); // you need to do this to ensure the data is properly loaded after the change

// my overloaded library methods look like this:
public static void SetupSqlDataConnector(string dataCacheName, string queryString) {
    DataCache dc = DataSources.Get(dataCacheName);
    ((SqlDataConnector)dc.Connector).Query = queryString;
    dc.Load(); // reload here too, so callers don't have to remember it
}

public static void SetupSqlDataConnector(string dataCacheName, string queryFormat, params object[] paramsArray) {
    SetupSqlDataConnector(dataCacheName, string.Format(queryFormat, paramsArray));
}
You can see these in the screenshot. They are usually named "Set_<dataCacheName>[_for_<inputParameter>]". Naming is very important (the part in [] is optional, but can be nice when the variable needed differs from one module to the next, even though both modules manipulate the same structure). And you should always place these right before the test case/data connector they are manipulating.

So, that's it. That's all there is to it... Pretty easy, right? :D

Really, I hope it helps you guys out.

One last thing. If you go this route, it is vitally important that you (a) have someone who really understands SQL and DB design and (b) keep your modules as small as possible and name them according to what they do (you can always use module groups if some modules are always together, but it is very hard to split a module that is too big once you are already using it).

For example, I have a module named "Click_OK". Take a wild guess what it does. I'll even give you 2 chances... :D

OK, so that was a trick question. That module actually clicks the OK button only if it is there (it checks Exists() first). And my repo item is: OKButton -> \\button[@text='OK' and @visible='true' and @enabled='true']. This means I can use it anywhere: even if more than one OK button is visible, it will only click the one that is also enabled.

"Click_No" is simpler. It just clicks the No button, but again, the repo item is setup similarly to the OK button.

What about text entry, let's say for a username? "Enter_Username" does just that. It does NOT click any buttons; it just enters the given username. Note that the variable it takes is also named "Username", and the data column in the stored procedure is also called "Username", which makes auto-binding very simple. One thing I do when entering text, though, is to click the box, hit ctrl-a, hit del, enter the given text, then validate the text is correct. This ensures the text is exactly what I want every time, just in case the field wasn't cleared. If you don't need that guarantee, you can skip the ctrl-a/del steps.
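The click / ctrl-a / del / type / validate sequence looks like this against a fake text field (plain Python sketch; the real module drives the UI through Ranorex keyboard actions):

```python
class TextField:
    """Fake UI text box standing in for the real control."""
    def __init__(self, text=""):
        self.text = text

    def clear(self):
        self.text = ""  # ctrl-a, del

    def type_text(self, value):
        self.text += value  # typing appends, which is why clearing matters

def enter_text(field, value):
    field.clear()                  # ctrl-a, del
    field.type_text(value)         # enter the given text
    # validate step: the field must now hold exactly the intended value
    assert field.text == value, "text in field does not match input"
    return field.text
```

Without the clear step, typing into a field that still holds a stale value would concatenate old and new text, which is exactly the failure the validate step catches.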

So, is that enough info for now?? LOL

If you have any questions after reading this, let me know! If you don't, you probably didn't finish reading it... I wouldn't blame you... :D
Shortcuts usually aren't...


Re: Best practice in Test Suite usage

Post by krstcs » Thu Mar 10, 2016 8:42 pm

OH, let me add:

I have these tests that run, in this order, on 3 different environments, every night:

1. CDC Purchase Regression (includes custom closet orders, but no CAD tests... YET!! :D )
2. POS Purchase Regression (orders from CDC can be suspended and picked up in POS)
3. CDC Change Order Regression
4. RF Pull Order (this actually uses a version of the software on our RF "guns" that the stores use to scan items for shipments and fulfills the orders by simulating getting items and putting them in the "bags" for the customer)
5. POS Return Regression (Returns can only be done in POS since CDC doesn't have a cash drawer)

This ordering and running is handled by Jenkins, which, again, pulls data from the DB to see which stores have orders/returns for each test above. Jenkins then runs orders by store across 8 VMs in parallel, with the stores that have the most orders/returns getting priority for the VMs over stores with fewer.
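That store-to-VM ordering amounts to a greedy "heaviest store first, least-loaded VM next" assignment. A sketch with made-up store counts (plain Python; the real scheduling lives in Jenkins):

```python
import heapq

def assign_stores(store_counts, vm_count=8):
    """Assign each store's work to the currently least-loaded VM,
    handing out the heaviest stores first."""
    # Sort stores by workload, heaviest first.
    stores = sorted(store_counts.items(), key=lambda kv: -kv[1])
    # Min-heap of (current load, vm index) so the lightest VM pops first.
    heap = [(0, i) for i in range(vm_count)]
    heapq.heapify(heap)
    assignment = {i: [] for i in range(vm_count)}
    for store, count in stores:
        load, vm = heapq.heappop(heap)
        assignment[vm].append(store)
        heapq.heappush(heap, (load + count, vm))
    return assignment
```

With counts like A=10, B=3, C=2 on 2 VMs, the heavy store gets a VM to itself while the lighter two share the other, which keeps the parallel run as short as possible.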

Each regression set sends an email ONLY if it fails (or if it is the first success after having failed).

The Ranorex report is published to the Jenkins web server as an HTML artifact of the job, so we always have all of the reports in HTML format for anyone to look at. This also means we don't have to take the extra step of converting to PDF because everyone has a browser.

The results are also published to our DB, including receipt copies for accounting to look at when needed. There is a whole other Lightswitch HTML project for that so the accountants only have to look at the data they need instead of wading through all of it.

OK, hand cramps and carpal tunnel are setting in... :D
Shortcuts usually aren't...


Re: Best practice in Test Suite usage

Post by gilbar16 » Fri Mar 11, 2016 1:43 am

Whoa! Thanks for sharing all this info. It's a little bit (or way too much!) for most of us to follow, though.

You sound like a former co-worker who set up similar stuff in QAPartner (now SilkTest) years ago.
Everything was fine as long as you could follow his coding, but when he left the company, oh boy. Slowly but surely, everyone went back to the normal way of writing scripts as described in the user manuals.

If someone else is doing things the way you are, it would be nice to know how well it works for mobile device apps (native, hybrid, web) running on Android, iOS, etc.

Good stuff!!!