Test every environment

DevOps dissolves barriers between development and operations that slow down delivery of working software. Value to customers is paramount, and DevOps redesigns roles and teams to optimize for value.

Another boundary ripe for re-examination is the one that confines testing to the “test” or “staging” environment: every environment deserves testing insight and results. Consider, among other potential gains, these three specific benefits you can realize by expanding your tests to more environments.

1. Corralling the “long tail”

A traditional model has a test environment as isolated as possible from production. The test environment has its own databases, users, authentication services, caching utilities, customer data, endpoint devices and so on.

However appealing that model sounds, I’ve never seen it strictly implemented. Every practical test department ends up with downscaled services, and most simultaneously share at least some services with production or development. If production has a farm of 70 servers specialized for image or document conversion, the test department inevitably budgets one or two servers for the same duty. Similarly, it’s common for production LDAP to host not only customer accounts but also test users.

These realities don’t destroy the value of testing. They do set limits to it. A deadlock that erupts only when the sixteenth server joins the load balancer will never appear in testing if testing is restricted to three hosts.

Smart enterprises recognize that test environments only mimic production imperfectly. How do they make the most out of testing, in the face of these limits?

One possibility is to enlist the cloud. Architecting applications to admit “on-demand” management allows testing to temporarily dial up its resources to match or even exceed the scale of production. If a certain kind of error only appears every terabyte or so, but it’s important to diagnose the error, then rent enough of the cloud to process a terabyte every ten minutes.
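The arithmetic behind that sizing decision is simple enough to sketch. The helper below is illustrative only, and the throughput figure (two gigabytes per server per minute) is a made-up assumption, not a benchmark:

```python
import math

def servers_needed(target_bytes: int, window_minutes: float,
                   per_server_bytes_per_minute: int) -> int:
    """How many on-demand servers does it take to push target_bytes
    through the pipeline within window_minutes?"""
    required_rate = target_bytes / window_minutes
    return math.ceil(required_rate / per_server_bytes_per_minute)

# Assumption: one server converts roughly 2 GB of documents per minute.
ONE_TB = 10**12
print(servers_needed(ONE_TB, 10, 2 * 10**9))  # 50 servers for a terabyte every ten minutes
```

Feed it your own measured throughput and the cloud bill for reproducing a once-a-terabyte error stops being a guess.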

The other possibility is for testing to colonize all environments. While it doesn’t have to be full-blown testing in production, production must at least have enough instrumentation to allow testing to perform its role. Different organizations brand this as “logging,” “observability,” “monitoring” and “synthetic transactions.”

Whatever the label, the environment must be rich enough to support the testing role. Among other requirements, this means capturing enough detail about incidents of interest to be able to reproduce them, whether in a safe corner of the production environment or in a specially constructed “asylum” with enough resources to exhibit the range of behaviors seen in production. A minimal entry, for instance, probably includes a timestamp, the line in a specific source running on a specific host, and the runtime error condition.
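A minimal sketch of such an entry, using Python's standard logging module (the format string and logger name are illustrative, not any particular organization's standard):

```python
import logging
import socket

# Capture timestamp, source file and line, and host in every record, so an
# incident observed in production carries enough context to reproduce it.
logging.basicConfig(
    format="%(asctime)s " + socket.gethostname()
           + " %(pathname)s:%(lineno)d %(levelname)s %(message)s",
    level=logging.INFO,
)

log = logging.getLogger("conversion")

try:
    1 / 0  # stand-in for a runtime error in the conversion pipeline
except ZeroDivisionError:
    # log.exception records the ERROR plus the full traceback.
    log.exception("document conversion failed")
```

The point is not this particular library but the fields: timestamp, host, source location and error condition are the minimum needed to replay an incident elsewhere.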

Synthetic transactions are a special kind of testing in production, in which designated test accounts can perform nearly every operation. When a test account’s order completes, the test products are not shipped, the sales are not counted in the profit/loss statements, and so on.
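One way to sketch that gate (the flag and function names here are hypothetical, not any particular platform's API):

```python
from dataclasses import dataclass

@dataclass
class Order:
    account: str
    total: float
    is_synthetic: bool  # marks orders placed by designated test accounts

shipped = []
revenue = 0.0

def complete_order(order: Order) -> None:
    """Run the full order pipeline, but divert side effects for test accounts."""
    global revenue
    # The synthetic transaction exercises every step up to this gate...
    if order.is_synthetic:
        return  # ...but nothing ships and nothing hits the P&L.
    shipped.append(order.account)
    revenue += order.total

complete_order(Order("real-customer", 49.95, is_synthetic=False))
complete_order(Order("qa-test-001", 49.95, is_synthetic=True))
print(shipped, revenue)  # ['real-customer'] 49.95
```

The design choice that matters is where the gate sits: as late as possible, so the synthetic order exercises the same code path as a real one right up to the irreversible side effects.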

2. Testing the testers

Return, for a moment, to the ideal model described above. Suppose it’s working perfectly, in some sense, and testing has its own copies of enough resources to model production. But there is still work to do: Someone must verify that the resources in production and testing not only correspond but actually match.

Perhaps that sounds mundane. How hard is it to check installations on servers in both environments for version numbers? Hard enough that it’s often fumbled, in my experience. Testing instruction generally focuses on a simplified theory of application development in one particular software language, which generally leads to discussions of JUnit, NUnit or Selenium. In real commercial situations, though, I often see more time going to configuration and integration than to coding problems.
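A sketch of that verification, given version inventories pulled from each environment (how you collect them, whether via SSH, an agent, or a CMDB, is left open; the component names and versions below are examples):

```python
def config_drift(prod: dict, test: dict) -> dict:
    """Report every component whose version differs, or that exists in only
    one environment, between the production and test inventories."""
    drift = {}
    for name in sorted(prod.keys() | test.keys()):
        p, t = prod.get(name), test.get(name)
        if p != t:
            drift[name] = (p, t)  # (production version, test version)
    return drift

prod = {"ruby": "2.5.0", "elasticsearch": "7.17.9", "nginx": "1.24.0"}
test = {"ruby": "2.4.2", "elasticsearch": "7.17.9"}
print(config_drift(prod, test))
# {'nginx': ('1.24.0', None), 'ruby': ('2.5.0', '2.4.2')}
```

Run a check like this on a schedule, or as a CI gate, and divergence surfaces as a report instead of as a mystery bug months later.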

Containers are no solution, either; while they can be a useful technique, they don’t eliminate requirements for asset management. Whether an Elasticsearch instance is configured by Ansible or Kubernetes or any combination of alternatives, it still takes engineering judgment to choose an appropriate version and configuration.

The only realistic conclusion is that testing departments at least need to be able to monitor production and development environments in order to verify configurations.

Wasn’t DevOps supposed to settle all this? DevOps commonly emphasizes a combination of continuous integration, continuous testing, and continuous deployment, so surely a DevOps organization quickly settles any configuration problems.

Not true! As much as continuous verification helps, it still can result in gaps when it’s verifying the wrong standards. Testing must have not only a full range of resources but a way to test that range across all environments. Otherwise, divergences will enter and grow until someone is asking questions such as, “Why are we doing all our tests with Ruby 2.4.2 when production is running 2.5.0?”

3. Boosting development

While DevOps has great techniques for eliminating the errors behind such questions, the techniques inevitably are applied according to the motivations and interests of the DevOps decision-makers. Most of them have backgrounds in development or operations and simply don’t understand all the benefits rigor in testing can bring them.

That’s good news, though! Elevate testing to full partnership with DevOps, rather than a subordinate role, and testing can boost consistency and uniformity across all environments. This has the potential to pay off directly for development; for instance, when testing enriches the acceptance tests of a continuous integration scheme, subtle timing errors might be caught automatically within a day, rather than having to wait until the end of a development cycle.
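For instance, a deadline-style acceptance test, here with a made-up operation and threshold, lets CI flag a timing regression the day it is introduced rather than at the end of the cycle:

```python
import time

DEADLINE_SECONDS = 0.5  # assumption: a latency budget the team has agreed on

def checkout(delay: float = 0.01) -> str:
    """Stand-in for the operation under test; the sleep simulates its work."""
    time.sleep(delay)
    return "ok"

def test_checkout_meets_deadline():
    start = time.monotonic()
    result = checkout()
    elapsed = time.monotonic() - start
    assert result == "ok"
    assert elapsed < DEADLINE_SECONDS, f"checkout took {elapsed:.3f}s"

test_checkout_meets_deadline()
print("timing acceptance test passed")
```

In a real suite the function under test and the budget come from the team; the pattern, measuring wall-clock time around the call and failing the build on overrun, is what testing contributes.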

Textbooks generally present coding examples to explain testing concepts. Architectural consistency sounds arcane, by contrast, and configuration uniformity looks mundane. Difficulties with consistency and uniformity, though, are neither rare nor trivial to solve. DevOps needs all the help testing can provide to see them in proper perspective and solve them in all environments in a unified fashion.

Start today with a concrete first step: What was the last error testing discovered after development thought it had finished a release? How can testing help fortify development’s continuous integration so such an error is caught before it reaches testing?

Now get going.

To explore the features of Ranorex Studio, download a free 30-day trial today, no credit card required.

About the Author

Cameron Laird is an award-winning software developer and author. Cameron participates in several industry support and standards organizations, including voting membership in the Python Software Foundation. A long-time resident of the Texas Gulf Coast, Cameron's favorite applications are for farm automation.
