The Missing Test Automation Skills

Jun 9, 2020 | Test Automation Insights

At one time I was working for a large company where upper management had very definite ideas about “testing” and “automation” and “code.” Some of them were quite reasonable. Some of them resulted in me, and others, pushing back hard on their assertions.

The responses I received, always from their proxies, ran something like “Have you read the book {big boss} talked about? If you have, you would understand his message.” In truth, I had read the book. I had a signed copy from the authors. I had their second book and was mentioned in that book. Twice. I knew the authors of that book. We had presented at the same conferences and had taken each other’s workshops. I also knew the people their work was built upon, whom they cited.

I responded with this information. It was suggested that I “read it again” so I would understand. To this day I have no idea what such a response meant.

The “new” job descriptions for people doing what I did were circulated. There was a stated need for fairly particular skills. This in itself made perfect sense. In order to write code to support better test automation, the job definitely required a certain level of expertise in writing code.

In the drive to focus on coding ability and “test automation,” the very particular set of skills needed to capitalize on those abilities was left out of the job description. Somehow, the skills needed to do good testing were abandoned in the process.

Over the last several years I have seen this trend grow ever more prevalent. There appears to be a disconnect.

On Automation

I am a fan of using automation and test automation tools. I want to use the best possible tools to address the needs and problems I am working on, to ensure the best possible outcome for the software I work on. I want to use tools that help me do good work.

I see more and more positions for “Automation Testers” that focus on the primary development language, characteristics of the stack, the toolset, the Git repository and so on. Many of these read like the job descriptions I was speaking out about a few years ago. I have been pushing back hard against these types of characterizations for some time. This might be why some folks consider me to be “anti-automation.”

Please, don’t misunderstand me. Those things can be important in some contexts, for many organizations.

To paraphrase the Wendy’s commercial from 1984: “Where’s the testing?”

When job descriptions, notices and search terms are forced into the easy-to-understand structure used for developers, there is a disconnect. When development skills are valued above all other skills and work accomplished in performance evaluations, there is a disconnect.

When the ability to write code is valued above all other skills and abilities, you are putting yourself, your product and your organization at risk. The skills and experience that come from testing are needed to make maximum use of coding skills. Ignore them at your peril.

Situational Awareness

I realize the description above does not apply to every person in senior management. This has little to do with the background or training the developers in those teams received. I do not believe it has anything to do with developers coming from “code camps” or from college and university programs.

I suspect one part of the problem might be a failure on many levels to understand the nature of Software Development as a whole. A case can be made that it is a failure in “situational awareness.” People are not aware of what the full picture is. They are focused on their piece and that is all.

Over the last many years, software development has become more specialized. Less attention is paid to broader skills. The idea of thinking of software as part of a larger whole, a collection of related tools, has dropped off, at least at the development level. The concept of “systems thinking,” that is, thinking about how each piece fits into the whole application, has declined.

To be sure, there is some recognition of this in most, if not all, development organizations. People developing software know at some level that their work is part of a broader product. However, this is more of a notion than a deep understanding. To help people focus on their particular area, many organizations have developed silos or cells around each team.

The individual teams know their part extremely well. They may have some understanding of how their piece interacts and relates with other components. The concept of an integrated entity is often remote.

You see, but you do not observe.

This is most easily observed when a problem is detected in production. Many teams will quickly look to make sure it was not their portion with the defect. To the customers using the software, however, the defect occurred in the broad application. They usually don’t care in the least which development team is responsible.

People will recognize problems exist. They can understand there are challenges. They might not understand the impact on them or their team. They might also not recognize that even though the problem was found in another team’s module, it was caused by an error introduced by their team.

This problem is compounded when managers of development teams have been moved up from the ranks of developers. While this is common enough, the conditions and views with which they developed professionally will carry forward with them into their new role.

What does this mean?

By focusing their work on developing software, with minimal testing, most developers turned development managers will carry that attitude forward. They will carry the view that testing is something other than their role. In my experience, this comes from the establishment of “test teams” independent of the development groups they support.

By advancing the concept of testing as a function completely independent from programming or development, the idea that testing is separate from software development, and can therefore be done as an independent “phase” by people not involved in creating the software, was given the opportunity to take root.

It did. That has contributed to this problem.

The Problem With Testing

Good Testing is not a box to check, no matter how it may appear. Indeed, “Good Coding” should be a comparable box if that were the case. Good Testing takes careful, considered work. It is crucial to understanding how your software product behaves before your customer gets their hands on it.

The issue for many organizations comes from splitting the concepts of “test” and “development.”

By making “testing” a subordinate activity to “development,” most “leaders” of software organizations have no idea what is involved in good testing. They do not understand the relationship between designing, writing and testing software.

There is another reason contributing to this.

Oftentimes, testers themselves do not understand what “good testing” is. There are testers who have been trained to “follow the steps in the cookbook,” and so they advocate that approach. If they make test cases that cover all of the requirements, then execute them, they are done. They have been trained to have an extremely narrow focus on what testing is.

These are among the first “tests” converted to an automation tool or platform. The reason is simple: once written, the code for automation can be executed time and again at little to no cost to the organization. The challenge is that these tests often yield the same value as “automated tests” as they did as “manual tests”: almost none.

If developers have done even a reasonable job of unit testing their code, these “tests” will likely give no new meaningful information.
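As a small, hypothetical sketch of this point (the `apply_discount` function and both tests are invented for illustration), consider a developer’s unit test alongside a “cookbook” test that was converted to automation. The converted test walks through scripted steps, but it checks the exact same behavior the unit test already covers:

```python
# Hypothetical illustration: both "tests" below verify the same behavior.
# apply_discount and the tests are invented for this sketch.

def apply_discount(price: float, percent: float) -> float:
    """Toy business rule: reduce a price by a percentage, rounded to cents."""
    return round(price * (1 - percent / 100), 2)

def test_unit_discount():
    # The developer's unit test: verifies the requirement directly.
    assert apply_discount(100.0, 10) == 90.0

def test_scripted_discount():
    # A "cookbook" test converted from a manual script:
    #   step 1: enter the price
    #   step 2: enter the discount
    #   step 3: confirm the total
    # More steps, same check: it produces no information
    # the unit test above did not already provide.
    price = 100.0                      # step 1
    percent = 10.0                     # step 2
    total = apply_discount(price, percent)
    assert total == 90.0               # step 3

test_unit_discount()
test_scripted_discount()
```

Both pass, and automating the second one adds execution speed but no new insight into the product, which is the trap the paragraph above describes.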

There is another extreme which is also a contributing factor.

Development managers and directors who are used to the idea of “cookbook testing” have specific expectations. These are based on their experience and what their peers talk about. The response from some testers and testing consultants? “That will never work.”

Instead of presenting alternative methods that map testing to business-centric needs or to goals and objectives, the discussion often ends there. The “alternatives” that do get presented usually fail to address the fundamental questions technical and corporate management need answered.

Testers and test consultants often make comments about “adding value.” They tend to be short on hard, serious information as to how to do this, or even what “value” looks like.

Without solid, viable alternatives, technical leadership will fall back to what they have experience with and what they can use to demonstrate progress to their leadership. In 2012 I wrote, “Testers are not seen as knowledge workers by a significant portion of technical and corporate management.” For many organizations, this is still the case.

A Possible Solution

Adding testing skills to those often listed in “test automation” positions might be simply stated: Identify what those skills look like. They need to be clear and concise. They need to be as well defined as the coding requirements. They need an organization-wide shared understanding.

This is not simple. It is a very tall order. For this to be possible, each organization’s technical and corporate management must have an understanding of what testing is. Furthermore, these leaders need a firm understanding of how said good testing benefits their organization.

There needs to be an understanding of what skilled, thoughtful testing is, and there needs to be an understanding of the “easy choices” and the hidden pitfalls they contain.

In The End

I have spent many, many years working to help people understand that good, thoughtful testing cannot be separated from good software development. Some organizations are more open to this idea than others. Often, this openness is the result of a near-catastrophic failure of their software products.

“Automation” by itself will not help your company, team or project avoid such problems. Test automation driven by informed planning and carried out by skilled people trained in good, thoughtful testing will help a great deal. It is up to testers to demonstrate what those skills look like and to convince others that those skills work.
