In software development, the days of building applications as one large monolith are long gone. Instead, teams build software in components. Sometimes the components are objects with unit tests; often the user interface is the sum of those components. But even with clean interfaces and distinct components, the only way to test the result from a customer perspective is as a whole. The web page or application is in many ways still a monolith, because it is built and deployed as one unit.
While we now build in components, too many teams still deploy as a monolith. To deploy components independently, they need to be compiled, tested, and versioned independently.
Let’s start with an example. Take a look at any popular e-commerce site. The website itself will consist of various pieces you are familiar with, such as a login link (that shows your name if you are logged in) and a shopping cart that shows the number of items in your cart. There will be a search button and several widgets, including things like what you have viewed recently, products recommended for you, upcoming sales, close-out deals and so on.
Each of these items is, in effect, a separate web page; the homepage is just a collection of these sub-pages at specific locations. If one of the services is down, the rest can render, and only the savviest of customers will even notice. The product page also assembles itself out of components: product image, product description, reviews, products other customers also purchased, customer questions and answers, and so on.
There is real functionality in these components. On a modern website, the product image may include videos too, along with the ability to select from many images, change the color of clothing, or mouse over for an expanded image.
Instead of one single build-and-deploy monolith, we can break the website up into a dozen little websites. Each can be built, tested and deployed separately. This reduces the build time for each component from hours to minutes, or less. Large e-commerce sites with over a hundred teams need to do this, but even the smallest of applications can benefit from it, too.
Crucial to successful “componentization” is versioning of those components. Here’s one way to attach version tags to components.
Building and Testing a Component-Based Application
The answer lies in managing versions at the application or microservice layer, that is, the largest group of packages that will be deployed together as a single unit. Version control systems like Git, Subversion and even Microsoft’s Azure DevOps Server (formerly Team Foundation Server) provide mechanisms to group builds, not just incrementally as new things are added, but logically, through a process called tagging.
Tagging is a popular way to structure the version control system and its components so that each component can be tested separately. At a high level, tagging involves these steps:
- Create release tags for all code and tests
- Check out, build and test each release tag independently per component
- Add release tags to external test repositories and perform tests after building
- Deploy code to the relevant environment when ready
- Retire tags after enough development has occurred
Let’s discuss each step.
1. Create release tags for all code and tests
In theory, the simplest way to build components separately is for each one to have its own place to store changes. This is where source control systems come into play. Subversion, TFS, and Git (commonly hosted on GitHub) are all examples of tools for managing source code changes. While each version control system has its differences, all of them accept and store changes as they are regularly added. The source control system tracks those changes (which lines in which files have new or updated code, and when files are added or removed), making it possible to retrieve the code as it existed at any point in time.
A commit is the command that records a set of changes to the source control system, where it is assigned its own unique identifier. That identifier allows the code base to be checked out or cloned exactly as it existed at the time of any specific commit. The collection of commits managed by a source control system is called a repository: a container inside the version control system that holds all the details about what has been committed. This allows each component or application to be developed from its own repository, which in turn supports development, testing, and delivery.
All of the subcomponents of a small application can be contained within a single repository. As the number of components increases, however, the sheer burden of managing everything within one repository makes that approach impractical. In addition, the components will likely share code libraries or need to interact. Creating a separate repository for each component, with its own specific responsibilities, clears up the build, test, and maintenance headache by allowing those activities to happen in isolation.
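As a hypothetical example of that separation for the e-commerce site described above, each component might live in its own repository and be cloned independently (the repository names here are placeholders):

$ git clone https://github.com/username/login-component.git
$ git clone https://github.com/username/shopping-cart-component.git
$ git clone https://github.com/username/product-search-component.git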
The next step is to manage each component through a series of tags. In a modern version control system like Git, a tag is a named pointer to a specific commit, and it can be checked out and worked with much like a branch. A tag might be, for example, the component name plus the date of the next planned deploy. Programmers can build and test a component on a tag and version the tests right along with that tag. When the code is deployed to production and other services can rely on it, the code is also merged into master.
Versioning the tests along with the components makes it possible to roll back a version or push a hotfix to production and then run some or all of the tests from that previous version. Without this ability, a hotfix built from last week’s code would run against today’s tests and fail, because the expected behavior would not occur.
Most version control systems have a tag feature. It’s a way to associate a specific version with a name, like the sprint name or a feature name. Modern build systems like Jenkins, TeamCity, and Travis CI allow “build kickoff” to happen for any branch or tag when a programmer adds new code. This allows previous tests to act as a guard against potential regression before that code is merged into master.
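As a minimal sketch, here is how a team might create and publish such a tag in Git. The tag name and commit message are hypothetical and follow the date-based convention used later in this article:

$ git tag -a v2021.02.28 -m "Shopping cart component, Feb 28 release"   # annotated tag on the current commit
$ git push origin v2021.02.28                                           # publish the tag so the build server can pick it up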
2. Check out, build and test each release tag independently per component
Package management tools like the Node.js Package Manager (NPM) do a good job of loading or updating to the right version of a dependency based on semantic versioning rules. In C# the equivalent is NuGet; in Ruby, RubyGems. Tests bundled with the code in the same repository get versioning for free: the latest check-in holds the latest code for that version along with the corresponding testware.
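For example, with NPM a component can declare a semantic-versioning range when installing a dependency. The package name below is hypothetical:

$ npm install shopping-cart-widget@^2.3.0 --save   # accepts any 2.x release at or above 2.3.0, but not 3.0.0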
A simple checkout and build for a release on February 28, 2021, for one component could look like this:
$ git clone https://github.com/username/component-repo.git
$ git checkout tags/v2021.02.28
$ npm run build
$ npm run test
For Git, this clones the repository, including all of its tags and branches, and then switches the working copy to the version tag provided. NPM can then run the build and the tests as separate steps, as shown above.
3. Add release tags to external test repositories and perform tests after building
Now all you have to do is add these commands to the CI build script for the component or service, and the same steps will be performed each time the tag is updated. That’s pretty simple for tests at the unit, behavioral or component integration levels, or quadrant 1 of the Agile Testing Quadrants.
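A CI job might wrap those same commands in a small shell script, with the release tag passed in as a variable. This is only a sketch: the repository URL and default tag are placeholders, and the npm ci step assumes a package-lock.json is present.

#!/bin/bash
set -e                                    # stop on the first failing step
RELEASE_TAG=${RELEASE_TAG:-v2021.02.28}   # tag to build, supplied by the CI server
git clone https://github.com/username/component-repo.git
cd component-repo
git checkout "tags/$RELEASE_TAG"
npm ci                                    # install dependencies from the lockfile
npm run build
npm run test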
That simplicity is harder to achieve for tests that sit in the other quadrants. These tests often exist outside the component and may live inside another component that consumes or integrates with it. They frequently end up in separate or shared test-only repositories that run only after the subcomponent is built, or after the service is deployed to a testing server.
These tests tend to cover more of the system at higher levels. Examples include exploratory tests (which require much more thinking to automate than the tests written during traditional test-driven development), end-to-end or scenario tests, and performance, accessibility and security tests. They are often run after the application is compiled, and in the case of performance and end-to-end tests perhaps only after it has been deployed. Tests like these cover systems and subsystems, workflows, and interactions between pages and screens, and may exercise authorization and access levels such as different permissions and roles within a single test.
Thankfully, with release tagging, even these external tests can be pinned to the relevant release tag and run after the build.
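One way to honor that pinning, sketched here with hypothetical repository and script names, is to check out the external test repository at the same release tag once the component has been built and deployed to a test server:

$ git clone https://github.com/username/component-e2e-tests.git
$ cd component-e2e-tests
$ git checkout tags/v2021.02.28   # same release tag as the component build
$ npm run test:e2e                # assumes the test repository defines a test:e2e script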
To put all that a different way: the build system can build a component and run the component tests, then put the scaffolding in place to build the entire system and run any full-system end-to-end tests. This can happen for every build or, if the suite is too large, overnight. Some test automation tools, like Ranorex, can organize suites so that a larger set of tests runs overnight and a smaller set, just those tests related to a component, runs for any release.
With Mocha, a JavaScript test framework, the test script added to the package.json file might be as simple as:
"mocha --recursive "./tests/*.test.js"
From the command line, an even simpler form uses the globstar (**) wildcard to traverse the directory tree, like this:
$ mocha "./tests/**/*.js"
For Ranorex, a test suite targeting your shopping cart integration can be compiled into a ShoppingCartIntegration.exe file and run with:
$ ShoppingCartIntegration.exe
Additional configuration for the tests can then be passed in via command line arguments.
Tagging tests, associating them with a release, and then associating them with a specific component is powerful because you can run just the right tests for any release. For example, instead of keeping each category in a distinct directory under tests, you could include the category in the test file name and use patterns such as “src/**/*.integration.test.js” or “src/**/*.contract.test.js” as the criteria for which tests to run.
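With Mocha, for instance, those naming conventions translate directly into command-line globs. The patterns below are illustrative:

$ mocha "src/**/*.integration.test.js"   # run only the integration tests
$ mocha "src/**/*.contract.test.js"      # run only the contract tests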
4. Deploy code to the relevant environment when ready
Once a commit has been built, run and tested by tools, it is ready to be promoted. Some teams promote the code to test servers for humans to explore. Once the build gets a thumbs-up, the code is tagged again and pushed to a system integration test (SIT) environment.
Once that passes, the code is tagged again and promoted to production. With a component approach, some teams can test a component and promote it to production, perhaps automatically. Examining logs of the test run history can be extremely valuable in narrowing down when a problem was introduced, and even by what change.
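A minimal sketch of that promotion flow, assuming the date-based tag names used earlier plus an environment suffix of the team’s choosing, might look like this:

$ git tag -a v2021.02.28-sit -m "Promoted to system integration test"
$ git push origin v2021.02.28-sit
$ git tag -a v2021.02.28-prod -m "Promoted to production"
$ git push origin v2021.02.28-prod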
5. Retire tags after enough development has occurred
As a bonus, issue trackers can also reference release tags, making it easier to trace bugs to their fixes and tests in source control. Tags can last a long time, but if a component sees continued development over a long period, a name like “tags/v2021.02.28” loses its meaning. Beyond recording the release date, its value diminishes as more tags accumulate. That makes cleaning up tags a good idea, keeping development easier as new features are added.
Retire tags once enough newer versions are available and the history on those tags has already been absorbed into other releases. Just remember to check all the available tags in your component, service and test repositories, as well as your issue tracker, before you retire them.
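Retiring a tag in Git is a matter of deleting it locally and on the shared remote. The tag name below is the example from earlier:

$ git tag -d v2021.02.28                 # delete the tag locally
$ git push origin --delete v2021.02.28   # delete the tag from the shared remote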
Technical Alternatives
To understand component tagging adequately, you need to recognize the limits of its use. Some practitioners argue that tagging is obsolete and that feature flags are a more appropriate mechanism for component management in the current decade.
A full comparison of release tagging and feature flags has yet to be written, although advocates on both sides have already weighed in on several prominent aspects. What you most need to know for now is that both release tagging and feature flags are in widespread use in commercial computing as of this writing, and will be for a long time to come.
Another common confusion about tooling and practices is to identify release tagging with GitHub, or even with Git more generally. It’s true that GitHub has sponsored considerable documentation on release tags. However, other source code management (SCM) systems, including Subversion, Perforce, and many others, also support release tagging.
Is GitHub-based release tagging the only way to manage component release successfully? Far from it. Is Git-based release tagging a widely-used technique for management of component release? Absolutely.
Making Sense of the Complexity
Over time, a large software build becomes slow and expensive to maintain. It becomes brittle. Continuous delivery becomes impossible. Automated end-to-end system tests that were designed to decrease the risk of a bad merge actually slow down feedback and create maintenance work.
Organizing the software in terms of components, managed by tags, can transform this complexity into an opportunity. Teams can build, test and release components on a tag, and they can even share code, working on the same code libraries at the same time but on different tags.
Not using Node.js or Mocha? Don’t worry. Most programming languages and unit test runners provide mechanisms for labeling tests and suites, and the release tag method is available on most modern source control tools, which means you can continue to apply the same technique as your choice of language and components evolves over time.