Pipeline

When software was delivered as compiled, executable files, much of the process of building it was relegated to dedicated build engineers. When I worked at Cisco Systems in the 1990s, we had a dedicated engineering team focused entirely on the build tools and on creating the final distribution that would be made available to our customers. Most of us on the engineering team had little interaction with the build process: the software was built, and then we’d download the necessary images to the machines we would test with.

Today, that process has been opened up considerably by tools that allow anyone on the team to create images for testing. More to the point, with continuous integration tools such as Jenkins, it is possible to make a small change to an application and have that change trigger a build, run the tests and push the result to a server with minimal human interaction.

Continuous integration and continuous delivery (CI/CD) can seem like mystical ideas, but in truth, they are pretty straightforward. At the center of these practices is the concept of “the pipeline.”

What is a CI/CD pipeline?

The CI/CD methodology hinges on the idea that each step in the process of building software takes place after a previous step has finished. To make it possible for software to be deployed quickly and to apply small changes to test or deliver to customers, there are a variety of steps that can be automated and scheduled to run in a sequence. At their heart, CI/CD tools are scheduling tools that execute bits of code at desired times.
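That scheduling idea can be sketched in a few lines of shell: each step is just a command, and the sequence stops as soon as one step fails. The step names below are placeholders for illustration, not any particular tool's actual jobs.

```shell
#!/bin/sh
# A pipeline at its simplest: commands chained so that each step runs
# only if the previous one succeeded. Step names are placeholders.
run_step() {
    echo "running: $1"
    # a real step would invoke a build, test, or deploy tool here
}

run_step configure && run_step test && run_step build && run_step deploy
```

A CI/CD server adds the scheduling, triggering and reporting around this core, but the fail-fast sequencing is the heart of it.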

It’s in this framework that a software delivery pipeline is possible. Using Jenkins as an example, in its earlier iterations, I could create a variety of what are called “freestyle jobs” that allow for a set of steps to be executed. By chaining a number of freestyle jobs together, I can create a software delivery pipeline that allows me to commit a code change to a branch and then, from that commit or push, start the process of configuring, testing, building and deploying software to a target machine.

What do I need to build a pipeline?

The first thing to realize is that while most pipelines share some standard processes, there isn’t a single approach that will work for everyone. The benefit of the pipeline model is that each organization can choose the stages that matter most to it and tune them to be most effective for its own needs.

What works for my environment may not be practical somewhere else, but an example pipeline approach in my environment looks something like this.

The first step is the merge request builder. When I push changes to a branch that has had development work done on it, a hook is triggered that compares the branch to the original trunk. The system recognizes that a change has been made, and that in turn triggers the build process.
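The comparison a merge request builder makes can be seen with plain git. This self-contained sketch creates a throwaway repository, commits on a trunk and a branch, and uses a three-dot diff to list what the branch changed since it diverged; the repository, branch names and file are invented, and it assumes git is installed.

```shell
#!/bin/sh
# Throwaway repo demonstrating the trunk-vs-branch comparison a
# merge-request builder hook performs. All names here are examples.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email ci@example.com
git config user.name ci

echo v1 > app.txt
git add app.txt
git commit -qm "trunk work"
trunk=$(git rev-parse --abbrev-ref HEAD)   # master or main, per git version

git checkout -q -b feature/change
echo v2 > app.txt
git commit -qam "branch work"

# Three-dot diff: everything committed on the branch since it left trunk.
changed=$(git diff --name-only "$trunk...feature/change")
if [ -n "$changed" ]; then
    echo "changed files: $changed"   # a hook would start the build here
fi
```

A real hook would be registered with the git host or CI server, but the decision it makes is exactly this diff.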

The second step in our environment reflects the fact that we use Docker to run tests in parallel. For that to happen, a sequence of steps brings up a Docker master server, which in turn creates a number of Docker slaves. By configuring these slaves, we create the space where the discrete Docker containers can be duplicated and run.
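As a dry-run sketch of that shape, the function below prints the commands that would bring up one coordinator and N identical agents; the image and container names are invented, and `docker_run` only echoes rather than executing, so no Docker daemon is needed to follow along.

```shell
#!/bin/sh
# Dry-run sketch of step two: one coordinator ("master") plus N identical
# agents ("slaves," in this article's terms). docker_run only prints the
# command it would issue; image names are made up for illustration.
docker_run() { echo "docker run -d --name $1 $2"; }

bring_up_grid() {
    n=$1
    docker_run ci-master example/master-image
    i=1
    while [ "$i" -le "$n" ]; do
        docker_run "ci-agent-$i" example/agent-image
        i=$((i + 1))
    done
}

bring_up_grid 3
```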

The third step is what we call the “Docker build” phase, where we bring up the master Docker image, create our test environment from the branch under development, and set up the application with all the information and test data needed to run the suite of tests.
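In dry-run form, that phase amounts to building an image from the branch and seeding it with test data. The branch name, image tag, build argument and seeding script below are all hypothetical; `run` prints the command instead of executing it.

```shell
#!/bin/sh
# Dry-run sketch of the "Docker build" phase. Everything named here
# (branch, tag, load-test-data.sh) is invented for illustration.
run() { echo "would run: $*"; }

BRANCH="feature/my-change"
TAG=$(echo "$BRANCH" | tr '/' '-')   # slashes are not valid in image tags

run docker build -t "app-test:$TAG" --build-arg "BRANCH=$BRANCH" .
run docker run "app-test:$TAG" ./load-test-data.sh
```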

The fourth step allows us to take all the tests that are tagged for a given sequence and divide them up based on the number of Docker slaves we have configured. This then allows us to create copies of the Docker image and give each copy a unique ID so that the necessary tests can be run on them. This results in several hundred Docker containers being created and running tests. Through this process, Jenkins monitors the tests and looks for failures. If none are seen (meaning all tests pass), then we go to the next step.
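The dividing-up in step four is just a distribution problem. One simple scheme is round-robin assignment, sketched below; the test names and agent count are examples, and a real runner would balance by historical runtime rather than by count.

```shell
#!/bin/sh
# Sketch of splitting a tagged test list across N agents, round-robin.
split_round_robin() {
    n=$1   # number of agents; test names arrive one per line on stdin
    i=0
    while read -r name; do
        echo "agent-$((i % n)) $name"
        i=$((i + 1))
    done
}

printf 'test_a\ntest_b\ntest_c\ntest_d\ntest_e\n' | split_round_robin 2
```

Each agent then runs only the tests assigned to its ID, which is what lets several hundred containers chew through the suite at once.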

The fifth step is to create the actual branch image and resulting artifacts necessary to install the software effectively on a target machine.

The sixth step is to start up a final target machine — in this case, a dedicated system not within a Docker container. Depending on the system, we can perform a new install or an upgrade based on how the target machine is set up and which flags are configured.
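The install-or-upgrade decision in step six can be as simple as one flag on the deploy entry point. The flag and messages below are invented placeholders for whatever installer the target system actually uses.

```shell
#!/bin/sh
# Sketch of step six: the same deploy entry point performs a fresh
# install or an upgrade depending on how the target is flagged.
deploy() {
    if [ "$1" = "--upgrade" ]; then
        echo "upgrading existing installation"
    else
        echo "performing fresh install"
    fi
}

deploy --upgrade
deploy
```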

The seventh and last step is a cleanup phase that makes sure all systems are turned off and that we are not running build or test systems needlessly. This is important because we use a cloud-based solution, and leaving those machines running is, effectively, wasted money. Making sure that everything is cleaned up is vital to the process.
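One way to make cleanup dependable is to register it with a shell trap, so teardown runs even when an earlier step fails. The teardown body here is a placeholder echo standing in for whatever actually stops the cloud machines.

```shell
#!/bin/sh
# Sketch of step seven: register cleanup so it fires on any exit,
# pass or fail. The teardown command is a placeholder.
cleanup() {
    echo "stopping build and test machines"
}
trap cleanup EXIT

echo "running pipeline steps"
# ... real build and test steps would go here; if any of them aborts
# the script, the trap still runs cleanup on the way out
```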

In our environment, these are currently all individual projects configured in Jenkins, with each one triggering the next upon successful completion. This is best described as a “sequential pipeline,” where each process is configured with a dependency on an upstream project and an order of operations.

In the latest Jenkins implementations, the notion of a pipeline has been formalized: with a Jenkinsfile, the individual steps described above can be defined in one place and run as a single process.
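As a rough illustration, a declarative Jenkinsfile collapsing the steps above might look like the following; the stage names and shell scripts it calls are hypothetical stand-ins, not the actual jobs from my environment.

```groovy
// Hypothetical Jenkinsfile: the sequential pipeline as a single definition.
// The scripts named here are placeholders for the real steps.
pipeline {
    agent any
    stages {
        stage('Build test environment') { steps { sh './bring-up-docker.sh' } }
        stage('Run tests')              { steps { sh './run-split-tests.sh' } }
        stage('Build artifacts')        { steps { sh './build-image.sh' } }
        stage('Deploy to target')       { steps { sh './deploy.sh' } }
    }
    post {
        always { sh './cleanup.sh' }   // cleanup runs whether stages pass or fail
    }
}
```

The `post { always { ... } }` block is what replaces the hand-chained cleanup job: Jenkins guarantees it runs regardless of how the stages ended.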

Care and feeding of the pipeline

The purpose of CI/CD — and creating a delivery pipeline in the first place — is to help eliminate bottlenecks and busywork so that the primary focus can be placed where it matters the most: creating features, testing features and delivering features to customers.

By scheduling and sequencing these steps, we are able to add features iteratively and replicate the process reliably. More to the point, each new feature can be checked in conjunction with the rest of the system so we can make sure we have not introduced any defects — or, more specifically, that the changes we have made will not cause other sections of the code to fail.

This is a big benefit compared to trying to compile a system with multiple changes at one go. Still, it has a price: maintaining the pipeline and expanding it as necessary. With new features come new tests. With new tests comes the need for planning for capacity so that tests can be run effectively. Scripts need to be maintained to make sure that the resources are available and running when needed. New tests need to be checked in and run in conjunction with the CI/CD tools to make sure they are working effectively. This all takes time, and every part needs to be maintained.

The CI/CD pipeline will depend on a variety of external tools, such as plugins in the Jenkins example, and these need to be evaluated from time to time and updated and tested with the existing system. However, one other aspect of having a CI/CD pipeline is that, once the steps are codified, they can be maintained by a variety of people and are not just left in the realm of the developers.

The process of making and operating a software delivery pipeline may seem complicated at first, but like any pipeline, it is created step by step and section by section. Each organization will have its own requirements and its own ways in which the pipeline will best be configured. Pipelines can be as short or as long as necessary, so begin by determining what steps you are already performing and see which of them you can automate first. Each pipeline starts with a first section, so if in doubt, start there.


About the Author

Michael Larsen has, for the better part of his 20+ year career, found himself in the role of the “Army of One” or “The Lone Tester” more times than not. He has worked with a broad array of technologies and industries, including virtual machine software, capacitance touch devices, video game development, and distributed database and web applications. Michael currently works with Socialtext in Palo Alto, CA.
