Cloud-based testing

Multi-phase GUI testing is an approach to testing under Continuous Integration/Continuous Delivery (CI/CD) in which the software is tested in a particular runtime environment depending on the phase of the software development lifecycle. As we demonstrated in previous articles, in the Development phase, code and tests were copied directly into the Jenkins server running the CI/CD process. In the Staging phase, code and tests were copied into the Jenkins server controlling the deployment build. However, instead of running the tests directly against the application’s source code, the code was built into a Docker container and the tests were run against that container.

In this final phase, UAT, the software under test will be containerized as it was in the Staging build. But, instead of running the container directly on the Jenkins server, we’re going to deploy the container image to Google Cloud’s Container Registry. Then, a container based on the image in the Container Registry will be deployed to a Kubernetes Cluster running on Google Cloud and exposed to the Internet from there. The GUI tests were created in Ranorex Studio and are designed to run as standalone tests from within a Windows environment. The Ranorex GUI tests will be built and run from a Jenkins Windows slave server running on Azure. Figure 1, below, illustrates the overall test architecture for the UAT phase.

Figure 1: Testing in the UAT phase is conducted against an instance of the demo application running in a Kubernetes Cluster on Google Cloud.

In previous articles, we demonstrated how to download code and associated tests from GitHub and run both in Jenkins on a Windows Slave. This article describes the steps necessary to configure the Jenkins server running under Linux to containerize the demo application’s source code that resides in GitHub. Also, this article shows you how to use Jenkins to deploy the container to a Kubernetes Cluster running in Google Cloud. Executing the Ranorex GUI tests from a Jenkins instance running on a Windows Server on Azure is covered in the final article in this series.

Working with Kubernetes on Google Cloud

What makes this phase of deployment and testing different from the previous testing phases? The previous phases deployed the application directly to a Windows slave machine attached to Jenkins. In this UAT phase, the master Jenkins server running on Linux will create a container image of the demo application using source code downloaded from GitHub. Then, the Jenkins server deploys the container image to Google’s Container Registry. Once the image is in the Google Container Registry, additional code in the Jenkins build creates a Kubernetes deployment using the container image in the Container Registry. In addition, the Jenkins server creates a Kubernetes service that is bound to pods that are part of the Kubernetes deployment.


Hint

You will need to have an account on Google Cloud. You can create a Google Cloud account on this web page. You will need a Gmail email address to complete the signup process.

Google Cloud and Kubernetes play a central role in this Jenkins build. Thus, before the build can work, two things need to be in place. First, Jenkins needs to have the credentials required to access both the Google Container Registry and the Kubernetes Cluster running on Google Cloud. Second, the Kubernetes Cluster on which the demo application will run needs to be created in Google Cloud. The following sections describe the steps necessary to meet these two requirements. We’ll start with creating the credentials that the Jenkins build needs to access Google Cloud.

Creating the Service Account and Downloading Access Credentials

Giving Jenkins access to Google Cloud requires that we create a service account under the Google Cloud project dedicated to this build and deploy process. The service account will have the rights to work with the Google Container Registry and the Google Kubernetes Engine. Also, as part of the service account creation process, you’ll download a JSON file that contains the credentials that Jenkins will use to access the Google Cloud resources associated with the build. This JSON file needs to be stored in a safe place. You’re going to add this information to Jenkins as a secret credential using the Jenkins Credentials Binding Plugin. To create a service account for the Jenkins build on Google Cloud, perform the following steps: Log in to your Google Cloud account and, at the top of the Google Cloud Console web page, select the project into which you want to add the service account, as shown in callout 1 of Figure 2 below. Then, enter the term IAM in the search text box as shown in callout 2. Select IAM & admin from the dropdown that appears.

Figure 2: You create a service account under the IAM & admin service (2), according to a project (1).

Once you are in the IAM & admin section of your Google Cloud account, click the CREATE SERVICE ACCOUNT link at the top of the page as shown below in Figure 3.

Figure 3: Create a service account from within the IAM & admin page.

Upon clicking the link CREATE SERVICE ACCOUNT, you are presented with the Create service account web page. Enter the name of the service account in the associated text box as shown below in Figure 4. In this case, we defined the name of the service account as feelingtrackerserviceaccount. A Service account ID is created automatically when you declare the Service account name. The Service account ID will be in the form of an email address that is created by Google Cloud as part of the service account creation process.

Figure 4: A Service account ID is created automatically by Google Cloud when you declare the Service account name.

Click the CREATE button. The Permissions page will appear. We’ll use the Permissions page to assign the roles that have the permissions that enable Jenkins to interact with Google Cloud. You need to assign three roles to the service account. These roles are Kubernetes Engine Admin, Deployment Manager Type Editor, and Storage Admin. To add roles:

  • Click the link ADD ANOTHER ROLE, as shown below in Figure 5
  • Select a role from the dropdown that appears.

When all three roles have been defined, click the CONTINUE button.

Figure 5: Defining roles is the way a service account is granted permissions

After you assign roles, you create the key that will generate the JSON file that you will save for later use. Click the CREATE KEY button, as shown below in Figure 6, callout 1. You’ll be presented with a dialog that allows you to declare the Key type. Select JSON and then click the CREATE button, as shown at callout 2 of Figure 6. The JSON file will download to your computer. Put this file in a safe place. You’ll need it later.

Figure 6: You’ll download a JSON file with access credentials upon completion of the service account creation process.

Now it’s time to create the Kubernetes cluster that will serve as the target location for the demo app.

Creating the Kubernetes Cluster

Go to the Search text box at the top of the Google Cloud Console web page and search on the term kub. A choice for Kubernetes Engine will appear in the drop-down. Select it, as shown below in Figure 7.

Figure 7: The Google Cloud Console search features allow you to access services such as the Kubernetes Engine quickly.

Once at the Kubernetes Engine page on the Google Cloud Console, click the link CREATE CLUSTER as shown in Figure 8, below.

Figure 8: The Google Cloud Console makes creating a Kubernetes Cluster a straightforward undertaking.

Clicking the CREATE CLUSTER link displays the page in which you’ll create the Kubernetes Cluster that we’ll use in the Jenkins build. Create a name for the Kubernetes cluster and declare a zone in which the cluster will run if you don’t want to use the default. Also, declare the number and type of virtual machines that will be worker nodes in the Kubernetes Cluster, as shown below in Figure 9.

Figure 9: Be careful when creating a Kubernetes Cluster. Misdeclaring the number and type of virtual machines can incur an unnecessary expense.

After the declaration process is completed, click CREATE. It might take a few minutes for Google Cloud to create the Kubernetes Cluster, so be patient. When the process is complete, the new cluster will appear in the Clusters section of the Kubernetes Engine page, as shown below in Figure 10.

Figure 10: Kubernetes clusters are listed in the Clusters section of the Kubernetes page.

Now that the service account for the Jenkins build and the Kubernetes cluster have been created, we’re ready to get to work on creating the Jenkins build. But, before we do, let’s take a moment to review the concept of a container repository and how Google Cloud implements one in the form of the Container Registry.

Using the Google Cloud Container Registry

Sharing container images has become a common practice in modern software development. The standard mechanism for sharing container images is the repository. Repositories can be public or private. The industry standard that has evolved is Docker Hub, but there are others. In fact, Google Cloud has a container repository of its own. It’s called the Container Registry. Figure 11, below, shows the Container Registry page for the Google Cloud project related to this article.

Figure 11: Container images are stored and organized in Google Cloud under the Container Registry service.

The important thing to understand about the way Google Cloud implements container repositories under the Container Registry is that each Google Cloud project has a dedicated container repository that is defined as a file system location, for example, us.gcr.io/feelingtrackeronthecloud. Also, there is a subdirectory within each file system location dedicated to a given container image. For example, the container image, feelingtracker, has an associated file system location, us.gcr.io/feelingtrackeronthecloud/feelingtracker. This container directory in the file system location stores the various versions of the container image as each version is uploaded into the Container Registry. Understanding how the Google Container Registry stores container images according to a file system location is important when we deploy container images to Google Cloud as part of the Jenkins build.
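The path convention described above can be sketched in a few lines of shell. This is illustrative only; the project ID shown is the example project used in this article, so substitute your own:

```shell
# Illustrative only: composing the Container Registry path for an image.
# PROJECT_ID here is the example project ID used in this article.
PROJECT_ID="feelingtrackeronthecloud"
IMAGE_PATH="us.gcr.io/${PROJECT_ID}/feelingtracker:v1"
echo "${IMAGE_PATH}"
# → us.gcr.io/feelingtrackeronthecloud/feelingtracker:v1
```

The registry hostname (us.gcr.io), the project ID, the image name, and the version tag together uniquely identify one version of one image in the project’s repository.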

System Requirements for Running Docker and kubectl on the Jenkins Server

The master Jenkins server needs to have certain programs installed on it in order to support deploying the demo code to Google Cloud. These programs are:

  • Docker: The program that will create the container image that gets deployed to Google Cloud
  • Google Cloud SDK: The SDK contains the command line programs that allow the Jenkins build to interact with Google Cloud
  • kubectl: The command line program for interacting with the Kubernetes Cluster running in Google Cloud

The official documentation for Docker, the Google Cloud SDK, and kubectl provides instructions for installing each requirement on the Jenkins server.
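Once installation is complete, a quick sanity check confirms that each tool is available on the Jenkins server’s PATH. The following is a minimal sketch, not part of the Jenkins build itself:

```shell
#!/bin/sh
# Report whether each CLI tool the build depends on is available on the PATH.
for tool in docker gcloud kubectl; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: installed ($(command -v "$tool"))"
  else
    echo "$tool: MISSING"
  fi
done
```

Any tool reported as MISSING must be installed before the build described below can succeed.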

Once the requirements are installed, we need to make it possible for the Jenkins server to access Google Cloud. This will be accomplished by using the Jenkins Credentials Plugin and Credentials Binding Plugin to store the sensitive security information required for access.

Installing and Configuring the Credentials Plugin on Jenkins Master

You install the Credentials Plugin and Credentials Binding Plugin using the Jenkins Plugin Manager, as shown below in Figure 12:

Figure 12: Add the Credentials Plugin and Credentials Binding Plugin to Jenkins using the Plugin Manager

Once the necessary credential plugins are installed, we need to configure them to hold the information needed to access Google Cloud. To add credentials, go to the Jenkins main page and click System from beneath the Credentials link on the left navigation bar. Then click Global credentials (unrestricted) from the section that appears. (See Figure 13.) Note: If you do not see the listing for Credentials in the left-hand navigation bar, there has probably been a problem with the installation of the Credentials Plugins.

Figure 13: Add a credential to Jenkins by clicking System (1) and then clicking Global credentials (2)

To add a credential, click Add Credentials from the left side navigation bar that appears on the Global credentials page. (See Figure 14, callout 1.) This displays the page into which you will make the credential declaration. The process for creating a credential is displayed in Figure 14 below and is as follows:

  • Select the Kind as Secret text
  • Set the Scope as Global
  • Add the text that you want to be saved as a secret in the Secret textbox (callout 2)
  • Give the secret a unique ID (callout 3)
  • Click OK to save the credential (callout 4)

Figure 14: The credential Kind you’ll use to access Google Cloud is Secret text.

In the case of the Jenkins build we’re developing, we’re going to create two secret credentials. Table 1 below describes the secret credentials. All credentials will be of Kind: Secret text with a Scope of Global. You’ll create the credentials using the technique described earlier in Figure 14.

  • GC_FEELINGTRACKER_ACCESS: Contains the text of the Google Cloud service account access JSON file downloaded when the service account was created
  • FEELING_TRACKER_PROJECT_ID: The unique identifier of the Google Cloud project under which the Kubernetes Cluster and Container Registry were created

Table 1: The definition and description of the secret credentials required for the Jenkins build to interact with Google Cloud.

Figure 15 shows the rendering in the Jenkins credentials page that is the result of creating the credentials required to access Google Cloud.

Figure 15: You will need two credentials defined as secret text in order to access services on Google Cloud.

The next step after setting the credentials is to configure the build.

Using Jenkins to Deploy the Containerized NodeJS Web App to a Kubernetes Cluster on Google Cloud

Create a new project in Jenkins by selecting New Item from the Jenkins main page and then declaring the item as a Freestyle Project, as shown in Figure 16, below.

Figure 16: The Jenkins Freestyle project for deploying the demo application to Google Cloud.

Upon saving the new project, you can then proceed with filling in the details. Figure 17 below shows the initial setup. Notice that the option Restrict where this project can be run is left unselected. Whereas in previous stages we had the build take place on the Windows slave server, this build will execute on the Jenkins master running under Linux.

Figure 17: Add a description that describes the nature of the Jenkins project.

Next, we’ll declare the GitHub repository and branch where Jenkins will find the source code for the demo application. This UAT phase of deployment will run against the */master branch as shown below in Figure 18.

Figure 18: The build will use code that is stored in the master branch of the demo app’s source code repository on GitHub.

Set the Build Environment to Use Secret Information

Now we’ll set the Build Environment. Select Use secret text(s) or file(s) as shown below in Figure 19. Setting the build environment to use secret text(s) or file(s) allows Jenkins to create runtime environment variables that will be bound to the secrets we defined earlier.

Figure 19: Select Use secret text(s) or file(s) so that the Jenkins build can access information stored in credential secrets.

Now we need to bind secrets in the credentials we configured previously to environment variables that the Jenkins build will use at runtime.

Setting Access Information using the Jenkins Credential Plugin

We’ll set up the information that Jenkins will need to access Google Cloud by creating runtime environment variables that are bound to the secret credentials that we created earlier using the Jenkins Credentials Plugin. The two environment variables that we’ll create are:

  • GC_ACCESS, which binds to the secret credential GC_FEELINGTRACKER_ACCESS
  • PROJECT_ID, which binds to the secret credential FEELING_TRACKER_PROJECT_ID

The details of the binding process are described in the sections that follow.

Defining the User Access Credentials

To add the access information described in the JSON file associated with the Google Cloud service account created earlier, click the Add button in the Bindings section of the Jenkins build and select Secret text. Add the information as shown below in Figure 20.

Figure 20: The environment variable, GC_ACCESS is bound to the secret credential, GC_FEELINGTRACKER_ACCESS.

Defining the Project ID

To add the Project ID associated with the Google Cloud service account created earlier, click the Add button in the Bindings section and add the information as shown below in Figure 21.

Figure 21: The environment variable PROJECT_ID is bound to the secret credential FEELING_TRACKER_PROJECT_ID.

Build Timeout

Finally, we need to add information in the Bindings section that declares an overall timeout period for the build. Also, we need to tell Jenkins what to do upon timeout. Figure 22 below shows the details of the settings.

Figure 22: In addition to setting timeout period and behavior, the Bindings section allows you to configure Jenkins to add timestamps to console output.

Defining the Build Steps

The way the Jenkins project builds the demo app into a container and then deploys that container to the Kubernetes Cluster is by adding the Execute Shell task as a build step in the project. Figure 23 below shows the build step for containerization and deployment with the Jenkins GUI.

Figure 23: Adding an Execute shell build task containerizes the demo application and deploys the result to a Kubernetes Cluster.

Table 2 below shows the lines of code that make up the build step.

printf %s "$GC_ACCESS" > "gcsecret.json"
gcloud auth activate-service-account --key-file=gcsecret.json
gcloud config set project ${PROJECT_ID}
docker build -t feelingtracker .
docker tag feelingtracker us.gcr.io/${PROJECT_ID}/feelingtracker:v1
gcloud --project=${PROJECT_ID} docker -- push us.gcr.io/${PROJECT_ID}/feelingtracker:v1
gcloud container clusters get-credentials feelingtrackercluster --zone=us-central1-a
kubectl run feelingtracker --image=us.gcr.io/${PROJECT_ID}/feelingtracker:v1 --port=3000 --replicas=3
kubectl get deployment
kubectl expose deployment feelingtracker --type=LoadBalancer
sleep 30s
kubectl get service | grep 'feelingtracker'

Table 2: The command line instructions to containerize the demonstration application and deploy it to a Kubernetes Cluster using a Jenkins Execute shell build step.

Taking a look at the details of the Build Step process

The following describes the details behind each command in the build step process. The command below, which starts the process, takes the contents of the environment variable, $GC_ACCESS and saves it to a file, gcsecret.json. $GC_ACCESS is mapped to the credential secret that holds the contents of the JSON file that contains the service account information needed to access Google Cloud.

printf %s "$GC_ACCESS" > "gcsecret.json"

The next command uses the gcloud auth command from the Google Cloud SDK to authorize a command session on the Jenkins server with Google Cloud. The command uses the --key-file parameter to pass the authentication information stored in the file, gcsecret.json, to Google Cloud.

gcloud auth activate-service-account --key-file=gcsecret.json

The command gcloud config set project tells Google Cloud to set the scope of activity on Google Cloud to the project defined by the environment variable, PROJECT_ID.

gcloud config set project ${PROJECT_ID}

Now we’ll build the container image using the docker build command. The command assumes that the present working directory contains the Dockerfile with the build instructions.

docker build -t feelingtracker .

After building the container, we need to tag it with the exact path to the file system location in the Google Container Registry. Also, we’ll append a version definition, v1, to the end of the tag. This tagging step is required by Google Cloud. Notice that the environment variable, ${PROJECT_ID} is part of the file path definition.

docker tag feelingtracker us.gcr.io/${PROJECT_ID}/feelingtracker:v1

We push the container image to the Google Container Registry using the gcloud and docker commands together.

gcloud --project=${PROJECT_ID} docker -- push us.gcr.io/${PROJECT_ID}/feelingtracker:v1

Now we download the access credentials for the Kubernetes cluster from Google Cloud to the Jenkins server. The command gcloud container clusters get-credentials automatically appends the cluster, user, and context information relevant to the Kubernetes cluster feelingtrackercluster to the Kubernetes config file on the Jenkins server. Also, the command makes the context related to feelingtrackercluster the current context that kubectl will use.


Hint

What is a Kubernetes context? In Kubernetes, a context is a formal mechanism that binds a Kubernetes cluster to a Kubernetes user profile. The following example shows a context that binds the user, [email protected], to the cluster, minikube.

- context:
    cluster: minikube
    user: [email protected]
  name: [email protected]

Developers and admins using Kubernetes work in a cluster defined by a particular context and have operational permissions within that cluster that are defined by the user profile declared in the context. For more information about Kubernetes contexts, go here.
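On any machine where kubectl is installed, you can inspect and switch contexts with the kubectl config subcommands. The context name below is a placeholder for illustration:

```shell
# List all contexts kubectl knows about; the current one is marked with '*'.
kubectl config get-contexts

# Print only the name of the current context.
kubectl config current-context

# Switch to a different context by name.
kubectl config use-context <context-name>
```

After the gcloud container clusters get-credentials command shown below runs, the context for feelingtrackercluster will appear in this list and will be the current context.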

gcloud container clusters get-credentials feelingtrackercluster --zone=us-central1-a

We use kubectl run to have Kubernetes create a Kubernetes deployment using the demo application’s container image that has just been uploaded to the Google Container Registry.

kubectl run feelingtracker --image=us.gcr.io/${PROJECT_ID}/feelingtracker:v1 --port=3000 --replicas=3

The command, kubectl get deployment confirms that the deployment is running in the Kubernetes Cluster on Google Cloud. This is simply a convenience call.

kubectl get deployment

We execute kubectl expose deployment to tell Kubernetes to create a Kubernetes service that is bound to the Kubernetes deployment we just created.

kubectl expose deployment feelingtracker --type=LoadBalancer

It takes a while for Kubernetes to create the service and expose a public IP address on the internet. Therefore, we’ll wait 30 seconds for the public IP address to be assigned. In some cases, a wait period of a few minutes might need to be set.

sleep 30s
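A fixed sleep is simple but can be brittle. One alternative, sketched below under the assumption that kubectl is already configured as described above, is a small polling helper that retries a command until it produces output or a retry budget is exhausted:

```shell
#!/bin/sh
# wait_for_output RETRIES CMD...: run CMD once per second until it prints
# non-empty output or RETRIES attempts are used up. Prints the output and
# returns 0 on success; returns 1 on timeout.
wait_for_output() {
  retries=$1; shift
  i=0
  while [ "$i" -lt "$retries" ]; do
    out=$("$@" 2>/dev/null)
    if [ -n "$out" ]; then
      echo "$out"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Intended use in the build step (replacing the fixed sleep):
#   wait_for_output 180 kubectl get service feelingtracker \
#     -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```

The jsonpath expression shown in the comment extracts only the external IP of the LoadBalancer service, which is empty until Google Cloud finishes provisioning it.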

The final build step runs kubectl get service, filtering on ‘feelingtracker’, to visually confirm that the service has been created and that a public IP address has been issued.

kubectl get service | grep 'feelingtracker'

Calling the Project that Runs the Ranorex Test

Once Jenkins containerizes the demo app and deploys it as a service in the Kubernetes cluster, we’re ready to run the Ranorex tests against the demo app running as a website. We’re going to isolate the Ranorex testing into a separate Jenkins project that runs on a Windows slave server in Azure. This project is called FeelingTrackerRanorexTest. We’ll add a post-build step to call this separate build project when the current project finishes successfully. In order to add the post-build step, we go to the section Post-build Actions at the bottom of the Jenkins project page. Click the Add post-build action button and then select Build other projects from the drop-down list that appears. (See Figure 24.) Enter the name of the project that’s going to be executed and select the option, Only if build is stable.

Figure 24: Jenkins makes it possible to run an additional project after the current project runs successfully.

Next Steps

The next, final installment of this series describes how to configure the separate Jenkins project FeelingTrackerRanorexTest to test the demo application by executing the associated Ranorex tests from a Windows slave server running in the cloud on Azure. The build FeelingTrackerRanorexTest will run separately after the Jenkins project described in this article finishes deploying the demo application to Google Cloud.

To explore the features of Ranorex Studio, download a free 30-day trial today, no credit card required.

About the Author

Bob Reselman is a nationally-known software developer, system architect, and technical writer/journalist. Bob has written four books on computer programming and dozens of articles about topics related to software development technologies and techniques, as well as the culture of software development. Bob lives in Los Angeles. In addition to his work in a variety of aspects of software development and DevOps, Bob is working on a book about the impact of automation on human employment.
