by Sion Williams
This guide will take you through the steps necessary to continuously deliver your software to end users by leveraging Amazon Web Services and Jenkins to orchestrate the software delivery pipeline. If you are not familiar with basic Kubernetes concepts, have a look at Kubernetes 101.
In order to accomplish this goal you will use the following Jenkins plugins:
- Amazon EC2 Plugin – starts Jenkins build slaves in AWS when builds are requested and terminates those instances when builds complete, freeing up resources for the rest of the cluster
- Bitbucket OAuth Plugin – allows you to add your Bitbucket OAuth credentials to Jenkins
In order to deploy the application with Kubernetes you will use the following resources:
- Deployments – replicate our application across our Kubernetes nodes and allow us to perform a controlled rolling update of our software across the fleet of application instances
- Services – load balancing and service discovery for our internal services
- Volumes – persistent storage for containers
This article is an AWS variant of the original Google Cloud Platform article found here.
Prerequisites
- An Amazon Web Services account
- A running Kubernetes cluster
Containers in Production
Containers are ideal for stateless applications and are meant to be ephemeral. This means no data or logs should be stored in the container otherwise they’ll be lost when the container terminates.
– Arun Gupta
The data for Jenkins is stored in the container filesystem. If the container terminates then the entire state of the application is lost. To ensure that we don’t lose our configuration each time a container restarts we need to add a Persistent Volume.
Adding a Persistent Volume
From the Jenkins documentation we know that the directory we want to persist is the Jenkins home directory, which in the container is located at /var/jenkins_home (assuming you are using the official Jenkins container). This is the directory where all our plugins are installed and where job and configuration information is kept.
At this point we’re faced with a chicken and egg situation; we want to mount a volume where Jenkins Home is located, but if we do that the volume will be empty. To overcome this hurdle we first need to add our volume to a sacrificial instance in AWS, install Jenkins, copy the contents of Jenkins Home to the volume, detach it, then finally add it to the container.
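Under stated assumptions (placeholder volume, instance and device IDs, Jenkins installed to /var/lib/jenkins on the surrogate, and a mount point of /mnt/jenkins-vol), the surrogate workflow sketched above might look like this:

```shell
# Attach the EBS volume to the sacrificial instance (IDs are placeholders)
aws ec2 attach-volume --volume-id vol-XXXXXX --instance-id i-XXXXXXXX --device /dev/xvdf

# Create a filesystem (only if the volume is brand new!) and mount it
sudo mkfs -t ext4 /dev/xvdf
sudo mkdir -p /mnt/jenkins-vol
sudo mount /dev/xvdf /mnt/jenkins-vol

# Install Jenkins on the instance, then copy its home directory onto the volume
sudo cp -a /var/lib/jenkins/. /mnt/jenkins-vol/

# Unmount and detach cleanly so the volume can be handed to Kubernetes
sudo umount /mnt/jenkins-vol
aws ec2 detach-volume --volume-id vol-XXXXXX
```

The exact paths and device names depend on your distribution and instance type; the important part is that the volume ends up containing a complete copy of the Jenkins home directory.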
Make sure that the user and group permissions on the Jenkins home directory are the same. Failure to do so will cause certain write operations in the container to fail. We will discuss the security context in more detail later in this article.
To recursively change the group permissions to match those of the owner, run the following from inside the Jenkins home directory:

```shell
$ sudo chmod -R g=u .
```
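To see what g=u actually does, here is a quick demonstration on a scratch directory (the path is illustrative; on the real volume you would run this against the copied Jenkins home):

```shell
# Create a scratch directory whose group permissions differ from the owner's
mkdir -p /tmp/jenkins_home_demo/jobs
chmod 750 /tmp/jenkins_home_demo/jobs   # owner rwx, group r-x, other ---

# Recursively copy the owner's permission bits onto the group
chmod -R g=u /tmp/jenkins_home_demo

# The group now has the same rights as the owner
stat -c '%a' /tmp/jenkins_home_demo/jobs   # prints 770
```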
Now that we have our volume populated with the Jenkins data we can start writing the Kubernetes manifests. The main things of note are the name, volumeID and storage.
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-data
spec:
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: aws://eu-west-1a/vol-XXXXXX
    fsType: ext4
```
With this manifest we have told Kubernetes where our volume is held. Now we need to tell Kubernetes that we want to make a claim on it. We do that with a Persistent Volume Claim.
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: jenkins-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
```
In the file above we are telling Kubernetes that we would like to claim the full 30GB. We will associate this claim with a container in the next section.
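Before moving on, it is worth checking that the claim has actually bound to the volume (this sketch assumes kubectl is already configured against your cluster):

```shell
# Both the PV and the PVC should report a STATUS of Bound
kubectl get pv jenkins-data
kubectl get pvc jenkins-data
```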
Create a Jenkins Deployment and Service
Here you’ll create a deployment running a Jenkins container with a persistent disk attached containing the Jenkins home directory.
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: jenkins
  name: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
        - image: jenkins:2.19.2
          imagePullPolicy: IfNotPresent
          name: jenkins
          ports:
            - containerPort: 8080
              protocol: TCP
              name: web
            - containerPort: 50000
              protocol: TCP
              name: slaves
          resources:
            limits:
              cpu: 500m
              memory: 1000Mi
            requests:
              cpu: 500m
              memory: 1000Mi
          volumeMounts:
            - mountPath: /var/jenkins_home
              name: jenkinshome
      securityContext:
        fsGroup: 1000
      volumes:
        - name: jenkinshome
          persistentVolumeClaim:
            claimName: jenkins-data
```
There’s a lot of information in this file. As the post is already getting long, I’m only going to pull out the most important parts.
Earlier we created a persistent volume and volume claim. We made a claim on the PersistentVolume using the PersistentVolumeClaim, and now we need to attach that claim to our container. We do this using the claim name, which ties the manifests together; in this case, jenkins-data.
This is where I had the most problems. I found that when I used the surrogate method of getting the files onto the volume, I forgot to set the correct ownership and permissions. By setting the group permissions to match the user's, we can use the fsGroup feature when we deploy to Kubernetes. This feature gives the Jenkins user in the container the correct permissions on the directories via group-level permissions. We set this to 1000, as per the documentation.
If all is well you should now be able to start each of the resources:
```shell
kubectl create -f jenkins-pv.yml -f jenkins-pvc.yml -f jenkins-deployment.yml
```
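Assuming the resources were created without complaint, you can verify that the deployment rolled out and that Jenkins started cleanly (the pod name below is a placeholder; take it from the get pods output):

```shell
# Check the deployment and its pod are up
kubectl get deployment jenkins
kubectl get pods -l app=jenkins

# Tail the pod's logs; look for "Jenkins is fully up and running"
kubectl logs <jenkins-pod-name> -f
```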
As long as you don't have any issues at this stage, you can now expose the instance using a load balancer. In this example we provision an AWS load balancer with our AWS-provided certificate.
```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: jenkins
  name: jenkins
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:eu-west-1:xxxxxxxxxxxx:certificate/bac080bc-8f03-4cc0-a8b5-xxxxxxxxxxxxx"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
spec:
  ports:
    - name: securejenkinsport
      port: 443
      targetPort: 8080
    - name: slaves
      port: 50000
      protocol: TCP
      targetPort: 50000
  selector:
    app: jenkins
  type: LoadBalancer
  loadBalancerSourceRanges:
    - x.x.x.x/32
```
In the snippet above we also use the loadBalancerSourceRanges feature to whitelist our office. We aren’t making our CI publicly available, so this is a nice way of making it private.
I’m not going to get into the specifics of DNS here, but if that’s all configured you should now be able to access your Jenkins instance. You can get the ingress URL using the following:
```shell
kubectl get svc jenkins -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```
I guess you’re wondering: “why, after all that effort with Kubernetes, are you creating AWS instances as slaves?” Well, our cluster has a finite pool of resources. We want elasticity with the Jenkins slaves, but equally, we don’t want a large pool sat idle waiting for work.
We are using the EC2 Plugin so that our builder nodes will be launched automatically when the Jenkins master requests them. Upon completion of their work they will automatically be terminated, and we don’t get charged for anything that isn’t running. This does come with a time penalty for spinning up new VMs, but we’re OK with that. We mitigate some of that cost by leaving them up for 10 minutes after a build, so that any new builds can jump straight onto the resource.
There’s a great article on how to configure this plugin here.
Our Active Directory is managed externally, so integrating Jenkins with AD was a little bit of a headache. Instead, we opted to integrate Jenkins with Bitbucket OAuth, which is useful because we know all of our engineers will have accounts. The documentation is very clear and accurate, so I would recommend following that guide.