Working with Microservices-4: Setting up a Helm v3 chart repository in Amazon S3 for CI/CD pipeline in Jenkins

Cumhur Akkaya
Jul 25, 2023


We will install Helm, then create and manage Helm charts by integrating the Helm repository with Amazon Simple Storage Service (Amazon S3). From now on, we will keep the Kubernetes manifest yaml files of the microservices app in Amazon S3. Then we will be able to install the Spring Boot app, consisting of 10 microservices, on the Kubernetes cluster by running a single helm install command in the Jenkins CI/CD pipeline.

1. What is Helm?

Figure 1

Helm is the package manager for Kubernetes; it helps you manage Kubernetes applications. Basically, Helm can package YAML files (Secret, Service, ConfigMap, etc.), distribute them to other users, and deploy them to the cluster. We will do all of this using Jenkins, as in Figure 1.

Helm Charts help you define, install, and upgrade even the most complex Kubernetes application. Charts are easy to create, version, share, and publish, so start using Helm and stop the copy-and-paste :-). (1)
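
For context, a chart is simply a directory of files with a standard layout. Below is a minimal sketch of the layout we will generate later with the helm create command:

petclinic_chart/
├── Chart.yaml     # chart metadata: name, version, appVersion
├── values.yaml    # default configuration values for the templates
├── charts/        # optional dependency charts
└── templates/     # templated Kubernetes manifests (Service, Secret, ConfigMap, etc.)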

2. Installing Helm

The Helm installation below is for those who have not yet set up the Jenkins server described in the article “Working with Microservice-2: Installing and Preparing the Jenkins server for the microservice application’s CI/CD pipeline.”

Install Helm [version 3+] on the Jenkins server using the commands below, as shown in Figure 2. (2)

curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
helm version
Figure 2

3. Creating and working in different branches on Jenkins Server

Create a “staging” branch with the “git branch staging” command, then switch to it with the “git checkout staging” command, as shown in Figure 3. This way, when an error occurs, we can recover the project by reverting to a working branch; this is the main purpose and biggest benefit of using git.
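
These are the commands from above; as a minor shortcut, git can also create and switch in a single step:

git branch staging      # create the new branch
git checkout staging    # switch to it
# or, equivalently, in one step:
git checkout -b staging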

Figure 3

4. Creating a Helm chart

Create a helm chart named “petclinic_chart” under the “microservices-application-with-db/k8s” folder, as shown in Figure 4.

cd k8s
helm create petclinic_chart

Remove all files under the petclinic_chart/templates folder.

rm -r petclinic_chart/templates/*
Figure 4

Add the “k8s/petclinic_chart/values-template.yaml” file as below. Change the DNS name to match your record in AWS Route 53, as shown in Figures 5-a and 5-b. Thanks to this file, we will be able to dynamically pull the required values from the environment when we run the “Jenkins Staging Pipeline”, as shown in Figure 6.

If you don’t have an “A” record, you must create one for DNS_NAME: “micoservices-app.cmakkaya-awsdevops.link”. Use your hosted zone (mine is “cmakkaya-awsdevops.link”) in the AWS Route 53 domain registrar to create the “A” record. Then we will bind it to our “microservice app” cluster.

If you don’t know how to create an “A” record, the next article will be about this topic: “Working with Microservices-5: Creating “A” record in AWS Route 53 for the Microservice app that running in the Kubernetes cluster”. Alternatively, the video in the link will help you from the 27th minute on (unfortunately, among my videos, only this one is not in English; you can watch it with English subtitles).

IMAGE_TAG_CONFIG_SERVER: "${IMAGE_TAG_CONFIG_SERVER}"
IMAGE_TAG_DISCOVERY_SERVER: "${IMAGE_TAG_DISCOVERY_SERVER}"
IMAGE_TAG_CUSTOMERS_SERVICE: "${IMAGE_TAG_CUSTOMERS_SERVICE}"
IMAGE_TAG_VISITS_SERVICE: "${IMAGE_TAG_VISITS_SERVICE}"
IMAGE_TAG_VETS_SERVICE: "${IMAGE_TAG_VETS_SERVICE}"
IMAGE_TAG_API_GATEWAY: "${IMAGE_TAG_API_GATEWAY}"
IMAGE_TAG_ADMIN_SERVER: "${IMAGE_TAG_ADMIN_SERVER}"
IMAGE_TAG_HYSTRIX_DASHBOARD: "${IMAGE_TAG_HYSTRIX_DASHBOARD}"
IMAGE_TAG_GRAFANA_SERVICE: "${IMAGE_TAG_GRAFANA_SERVICE}"
IMAGE_TAG_PROMETHEUS_SERVICE: "${IMAGE_TAG_PROMETHEUS_SERVICE}"
DNS_NAME: "DNS Name of your application" # Don't forget to change.
Figure 5-a
Figure 5-b
Figure 6
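
As a sketch of how the pipeline can pull these values in (the exact Jenkins step appears in the pipeline articles, and the exported values below are only placeholders), the ${...} placeholders in the template can be filled with envsubst:

export IMAGE_TAG_CONFIG_SERVER="v1.0.0"    # example tag produced by the pipeline
export DNS_NAME="micoservices-app.cmakkaya-awsdevops.link"
envsubst < k8s/petclinic_chart/values-template.yaml > k8s/petclinic_chart/values.yaml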

5. Creating Amazon S3 Bucket using AWS CLI and AWS Management Console

Create an “S3 bucket” for Helm charts (3). In the bucket, create a “folder” called “stable/myapp”.

I. To create them with the AWS CLI, use the commands below:

aws s3api create-bucket --bucket petclinic-helm-charts-<put-your-name> --region us-east-1
aws s3api put-object --bucket petclinic-helm-charts-<put-your-name> --key stable/myapp/

II. To create them with the AWS Console, sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/. Click on the “Create bucket” button, as shown in Figure 7.

Figure 7

The “Create bucket” page opens. For “Bucket name”, enter a name for your bucket and choose the “us-east-1” AWS Region. Click on the “Create bucket” button, leaving the other values as default, as shown in Figure 8.

Note: A bucket name can consist only of lowercase letters, numbers, dots (.), and hyphens (-). For best compatibility, avoid using dots (.) in bucket names, except for buckets that are used only for static website hosting. (4)

Figure 8

Open your bucket and click on the “Create folder” button, as shown in Figure 9.

Figure 9

Enter “stable” as the folder name and click on the “Create folder” button; then, inside the “stable” folder, create a folder named “myapp” the same way, as shown in Figure 10.

Figure 10

You can check your bucket, as shown in Figures 11–12.

Figure 11
Figure 12

6. Installing and Using the Helm-S3 plugin

Install the helm-s3 plugin for Amazon S3, as shown in Figure 13.

helm plugin install https://github.com/hypnoglow/helm-s3.git
Figure 13

On some systems, we need to install the “Helm S3 plugin” as the “jenkins” user to be able to use S3 from the pipeline script. To do this, enter the following commands, as shown in Figure 14.

sudo su -s /bin/bash jenkins
export PATH=$PATH:/usr/local/bin
helm version
helm plugin install https://github.com/hypnoglow/helm-s3.git
Figure 14

Return to the “ec2-user” user with the “exit” command, as shown in Figure 15.

Figure 15

“Initialize” the Amazon S3 Helm repository with the following command, as shown in Figure 16. The command creates an “index.yaml” file in the target location to track all the chart information stored there.

AWS_REGION=us-east-1 helm s3 init s3://petclinic-helm-charts-<put-your-name>/stable/myapp
Figure 16

Check that the “index.yaml” file was created with the following command, as shown in Figures 17–18.

aws s3 ls s3://petclinic-helm-charts-<put-your-name>/stable/myapp/
Figure 17
Figure 18

Add the Amazon S3 repository to Helm on the client machine (the Jenkins server) with the following commands. Check that the Amazon S3 repository was added to Helm with the “helm repo ls” command, as shown in Figure 19.

helm repo ls # No repo will appear yet.
AWS_REGION=us-east-1 helm repo add stable-petclinicapp s3://petclinic-helm-charts-<put-your-name>/stable/myapp/
helm repo ls
Figure 19

7. Updating the version of the app using the Chart.yaml file and sending the Helm chart to the S3 bucket

Update the “version” and “appVersion” fields of the Chart.yaml file (in k8s/petclinic_chart/) for testing, as shown in Figures 20–21.

version: 0.0.1
appVersion: 0.1.0
Figure 20
Figure 21

“Package” the local Helm chart, as shown in Figure 22. That is, the 25 Kubernetes manifest yaml files (belonging to the microservices app) in the templates folder will be packaged. The “petclinic_chart-0.0.1.tgz” file was created by the command, as shown in Figure 22.

cd k8s
helm package petclinic_chart/
Figure 22

Note: We converted our docker-compose.yaml to Kubernetes manifest yaml files using the Kompose tool. This simple program saved us from writing the 25 Kubernetes files of our Spring Boot microservice app by hand. We did them all step by step in this article.
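
For reference, here is a minimal sketch of that conversion (assuming Kompose is installed, the compose file is named docker-compose.yml in the repo root, and the output path is an assumption):

kompose convert -f docker-compose.yml -o k8s/petclinic_chart/templates/
# each service in the compose file becomes a Deployment and Service yaml under the output folder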

Store the local package in the Amazon S3 Helm repository with the following command, as shown in Figure 23.

HELM_S3_MODE=3 AWS_REGION=us-east-1 helm s3 push ./petclinic_chart-0.0.1.tgz stable-petclinicapp

Check the Helm chart with the following command; you should get an output as shown in Figure 23. You can also check the results in the AWS S3 bucket, as shown in Figure 24.

helm search repo stable-petclinicapp
Figure 23
Figure 24

If we set the “version” value to “0.0.2” in Chart.yaml and then package the chart again, a second chart version is produced. Of course, this is a manual change; for automated versioning, version control is ideally done using tools like GitVersion or Jenkins build numbers in a CI/CD pipeline.
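
The second version is then produced and pushed with the same commands as before:

helm package petclinic_chart/    # now creates petclinic_chart-0.0.2.tgz
HELM_S3_MODE=3 AWS_REGION=us-east-1 helm s3 push ./petclinic_chart-0.0.2.tgz stable-petclinicapp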

Update the repo, then check the updated Helm chart with the following commands, as shown in Figure 25.

helm repo update
helm search repo stable-petclinicapp
Figure 25

To view all the available versions of a chart, execute the following command, as shown in Figure 26.

helm search repo stable-petclinicapp --versions
Figure 26

In the “Chart.yaml” file, change the “version” value to “HELM_VERSION” for automation in the Jenkins pipeline, as shown in Figure 27. This way, we will no longer change the version manually; our pipeline will automatically pull the version dynamically from the system. We will use this “HELM_VERSION” variable in the Jenkins pipeline we will create.

Figure 27
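
A minimal sketch of how the pipeline might substitute that placeholder (assuming the literal text HELM_VERSION in Chart.yaml, and Jenkins’ built-in BUILD_NUMBER variable):

sed -i "s/HELM_VERSION/0.0.${BUILD_NUMBER}/" k8s/petclinic_chart/Chart.yaml   # e.g. version: 0.0.42
helm package k8s/petclinic_chart/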

8. Pushing created files to the remote repo (GitHub)

Commit the change, then push the created files (Helm chart, yaml files, etc.) to the remote repo (GitHub). Then check out the “main” branch. Run the commands below.

The best practice is to create a different branch and continue from the new branch in the next stage.

git add .
git commit -m 'added helm chart, yaml files for Microservice-4:Setting up a Helm v3 chart repository in Amazon S3'
git push --set-upstream origin staging
git checkout main

9. Conclusion

We installed Helm, then created a Helm chart and integrated the Helm repository with an Amazon S3 bucket. We packaged the Kubernetes manifest yaml files of the microservices app as a Helm chart and sent it to Amazon S3.

Thanks to the Amazon S3 bucket, we can keep our Helm chart (the K8s manifest yaml files) securely, and only those we allow will be able to access it.

We have completed one more step in the Staging CI/CD pipeline. Next, we will install our Spring Boot application in the K8s cluster with the Helm chart files we created, just by running a single helm install command in the Jenkins CI/CD pipeline.
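
As a preview, that single command will look roughly like the sketch below; the release name “petclinic-app” is an assumption here, and the exact step will appear in the pipeline article:

AWS_REGION=us-east-1 helm repo update
helm install petclinic-app stable-petclinicapp/petclinic_chart --version ${HELM_VERSION}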

Share this article with friends in your network and help others to upskill.
I frequently share articles about Cloud and DevOps tools and resources. Follow me on Medium so you don’t miss future articles.

For more info and questions, please contact me on Linkedin.

10. Next post

The next post will be “Working with Microservices-5: Creating “A” record in AWS Route 53 for the Microservice app that running in the Kubernetes cluster”, as shown in Figure 28. If you want to see the microservice application running in your browser without entering an IP address, you must create an “A” record for the microservice app, so that it can be reached via its DNS name.

We will use a hosted zone to create the “A” record type in the AWS Route 53 domain registrar, then bind it to the cluster that runs our Java-based Spring Boot web microservice app.
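
For the curious, the record can also be created from the AWS CLI; here is a minimal sketch with placeholder values (the hosted zone ID and IP address below are hypothetical, and the next article covers the console steps):

aws route53 change-resource-record-sets \
  --hosted-zone-id Z0000000000000 \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "micoservices-app.cmakkaya-awsdevops.link",
        "Type": "A",
        "TTL": 300,
        "ResourceRecords": [{"Value": "203.0.113.10"}]
      }
    }]
  }'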

Figure 28 - “Working with Microservices-5: Creating “A” record in AWS Route 53 for the Microservice app that running in the Kubernetes cluster”

Happy Clouding…

Don’t forget to follow my LinkedIn or Medium account to be informed about new articles.
