Working with Microservices-10: Explaining the Production Stage and Creating an Amazon EKS Cluster for the "Production Environment and Pipeline" in Order to Deploy the Microservices App into It.
We start the "Production Stage of the Microservices App" section with this article. In the production stage, whenever the developers push their code to the main branch of the GitHub repository, the Jenkins pipeline will run automatically thanks to the GitHub webhook and will automatically update our Java-based application running on the web. We will prepare the "Production Environment and Jenkins Production Pipeline" using Jenkins, Rancher, Docker, Maven, Amazon EKS, Amazon ECR, Amazon RDS, Amazon S3, Amazon Route53, Let's Encrypt, Cert Manager, AWS Certificate Manager, Nexus, Prometheus, and Grafana. In this article, we will explain the Production Stage and create a Kubernetes cluster using Amazon EKS. We will do it all step by step.
In the first article of the production stage, we will prepare a source code management repository (GitHub), and create an Amazon EKS cluster for Jenkins Production Pipeline.
In future articles in this series, we will do the following, as shown in Video 1.
Working with Microservices-11: Creating an ECR repository and Preparing Production Pipeline scripts. We will prepare script files to run in the Jenkins pipeline, and then do the packaging, tagging, pushing, and deploying with these script files. We will create an ECR repository to store the images. We will then prepare a Jenkinsfile and create a Jenkins production pipeline using it. We will do it all step by step.
Working with Microservices-12: Setting Domain Name and TLS Certificate for Production Pipeline using Route 53, Let's Encrypt, and Cert Manager. We will set the "Domain Name", create an "A record" in our hosted zone by using the AWS Route 53 domain registrar for the microservices app, and then bind it to the "Kubernetes cluster of the app". We will configure a TLS (Transport Layer Security) certificate for HTTPS connections to the domain name using Let's Encrypt and Cert Manager. Thus, we will be able to provide a secure connection to our application on the Internet. We will do it all step by step.
Working with Microservices-13: Setting Domain Name and TLS Certificate using Route 53 and AWS Certificate Manager. This time, we will get a TLS certificate by using AWS Certificate Manager.
Working with Microservices-14: Installing and Running Amazon RDS MySQL Database on Kubernetes Cluster on the Production Stage. In this section, we will use Amazon RDS instead of a MySQL pod and service during the production stage, because Amazon RDS offers automatic backups, multi-AZ deployment, performance and scalability, and built-in security, and it is a fully managed service. We will install and run Amazon RDS, then create a MySQL database on AWS RDS and integrate it into our app. For the production pipeline, we will edit the Kubernetes manifest YAML files related to Amazon RDS. Finally, we will add our Amazon RDS endpoint to the Kubernetes cluster so the app can connect to it.
Working with Microservices-15: Importing Amazon EKS Cluster to Rancher in the Production Stage. We will import the Amazon EKS cluster that we created into Rancher by using Rancher's menus. For this, we will prepare and assign a role and policy to the Rancher server so that it can use AWS EKS resources. We will also make the necessary change for the TLS certificate in the "api-gateway" ingress file.
Working with Microservices-16: Preparing and Running The Production Jenkins File and Pipeline, Examining the output of the Jenkins Pipeline Using Rancher and our Browser. We continue our production pipeline. In this section, we will prepare a Jenkinsfile for the production pipeline, using the scripts we prepared earlier. Then, we will run the Jenkinsfile in a Jenkins pipeline so that our application is deployed to the production environment. Finally, we will examine the app's output via Rancher and check whether our application works properly in the browser.
Working with Microservices-17: Monitoring with Prometheus and Grafana in the Production Stage. We will run Prometheus and Grafana together, collect metrics of the Kubernetes cluster with them, and set up alarms using these metrics. We will create a Grafana dashboard, and then view and examine these metrics and alarms on Prometheus and Grafana. Also, we will learn to install Prometheus and Grafana by using Helm Chart. We will do these practically step by step in this article.
Working with Microservices-18: Setting Up An Alarm By Using the Grafana Dashboard and Prometheus ConfigMap.yml. Building on the previous article, we will set up alarms using the collected metrics, and view and examine these metrics and alarms on the Prometheus and Grafana dashboards. We will do it practically and step by step.
Working with Microservices-19: Explanation of the Testing Stage, and Running a Unit Test and Configuring Code Coverage Report using the JaCoCo tool. We will use unit tests to check the lines and functions in the source code. Then we will create a "code coverage report" using the JaCoCo plugin in the pom.xml file. We will also install the JaCoCo plugin on the Jenkins server and run it. Finally, we will examine the code coverage report in the Jenkins pipeline.
Working with Microservices-20: Performing Functional Tests with Selenium. We will create a functional QA automation testing infrastructure. We will put the Selenium jobs (prepared and given to us by the QA testers) in the selenium-job folder of the testing infrastructure. Then we will prepare the functional test pipeline in Jenkins and set up a cron job that runs the pipeline at 12:00 every night. Finally, the Jenkins pipeline will automatically terminate the "functional QA automation testing infrastructure" when the tests are finished.
Working with Microservices-21: Preparing Manual Test Infrastructure for Manual Testers and Performing Manual Tests. We will create a manual testing infrastructure for manual testers. Then we will prepare the manual test pipeline in Jenkins and set up a cron job that runs the pipeline at 12:00 every Friday and keeps the infrastructure up for 3 days. Manual testers can then observe and test the running app in the browser.
Working with Microservices-22: CSI (Crime Scene Investigation): Troubleshooting, Encountered Problems and Their Solutions. In this section, we will talk about the problems we encountered, and their solutions, while creating resources, testing code, deploying microservices, running pipelines, etc. during this series of articles.
Working with Microservices-23: Cleaning up The Production Environment (the Amazon EKS cluster, the Amazon ECR repository, the Amazon RDS database, the Rancher server, the Jenkins server, the Development server).
In this article, we will clean up the resources we have created so far.
Working with Microservices-24: Detailed Explanation of The Scripts We Use in Jenkins Files and Pipelines. We will explain the following scripts in detail; “package-with-maven-container.sh”, “prepare-tags-ecr-for-docker-images.sh”, “build-docker-images-for-ecr.sh”, “push-docker-images-to-ecr.sh” and “deploy_app_on_prod_environment.sh”.
Thus, we will have finished our series, "Working with Microservices on Development, Testing, Staging, and Production environments by using different DevOps tools".
Topics we will cover in this article:
1. About the Production Stage
2. Cloning Source Code Management Repository (GitHub) and working in different branches on Production Stage
3. Creating Amazon EKS cluster
4. Examining the structure of the cluster
5. Pushing created files to the remote repo (GitHub)
6. As a result
7. Next post
If you like the article, I will be happy if you click on the Medium Following button to encourage me to write more, and not miss future articles.
Your claps, follows, and subscriptions help my articles reach a broader audience. Thank you in advance.
For previous articles in this series, you can check the links below.
Working with Microservices-1: Running a Java app that consists of 10 Microservices on a Development server.
Working with Microservices-2: Installing and Preparing the Jenkins server for the microservice application’s CI/CD pipeline.
Working with Microservices-7: Creating a cluster for microservices application by using Rancher
Working with Microservices-8: Preparing the staging pipeline in Jenkins, and deploying the microservices app to the Kubernetes cluster using Rancher, Helm, Maven, Amazon ECR, and Amazon S3. Part-1
1. About the Production Stage
In the Production Stage, whenever the developers push their code to the GitHub repository, the Jenkins pipeline will run automatically thanks to the GitHub webhook and will automatically update our Java-based application running on the web. For this:
We will create the AWS EKS cluster using eksctl and then import it into Rancher by using Rancher's menus.
We will use an AWS RDS MySQL database for customer records.
We will create an AWS ECR Repo to store images.
We will prepare scripts. These scripts will package the app into JAR files with the Maven Wrapper, prepare tags for the Docker images, build the Docker images, and then push the images to the ECR repository using the AWS CLI in the Jenkins pipeline.
We will set the domain name, create an "A record" for the microservices app in our hosted zone by using the AWS Route 53 domain registrar, and bind it to the "Kubernetes cluster of the app". We will configure a TLS/SSL certificate for HTTPS connections to the domain name using Let's Encrypt and Cert Manager. Later, we will do this with AWS Certificate Manager as well. Thus, we will be able to provide a secure connection to our application on the Internet.
We will prepare Helm charts for the microservices application and push them to an AWS S3 bucket. Next, we will deploy the microservices app release on the AWS EKS Kubernetes cluster using the Helm charts.
Finally, we will check whether our microservices application works in the browser, and monitor the microservices apps in the cluster with Prometheus and Grafana.
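As a hedged illustration of the tagging step above, the sketch below shows how a script might construct one ECR image tag per microservice; the registry ID, version, and service names are placeholders, not the values used later in the series.

```shell
#!/bin/bash
# Hypothetical sketch: build an ECR image tag for each microservice.
# REGISTRY and VERSION are placeholder values, not the article's real ones.
REGISTRY="123456789012.dkr.ecr.us-east-1.amazonaws.com"
VERSION="v1"
for SERVICE in api-gateway customers-service visits-service; do
  TAG="${REGISTRY}/${SERVICE}:${VERSION}"
  echo "${TAG}"  # the real pipeline passes this tag to docker build and docker push
done
```

In the real pipeline, a tag like this is applied with docker build -t "${TAG}" . and pushed with docker push "${TAG}" after logging in to ECR with the AWS CLI.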
We will use the following tools for these purposes in the "Production Stage of the Microservices Application":
Amazon EKS, Amazon Elastic Kubernetes Service as a Kubernetes cluster (an AWS-managed Kubernetes service to run Kubernetes on AWS without needing to install, operate, and maintain),
Amazon S3 bucket, as a repository for Helm charts,
Amazon ECR, Amazon Elastic Container Registry as an AWS-managed Docker container image registry service,
Amazon Route53, as a Domain Name System (DNS) service (for domain registration, DNS forwarding, and health checking without coding requirements),
Let's Encrypt, as a free Certificate Authority (CA). To enable HTTPS on a website, you need to get a certificate (a type of file) from a CA; Let's Encrypt issues such certificates, using the ACME protocol to verify control of your domain,
Cert Manager, which obtains certificates from supported issuers such as Let's Encrypt. Cert Manager adds certificates and certificate issuers as resource types in Kubernetes clusters and simplifies the process of obtaining, renewing, and using those certificates,
AWS Certificate Manager, as a public and private SSL/TLS(Secure Sockets Layer/Transport Layer Security) certificate provider (for connecting to the web page of the microservices application via HTTPS protocol),
Amazon RDS, Amazon Relational Database Service as a fully managed, cloud relational database service,
Jenkins, as a Continuous Integration/Continuous Delivery and Deployment (CI/CD) automation software,
Rancher, as a Kubernetes management tool (to simplify the deployment, scaling, and management of Kubernetes clusters),
Maven, as a software project management tool (for building and managing the Java-based project),
Docker, as a containerization platform (to build, run, test, and deploy distributed applications quickly in containers),
Nexus, as an artifact repository manager (to organize, store, and distribute artifacts needed for development, we can also use AWS CodeArtifact instead.),
Prometheus, as an event monitoring and alerting service,
Grafana, as a data visualization, monitoring, and analysis service.
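To make the Cert Manager entry above concrete, here is a minimal, hypothetical ClusterIssuer manifest for Let's Encrypt; the issuer name, e-mail address, and secret name are placeholders, and this is not necessarily the exact manifest used later in the series.

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod            # hypothetical issuer name
spec:
  acme:
    # Let's Encrypt production ACME endpoint
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com        # placeholder e-mail for expiry notices
    privateKeySecretRef:
      name: letsencrypt-prod-key    # placeholder secret holding the ACME account key
    solvers:
      - http01:
          ingress:
            class: nginx            # challenges solved via the NGINX ingress controller
```

Cert Manager watches resources like this; when a Certificate (or an annotated Ingress) references the issuer, it completes the ACME HTTP-01 challenge and stores the signed certificate in a Kubernetes Secret.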
Prerequisite: The Jenkins server we set up in the article “Working with Microservices-2: Installing and Preparing the Jenkins server for the microservice application’s CI/CD pipeline” should be ready.
2. Cloning the Source Code Management Repository (GitHub) and Working in Different Branches on the Production Stage
Connect to your Jenkins Server via “ssh”, as shown in Figure 1.
First, we will clone the microservices Petclinic app [a Java app that consists of 10 microservices, named Spring Petclinic Microservices App] from the repository into the Jenkins server.
Don’t forget to add your GitHub token to the command below. This way, we won’t have to enter the password on every push command, as shown in Figures 2–3.
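As a hedged template (the token, user, and repository names below are placeholders, not the article's actual values), a clone URL with an embedded GitHub personal access token looks like this:

```shell
# Hypothetical template: embedding a personal access token in the clone URL
# avoids password prompts on later pushes. All values are placeholders.
git clone https://<your-github-token>@github.com/<your-user>/<your-repo>.git
```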
Create a "production" branch with the git branch production command, and switch to the "production" branch with the git checkout production command, as shown in Figure 4.
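The two branch commands above can be tried safely in a throwaway repository; the sketch below is a self-contained demonstration (the temporary directory and empty commit exist only so the commands have something to act on):

```shell
# Create a scratch repo, then create and switch to a "production" branch.
cd "$(mktemp -d)"
git init -q demo && cd demo
git -c user.email=demo@example.com -c user.name=demo \
    commit --allow-empty -q -m "init"
git branch production        # create the branch
git checkout -q production   # switch to it
git branch --show-current    # prints: production
```

Note that git checkout -b production would create and switch in one step; newer Git versions also offer git switch -c production.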
In this way, when an error occurs, we can recover the project by reverting to a working branch. This is the purpose and biggest benefit of using git anyway. (1)
Note: I recommend connecting to the Jenkins server with VS Code to work more easily, as above. For more information on how it's done (a step-by-step explanation), check my article. If you haven't used it before, you will love it.
You should definitely try: Great convenience auto-connect to our EC2 instance using VScode
3. Creating Amazon EKS cluster
3. a. Creating the cluster.yaml file
Switch to the jenkins user with the following command, as shown in Figure 5. Since the production pipeline will run as the jenkins user, we need to create the cluster.yaml file under this user.
sudo su - jenkins
In order to create an Amazon EKS cluster, first create a cluster.yaml file (2) (3) under the /var/lib/jenkins folder with the following content, as shown in Figures 6–7–8. Use the sudo nano cluster.yaml command to create it.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: petclinic-cluster
  region: us-east-1
availabilityZones: ["us-east-1a", "us-east-1b", "us-east-1c"]
managedNodeGroups:
  - name: ng-1
    instanceType: t3a.medium
    desiredCapacity: 2
    minSize: 2
    maxSize: 3
cluster.yaml is a Kubernetes-style manifest file; the explanations of its sections are given below:
apiVersion: This is where you specify the version of the API you're using. To list the apiVersions available in a cluster, you can use the kubectl api-versions command.
kind: The type of object. This tells Kubernetes what type of resource you want to create; it could be a Pod, a Service, or a Deployment. Here, we want to create a ClusterConfig.
metadata: In this section, we specify the properties of the object to be created;
name: This is where we give our resource the name "petclinic-cluster"; we can also assign labels and annotations here. The name is the unique identifier of the resource within its own namespace.
region: We set the region where the cluster will be created to "us-east-1".
availabilityZones: We set the zones where the application will run to "us-east-1a", "us-east-1b", and "us-east-1c".
managedNodeGroups: In this section, we specify the properties of the node groups to be created;
name: The name of the node group is "ng-1".
instanceType: Nodes will run on the "t3a.medium" instance type.
desiredCapacity: We want the cluster to start with two nodes, with a minimum of two and a maximum of three nodes.
Note: Indentation matters in YAML files, so be careful when preparing them. Kubernetes manifests are written in YAML, a whitespace- and indentation-sensitive language, and incorrect indentation is a common source of errors. Make sure each nested field is properly indented.
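As a hypothetical illustration of the indentation pitfall, compare the two fragments below; only the second makes instanceType a field of the ng-1 node group.

```yaml
# Wrong: instanceType is not aligned under the list item, so it is parsed
# as a top-level key rather than a field of the ng-1 node group.
managedNodeGroups:
  - name: ng-1
instanceType: t3a.medium
```

```yaml
# Right: fields that belong to the same list item are indented to the
# same level under it.
managedNodeGroups:
  - name: ng-1
    instanceType: t3a.medium
```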
Create a sample of the cluster.yaml file under the k8 folder so that you can have a backup in your GitHub repo, as shown in Figure 9.
3. b. Creating the cluster using eksctl
Create your Amazon EKS cluster and worker nodes with the following eksctl command. Cluster provisioning usually takes between 15 and 20 minutes, as shown in Figures 10–11–12.
Note: If you create a cluster with eksctl, eksctl creates a role for you by default. If you don't use eksctl, you have to create an Amazon EKS cluster IAM role yourself. (4)
eksctl create cluster -f cluster.yaml
The eksctl command we entered runs two CloudFormation templates in the background to set up the Amazon EKS cluster; it takes between 15 and 20 minutes for them to create the cluster, as shown in Figures 12–13.
3. c. Creating the Ingress Controller
The Ingress Controller is responsible for creating an entry point to the cluster so that Ingress rules can route traffic to the defined Services. Simply put, if you send network traffic to port 80 or 443, the Ingress Controller will determine the destination of that traffic based on the Ingress rules, as shown in Figure 14. (5)
After the cluster is up, run the following commands to install the ingress controller and check that it is running, as shown in Figures 15–16.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.5.1/deploy/static/provider/cloud/deploy.yaml
kubectl get pods -n ingress-nginx
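To illustrate the Ingress rules the controller evaluates, here is a minimal, hypothetical Ingress manifest that routes a host name to an api-gateway Service; the host name and port are placeholders, not the ones configured later in the series.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-gateway-ingress        # hypothetical name
  namespace: petclinic-prod-ns
spec:
  ingressClassName: nginx          # handled by the NGINX ingress controller
  rules:
    - host: petclinic.example.com  # placeholder domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api-gateway  # the app's front-door Service
                port:
                  number: 8080     # placeholder port
```

When traffic arrives on port 80 or 443, the controller matches the request's Host header and path against rules like these and forwards it to the named Service.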
4. Examining the structure of the cluster
In addition, we can check that our nodes were created with the name we gave in the EC2>Instances menu, as shown in Figure 17.
We can check that our cluster was created with the name we gave in the Amazon EKS>Clusters menu, as shown in Figures 18–19–20.
In addition, we can look at the cluster's information with the following commands (6), as shown in Figures 21–22.
kubectl get nodes
kubectl get pods --all-namespaces
kubectl get ns
kubectl get po,deploy,svc,ing -n petclinic-prod-ns
When you examine Figures 21 and 22, you can see that the namespaces take some time to create.
We can also see the web address of our application in ingress, as shown in Figure 22.
Our cluster where we will install the application is now ready.
If you want to delete this cluster, run the following command. However, our EKS cluster must stay up to complete the production pipeline, so we will run this command only after the deployment of our application is complete.
eksctl delete cluster -f cluster.yaml
5. Pushing created files to the remote repo (GitHub)
Commit the change, then push the created files (the cluster.yaml file, etc.) to the remote repo (GitHub). Run the commands below, as shown in Figure 23. (7)
git add .
git commit -m 'added cluster.yaml file for production pipeline'
git push --set-upstream origin production
The best practice is to create a different branch and continue from the new branch in the next stage. (1)
6. As a result
We successfully created an Amazon EKS cluster for the Jenkins production pipeline, and we prepared a source code management repository (GitHub) for the pipeline.
You can find the necessary files in my GitHub repo.
7. Next post
The next post will be "Working with Microservices-12: Setting Domain Name and TLS certificate for Production Pipeline using Route 53, Let's Encrypt and Cert Manager", shown in Figure 24.
We will set the "Domain Name", create an "A record" in our hosted zone by using the AWS Route 53 domain registrar for the microservices app, and then bind it to the "Kubernetes cluster of the app". We will configure a TLS (Transport Layer Security) certificate for HTTPS connections to the domain name using Let's Encrypt and Cert Manager. Thus, we will be able to provide a secure connection to our application on the Internet. We will do it all step by step.