Rancher-1: Creating the Rancher Server Manually or with a Terraform File, and Running Rancher on It

Cumhur Akkaya
16 min read · Jul 16, 2023

--

In this 1st part of the series, we will learn the basic concepts of Rancher, then create a “Rancher server” using a Terraform file (we will also cover how to do the manual installation). Finally, we will deploy Rancher into a cluster on the Rancher server and examine the created cluster structure using the Rancher menus. We will do it all step by step.

In the 2nd part of this series, “Deploying a microservices Java application to the cluster created with Rancher using Helm”, we will proceed in the following order:

  • Creating node template,
  • Creating AWS cloud credentials and assigning them to node template,
  • Creating a cluster consisting of 3 nodes using the Rancher “cluster management” menu.
  • Deploying a microservices Java application to the cluster using Helm.

Topics we will cover:

1. What is Rancher?

2. Deploying a Rancher server on AWS

2. a. Prerequisites

2. b. Download Rancher Terraform file

2. c. Rancher Terraform file’s explanation and what to do in manual installation

2. d. Modifying and applying the “Rancher Terraform file”

3. Creating Kubernetes cluster

3. a. Installing kubectl

3. b. Installing eksctl (optional)

3. c. Installing RKE (Rancher Kubernetes Engine)

3. d. Creating cluster.yml file

3. e. Copying your key.pem into the Rancher server

3. f. Creating RKE Kubernetes cluster using “rke” command

4. Installing Rancher App on RKE Kubernetes Cluster using Helm

4. a. Installing Helm on Rancher Server

4. b. Creating a namespace for Rancher

4. c. Installing Rancher on RKE Kubernetes Cluster using Helm

4. d. Resetting your admin password (if you forget it)

5. Connecting to Rancher and examining the created cluster structure and Rancher menus.

6. As a result

7. Next post

8. References

If you like the article, I will be happy if you click on the Medium Follow button to encourage me to write more, and so you don’t miss future articles.

Your claps, follows, or subscriptions help my articles reach a broader audience. Thank you in advance.

1. What is Rancher?

Figure 1

Rancher is a Kubernetes management tool to deploy and run clusters anywhere and on any provider.

It enables detailed monitoring and alerting for clusters and their resources, sends logs to external providers, and integrates directly with Helm via the Application Catalog. If you have an external CI/CD system, you can plug it into Rancher.

We use Rancher for easy deployment and management of containers in development and production. It is a complete container management platform for Kubernetes, giving you the tools to successfully run Kubernetes anywhere. Rancher deploys the application to either a bare metal rack or a vSphere cluster in private clouds. Rancher supports the deployment of the application on various public cloud platforms, as shown in Figure-1. (1)

Rancher is better than its competitors, because it aligns perfectly with each step of the container orchestration strategy, as shown in Figure-2. Rancher works 100% on any cloud provider and can manage any Kubernetes service. This is very different from its competitors. (2)

Figure 2

Figure-3 illustrates the high-level architecture of Rancher. The figure depicts a Rancher Server installation that manages two downstream Kubernetes clusters: one created by RKE and another created by Amazon EKS (Elastic Kubernetes Service). For the best performance and security, Rancher recommends a dedicated Kubernetes cluster for the Rancher management server. Running user workloads on this cluster is not advised. After deploying Rancher, you can create or import clusters for running your workloads. We can manage both Rancher-launched Kubernetes clusters and hosted Kubernetes clusters through Rancher’s authentication proxy, as shown in Figure-3. (3)

Figure 3

We can install Rancher on a single node, or on a high-availability Kubernetes cluster. A high-availability Kubernetes installation is recommended for production. A Docker installation of Rancher is recommended only for development and testing purposes.

2. Deploying a Rancher server on AWS

2. a. Prerequisites

  • Amazon AWS Account: An Amazon AWS Account is required to create resources for deploying Rancher and Kubernetes in the AWS cloud.
  • Amazon AWS Access Key: Use this link to follow a tutorial to create an Amazon AWS Access Key if you don’t have one yet.
  • Install Terraform: We will use it as an “Infrastructure as code” to provision the server and cluster in Amazon AWS.
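Before moving on, a quick sanity check can confirm the tools are in place. The sketch below only reports what is on your PATH; it makes no changes:

```shell
# Report whether the CLI tools used in this walkthrough are installed;
# prints MISSING for anything not found on PATH
missing=0
for tool in terraform aws; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: installed"
  else
    echo "$tool: MISSING"
    missing=$((missing + 1))
  fi
done
echo "missing tools: $missing"
```

If anything is reported as MISSING, install it before continuing.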

2. b. Download Rancher Terraform file

Clone the Rancher Terraform file from my GitHub repo to a folder on your local computer, using the command below:

git clone https://github.com/cmakkaya/rancher-installing-and-using.git
Figure 4.

This Terraform Template (check Figure-4) creates a Rancher server on EC2 Instances.

2. c. Rancher Terraform file’s explanation and what to do in manual installation

Our Terraform template will create the following resources; if we prefer, we can also perform a manual installation by creating them ourselves in the corresponding AWS menus.

I. The necessary security groups, as depicted in Figure 5. (To create them manually in the AWS Console, see the link.)

"rke-alb-sg" allows HTTP (Port 80) and HTTPS (Port 443) connections from anywhere.
"rke-cluster-sg" allows;
* "Inbound" rules;
* HTTP protocol (TCP on port 80) from Application Load Balancer.
* HTTPS protocol (TCP on port 443) from any source that needs to use Rancher UI or API.
* TCP on port 6443 from any source that needs to use Kubernetes API server (ex. Jenkins Server).
* SSH on port 22 to any node IP that installs Docker (ex. Jenkins Server).
* "Outbound" rules;
* SSH protocol (TCP on port 22) to any node IP from a node created using Node Driver.
* HTTP protocol (TCP on port 80) to all IP for getting updates.
* HTTPS protocol (TCP on port 443) to `35.160.43.145/32`, `35.167.242.46/32`, `52.33.59.17/32` for catalogs of `git.rancher.io`.
* TCP on port 2376 to any node IP from a node created using Node Driver for Docker machine TLS port.
* All protocol on all port from "rke-cluster-sg" for self communication between Rancher `controlplane`, `etcd`, `worker` nodes.
Figure 5 (4).

II. The IAM policies below, named “rke-etcd-worker-policy.json” and “rke-controlplane-policy.json”. (To create them manually in the AWS Console, see the link.)

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances",
        "ec2:DescribeRegions",
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:GetRepositoryPolicy",
        "ecr:DescribeRepositories",
        "ecr:ListImages",
        "ecr:BatchGetImage"
      ],
      "Resource": "*"
    }
  ]
}
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:DescribeAutoScalingGroups",
        "autoscaling:DescribeLaunchConfigurations",
        "autoscaling:DescribeTags",
        "ec2:DescribeInstances",
        "ec2:DescribeRegions",
        "ec2:DescribeRouteTables",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeSubnets",
        "ec2:DescribeVolumes",
        "ec2:CreateSecurityGroup",
        "ec2:CreateTags",
        "ec2:CreateVolume",
        "ec2:ModifyInstanceAttribute",
        "ec2:ModifyVolume",
        "ec2:AttachVolume",
        "ec2:AuthorizeSecurityGroupIngress",
        "ec2:CreateRoute",
        "ec2:DeleteRoute",
        "ec2:DeleteSecurityGroup",
        "ec2:DeleteVolume",
        "ec2:DetachVolume",
        "ec2:RevokeSecurityGroupIngress",
        "ec2:DescribeVpcs",
        "elasticloadbalancing:AddTags",
        "elasticloadbalancing:AttachLoadBalancerToSubnets",
        "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer",
        "elasticloadbalancing:CreateLoadBalancer",
        "elasticloadbalancing:CreateLoadBalancerPolicy",
        "elasticloadbalancing:CreateLoadBalancerListeners",
        "elasticloadbalancing:ConfigureHealthCheck",
        "elasticloadbalancing:DeleteLoadBalancer",
        "elasticloadbalancing:DeleteLoadBalancerListeners",
        "elasticloadbalancing:DescribeLoadBalancers",
        "elasticloadbalancing:DescribeLoadBalancerAttributes",
        "elasticloadbalancing:DetachLoadBalancerFromSubnets",
        "elasticloadbalancing:DeregisterInstancesFromLoadBalancer",
        "elasticloadbalancing:ModifyLoadBalancerAttributes",
        "elasticloadbalancing:RegisterInstancesWithLoadBalancer",
        "elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer",
        "elasticloadbalancing:CreateListener",
        "elasticloadbalancing:CreateTargetGroup",
        "elasticloadbalancing:DeleteListener",
        "elasticloadbalancing:DeleteTargetGroup",
        "elasticloadbalancing:DescribeListeners",
        "elasticloadbalancing:DescribeLoadBalancerPolicies",
        "elasticloadbalancing:DescribeTargetGroups",
        "elasticloadbalancing:DescribeTargetHealth",
        "elasticloadbalancing:ModifyListener",
        "elasticloadbalancing:ModifyTargetGroup",
        "elasticloadbalancing:RegisterTargets",
        "elasticloadbalancing:SetLoadBalancerPoliciesOfListener",
        "iam:CreateServiceLinkedRole",
        "kms:DescribeKey"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}

III. An IAM role named “rke-role”. It also assigns the role to the rancher_worker and rancher_controlplane nodes. (To create it manually in the AWS Console, see the link.)

IV. An Application Load Balancer and Listener named “rancher-alb”. It also configures an ALB listener that redirects HTTP traffic on port 80 to HTTPS on port 443. (To create it manually in the AWS Console, see the link.)

Scheme             : internet-facing
IP address type    : ipv4

<!-- Listeners -->
Protocol           : HTTPS/HTTP
Port               : 443/80
Availability Zones : Select AZs of RKE instances
Target group       : `call-rancher-http-80-tg` target group

V. The target group below. (To create it manually in the AWS Console, see the link.)

Target type         : instance
Protocol            : HTTP
Port                : 80

<!-- Health Checks Settings -->
Protocol            : HTTP
Path                : /healthz
Port                : traffic port
Healthy threshold   : 3
Unhealthy threshold : 3
Timeout             : 5 seconds
Interval            : 10 seconds
Success             : 200

VI. An AWS ACM certificate, for HTTPS connections to the Rancher server. (To create it manually in the AWS Console, see the link.)

VII. An AWS Route53 “DNS A record” for the Rancher server, with the “rancher-alb” Application Load Balancer attached to it. (To create it manually in the AWS Console, see the link.)

VIII. Install and start Docker on Ubuntu, using the following script named “rancherdata.sh”.

For the manual installation, in this step you create the Ubuntu instance yourself, passing it the script named “rancherdata.sh” as user data.

Firstly, click on “Launch instance” in the EC2 menu of the AWS console. Launch an EC2 instance, choosing “Ubuntu Server 20.04 LTS (HVM) (64-bit x86)” with the “t3a.medium” type and a 16 GB root volume.

Then, choose the “rke-cluster-sg” security group, the “rke-role” IAM role, the “Name: Rancher-Cluster-Instance” tag, and your “key.pem” key pair. We created all of these above. Take note of the EC2 instance’s subnet ID.

Next, copy and paste the script below into “user data”.

Finally, click on the “Launch instance” button.

# Set hostname of instance
sudo hostnamectl set-hostname rancher-instance-1
# Update OS
sudo apt-get update -y
sudo apt-get upgrade -y
# Prepare to install Docker on Ubuntu
# Update the apt package index and install packages to allow apt to use a repository over HTTPS
sudo apt-get install -y \
apt-transport-https \
ca-certificates \
curl \
gnupg \
lsb-release
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
# Use the following command to set up the stable repository
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Update packages
sudo apt-get update

# Install and start Docker
# RKE is not compatible with the current Docker version (v23), hence we need to install an earlier version of Docker
sudo apt-get install -y docker-ce=5:20.10.23~3-0~ubuntu-focal docker-ce-cli=5:20.10.23~3-0~ubuntu-focal containerd.io docker-compose-plugin
sudo systemctl start docker
sudo systemctl enable docker

# Add ubuntu user to docker group
sudo usermod -aG docker ubuntu
newgrp docker
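If you prefer the AWS CLI to the console for this manual step, the launch can be sketched as below. Every ID here is a placeholder (hypothetical AMI, subnet, and security group IDs, and a generic key name); substitute your own values. The sketch only builds and prints the command for review:

```shell
# Hedged sketch of the console steps as a single AWS CLI call; all IDs are
# placeholders -- replace them with your own values before running
AMI_ID="ami-0123456789abcdef0"       # an Ubuntu Server 20.04 LTS (64-bit x86) AMI
SUBNET_ID="subnet-0123456789abcdef0"
SG_ID="sg-0123456789abcdef0"         # rke-cluster-sg
launch_cmd="aws ec2 run-instances \
  --image-id $AMI_ID \
  --instance-type t3a.medium \
  --key-name key \
  --subnet-id $SUBNET_ID \
  --security-group-ids $SG_ID \
  --iam-instance-profile Name=rke-role \
  --block-device-mappings 'DeviceName=/dev/sda1,Ebs={VolumeSize=16}' \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=Rancher-Cluster-Instance}]' \
  --user-data file://rancherdata.sh"
echo "$launch_cmd"   # printed for review; run it once the values are real
```

Save the user-data script as rancherdata.sh in the current folder before running the real command.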

2. d. Modifying and applying “Rancher Terraform file”

Go into the folder containing the Terraform files.

Note-1: We must modify appropriate variables in “variable.tf” file before launching the terraform file;

Line 6, “mykey”: you can use your existing key.pem in AWS, or, if you want, you can create a new key.pem for the Rancher server with the following command.

Log into the AWS console and create “rancher.pem” key-pair for Rancher Server on the EC2 menu, or you can use the AWS CLI command below to create “rancher.pem”;

aws ec2 create-key-pair --region us-east-1 --key-name rancher --query KeyMaterial --output text > ~/.ssh/rancher.pem
chmod 400 ~/.ssh/rancher.pem

Line 24, “domain-name”: your domain name in Route53.

Line 28, “rancher-subnet”: the subnet in your AWS VPC in which you will run the Rancher server.

Line 32, “hosted zone”: your hosted zone in Route53.

The system will work even if the other values are unchanged, but if you are working in a different region, for example, you can adjust the values below accordingly.

aws_region - the Amazon AWS region; choose the closest one instead of the default (us-east-1).

instance_type - the EC2 instance size; the minimum is t3a.medium, but t3a.large or t3a.xlarge could be used if within budget.
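Instead of editing variable.tf in place, you can also pin your values in a terraform.tfvars file, which Terraform loads automatically. This is only a sketch: the variable names below are assumptions taken from the notes above, so verify them against the actual variable.tf before using it.

```shell
# Write a terraform.tfvars with your own values (the variable names here
# are assumptions -- check them against variable.tf)
cat > terraform.tfvars <<'EOF'
mykey         = "rancher"
domain-name   = "rancher.example.com"
aws_region    = "us-east-1"
instance_type = "t3a.medium"
EOF
cat terraform.tfvars
```

Values set in terraform.tfvars override the defaults in variable.tf without modifying the repository files.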

Note-2: We must also modify the appropriate variables in the rancher.tf file before launching the instance:

Line 42, “vpc_id”: the AWS VPC in which you will run the Rancher server.

Line 70, “subnets”: the subnet in your AWS VPC in which you will run the Rancher server.

If you haven’t run the “aws configure” command on your computer before (meaning there is no AWS configuration on the local computer), you must enter values for “secret_key” and “access_key”:

aws_access_key - Amazon AWS Access Key

aws_secret_key - Amazon AWS Secret Key

After the settings are finished, run the terraform init command in the folder containing the Terraform files. To initiate the creation of the environment, run terraform apply --auto-approve, as shown in Figure 5.

Figure 5.

Then wait for output similar to the following, as shown in Figure 6; afterwards, go to EC2 > Instances in your AWS account, where you should see the “rancher server”, as shown in Figure 7.

Figure 6
Figure 7.

3. Creating Kubernetes cluster

Log into the Rancher server via an SSH connection using your key.pem, as shown in Figure 8.

Figure 8.

3. a. Installing kubectl

The Kubernetes command-line tool, kubectl, allows you to run commands against Kubernetes clusters. You can use kubectl to deploy applications, inspect and manage cluster resources, and view logs. (5)

Install kubectl, using the following command, as shown in Figure 9.

# Download the Amazon EKS vended kubectl binary
curl -o kubectl https://s3.us-west-2.amazonaws.com/amazon-eks/1.23.7/2022-06-29/bin/linux/amd64/kubectl
# Apply execute permissions to the binary.
chmod +x ./kubectl
# Move the kubectl binary to /usr/local/bin.
sudo mv kubectl /usr/local/bin

After you install kubectl, you can verify its version with the following command, as shown in Figure 9.

kubectl version --short --client
Figure 9

3. b. Installing eksctl (optional)

eksctl is a simple command line tool for creating and managing Kubernetes clusters on Amazon EKS. eksctl provides the fastest and easiest way to create a new cluster with nodes for Amazon EKS. (6)

eksctl commands are similar to kubectl commands. If you have already installed kubectl, it will be enough for you, so installing eksctl is optional.

Download and extract the latest release of eksctl with the following command.

curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp

Move the extracted binary to /usr/local/bin.

sudo mv /tmp/eksctl /usr/local/bin

Test that your installation was successful with the following command.

eksctl version

For example, you can use the following command to create an EKS cluster;

eksctl create cluster -f cluster.yaml

3. c. Installing RKE (Rancher Kubernetes Engine)

RKE (Rancher Kubernetes Engine) is a simple and efficient way to deploy, manage, and upgrade Kubernetes clusters anywhere.

Install RKE (a Kubernetes distribution and command-line tool) on the Rancher server, and check that it is working with the commands below, as shown in Figure 10. For RKE’s latest version, see the link.

curl -SsL "https://github.com/rancher/rke/releases/download/v1.4.2/rke_linux-amd64" -o "rke_linux-amd64"
sudo mv rke_linux-amd64 /usr/local/bin/rke
chmod +x /usr/local/bin/rke
rke --version
Figure 10

3. d. Creating cluster.yml file

Create “rancher-cluster.yml” with the following content to configure the RKE Kubernetes cluster, using your editor (mine is nano), and save it, as shown in Figure 11.

nodes:
  - address: 172.31.32.254           # Change to the private IP of the Rancher server
    internal_address: 172.31.32.254  # Change to the private IP of the Rancher server
    user: ubuntu
    role:
      - controlplane
      - etcd
      - worker

# ignore_docker_version: true

services:
  etcd:
    snapshot: true
    creation: 6h
    retention: 24h

ssh_key_path: ~/.ssh/cumhurkey.pem

# Required for external TLS termination with
# ingress-nginx v0.22+
ingress:
  provider: nginx
  options:
    use-forwarded-headers: "true"
Figure 11

Then, check its contents with the “cat” command, as shown in Figure 12.

Figure 12

3. e. Copying your key.pem into the Rancher server

Copy your key.pem into the Rancher server using your editor, then give the user read-only permission and remove all other permissions with the chmod 400 command, as shown in Figure 13. We will use this key to connect to the nodes.

Figure 13
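If you’d rather not paste the key through an editor, scp from your local machine works too. The server address below is a placeholder; this sketch only builds and prints the commands for review:

```shell
# Build the copy commands (RANCHER_IP is a placeholder -- use your Rancher
# server's public IP and your real key file names)
RANCHER_IP="203.0.113.20"
copy_cmd="scp -i ~/.ssh/rancher.pem ~/.ssh/key.pem ubuntu@$RANCHER_IP:~/.ssh/key.pem"
perm_cmd="ssh -i ~/.ssh/rancher.pem ubuntu@$RANCHER_IP 'chmod 400 ~/.ssh/key.pem'"
echo "$copy_cmd"
echo "$perm_cmd"
```

Run the printed commands from your local machine once the IP and file names are real.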

3. f. Creating RKE Kubernetes cluster using “rke” command

Run the “rke” command to set up the RKE Kubernetes cluster on the EC2 Rancher instance, as shown in Figure 14.

rke up --config ./rancher-cluster.yml

Notice: If you use another EC2 instance (e.g. a Jenkins server) to connect to the Rancher instance, you should add rules to the cluster security group allowing SSH (22) and TCP (6443) from that instance’s `IP/32` before running the “rke” command; otherwise you will get a connection error.

Figure 14
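The rules mentioned in the notice above can also be added with the AWS CLI. In this sketch, SG_ID and MY_IP are placeholders (your cluster security group ID and the connecting instance’s public IP); the loop only prints the commands for review:

```shell
# Print authorize commands for SSH (22) and the Kubernetes API (6443);
# SG_ID and MY_IP are placeholders -- substitute your own values
SG_ID="sg-0123456789abcdef0"
MY_IP="203.0.113.10"
rules=""
for port in 22 6443; do
  rules="$rules
aws ec2 authorize-security-group-ingress --group-id $SG_ID --protocol tcp --port $port --cidr $MY_IP/32"
done
echo "$rules"
```

Run each printed command once the IDs are real; the rules then appear under the security group’s inbound tab in the console.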

Create a folder named “.kube” and move the credential files into it, using the following commands, as shown in Figure 15.

Note: The folder’s name must be “.kube”, since kubectl looks there by default.

mkdir -p ~/.kube
mv ./kube_config_rancher-cluster.yml $HOME/.kube/config
mv ./rancher-cluster.rkestate $HOME/.kube/rancher-cluster.rkestate
chmod 400 ~/.kube/config

Check if the RKE Kubernetes Cluster was created successfully using the following command, as shown in Figure 15.

kubectl get nodes
kubectl get pods --all-namespaces
Figure 15

4. Installing Rancher App on RKE Kubernetes Cluster using Helm

4. a. Installing Helm on Rancher Server

Install Helm [version 3+] on Rancher Server using the following command, as shown in Figure 16. (7–8–9)

curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
helm version
Figure 16

Add “helm chart repositories” of Rancher using the following command, as shown in Figure 17.

helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
helm repo list
Figure 17

4. b. Creating a namespace for Rancher

Create a “namespace” for Rancher using the following command, and verify its result as shown in Figure 18.

kubectl create namespace cattle-system
Figure 18

4. c. Installing Rancher on RKE Kubernetes Cluster using Helm

Install Rancher on the RKE Kubernetes cluster using Helm. Don’t forget to change the DNS name to your own. We will install Rancher with a single replica using the following command, as shown in Figure 19.

# Change the hostname value below to your own DNS name
helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=ranchertr.cmakkaya-awsdevops.link \
  --set tls=external \
  --set replicas=1
Figure 19

Check if the Rancher Server is deployed successfully with the following command, as shown in Figure 20.

kubectl -n cattle-system get deploy rancher
kubectl -n cattle-system get pods
Figure 20

4. d. Resetting your admin password (if you forget it)

If the bootstrap pod is not initialized, or if you forget your admin password, you can use the command below to reset it.

kubectl --kubeconfig $KUBECONFIG -n cattle-system exec $(kubectl --kubeconfig $KUBECONFIG -n cattle-system get pods -l app=rancher | grep '1/1' | head -1 | awk '{ print $1 }') -- reset-password
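The command above relies on the KUBECONFIG environment variable being set. With the file layout from section 3.f, it should point at the copied kubeconfig:

```shell
# Point KUBECONFIG at the RKE cluster's kubeconfig (the path matches the
# `mv` commands in section 3.f)
export KUBECONFIG="$HOME/.kube/config"
echo "KUBECONFIG=$KUBECONFIG"
```

Set this in the same shell session before running the reset-password command.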

5. Connecting to Rancher and examining the created cluster structure and Rancher menus.

Enter the domain name into the browser; you should see the Rancher window, as shown in Figures 21–22.

Figure 21

To get the password, click on “for a Helm installation, run” to copy the command.

Figure 22

Then, paste it into the Rancher server terminal and press Enter, as shown in Figure 23.

Figure 23

Copy the password that appears and paste it into the password field in the Rancher window, then click on “Log in with Local User”, as shown in Figures 21 and 24.

Figure 24

We are now logged into Rancher, as shown in Figure 25.

Figure 25

Set a new password of at least 12 characters in the new window, check your server URL, tick the “By checking the box,…” checkbox, and click the Continue button, as shown in Figure 26.

Figure 26

The local cluster where the Rancher is installed should appear in the window that opens, as shown in Figure 27.

Figure 27

When you click on “Local” in Figure 27, the Rancher cluster information comes up, as shown in Figures 28–29.

Figure 28
Figure 29

You can check and edit your Rancher server URL in the Global Settings menu, as shown in Figure 30.

Figure 30

6. As a result

We learned the basic concepts of Rancher, then successfully created the Rancher server using a Terraform file, and also learned how to perform the manual installation. Finally, we deployed the Rancher Kubernetes management tool into a cluster on the Rancher server and examined the created cluster structure.

Rancher provided us with easy installation, zero-downtime Kubernetes upgrades, and centralized management of multiple clusters (easy workload management). We found that one of the best features of using Kubernetes with Rancher is multi-cloud deployment.

You can find the terraform file, policies, and other necessary files in my GitHub repo.

If you liked the article, I would be happy if you click on the Medium Follow button to encourage me to write more, and so you don’t miss future articles.

Your claps, follows, or subscriptions help my articles reach a broader audience. Thank you in advance.

For more info and questions, please contact me on Linkedin or Medium.

7. Next post

In the next post, we will deploy a microservices Java application to the cluster created with Rancher using Helm, as shown in Figure 31.

We will do the following in order;

  • Creating node template,
  • Creating AWS cloud credentials and assigning them to node template,
  • Creating a cluster consisting of 3 nodes using the Rancher “cluster management” menu.
  • Deploying a microservices Java application to the cluster using Helm.

We will do it all step by step.

Figure 31 - Next post, Deploying microservices java application to the cluster created with Rancher using Helm

I hope you enjoyed reading this article. Don’t forget to follow my Medium or LinkedIn account to be informed about new articles. I wish you growing success in the DevOps and the Cloud way.

Happy Clouding…

--


Cumhur Akkaya

✦ DevOps/Cloud Engineer, ✦ Believes in learning by doing, ✦ Dedication To Lifelong Learning, ✦ Tea and Coffee Drinker. ✦ Linkedin: linkedin.com/in/cumhurakkaya