Setting Up a Kubernetes Cluster with kubeadm and Deploying an Application


In this guide, we will cover how to install a Kubernetes cluster on Ubuntu with kubeadm on cloud platforms like Amazon EC2, and how to deploy an application to the cluster.

What Are We Going to Build?


We will build a k8s cluster with one control-plane (CP) node, also called the k8s-master, and one worker (W) node, both running Ubuntu.

Prerequisites

Two Ubuntu EC2 instances are required: one for the control plane and one for the worker node, each reachable via SSH or AWS Session Manager.

Setting up the Worker Node


First, we'll launch an EC2 instance in a chosen region using a specified subnet. Once the instance is launched, we'll establish a connection to it either via SSH or AWS Session Manager.

Step 1: Set Hostname

Setting the hostname makes it easier to identify the node within the Kubernetes cluster. Adding an entry to /etc/hosts ensures that the hostname resolves to the correct IP address.

sudo hostnamectl set-hostname "k8s-node1.github.io"
echo "$(hostname -I | awk '{ print $1 }') $(hostname)" | sudo tee -a /etc/hosts
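The pipeline above can be previewed safely before touching /etc/hosts: `hostname -I` prints all of the host's IP addresses on one line, and awk keeps only the first. A sandbox check with made-up sample addresses:

```shell
# Simulated `hostname -I` output (sample addresses, not real ones)
ips="10.0.1.25 172.17.0.1"

# awk '{ print $1 }' keeps only the first (primary) address
primary_ip=$(echo "$ips" | awk '{ print $1 }')

# This is the line that would be appended to /etc/hosts
echo "$primary_ip k8s-node1.github.io"
```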
            

Step 2: Update and Install Packages

Updating and installing necessary packages ensures that the system has the required dependencies for running Kubernetes and related tools.

sudo apt-get update 
sudo apt-get install git curl unzip tree wget -y
            

Step 3: Disable Swap

Kubernetes requires swap to be disabled on all nodes in the cluster, so we turn it off now and comment out its /etc/fstab entry so it stays off after a reboot.

sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
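The sed expression above comments out any /etc/fstab line containing " swap ". It can be tried out on a throwaway file first (the entries below are made-up samples, not your real fstab):

```shell
# Throwaway copy of an fstab-style file (sample entries only)
cat > /tmp/fstab.demo <<'EOF'
UUID=abcd-1234 / ext4 defaults 0 1
/swapfile none swap sw 0 0
EOF

# Same sed expression as above: prefix every line containing " swap " with '#'
sed -i '/ swap / s/^\(.*\)$/#\1/g' /tmp/fstab.demo

# The swap line is now commented; the root filesystem line is untouched
cat /tmp/fstab.demo
```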
                

Step 4: Load Kernel Modules

Loading kernel modules ensures that necessary kernel modules are available for Kubernetes to function properly.

sudo tee /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF
    

# load kernel modules 
sudo modprobe overlay
sudo modprobe br_netfilter 
                

Step 5: Set Kernel Parameters

Setting kernel parameters configures the network and other settings required by Kubernetes.

sudo tee /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF


# Reload the above changes
sudo sysctl --system
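After `sysctl --system` reloads the settings, they can be confirmed by reading them back. For example, the forwarding flag is visible directly under /proc (a value of 1 means enabled):

```shell
# 1 = IP forwarding enabled, 0 = disabled
# (equivalently: sysctl net.ipv4.ip_forward)
cat /proc/sys/net/ipv4/ip_forward
```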
                

Step 6: Installing containerd

Installing containerd provides the container runtime needed by Kubernetes.

            
# Install containerd run time
sudo apt-get install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates


# Enable docker repository
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/docker.gpg
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"


# apt command to install containerd
sudo apt-get update
sudo apt-get install -y containerd.io
        

Step 7: Configuring containerd

Configuring containerd ensures that it uses systemd as the cgroup driver, which is required by Kubernetes.

            
# Configure containerd so that it starts using systemd as cgroup.
sudo containerd config default | sudo tee /etc/containerd/config.toml >/dev/null 2>&1
sudo sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/g' /etc/containerd/config.toml


# check the status of the containerd service
sudo systemctl status containerd
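The sed substitution above only flips the SystemdCgroup flag; everything else in config.toml is untouched. A sandbox check on a throwaway TOML fragment (a stand-in for the real /etc/containerd/config.toml, using the equivalent unescaped pattern):

```shell
# Minimal stand-in for the relevant section of /etc/containerd/config.toml
cat > /tmp/containerd-config.demo <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = false
EOF

# Same substitution as above: flip false to true for SystemdCgroup only
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /tmp/containerd-config.demo

# Shows: SystemdCgroup = true
grep SystemdCgroup /tmp/containerd-config.demo
```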
        

Step 8: Adding the Kubernetes repository

Adding the Kubernetes repository to apt sources allows installation of Kubernetes components using apt.

        
# Refer to this page before execution for other available versions: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

# Add apt repository for Kubernetes
sudo apt-get update

sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt-get update
    

Step 9: Installing Kubernetes Components

Installing Kubernetes components (kubelet, kubeadm, kubectl) provides the necessary tools for managing the Kubernetes cluster.

            
# Install Kubernetes components
sudo apt-get install -y kubelet kubeadm kubectl
        

Step 10: Holding Kubernetes Packages

Holding Kubernetes packages prevents them from being upgraded automatically, ensuring stability of the cluster.

            
# Prevent automatic upgrades for stability
sudo apt-mark hold kubelet kubeadm kubectl
        

Step 11: Checking Kubernetes Component Versions

Checking the versions of kubelet, kubeadm, and kubectl ensures that the correct versions are in use for the Kubernetes cluster setup.

            
# Verify the version
kubelet --version
kubeadm version
kubectl version --client
        

Setting up the Control Plane (CP)


Now, we'll launch an EC2 instance in a chosen region using a specified subnet. Once the instance is launched, we'll establish a connection to it either via SSH or AWS Session Manager.

Step 1: Set Hostname

Setting the hostname makes it easier to identify the node within the Kubernetes cluster. Adding an entry to /etc/hosts ensures that the hostname resolves to the correct IP address.

sudo hostnamectl set-hostname "k8s-cp.github.io"
echo "$(hostname -I | awk '{ print $1 }') $(hostname)" | sudo tee -a /etc/hosts
        

Step 2: Update and Install Packages

Updating and installing necessary packages ensures that the system has the required dependencies for running Kubernetes and related tools.

sudo apt-get update 
sudo apt-get install git curl unzip tree wget -y
        

Step 3: Disable Swap

Kubernetes requires swap to be disabled on all nodes in the cluster, so we turn it off now and comment out its /etc/fstab entry so it stays off after a reboot.

sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
            

Step 4: Load Kernel Modules

Loading kernel modules ensures that necessary kernel modules are available for Kubernetes to function properly.

sudo tee /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF


# load kernel modules 
sudo modprobe overlay
sudo modprobe br_netfilter 
            

Step 5: Set Kernel Parameters

Setting kernel parameters configures the network and other settings required by Kubernetes.

sudo tee /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF


# Reload the above changes
sudo sysctl --system
            

Step 6: Installing containerd

Installing containerd provides the container runtime needed by Kubernetes.

        
# Install containerd run time
sudo apt-get install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates


# Enable docker repository
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/docker.gpg
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"


# apt command to install containerd
sudo apt-get update
sudo apt-get install -y containerd.io
    

Step 7: Configuring containerd

Configuring containerd ensures that it uses systemd as the cgroup driver, which is required by Kubernetes.

        
# Configure containerd so that it starts using systemd as cgroup.
sudo containerd config default | sudo tee /etc/containerd/config.toml >/dev/null 2>&1
sudo sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/g' /etc/containerd/config.toml


# check the status of the containerd service
sudo systemctl status containerd
    

Step 8: Adding the Kubernetes repository

Adding the Kubernetes repository to apt sources allows installation of Kubernetes components using apt.

        
# Refer to this page before execution for other available versions: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

# Add apt repository for Kubernetes
sudo apt-get update

sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt-get update
    

Step 9: Installing Kubernetes Components

Installing Kubernetes components (kubelet, kubeadm, kubectl) provides the necessary tools for managing the Kubernetes cluster.

        
# Install Kubernetes components
sudo apt-get install -y kubelet kubeadm kubectl
    

Step 10: Holding Kubernetes Packages

Holding Kubernetes packages prevents them from being upgraded automatically, ensuring stability of the cluster.

        
# Prevent automatic upgrades for stability
sudo apt-mark hold kubelet kubeadm kubectl
    

Step 11: Checking Kubernetes Component Versions

Checking the versions of kubelet, kubeadm, and kubectl ensures that the correct versions are in use for the Kubernetes cluster setup.

        
# Verify the version
kubelet --version
kubeadm version
kubectl version --client
    

Step 12: Initialize the Kubernetes Cluster

Initializing the Kubernetes cluster sets up the control plane and prepares worker nodes to join the cluster.

        
# Initialize the Kubernetes cluster with kubeadm (run as the root user);
# tee shows the output and keeps a copy in a file
sudo kubeadm init --control-plane-endpoint=k8s-cp.github.io | tee /home/ubuntu/k8s-cluster.output

Note: After executing this command, save the output for future use; it contains the kubeadm join command for the worker nodes.

At the end of the command output, you will see the kubeadm join command; make a note of it, as it is needed to join worker nodes to the cluster.

Step 13: Set Up kubeconfig

Setting up kubeconfig allows the user to interact with the Kubernetes cluster using kubectl.

        
# Setting up kubeconfig for the regular (ubuntu) user
su - ubuntu

# Copy the admin kubeconfig into the user's home directory
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
    

Step 14: Setting KUBECONFIG

Exporting the Kubernetes configuration to the shell environment gives administrative access to the cluster.

# Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf

echo $KUBECONFIG

Step 15: Install the Calico Network Add-on

Installing the Calico network add-on provides networking capabilities to pods running on the Kubernetes cluster.

        
# Installing Calico network add-on
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml
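Before moving on, it can help to confirm that networking actually came up; pod names will vary per cluster, and the Calico pods can take a minute or two to reach Running:

```shell
# Watch for the calico-node and calico-kube-controllers pods to reach Running
kubectl get pods -n kube-system

# Nodes should move from NotReady to Ready once pod networking is up
kubectl get nodes
```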
    

Step 16: Check Cluster Information

Verify that the cluster and the network are ready.


# Check cluster-info
kubectl cluster-info

To check the network services:


# Check network
kubectl get svc

Step 17: Join Worker Nodes

Run the kubeadm join command from the kubeadm init output on each worker node EC2 instance to join it to the cluster.

# Example values only -- use the exact join command printed by your kubeadm init output
sudo kubeadm join 192.168.0.100:6443 --token abcdef.1234567890abcdef --discovery-token-ca-cert-hash sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
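The bootstrap token printed by kubeadm init expires after 24 hours by default. If it has lapsed, or the init output was lost, a fresh join command can be generated on the control-plane node:

```shell
# Run on the control-plane node; prints a complete kubeadm join command
kubeadm token create --print-join-command
```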
        

Switch back to the control-plane node and run the command below to verify.


# See the nodes that have joined the cluster
kubectl get nodes
        

These steps collectively set up a Kubernetes cluster on Ubuntu and join the worker node to it. The cluster is now ready for deploying applications.

Deploying Nginx Application on Kubernetes Cluster


  1. First, deploy the nginx application with two replicas:
     kubectl create deployment nginx-app --image=nginx --replicas=2
  2. Check the status of the nginx-app deployment:
     kubectl get deployment nginx-app
  3. Get the status of all pods:
     kubectl get pods
  4. Expose the deployment as a NodePort service on port 80:
     kubectl expose deployment nginx-app --type=NodePort --port=80
  5. View the service status to get the assigned NodePort:
     kubectl get svc nginx-app
  6. Once you have the NodePort, you can access the nginx application using the worker node's or control plane's IP address and the NodePort. Replace <node-ip-address> with the actual IP address and <NodePort> with the assigned port:
     http://<node-ip-address>:<NodePort>
  7. To delete the deployment:
     kubectl delete deployment nginx-app
  8. To delete the service:
     kubectl delete svc nginx-app

Deploy an artifact on Kubernetes Cluster


  1. Create the deployment manifest (file name: tomcat-deployment.yml):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: tomcat-deployment
    spec:
      selector:
        matchLabels:
          app: tomcat
      replicas: 2 # tells deployment to run 2 pods matching the template
      template:
        metadata:
          labels:
            app: tomcat
        spec:
          containers:
          - name: tomcat
            image: nandeesh10/tomcat:1.0.0
            ports:
            - containerPort: 80

  2. Create the service manifest (file name: tomcat-service.yaml):

    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: tomcat
      namespace: default
      labels:
        app: tomcat
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    spec:
      externalTrafficPolicy: Local
      ports:
        - name: http
          port: 80
          protocol: TCP
          targetPort: 80
      selector:
        app: tomcat
      type: LoadBalancer
    ...

  3. Alternatively, create a NodePort service for the tomcat deployment from the command line:
     kubectl create service nodeport tomcat --tcp=80:80
  4. Apply the deployment:
     kubectl apply -f tomcat-deployment.yml
  5. Retrieve deployment information:
     kubectl get deployment tomcat-deployment
  6. Get the ReplicaSets:
     kubectl get rs
  7. Get pod information:
     kubectl get pods
  8. Check the service:
     kubectl get svc
  9. Check the nodes:
     kubectl get nodes
  10. Once you have the NodePort, you can access the tomcat application using the worker node's or control plane's IP address and the NodePort. Replace <node-ip-address> with the actual IP address and <NodePort> with the assigned port:
      http://<node-ip-address>:<NodePort>
  11. Scale the deployment:
      kubectl scale --current-replicas=2 --replicas=3 deployment/tomcat-deployment
  12. Get the ReplicaSets again:
      kubectl get rs
  13. Get pod information:
      kubectl get pods

Application deployment in Kubernetes is complete, and all resources are now provisioned and operational. You can scale the replicas up as needed.