Building a Kubernetes Cluster using kubeadm

In this article, we will look at how to create a three-node Kubernetes cluster using kubeadm

Unni P
4 min readApr 26, 2023

Introduction

  • kubeadm is a tool used to create Kubernetes clusters
  • It automates cluster creation by bootstrapping the control plane, joining the nodes, and so on
  • It follows the Kubernetes release cycle
  • It is an open-source tool maintained by the Kubernetes community

Prerequisites

  • Create three Ubuntu 20.04 LTS instances [control-plane, node-1, node-2]
  • Each instance must have at least 2 CPUs and 2 GB of RAM
  • Networking must be enabled between instances
  • Required ports must be allowed between instances
  • Swap must be disabled on instances
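Before moving on, it can help to sanity-check each instance against these prerequisites. The snippet below is a minimal preflight check reading from /proc; the port numbers in the comments are the defaults documented for kubeadm clusters.

```shell
# Preflight check: run on each instance before installing anything
echo "CPUs:   $(nproc)"                                               # expect >= 2
echo "RAM MB: $(awk '/MemTotal/{print int($2/1024)}' /proc/meminfo)"  # expect >= 2048
echo "Swap devices (none expected):"
tail -n +2 /proc/swaps                                                # empty output means swap is off
# Ports that must be reachable between instances (kubeadm defaults):
#   control-plane: 6443 (API server), 2379-2380 (etcd), 10250 (kubelet),
#                  10257 (controller-manager), 10259 (scheduler)
#   workers:       10250 (kubelet), 30000-32767 (NodePort services)
```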

Steps

  • Set up unique hostnames on all instances [control-plane, node-1, node-2]
    Once the hostnames are set, log out of the current session and log back in for the changes to take effect
$ sudo hostnamectl set-hostname control-plane
$ sudo hostnamectl set-hostname node-1
$ sudo hostnamectl set-hostname node-2
  • Update the hosts file on all instances [control-plane, node-1, node-2] to enable communication via hostnames
$ sudo vi /etc/hosts

172.31.12.122 control-plane
172.31.7.52 node-1
172.31.10.184 node-2
  • Disable swap on all instances [control-plane, node-1, node-2]; if a swap entry is present in the fstab file, comment out that line
$ sudo swapoff -a

$ sudo vi /etc/fstab
# comment out swap entry
  • Set up containerd as our container runtime on all instances [control-plane, node-1, node-2]; to do that, we first need to load some kernel modules and modify system settings
$ cat << EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
$ sudo modprobe overlay

$ sudo modprobe br_netfilter
$ cat << EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
$ sudo sysctl --system
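After reloading, you can confirm the settings took effect by reading them back. A quick check (note that the bridge keys only exist once br_netfilter is loaded):

```shell
# Each value should print 1 on a correctly prepared node
cat /proc/sys/net/ipv4/ip_forward
cat /proc/sys/net/bridge/bridge-nf-call-iptables 2>/dev/null \
  || echo "br_netfilter not loaded"
```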
  • Once the kernel modules are loaded and the system settings are modified, we can install containerd on all instances [control-plane, node-1, node-2]
$ sudo apt update

$ sudo apt install -y containerd
  • Once installed, generate a default configuration file for containerd on all instances [control-plane, node-1, node-2] and restart the service
$ sudo mkdir -p /etc/containerd

$ sudo containerd config default | sudo tee /etc/containerd/config.toml
$ sudo systemctl restart containerd
  • Install some prerequisite packages on all instances [control-plane, node-1, node-2] for configuring the Kubernetes apt repository
$ sudo apt update

$ sudo apt install -y apt-transport-https ca-certificates curl
  • Download the Google Cloud public signing key and configure the Kubernetes apt repository on all instances [control-plane, node-1, node-2]
$ sudo mkdir /etc/apt/keyrings

$ sudo curl -fsSLo /etc/apt/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg

$ echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
  • Install the kubeadm, kubelet and kubectl tools and hold their versions on all instances [control-plane, node-1, node-2]
$ sudo apt update

$ sudo apt install -y kubeadm=1.27.0-00 kubelet=1.27.0-00 kubectl=1.27.0-00

$ sudo apt-mark hold kubeadm kubelet kubectl
  • Initialize the cluster by executing the command below on the [control-plane] instance
$ sudo kubeadm init --pod-network-cidr 192.168.0.0/16 --kubernetes-version 1.27.0
  • Once the initialization is completed, set up our access to the cluster on the [control-plane] instance
$ mkdir -p $HOME/.kube

$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
  • Verify our cluster by listing the nodes on the [control-plane] instance
    The node is in the NotReady state because we haven’t set up networking yet
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
control-plane NotReady control-plane 36s v1.27.0
  • Install the Calico network add-on on the [control-plane] instance and verify the node status
$ kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.25.1/manifests/calico.yaml
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
control-plane Ready control-plane 2m51s v1.27.0
  • Once networking is enabled, join the worker nodes to the cluster
    Get the join command from the [control-plane] instance
$ kubeadm token create --print-join-command
kubeadm join 172.31.12.122:6443 --token 6t3r2c.5zzsgtekwwltofwt --discovery-token-ca-cert-hash sha256:2f1c4115125ec62af8ae6ab2648277ace3a625ebc0281e85ca8145e0e9077ee4
  • Once the join command is copied from the [control-plane] instance, execute it on the [node-1, node-2] instances
$ sudo kubeadm join 172.31.12.122:6443 --token 6t3r2c.5zzsgtekwwltofwt --discovery-token-ca-cert-hash sha256:2f1c4115125ec62af8ae6ab2648277ace3a625ebc0281e85ca8145e0e9077ee4
  • Verify our cluster from the [control-plane] instance; all nodes should be in the Ready state
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
control-plane Ready control-plane 7m5s v1.27.0
node-1 Ready <none> 91s v1.27.0
node-2 Ready <none> 17s v1.27.0
  • Deploy an Nginx pod, expose it as a ClusterIP service from the [control-plane] instance, and verify its status
$ kubectl run nginx --image=nginx --port=80 --expose
service/nginx created
pod/nginx created
$ kubectl get pod nginx -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx 1/1 Running 0 30s 192.168.84.129 node-1 <none> <none>

$ kubectl get svc nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx ClusterIP 10.111.228.20 <none> 80/TCP 39s
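To confirm the service actually answers, you can curl it from a short-lived pod inside the cluster. This is a quick sketch assuming the service was created in the default namespace; `curlimages/curl` is just one convenient client image.

```shell
# Launch a throwaway pod, curl the nginx service by its cluster DNS name,
# then let kubectl delete the pod on exit (--rm)
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -s http://nginx.default.svc.cluster.local
```

If everything is wired up, this prints the default Nginx welcome page.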
