How to set up a Kubernetes cluster with kubeadm on Ubuntu 20.04

Prerequisites:

  1. Control and worker nodes OS: Ubuntu 20.04
  2. Container runtime: containerd
  3. Kubernetes version 1.23
  4. Testbed scheme: one control node (k8s-control) and two worker nodes (k8s-worker-1, k8s-worker-2)


Set hostnames for nodes (optional):

  1. Log in to the control node and set a user-friendly hostname:
    sudo hostnamectl set-hostname k8s-control
  2. Log in to the worker node 1 and set a user-friendly hostname:
    sudo hostnamectl set-hostname k8s-worker-1
  3. Log in to the worker node 2 and set a user-friendly hostname:
    sudo hostnamectl set-hostname k8s-worker-2
  4. For the hostname changes to take effect, log out and log back in; you can verify the change as shown below.
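
A quick way to confirm the new hostname took effect (an optional check, run on each node after logging back in):

    hostnamectl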

Set networking:

To let the control and worker nodes reach each other by hostname, edit the /etc/hosts file on every node (for example with sudo vi /etc/hosts) and add the following entries:

[control_node_private_ip] [control_node_hostname]
[worker_node1_private_ip] [worker_node1_hostname]
[worker_node2_private_ip] [worker_node2_hostname]
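
For illustration, assuming private addresses 10.0.0.10, 10.0.0.11 and 10.0.0.12 (placeholders, substitute your own), the entries would look like this:

    10.0.0.10 k8s-control
    10.0.0.11 k8s-worker-1
    10.0.0.12 k8s-worker-2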

Enable modules and configure networking:

  1. On the control node, configure the overlay and br_netfilter kernel modules to be loaded (required by containerd):
    cat << EOF | sudo tee /etc/modules-load.d/containerd.conf
    overlay
    br_netfilter
    EOF
  2. Enable the modules (control node):
    sudo modprobe overlay
    sudo modprobe br_netfilter
  3. Create the 99-kubernetes-cri.conf file with the sysctl settings Kubernetes networking requires (control node):
    cat << EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
    net.bridge.bridge-nf-call-iptables  = 1
    net.ipv4.ip_forward                 = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    EOF
  4. Reload the configuration file:
    sudo sysctl --system
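
As an optional sanity check, you can confirm the modules are loaded and the sysctl values are active:

    lsmod | grep -E 'overlay|br_netfilter'
    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward net.bridge.bridge-nf-call-ip6tables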

Containerd runtime installation and configuration (control node):

  1. Install containerd runtime:
    sudo apt-get update && sudo apt-get install -y containerd
  2. Create configuration directory for containerd:
    sudo mkdir -p /etc/containerd
  3. Generate default configuration file:
    sudo containerd config default | sudo tee /etc/containerd/config.toml
  4. Restart containerd:
    sudo systemctl restart containerd
  5. Disable swap, which Kubernetes requires to be off:
    sudo swapoff -a
  6. Check the fstab file and make sure no swap entry will re-enable swap after a reboot:
    sudo cat /etc/fstab
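
If /etc/fstab still contains a swap line, one way to comment it out in place (a sketch; review your fstab before running it) is:

    # comment out any fstab line that mounts swap so it stays disabled after a reboot
    sudo sed -i '/[[:space:]]swap[[:space:]]/ s/^/#/' /etc/fstab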

Install Kubernetes (control node):

  1. Install required packages for Kubernetes:
    sudo apt-get update && sudo apt-get install -y apt-transport-https curl
  2. Add the GPG key for the Kubernetes package repository:
    curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
  3. Add the Kubernetes repository entry to the apt sources list:
    cat << EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
    deb https://apt.kubernetes.io/ kubernetes-xenial main
    EOF
  4. Update the packages:
    sudo apt-get update
  5. Install Kubernetes components:
    sudo apt-get install -y kubelet=1.23.0-00 kubeadm=1.23.0-00 kubectl=1.23.0-00
  6. Hold the installed packages back from automatic upgrades, so the components can only be updated manually:
    sudo apt-mark hold kubelet kubeadm kubectl
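
To confirm the expected versions were installed, an optional check:

    kubeadm version
    kubelet --version
    kubectl version --client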

Install Kubernetes on worker nodes

To install Kubernetes on the worker nodes, repeat the steps above, from the networking setup through the Kubernetes installation, exactly as on the control node. To speed this up you can use the Terminator terminal to type the commands on all nodes in parallel.

Initialize the Kubernetes cluster on the control node

  1. On the control node, initialize the Kubernetes cluster:
    sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --kubernetes-version 1.23.0
    On success, kubeadm reports that the control plane has initialized and prints the join command for the worker nodes.
  2. Set up the kubeconfig:
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
  3. Check that you can reach the cluster:
    kubectl get nodes
    at this point only the control node is listed, and it stays NotReady until the network plugin is installed in the next step.
  4. Set up the Kubernetes networking by installing the Calico network plugin:
    kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
  5. Get the join token for the cluster:
    kubeadm token create --print-join-command
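
Before joining the workers you can optionally verify that the control-plane and Calico pods are running:

    kubectl get pods -n kube-system
    kubectl get nodes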

Join worker nodes to the cluster

  1. On the worker nodes, run the join command printed in the previous step as root:
    sudo kubeadm join [private_ip_of_control_node]:6443 --token [token] --discovery-token-ca-cert-hash [hash]
  2. On success, kubeadm reports that the node has joined the cluster.
  3. Go to the control node and check if nodes have joined the cluster:
    kubectl get nodes
    on success all three nodes are listed; wait until they all report the Ready status.
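
As an optional smoke test (the deployment name nginx-test below is arbitrary), you can schedule a small workload and confirm that pods are placed on the worker nodes:

    kubectl create deployment nginx-test --image=nginx --replicas=2
    kubectl get pods -o wide
    kubectl delete deployment nginx-test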


Notes:

  1. The br_netfilter module is required to enable transparent masquerading and to facilitate Virtual Extensible LAN (VxLAN) traffic for communication between Kubernetes pods across the cluster. You can check whether it is loaded with lsmod | grep br_netfilter.
  2. An overlay filesystem presents a filesystem that is the result of overlaying one filesystem on top of another. Overlay Filesystem
  3. Use the modprobe command to add or remove modules on Linux. The command works intelligently and adds any dependent modules automatically. The kernel uses modprobe to request modules. How To Use The Modprobe Command In Linux - phoenixNAP
  4. /etc/sysctl.conf - Configuration file for setting system variables
  5. swapoff disables swapping on the specified devices and files. When the -a flag is given, swapping is disabled on all known swap devices and files (as found in /proc/swaps or /etc/fstab). Linux swapon and swapoff command.
  6. Your Linux system's filesystem table, aka fstab, is a configuration table designed to ease the burden of mounting and unmounting file systems to a machine. It is a set of rules used to control how different filesystems are treated each time they are introduced to a system. An introduction to the Linux /etc/fstab file
  7. apt-mark hold is used to mark a package as held back, which will prevent the package from being automatically installed, upgraded or removed. apt-mark - show, set and unset various settings for a package
  8. Calico enables Kubernetes workloads and non-Kubernetes or legacy workloads to communicate seamlessly and securely. About Calico


References:

  1. How do I run the same linux command in more than one tab/shell simultaneously?
  2. Terminator: getting started
  3. How to manually install Kubernetes cluster on AWS EC2 instances