
Hosting a Kubernetes cluster locally on an Ubuntu server

Overview

While using Kubernetes with minikube and Docker Desktop is a good learning experience, I found it very limiting for a couple of reasons. Firstly, they are both single-node clusters and I had to run them on my personal laptop, which was not ideal. Secondly, they both provide a ready-made environment, so there wasn't much room to experiment. So I booted up my old laptop, installed Ubuntu Server on it, and here I am, trying to set up and run k8s locally.

Installation

Swap off

In a Kubernetes cluster, the kubelet (the primary node agent) doesn't start if swap is on. There are two options to deal with this: either disable swap or make the kubelet “tolerate” swap. I picked the straightforward option, which was to disable swap with: sudo swapoff -a

Note: The command above only disables swap temporarily; it comes back on every reboot. I deliberately left it that way for repetition-based learning.
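
If you would rather make the change permanent, a common approach (not what I did here, just a sketch that assumes the swap entry in /etc/fstab contains "swap" as a field) is to comment that entry out so it isn't mounted at boot:

# Comment out swap entries in /etc/fstab so swap stays disabled after a reboot
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab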

Container runtime

Kubernetes requires a container runtime to be able to run containers inside Pods. The documentation lists a few commonly used options: containerd, Docker Engine and CRI-O. I thought I would use Docker out of familiarity, but it requires the additional installation of cri-dockerd to work.

containerd, on the other hand, was the easier choice because it is available through apt in the packages managed by Docker (Docker Engine itself depends on it). So I decided to go with it. First, I added the Docker repository to the apt sources by running the following on the Ubuntu server.

# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to Apt sources:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

I then executed sudo apt-get install containerd.io to install the runtime. To verify the installation, sudo systemctl status containerd showed the service as active and running.

containerd.service - containerd container runtime
   Loaded: loaded (/usr/lib/systemd/system/containerd.service; enabled; preset: enabled)
   Active: active (running) since Sun 2024-10-24 08:06:11 UTC; 23s ago
   Docs: https://containerd.io
   Process: 6567 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
   Main PID: 6574 (containerd)
   Tasks: 9
   Memory: 13.5M (peak: 14.0M)
   CPU: 103ms
   CGroup: /system.slice/containerd.service
           └─6574 /usr/bin/containerd

Control group drivers

There are two options for cgroup drivers: cgroupfs and systemd. Because Ubuntu itself uses systemd, the systemd driver is recommended for both the container runtime and the kubelet; running two different cgroup managers can cause instability when resources are under pressure.
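
As an optional sanity check (just a sketch), you can confirm that systemd really is the init process on the host:

# PID 1 should report "systemd" on a standard Ubuntu Server install
ps -p 1 -o comm=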

When a cluster is created with kubeadm without specifying a cgroup driver, it defaults to systemd. So I only had to change the configuration for containerd to align all three: Ubuntu, the kubelet and the runtime.

First, I replaced the existing config file with containerd's default configuration:

containerd config default | sudo tee /etc/containerd/config.toml

I had to do this because the existing config file didn't include the cgroup setting shown in the documentation.

I then changed the SystemdCgroup parameter to true in the config file and restarted the service.

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
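
The same edit and restart can also be done non-interactively; this is just a sketch and assumes the freshly generated config still has SystemdCgroup = false:

# Flip the runc cgroup driver to systemd in the generated config
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml

# Restart containerd and confirm the setting
sudo systemctl restart containerd
grep SystemdCgroup /etc/containerd/config.toml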

Installing kubeadm, kubelet and kubectl

Now the fun part. These three are the core packages of any Kubernetes cluster, each with its own key role.

kubeadm is the tool that bootstraps a cluster, kubectl is the command-line tool that communicates with the cluster through its API, and the kubelet is the primary node agent that starts and manages Pods and containers.

Since all three come from Kubernetes' apt repository, I first had to download the repository's public signing key and add the repo to the apt sources.

# Download public signing key for k8s package repo
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

# Add K8s apt repo to the list of sources
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Update
sudo apt-get update

Finally I installed all three with the following command.

sudo apt-get install -y kubelet kubeadm kubectl
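
The official install guide also recommends pinning these packages so an unattended upgrade doesn't move them to a version the cluster isn't ready for; a quick version check then confirms the binaries are in place:

# Prevent apt from upgrading the Kubernetes packages automatically
sudo apt-mark hold kubelet kubeadm kubectl

# Quick sanity check that the tools are installed
kubeadm version
kubectl version --client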

Once the installation was complete, it was finally time to create a cluster.

Creating a cluster

A cluster is created with the kubeadm init <args> command, which accepts a number of arguments depending on your requirements.

My only requirement was to use the --pod-network-cidr flag to specify the range of IP addresses that Pods within the cluster use to communicate with each other.

There are a number of third-party network add-ons that provide this. I picked Calico (because I liked their documentation) and hence passed the pod network flag as follows.

sudo kubeadm init --pod-network-cidr=192.168.0.0/16

After a minute or so, the following is printed in the shell to let you know that the control plane has initialized.

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

I followed the instructions from the shell output and copied the config file so I could access the cluster.
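
As an optional check (just a sketch), kubectl should now be able to reach the API server; note that the node will typically report NotReady until a pod network add-on is installed:

# Confirm kubectl can talk to the new control plane
kubectl cluster-info
kubectl get nodes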

Install pod network add-on

Now it was time to install the pod network add-on I picked earlier. Following Calico's documentation, I had to run the following two commands: one to install the Tigera operator and custom resource definitions, and another to create the custom resources that make it work.

# Tigera Calico operator and custom resource definitions
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.29.1/manifests/tigera-operator.yaml

# Install necessary custom resources
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.29.1/manifests/custom-resources.yaml

I then used the following command to watch the pods come up and confirm everything was working as expected.

watch kubectl get pods -n calico-system

Output:
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-677cd79677-4zfd2   1/1     Running   0          20m
calico-node-fxq2j                          1/1     Running   0          20m
calico-typha-9f7f58856-lz9x8               1/1     Running   0          20m
csi-node-driver-c729z                      2/2     Running   0          20m

Remove control plane taints

By default, Pods cannot be scheduled on control plane nodes for security reasons. Since this is a single-node cluster, the control plane node has to schedule Pods and host containers itself, so the taint has to be removed with the following command.

kubectl taint nodes --all node-role.kubernetes.io/control-plane-

Finally, the single-node Kubernetes cluster is up and ready to schedule Pods.

bimal@server:~$ kubectl get nodes
NAME    STATUS   ROLES           AGE   VERSION
server   Ready    control-plane   22h   v1.31.3
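
To double-check that the taint is gone (optional, and assuming the node name is server, as in the output above):

# Should show "Taints:             <none>" once the taint is removed
kubectl describe node server | grep Taints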

Conclusion

This post went through the process of hosting k8s locally using kubeadm and creating a single-node Kubernetes cluster. Hosting projects on the cluster will be covered in separate posts.

(This post is a follow-up to Kubernetes Cluster on Docker Desktop)

Last updated: 11/25/2024
Tags: Kubernetes, k8s, Docker, Containerd, kubectl, Kubeadm, Kubelet