Kubernetes Cluster on Docker Desktop: Express.js Deployment
Overview
Docker Desktop comes with a non-configurable single-node Kubernetes cluster that is handy for testing. This post makes use of that cluster to deploy an Express.js application. It covers a few key concepts that Kubernetes enables: deployments, services, scaling, rolling updates and rollbacks.
Setup
Kubernetes is turned off by default on Docker Desktop, so it has to be enabled first (under Settings > Kubernetes), which installs the images required to run it locally. Once that's done, the node can be verified with the kubectl get nodes command, which shows a control-plane node in the Ready state.
kubectl get nodes
Output:
NAME             STATUS   ROLES           AGE   VERSION
docker-desktop   Ready    control-plane   99s   v1.30.2
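If kubectl has already been used with other clusters, it is worth confirming that it is pointing at the local cluster first. Docker Desktop creates a context named docker-desktop, so the following should print exactly that:
kubectl config current-context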
Application setup
For the purpose of this post, I've set up an Express application using express-generator. The root route of the application prints the hostname to the console and also renders it on the web page. I've pushed the image to Docker Hub and it is available here: bimalpaudel/express-k8s:latest
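Before involving Kubernetes, the image can be sanity-checked with plain Docker. This is just a quick local test, assuming the app listens on port 3000 (the same port the deployment exposes later):
# Run the image locally and hit the root route
docker run --rm -d -p 3000:3000 bimalpaudel/express-k8s:latest
curl 127.0.0.1:3000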
Deployment
I ran the kubectl create deployment command to deploy the Express application.
kubectl create deployment express-k8s --image=bimalpaudel/express-k8s:latest
Output:
deployment.apps/express-k8s created
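The same deployment can also be managed declaratively. As a rough sketch, the imperative command above can generate a manifest with --dry-run (the file name here is arbitrary), which can then be versioned and applied:
# Write a Deployment manifest without creating anything in the cluster
kubectl create deployment express-k8s --image=bimalpaudel/express-k8s:latest --dry-run=client -o yaml > express-k8s.yaml
# Create (or update) the deployment from the manifest
kubectl apply -f express-k8s.yaml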
The deployment creates a pod, which hosts the container that runs the application. The pods can be listed with the get pods command.
kubectl get pods
Output:
NAME                           READY   STATUS    RESTARTS   AGE
express-k8s-575d6b657f-nnqhg   1/1     Running   0          56s
Similarly, the logs of the pod can be seen with kubectl logs <pod_name>.
kubectl logs express-k8s-575d6b657f-nnqhg
Output:
> express-k8s@0.0.0 start
> node ./bin/www
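To keep streaming the logs instead of taking a one-off snapshot, or to see events and container details when a pod misbehaves, these two commands help (using the pod name from above):
kubectl logs -f express-k8s-575d6b657f-nnqhg
kubectl describe pod express-k8s-575d6b657f-nnqhg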
Service
Next, I exposed the deployment so that the container can be accessed from outside the K8s internal network.
kubectl expose deployment express-k8s --type=NodePort --port=3000
Output:
service/express-k8s exposed
I then ran the kubectl get services command to check which node port the exposed service got mapped to.
kubectl get services
Output:
NAME          TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
express-k8s   NodePort    10.108.80.11   <none>        3000:31721/TCP   12s
kubernetes    ClusterIP   10.96.0.1      <none>        443/TCP          64m
As you can see, port 3000 of the application is mapped to node port 31721. The application can be accessed by going to localhost:31721 in the browser or with curl on the command line.
curl 127.0.0.1:31721
Output:
Host Name: express-k8s-575d6b657f-2k7m2
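As an aside, a NodePort is not the only way to reach the application from the host. For quick checks, kubectl can forward a local port straight to the service instead, without exposing a node port at all:
# Forward local port 3000 to the service (Ctrl+C to stop)
kubectl port-forward service/express-k8s 3000:3000
curl 127.0.0.1:3000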
Scaling
Currently, only one pod is running in my deployment. One of the best features of Kubernetes is its scaling capability. When traffic to a production application increases, it has to be scaled up to keep up with the demand. In Kubernetes this is achieved by creating replicas.
kubectl scale deployments/express-k8s --replicas=4
Output:
deployment.apps/express-k8s scaled
This scales the deployment from the current single replica to 4 replicas, which can be verified with get pods:
kubectl get pods
Output:
NAME                           READY   STATUS    RESTARTS   AGE
express-k8s-575d6b657f-2k7m2   1/1     Running   0          3m1s
express-k8s-575d6b657f-8wwvh   1/1     Running   0          36s
express-k8s-575d6b657f-gdgkj   1/1     Running   0          36s
express-k8s-575d6b657f-nffl9   1/1     Running   0          36s
Kubernetes services have an integrated load balancer that helps distribute the traffic. To check this, refresh the browser page a few times or open it in multiple tabs: the host name changes, showing that different pods are serving the requests.
# Multiple curl requests
curl 127.0.0.1:31721
Outputs:
#...
Host Name: express-k8s-575d6b657f-gdgkj
Host Name: express-k8s-575d6b657f-2k7m2
Host Name: express-k8s-575d6b657f-nffl9
#...
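Behind the scenes, the service keeps track of the healthy pod IPs as its endpoints and spreads the traffic across them. The current list can be inspected with:
kubectl get endpoints express-k8s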
The number of replicas can be changed, whether scaling up or down, using the same command.
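For example, scaling back down to two replicas would just be the following (the rest of this post keeps all four replicas running):
kubectl scale deployments/express-k8s --replicas=2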
Rolling update
Another important feature of Kubernetes is the capability to do rolling updates, i.e. updating the application while still not having any downtime. To show this here, I first changed my application code; this time it prints the current time along with the host name. I pushed a new version of the Docker image to Docker Hub and used the kubectl set image command to update the image.
kubectl set image deployments/express-k8s express-k8s=bimalpaudel/express-k8s:v1
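The progress of the rollout can also be watched directly; this command blocks until the new version is fully rolled out:
kubectl rollout status deployments/express-k8s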
Upon checking the status of the pods, the rolling update can be seen in action, with new containers being created while the old ones are gradually replaced.
kubectl get pods
Output:
NAME                           READY   STATUS              RESTARTS   AGE
express-k8s-574cdff545-bklkk   0/1     ContainerCreating   0          2s
express-k8s-574cdff545-t8fql   0/1     ContainerCreating   0          2s
express-k8s-575d6b657f-2k7m2   1/1     Running             0          12m
express-k8s-575d6b657f-8wwvh   1/1     Running             0          10m
Now curl returns the new output with the current time, and there was no downtime.
# Multiple curl requests
curl 127.0.0.1:31721
Outputs:
#...
Host Name: express-k8s-574cdff545-ms9v9, Current Time: 3:51:32 PM
Host Name: express-k8s-574cdff545-t8fql, Current Time: 3:51:31 PM
Host Name: express-k8s-574cdff545-bklkk, Current Time: 3:51:29 PM
#...
Rollbacks
The final Kubernetes concept for this post is the rollback. This is useful when you want to go back to a previous deployment version because of an issue in the latest image. For example, I am going to use the set image command to pull the v10 tag of the image, which doesn't exist. As expected, when I check the status of the pods, I can see the following errors:
kubectl get pods
Output:
NAME                           READY   STATUS             RESTARTS   AGE
express-k8s-56b65986c5-8vb2f   0/1     ImagePullBackOff   0          8s
express-k8s-56b65986c5-rcjrj   0/1     ErrImagePull       0          8s
This can be fixed by rolling back to the stable deployment with kubectl rollout undo deployments/express-k8s, which goes back to using the v1 image of the application.
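For completeness, this is roughly what the rollback looks like on the command line. rollout history lists the recorded revisions, and undo moves back to the previous one (a specific revision can also be targeted with --to-revision):
# List the deployment's revisions
kubectl rollout history deployments/express-k8s
# Roll back to the previous, working revision
kubectl rollout undo deployments/express-k8s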
(This post is a follow-up to Kubernetes with minikube.)