FastAPI, Postgres and Redis backend project deployed in a local Kubernetes cluster
Overview
I hosted my Fantasy Premier League dashboard's FastAPI backend in my locally hosted Kubernetes cluster. I ran into a DNS issue and an Apple software update change before I finally made it work.
Kubernetes Config file to laptop
While I obviously have SSH access to the Ubuntu server the cluster runs on, I didn't want to SSH in every time I needed to communicate with the cluster.
So for easy access, I copied the kubeconfig file to my Mac using scp, from where I can talk to the cluster with the following command structure.
kubectl --kubeconfig=<config-file> get nodes
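For reference, the copy itself was a one-liner along these lines. The remote path is illustrative and depends on where your setup keeps the admin kubeconfig, and depending on the distribution you may also need to edit the server: field in the copied file to use the server's LAN IP instead of 127.0.0.1:

# copy the kubeconfig from the Ubuntu server to the Mac (paths illustrative)
scp <user>@<server-ip>:~/.kube/config ./config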
Deployment
Now for the deployment itself. I picked this personal project, so first comes the backend with FastAPI, Redis and PostgreSQL. Instead of having a single file, I decided to go with separate .yaml files for each component, for my own understanding.
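For orientation, the six files described below sit together in a k8s/ directory, roughly like this:

k8s/
├── backend-deployment.yaml
├── backend-service.yaml
├── postgres-deploy.yaml
├── postgres-service.yaml
├── redis-deploy.yaml
└── redis-service.yaml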
backend-deployment.yaml
After installation, I played around with multiple replicas, but for this purpose I decided on just the one. I uploaded my backend image to Docker Hub, which is what the container runs. Lastly come the environment variables: one for Redis, one for Postgres directly, and one for SQLAlchemy.
Instead of managing it properly as the project grew, I decided to do the classic “Personal Project” thing and stick with separate variables for database connections.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fpl-backend
  labels:
    app: fpl-backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fpl-backend
  template:
    metadata:
      name: fpl-backend
      labels:
        app: fpl-backend
    spec:
      containers:
        - name: fpl-backend
          image: bimalpaudel/fpl-exp:latest
          env:
            - name: DATABASE_URL
              value: postgresql://postgres:postgres@postgres-service:5432/fpl_db
            - name: REDIS_URL
              value: redis://redis-service:6379/0
            - name: ALCHEMY_DB_URL
              value: postgresql+psycopg://postgres:postgres@postgres-service:5432/fpl_db
          imagePullPolicy: IfNotPresent
      restartPolicy: Always
backend-service.yaml
Kubernetes uses Services to expose applications beyond their pods or outside the cluster, as required. The backend service is pretty simple: targetPort is the port FastAPI listens on inside the pod, port is the port the Service exposes to the rest of the cluster, and nodePort is the port on the node itself, used to expose the application outside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: fpl-service
spec:
  selector:
    app: fpl-backend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8000
      nodePort: 30001
  type: NodePort
postgres-deploy.yaml
The important thing to remember here is that the environment variables set the database credentials, and they have to match what the backend's connection strings use. (For a cleaner way to handle these, see the Secret sketch after the file below.)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-dep
  labels:
    app: postgres-dep
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres-dep
  template:
    metadata:
      name: postgres-dep
      labels:
        app: postgres-dep
    spec:
      containers:
        - name: postgres-dep
          image: postgres:16-alpine
          env:
            - name: POSTGRES_USER
              value: postgres
            - name: POSTGRES_PASSWORD
              value: postgres
            - name: POSTGRES_DB
              value: fpl_db
          ports:
            - containerPort: 5432
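As an aside, the usual fix for hard-coded credentials like these is a Kubernetes Secret that both deployments reference. A minimal sketch, with the Secret name being my own illustrative choice rather than something from my actual setup:

apiVersion: v1
kind: Secret
metadata:
  name: postgres-credentials
type: Opaque
stringData:
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: postgres

Each container would then replace its literal env values with an envFrom entry pointing at postgres-credentials.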
postgres-service.yaml
Since the database doesn't have to be exposed outside the cluster, this is a ClusterIP service and only the target port is mapped to a cluster port. The service name and exposed port, along with the database credentials above, are what go into the environment variables in backend-deployment.yaml.
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  selector:
    app: postgres-dep
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
  type: ClusterIP
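To make the mapping just described explicit, every piece of the backend's DATABASE_URL comes from the two Postgres files:

DATABASE_URL = postgresql://<POSTGRES_USER>:<POSTGRES_PASSWORD>@<service name>:<service port>/<POSTGRES_DB>
             = postgresql://postgres:postgres@postgres-service:5432/fpl_db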
And lastly,
redis-deploy.yaml
Not much different from the postgres setup above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-dep
  labels:
    app: redis-dep
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis-dep
  template:
    metadata:
      name: redis-dep
      labels:
        app: redis-dep
    spec:
      containers:
        - name: redis-dep
          image: redis:7-alpine
          ports:
            - containerPort: 6379
redis-service.yaml
Similar to postgres-service.yaml, the service name and port have to match what is passed as environment variables in the backend deployment file.
apiVersion: v1
kind: Service
metadata:
  name: redis-service
spec:
  selector:
    app: redis-dep
  ports:
    - protocol: TCP
      port: 6379
      targetPort: 6379
  type: ClusterIP
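One detail worth knowing: the service name works as a hostname because of cluster DNS. Since none of these manifests set a namespace, everything lands in default, so the short name in REDIS_URL and the fully qualified form are equivalent:

redis://redis-service:6379/0
redis://redis-service.default.svc.cluster.local:6379/0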
Finally, I applied all the files using kubectl --kubeconfig=config apply -f k8s/deployment and so on. All the deployments and services were created, and every pod was running as expected.
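For completeness, the apply-and-check loop looked roughly like this; the exact paths depend on the directory layout shown earlier:

kubectl --kubeconfig=config apply -f k8s/
kubectl --kubeconfig=config get deployments,services,pods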
BUT
when I tried to navigate to 192.168.2.73:30001, the address where the backend was exposed, it didn't work.
Issues Encountered
Turns out, on Apple's macOS Sequoia, I had to allow my browser (Google Chrome) to find devices on the local network by navigating to System Settings → Privacy & Security → Local Network and toggling it on. Without that permission, the browser can't connect to the locally hosted server, even though a curl request to it still works.
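That makes curl a handy sanity check: if the following returns a response while the browser fails, the cluster side is fine and it's the macOS permission at fault.

curl http://192.168.2.73:30001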
After granting the permission, I could finally navigate to my API URLs.
I then opened a shell in the pod where the backend was running so I could run my Python script to fetch player data.
kubectl --kubeconfig=config exec -it <pod-name> -- /bin/sh
I ran into another issue there: I couldn't execute my scripts because of DNS problems, with the pod unable to resolve either external or internal addresses. As a result, I had to reset the cluster all over again.
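For future reference, a few checks that would have narrowed this down faster, assuming nslookup is available inside the image: test internal and external resolution from the pod, then check CoreDNS itself from the laptop.

# inside the pod: test internal and external resolution
nslookup kubernetes.default.svc.cluster.local
nslookup google.com

# from the laptop: check that the cluster's DNS pods are healthy
kubectl --kubeconfig=config get pods -n kube-system -l k8s-app=kube-dns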
(This is a direct follow-up to the Hosting Kubernetes cluster locally post.)