In the previous article, we developed an employee management application. Today, we will learn how to deploy it to Kubernetes using a local cluster (minikube). Below are the steps we'll follow (note that these steps target the Ubuntu operating system; installation may differ slightly on other systems):
Before you continue reading this post, carry out the following steps:
We will use the following tools to deploy our application:
- Docker - to build the application image
- Minikube - to run a local Kubernetes cluster
- Helm - to package and install the application on the cluster
- Skaffold - to automate building and deploying during development
The only tool you need to install yourself is Docker; the rest can be installed with the script from the previously provided repository.
Navigate to the scripts directory and run the following command:
> sh install-tools.sh
After performing all these operations, our environment is ready to work.
Now, we will create a Docker image which will allow us to containerize our application. Refer to the file dockerfiles/Dockerfile, where there are comments explaining what each line of code does.
# Use the official Python image from Docker Hub as the base image
FROM python:3.11-slim-bullseye AS base
# Define a new stage build based on the base image
FROM base AS build
# Set the working directory in the image to /app. All subsequent instructions are relative to this path.
WORKDIR /app
# Copy the application's pyproject.toml and poetry.lock (if exists) to the working directory
COPY ./pyproject.toml ./poetry.lock* /app/
# Install poetry, configure it, and install the project dependencies
# Setting virtualenvs.in-project true makes poetry create the virtual environment in the project's root directory (.venv)
# --no-root installs only the dependencies, not the application itself, since the application code is copied into the final stage separately
RUN pip install poetry \
    && poetry config virtualenvs.in-project true \
    && poetry install --no-root
# Define a new stage based on the base image
FROM base
# Set system environments for Python to ensure smooth running
# PYTHONUNBUFFERED=1 ensures that Python output is logged to the terminal where it can be consumed by logging utilities
# PYTHONDONTWRITEBYTECODE=1 ensures that Python does not try to write .pyc files which we do not need in this case
# PYTHONPATH and PATH variables are set to include the .venv directory
ENV PYTHONUNBUFFERED=1 \
PYTHONDONTWRITEBYTECODE=1 \
PYTHONPATH="${PYTHONPATH}:/app" \
PATH="/app/.venv/bin:$PATH"
# Set the working directory in this stage to /app
WORKDIR /app
# Copy the virtual environment .venv from the build stage to the current stage
COPY --from=build /app/.venv .venv
# Copy scripts from the project to the Docker image
COPY scripts scripts
# Copy application code to the Docker image
COPY employee employee
# Define the command to run the application using bash
# The script entrypoint.sh is expected to start the application
ENTRYPOINT ["bash", "./scripts/entrypoint.sh"]
You can test the Docker image using the commands below:
docker build . -f dockerfiles/Dockerfile -t employee-app:latest
docker run -it -p 8000:8000 employee-app:latest
Go to the link http://localhost:8000/docs and check if the API is working.
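If you prefer to check from a script rather than the browser, here is a small hypothetical helper (not part of the project; the URL and timeout are assumptions) that polls the endpoint until the container answers:

```python
# Hypothetical smoke-test helper: polls a URL until the server answers,
# so the check after `docker run` can be scripted instead of done by hand.
import time
import urllib.error
import urllib.request

def wait_for_http(url: str, timeout: float = 30.0, interval: float = 1.0) -> bool:
    """Return True once `url` answers with an HTTP status below 500."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=interval) as resp:
                if resp.status < 500:
                    return True
        except urllib.error.HTTPError as err:
            # The server answered, even if with a client error
            if err.code < 500:
                return True
        except OSError:
            # Connection refused or timed out: container not ready yet
            pass
        time.sleep(interval)
    return False

# Example usage against the container started above:
#   wait_for_http("http://localhost:8000/docs")
```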
The next step in our process is the preparation of all the necessary Kubernetes elements. Let's dive in. The first element will be a service in Kubernetes.
Services in Kubernetes are a crucial element that provides stable access to pods (the basic unit of the Kubernetes computational model). Although individual pods may be started and stopped, a service ensures uninterrupted network access to the set of pods it represents.
A service can also serve load balancing functions, distributing network traffic among different pods.
Here's a YAML file that defines a Kubernetes Service for our application:
apiVersion: v1
kind: Service
metadata:
  name: {{ .Chart.Name }}-svc
spec:
  selector:
    app: {{ .Chart.Name }}
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 8000
Key elements of the service are:
- selector - matches pods labeled app: {{ .Chart.Name }}, telling the service which pods to route traffic to
- port - the port on which the service is exposed inside the cluster (80)
- targetPort - the port on which the application listens inside the pod (8000)
As a result, this service allows access to our application on port 80, redirecting traffic to port 8000 of our pods.
The next stage is to prepare the Deployment component. A Deployment file in Kubernetes defines how the application should be distributed across pods (instances) within a Kubernetes cluster. It contains information such as how many pod replicas should be created, which Docker images should be used to create them, which ports are exposed, what resources are required, and more.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Chart.Name }}-deploy
  labels:
    app: {{ .Chart.Name }}-{{ .Values.app.version }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ .Chart.Name }}
  template:
    metadata:
      labels:
        app: {{ .Chart.Name }}
    spec:
      containers:
        - name: app
          image: {{ .Values.docker.image }}
          imagePullPolicy: {{ .Values.docker.pullPolicy | default "Always" | quote }}
A quick breakdown of the file structure:
- replicas - the number of pod instances to run (one here)
- selector.matchLabels - tells the Deployment which pods it manages, by matching their labels
- template - the pod specification used to create each replica
- containers - the container to run, with its Docker image and the imagePullPolicy that controls when the image is re-pulled
This Deployment file is a template that is rendered by Helm during the installation of the Helm chart. It uses the values defined in the values.yaml file or provided during the installation of the chart. For instance, ".Values.docker.image" is the name of the Docker image to be used, and ".Values.docker.pullPolicy" defines the image update policy.
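To build intuition for this rendering step, here is a toy Python sketch (an illustration only, not how Helm is actually implemented) that substitutes chart values into a template the way "{{ .Chart.Name }}" and "{{ .Values.docker.image }}" are substituted:

```python
# Toy stand-in for Helm's template rendering: replaces {{ ... }} markers
# with values looked up from the chart metadata and values.yaml.
# The names below are taken from this article's chart.
import re

chart = {
    "Chart": {"Name": "employee-api"},
    "Values": {"docker": {"image": "employee-api"}},
}

template = "name: {{ .Chart.Name }}-svc\nimage: {{ .Values.docker.image }}"

def render(template: str, context: dict) -> str:
    def lookup(match: re.Match) -> str:
        # Walk the dotted path, e.g. "Values.docker.image"
        node = context
        for part in match.group(1).split("."):
            node = node[part]
        return str(node)
    return re.sub(r"\{\{\s*\.([\w.]+)\s*\}\}", lookup, template)

print(render(template, chart))
# prints:
# name: employee-api-svc
# image: employee-api
```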
We have all the necessary files to run Kubernetes. Now let's configure our chart for Helm, the package manager for Kubernetes, which facilitates defining, installing, and updating applications on Kubernetes clusters.
A Helm Chart is the equivalent of a package in other package managers like apt or yum. Each chart is a collection of files that predefine the resources needed to run an application, service, test, etc., on a Kubernetes cluster.
A chart consists of several components:
- Chart.yaml - metadata about the chart (name, version, description)
- values.yaml - default configuration values used when rendering the templates
- templates/ - the Kubernetes manifest templates (here, our service and deployment)
In summary, Helm charts package applications for Kubernetes, enabling easy distribution and management. In the chart folder, let's create a templates folder and add the two files in which we defined the service and the deployment. Then, let's create a Chart.yaml file, which names and versions our chart:
apiVersion: v1
name: employee-api
version: 0.1.0
description: "Employee api chart repo"
The last step is to add a values.yaml file that will contain the variables used by Helm during the rendering of templates:
app:
  version: 0.0.1
docker:
  image: employee-api # Must match the artifact name in skaffold.yaml
  pullPolicy: Never # Always use the local image
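As a side note, these values can be overridden at install time with Helm's --set flag (e.g. helm install --set docker.pullPolicy=Always ...). The toy Python sketch below (an illustration only, not Helm's code) shows the idea of such a dotted-path override:

```python
# Toy illustration of how a `--set key.subkey=value` override replaces a
# default from values.yaml. The keys mirror this article's values.yaml.
values = {
    "app": {"version": "0.0.1"},
    "docker": {"image": "employee-api", "pullPolicy": "Never"},
}

def apply_set(values: dict, override: str) -> dict:
    """Apply one `key.subkey=value` override to a nested dict."""
    path, _, new = override.partition("=")
    *parents, leaf = path.split(".")
    node = values
    for key in parents:
        node = node.setdefault(key, {})
    node[leaf] = new
    return values

apply_set(values, "docker.pullPolicy=Always")
print(values["docker"]["pullPolicy"])
# prints: Always
```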
In summary, our project structure looks like this:
chart/
├── templates/
│ ├── deployment.yaml
│ └── service.yaml
├── Chart.yaml
└── values.yaml
At this stage, we've finished preparing our Chart. The next step will be to deploy it on a local cluster created using Minikube and Skaffold.
Skaffold is a powerful tool that simplifies the development workflow for Kubernetes applications. With Skaffold, you can automate the build, deployment, and testing processes, making it easier to iterate on your code and accelerate development cycles. In this article, we'll explore how Skaffold works and how it can enhance your Kubernetes development experience. At the heart of Skaffold is the skaffold.yaml configuration file:
apiVersion: skaffold/v2beta29
kind: Config
build:
  local:
    push: false
  tagPolicy:
    sha256: {}
  artifacts:
    - image: employee-api
      context: "."
      docker:
        dockerfile: dockerfiles/Dockerfile
        noCache: false
deploy:
  helm:
    releases:
      - name: employee-api
        chartPath: chart
portForward:
  - resourceType: service
    resourceName: employee-api-svc
    port: 80
    address: 0.0.0.0
    localPort: 8000
This file defines the build and deployment settings for your application. Let's take a closer look at the different sections in the skaffold.yaml file:
Build: Skaffold builds the employee-api image locally from dockerfiles/Dockerfile (push: false means it is never pushed to a remote registry), tagging images by their content digest (the sha256 tag policy).
Deploy: Skaffold installs the Helm release employee-api from the local chart directory.
Port forward: Skaffold forwards port 80 of the employee-api-svc service to port 8000 on the host, so the application is reachable at http://localhost:8000.
To leverage Skaffold's capabilities, you can use:
skaffold dev
This command starts the continuous development loop, where Skaffold monitors your project files for changes.
Whenever a change is detected, Skaffold automatically rebuilds the image, updates the deployment, and syncs the changes to the cluster. It provides a seamless development experience, allowing you to focus on writing code and seeing the changes in real-time.
For example, with the "skaffold dev" command and a local Minikube cluster, you can quickly spin up your entire application stack. Skaffold handles the deployment of your application's services and manages the container images efficiently. With live reloading and rapid iteration, you can iterate faster and validate your changes instantly.
In conclusion, Skaffold is a valuable tool for Kubernetes development, automating the build and deployment processes while providing an efficient workflow. By using Skaffold, developers can streamline their development experience, enabling faster iterations and easier testing. Whether you're working on a small project or a complex microservices architecture, Skaffold can greatly simplify your Kubernetes development journey.
So why not give Skaffold a try and experience the joy of rapid Kubernetes development?
After running "skaffold dev", we can access our application at http://localhost:8000/docs, where we will see the application running in the Kubernetes cluster. Additionally, we can check the contents of the Kubernetes cluster in two ways:
To use the command line, we use the
minikube kubectl -- <any kubectl command>
command (note the "--" separating minikube's own arguments from the kubectl ones). For example, "minikube kubectl -- get pods" lists the pods in the cluster. Below is an example image:
Alternatively, if we are not familiar with the kubectl command, we can use the Kubernetes dashboard. Run the command minikube dashboard and wait for the browser to open automatically with the dashboard. If it doesn't open, copy the URL that the command prints into your browser; it will look like http://127.0.0.1:33925/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ (the port number will differ on your machine).
Below is an example image:
That's it! You have successfully deployed a FastAPI application to Kubernetes. As you can see, it's not as difficult as it may seem. Remember, Kubernetes and Docker are powerful tools. In this blog post, I have demonstrated only the basic elements that will help you understand the fundamentals and pave the way for further exploration.
If you have any questions, we're here to help. We encourage you to clone the code from the repository and continue your learning journey independently. Good luck!