Deploying a FastAPI application using Kubernetes

In the previous article, we developed an employee management application. Today, we will learn how to deploy that application to Kubernetes, using a local cluster (minikube) for the deployment. Below are the steps we'll follow (please note that all of these steps target the Ubuntu operating system, and the installation may differ slightly on other operating systems):

  1. Tool Installation
  2. Docker Image Creation
  3. Creating Kubernetes Files
  4. Configuring Our Chart for Helm
  5. Streamline Kubernetes Development with Skaffold
  6. Minikube Interaction

Before you continue reading this post, carry out the following steps:

  1. Clone the project repository
  2. Install Docker

Step 1 - Tool Installation

We will use the following tools to deploy our application:

  • Docker - An open-source project that automates the deployment of applications as portable, self-sufficient containers that can run on the cloud or locally.
  • Kubernetes - Also known as K8s, an open-source software for automating deployment, scaling, and management of containerized applications.
  • kubectl - The Kubernetes command-line tool, which allows you to run commands against Kubernetes clusters.
  • Helm - Helps you manage Kubernetes applications — Helm Charts help you define, install, and upgrade even the most complex Kubernetes application.
  • Minikube - Local Kubernetes, focusing on making it easy to learn and develop for Kubernetes.
  • Skaffold - Handles the workflow for building, pushing, and deploying your application, allowing you to focus on what matters most: writing code.

The only tool you need to install on your own is Docker; the rest can be installed with the script from the repository you cloned earlier.

Navigate to the scripts directory and run the following command:

sh install-tools.sh

After performing all these operations, our environment is ready to work.
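
If you want to sanity-check the installation, you can print the version of each tool. The commands below assume the tools were installed system-wide and are on your PATH:

docker --version
kubectl version --client
helm version
minikube version
skaffold version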

Step 2 - Docker Image Creation

Now, we will create a Docker image which will allow us to containerize our application. Refer to the file dockerfiles/Dockerfile, where there are comments explaining what each line of code does.

# Use the official Python image from Docker Hub as the base image
FROM python:3.11-slim-bullseye AS base

# Define a new stage build based on the base image
FROM base AS build

# Set the working directory in the image to /app. All subsequent instructions are relative to this path.
WORKDIR /app

# Copy the application's pyproject.toml and poetry.lock (if exists) to the working directory
COPY ./pyproject.toml ./poetry.lock* /app/

# Install poetry, create a virtual environment in the project directory, configure poetry and install project dependencies
# Setting virtualenvs.in-project true makes poetry create the virtual environment in the project's root directory
# poetry install without arguments reads the pyproject.toml file from the current project and installs the dependencies specified
RUN pip install poetry \
    && poetry config virtualenvs.in-project true \
    && poetry install

# Define a new stage based on the base image
FROM base

# Set system environments for Python to ensure smooth running
# PYTHONUNBUFFERED=1 ensures that Python output is logged to the terminal where it can be consumed by logging utilities
# PYTHONDONTWRITEBYTECODE=1 ensures that Python does not try to write .pyc files which we do not need in this case
# PYTHONPATH and PATH variables are set to include the .venv directory
ENV PYTHONUNBUFFERED=1 \
    PYTHONDONTWRITEBYTECODE=1 \
    PYTHONPATH="${PYTHONPATH}:/app" \
    PATH="/app/.venv/bin:$PATH"

# Set the working directory in this stage to /app
WORKDIR /app

# Copy the virtual environment .venv from the build stage to the current stage
COPY --from=build /app/.venv .venv

# Copy scripts from the project to the Docker image
COPY scripts scripts

# Copy application code to the Docker image
COPY employee employee

# Define the command to run the application using bash
# The script entrypoint.sh is expected to start the application
ENTRYPOINT ["bash", "./scripts/entrypoint.sh"]        

You can test the Docker image using the commands below:

docker build . -f dockerfiles/Dockerfile -t employee-app:latest
docker run -it -p 8000:8000 employee-app:latest

Open http://localhost:8000/docs in your browser and check that the API is working.
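
You can also check it from the terminal. FastAPI serves its OpenAPI schema at /openapi.json by default, so a simple curl (assuming the container is still running and mapped to port 8000) should return JSON:

curl http://localhost:8000/openapi.json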

Step 3 - Creating Kubernetes Files

The next step in our process is to prepare all the necessary Kubernetes resources. Let's dive in. The first one will be a Kubernetes Service.

A Service in Kubernetes is a crucial element that provides stable access to pods (the basic units of the Kubernetes computational model). Although individual pods may be created and destroyed, a Service ensures uninterrupted network access to the set of pods it represents.

A Service can also act as a load balancer, distributing network traffic among different pods.

Here's a YAML file that defines a Kubernetes Service for our application:

apiVersion: v1
kind: Service
metadata:
  name: {{ .Chart.Name }}-svc
spec:
  selector:
    app: {{ .Chart.Name }}
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 8000

Key elements of the service are:

  • metadata: We set the service name based on the name of the Helm chart we're using for deployment.
  • spec: It defines the specifications of our service.
  • selector: Specifies which pods the service should direct network traffic to. In this case, these are the pods with the app label, whose value corresponds to the name of our Helm chart.
  • ports: Defines the ports at which the service will be available. In this case, the service will be available on port 80, which will be directed to port 8000 on the pods.

As a result, this service allows access to our application on port 80, redirecting traffic to port 8000 of our pods.
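
Once the chart from the later steps is deployed, you can inspect the rendered Service with kubectl. The name below follows the {{ .Chart.Name }}-svc pattern and assumes the chart is named employee-api, as in Step 4:

kubectl get svc employee-api-svc
kubectl describe svc employee-api-svc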

The next stage is to prepare the Deployment component. A Deployment file in Kubernetes defines how the application's pods (instances) should be run within a Kubernetes cluster. It contains information such as how many pod replicas should be created, which Docker image should be used to create these pods, which ports are exposed, what resources are required, and more.


apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Chart.Name }}-deploy
  labels:
    app: {{ .Chart.Name }}-{{ .Values.app.version }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ .Chart.Name }}
  template:
    metadata:
      labels:
        app: {{ .Chart.Name }}
    spec:
      containers:
        - name: app
          image: {{ .Values.docker.image }}
          imagePullPolicy: {{ .Values.docker.pullPolicy | default "Always" | quote }}

A quick breakdown of the file structure:

  • apiVersion: apps/v1: Specifies the API version being used.
  • kind: Deployment: The type of object to be created. In this case, we want to create a Deployment.
  • metadata: Contains metadata such as the name and labels of the Deployment.
  • name: {{ .Chart.Name }}-deploy: The name of the Deployment.
  • labels: app: {{ .Chart.Name }}-{{ .Values.app.version }}: Labels assigned to the Deployment.
  • spec: The specifications of the Deployment.
  • replicas: 1: Specifies the number of replicas (pods) that should be running.
  • selector: Selects pods that are part of this Deployment based on their labels.
  • template: Defines the template for the pods that will be created by this Deployment.
  • containers: Contains the specifications of the containers to be run in the pods.
  • name: app: The name of the container.
  • image: {{ .Values.docker.image }}: The Docker image to be used.
  • imagePullPolicy: {{ .Values.docker.pullPolicy | default "Always" | quote }}: The image pull policy, i.e. when Kubernetes should pull the image.

This Deployment file is a template that is rendered by Helm during the installation of the Helm chart. It uses the values defined in the values.yaml file or provided during the installation of the chart. For instance, ".Values.docker.image" is the name of the Docker image to be used, and ".Values.docker.pullPolicy" defines the image pull policy.
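
If you want to see the manifests exactly as Helm will render them, helm template substitutes the values without touching the cluster. The commands below assume the chart lives in the chart directory created in the next step:

helm template employee-api chart
# override a single value on the command line
helm template employee-api chart --set docker.image=employee-api:latest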

Step 4 - Configuring Our Chart for Helm

We have all the necessary files to run Kubernetes. Now let's configure our chart for Helm, the package manager for Kubernetes, which facilitates defining, installing, and updating applications on Kubernetes clusters.

A Helm Chart is the equivalent of a package in other package managers like apt or yum. Each chart is a collection of files that predefine the resources needed to run an application, service, test, etc., on a Kubernetes cluster.

A chart consists of several components:

  • Chart.yaml file: Contains basic information about the chart, such as its name, version, description, etc.
  • values.yaml file: Contains default values for our templates. These values can be overridden during chart installation.
  • Templates (templates/): These are files that generate Kubernetes manifests, such as deployments, services, pods, etc.
  • README.md: Optionally, a chart can contain a README file with information on how to install and use it.
  • requirements.yaml: Optionally, a chart can declare dependencies on other charts in this file (in newer charts that use apiVersion v2, dependencies are listed directly in Chart.yaml).

In summary, Helm charts package applications for Kubernetes, enabling easy distribution and management. In the chart folder, let's create a templates directory and add the two files in which we defined the Service and the Deployment. Then, let's create a Chart.yaml file, which names our chart:


apiVersion: v1
name: employee-api
version: 0.1.0
description: "Employee api chart repo"

The last step is to add a values.yaml file that will contain the variables used by Helm during the rendering of templates:


app:
  version: 0.0.1

docker:
  image: employee-api # Must match skaffold.yaml Artifact name
  pullPolicy: Never # Always use local image
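
These defaults can be overridden without editing the file. For example, if you were installing the chart manually rather than through Skaffold (a purely illustrative scenario here), Helm's --set flag, or an additional -f values file, takes precedence over values.yaml:

helm install employee-api chart --set docker.image=employee-api:latest --set app.version=0.0.2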

In summary, our project structure looks like this:


chart/
├── templates/
│   ├── deployment.yaml
│   └── service.yaml
├── Chart.yaml
└── values.yaml

At this stage, we've finished preparing our Chart. The next step will be to deploy it on a local cluster created using Minikube and Skaffold.
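
Before moving on, it is worth letting Helm validate the chart; helm lint catches most structural mistakes (run it from the project root, where the chart directory lives):

helm lint chart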

Step 5 - Streamline Kubernetes Development with Skaffold

Skaffold is a powerful tool that simplifies the development workflow for Kubernetes applications. With Skaffold, you can automate the build, deployment, and testing processes, making it easier to iterate on your code and accelerate development cycles. In this step, we'll explore how Skaffold works and how it enhances the Kubernetes development experience. At the heart of Skaffold is the skaffold.yaml configuration file:


apiVersion: skaffold/v2beta29
kind: Config
build:
  local:
    push: false

  tagPolicy:
    sha256: {}

  artifacts:
  - image: employee-api
    context: "."
    docker:
      dockerfile: dockerfiles/Dockerfile
      noCache: false

deploy:
  helm:
    releases:
    - name: employee-api
      chartPath: chart

portForward:
- resourceType: service
  resourceName: employee-api-svc
  port: 80
  address: 0.0.0.0
  localPort: 8000

This file defines the build and deployment settings for your application. Let's take a closer look at the different sections of the skaffold.yaml file:

Build:

  • The local section configures builds on the local Docker daemon; its push flag determines whether the built images are pushed to a container registry (we keep them local for Minikube).
  • The tagPolicy defines how image tags are generated; with the sha256 policy, each build is referenced by the SHA256 digest of the image contents.
  • The artifacts list defines the images to be built. Each artifact specifies the image name, the build context, and the Dockerfile location.
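
If you want to verify just this part of the configuration, Skaffold can run the build step on its own:

# build the artifacts defined in skaffold.yaml without deploying them
skaffold build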

Deploy:

  • The helm section configures the deployment using Helm charts.
  • The releases list contains the release configurations. In this case, we have a single release named "employee-api" that points to the chart directory.

Port forward:

  • The portForward section sets up port forwarding for easy access to services running in the cluster.
  • The resourceType specifies the Kubernetes resource type, which is a service in this example.
  • The resourceName identifies the specific service to forward ports to.
  • The port specifies the port inside the cluster, while localPort maps it to a port on the local machine.
  • The address determines the network address to bind the port forwarding.
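
For reference, this is roughly what Skaffold sets up for you. Doing it by hand with kubectl, once the release is deployed, would look like the command below (the service name again follows our employee-api chart, and kubectl binds to 127.0.0.1 by default rather than 0.0.0.0):

kubectl port-forward service/employee-api-svc 8000:80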

To leverage Skaffold's capabilities, you can use:

skaffold dev

This command starts the continuous development loop, where Skaffold monitors your project files for changes.

Whenever a change is detected, Skaffold automatically rebuilds the image, updates the deployment, and syncs the changes to the cluster. It provides a seamless development experience, allowing you to focus on writing code and seeing the changes in real-time.

For example, with the "skaffold dev" command and a local Minikube cluster, you can quickly spin up your entire application stack. Skaffold handles the deployment of your application's services and manages the container images efficiently. With live reloading and rapid iteration, you can validate your changes instantly.
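
If you prefer a one-off deployment instead of the watch loop, Skaffold also provides skaffold run and skaffold delete:

# build and deploy once, without watching for changes
skaffold run
# remove everything that Skaffold deployed
skaffold delete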

In conclusion, Skaffold is a valuable tool for Kubernetes development, automating the build and deployment processes while providing an efficient workflow. By using Skaffold, developers can streamline their development experience, enabling faster iterations and easier testing. Whether you're working on a small project or a complex microservices architecture, Skaffold can greatly simplify your Kubernetes development journey.

So why not give Skaffold a try and experience the joy of rapid Kubernetes development?

Step 6 - Minikube Interaction

After running "skaffold dev", we can access our application at http://localhost:8000/docs, where we will see the application running in the Kubernetes cluster. Additionally, we can inspect the contents of the Kubernetes cluster in two ways:

  • Using the command line
  • Using the Kubernetes dashboard

To use the command line, we use the

minikube kubectl -- <any kubectl command>

command. Below is an example image:

Terminal output
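
For example, here are a few commands you might run this way (the deployment name follows the {{ .Chart.Name }}-deploy pattern from our templates):

minikube kubectl -- get pods
minikube kubectl -- get deployments
minikube kubectl -- logs deployment/employee-api-deploy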

Alternatively, if we are not comfortable with the kubectl command line, we can use the Kubernetes dashboard. Run the command minikube dashboard and wait for the browser to open the dashboard automatically. If it doesn't open, copy the URL printed in the terminal into your browser; it will look like http://127.0.0.1:<port>/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/.
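
If you only need the address (for example, to open the dashboard in a different browser), minikube can print it instead of launching one:

minikube dashboard --url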

Below is an example image:

kubernetes dashboard

Summary

That's it! You have successfully deployed a FastAPI application to Kubernetes. As you can see, it's not as difficult as it may seem. Remember, Kubernetes and Docker are powerful tools. In this blog post, I have demonstrated only the basic elements that will help you understand the fundamentals and pave the way for further exploration.

If you have any questions, we're here to help. We encourage you to clone the code from the repository and continue your learning journey on your own. Good luck!