Capturing FastAPI server access logs using Loki and visualizing them on Grafana

Introduction

FastAPI is a high-performance web framework for building APIs with Python. One of its many features is its built-in logging capabilities. FastAPI uses the standard logging module in Python to log information about requests, responses, and errors.

However, as your application grows in complexity and you scale horizontally on a distributed environment, it’s essential to have a good logging system in place to track errors and debug issues. One of the most efficient ways to log is to use a centralized logging solution. In this tutorial, we’ll cover how to capture FastAPI server logs using Loki and visualize them on Grafana using Docker Compose for deployment.

Loki is an open-source log aggregation system designed to handle large volumes of log data. It is highly scalable, ingests data in real time, and can centralize logs from multiple sources and make them available for analysis.

Grafana is an open-source analytics and visualization platform that is designed to help you understand your data. Grafana can be used to visualize metrics and logs in real-time, making it an ideal tool for analyzing data from multiple sources.

Prerequisites

To follow this tutorial, you'll need Docker and Docker Compose installed on your system.

Step 1: Configuring FastAPI to send logs to Loki

First, we’ll modify our FastAPI application to send its logs to Loki. To do this, we’ll use the python-logging-loki library.

Create a new file named requirements.txt to list the libraries the application needs:

fastapi==0.91.0
python-logging-loki==0.3.1

Create a new file named main.py with the following code:

import logging
from os import getenv
from fastapi import FastAPI
from multiprocessing import Queue
from logging_loki import LokiQueueHandler

app = FastAPI()

loki_logs_handler = LokiQueueHandler(
    Queue(-1),
    url=getenv("LOKI_ENDPOINT"),
    tags={"application": "fastapi"},
    version="1",
)

uvicorn_access_logger = logging.getLogger("uvicorn.access")
uvicorn_access_logger.addHandler(loki_logs_handler)

@app.get("/")
async def root():
    return {"message": "Hello World"}

This code sets up a FastAPI server by creating a FastAPI instance with a single route for the root URL ("/") via the @app.get() decorator, which returns a simple JSON response. The access logs generated by the uvicorn workers are emitted under the logger name uvicorn.access, so we attach our LokiQueueHandler to that logger; it sends logs to Loki asynchronously from a separate thread. We pass Queue(-1) as the first parameter to create an unbounded queue, since we can't predict how many log records will need to be buffered. The LOKI_ENDPOINT environment variable is passed to the url parameter to specify the address and port of the Loki instance, which we'll configure later in the docker-compose file. The tags parameter attaches labels to each log entry, which we'll use later to query the logs in Grafana.
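LokiQueueHandler follows the standard library's queue-handler pattern: records are enqueued on the request thread, and a listener thread forwards them to the real sink. A minimal sketch of that pattern using only the stdlib, with an in-memory capture handler standing in for Loki:

```python
import logging
from logging.handlers import QueueHandler, QueueListener
from queue import Queue

# Records are enqueued on the caller's thread; Queue(-1) means unbounded,
# matching the tutorial's Queue(-1).
log_queue = Queue(-1)

captured = []

class CaptureHandler(logging.Handler):
    """Stands in for the Loki sink: just records each message."""
    def emit(self, record):
        captured.append(record.getMessage())

# The listener drains the queue on its own thread, which is how the
# handler avoids blocking request handling on network I/O.
listener = QueueListener(log_queue, CaptureHandler())
listener.start()

logger = logging.getLogger("uvicorn.access")
logger.setLevel(logging.INFO)
logger.addHandler(QueueHandler(log_queue))
logger.info("GET / HTTP/1.1 200")

listener.stop()  # flushes remaining records before the thread exits
```

Because the logging call only enqueues the record, a slow or unreachable Loki instance does not slow down request handling.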

Next, create a new file named Dockerfile to containerize our application with the following code:

FROM tiangolo/uvicorn-gunicorn-fastapi:latest
COPY main.py /app
COPY requirements.txt /app

RUN pip install -r /app/requirements.txt

This Dockerfile builds a new image based on the official tiangolo/uvicorn-gunicorn-fastapi image, a production-ready image for FastAPI. We copy main.py and requirements.txt into the /app directory and install the required dependencies with pip.

Step 2: Setting up Loki and Grafana with Docker Compose

Create a new file named docker-compose.yml with the following code:

version: "3"

services:
  fastapi:
    build: .
    ports:
      - "8000:80"
    depends_on:
      - loki
    environment:
      - LOKI_ENDPOINT=http://loki:3100/loki/api/v1/push

  loki:
    image: grafana/loki:latest
    ports:
      - "3100:3100"

  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    depends_on:
      - loki
    volumes:
      - grafana_data:/var/lib/grafana

volumes:
  grafana_data:

This Docker Compose configuration file defines three services: fastapi, loki, and grafana.

The fastapi service is defined as:

fastapi:
    build: .
    ports:
      - "8000:80"
    depends_on:
      - loki
    environment:
      - LOKI_ENDPOINT=http://loki:3100/loki/api/v1/push

In this section, we define the fastapi service. We specify that we want to build the Docker image using the Dockerfile in the current directory (.), and map port 8000 on the host to port 80 in the container. We specify that this service depends on the loki service so that it can send logs to it.

We also define an environment variable LOKI_ENDPOINT, which specifies the address and port of the loki instance.
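One caveat: python-logging-loki needs a real URL, so if you run main.py outside Compose (where LOKI_ENDPOINT is unset), getenv() returns None. A defensive sketch using a hypothetical helper — the function name and fallback are assumptions, not part of the tutorial's code:

```python
import logging
import os

def loki_endpoint():
    """Return the endpoint injected by docker-compose, or None when
    running outside Compose (e.g. a plain `uvicorn main:app`)."""
    return os.getenv("LOKI_ENDPOINT")

endpoint = loki_endpoint()
uvicorn_access_logger = logging.getLogger("uvicorn.access")
if endpoint is None:
    # Fallback: leave access logs local rather than handing
    # LokiQueueHandler a None URL.
    uvicorn_access_logger.addHandler(logging.NullHandler())
```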

  loki:
    image: grafana/loki:latest
    ports:
      - "3100:3100"

In this section, we define the loki service. We specify that we want to use the official grafana/loki:latest image, and map port 3100 on the host to port 3100 in the container.

grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    depends_on:
      - loki
    volumes:
      - grafana_data:/var/lib/grafana

In this section, we define the grafana service. We use the official grafana/grafana:latest image, map port 3000 on the host to port 3000 in the container, and declare a dependency on the loki service. The grafana_data named volume is mounted at /var/lib/grafana so that Grafana's state (data sources, dashboards, users) survives container restarts.

Step 3: Starting the services

Now that we have everything set up, we can start the services using the following command:

docker-compose up

This will start the fastapi, loki, and grafana services; you should see logs from all three in your terminal. Before proceeding, open http://localhost:8000/ in your browser a few times to generate some access logs.

Step 4: Visualizing the logs in Grafana

Once the services are running, you can access the Grafana UI by opening a web browser and navigating to http://localhost:3000. You should see the Grafana login page.

The default username and password for Grafana is admin/admin. After logging in, you'll be prompted to create a new password.

Once you’re logged in, you’ll be taken to the Grafana home page. To add the Loki data source, click Add your first data source, then search for and select Loki. On the configuration page, under the HTTP section, set the URL to http://loki:3100.

At the bottom of the screen, press the Save & Test button; you should see a "Data source connected and labels found" message.
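Adding the data source by hand works, but Grafana can also pick it up automatically at startup via a provisioning file. As a sketch (assuming the file is mounted into /etc/grafana/provisioning/datasources/ inside the grafana container, e.g. with an extra volumes entry in docker-compose.yml):

```yaml
apiVersion: 1
datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://loki:3100
```

With this in place, the Loki data source is already configured on first login.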

Go back to the home page, click the "Create your first dashboard" button, then click "Add a new panel". Select "Loki" as the data source, and under label filters choose "application" "=" "fastapi" from the dropdowns. Click the "Run queries" button, then "Switch to table". Finally, click Save, and the panel will show the access logs from your FastAPI application.
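Under the hood, those dropdown selections build a LogQL query against the labels we set with the tags parameter in main.py. Typed directly into the panel's query field, the equivalent query is:

```
{application="fastapi"}
```

Any additional tags you add to LokiQueueHandler become further labels you can filter on in the same way.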

Conclusion

Capturing and visualizing logs is an important part of any application’s monitoring and debugging strategy, and by following the steps outlined in this tutorial, you should now have a solid foundation for building a more robust logging solution for your own applications.

Future Scope

There are several possible future improvements or extensions to this setup:

  1. Use Kubernetes to deploy and manage the FastAPI and Loki services, as well as the Grafana dashboard. Kubernetes provides more advanced features for scaling and managing containerized applications.

  2. Add additional log sources to Loki and visualize them alongside the FastAPI logs. For example, you could add logs from other services or from the underlying operating system.

  3. Use Loki’s powerful querying and alerting features to detect and respond to anomalies or errors in the FastAPI logs. For example, you could set up alerts to trigger when a certain error rate or response time threshold is exceeded.

  4. Customize the logging configuration to include additional metadata or fields in the logs, such as the user ID or IP address of the client making the request.

  5. Use a different logging library or tool to capture and send logs to Loki, such as Fluentd or Logstash. This would provide additional flexibility and customization options.
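As a sketch of item 3, Loki's ruler component evaluates Prometheus-style alerting rules written against LogQL metric queries. A hypothetical rule file might look like the following — the group name, threshold, and filter string are assumptions for illustration:

```yaml
groups:
  - name: fastapi-errors
    rules:
      - alert: HighErrorRate
        # Fire when more than one "500" log line per second is seen
        # over a 5-minute window, sustained for 10 minutes.
        expr: sum(rate({application="fastapi"} |= "500" [5m])) > 1
        for: 10m
```

Such rules are loaded by the ruler from a configured directory and can forward firing alerts to an Alertmanager instance.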