Instrumenting Django with Prometheus and StatsD

Fri 09 June 2017

If you have ever wondered how to monitor your Django application with Prometheus, this article is for you. A quick search on the topic will lead you to django-prometheus. For those who don't want to use it, there is another way: exporting application metrics via StatsD.

The idea is to send metrics from Django via the StatsD Python client library to a StatsD server over UDP. Here is an example that increments the "hello.requests.total" metric every time the "say_hello" view runs.

from statsd.defaults.django import statsd

from django.http import HttpResponse


def say_hello(request):
    # Increment the counter on every request; statsd_exporter later
    # exposes it to Prometheus as "hello_requests_total_counter".
    statsd.incr('hello.requests.total')
    return HttpResponse('Hello, World!')
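
The statsd client can also time code paths. Here is a quick sketch (the "hello.timing" metric name and the separate view are hypothetical) that records how long the view takes using a timer decorator; statsd_exporter turns StatsD timers into Prometheus summary metrics:

from statsd.defaults.django import statsd

from django.http import HttpResponse


# "hello.timing" is a hypothetical metric name for this sketch.
@statsd.timer('hello.timing')
def say_hello_timed(request):
    statsd.incr('hello.requests.total')
    return HttpResponse('Hello, World!')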

A StatsD server aggregates measurements over time and flushes them into a monitoring backend. This is where statsd_exporter comes into play: it accepts the same UDP protocol as a StatsD server and exports StatsD-style metrics as Prometheus metrics, so "hello.requests.total" becomes "hello_requests_total_counter".
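
To make the translation concrete, here is the counter in both wire formats (these two lines come from the tcpdump and /metrics outputs shown later in this article):

# StatsD line protocol sent over UDP: "name:value|type", where "c" means counter.
hello.requests.total:1|c

# Prometheus text format exposed by statsd_exporter.
hello_requests_total_counter 1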

All we need to do is configure Django to send metrics to the statsd_exporter daemon. Let's set up an "Olympus" Django project to demonstrate the StatsD integration, or you can get it from github.com/marselester/django-prometheus-via-statsd.

$ virtualenv venv
$ source ./venv/bin/activate
$ pip install Django==1.11.1 statsd==3.2.1
$ django-admin.py startproject olympus .

These two lines in the Django settings file ./olympus/settings.py configure the default StatsD client used in the "say_hello" view.

# statsd_exporter daemon listens to UDP at localhost:8125
STATSD_HOST = 'localhost'
STATSD_PORT = 8125
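
The statsd client supports a few more optional settings. For example, assuming the statsd 3.x Django defaults, a prefix can namespace all metrics (the "olympus" prefix here is hypothetical):

# Optional: prepends "olympus." to every metric name, so
# "hello.requests.total" is sent as "olympus.hello.requests.total".
STATSD_PREFIX = 'olympus'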

Assuming "say_hello" is available at the "/hello" URL path, we can start a web server.
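
That URL wiring might look like this in ./olympus/urls.py (a sketch assuming the view lives in a hypothetical olympus/views.py module; the actual project may differ):

from django.conf.urls import url

from olympus.views import say_hello


urlpatterns = [
    url(r'^hello$', say_hello),
]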

$ python manage.py runserver
$ curl http://localhost:8000/hello
Hello, World!

The Olympus application is ready to emit metrics. To make sure this is the case, we can use tcpdump to capture UDP packets on the loopback interface on port 8125 and print each packet in ASCII.

$ tcpdump -i lo0 udp port 8125 -A
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on lo0, link-type NULL (BSD loopback), capture size 262144 bytes
11:19:05.460810 IP localhost.57776 > localhost.8125: UDP, length 24
E..4....@................ .3hello.requests.total:1|c

As we can see, "hello.requests.total:1|c" is sent every time we hit http://localhost:8000/hello.

StatsD Exporter

Since we have StatsD metrics being sent, we can expose them for Prometheus via statsd_exporter. Let's download the latest version and install it (the binary will be in "$GOPATH/bin/").

$ go get -u github.com/prometheus/statsd_exporter

Run statsd_exporter so it receives StatsD metrics from our Django application.

$ statsd_exporter -statsd.listen-address="localhost:8125"

By default it exposes the generated Prometheus metrics at http://localhost:9102/metrics. Check out the output: among many metrics there will be our counter from the "say_hello" view.

# HELP hello_requests_total_counter Metric autogenerated by statsd_exporter.
# TYPE hello_requests_total_counter counter
hello_requests_total_counter 2

Run Everything on Kubernetes

Of course we will deploy everything on Kubernetes, and Minikube will help us here: it is an easy way to run Kubernetes locally. When we want to build a Docker image in Minikube (so Kubernetes has access to it), we can configure our Docker client to communicate with the Minikube Docker daemon.

$ minikube start
Starting local Kubernetes cluster...
Kubectl is now configured to use the cluster.
$ eval $(minikube docker-env)

Django application

First, we shall deploy our Django application with statsd_exporter running in the same Kubernetes Pod. The Pod exposes container ports 8000 (uWSGI) and 9102 (statsd_exporter's generated Prometheus metrics).
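
Here is a trimmed sketch of what ./kube/olympus-app/deployment.yml might look like (container names, image tags, and the exporter flag are assumptions; the manifest in the repository is authoritative):

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: olympus-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: olympus
    spec:
      containers:
      # Django application served by uWSGI.
      - name: olympus
        image: marselester/olympus:v1.0.0
        ports:
        - containerPort: 8000
      # Sidecar: containers in a Pod share a network namespace, so Django
      # reaches the exporter at localhost:8125 without extra wiring.
      - name: statsd-exporter
        image: prom/statsd-exporter
        args: ['-statsd.listen-address=localhost:8125']
        ports:
        - containerPort: 9102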

$ docker build -t marselester/olympus:v1.0.0 ./olympus-app/
$ kubectl apply -f ./kube/olympus-app/deployment.yml
$ kubectl apply -f ./kube/olympus-app/service.yml

Though we don't need Nginx in this demo, it's ubiquitous on production servers. Nginx proxies HTTP requests via the uwsgi protocol to the "olympus-service" Kubernetes Service we created above.
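
A minimal sketch of ./kube/nginx/olympus.conf (the Service port is an assumption; check the repository for the real config):

server {
    listen 80;

    location / {
        # Kubernetes DNS resolves "olympus-service" to the Service above;
        # uwsgi_pass speaks the uwsgi protocol to the Django backend.
        include uwsgi_params;
        uwsgi_pass olympus-service:8000;
    }
}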

$ kubectl create configmap olympus-nginx-conf --from-file=./kube/nginx/olympus.conf
$ kubectl apply -f ./kube/nginx/deployment.yml

Prometheus

Next, it is the Prometheus server's turn to be deployed. It is configured to scrape its own metrics as well as statsd_exporter metrics on port 9102 from Pods that have the "app: olympus" label.
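
A sketch of what ./kube/prometheus/prometheus.yml might contain (the job names and relabeling details are assumptions; the repository version is authoritative):

global:
  scrape_interval: 15s

scrape_configs:
  # Prometheus scrapes its own metrics.
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  # Discover Pods and keep only those labelled "app: olympus",
  # scraping the statsd_exporter container port 9102.
  - job_name: 'olympus'
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_label_app]
        action: keep
        regex: olympus
      - source_labels: [__meta_kubernetes_pod_container_port_number]
        action: keep
        regex: '9102'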

The Prometheus server listens on port 9090. On production servers you'll likely run it behind Nginx with basic authentication and make it accessible via VPN only.

$ kubectl create configmap prometheus-server-conf --from-file=./kube/prometheus/prometheus.yml
$ kubectl apply -f ./kube/prometheus/deployment.yml

In our case, though, we'll use Kubernetes port forwarding to test whether the Django metrics show up in the Prometheus dashboard.

$ kubectl port-forward nginx-deployment-3580857522-tn332 8080:80
$ kubectl port-forward prometheus-deployment-2456821496-8zdg8 9090
$ curl http://localhost:8080/hello
Hello, World!

"hello_requests_total_counter" should be searchable at the expression browser http://localhost:9090/graph.

Prometheus Helm Chart

There are other ways to install Prometheus on Kubernetes: Prometheus Operator and Helm both look awesome, though I have not played with the Operator yet. Here is how you can set up Prometheus via Helm (the package manager for Kubernetes). You will need the Helm client

$ brew install kubernetes-helm

and Helm server (Tiller). The following command installs it into the Kubernetes cluster.

$ helm init

Now you can install the Prometheus Helm package (chart).

$ helm repo update
$ helm install --name team-ops stable/prometheus

Nice, we have a full-blown Prometheus "team-ops" chart release running with Alertmanager and node exporter.

$ helm list
NAME        REVISION    UPDATED                     STATUS      CHART               NAMESPACE
team-ops    1           Thu Jun  8 21:40:29 2017    DEPLOYED    prometheus-3.0.2    default

Let's add one more Prometheus to the cluster (call it "team-dev"), but this time we want only the Prometheus server with a custom prometheus.yml config.

$ helm install --name team-dev \
    --set alertmanager.enabled=false \
    --set kubeStateMetrics.enabled=false \
    --set nodeExporter.enabled=false \
    stable/prometheus

The config is stored in the "team-dev-prometheus-server" ConfigMap. Let's overwrite it with our prometheus.yml.

$ kubectl create configmap team-dev-prometheus-server \
    --from-file=./kube/prometheus/prometheus.yml \
    -o yaml \
    --dry-run | kubectl replace -f -

To see whether the "team-dev" Prometheus has started, we can set up port forwarding:

$ kubectl port-forward team-dev-prometheus-server-4131857549-c98j0 9091:9090

Prometheus "team-dev" release is accessible at http://localhost:9091/graph.

I hope this helps. Cheers!

Category: Infrastructure Tagged: django prometheus kubernetes monitoring statsd helm
