Distributing Locust with Azure

Maxwell Keeter
3 min read · Jun 28, 2021

Custom load testing with Locust, distributed on Azure Kubernetes Service


One of the first projects I worked on with Azure was a load testing solution that sends heavily customized requests to our specific Apache Kafka / Event Hubs configuration. Since these services are designed for massive data throughput, it would be rather difficult to find an off-the-shelf tool that could properly stress them; for me, though, this was primarily practice to get my feet wet in the Azure world. With that said, the main goals of this project were:

a) connect it to an existing AKS cluster / infrastructure

b) make the deployment easily repeatable

I chose to create a custom Docker image, Helm installation, and Terraform script to orchestrate all of this. One important note with this project: we assume that a secure backend location for tfstate files already exists. Here’s a brief outline of the path I followed!
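For reference, a hedged example of what that assumed remote backend might look like (the resource group, storage account, and container names below are placeholders):

```hcl
terraform {
  backend "azurerm" {
    resource_group_name  = "tfstate-rg"
    storage_account_name = "tfstatestorage"
    container_name       = "tfstate"
    key                  = "locust.terraform.tfstate"
  }
}
```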

First, we needed to create the Python script for Locust to use when sending requests to our Event Hub. It looked roughly along these lines:
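A minimal sketch of such a locustfile, assuming SAS authentication against the Event Hubs REST send endpoint; the environment variable names, the token helper, and the payload are illustrative assumptions rather than the original code:

```python
# locust-tasks/locustfile.py
import base64
import hashlib
import hmac
import os
import time
import urllib.parse

from locust import HttpUser, task, between

# Placeholder configuration, supplied via environment variables.
NAMESPACE = os.environ["SERVICE_BUS_NAMESPACE"]  # e.g. "my-namespace"
EVENT_HUB = os.environ["EVENT_HUB_NAME"]
SAS_KEY_NAME = os.environ["SAS_KEY_NAME"]
SAS_KEY = os.environ["SAS_KEY"]


def generate_sas_token(uri: str, key_name: str, key: str, ttl: int = 3600) -> str:
    """Build a Shared Access Signature token for the Event Hubs REST API."""
    expiry = str(int(time.time()) + ttl)
    encoded_uri = urllib.parse.quote_plus(uri)
    string_to_sign = f"{encoded_uri}\n{expiry}".encode("utf-8")
    signature = base64.b64encode(
        hmac.new(key.encode("utf-8"), string_to_sign, hashlib.sha256).digest()
    ).decode("utf-8")
    return (
        f"SharedAccessSignature sr={encoded_uri}"
        f"&sig={urllib.parse.quote_plus(signature)}"
        f"&se={expiry}&skn={key_name}"
    )


class EventHubUser(HttpUser):
    host = f"https://{NAMESPACE}.servicebus.windows.net"
    wait_time = between(0.5, 2)

    def on_start(self):
        self.token = generate_sas_token(self.host, SAS_KEY_NAME, SAS_KEY)

    @task
    def send_event(self):
        # POST a single placeholder event to the Event Hub's REST endpoint.
        self.client.post(
            f"/{EVENT_HUB}/messages",
            data='{"body": "load-test-event"}',
            headers={
                "Authorization": self.token,
                "Content-Type": "application/atom+xml;type=entry;charset=utf-8",
            },
        )
```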

The script is a very simplified example of how we organized our Locust configuration. In this particular case we used a Shared Access Signature to authenticate, along with the proper Service Bus namespace and Event Hub name.

Now, in order to get this script into AKS, we had to containerize it. We did this by first adding a requirements.txt file for our image as a directory neighbor to the locustfile, with both files living under a parent directory called locust-tasks. In the root directory we find our Dockerfile and the entrypoint script as well. The Dockerfile is a similarly simplified version of what we used, and the full version can be seen in the repo along with the entrypoint (run.sh).
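A hedged sketch of that layout and Dockerfile, assuming the common pattern of a run.sh entrypoint that starts Locust in either master or worker mode; the base image and exposed ports are assumptions:

```dockerfile
# Assumed layout:
#   locust-tasks/locustfile.py
#   locust-tasks/requirements.txt
#   Dockerfile
#   run.sh
FROM python:3.9-slim

COPY locust-tasks /locust-tasks
COPY run.sh /run.sh

RUN pip install --no-cache-dir -r /locust-tasks/requirements.txt \
    && chmod +x /run.sh

# 8089 serves the Locust web UI; 5557/5558 carry master/worker traffic.
EXPOSE 8089 5557 5558

ENTRYPOINT ["/run.sh"]
```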

Assuming we build our container and push it to an already created Azure Container Registry, we can move on to the next task: deploying our Helm release to an existing Kubernetes cluster with Terraform. Example tfvars and variable.tf files are in the repo listed in the beginning of this doc. The main focus of our attention here will be on a few items within the Terraform script and our Helm installation, so let’s hop on over to our locust.tf.
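For reference, the build-and-push step might look like this (the registry and image names are placeholders):

```bash
az acr login --name myregistry
docker build -t myregistry.azurecr.io/locust-loadtest:v1 .
docker push myregistry.azurecr.io/locust-loadtest:v1
```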

A couple of items to note with this portion of locust.tf (a sketch follows the list below):

  1. We are authenticating with the AKS cluster via secrets pulled from Azure Key Vault by the bottom three data points.
  2. The first data point identifies the key vault itself, the second retrieves all of the secret names from said key vault, and the third, along with formatting via “local.core_key_list”, writes each secret to its own entry of the “core_itter” data point. On larger key vaults where all secrets are needed, this drastically reduces the space taken up by importing secrets. Again, this assumes a secure state backend and obscuring secrets via Terraform as necessary.
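A minimal sketch of those three data points, assuming the azurerm provider; the key vault name and resource group variable are illustrative:

```hcl
# (1) Identify the key vault itself.
data "azurerm_key_vault" "core" {
  name                = "core-kv"
  resource_group_name = var.resource_group_name
}

# (2) Retrieve the names of every secret in that key vault.
data "azurerm_key_vault_secrets" "core" {
  key_vault_id = data.azurerm_key_vault.core.id
}

locals {
  core_key_list = toset(data.azurerm_key_vault_secrets.core.names)
}

# (3) Pull each secret into its own entry of "core_itter".
data "azurerm_key_vault_secret" "core_itter" {
  for_each     = local.core_key_list
  name         = each.key
  key_vault_id = data.azurerm_key_vault.core.id
}
```

The kubernetes and helm providers can then authenticate against the AKS cluster by referencing entries such as data.azurerm_key_vault_secret.core_itter["some-secret-name"].value (the secret name here is hypothetical).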

Next, we have the Helm installation, where our simple Locust script is placed into AKS via Terraform. The chart can be found under “loku-deploy” in the base repo and creates a single master pod to control the worker pods, which default to a count of 3. With our Terraform script, we are able to override values within these files, identifying the values file as well as the master and worker image repositories. As an added bonus, we have increased the default worker count within the Terraform script to 15. The last item to discuss is a null_resource using local-exec. This local-exec command port-forwards the web UI to localhost:8089 so we can watch metrics from the load testing.
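A hedged sketch of that helm_release and the port-forwarding null_resource; the chart path, value keys, and service name are assumptions about the “loku-deploy” chart rather than its actual contents:

```hcl
resource "helm_release" "loku_deploy" {
  name  = "loku-deploy"
  chart = "${path.module}/loku-deploy"

  values = [file("${path.module}/loku-deploy/values.yaml")]

  set {
    name  = "master.image.repository"
    value = var.master_repository
  }

  set {
    name  = "worker.image.repository"
    value = var.worker_repository
  }

  # Bump the chart's default worker count from 3 up to 15.
  set {
    name  = "worker.replicaCount"
    value = "15"
  }
}

# Forward the Locust web UI to localhost:8089 once the release is up.
# Note: kubectl port-forward runs in the foreground, so in practice it
# may be backgrounded or run outside of terraform apply.
resource "null_resource" "locust_ui" {
  depends_on = [helm_release.loku_deploy]

  provisioner "local-exec" {
    command = "kubectl port-forward service/loku-deploy-master 8089:8089"
  }
}
```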

To summarize, this is a relatively simplified version, but it gives a few hints at what is possible with not only Locust but also Terraform, Helm, Docker, and Azure!
