Web Application Load Testing with Locust

Alireza Moosavi
Jun 16, 2020

In this article, the idea of load testing in development and production is discussed. First, the main intentions and benefits of load testing are outlined. Then, developing load tests with Locust is covered and, lastly, the analysis of test results is demonstrated.

The project repository is available HERE.

Introduction

After functional and unit testing, the developed app needs to be tested in real-life scenarios, as if many users out there were trying to get their hands dirty with our application. There are several open-source tools for this HERE, but in this article the Locust load testing tool is used. Locust is one of the best testing tools; it is widely used and recommended by many Java, Go and Python developers in various tech areas. Locust is very easy to use and is written in Python. A fundamental feature of Locust is that you describe all your tests in Python code. No need for clunky UIs or bloated XML, just plain code.

Testing tools:

There are many other testing tools that could be useful for application load testing.

  • JMeter: released 20 years ago, written in Java; writing tests in XML is frustrating, and it is not very popular among developers
  • Locust: released 5 years ago, written in Python; tests are written in plain Python, it is highly scalable thanks to its event-based implementation, and it is very popular among web developers
  • MORE

Why Load Testing?

Load testing combines integration and performance (stress) testing. It helps evaluate:

  • Application stability under load (availability)
  • Application speed: response time, requests per second, etc.
  • Infrastructure capacity adjustments
  • Code optimization

Load Testing Procedure:

  • Feed the system the largest task it can handle
  • Increase the load until it breaks
  • Simulate the load with virtual users
  • The goal is to find defects and bottlenecks in the app
  • Figure out memory and CPU limits
  • Determine the upper limit for all components
  • Observe the behavior of the app after failure
  • Ideally, expect your system to fail gracefully and recover

Why Locust?

  • Open source
  • Good monitoring tools
  • User behavior in code
  • Written in python
  • Easy to maintain and scale
  • Based on coroutines, benefiting from Python's asynchronous approach
  • Distributed and scalable

Hands on Locust

Locust has very rich documentation. It can be used during development and in production; all of its configuration options are HERE.

Development

When the app has been designed and created by the developer and has passed the functional tests, the APIs need to be put under stress tests in which they show 0% failure. Then the number of concurrent users has to be increased until the application fails and breaks down. In other words, the bottleneck of the application has to be determined. After that, the bottleneck should be removed by optimizing the endpoints, decreasing database hits, perhaps using async/await methods, and so on.

Implementation

Locust can be run with a single command or with a Docker file. Both methods have been implemented in THIS project. The project is a simple web application built with Starlette that uses asyncpg for database handling. It also uses Faust and Kafka for alerting on the database state in real time.

Project APIs:

  • “/”: returns all data from the database
  • “/square/{number}”: returns the square of the number and saves it to the database
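
For these two endpoints, a minimal locustfile could look like the sketch below. This is an illustrative example assuming the Locust 1.x HttpUser API; the project's actual main/tests/test_locust.py may differ.

    import random

    from locust import HttpUser, between, task


    class ApiUser(HttpUser):
        # Wait 1-3 seconds between tasks to roughly mimic a real user.
        wait_time = between(1, 3)

        @task
        def read_all(self):
            # "/" returns all data from the database.
            self.client.get("/")

        @task(3)
        def square(self):
            # "/square/{number}" returns the square of the number and stores it.
            # Grouping all requests under one name keeps the statistics table compact.
            number = random.randint(1, 1000)
            self.client.get(f"/square/{number}", name="/square/[number]")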

Locust with command

In this project, after building and starting the app with docker-compose, Locust is run with the “locust -f main/tests/test_locust.py” command.

Going to “http://0.0.0.0:8089/” in the browser opens the Locust web UI.

The first field is the number of users that will call the APIs simultaneously, and the second field is the rate at which new users are spawned per second. The host also needs to be set depending on whether the application is reached directly or through Nginx.
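
The same parameters can also be supplied on the command line for a headless run, which is convenient in scripts and CI. A sketch assuming a Locust 1.x CLI and an application exposed on port 8000 (adjust the host to your setup):

    locust -f main/tests/test_locust.py --headless \
        -u 1000 -r 100 --run-time 5m --host http://0.0.0.0:8000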

Locust with docker

The difference is that the developer needs to build the Locust testing process with docker-compose, which has been done in the project and documented HERE.

So, after building and running the application's docker-compose together with the Locust Docker files in it, going to “http://0.0.0.0:8089/” in the browser shows the same web UI.

In the project, Nginx is the load balancer of the web application, and inside the Docker network the APIs are only available at “http://nginx:8000/”. If Nginx were not used, “http://web:5000/” would be used as the host instead.
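
As an illustrative sketch (service names and paths are assumptions, not necessarily those used in the repository), a Locust service in docker-compose could look like this, using the official locustio/locust image and pointing at the Nginx host:

    # Locust service inside the application's docker-compose.yml (sketch)
    locust:
      image: locustio/locust
      ports:
        - "8089:8089"
      volumes:
        - ./main/tests:/mnt/locust
      command: -f /mnt/locust/test_locust.py --host http://nginx:8000
      depends_on:
        - nginx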

Results:

(Screenshot: Locust setup panel)
(Screenshot: Locust after the stress tests)

As can be seen, the APIs shown here have changed: they use PostgreSQL for storing the QA (question and answer) results, and Faust with Kafka alerts on the number of unique items in the database. There were no failures in the local PC environment, so the 1000 users should be increased until the application breaks down, and the developer then needs to optimize the code as much as possible.

It is worth mentioning that the number of Locust workers can be increased with the docker-compose --scale option.
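
For example, assuming the Locust worker service in the compose file is named “worker” (the actual service name in the project may differ), four workers can be started with:

    docker-compose up --scale worker=4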


Deployment

In this part, when the application is deployed to a test environment with Docker or Kubernetes, the resources of the VM or the Kubernetes containers should be adjusted based on the expected load on the application.

If the application fails with fewer users than expected and monitoring shows that memory and CPU usage are maxed out, the container needs more instances and resources. Otherwise, it should be verified that the container is not holding on to redundant resources.
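
As a minimal sketch, CPU and memory requests and limits can be declared on the web container in the Kubernetes manifest; the values below are placeholders to be tuned against the load-test results:

    # container spec fragment (values are placeholders)
    resources:
      requests:
        cpu: "250m"
        memory: "256Mi"
      limits:
        cpu: "1"
        memory: "512Mi"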

More info:

  1. https://cloud.google.com/solutions/distributed-load-testing-using-gke
  2. https://github.com/GoogleCloudPlatform/distributed-load-testing-using-kubernetes
  3. Youtube_Links: Youtube, Youtube, Youtube

Key factors in test results:

  • number of failures
  • median response time (ms)
  • requests per second
  • response time
  • number of users working with the API simultaneously
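
These statistics can also be written to CSV files for later analysis or for archiving in a CI job. A sketch assuming a Locust 1.x CLI (the “load_test” prefix is arbitrary):

    locust -f main/tests/test_locust.py --headless \
        -u 1000 -r 100 --run-time 5m --csv=load_test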

Load testing pipeline

Jenkins:

The following resources illustrate how to integrate Locust with Jenkins.

CI pipeline:
