Locust Reference
About Locust Reference
The Locust Reference is a searchable cheat sheet covering Locust, the Python load testing framework. It is organized into six categories — Users, Tasks, Events, Distributed, Config, and Reports — to cover every aspect of writing locustfiles, running load tests at scale, and analyzing results. Locust is widely used to simulate thousands of concurrent users hitting web APIs and microservices.
Backend engineers, QA engineers, and DevOps teams use this reference when writing locustfiles for API load testing, CI/CD performance gates, and capacity planning. The Users section covers the key building blocks: HttpUser as the standard HTTP client, wait_time with between() and constant() for configuring think time, weight for controlling the ratio of user types in a mixed-scenario test, on_start/on_stop lifecycle hooks for login/logout flows, and FastHttpUser for high-throughput scenarios using geventhttpclient.
The Tasks section explains how to define workloads: the @task decorator with optional weights, TaskSet for grouping related tasks, SequentialTaskSet for ordered flows like add-to-cart then checkout then payment, and the self.client.get()/post() methods with name aliasing for statistics grouping and catch_response for custom validation. The Distributed section covers master/worker architecture, the --processes flag for multi-core utilization, --expect-workers for automatic test start, and a Docker Compose configuration for deploying distributed Locust clusters.
Key Features
- HttpUser class with host, wait_time (between/constant), weight ratio, on_start/on_stop lifecycle hooks
- FastHttpUser for high-concurrency scenarios using geventhttpclient backend instead of requests
- @task decorator with optional weights, TaskSet for grouped behaviors, SequentialTaskSet for ordered flows
- self.client.get/post with name aliasing for statistics, catch_response for custom pass/fail validation logic
- Event hooks: test_start/test_stop listeners, per-request event with response_time and exception, init for setup
- Distributed testing: --master/--worker flags, --processes for multi-core, --expect-workers for auto-start
- Docker Compose configuration with master/worker services and deploy.replicas for scalable distributed clusters
- CLI options: --headless mode, --csv with full history, --html report, locust.conf file, custom CLI arguments
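The Docker Compose setup listed above can be sketched roughly as follows; the image tag, mount path, and replica count are illustrative, not prescriptive:

```yaml
services:
  master:
    image: locustio/locust
    ports:
      - "8089:8089"          # web dashboard
    volumes:
      - ./:/mnt/locust
    command: -f /mnt/locust/locustfile.py --master --expect-workers 4

  worker:
    image: locustio/locust
    volumes:
      - ./:/mnt/locust
    command: -f /mnt/locust/locustfile.py --worker --master-host master
    deploy:
      replicas: 4            # scale the worker count here
```

With --expect-workers 4, the master waits until all four workers have connected before the test can start.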
Frequently Asked Questions
What is Locust and what is it used for?
Locust is an open-source Python load testing framework that simulates large numbers of concurrent users making HTTP requests to your application. You define user behavior as Python code using the HttpUser class and @task decorators. Locust then spawns the specified number of concurrent users and collects statistics on response times, request rates, and failure rates via a web dashboard or CSV/HTML reports.
What is the difference between TaskSet and SequentialTaskSet?
TaskSet groups multiple tasks together and executes them randomly according to their weights, simulating realistic user browsing behavior where users jump between pages. SequentialTaskSet executes tasks in the exact order they are defined in the class, which is useful for modeling ordered workflows like a checkout flow where add-to-cart must always come before payment.
How do I model login/logout behavior in Locust?
Override the on_start() method in your HttpUser subclass to perform login when a user is spawned. Use self.client.post("/login", json={"username": "test", "password": "pass"}) in on_start(). Override on_stop() to perform logout. The session cookies are automatically maintained by the client across requests within the same user instance.
What does catch_response=True do and why is it useful?
By default, Locust marks a request as failed only on connection errors or HTTP error status codes. catch_response=True lets you add custom validation logic. Use it as a context manager with the response object to call response.failure("reason") if the response body contains an error field, if the response time exceeds a threshold, or if the data does not match expectations.
How do I run Locust in distributed mode for large-scale tests?
Start one master node with locust --master -f locustfile.py and multiple worker nodes with locust --worker --master-host=MASTER_IP -f locustfile.py. Each worker spawns its own users and sends statistics to the master, which aggregates them in the dashboard. To use all cores of a single machine, run locust --processes -1 (without --master or --worker): it automatically launches a master plus one worker per CPU core.
How do I run Locust without a web UI in a CI/CD pipeline?
Use the --headless flag combined with --users, --spawn-rate, and --run-time flags: locust -f locustfile.py --headless --users 100 --spawn-rate 10 --run-time 60s --host https://api.example.com. Add --csv results to write statistics files and --html report.html to generate an HTML report. Locust exits with a non-zero code when any requests failed during the run, so the pipeline step fails automatically; for custom thresholds, set environment.process_exit_code in a test_stop listener.
How do I add custom CLI arguments to my locustfile?
Add a listener to the init_command_line_parser event using @events.init_command_line_parser.add_listener. Inside the listener, call parser.add_argument() with the argument name, type, default, and help string. Access the parsed value via environment.parsed_options in your test init event listener or user class.
What statistics does Locust collect and display?
Locust collects per-endpoint statistics including requests per second (RPS), median response time, 95th percentile response time, average response time, minimum and maximum response times, and the failure rate. The web dashboard shows live charts of total RPS, response times over time, and active user count. You can download all statistics as CSV files or as a standalone HTML report.