Getting Started - Locust

Follow this guide for a basic example of running your Locust tests on the Testable platform. Our example project will test our sample REST API.

Start by signing up and creating a new test case using the Create Test button on the dashboard.

Enter the test case name (e.g. Locust Demo) and press Next.


Select Locust file as the scenario type.

Locust Scenario

Let’s use the following settings:

  1. Language: We will use Python.
  2. Version: Let's use version 2.5.0.
  3. Source: Upload Files + Create/Edit. See the detailed documentation for all the options to get our code onto Testable.
  4. Host: (the --host argument to Locust)
  5. Concurrent Clients: 5 (the --users argument to Locust)
  6. Hatch Rate: 1 (the --spawn-rate argument to Locust)
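One consequence of these settings worth keeping in mind: the hatch rate controls how quickly each Locust instance ramps up to its full client count. A hedged arithmetic sketch (not Testable or Locust code; the variable names are illustrative):

```python
# Illustrative ramp-up arithmetic: with 5 concurrent users spawned at a
# hatch rate of 1 user/sec, each Locust instance takes roughly 5 seconds
# to reach full load.
users = 5        # Concurrent Clients setting
hatch_rate = 1   # Hatch Rate setting (users started per second)
ramp_seconds = users / hatch_rate
print(ramp_seconds)  # 5.0
```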

Locust Settings

We will use the example file that is populated by default but add one more API call. This is found in the Locust File section of the scenario definition.

from locust import HttpUser, between, task

class MyLocust(HttpUser):
    # Each simulated user waits 1-3 seconds between tasks
    wait_time = between(1, 3)

    @task
    def ibm(self):
        # Endpoint paths shown here are illustrative for the sample REST API
        self.client.get("/stocks/ibm")

    @task
    def msft(self):
        self.client.get("/stocks/msft")

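Conceptually, each simulated user loops forever: it picks one of the methods marked @task (uniformly at random when no weights are given), runs it, then sleeps for wait_time seconds before picking again. A rough pure-Python sketch of that scheduling idea, with hypothetical names and the sleep elided:

```python
import random

# Simplified sketch of Locust's per-user task scheduling (these names
# are illustrative, not Locust internals).
TASKS = ["ibm", "msft"]  # the two @task methods in our scenario

def pick_task(rng=random):
    # With no explicit weights, each task is equally likely to run next.
    return rng.choice(TASKS)

picks = [pick_task() for _ in range(10)]
assert all(p in TASKS for p in picks)
```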
And that’s it, we’ve now defined our scenario! To try it out before configuring a load test, click the Smoke Test button in the upper right and watch Testable execute the scenario with 1 concurrent user. You should see all results, including logging, network traces, etc., appear in real time as the smoke test runs on one of our shared test runners.

Next, click on the Configuration tab or press the Next button at the bottom to move to the next step.


Now that we have the scenario for our test case we need to define a few parameters before we can execute our test:

  1. Total Engines: Number of Locust instances to start. Each instance will generate load with the parameters defined in our scenario (5 concurrent users, 1 user/sec hatch rate). So, for example, 2 Locust instances in this case would simulate 10 concurrent users.
  2. Location(s): Choose the location in which to run your test and the test runner source that indicates which test runners to use in that location to run the load test (e.g. on the public shared grid).
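The scaling model described above is simple multiplication; a quick sanity check of the numbers (illustrative variable names, not Testable code):

```python
# Total simulated users = engines x concurrent clients per engine.
engines = 2             # Total Engines setting
users_per_engine = 5    # Concurrent Clients from the scenario
total_users = engines * users_per_engine
print(total_users)  # 10
```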

And that’s it! Press Start Test and watch the results start to flow in. See the new configuration guide for full details of all configuration options.

For the sake of this example, let’s use the following parameters:

Test Configuration

View Results

Once the test starts executing, Testable will distribute the work out to the selected test runners (e.g. Public Shared Grid in AWS N. Virginia).

Test Results

In each region, the test runners execute 2 separate Locust instances concurrently. The results will include traces, performance metrics, logging, breakdown by URL, analysis, comparison against previous test runs, and more.

Check out the Locust guide for more details on running your Locust tests on the Testable platform.

We also offer integration (Org Management -> Integration) with third-party tools like New Relic. If you enable an integration, you can do more in-depth analysis of your results there as well.

That’s it! Go ahead and try these same steps with your own scripts and feel free to contact us with any questions.