| Route | Description |
| --- | --- |
| POST /start | Start test run |
| POST /setup | Setup test without running |
| PATCH /executions/:id/manual-start | Manual start |
| PATCH /executions/:id/stop | Stop test run |
| POST /executions/:id/live-extension | Extend test run |

Start test run

Uses a simple multipart form upload to set up and start a new test run. It first creates or updates the necessary parts: the test case, test configuration, and scenario.

See below for an example per scenario type and test runner source combination.

POST /start
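As a sketch of the multipart upload, a minimal /start call for a Node.js script scenario might look like the following. The base URL, the key query parameter, and the load-test.js filename are placeholder assumptions, not values from this page; the form fields are the parameters documented below.

```shell
# Sketch: start a Node.js script test on the public shared grid.
# ASSUMED placeholders: base URL, key auth parameter, load-test.js.
# The command is printed rather than executed to keep the sketch inert;
# pipe it to sh (with real values substituted) to actually run it.
CMD="curl -X POST 'https://api.testable.io/start?key=YOUR_API_KEY' \
  -F 'code=@load-test.js' \
  -F 'concurrent_users_per_region=25' \
  -F 'duration_mins=5' \
  -F 'conf_testrunners[0].regions=aws-us-east-1'"
echo "$CMD"
```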

Parameters and Examples

Parameters

Node.js Script

| Name | Description | Type |
| --- | --- | --- |
| code | Node.js script that each virtual user will execute on each iteration | File |
| init | Optional Node.js script that executes once globally at the start of the test, before any virtual users start | File |
| teardown | Optional Node.js script that executes once globally at the end of the test, after all virtual users finish | File |
JMeter

| Name | Description | Type |
| --- | --- | --- |
| jmeter_testplan | JMeter test plan file (*.jmx) to upload and use when running this test. To upload multiple test plans use -F "jmeter_testplan[0]=@/path/to/testplan1.jmx" -F "jmeter_testplan[1]=@/path/to/testplan2.jmx" | File |
| jmeter_version | Which version of JMeter to use, or auto-detect it based on the test plan. Possible values include: 4.0, 3.3, auto. Defaults to auto. | Text |
| jmeter_capture_subresults | Whether metrics for sub requests are captured or ignored in the results. Note that all sub requests are made either way. Defaults to true. | Text |
| jmeter_prefix_subresults | JMeter labels are used as the resource name for aggregating results. If set to true, the parent request's label is used as a prefix for sub results. For example: Home Page => http://myserver.com/foo. Defaults to true. | Text |
| jmeter_properties[*] | One or more JMeter properties to set at runtime (-J options). For example jmeter_properties[threads]=5. | Text |
| jmeter_system_properties[*] | One or more system properties to set at runtime (-D options). For example jmeter_system_properties[my_custom_flag]=true. | Text |
| jmeter_properties_file | A file with JMeter properties. | File |
| jmeter_system_properties_file | A file with system properties. | File |
| jmeter_plugins[*] | One or more files to upload and include as plugins (i.e. in the lib/ext folder). | File |
| jmeter_utilities[*] | One or more files to upload and include as utilities (i.e. in the lib folder). | File |
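Putting the JMeter parameters together, a hedged sketch of a /start upload (base URL, key parameter, and the testplan.jmx filename are placeholders, not from this page):

```shell
# Sketch: start a JMeter test run. Placeholders: base URL, key, testplan.jmx.
# Printed rather than executed so the sketch stays side-effect free.
CMD="curl -X POST 'https://api.testable.io/start?key=YOUR_API_KEY' \
  -F 'jmeter_testplan=@testplan.jmx' \
  -F 'jmeter_version=auto' \
  -F 'jmeter_properties[threads]=5' \
  -F 'instances_per_region=2' \
  -F 'duration_mins=10'"
echo "$CMD"
```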
Gatling

Upload either a zip of your whole simulation (Option 1) or a single simulation file (Option 2):

| Name | Description | Type |
| --- | --- | --- |
| gatling_zip | Option 1: A zip file with all the contents for your Gatling simulation, structured according to the Gatling standard, including code, data files, bodies, libs, etc. The zip file is extracted on the test runners prior to running Gatling. | File |
| gatling_simulation | Option 2: A *.scala file with all your simulation code. Useful for simple simulations. | File |
| gatling_version | Which version of Gatling to use. Defaults to 2.3. | Text |
| gatling_simulation_name | The simulation class name to run. Passed as the -s parameter to Gatling. If left blank, Gatling runs all simulations it finds. | Text |
| gatling_users | How many users to simulate per Gatling process. Passed to Gatling as the system property "users". Defaults to 1. | Text |
| gatling_javaopts | Any extra options to pass to the Java process at test runtime. | Text |
| gatling_simulations_dir | Directory with the simulations as structured in your zip file. Defaults to user-files/simulations. | Text |
| gatling_bodies_dir | Directory with the request bodies as structured in your zip file. Defaults to user-files/bodies. | Text |
| gatling_data_dir | Directory with the data files as structured in your zip file. Defaults to user-files/data. | Text |
| gatling_libs_dir | Directory with any extra libraries as structured in your zip file. Defaults to lib. | Text |
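A sketch of a Gatling /start upload using Option 1. The base URL, key, zip filename, and simulation class name are illustrative placeholders:

```shell
# Sketch: start a Gatling test from a zipped project (Option 1).
# Placeholders: base URL, key, gatling-project.zip, mysim.BasicSimulation.
CMD="curl -X POST 'https://api.testable.io/start?key=YOUR_API_KEY' \
  -F 'gatling_zip=@gatling-project.zip' \
  -F 'gatling_simulation_name=mysim.BasicSimulation' \
  -F 'gatling_users=50' \
  -F 'duration_mins=10'"
echo "$CMD"
```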
Locust

| Name | Description | Type |
| --- | --- | --- |
| locust_file | Locust file (Python) defining your test class. Passed as the --locustfile parameter. | File |
| locust_host | Host to load test. Passed as the --host parameter. Not required if the host property is specified on the Locust class. | Text |
| locust_clients | Number of concurrent clients per Locust instance. Passed as the --clients parameter. | Text |
| locust_hatch_rate | The rate per second at which clients are spawned per Locust instance. Passed as the --hatch-rate parameter. | Text |
| locust_requests | Number of requests to perform per Locust instance. Passed as the --num-request parameter. | Text |
| locust_log_level | Locust log level. Passed as the --loglevel parameter. Defaults to INFO. | Text |
| locust_classes | If your module contains multiple Locust classes, optionally specify which one(s) to run here. Separate multiple entries with a space. By default all are run. | Text |
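A sketch of a Locust /start upload (base URL, key, and locustfile.py are placeholders):

```shell
# Sketch: start a Locust test. Placeholders: base URL, key, locustfile.py.
CMD="curl -X POST 'https://api.testable.io/start?key=YOUR_API_KEY' \
  -F 'locust_file=@locustfile.py' \
  -F 'locust_clients=100' \
  -F 'locust_hatch_rate=10' \
  -F 'duration_mins=10'"
echo "$CMD"
```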
Selenium (Webdriver.io)

| Name | Description | Type |
| --- | --- | --- |
| wdio_conf_file | For Webdriver.io only, the wdio.conf.js file. It can also be uploaded as a files[*] parameter directly or as part of a zip file; in that case specify the name of the file with this parameter. Defaults to wdio.conf.js. | File or Text |
| selenium_version | Selenium standalone server version. Current default is 3.11.0. | Text |
| selenium_chromedriver_version | Chromedriver version to use. Current default is 2.38. | Text |
| selenium_geckodriver_version | Geckodriver (Firefox) version to use. Current default is 0.18.0. | Text |
| selenium_displaysize | Size of the display to simulate loading the browser into (uses xvfb on Ubuntu). Defaults to 1024x768x24. | Text |
| selenium_childlogs | Whether or not to capture logging from the Selenium, Webdriver.io, and NPM processes. Defaults to true. | Text |
| selenium_capture_websocket_metrics | Whether or not to capture metrics from websocket connections. Defaults to true. | Text |
Selenium (Java)

| Name | Description | Type |
| --- | --- | --- |
| selenium_bindings | Set to "java" to use Selenium Java. Defaults to "java" automatically if selenium_framework is "Serenity". | Text |
| selenium_framework | Either Serenity (if you are using Serenity BDD) or None (to run a regular Selenium Java main class). Defaults to None. | Text |
| selenium_main_class | For Selenium Java, the full name (including packages) of the main class to run. Not relevant for Serenity BDD tests. | Text |
| selenium_version | Selenium standalone server version. Current default is 3.11.0. | Text |
| selenium_chromedriver_version | Chromedriver version to use. Current default is 2.38. | Text |
| selenium_geckodriver_version | Geckodriver (Firefox) version to use. Current default is 0.18.0. | Text |
| selenium_displaysize | Size of the display to simulate loading the browser into (uses xvfb on Ubuntu). Defaults to 1024x768x24. | Text |
| selenium_childlogs | Whether or not to capture logging from the Selenium and Java processes. Defaults to true. | Text |
| selenium_capture_websocket_metrics | Whether or not to capture metrics from websocket connections. Defaults to true. | Text |
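A sketch of a Selenium Java /start upload. The base URL and key are placeholders; the main class name is hypothetical, and uploading the compiled project via the files[*] common parameter is an assumption, not something this page specifies:

```shell
# Sketch: start a Selenium Java test. Placeholders: base URL, key,
# selenium-project.zip, com.example.LoadMain. The files[*] upload of
# the project is an assumption for illustration.
CMD="curl -X POST 'https://api.testable.io/start?key=YOUR_API_KEY' \
  -F 'selenium_bindings=java' \
  -F 'selenium_main_class=com.example.LoadMain' \
  -F 'files[0]=@selenium-project.zip' \
  -F 'concurrent_users_per_region=10' \
  -F 'duration_mins=10'"
echo "$CMD"
```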
PhantomJS / SlimerJS

| Name | Description | Type |
| --- | --- | --- |
| phantomjs_url | If you want to simply load a URL in the browser, use this property to specify the URL. Only required if phantomjs_script and phantomjs_browser_script are not specified. | Text |
| phantomjs_script | The PhantomJS or SlimerJS script to run. Required unless phantomjs_url or phantomjs_browser_script is specified. | File |
| phantomjs_browser_script | An HTML file to load in the browser. Can be referenced in your script as __index.html. If this is specified, phantomjs_script defaults to a simple script that loads this page. | File |
| phantomjs_version | Which version of PhantomJS or SlimerJS to use. Current values include: Phantomjs-2.1.1, Phantomjs-2.5.0-beta, Slimer-0.10.3. Current default is Slimer-0.10.3. | Text |
| phantomjs_childlogs | Whether or not to capture logging from the PhantomJS process. Defaults to true. | Text |
HAR Replay

| Name | Description | Type |
| --- | --- | --- |
| har_file | The HAR file to replay as the scenario. | File |
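A sketch of a HAR replay /start upload (base URL, key, and session.har are placeholders):

```shell
# Sketch: replay a HAR recording as the scenario.
# Placeholders: base URL, key, session.har.
CMD="curl -X POST 'https://api.testable.io/start?key=YOUR_API_KEY' \
  -F 'har_file=@session.har' \
  -F 'concurrent_users_per_region=10' \
  -F 'duration_mins=5'"
echo "$CMD"
```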

Common Parameters

| Name | Description | Type |
| --- | --- | --- |
| instances_per_region | Number of instances to run per region as part of this test. | Text |
| concurrent_users_per_region | Number of concurrent users to simulate as part of this test. | Text |
| duration_mins | Duration (in minutes) of the test after the ramp up time is finished. Each virtual user keeps looping through the scenario until the specified duration has passed. Testable waits for the currently running iteration of each virtual user to complete before ending the test rather than killing them abruptly when the duration is reached. Either duration_mins or iterations is required. | Text |
| iterations | Number of iterations of the scenario that each virtual user will run. Either duration_mins or iterations is required. | Text |
| rampup_mins | Number of minutes over which to ramp up from 0 virtual users/instances to the specified number. For JMeter/Gatling/Locust the number of instances of the tool that are running is ramped up. Defaults to 0 minutes. | Text |
| testcase_name | Name of the test case. A test case provides a logical grouping for one or more test configurations and scenarios. Defaults to "Main Test Case". | Text |
| scenario_name | Name of the scenario. Auto-generated if not specified. | Text |
| conf_name | Name of the test configuration. Auto-generated if not specified. | Text |
| view | Upload a custom view and set it as the default for this test case. Create a custom view on the Testable website, then download the definition file using the action menu in the upper right of the test results => Export View Definition. | File |
| params[*] | Any scenario parameters. Keys and values are set on the test configuration. Example: params[threads]=5. | Text |
| files[*] | Any additional files that are required to run your scenario, including CSVs, data files, etc. | File |
| platform | The OS on which to run the test runner instances spun up as part of this test. Possible values are [blank], linux, win32. Linux runs in a Docker container that uses a minimal Ubuntu 16.04 LTS as the base system (phusion/baseimage). Windows runs on Microsoft Windows 2016 Datacenter edition with Containers. | Text |
| iteration_sleep_secs | Number of seconds each virtual user sleeps between iterations of the scenario. Defaults to 10 seconds. | Text |
| percentiles | For all timing metrics (e.g. latency), a comma separated list of the percentiles to capture. Defaults to 50,95,99. Decimal notation is supported (e.g. 50,95,99,99.9,99.99). | Text |
| manual_start | If set to true, Testable waits for the user to manually start the test after all test runners are allocated and initialized. The test can be manually triggered via the results page or using the PATCH /executions/:id/manual-start route. | Text |
| start_concurrent_users_per_region | Used to set up a step function for load generation. Testable starts with this number of virtual users and steadily adds step_per_region virtual users across the duration of the test. If specified you must also specify step_per_region and duration_mins. | Text |
| step_per_region | Required if start_concurrent_users_per_region is specified. The number of users to add on each step as Testable builds from the starting number to the final concurrent_users_per_region across the duration_mins. | Text |
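The step-function parameters combine like this sketch (placeholders as before): start at 10 virtual users per region and add 10 per step until reaching 100 over a 30 minute test.

```shell
# Sketch: step-function load, 10 -> 100 users in steps of 10 over 30 min.
# Placeholders: base URL, key, load-test.js.
CMD="curl -X POST 'https://api.testable.io/start?key=YOUR_API_KEY' \
  -F 'code=@load-test.js' \
  -F 'start_concurrent_users_per_region=10' \
  -F 'step_per_region=10' \
  -F 'concurrent_users_per_region=100' \
  -F 'duration_mins=30'"
echo "$CMD"
```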
Per Test Runner
Shared Test Runners

| Name | Description | Type |
| --- | --- | --- |
| conf_testrunners[*].regions | A comma separated list of region names. Possible values for the public shared grid include: aws-us-east-1, aws-us-west-2, aws-ap-south-1, aws-ap-southeast-1, aws-ap-southeast-2, aws-eu-central-1, aws-eu-west-1. For a private shared grid use whatever region name you chose when launching your test runners. | Text |
| conf_testrunners[*].public | Boolean indicating whether this configuration is for the public shared grid or your private shared grid. Defaults to true. | Text |
Per On Demand Region (AWS)

| Name | Description | Type |
| --- | --- | --- |
| conf_testrunners[*].regions[*].name | The region name in which to spin up the instances. Any public AWS region is available, including us-east-1, us-east-2, us-west-1, us-west-2, ap-south-1, ap-southeast-1, ap-southeast-2, eu-central-1, eu-west-1. | Text |
| conf_testrunners[*].regions[*].instance_type | Instance type to launch. Defaults to m4.large. | Text |
| conf_testrunners[*].regions[*].instances | Number of EC2 instances to launch for this test. Defaults to Testable's automatic recommendation based on the instance type, scenario type, and number of concurrent users. | Text |
| conf_testrunners[*].regions[*].spot_max_price | Maximum price for spot instances. If specified, Testable launches spot instances for your test. | Text |

Per On Demand Region (Azure)

| Name | Description | Type |
| --- | --- | --- |
| conf_testrunners[*].regions[*].instance_type | Instance type to launch. Defaults to Standard_D2_v2. | Text |
| conf_testrunners[*].regions[*].instances | Number of VMs in the VM scale set launched for this test. Defaults to Testable's automatic recommendation based on the instance type, scenario type, and number of concurrent users. | Text |
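An on-demand AWS region configuration might be passed like this sketch (base URL, key, script name, and the spot price value are illustrative placeholders):

```shell
# Sketch: run on on-demand EC2 spot instances in us-east-1.
# Placeholders: base URL, key, load-test.js; 0.05 is an illustrative price.
CMD="curl -X POST 'https://api.testable.io/start?key=YOUR_API_KEY' \
  -F 'code=@load-test.js' \
  -F 'conf_testrunners[0].regions[0].name=us-east-1' \
  -F 'conf_testrunners[0].regions[0].instance_type=m4.large' \
  -F 'conf_testrunners[0].regions[0].instances=2' \
  -F 'conf_testrunners[0].regions[0].spot_max_price=0.05' \
  -F 'duration_mins=10'"
echo "$CMD"
```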
Self Hosted (AWS)

| Name | Description | Type |
| --- | --- | --- |
| conf_testrunners[*].name | Name of the test runner source when setting up your own AWS account as a source. Defaults to "My Aws" if not specified. | Text |
| conf_testrunners[*].aws_access_key_id | AWS Access Key ID. Required for self hosted EC2 test runners. | Text |
| conf_testrunners[*].aws_secret_key | AWS Secret Key. Required for self hosted EC2 test runners. | Text |
| conf_testrunners[*].vpc | The VPC in which to spin up instances. If not chosen, the default VPC in your AWS account is used. | Text |
| conf_testrunners[*].subnet | The subnet in which to spin up instances. If not chosen, one is chosen at random from within the VPC. | Text |
| conf_testrunners[*].key_pair | Key pair name for connecting via SSH to any instances spun up as part of your test. The first one in the list within your account is chosen by default. | Text |
| conf_testrunners[*].t2_unlimited | For t2 family instances, a boolean indicating whether to enable CPU bursting using the T2 Unlimited feature. Defaults to true. | Text |

Self Hosted (Azure)

| Name | Description | Type |
| --- | --- | --- |
| conf_testrunners[*].name | Name of the test runner source when setting up your own Azure account as a source. Defaults to "My Azure" if not specified. | Text |
| conf_testrunners[*].azure_tenant_id | Azure Tenant ID. See our Azure self-hosted guide for more details. Required for self hosted Azure VM scale set test runners. | Text |
| conf_testrunners[*].azure_subscription_id | Azure Subscription ID. See our Azure self-hosted guide for more details. Required for self hosted Azure VM scale set test runners. | Text |
| conf_testrunners[*].azure_client_id | Azure Client ID. See our Azure self-hosted guide for more details. Required for self hosted Azure VM scale set test runners. | Text |
| conf_testrunners[*].azure_client_secret | Azure Client Secret. See our Azure self-hosted guide for more details. Required for self hosted Azure VM scale set test runners. | Text |
| conf_testrunners[*].resource_group | The Azure resource group to launch all resources into. Defaults to "testable_[region]" if not specified. | Text |
| conf_testrunners[*].network | The Azure virtual network in which to launch the VM scale set. Defaults to "testable-vnet-[region]" if not specified. | Text |
| conf_testrunners[*].subnet | The subnet within the chosen network in which to launch the VM scale set. Defaults to a subnet called "agents" if one exists, otherwise a random existing subnet in the network; if no subnet exists, one called "agents" is created. | Text |
| conf_testrunners[*].storage_account | The storage account within the chosen resource group to use for copying the Testable VM image blob. Defaults to the first storage account found if none is specified. If no storage account exists in the resource group, one with a random name is created. | Text |
| conf_testrunners[*].low_priority | Whether or not to create low priority VMs in the scale set. Defaults to false. | Text |
| conf_testrunners[*].image | Defaults to the standard test runner image (agent-external). To use the Flash enabled image, use agent-flash-external. | Text |
| conf_testrunners[*].tags[key] | One or more tags to apply to all instances spun up as part of this test. For example conf_testrunners[0].tags[Cost Center]=19824 | Text |
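A self-hosted AWS source could be specified inline as in this sketch (base URL, key, credentials, and key pair name are all placeholders):

```shell
# Sketch: run on self-hosted EC2 test runners in your own AWS account.
# Placeholders: base URL, key, both AWS credentials, my-key-pair.
CMD="curl -X POST 'https://api.testable.io/start?key=YOUR_API_KEY' \
  -F 'code=@load-test.js' \
  -F 'conf_testrunners[0].name=My Aws' \
  -F 'conf_testrunners[0].aws_access_key_id=YOUR_ACCESS_KEY_ID' \
  -F 'conf_testrunners[0].aws_secret_key=YOUR_SECRET_KEY' \
  -F 'conf_testrunners[0].key_pair=my-key-pair' \
  -F 'conf_testrunners[0].regions[0].name=us-east-1' \
  -F 'duration_mins=10'"
echo "$CMD"
```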
KPIs

One or more key performance indicators (KPIs) can be set to define what you consider a successful test run.

kpis[*].expr (Text)

An expression that specifies a KPI for your test. A few quick examples:
  • Response Time[p99] < 1500ms
  • Outcome[success] > 95%

The format depends on the kind of metric you want to define the KPI for. See our metrics guide for a list of valid metric names and our custom metrics guide for more details on the possible metric types.

  • Counter: metric_name op value. For example: Max Concurrent Users >= 100
  • Timing: metric_name[aggregator] op value. For example: Response Time[p99] < 1500ms. Possible aggregator values: median, mean, sd, var, min, max, count, and percentiles (e.g. p50, p95, p99).
  • Histogram: metric_name[bucket] op value. For example: Outcome[success] > 95%. If the expression ends with "%" the KPI is defined on the percentage that the bucket represents of the histogram total; otherwise it evaluates the raw count in that bucket.
  • Metered: metric_name[total|largest] op value. For example: CPU % < 95% or Memory % < 85%. largest (the default) means the peak observed value on any one test runner; total is the peak sum at any point in time across all test runners that are part of your test.

kpis[*].break_on_fail (Text)

Boolean. If set to true, the KPI is continuously evaluated as the test runs. As soon as the KPI is no longer met, the test is stopped. Defaults to false.

kpis[*].type (Text)

Four possible types of KPIs (defaults to value):
  • value: Evaluate the value of the metric across the entire test.
  • change: Evaluate the change in the value of this metric between this test and a weighted average of up to 10 recent test runs. This type of KPI is always considered successful on the first run of a test configuration.
  • active_value (only allowed if break_on_fail=true): Evaluate the value of the metric in the most recent 1 minute window while the test is running.
  • active_change (only allowed if break_on_fail=true): Evaluate the change in the value of the metric minute over minute while the test is running.

kpi_break_eq_test_fail (Text)

If one or more KPIs have break_on_fail=true and the breaking point is hit, this setting controls whether the test is then automatically considered a failure. Defaults to true.
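KPI parameters attach to the same multipart upload, as in this sketch (base URL, key, and script name are placeholders; the expressions are the examples from above):

```shell
# Sketch: start a test with two KPIs; the first one stops the test
# early if p99 response time exceeds 1500ms.
# Placeholders: base URL, key, load-test.js.
CMD="curl -X POST 'https://api.testable.io/start?key=YOUR_API_KEY' \
  -F 'code=@load-test.js' \
  -F 'kpis[0].expr=Response Time[p99] < 1500ms' \
  -F 'kpis[0].break_on_fail=true' \
  -F 'kpis[1].expr=Outcome[success] > 95%' \
  -F 'duration_mins=10'"
echo "$CMD"
```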

Setup test without running

Uses a multipart form upload to set up a new test configuration without running it. Parameters are exactly the same as for the /start route.

POST /setup

Manual start

Tests that were started with manual_start=true wait after the test runners are allocated and initialized. This route tells Testable to start executing the test across those test runners. It is useful with on-demand test runners since it is hard to predict exactly how long AWS EC2 instances take to spin up. Start the test with manual_start=true well in advance of your desired test run time; when you trigger the manual start, load generation begins immediately.

PATCH /executions/:id/manual-start
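A sketch of the manual start call (base URL, key, and the execution id 12345 are placeholders):

```shell
# Sketch: manually start a waiting execution.
# Placeholders: base URL, key, execution id 12345.
CMD="curl -X PATCH 'https://api.testable.io/executions/12345/manual-start?key=YOUR_API_KEY'"
echo "$CMD"
```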

Stop execution

Request to stop the execution before it completes. Note that once the stop is requested, it takes a little while before the execution reaches a completed: true status due to cleanup, result aggregation, etc.

PATCH /executions/:id/stop
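A sketch of the stop call (base URL, key, and the execution id are placeholders):

```shell
# Sketch: stop a running execution early.
# Placeholders: base URL, key, execution id 12345.
CMD="curl -X PATCH 'https://api.testable.io/executions/12345/stop?key=YOUR_API_KEY'"
echo "$CMD"
```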

Extend execution

Changes the duration or number of iterations this test will execute for. The change propagates live to all test runners currently running the test. If the new duration/iterations has already passed, the test will complete once the currently executing iteration finishes.

This only works for scenario types where Testable manages the concurrency (i.e. not JMeter or Gatling).

POST /executions/:id/live-extension

Request Body

For tests that are configured to run for a duration (in seconds):

{
   "newDuration": 360 
}

For tests that are configured to run for a certain number of iterations:

{
   "newIterations": 5
}
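Combining the route and request body, a sketch of a live extension call (base URL, key, and the execution id are placeholders):

```shell
# Sketch: extend a duration-based test to 360 seconds.
# Placeholders: base URL, key, execution id 12345.
CMD="curl -X POST 'https://api.testable.io/executions/12345/live-extension?key=YOUR_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{\"newDuration\": 360}'"
echo "$CMD"
```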