Route | Description |
---|---|
POST /start | Start test run |
POST /setup | Setup test without running |
PUT /executions/:id/manual-start | Manual start |
PUT /executions/:id/stop | Stop test run |
POST /executions/:id/live-extension | Extend test run |
Start test run
Uses a simple multipart form upload to set up and start a new test run. It first creates or updates all necessary parts (the test case, test configuration, and scenario) as required.
See below for an example per scenario type and test runner source combination.
POST /start
Parameters and Examples
Example
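Below is a minimal sketch of starting a Node.js load test. The base URL is inferred from other links on this page, the script name is a placeholder, and authentication is omitted; treat all three as assumptions and check your account settings for the exact endpoint and credentials.

```bash
# Sketch: start a Node.js load test via multipart form upload.
# Assumed base URL and placeholder file name; add your account's
# authentication (omitted here) as required.
curl -X POST "https://api.testable.io/start" \
  -F "code=@loadtest.js" \
  -F "testcase_name=My Test Case" \
  -F "concurrent_users=25" \
  -F "duration_mins=5" \
  -F "conf_testrunners[0].regions=aws-us-east-1"
```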
Parameters
Name | Description | Type | Required |
---|---|---|---|
code | Node.js script that each virtual user will execute on each iteration | File | |
init | Optional Node.js script that will execute once globally at the start of the test before any virtual users start | File | |
teardown | Optional Node.js script that will execute once globally at the end of the test after all virtual users finish | File | |
java_version | Which Java version to run your test with, if applicable. Possible values: 1.8, 11, 17. Generally defaults to 1.8 if not specified, though in some cases another version is used as required. | Text | |
jmeter_testplan | JMeter test plan file (*.jmx) to upload and use when running this test. To upload multiple test plans use -F "jmeter_testplan[0]=@/path/to/testplan1.jmx" -F "jmeter_testplan[1]=@/path/to/testplan2.jmx" | File | |
jmeter_version | Which version of JMeter to use or to just auto-detect it based on the test plan. Possible values include: 4.0, 3.3, auto. Default is auto. | Text | |
jmeter_capture_subresults | Whether to capture metrics for sub requests in the results. All sub requests are made either way; this setting only controls whether their metrics are captured or ignored. Defaults to true. | Text | |
jmeter_prefix_subresults | JMeter labels are used as the resource name for aggregating results. For sub results if this checkbox is checked the parent request's label is used as a prefix. For example: Home Page => http://myserver.com/foo. Defaults to true. | Text | |
jmeter_properties[*] | One or more JMeter properties to set at runtime (-J options). For example jmeter_properties[threads]=5. | Text | |
jmeter_system_properties[*] | One or more system properties to set at runtime (-D options). For example jmeter_system_properties[my_custom_flag]=true. | Text | |
jmeter_properties_file | A file with JMeter properties. | File | |
jmeter_system_properties_file | A file with system properties. | File | |
jmeter_additional_args | Any additional arguments to pass to the JMeter engine command line (e.g. -LWARN would set the JMeter log level to WARN). | Text | |
jmeter_plugins[*] | One or more files to upload and include as plugins (i.e. in the lib/ext folder). | File | |
jmeter_utilities[*] | One or more files to upload and include as utilities (i.e. in the lib folder). | File | |
Option 1 | |||
gatling_zip | A zip file with all the contents for your Gatling simulation structured according to the Gatling standard including code, data_files, bodies, libs, etc. The zip file will be extracted on the test runners prior to running Gatling. | File | |
Option 2 | |||
gatling_simulation | A *.scala file with all your simulation code. Useful for simple simulations. | File | |
gatling_version | Which version of Gatling to use. Defaults to the latest version. | Text | |
gatling_simulation_name | The simulation class name to run. Passed as the -s parameter to Gatling. If left blank Gatling will run all simulations it finds. | Text | |
gatling_users | How many users to simulate per Gatling process. Passed to Gatling as the system property "users". Defaults to 1. | Text | |
gatling_javaopts | Any extra options to pass to the Java process at test runtime. | Text | |
gatling_simulations_dir | Directory with the simulations as structured in your zip file. Defaults to user-files/simulations. | Text | |
gatling_bodies_dir | Directory with the request bodies as structured in your zip file. Defaults to user-files/bodies. | Text | |
gatling_data_dir | Directory with the data files as structured in your zip file. Defaults to user-files/data. | Text | |
gatling_libs_dir | Directory with any extra libraries as structured in your zip file. Defaults to lib. | Text | |
locust_file | Locust file (python) defining your test class. Passed as the --locustfile parameter. It can also be uploaded as a files[*] parameter directly or as part of a zip file. In that case specify the name of the file with this parameter. | File or Text | |
locust_host | Host to load test. Passed as the --host parameter. Not required if the host property is specified on the Locust class. | Text | |
locust_clients | Number of concurrent clients per Locust instance. Passed as the --clients parameter. | Text | |
locust_hatch_rate | The rate per second at which clients are spawned per Locust instance. Passed as the --hatch-rate parameter. | Text | |
locust_runtime | Required for Locust 0.9+ only. How long to run the test for (e.g. 60s, 5m, 1h, etc). Passed as the --runtime parameter. | Text | |
locust_requests | Required for Locust 0.8.x only. Number of requests to perform per Locust instance. Passed as the --num-request parameter. | Text | |
locust_log_level | Locust log level. Passed as the --loglevel parameter. Defaults to INFO. | Text | |
locust_classes | If your module contains multiple Locust classes you can optionally specify which one(s) to run here. Use a space to separate multiple entries. By default all are run. | Text | |
locust_step_load | If true, will pass the --step-load argument to Locust. | Text | |
locust_step_clients | The number of step clients. Passed as --step-clients to Locust. If specified, locust_step_load will default to true if unspecified. | Text | |
locust_step_time | The time interval per step (e.g. 1m, 30s, etc). Passed as --step-time to Locust. If specified, locust_step_load will default to true if unspecified. | Text | |
postman_collection | The postman collection to run. Either a file or a path to a file within a zip file uploaded separately. Must specify either this or postman_collection_url | File or Text | |
postman_collection_url | A URL to a postman collection to run. | Text | |
puppeteer_script | The Puppeteer script to run, or the path to your script within a zip file that was uploaded. | File or Text | |
playwright_runstyle | Whether your test uses Playwright Library (library) or Playwright Test (test) (https://playwright.dev/docs/library). Defaults to library. If using Playwright Test you must include a zip file of your project that includes a playwright.config.ts or playwright.config.js. | Text | |
playwright_script | If using Playwright Library, the Playwright script to run, or the path to your script within a zip file that was uploaded. | File or Text | |
playwright_conf_file | If using Playwright Test, the relative path within your project to the config file. Defaults to playwright.config.ts. | Text | |
conf_file | For Webdriver.io and Protractor only, the test runner configuration file. It can also be uploaded as a files[*] parameter directly or as part of a zip file. In that case specify the name of the file with this parameter. Defaults to wdio.conf.js for Webdriver.io and conf.js for Protractor. | File or Text | |
run_style | For Webdriver.io only. Either "script" if you want to run in standalone mode or "file" if you are providing a wdio.conf.js configuration file and using the Webdriver.io test runner. In standalone mode selenium_primary_spec must also be specified. Defaults to "file" if a conf_file is set. | File or Text | |
record_video | Whether or not to capture a video for each virtual user iteration. Defaults to false. | Text | |
auto_send_screenshots | Whether or not to automatically capture the response from every screenshot request into the results. If you are already writing your screenshots into the Testable output directory, you may want to disable this feature to avoid duplicated screenshots in the results. Defaults to true. | Text | |
auto_screenshot_trigger | If set, Testable will automatically capture screenshots at the triggered time. Values: afterFailure (after failed commands), always (after all commands), afterFailed (after failed test steps), afterEvery (after every test step), afterAllOnFailure (on failure after entire test finishes), afterAll (after entire test finishes). | Text | |
auto_screenshot_type | If an auto screenshot trigger is set, this parameter indicates whether to take a screenshot of the browser or the desktop. Defaults to browser. | Text | |
mocha_timeout | If framework = Mocha you can set the timeout for your Mocha test execution. Defaults to 1 hour (3600000 ms). | Text | |
mocha_args | If framework = Mocha you can pass additional Mocha command line arguments via this parameter. | Text | |
child_logs | Whether or not to capture logging from the Selenium, Webdriver.io, and NPM processes. Defaults to true. | Text | |
capture_network_metrics | Whether or not to capture metrics from HTTP requests. Defaults to true. | Text | |
capture_websocket_metrics | Whether or not to capture metrics from websocket connections. Defaults to true. | Text | |
selenium_main_class | For Selenium Java the full name (including packages) of the main class to run. Not relevant for Serenity BDD tests. | Text | |
phantomjs_url | If you want to simply load a URL in the browser use this property to specify the URL. Only required if phantomjs_script and phantomjs_browser_script are not specified. | Text | |
phantomjs_script | The PhantomJs or SlimerJs script to run. Required unless phantomjs_url or phantomjs_browser_script are specified. | File | |
phantomjs_browser_script | An HTML file to load in the browser. Can be referenced in your script as __index.html. If this is specified, phantomjs_script defaults to a simple script that loads this page. | File | |
phantomjs_version | Which version of PhantomJS or SlimerJS to use. Current values include: Phantomjs-2.1.1, Phantomjs-2.5.0-beta, Slimer-1.0.0. Current default is Slimer-1.0.0. | Text | |
phantomjs_childlogs | Whether or not to capture logging from the PhantomJS process. Defaults to true. | Text | |
har_file | The HAR file to replay as the scenario. | File | |
selenium_primary_spec | The script to actually execute with Selenium JavaScript (e.g. test.js or test.ts) or Webdriver.io in standalone mode. Required for Selenium JavaScript and Webdriver.io in standalone mode (i.e. run_style = script). | Text | |
device | The device to emulate. Possible values can be found at https://api.testable.io/browser-versions. You can either pass the name of a device found in that list or a JSON object that follows the same structure as the devices in the list. Defaults to Desktop 1920 x 1080. | Text | |
language | For Node.js based scenarios indicate whether the code is javascript (default) or typescript. | Text | |
selenium_bindings | Possible values include: wdio, protractor, java, javascript. Defaults based on other attributes that are provided when possible. | Text | |
selenium_source | Gives us more information about what you are uploading (e.g. uploadAll). | Text | |
framework | For Selenium Java tests this can either be Serenity (if you are using Serenity BDD) or None (to run a regular Selenium Java main class). For Puppeteer, Playwright, and Selenium Javascript tests this can either be Mocha or None. Defaults to None. | Text | |
selenium_version | Selenium standalone server version. Current default is 3.141.59. | Text | |
framework | This can either be JUnit or TestNG (if you are using a testing framework) or None (to run a regular Java main class). Defaults to None. | Text | |
java_test_classes | If using JUnit or TestNG, a comma separated list of test classes to execute. | Text | |
java_main_class | If framework is None or not specified and java_source is not uploadExecutable, then this property specifies the full name (including packages) of the main class to run. If you are uploading code and want to be able to edit it on the scenario page make sure the class name ends with .java (e.g. TestClass.java). If not simply provide the class name. | Text | |
java_source | Gives us more information about what you are uploading. | Text | |
java_build_tool | Set to either maven or gradle if you are uploading a project with source code you want to build. Not necessary if you set java_source = uploadZip and the root of your project has either a pom.xml or build.gradle. | Text | |
java_build_tool_args | The arguments to pass maven or gradle if you are providing a project zip to build and run. | Text | |
java_system_properties[name] | Any additional system properties to pass to the Java runtime. For example java_system_properties[foo]=bar will get passed as -Dfoo=bar to your scenario. | Text | |
node_version | Which Node.js version to run your test with, if applicable. Possible values: 10, 14, 18, latest. Defaults to latest. | Text | |
root_folder | If all of your project files are in a subdirectory specify that relative path here. Your test will run from that subdirectory if specified. | Text | |
additional_setup_command | Specify an additional shell command to run before your test starts on each test runner. | Text | |
teardown_command | Specify a shell command to run after your test finishes on each test runner. | Text | |
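As one scenario-type example, the sketch below starts a JMeter test using parameters from the table above. The base URL, file path, and omitted authentication are assumptions.

```bash
# Sketch: start a JMeter test from an uploaded test plan.
curl -X POST "https://api.testable.io/start" \
  -F "jmeter_testplan=@/path/to/testplan.jmx" \
  -F "jmeter_version=auto" \
  -F "virtual_users=50" \
  -F "instances=2" \
  -F "duration_secs=300"
```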
Multiple Devices
As an alternative to the single device parameter described above, you can specify one or more devices to emulate using the devices[*] parameters.
Name | Description | Type | Required |
---|---|---|---|
devices[*].browser | Which browser to launch for this device. Possible values include chrome, firefox. Defaults to chrome. | Text | |
devices[*].version | The browser version to launch. Possible values can be found at https://api.testable.io/browser-versions. Defaults to Latest which will always use the latest supported version. | Text | |
devices[*].device | The device to emulate. Possible values can be found at https://api.testable.io/browser-versions. You can either pass the name of a device found in that list or a JSON object that follows the same structure as the devices in the list. Defaults to Desktop 1920 x 1080. | Text | |
devices[*].weight | The weight of this device relative to the others. Weights do not need to sum up to 100. For example if you specify two devices with weights 1 and 2 then the first device would be assigned to 1/3 of the concurrent users and the second device to 2/3. By default all devices are distributed evenly across the concurrent users. | Text |
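For illustration, the 1-and-2 weighting from the description above (a 1/3 : 2/3 split of the concurrent users) could be passed as follows; the endpoint, script name, and omitted authentication are assumptions.

```bash
# Sketch: emulate two devices with a 1:2 weight split. The first
# device receives 1/3 of the concurrent users, the second 2/3.
curl -X POST "https://api.testable.io/start" \
  -F "code=@loadtest.js" \
  -F "concurrent_users=30" \
  -F "devices[0].browser=chrome" \
  -F "devices[0].weight=1" \
  -F "devices[1].browser=firefox" \
  -F "devices[1].weight=2"
```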
Common Parameters
Name | Description | Type | Required |
---|---|---|---|
virtual_users | The total number of virtual users to distribute across the instances/engines. (virtual_users / instances) is passed to each JMeter engine as JMeter property threads and accessible via the ${__P(threads)} JMeter syntax. | Text | |
virtual_users | The total number of virtual users to distribute across the instances/engines. (virtual_users / instances) is passed to each Gatling engine as system property users and accessible via the Integer.getInteger("users") syntax. | Text | |
virtual_users | The total number of virtual users to distribute across the instances/engines. (virtual_users / instances) is passed to each Locust engine via the --clients parameter. | Text | |
instances | Total number of instances to run as part of this test | Text | |
test_type | Type of test: Load (default), Functional, or Monitor. If unset or Load then you must specify either a number of iterations or a duration. | Text | |
rampup_secs | The rampup for each JMeter engine. Passed to each JMeter engine as JMeter property rampup and accessible via the ${__P(rampup)} JMeter syntax. | Text | |
duration_secs | The duration each JMeter engine should run for. Passed to each JMeter engine as JMeter property duration and accessible via the ${__P(duration)} JMeter syntax. | Text | |
duration_secs | The duration each Gatling engine should run for. Passed to each Gatling engine as system property duration and accessible via the Integer.getInteger("duration") syntax. | Text | |
duration_mins | The total runtime for each Locust engine in minutes. Passed to each Locust engine via the --runtime parameter. | Text | |
concurrent_users | Total number of concurrent users to simulate as part of this test. | Text | |
duration_mins | Duration (in minutes) of the test after the ramp up time is finished. Each virtual user will keep looping through the scenario over and over until the specified duration has passed. Testable waits for the currently running iteration of each virtual user to complete before ending the test instead of killing them abruptly when the duration has been reached. Either duration_mins or iterations is required. | Text | |
iterations | Number of iterations of the scenario that each virtual user will run. Either duration_mins or iterations is required. | Text | |
rampup_mins | Number of minutes over which to ramp up from 0 virtual users to the specified number. Defaults to 0 minutes. | Text | |
testcase_name | Name of the test case. A test case provides a logical grouping for one or more test configurations and scenarios. Defaults to "Main Test Case". You can also specify testcase_folder if you want this test case to go somewhere other than the root folder (e.g. /Folder1/Nested2). | Text | |
scenario_name | Name of the scenario. This name is auto-generated for you if not specified. | Text | |
conf_name | Name of the test configuration. This name is auto-generated for you if not specified. | Text | |
view | Upload a custom view and set it as the default for this test case. Create a custom view on the Testable website and then download the definition file using the action menu in the upper right of the test results => Export View Definition. | File | |
params[*] | Any scenario parameters, set on the test configuration. Example: params[threads]=5. If you want a parameter to be encrypted, also pass params[threads].encrypted=true. | Text | |
files[*] | Any additional files that are required to run your scenario including CSVs, data files, etc. | File | |
repo_url | If you want your scenario to be linked to a git repository specify the URL here. The repository will be cloned onto the test runner at the time of test execution. You can either set up authentication beforehand via https://a.testable.io/account/vcs-roots if the repository is not publicly available, or pass it here. For username auth use the repo_username and repo_token parameters. For private key auth use repo_pvt_key_name (defaults to id_rsa) and repo_pvt_key_contents, which can either be the private key itself or a reference to a file. | Text | |
platform | The OS on which to host the test runner instances spun up as part of this test. Possible values are \[blank\], linux, win32. Linux: runs in a Docker container that uses a minimal Ubuntu 22.04 LTS as the base system (phusion/baseimage). Windows: runs on Microsoft Windows 2016 Datacenter edition with Containers. | Text | |
iteration_sleep_secs | Number of seconds each virtual user sleeps between iterations of the scenario. Defaults to 10 seconds. | Text | |
percentiles | For all timing metrics like latency, a comma separated list of what percentiles to capture. Defaults to 50,95,99. Decimal notation is supported (e.g. 50,95,99,99.9,99.99). | Text | |
manual_start | If set to true, Testable will wait for the user to manually start the test after all test runners are allocated and initialized. The test can be manually triggered via the results page or using the PUT /executions/:id/manual-start route. | Text | |
results_tags | Comma separated list of tags to apply to the test result for grouping/querying later. Example: tag1,tag2. Optional. | Text | |
max_duration_mins | Optional number of minutes after which Testable will hard stop the test. By default we let your scenario keep running until it finishes. | Text | |
start_concurrent_users | Used to set up a step function for load generation. Testable will start with this number of virtual users and steadily add step virtual users across the duration of the test. If specified you must also specify step and duration_mins. | Text | |
step | Required if start_concurrent_users is specified. This indicates the number of users to add on each step as Testable builds from the starting number to the final concurrent_users across the duration_mins. | Text | |
note | Note for the execution | Text | |
Test Runner | |||
testrunners_multiplier | Numeric value (default = 1) indicating how much to scale up or down the VM resources assigned to each virtual user. | Text | |
testrunners_clientips | The number of client IPs from which to run the virtual users. Cannot be less than 1 or bigger than the number of virtual users. Depending on the number of VUs and the resource multiplier there could be further limitations. | Text | |
testrunners_tags[key] | One or more tags to apply to all instances spun up as part of this test. For example testrunners_tags[Cost Center]=19824. | Text | |
testrunners_spot | Boolean indicating whether or not to use spot instances to run your test. Defaults to false. | Text | |
Shared Test Runners | |||
conf_testrunners[*].regions | A comma separated list of region names. Possible values for the public shared grid include: aws-us-east-1, aws-us-west-2, aws-ap-south-1, aws-ap-southeast-1, aws-ap-southeast-2, aws-eu-central-1, aws-eu-west-1. For a private self-hosted regions use whatever region name you chose when launching your test runners. | Text | |
conf_testrunners[*].public | Boolean indicating whether this configuration is for the public shared grid or your private self-hosted region. Defaults to true. | Text | |
allow_shared_runners | If true and all test runners in any of your chosen regions are busy running other tests, Testable will not wait and instead run your test alongside another one. False by default. | Text | |
max_runners | Limit the number of test runners your test will be assigned to. By default Testable will utilize as many of the free test runners as possible. So for example if you have a test with 10 VUs and 10 runners, by default Testable will assign 1 VU to each runner. If you set this field to 5 it would assign 2 VUs to 5 runners and not utilize the other 5. | Text | |
reassign_on_fail | Whether or not to reassign the test (or part of the test on a particular test runner for load tests) to a different test runner on failure. If not specified here the value will be taken from the organization settings (Org Management => Settings => Test Runner Related => Reassignment). | Text | |
Per Test Region | |||
conf_testrunners[*].regions[*].name | The region name in which to spin up the instances. Any public AWS region is available including us-east-1, us-east-2, us-west-1, us-west-2, ap-south-1, ap-southeast-1, ap-southeast-2, eu-central-1, eu-west-1. | Text | |
conf_testrunners[*].regions[*].instance_type | Instance type to launch. Defaults to m4.large. | Text | |
conf_testrunners[*].regions[*].instances | Number of EC2 instances to launch for this test. Defaults to Testable's automatic recommendation based on the instance type, scenario type, and number of concurrent users. | Text | |
conf_testrunners[*].regions[*].spot_max_price | Maximum price for spot instances. If specified, Testable will launch spot instances for your test. | Text | |
conf_testrunners[*].regions[*].instance_type | Instance type to launch. Defaults to e2-standard-2. | Text | |
conf_testrunners[*].regions[*].instances | Number of VM instances to launch for this test. Defaults to Testable's automatic recommendation based on the instance type, scenario type, and number of concurrent users. | Text | |
conf_testrunners[*].regions[*].spot_max_price | Maximum price for spot instances. If specified, Testable will launch spot instances for your test. | Text | |
conf_testrunners[*].regions[*].instance_type | Instance type to launch. Defaults to Standard_D2_v2. | Text | |
conf_testrunners[*].regions[*].instances | Number of VMs in the VM scale set launched for this test. Defaults to Testable's automatic recommendation based on the instance type, scenario type, and number of concurrent users. | Text | |
Test Runner Source | |||
conf_testrunners[*].name | Name of the test runner source when setting up your own AWS account as a source. Defaults to "My Aws" if not specified. If you set up your source via the website, you do not need to specify the access key or cross account role here. | Text | |
conf_testrunners[*].aws_role_arn | AWS cross account role that grants Testable the access necessary to spin up self hosted EC2 instances. This is an alternative to providing credentials directly. | Text | |
conf_testrunners[*].aws_role_external_id | AWS external ID for cross account role access to provide additional security. | Text | |
conf_testrunners[*].aws_access_key_id | AWS Access Key ID. Required if using credentials based access for per-test EC2 test runners. | Text | |
conf_testrunners[*].aws_secret_key | AWS Secret Key. Required if using credentials based access for self hosted EC2 test runners. | Text | |
conf_testrunners[*].vpc | The VPC in which to spin up instances. If not chosen the default VPC in your AWS account is used. | Text | |
conf_testrunners[*].subnet | The subnet in which to spin up instances. If not chosen one is chosen at random from within the VPC. | Text | |
conf_testrunners[*].key_pair | Key pair name for connecting via SSH to any instances spun up as part of your test. First one in the list within your account is chosen by default. | Text | |
conf_testrunners[*].t2_unlimited | For t2 family instances, a boolean indicating whether to enable CPU bursting using the T2 Unlimited feature. Defaults to true. | Text | |
conf_testrunners[*].elastic_ips | To assign elastic IPs to the test runner instances either set this field to "any" which will use any available elastic IP address in your account if self-hosted or any allocated elastic IPs if Testable hosted. You can also specify a comma separated list of public IPs to choose a specific set of elastic IPs. If empty, elastic IPs will not be enabled. | Text | |
conf_testrunners[*].elastic_ips_force_associate | Only applicable if elastic IPs are enabled. If set to true then elastic IPs will be reassociated to test runner instances even if they are currently associated with another EC2 instance. | Text | |
conf_testrunners[*].name | Name of the test runner source when setting up your own Azure account as a source. Defaults to "My Azure" if not specified. | Text | |
conf_testrunners[*].azure_tenant_id | Azure Tenant ID. See our Azure self-hosted guide for more details. Required for self hosted Azure VM scale set test runners. | Text | |
conf_testrunners[*].azure_subscription_id | Azure Subscription ID. See our Azure self-hosted guide for more details. Required for self hosted Azure VM scale set test runners. | Text | |
conf_testrunners[*].azure_client_id | Azure Client ID. See our Azure self-hosted guide for more details. Required for self hosted Azure VM scale set test runners. | Text | |
conf_testrunners[*].azure_client_secret | Azure Client Secret. See our Azure self-hosted guide for more details. Required for self hosted Azure VM scale set test runners. | Text | |
conf_testrunners[*].resource_group | The Azure resource group to launch all resources into. Defaults to "testable_[region]" if not specified. | Text | |
conf_testrunners[*].network | The Azure virtual network in which to launch the VM scale set. Defaults to "testable-vnet-[region]" if not specified. | Text | |
conf_testrunners[*].subnet | The subnet within the chosen network in which to launch the VM scale set. Defaults to looking for a subnet called "agents" first and, if not found, using a random subnet that already exists. If no subnet exists in the network, one called "agents" will be created. | Text | |
conf_testrunners[*].storage_account | The storage account within the chosen resource group to use for copying the Testable VM image blob. Defaults to using the first storage account found if none is specified. If no storage account exists in the resource group, one with a random name is created. | Text | |
conf_testrunners[*].low_priority | Whether or not to create low priority VMs in the scale set. Defaults to false. | Text | |
conf_testrunners[*].image | Defaults to the standard test runner image (agent-external). To use the Flash enabled image use agent-flash-external. | Text | |
conf_testrunners[*].tags[key] | One or more tags to apply to all instances spun up as part of this test. For example conf_testrunners[0].tags[Cost Center]=19824 | Text | |
conf_testrunners[*].name | Name of the test runner source when setting up your own GCP account as a source. Defaults to "My GCP" if not specified. | Text | |
conf_testrunners[*].gcp_service_account | GCP Service Account JSON. You can specify the JSON as text or you can upload the file using -F "gcp_service_account=@/my/local/path/to/service.json". See our GCP self-hosted guide for more details. Required for self hosted GCP VM instance test runners. | Text | |
conf_testrunners[*].network | GCP Network. See our GCP self-hosted guide for more details. Required for self hosted GCP VM instance test runners. | Text | |
conf_testrunners[*].zone | GCP Zone. See our GCP self-hosted guide for more details. Optional for self hosted GCP VM instance test runners. If not selected, then Testable will select a random zone for the specified region. | Text | |
conf_testrunners[*].subnet | GCP Subnet. See our GCP self-hosted guide for more details. Required for self hosted GCP VM instance test runners. | Text | |
Notifications | Configure one or more notification targets for your test run. If not specified the organization default notifications are used (as configured via Org Management => Settings => Notifications). | ||
notifications[*].medium | Possible values: Email, Text, WhatsApp, Voice, HTTP. Case sensitive. | Text | |
notifications[*].target | The target(s) for the notification. Either a comma separated list of email addresses, phone numbers, or URLs depending on the medium. For HTTP notifications you can optionally choose to mask certain sensitive parts of a URL when it's printed to the test log. To do this, instead of https://foo.com/bar?key=sensitivetext you would specify secure:https://foo.com/bar?key=[sensitivetext]. In secure URLs you can escape the [ character with a backslash before it (\[). | Text | |
notifications[*].on_success | Possible values: None (no notification on success), Always (notification on success always), or First (notification on the first success after a failure). Defaults to Always. | Text | |
notifications[*].on_failure | Possible values: None (no notification on failure), Always (notification on failure always), or First (notification on the first failure after a success). Defaults to Always. | Text | |
notifications[*].failure_consecutive_times | How many consecutive times does the test need to fail before it is considered a failure. Defaults to 1. | Text | |
notifications[*].via_runner | Boolean. For HTTP notifications whether or not to send the notification via the test runner when the test is run on self-hosted runners. Defaults to false. This is useful if you need the notification to originate from inside of your network. | Text | |
KPIs | One or more key performance indicators (KPIs) can be set to define what you consider a successful test run. | ||
kpis[*].expr | An expression that specifies a KPI for your test. The format depends on the kind of metric you want to define the KPI for. See our metrics guide for a list of valid metric names and our custom metrics guide for more details on the possible metric types. | Text | |
kpis[*].break_on_fail | Boolean. If set to true, the KPI will be continuously evaluated as the test runs. As soon as the KPI is no longer met the test will be stopped. Defaults to false. | Text | |
kpis[*].type | The type of KPI. One of four possible types; defaults to value. | Text | |
kpi_break_eq_test_fail | If one or more KPIs have break_on_fail=true and the breaking point is hit, this setting controls whether the test is then automatically considered a failure. Defaults to true. | Text | |
Billing | |||
billing_strategy | Either MinimizeCost (default) or ASAP. | Text | |
billing_categories | Only applicable if your account has multiple plans of different categories. A comma separated list of billing plan categories to help Testable choose between multiple plans to bill your test against when the cost would be the same utilizing more than one plan. Valid values: TestRunner, VU, BrowserSession, LiveSession, Monitor. The order of the options in the comma separated list will help Testable make the choice. Optional. | Text |
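Putting several of the common parameters together, a fuller invocation might look like the following sketch. The endpoint, file name, email address, and omitted authentication are placeholders.

```bash
# Sketch: load test with ramp-up, result tags, two regions,
# and an email notification.
curl -X POST "https://api.testable.io/start" \
  -F "code=@loadtest.js" \
  -F "testcase_name=Nightly Load Test" \
  -F "concurrent_users=100" \
  -F "rampup_mins=2" \
  -F "duration_mins=10" \
  -F "results_tags=nightly,load" \
  -F "notifications[0].medium=Email" \
  -F "notifications[0].target=team@example.com" \
  -F "conf_testrunners[0].regions=aws-us-east-1,aws-eu-west-1"
```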
Setup test without running
Uses a multipart form upload to set up a new test configuration without running it. Parameters are exactly the same as the /start API.
POST /setup
NOTE: The note attribute is not supported.
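A sketch of a setup-only call, with the same placeholder endpoint, file name, and omitted authentication as the /start examples above:

```bash
# Sketch: create/update the test case, configuration, and scenario
# without actually running the test.
curl -X POST "https://api.testable.io/setup" \
  -F "code=@loadtest.js" \
  -F "concurrent_users=25" \
  -F "duration_mins=5"
```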
Manual start
Tests that were started with manual_start=true will simply sit and wait after the test runners are allocated and initialized. This route tells Testable to start executing the test across the test runners. This can be useful with on-demand test runners since we don’t know exactly how long AWS EC2 instances take to spin up. Start the test with manual_start=true well in advance of your desired test run time and when you trigger the manual start it will immediately start generating load.
PUT /executions/:id/manual-start
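For example, with a hypothetical execution id of 12345 and the same assumed base URL as above:

```bash
# Sketch: trigger a test started with manual_start=true once all
# test runners are allocated and initialized.
curl -X PUT "https://api.testable.io/executions/12345/manual-start"
```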
Stop execution
Request to stop the execution before it completes. Note that once the execution is stopped it will take a little while before it reaches the completed: true status due to cleanup, result aggregation, etc.
PUT /executions/:id/stop
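For example (hypothetical execution id, assumed base URL):

```bash
# Sketch: request that execution 12345 stop before it completes.
curl -X PUT "https://api.testable.io/executions/12345/stop"
```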
Extend execution
Changes the duration or number of iterations this test will execute for. Propagates live to all test runners currently running the test. If the new duration/iterations has already passed, the test will complete once the currently executing iteration finishes.
This only works for scenario types where Testable manages the concurrency (i.e. not JMeter or Gatling).
POST /executions/:id/live-extension
Request Body
For tests that are configured to run for a duration (in seconds):
{
"newDuration": 360
}
For tests that are configured to run for a certain number of iterations:
{
"newIterations": 5
}
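A sketch of the full call with a JSON body (hypothetical execution id, assumed base URL):

```bash
# Sketch: extend a duration-based test to 360 seconds.
curl -X POST "https://api.testable.io/executions/12345/live-extension" \
  -H "Content-Type: application/json" \
  -d '{"newDuration": 360}'
```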