Create a Test Configuration

Getting Started

A test configuration defines the load parameters that specify which scenario to run and how much load to generate (e.g. concurrent users, regions, duration).

To create a new test configuration, first Create a Test Case and define your test scenario.

Click on the new test case name on the left to open the 'Test Configurations' tab. Any existing configurations appear with basic information from their last execution (if any). You can rename, stop, or delete them from here.

To create a new configuration, press the New Configuration button. This button is also available from any scenario definition.

New Test


Required Parameters

  1. Schedule: Whether to run the test immediately or on a schedule.

    1. Immediate: Run the test immediately and on demand via the Start button or API.
    2. Later: Run once at a later point in time. Can also be run on demand via the Start button or API.
    3. Recurring: Define a regular schedule to run your test. Can also be run on demand via the Start button or API.
  2. Load Profile: The amount of load to generate and exactly what shape it takes.

    1. Flat: Ramp up to a constant number of concurrent clients per region for the duration of the test.
    2. Step: Gradually increase the number of concurrent clients in a step function. Specify the number of concurrent clients to start and finish with, plus the step size. The steps are distributed evenly across the duration of your test. At most 50 steps are allowed per region.
    3. Find My Breaking Point: Let Testable find the maximum number of users that can concurrently execute your scenario without performance degrading below your acceptable standards. This means you must specify one or more breaking points that express what you consider unacceptable performance. Testable gradually increases the number of concurrent clients until either a breaking point is hit or your infrastructure successfully handles the maximum concurrent clients per test allowed under your account.
  3. Test Runners: Choose the test runners from which to run this test (e.g. on our public shared grid). Each test runner region will simulate the number of concurrent clients defined above.

  4. Length: Select Iterations to have each client execute the scenario a set number of times regardless of how long it takes. Choose Duration if you want each client to continue executing the scenario for a set amount of time (in minutes).

  5. Scenario: Choose which scenario to execute.

  6. Success Criteria: A set of criteria which are evaluated when the test finishes to decide whether the test was successful or not. All conditions must pass in order for the test to be considered a success. Default success criteria are defined per scenario type at Account => Settings. These criteria can be updated and are used by default, but can also be overridden per test configuration.
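As a rough illustration of the Step load profile described above, the following Python sketch spreads the steps evenly across the test duration. The function and parameter names (`start`, `finish`, `step_size`, `duration_mins`) are illustrative assumptions, not Testable's actual configuration fields:

```python
def step_profile(start, finish, step_size, duration_mins):
    """Return (minute, concurrent_clients) checkpoints for a step
    load profile, with steps spread evenly across the duration."""
    steps = []
    level = start
    while level < finish:
        steps.append(level)
        level += step_size
    steps.append(finish)  # always end at the target client count
    if len(steps) > 50:   # the docs cap steps at 50 per region
        raise ValueError("at most 50 steps allowed per region")
    interval = duration_mins / len(steps)
    return [(round(i * interval, 2), s) for i, s in enumerate(steps)]

# 10 -> 50 clients in steps of 10 over a 10 minute test:
print(step_profile(start=10, finish=50, step_size=10, duration_mins=10))
# [(0.0, 10), (2.0, 20), (4.0, 30), (6.0, 40), (8.0, 50)]
```

Each tuple marks when (in minutes from test start) a new step begins and how many concurrent clients run during it, per region.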

Optional Parameters

  1. Breaking Points: Specify breaking points to indicate acceptable performance thresholds. If any breaking point is hit, the test stops executing. Breaking points can be set on any metric, including custom ones. An example breaking point: firstReceivedMs-p50 [Active value] >= 1000ms (i.e. median response time greater than or equal to 1 second). Multiple breaking points can be specified.

  2. Email Notifications: Whether or not to send an email notification. Can be configured to send only when the test fails or anytime the test completes. Default notification settings are defined under Account => Settings.

  3. Network Connection: Choose a network type to emulate, including various mobile networks (2G, 3G, LTE, etc.) or a custom configuration. The limits set here are applied per concurrent user and adjusted automatically as your test starts and stops users. This even works with tools like JMeter, Gatling, and Locust: if one process starts 100 users, each user/thread gets its own limits.

  4. Iteration Sleep (secs): Number of seconds to sleep between iterations of the scenario on each concurrent client. Minimum is 1 second, default is 10 seconds.

  5. Rampup (secs): Ramp up period. Concurrent clients start evenly across this period. Minimum is 0, default is 60 seconds.

  6. Scenario Params: If your scenario defines parameters, their values are specified per test configuration.

  7. Percentiles: Comma-separated list of percentiles to calculate for all timing metrics. Defaults to 50, 95, 99. An example of a timing metric is connectionOpenMs. No more than 10 percentiles can be calculated.

  8. Capture All Output: Some scenario types support capturing screenshots and other output. By default this output is sampled in a similar fashion to traces.

  9. Traces Enabled: Enable or disable request tracing. By default, Testable samples some requests and captures all request and response headers/body, which are then made available in the results. Note: every request has its metrics captured (latency, bandwidth, custom metrics, etc.); this setting only controls the tracing functionality.

  10. Network Request Filters: Use this section to block certain network calls from being made during your test. For example, to block all requests to Google Analytics, you might add its hostname to the blacklist.

    Items on the blacklist can be hostnames (no wildcards), IP addresses, or IP ranges. Note that full URLs are not supported. Any network request that matches a blacklist entry will not be made unless it also matches a whitelist entry. For example, to block all network requests except those to a single host, you would add an entry covering everything to the blacklist and that host to the whitelist.

    Note that any blocked network calls will show up as connection failures in the test results (e.g. CONNECT .. success rate = 0%). To filter out the failed network calls from the test results you must also add the hostname to the result collection filter described below.

    Default network request filters are defined under Account => Settings.

  11. Result Collection Filters: Unlike network request filters, these filters do not block the network call from being made; they only keep it out of the collected test results. So if a test must download certain third-party resources to work, but you do not want those network calls cluttering your results, use result collection filters.

    Uses the same blacklist/whitelist approach as network request filters, but supports wildcards (e.g. *) and matches on full URLs instead of hostnames or IPs. Any URL that matches a blacklist entry will not be collected as part of the test results unless it also matches a whitelist entry.

    Default result collection filters are defined under Account => Settings.

  12. Image Comparisons: Some scenario types support capturing screenshots. Those images can be automatically compared against previous test runs or against a baseline image uploaded to the scenario definition, which can be useful for detecting anomalies. A percentage difference relative to the baseline is calculated for each image and reported in the results, both as a metric (Image Diff) and alongside the image for analysis.
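The blacklist/whitelist semantics used by both filter types above can be sketched in Python. This is only an illustration of the matching rule ("filtered if it matches a blacklist entry, unless it also matches a whitelist entry"); the function names are my own, and the wildcard behavior shown applies to result collection filters, which match full URLs:

```python
from fnmatch import fnmatch  # shell-style wildcard matching, e.g. "*"

def is_collected(url, blacklist, whitelist):
    """Return True if a URL's results should be kept: a URL is dropped
    only if it matches the blacklist and not the whitelist."""
    blacklisted = any(fnmatch(url, pat) for pat in blacklist)
    whitelisted = any(fnmatch(url, pat) for pat in whitelist)
    return not blacklisted or whitelisted

# Collect results only for calls to one (hypothetical) host:
blacklist = ["*"]                                # everything
whitelist = ["https://myapp.example.com/*"]      # except this host

print(is_collected("https://myapp.example.com/api", blacklist, whitelist))            # True
print(is_collected("https://www.google-analytics.com/collect", blacklist, whitelist)) # False
```

Network request filters follow the same blacklist-unless-whitelisted rule, but match on hostnames, IPs, or IP ranges without wildcards rather than on URL patterns.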
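To make the Percentiles option above concrete, here is a minimal sketch of computing the default 50/95/99 percentiles over a timing metric such as connectionOpenMs. The nearest-rank method used here is an assumption for illustration; the document does not specify Testable's exact interpolation:

```python
from math import ceil

def percentile(values, p):
    """Nearest-rank percentile for p in (0, 100]."""
    ordered = sorted(values)
    rank = max(1, ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

# Hypothetical connectionOpenMs samples from one test run:
samples = [120, 95, 310, 150, 88, 102, 99, 480, 133, 101]
for p in (50, 95, 99):
    print(f"p{p} = {percentile(samples, p)}")
# p50 = 102
# p95 = 480
# p99 = 480
```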

Define Test

Once you press the Start Test button, the configuration is created and the first execution starts automatically. The browser is redirected to the execution results page, where you can watch the results flow in real time.