Core Concepts

Testable centers on four building blocks: test cases, scenarios, configurations, and executions. Understanding how they interact makes the rest of the platform intuitive.

| Concept | Purpose | Owned by | Key docs |
| --- | --- | --- | --- |
| Test Case | Container that groups scenarios, configurations, and results for a project or service. | Teams / projects | Create a test case |
| Scenario | Definition of the steps each virtual user performs. | QA / developers | Scenario hub |
| Configuration | Runtime instructions: load model, runners, browsers, success criteria, schedules. | Performance / infra | Configuration hub |
| Execution | A concrete run that binds a scenario + configuration at a point in time. | Automation / CI | Executions guide |

Test Cases

  • Organize related scenarios and configurations under a single namespace.
  • Provide historical trends: every execution attached to a test case is available for comparison, diffing, and alerting.
  • Create them from the UI or via the API; see the test case endpoints.
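Creating a test case over HTTP follows the usual REST pattern. The sketch below builds (but does not send) such a request; the host, endpoint path, payload fields, and auth scheme are illustrative assumptions, not the documented Testable API, so consult the test case endpoints for the real contract.

```python
import json
import urllib.request

API_KEY = "your-api-key"               # assumed auth scheme
BASE_URL = "https://api.testable.io"   # assumed host

def build_create_test_case_request(name: str, description: str = "") -> urllib.request.Request:
    """Build (but do not send) a POST request that would create a test case."""
    payload = json.dumps({"name": name, "description": description}).encode()
    return urllib.request.Request(
        f"{BASE_URL}/test-cases?key={API_KEY}",  # endpoint path is an assumption
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_create_test_case_request("checkout-service")
```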

Scenarios

  • Describe what each virtual user does: browser journeys, API calls, load scripts, or recorded traffic.
  • Created via upload, Git sync, artifact links, or Testable’s recording proxy.
  • Parameterize behavior using scenario parameters so a single script can run across multiple configs or environments.
  • Supported scenario types include Playwright (Test, Library, Python), Selenium/Webdriver.io, Puppeteer, JMeter, Gatling, Locust, Node.js, Java, Postman, HAR replay, OpenFin, PhantomJS, recordings, and more. Browse the full list in the Scenario hub.
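One common way scenario parameters surface to a script at runtime is through environment variables with sensible defaults, so a single script behaves correctly across configurations and environments. A minimal sketch of that pattern (the parameter names `TARGET_URL` and `THINK_TIME_MS` are illustrative, not Testable-defined):

```python
import os

# Read scenario parameters from the environment with defaults, so one
# script can be reused across configurations and environments.
# The parameter names below are illustrative assumptions.
TARGET_URL = os.environ.get("TARGET_URL", "https://staging.example.com")
THINK_TIME_MS = int(os.environ.get("THINK_TIME_MS", "500"))

def describe_run() -> str:
    """Summarize the effective parameters for this virtual user."""
    return f"hitting {TARGET_URL} with {THINK_TIME_MS}ms think time"

print(describe_run())
```

Swapping a parameter set in a different configuration changes the values without touching the script itself.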

Configurations

  • Capture how a scenario should execute: total users, arrival rate, ramp patterns, duration, regions, and browser channels.
  • Reuse across environments by swapping parameter sets or data files.
  • Attach success criteria, thresholds, tags, filtering rules, or schedules to automate quality gates.
  • Select from multiple test runner options: Testable-managed cloud, your own AWS/Azure/GCP accounts, or self-hosted agents.
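The knobs a configuration captures can be pictured as a small structured document. The sketch below uses Python for illustration; the key names and the success-criteria evaluator are assumptions for discussion, not Testable's actual schema.

```python
# Illustrative sketch of the knobs a configuration captures; key names
# are assumptions, not Testable's actual schema.
configuration = {
    "total_users": 200,            # peak concurrent virtual users
    "arrival_rate": 10,            # new users started per second
    "ramp": {"pattern": "linear", "duration_s": 120},
    "duration_s": 600,             # steady-state run length
    "regions": ["us-east-1", "eu-west-1"],
    "success_criteria": [
        {"metric": "p95_response_ms", "op": "<", "value": 800},
        {"metric": "error_rate", "op": "<", "value": 0.01},
    ],
}

def passes(criteria, results):
    """Evaluate '<' success criteria against observed results (quality gate)."""
    return all(results[c["metric"]] < c["value"] for c in criteria)
```

Attaching criteria like these to a configuration is what turns a run into an automated quality gate: the execution fails when any threshold is breached.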

Executions

  • Every test run is an execution tied to a specific scenario + configuration pair plus the data/artifacts at runtime.
  • Launch runs manually, through CI triggers, via schedules, or by API (/api/simple.html or /api/execution.html).
  • Executions stream logs, metrics, screenshots, video, traces, and custom metrics in real time; once complete, they power downloadable reports and comparisons.
  • Test runners: Infrastructure nodes that execute your workloads. Configure them per test or keep long-lived pools. See Test Runners and self-hosted guides for AWS/Azure/GCP.
  • Triggers: Lightweight POST endpoints, CI hooks, and remote browser grid URLs for automating launches. Start with Continuous Integration & Triggers or the Simple Test API.
  • Analysis & Alerts: View dashboards in the UI, export via API, or push to tools like New Relic and Datadog. Set success criteria and connect alert channels through Integrations.
  • Enterprise & Portability: Run fully on-prem or hybrid by installing Testable Enterprise or just the runner agents. Review Enterprise deployment options for details.
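Triggering an execution from CI typically amounts to a single POST. The sketch below builds (but does not send) such a request; the host, path, payload fields, and auth header are illustrative assumptions, so check the Simple Test API and Executions API docs for the real contract.

```python
import json
import urllib.request

def build_trigger_request(test_case: str, scenario: str, config: str) -> urllib.request.Request:
    """Build (but do not send) the POST that would start an execution."""
    payload = json.dumps({
        "testCase": test_case,          # field names are assumptions
        "scenario": scenario,
        "configuration": config,
    }).encode()
    return urllib.request.Request(
        "https://api.testable.io/executions",   # assumed endpoint
        data=payload,
        headers={
            "Content-Type": "application/json",
            "X-Api-Key": "your-api-key",        # assumed auth header
        },
        method="POST",
    )

req = build_trigger_request("checkout-service", "browse-and-buy", "nightly-load")
```

A CI job would send this request at the end of a deploy, then poll or subscribe for the execution's result to gate the pipeline.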

Where to go next