Selenium Remote Testing

Introduction

You can use the Testable platform as a remote Selenium grid. Each Selenium session will run on a Testable test runner according to the options you provide in your capabilities. This includes support for all Cloud providers, in your account or ours, as well as self-hosted test runners.

Remote Webdriver URL

The Testable Cloud remote grid can be accessed at:

https://selenium.testable.io/wd/hub

To authenticate, pass a testable:options object as part of your capabilities and include the key property (a valid API key for your account, found after logging in under Org Management => API Keys).

For Testable Enterprise the URL is similar, but instead of selenium.testable.io use the address of the coordinator-service.

Every Selenium framework has a different way of passing the remote WebDriver URL. Check the specific framework's documentation for more details.
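
As an illustration, here is a minimal WebdriverIO sketch that points at the Testable grid. The API key and capability values are placeholders:

// minimal WebdriverIO sketch; 'xxxxxx' is a placeholder API key
const { remote } = require('webdriverio');

(async () => {
    const browser = await remote({
        protocol: 'https',
        hostname: 'selenium.testable.io',
        port: 443,
        path: '/wd/hub',
        capabilities: {
            browserName: 'chrome',
            'testable:options': {
                key: 'xxxxxx'
            }
        }
    });

    await browser.url('https://example.com');
    await browser.deleteSession();
})();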

Capabilities

The easiest way to learn about and configure your capabilities is by logging into your Testable account and going to the Remote Test Configurator.

Standard Capabilities

Testable supports the standard set of fields within your desiredCapabilities object. Some notes on a few of those fields:

  • browserName: Which browser to use. Supported list is always evolving and includes chrome, edge, firefox.
  • browserVersion: Either an absolute version number (e.g. 88), a relative version number (e.g. latest, or latest-1 for the second most recent version), or beta for the latest beta version. See the example after this list.
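
For instance, a capabilities object that requests the second most recent Chrome release might look like this (a sketch; adjust the values to your needs):

{
    "browserName": "chrome",
    "browserVersion": "latest-1"
}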

Testable Options (testable:options)

As part of your capabilities you can pass a testable:options object with a variety of Testable specific options.

Example:

{
    "browserName": "chrome",
    "testable:options": {
        "key": "xxxxxx",
        "region": "aws-us-east-1",
        "capturePerformance": true
    }
}

A list of all options (a combined example follows the list):

  • user: The user to record as having triggered this test. Can be anything you want; defaults to remote.

  • key: The API key to authenticate with (found after logging in under Org Management => API Keys). Can either be passed here or as a Basic Authentication header (user:key).

  • region: A valid Testable region name in which to run your test. This can be either a long running or per-test region; see our test runners doc for more details. If not specified, one will be chosen for you based on availability.

  • source: To use an on demand test runner provide the name here. This can either be a Testable cloud account ([AWS|Azure|GCP] - Testable Account) or any source that you configure for your own cloud accounts. Use the configurator to learn more about the cloud specific options that correspond to each cloud provider (e.g. vpc, subnet, etc).

  • deviceName: Either the name of one of the devices that Testable supports or a custom device. Use the configurator to find the current list and format for custom devices.

  • testCaseName: The test case in which to record the Selenium session as a test run. Defaults to Remote Selenium Test.

  • scenarioName: The scenario name to capture the options and capabilities corresponding to your session. Defaults to Remote Selenium.

  • name: The test configuration name for test history tracking purposes. Sessions with the same name will be part of a single test history that includes test pass rate, metrics, etc. Defaults to [BrowserName] - [Device].

  • capturePerformance: Whether or not to capture network request metrics for each HTTP request the browser makes. This is done via Chrome DevTools Protocol (CDP) where possible and enabled by default. For browsers that don’t support CDP it is done via a BrowserUp proxy and disabled by default.

  • captureWebSocketPerformance: Whether or not to capture network metrics for each websocket connection the browser makes. Enabled by default if the browser supports CDP. In other cases we use a MITM proxy and it’s disabled by default.

  • recordVideo: Whether or not to capture a video of the test run. Defaults to true.

  • logs: Which Selenium logs to capture: All (all, the default), None (none), or a comma separated list (e.g. server,driver).

  • screenshots: Configure when Testable should capture an automatic screenshot: After Failed Commands (afterFailure, default), After All Commands (always), or Never (never).

  • sessionTimeoutMs: Milliseconds of inactivity before the session times out. Defaults to 5 minutes. Valid range: 15,000ms (15 seconds) to 900,000ms (15 minutes).

  • commandTimeoutMs: Time in milliseconds to wait for a response to each Selenium command. Defaults to 1 minute. Valid range: 5,000ms (5 seconds) to 300,000ms (5 minutes).

  • openfinConfigUrl: For OpenFin application testing you must specify the URL of your application config json.

  • billingStrategy: A billing related setting for deciding when to run your test. Defaults to MinimizeCost; the other possible value is ASAP.

  • billingCategories: When the billing strategy is ASAP, this parameter can be specified as a comma separated list of plan types. When there are multiple ways to bill your test, this setting helps decide which plan to choose (possible values: TestRunner, VU, Monitor, BrowserSession, LiveSession).

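Putting several of these options together, a fuller capabilities payload might look like the following sketch. All values are illustrative placeholders:

{
    "browserName": "chrome",
    "browserVersion": "latest",
    "testable:options": {
        "key": "xxxxxx",
        "region": "aws-us-east-1",
        "testCaseName": "Checkout Flow",
        "recordVideo": true,
        "screenshots": "afterFailure",
        "sessionTimeoutMs": 300000,
        "commandTimeoutMs": 60000,
        "billingStrategy": "MinimizeCost"
    }
}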

Commands

We use the WebDriver execute(script, args) command to introduce some Testable specific functionality. The examples below use WebdriverIO syntax, but the same approach works with any Selenium bindings.

Testable Information (testable:info)

Get information about the Testable session including the execution id that you can use to look up the test results via the web application or API.

// result structure: { executionId }. Use this to view test results live at
// https://a.testable.io/results/[executionId], or look them up via the API.
let result = browser.execute('testable:info');

Custom Metrics (testable:metric)

execute('testable:metric', [timing|counter|histogram|metered], { name[, key], val, units });

Report custom metrics that are visualized as part of your test report on Testable. Read more about the different metric types here.

timing

// capture a timing called "Order Execution Time" that is visible in the test results
browser.execute('testable:metric', 'timing', { name: 'Order Execution Time', val: 342, units: 'ms' });

counter

// capture a counter "Orders Placed" that is visible in the test results
browser.execute('testable:metric', 'counter', { name: 'Orders Placed', val: 5, units: 'orders' });

histogram

// capture a histogram "Orders By Type" that is visible in the test results
browser.execute('testable:metric', 'histogram', { name: 'Orders By Type', key: 'Delivery', val: 1 });

metered

// capture a metered metric "Server Memory Usage" that is visible in the test results
browser.execute('testable:metric', 'metered', { name: 'Server Memory Usage', val: 34524232, units: 'bytes' });

Assertions (testable:assertion[:start|:finish])

// after finish
execute('testable:assertion', { suite, name, duration, state[, errorType, error, errorTrace] });

// streaming
execute('testable:assertion:start', { suite, name });
// ...
execute('testable:assertion:finish', { state[, duration, errorType, error, errorTrace] });

Send assertions to be included as part of the test results.

After Finish (testable:assertion)

Use testable:assertion to capture an assertion after it has completed. It will appear in real-time in the test results Assertions widget.

execute('testable:assertion', { 
  suite: 'My Test Suite', 
  name: 'Should place order', 
  duration: 1423, 
  state: 'passed' 
});

Or with an error:

execute('testable:assertion', { 
  suite: 'My Test Suite', 
  name: 'Should place order', 
  duration: 1423, 
  state: 'failed', 
  errorType: 'BACKEND_ERROR', 
  error: 'Failed to connect to order placement service', 
  errorTrace: '...stacktrace...' 
});

Streaming (testable:assertion:[start|finish])

Use streaming assertions to indicate when a test step starts and finishes. As soon as the start command is received the assertion will appear as in progress in the test results. Once the finish message is received or the test ends, the assertion will be marked as finished.

Any assertion that is started but not finished will be marked as finished at the end of the test.

execute('testable:assertion:start', { suite: 'My Suite', name: 'Place an order' });

// ... 

// passed
execute('testable:assertion:finish', { suite: 'My Suite', name: 'Place an order', state: 'passed' });

// failed
execute('testable:assertion:finish', { 
  suite: 'My Suite', 
  name: 'Place an order',
  state: 'failed', 
  errorType: 'BACKEND_ERROR', 
  error: 'Failed to connect to order placement service', 
  errorTrace: '...stacktrace...' 
});

Pass/Fail Test (testable:[pass|fail])

Mark the entire test run as having passed or failed. Testable will show you the P/F status in the test report and also track the P/F history over time to provide pass rate trends and allow you to track it against your SLAs.

execute('testable:[pass|fail]', 'Optional message indicating why the test passed or failed');
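
For example (WebdriverIO syntax; the messages are illustrative):

// mark the test run as passed
browser.execute('testable:pass', 'All orders placed successfully');

// or mark it as failed
browser.execute('testable:fail', 'Order confirmation never appeared');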

Logging (testable:log)

Write a message into the Testable report log. This will be visible in the Logging tab of the test results.

execute('testable:log', [fatal|error|info|debug], [msg]);

For example:

execute('testable:log', 'info', 'Order was placed for 2 items totalling $12.82 with no issues');