Selenium Remote Testing

Introduction

You can use the Testable platform as a remote Selenium grid. Each Selenium session will run on a Testable test runner according to the options you provide in your capabilities. This includes support for all Cloud providers, in your account or ours, as well as self-hosted test runners.

Remote Webdriver URL

The Testable Cloud remote grid can be accessed at:

https://[user]:[api-key]@agents.testable.io/wd/hub

user can be anything you want and will be logged as part of the test results for tracking purposes. api-key must be a valid API key for your account, found after logging in under Org Management => API Keys.

For Testable Enterprise the url is similar but instead of agents.testable.io you should use the address of the coordinator-service.

Every Selenium framework has a different way of passing the remote webdriver URL. Check the specific framework’s documentation for more details.

Capabilities

The easiest way to learn about and configure your capabilities is by logging into your Testable account and going to the Remote Test Configurator.

Standard Capabilities

Testable supports the standard set of fields within your desiredCapabilities object. Notes on a few of those fields:

  • browserName: Which browser to use. The supported list is always evolving and includes chrome and firefox.
  • browserVersion: Either an absolute version number (e.g. 88) or a relative version number (e.g. latest, or latest-1 for the second most recent version, etc).

Testable Options (testable:options)

As part of your capabilities you can pass a testable:options object with a variety of Testable specific options.

Example:

{
    "browserName": "chrome",
    "testable:options": {
        "region": "aws-us-east-1",
        "capturePerformance": true
    }
}

A list of all options:

  • region: A valid Testable region name in which to run your test. This can be either a long-running or a per-test region; see our test runners doc for more details. If not specified, one will be chosen for you based on availability.

  • source: To use an on demand test runner provide the name here. This can either be a Testable cloud account ([AWS|Azure|GCP] - Testable Account) or any source that you configure for your own cloud accounts. Use the configurator to learn more about the cloud specific options that correspond to each cloud provider (e.g. vpc, subnet, etc).

  • deviceName: Either the name of one of the devices that Testable supports or a custom device. Use the configurator to find the current list and format for custom devices.

  • testCaseName: The test case in which to record the Selenium session as a test run. Defaults to Remote Selenium Test.

  • scenarioName: The scenario name to capture the options and capabilities corresponding to your session. Defaults to Remote Selenium.

  • name: The test configuration name for test history tracking purposes. Sessions with the same name will be part of a single test history that includes test pass rate, metrics, etc. Defaults to [BrowserName] - [Device].

  • capturePerformance: Whether or not to insert a BrowserUp proxy to capture network request metrics from the browser for every HTTP request. Defaults to false.

  • captureWebSocketPerformance: Whether or not to use a MITM proxy to capture performance metrics for every websocket connection. Defaults to false.

  • recordVideo: Whether or not to capture a video of the test run. Defaults to true.

  • logs: Which Selenium logs to capture: all of them (all, the default), none (none), or a comma separated list (e.g. server,driver).

  • screenshots: Configure when Testable should capture an automatic screenshot: After Failed Commands (afterFailure, default), After All Commands (always), or Never (never).
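Putting several of these options together, a capabilities object might look like the sketch below. The region, names, and toggles are example values chosen for illustration, not defaults or requirements.

```javascript
// Illustrative capabilities combining standard fields with testable:options.
// All values below are examples; adjust them for your own account and test.
const capabilities = {
    browserName: 'chrome',
    browserVersion: 'latest',
    'testable:options': {
        region: 'aws-us-east-1',        // pick a region, or omit to auto-select
        testCaseName: 'Checkout Flow',  // placeholder test case name
        capturePerformance: true,       // per-request network metrics
        recordVideo: true,
        logs: 'server,driver',          // comma separated subset of logs
        screenshots: 'afterFailure'
    }
};
```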

Commands

We use the WebDriver execute(script, args) command to introduce some Testable specific functionality. The examples below use WebdriverIO syntax, but the same approach works with any Selenium bindings.

Custom Metrics (testable:metric)

execute('testable:metric', [timing|counter|histogram|metered], { name[, key], val, units });

Report custom metrics that are visualized as part of your test report on Testable. Read more about the different metric types here.

timing

// capture a timing called "Order Execution Time" that is visible in the test results
browser.execute('testable:metric', 'timing', { name: 'Order Execution Time', val: 342, units: 'ms' });

counter

// capture a counter "Orders Placed" that is visible in the test results
browser.execute('testable:metric', 'counter', { name: 'Orders Placed', val: 5, units: 'orders' });

histogram

// capture a histogram "Orders By Type" that is visible in the test results
browser.execute('testable:metric', 'histogram', { name: 'Orders By Type', key: 'Delivery', val: 1 });

metered

// capture a metered metric "Server Memory Usage" that is visible in the test results
browser.execute('testable:metric', 'metered', { name: 'Server Memory Usage', val: 34524232, units: 'bytes' });

Assertions (testable:assertion[:start|:finish])

// after finish
execute('testable:assertion', { suite, name, duration, state[, errorType, error, errorTrace] });

// streaming
execute('testable:assertion:start', { suite, name });
// ...
execute('testable:assertion:finish', { state[, duration, errorType, error, errorTrace] });

Send assertions to be included as part of the test results.

After Finish (testable:assertion)

Use testable:assertion to capture an assertion after it has completed. It will appear in real-time in the test results Assertions widget.

execute('testable:assertion', { suite: 'My Test Suite', name: 'Should place order', duration: 1423, state: 'passed' });

Or with an error:

execute('testable:assertion', { suite: 'My Test Suite', name: 'Should place order', duration: 1423, state: 'failed', errorType: 'BACKEND_ERROR', error: 'Failed to connect to order placement service', errorTrace: '...stacktrace...' });

Streaming (testable:assertion:[start|finish])

Use streaming assertions to indicate when a test step starts and finishes. As soon as the start command is received the assertion will appear as in progress in the test results. Once the finish message is received or the test ends, the assertion will be marked as finished.

Only one assertion can be in progress at a time per test. It’s assumed the finish message relates to the most recently started assertion. If a start message is received while a previous assertion is active, the previous assertion will be marked as finished.

execute('testable:assertion:start', { suite: 'My Suite', name: 'Place an order' });

// ... 

// passed
execute('testable:assertion:finish', { state: 'passed' });

// failed
execute('testable:assertion:finish', { state: 'failed', errorType: 'BACKEND_ERROR', error: 'Failed to connect to order placement service', errorTrace: '...stacktrace...' });

Pass/Fail Test (testable:[pass|fail])

Mark the entire test run as having passed or failed. Testable will show you the P/F status in the test report and also track the P/F history over time to provide pass rate trends and allow you to track it against your SLAs.

execute('testable:[pass|fail]');
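A minimal sketch of wiring this into a test, assuming browser is an active WebdriverIO session. The wrapper name runAndReport and the choice to return a status instead of rethrowing are illustrative, not part of the Testable API.

```javascript
// Sketch: run a test body and mark the Testable run passed or failed.
// "browser" is assumed to be an active WebdriverIO session; runAndReport
// is a hypothetical helper, not a Testable API.
async function runAndReport(browser, testBody) {
    try {
        await testBody();
        await browser.execute('testable:pass');
        return 'passed';
    } catch (err) {
        // Log the failure reason before marking the run failed.
        await browser.execute('testable:log', 'error', err.message);
        await browser.execute('testable:fail');
        return 'failed'; // a real harness would typically rethrow here
    }
}
```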

Logging (testable:log)

Write a message into the Testable report log. This will be visible in the Logging tab of the test results.

execute('testable:log', [fatal|error|info|debug], [msg]);

For example:

execute('testable:log', 'info', 'Order was placed for 2 items totalling $12.82 with no issues');