Getting Started - Playwright Test + Testable Fixture

Introduction

You can upload or Git-link your Playwright code to Testable and execute the tests on our global test runner infrastructure as one or more virtual users. Each virtual user runs Playwright on a test runner according to the options you configure. Test runners are available on all major cloud providers, in your account or ours, as well as self-hosted. A test report is then produced that aggregates results from all Playwright sessions, including all tests, commands, screenshots, video, browser performance metrics, and network metrics. Test reports can be shared and customized to your requirements.

There are 2 flavors of Playwright that can be executed on our test runners:

  1. Playwright Library: Playwright Library provides a set of APIs for launching and interacting with browsers.
  2. Playwright Test + Testable Fixture [THIS GUIDE]: Playwright Test is Playwright’s end-to-end testing framework. Optionally use the testable-playwright-test fixture to capture details like tests, commands, screenshots, video, and metrics. Test results are available whether you use our fixture or not.

In addition, you can run your Playwright tests locally and point them at Testable as a remote Playwright grid. See this guide for more details on the remote options.
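
As a rough sketch of that remote option (the endpoint URL and API key below are placeholders; see the remote guide for the actual connection details), a local script can attach to a remote grid through Playwright’s standard connect API:

import { chromium } from 'playwright';

(async () => {
  // Placeholder endpoint and key -- consult the remote options guide
  // for the real Testable grid URL and authentication parameters.
  const browser = await chromium.connect(
    'wss://your-testable-grid.example/playwright?key=YOUR_API_KEY'
  );
  const page = await browser.newPage();
  await page.goto('https://playwright.dev/');
  console.log(await page.title());
  await browser.close();
})();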

Example Use Case

For our example we will run a simple Playwright Test suite on a variety of browsers and devices.

Step 1: Create a Test Case

Start by signing up and creating a new test case using the Create Test button on the dashboard.

Enter the test case name (e.g. Playwright Demo) and press Next.

Step 2: Setup the Scenario

Select Playwright as the scenario type.

Playwright Scenario

Let’s use the following settings:

  1. Run Style: Playwright Test.
  2. Playwright Conf File: playwright.config.ts. This is the path of the config file within our uploaded project.
  3. Source: Upload All Files. We will upload a zip file of our Playwright Test project. We can also connect our scenario to version control as an alternative.

Playwright Test Settings
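
For reference, the uploaded zip for this example might be laid out as follows (the exact structure is up to you, as long as the Playwright Conf File path above matches):

playwright-demo/
├── package.json            // declares @playwright/test and testable-playwright-test
├── playwright.config.ts    // the path entered as the Playwright Conf File
└── tests/
    └── example.spec.ts     // the spec shown below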

Our example.spec.ts looks exactly like it would when run outside of Testable with one key difference:

import { expect } from '@playwright/test';
import { createFixture as createTestableFixture } from 'testable-playwright-test';

const test = createTestableFixture();

test('has title', async ({ page }) => {
  await page.goto('https://google.com/');

  // Expect a title "to contain" a substring.
  await expect(page).toHaveTitle(/Google/);

  await page.screenshot({ path: 'google.png' });
});

test('get started link', async ({ page }) => {
  await page.goto('https://playwright.dev/');

  // Click the get started link.
  await page.getByRole('link', { name: 'Get started' }).click();

  // Expects the URL to contain intro.
  await expect(page).toHaveURL(/.*intro/);

  await page.screenshot({ path: 'playwright.png' });
});

Using the Testable Fixture as your test variable works exactly like import { test } from '@playwright/test' when run locally. When you upload and run on Testable, however, it allows us to capture all the useful data: tests, commands, screenshots, video, and performance metrics.
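
In other words, the only change from a vanilla Playwright Test project is swapping where test comes from:

// Standard Playwright Test:
// import { test } from '@playwright/test';

// Testable fixture: identical behavior locally, richer reporting on Testable:
import { createFixture as createTestableFixture } from 'testable-playwright-test';
const test = createTestableFixture();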

Our playwright.config.ts looks exactly the same as if you ran your test locally:

import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  testDir: './tests',
  projects: [
    {
      name: 'chromium',
      use: { ...devices['Desktop Chrome'] },
    },
    {
      name: 'firefox',
      use: { ...devices['Desktop Firefox'] },
    },
    {
      name: 'webkit',
      use: { ...devices['Desktop Safari'] },
    },
    {
      name: 'Mobile Chrome',
      use: { ...devices['Pixel 5'] },
    },
    {
      name: 'Mobile Safari',
      use: { ...devices['iPhone 12'] },
    },
    {
      name: 'Microsoft Edge',
      use: { ...devices['Desktop Edge'], channel: 'msedge' },
    },
    {
      name: 'Google Chrome',
      use: { ...devices['Desktop Chrome'], channel: 'chrome' },
    },
  ]
});
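
A quick note on the last two projects: setting channel tells Playwright to launch the branded Google Chrome and Microsoft Edge installations rather than the Chromium build it bundles, so the same desktop device profiles exercise the real consumer browsers.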

Testable will capture a session for each device. A session includes video, screenshots, and the commands executed. In the Testable test results you’ll be able to see all the sessions and switch between them.
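
To put numbers on that: the config above defines seven projects, so each execution of the scenario by a virtual user produces seven sessions, one per project, and each session covers both tests in example.spec.ts.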

To try it out before configuring a load test, click the Smoke Test button in the upper right and watch Testable execute the scenario once as a single user.

Click on the Configuration tab or press the Next button at the bottom to move to the next step.

Step 3: Setup and Run the Configuration

Now that we have the scenario for our test case, we need to define a few parameters before we can execute our test:

  1. Test Type: We select Load so that we can simulate multiple users as part of our test.
  2. Total Virtual Users: Number of users that will execute in parallel. Each user will execute the scenario.
  3. Test Length: Select Iterations to have each virtual user execute the scenario a set number of times regardless of how long it takes. Choose Duration if you want each virtual user to keep executing the scenario for a set amount of time (in minutes).
  4. Location(s): Choose the location in which to run your test and the test runner source, which determines which test runners in that location run the load test (e.g. the public shared grid).
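
To make the sizing options concrete: 10 virtual users each running 5 iterations would execute the scenario 50 times in total, whereas a 10-minute duration would have all 10 users repeat the scenario until the time elapses, however many iterations that takes.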

For the sake of this example, let’s use the following parameters:

Test Configuration

And that’s it! Press Start Test and watch the results start to flow in. See the new configuration guide for full details of all configuration options.

Step 4: View Results

Once the test starts executing, Testable will distribute the work to the selected test runners (e.g. Public Shared Grid in AWS N. Virginia).

Test Results

The results will include videos, screenshots, test outcomes, commands, traces, performance metrics, logging, breakdown by URL, analysis, comparison against previous test runs, and more.

For each browser session that is part of your test, you will see a separate video, with all screenshots and commands correlated to that session.

Check out the Playwright guide for more details on running your Playwright tests on the Testable platform.

We also offer integrations (Org Management -> Integration) with third-party tools like New Relic. If you enable an integration, you can do more in-depth analysis of your results there as well.

That’s it! Go ahead and try these same steps with your own scripts and feel free to contact us with any questions.