Custom Metrics and Traces


When tests run, all network calls (e.g. HTTP, WebSocket, TCP) are instrumented to capture a standard set of metrics: latency, bandwidth, packets sent/received, connection success, and more. When writing a script, it can also be useful to capture your own metrics that are specific to your use case.

Example - Subscribe to Tick

Let's start with a simple example where we want to capture the latency between subscribing to a symbol (using the Testable WebSocket Sample Service) and the first price update arriving.

var results = require('testable-utils').results;
var WebSocket = require('ws');
var moment = require('moment');

var ws = new WebSocket("ws://");

var sentSubscribe = 0;
ws.on('open', function open() {
  // record when the subscribe message was sent
  sentSubscribe = moment().valueOf();
  ws.send('{ "subscribe": "IBM" }');
});

ws.on('message', function(data, flags) {
  // capture the latency from subscribe to first tick
  results().timing('subscribe2tick', moment().valueOf() - sentSubscribe);
});

The key line in this script is:

results().timing('subscribe2tick', moment().valueOf() - sentSubscribe);

This captures a timing metric, subscribe2tick, in the default namespace (User) with the value being the latency between "now" and the time the subscribe message was sent.

Metric Namespace

Metrics are grouped into namespaces. All the system generated metrics are in the Testable namespace. User generated metrics are in the User namespace by default, but a different namespace can be used.

Valid Metric Names

In order for a metric name to be valid it must:

  1. Not contain the following special characters: '-', '__'. All other valid UTF-8 characters are allowed, including spaces.
  2. Be 1-255 characters long

Metric names must be unique within the namespace. Users cannot write metrics to the Testable namespace.


Traces

During test execution we trace all connection details (including metrics, data sent, and data received) on a sample of iterations. We try to capture at least one trace for each resource + response status combination during each minute of your test. These traces help you track down errors and better understand what went wrong when analyzing the results.

Your script can also capture custom traces for any information you might find useful when analyzing results. The same sampling frequency applies (about one trace per resource + status combination per minute) unless result.forceReportTrace() is called.

var results = require('testable-utils').results;

results('IBM').addTrace('FirstTick', {}, 'some data here');
results().addTrace('Error', { header1: 'val1' }, 'error trace');

Metric Types

There are 3 types of metrics that can be captured:

  1. Timing: A timing metric will have the following aggregation functions computed: min, max, mean, count, variance, standard deviation, median (p50), p95, and p99. Additional percentiles can be configured as well.
  2. Counter: Keeps a counter with a running total. You can add to or subtract from the counter in your script.
  3. Histogram: Keep a count for multiple buckets.
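To make the timing aggregations concrete, here is a standalone sketch in plain JavaScript (an illustration only, not Testable's implementation) that computes a few of the aggregates over a set of latency samples using a simple nearest-rank percentile:

```javascript
// Standalone illustration of timing aggregation (not Testable code).
// Computes min, max, mean, and nearest-rank percentiles over samples.
function aggregateTimings(samples) {
  var sorted = samples.slice().sort(function (a, b) { return a - b; });
  function percentile(p) {
    // Nearest-rank percentile on the sorted samples.
    var idx = Math.ceil((p / 100) * sorted.length) - 1;
    return sorted[Math.max(0, idx)];
  }
  var sum = sorted.reduce(function (acc, v) { return acc + v; }, 0);
  return {
    count: sorted.length,
    min: sorted[0],
    max: sorted[sorted.length - 1],
    mean: sum / sorted.length,
    p50: percentile(50),
    p95: percentile(95)
  };
}

aggregateTimings([120, 80, 100, 400, 90]);
// -> { count: 5, min: 80, max: 400, mean: 158, p50: 100, p95: 400 }
```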

Metric Aggregation

Metrics are aggregated on the following dimensions:

  1. Execution: Test execution wide metric aggregation.
  2. Region: Tests can execute in multiple regions. Metrics aggregation is done per region as well as across all regions.
  3. Resource: Every time you make a network request you are accessing a Resource. We associate metrics with these resource labels (for example GET or ws://). For custom metrics you can optionally specify a resource, which can be any valid string of fewer than 256 characters.
  4. Interval: Test execution is broken into 10 second intervals. Metric aggregation is applied for each interval.

The test results page provides the UI to graph the relevant aggregations (any combination of the above 4 dimensions is supported).

Timing Metrics

To capture a timing in your script:

results().timing({ namespace: 'User', name: 'appInitMs', val: 100, units: 'ms' });

For timings, the following aggregation functions are computed by default: min, max, mean, count, p50, p95, and p99. The set of percentiles can be changed when creating a load configuration.

The units parameter defaults to ms if left blank for timings.

Counter Metrics

A counter keeps a running total for that metric.

results().counter({ namespace: 'User', name: 'myCustomerCounter', val: 1, units: 'requests' });

The units parameter is required for counters and has no default.

Histogram Metrics

Histograms can be useful when you want to keep a count for an unknown number of buckets and keep them grouped together. For example, a histogram is useful for tracking HTTP response codes:

results().histogram({ namespace: 'User', name: 'httpResponseCodes', key: '200', val: 1 });

The bucket can be any valid string that is less than 256 characters. If the value to increment is not provided, it defaults to 1.

Custom Result Grouping

As noted in the previous section, every time you make a network request you are accessing a "Resource".


Testable uses a default format for the resource label. This default behavior is intended to avoid a potential explosion of resource labels on which to aggregate metrics. A test can only contain 600 resources before it is automatically stopped.

Some example URL to resource label default mappings:

  • GET => GET
  • POST => POST
  • GET => GET
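The default mapping can be pictured as the HTTP method plus the URL with any query string dropped. A hypothetical sketch of that behavior (the real implementation is results.toResourceName, covered in the next section):

```javascript
// Hypothetical sketch of the default resource-label mapping:
// the HTTP method plus the URL with any query string removed.
function defaultResourceLabel(url, method) {
  return method + ' ' + url.split('?')[0];
}

defaultResourceLabel('http://example.com/items?page=2', 'GET');
// -> 'GET http://example.com/items'
```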

Changing Resource Labels

The default behavior for resource labels is not always desired. Testable provides an API both to override the default behavior and to access it when creating new result metrics. In a Node.js script:

var results = require('testable-utils').results;

// returns 'GET'
results.toResourceName('', 'GET');

// to override the default behavior and instead use full URLs as resource labels
results.toResourceName = function(url, method) {
    return method + ' ' + url;
};

Attach Custom Metrics to Network Calls

It can sometimes be useful to attach or update a metric associated with a network call (e.g. an HTTP GET) to ensure it gets aggregated inline with the system captured metrics. The basic results([resource], [url]) API will use whatever resource label is provided, or only aggregate your metric into the "Overall Results" if none is specified.

Within an event listener for any network API (e.g. request, http, ws, socketio, etc) the current result can be accessed via results.current.

The following example shows how to define your own outcome histogram. The Testable built-in outcome histogram considers an HTTP response a success if it has a status < 400. We can define our own version that also considers a response body of "bad" a failure.

var results = require('testable-utils').results;
var request = require('request');

request.get('', function (error, response, body) {
  if (response && response.statusCode < 400 && body !== 'bad') {
    results.current.histogram({ namespace: 'User', name: 'outcome', key: 'success', val: 1 });
  } else {
    results.current.histogram({ namespace: 'User', name: 'outcome', key: 'failure', val: 1 });
    results.current.setTraceStatus('Custom Error');
  }
});

Note that results.current is null when outside of a network call related event handler.


Results API

The results module provides all functionality related to metric capture.

var results = require('testable-utils').results;
results([resource], [url])

Returns a result object that can capture metrics for that resource/url. resource optionally groups a set of metrics together. Results can also be associated with a url, which does not affect aggregation but will be available when downloading all results. For example, the HTTP module groups the metrics it captures using a resource name of [METHOD] [URL] minus any query parameters after the ? (e.g. GET), since it is useful to see all results for each URL aggregated together.

var result = results('my optional custom grouping');
result.timing(name, value[, units])

Capture a timing metric. Timings have various aggregators calculated like average, median, percentiles (95th, 99th), and standard deviation.

Namespace defaults to User if not specified. Units default to ms if not specified.

results().timing({ namespace: 'User', name: 'customTimerMs', val: 100, units: 'ms' });
results().timing('latencyMs', 100);
result.counter(name, value[, units])

Capture a counter metric. Counters are summed across your test execution as well as per 10 second interval.

Namespace defaults to User if not specified. Units default to empty if not specified.

results().counter({ namespace: 'User', name: 'beeCount', val: 2, units: 'bees' });
results().counter('myCounter', -3, 'units');
result.histogram(name, key[, value])

Capture a bucket key/value into the histogram. The value defaults to 1 if not specified. During test execution you can view the count in each bucket, the total across all buckets, and the percent each bucket is of the total.

results().histogram({ namespace: 'User', name: 'outcome', key: 'failure', val: 1 });
results().histogram('httpMethod', 'GET', 1);
result.setTraceStatus(status[, statusMsg][, isError])

Set the status of the trace associated with this result. A trace must have a status to be valid. Optionally include a status message and an indication of whether or not this is an error status.

result.addTrace(type[, headers][, data])

Capture a trace packet. type can be any string less than 255 characters. headers must be an object if specified, and data is either a Buffer or a String.

Trace packets are visible on the test results page.

// ...
results('custom').addTrace('DataSent', { a: 'b' }, 'some data');
// ...
results('custom').addTrace('DataReceived', {}, 'some response');
// ...

result.forceReportTrace()

Overrides the sampling nature of traces and forces this trace to be captured and saved. Note that each test execution has a limited number of traces, so if you force capture a trace on every iteration this limit can be reached quickly.

Force reporting is result specific so if you add a trace to multiple results, each one needs to have this method called separately.


Merge trace packets of the same type within a single trace. Headers are merged (a header key added later overrides the same key from earlier) and data is concatenated in the order the trace packets were added. This feature is used when capturing HTTP Data Sent and Data Received traces; because HTTP responses are often chunked and compressed, we need to recombine the packets before they can be processed. The first and last timestamps at which a packet was added are maintained during the merge.

This feature is result specific so if you add a trace to multiple results, each one needs to have this method called separately.

results('GET').addTrace('DataReceived', { a: 'b' }, 'weee1');
results('GET').addTrace('DataReceived', { c: 'd' }, 'weee2');
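The merge semantics described above can be sketched in plain JavaScript (an illustration only, not the Testable implementation): headers merge with later keys winning, data concatenates in arrival order, and the first/last timestamps are kept.

```javascript
// Standalone sketch of merging trace packets of the same type.
// Later header values override earlier ones; data is concatenated
// in arrival order; first/last timestamps are preserved.
function mergeTracePackets(packets) {
  var merged = { headers: {}, data: '', first: null, last: null };
  packets.forEach(function (p) {
    Object.keys(p.headers).forEach(function (k) {
      merged.headers[k] = p.headers[k];
    });
    merged.data += p.data;
    if (merged.first === null) merged.first = p.timestamp;
    merged.last = p.timestamp;
  });
  return merged;
}

mergeTracePackets([
  { headers: { a: 'b' }, data: 'weee1', timestamp: 1000 },
  { headers: { c: 'd' }, data: 'weee2', timestamp: 2000 }
]);
// -> { headers: { a: 'b', c: 'd' }, data: 'weee1weee2', first: 1000, last: 2000 }
```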