
Sanctuary Benchmark

Sanctuary Benchmark is a small wrapper around Benchmark.js that enables a consistent style of benchmarking across all Sanctuary projects. It allows for the easy definition of comparative benchmarks and outputs the results in a standardized format for sharing.


Usage

Install the package:

$ npm install sanctuary-benchmark

Then create a file in the bench directory, for example old-vs-new.js:

const sb = require('sanctuary-benchmark');

// Imagine these are libs. Normally they would be require()'d.
const oldVersion = (f, xs) => xs.map(f);
const newVersion = (f, xs) => {
  const len = xs.length;
  const out = new Array(len);
  for (let idx = 0; idx < len; idx += 1) out[idx] = f(xs[idx]);
  return out;
};

const small = Array.from({length: 1}, (_, i) => i);
const large = Array.from({length: 1000}, (_, i) => i);

module.exports = sb(oldVersion, newVersion, {}, {
  'map/small': [{}, map => map(x => x + 1, small)],
  'map/large': [{}, map => map(x => x + 1, large)],
});

Run the sanctuary-benchmark command. Pass --help for options.

$ ./node_modules/.bin/sanctuary-benchmark

Alternatively, the value assigned to module.exports can be used programmatically. Consult the API documentation below.
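
For example, a minimal sketch of programmatic use, assuming the bench file created above (the options object is described under API Documentation; an empty object uses all defaults):

// Hypothetical runner script. The exported function runs the suite
// and prints the results when called with an options object.
require('./bench/old-vs-new')({});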

Reading the output

Running the benchmarks will print a table to the terminal with the following columns:

  • suite: The name of the test suite.
  • left: The hertz (operations per second) and standard deviation measured over the rounds run for the library passed as the first argument.
  • right: The hertz and standard deviation measured over the rounds run for the library passed as the second argument.
  • diff: A percentage representing the difference between left and right, where 0 means "makes no difference" and 100 means "makes all the difference". You can use this number to tweak the significantDifference option, which determines whether a line will be highlighted.
  • change: The increase or decrease from left to right. You can use this to show your friends how well you've optimized a feature.
  • α: Whether the difference is significant. Possible values are "✓" for a significant increase and "✗" for a significant decrease. Nothing is rendered if the difference was insignificant.
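
For illustration, a line of output might look like the following. The numbers here are placeholders, not real measurements:

suite      left            right           diff   change    α
map/large  1,234 Hz ±1.2%  2,345 Hz ±0.9%  31%    +90.03%   ✓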

API Documentation

benchmark :: (a, b, Options, StrMap (Spec a b)) -> Options -> Undefined

Spec a b :: [Object, (a | b) -> Any]
          | [Object, a -> Any, b -> Any]

Options :: { callback :: Function?
             colors :: Boolean?
             config :: Object?
             leftHeader :: String?
             match :: String?
             rightHeader :: String?
             significantDifference :: Number? }

This module exports a single function. It takes four arguments and returns another function. The four arguments are:

  1. The left-hand benchmarking input: could be an older version of the library you're testing, or a competing library.
  2. The right-hand benchmarking input: usually the current version of the library you're testing required directly from the working directory.
  3. An object containing defaults for the options that are passed to the returned function. Refer to the documentation of the returned function below to see which options are available.
  4. A mapping of benchmarks, where the keys represent the names and the values describe the work being benchmarked. The names can later be used to filter benchmarks with a glob, so it's recommended to use the forward slash character as a separator, as shown in the usage example above. Each value specifies a test as a tuple of two or three items (see the sketch after this list). The first item must always be an Object, and is used for per-test configuration overrides. The second and third items are the functions to run. When given a single function, it's used to test both libraries. When given two functions, they are used for the left and right library respectively.
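
For instance, a sketch of the three-item (two-function) Spec form, assuming the sb, oldVersion, newVersion, and large bindings from the usage example above. The maxTime setting is a standard Benchmark.js option used here as a per-test override:

module.exports = sb(oldVersion, newVersion, {}, {
  'map/large': [
    {maxTime: 10},                        // per-test Benchmark.js override
    oldMap => oldMap(x => x + 1, large),  // runs against the left library
    newMap => newMap(x => x + 1, large),  // runs against the right library
  ],
});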

Once these inputs are provided, a function is returned. When called, it runs the benchmarks and prints the results to stdout. It takes as input an object of options to customize this process:

  • callback (() => {}): Called when the benchmarks have completed.
  • colors (true): Set to false to disable terminal colors.
  • config ({}): Default Benchmark.js options to use for every benchmark. These can be overridden per benchmark.
  • leftHeader ('left'): Header describing the library on the left.
  • match ('**'): This glob allows one to filter benchmarks by name.
  • rightHeader ('right'): Header describing the library on the right.
  • significantDifference (0.1): The resulting difference (between 0 and 1) required for the output table to draw attention to these results.
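
Putting this together, a sketch of invoking the returned function with several options overridden (the version strings are placeholders for illustration):

// Hypothetical invocation of the function exported by the bench file.
require('./bench/old-vs-new')({
  leftHeader: 'v1.0.0',         // label for the left column
  rightHeader: 'v2.0.0',        // label for the right column
  match: 'map/**',              // only run benchmarks under map/
  significantDifference: 0.05,  // highlight smaller differences
  config: {maxTime: 5},         // Benchmark.js defaults for every test
});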
