lib: add experimental benchmark module #50768


Closed · wants to merge 3 commits
293 changes: 293 additions & 0 deletions doc/api/benchmark.md
@@ -0,0 +1,293 @@
# Benchmark

<!--introduced_in=REPLACEME-->

> Stability: 1.1 - Active Development

<!-- source_link=lib/benchmark.js -->

The `node:benchmark` module provides functionality to measure the
performance of JavaScript code. To access it:

```mjs
import benchmark from 'node:benchmark';
```

```cjs
const benchmark = require('node:benchmark');
```

This module is only available under the `node:` scheme. The following will not
work:

```mjs
import benchmark from 'benchmark';
```

```cjs
const benchmark = require('benchmark');
```

The following example illustrates how benchmarks are written using the
`benchmark` module.

```mjs
import { Suite } from 'node:benchmark';

const suite = new Suite();

suite.add('Using delete to remove property from object', function() {
  const data = { x: 1, y: 2, z: 3 };
  delete data.y;

  data.x;
  data.y;
  data.z;
});

suite.run();
```

```cjs
const { Suite } = require('node:benchmark');

const suite = new Suite();

suite.add('Using delete to remove property from object', function() {
  const data = { x: 1, y: 2, z: 3 };
  delete data.y;

  data.x;
  data.y;
  data.z;
});

suite.run();
```

```console
$ node my-benchmark.js
(node:14165) ExperimentalWarning: The benchmark module is an experimental feature and might change at any time
(Use `node --trace-warnings ...` to show where the warning was created)
Using delete to remove property from object x 5,853,505 ops/sec ± 0.01% (10 runs sampled) min..max=(169ns ... 171ns) p75=170ns p99=171ns
```

## Class: `Suite`

> Stability: 1.1 - Active Development

<!-- YAML
added: REPLACEME
-->

A `Suite` is responsible for managing and executing
benchmark functions. It provides two methods: `add()` and `run()`.

### `new Suite([options])`

<!-- YAML
added: REPLACEME
-->

* `options` {Object} Configuration options for the suite. The following
  properties are supported:
  * `reporter` {Function} Callback function invoked with the results after
    each benchmark concludes. The callback receives two arguments:
    `suite` - the {Suite} instance, and
    `result` - an object containing three properties:
    `opsSec` {string}, `iterations` {number}, and `histogram` {Histogram}.

If no `reporter` is provided, the results will be printed to the console.

```mjs
import { Suite } from 'node:benchmark';
const suite = new Suite();
```

```cjs
const { Suite } = require('node:benchmark');
const suite = new Suite();
```

### `suite.add(name[, options], fn)`

<!-- YAML
added: REPLACEME
-->

* `name` {string} The name of the benchmark, which is displayed when reporting
  benchmark results.
* `options` {Object} Configuration options for the benchmark. The following
  properties are supported:
  * `minTime` {number} The minimum time a benchmark can run.
    **Default:** `0.05` seconds.
  * `maxTime` {number} The maximum time a benchmark can run.
    **Default:** `0.5` seconds.
* `fn` {Function|AsyncFunction}
* Returns: {Suite}

This method registers a benchmark for the given function (`fn`).
The `fn` parameter can be either an asynchronous (`async function () {}`) or
a synchronous (`function () {}`) function.

```console
$ node my-benchmark.js
(node:14165) ExperimentalWarning: The benchmark module is an experimental feature and might change at any time
(Use `node --trace-warnings ...` to show where the warning was created)
Using delete to remove property from object x 5,853,505 ops/sec ± 0.01% (10 runs sampled) min..max=(169ns ... 171ns) p75=170ns p99=171ns
```
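As an illustrative sketch of the options above (the benchmark names and the
timing values here are arbitrary, not from this PR), per-benchmark `minTime`
and `maxTime` can be passed to `add()`, and because `add()` returns the suite,
calls can be chained. An `async` function is also accepted:

```mjs
import { Suite } from 'node:benchmark';
import { setTimeout as sleep } from 'node:timers/promises';

const suite = new Suite();

// Illustrative values only; tune minTime/maxTime for your workload.
suite
  .add('spread copy', { minTime: 0.1 }, () => {
    const copy = { ...{ a: 1, b: 2 } };
  })
  .add('async delay', { maxTime: 1 }, async () => {
    await sleep(1);
  });

suite.run();
```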

### `suite.run()`

<!-- YAML
added: REPLACEME
-->

* Returns: {Promise<Array<Object>>}
  * `opsSec` {number} The number of operations per second
  * `iterations` {number} The number of times `fn` was executed
  * `histogram` {Histogram} Histogram object used to record benchmark iterations

Runs all the benchmarks that have been added to the suite via the
[`suite.add()`][] function and resolves with the corresponding results once
they complete.

### Using a custom reporter

You can customize how results are reported by passing a function as the `reporter` option when creating your `Suite`:

```mjs
import { Suite } from 'node:benchmark';

function reporter(bench, result) {
  console.log(`Benchmark: ${bench.name} - ${result.opsSec} ops/sec`);
}

const suite = new Suite({ reporter });

suite.add('Using delete to remove property from object', () => {
  const data = { x: 1, y: 2, z: 3 };
  delete data.y;

  data.x;
  data.y;
  data.z;
});

suite.run();
```

```cjs
const { Suite } = require('node:benchmark');

function reporter(bench, result) {
  console.log(`Benchmark: ${bench.name} - ${result.opsSec} ops/sec`);
}

const suite = new Suite({ reporter });

suite.add('Using delete to remove property from object', () => {
  const data = { x: 1, y: 2, z: 3 };
  delete data.y;

  data.x;
  data.y;
  data.z;
});

suite.run();
```

```console
$ node my-benchmark.js
Benchmark: Using delete to remove property from object - 6032212 ops/sec
```

### Setup and Teardown

A benchmark function receives special handling when it accepts an argument,
for example:

```cjs
const { Suite } = require('node:benchmark');
const { readFileSync, writeFileSync, rmSync } = require('node:fs');

const suite = new Suite();

suite.add('readFileSync', (timer) => {
  const randomFile = Date.now();
  const filePath = `./${randomFile}.txt`;
  writeFileSync(filePath, Math.random().toString());

  timer.start();
  readFileSync(filePath, 'utf8');
  timer.end();

  rmSync(filePath);
}).run();
```

This way, you control exactly when the `timer` starts and stops, so setup and
teardown code is excluded from the measurement.

The timer also exposes a `count` property that indicates how many iterations
you should run your function for to satisfy the benchmark's `minTime`.
See the following example:

```mjs
import { Suite } from 'node:benchmark';
import { readFileSync, writeFileSync, rmSync } from 'node:fs';

const suite = new Suite();

suite.add('readFileSync', (timer) => {
  const randomFile = Date.now();
  const filePath = `./${randomFile}.txt`;
  writeFileSync(filePath, Math.random().toString());

  timer.start();
  for (let i = 0; i < timer.count; i++)
    readFileSync(filePath, 'utf8');
  // Pass the number of executed iterations to `.end()`.
  // By default, `.end()` is called with the value 1.
  timer.end(timer.count);

  rmSync(filePath);
});

suite.run();
```

```cjs
const { Suite } = require('node:benchmark');
const { readFileSync, writeFileSync, rmSync } = require('node:fs');

const suite = new Suite();

suite.add('readFileSync', (timer) => {
  const randomFile = Date.now();
  const filePath = `./${randomFile}.txt`;
  writeFileSync(filePath, Math.random().toString());

  timer.start();
  for (let i = 0; i < timer.count; i++)
    readFileSync(filePath, 'utf8');
  // Pass the number of executed iterations to `.end()`.
  // By default, `.end()` is called with the value 1.
  timer.end(timer.count);

  rmSync(filePath);
});

suite.run();
```

When your benchmark function accepts at least one argument, you must call
`.start()` and `.end()`. Failing to do so throws an
[ERR\_BENCHMARK\_MISSING\_OPERATION](./errors.md#err_benchmark_missing_operation) error.

[`suite.add()`]: #suiteaddname-options-fn
7 changes: 7 additions & 0 deletions doc/api/errors.md
@@ -705,6 +705,13 @@ An attempt was made to register something that is not a function as an
The type of an asynchronous resource was invalid. Users are also able
to define their own types if using the public embedder API.

<a id="ERR_BENCHMARK_MISSING_OPERATION"></a>

### `ERR_BENCHMARK_MISSING_OPERATION`

A benchmark timer's `.start()` or `.end()` method was not called during the
execution of the benchmark.

<a id="ERR_BROTLI_COMPRESSION_FAILED"></a>

### `ERR_BROTLI_COMPRESSION_FAILED`
1 change: 1 addition & 0 deletions doc/api/index.md
@@ -13,6 +13,7 @@
* [Assertion testing](assert.md)
* [Asynchronous context tracking](async_context.md)
* [Async hooks](async_hooks.md)
* [Benchmark](benchmark.md)
* [Buffer](buffer.md)
* [C++ addons](addons.md)
* [C/C++ addons with Node-API](n-api.md)
10 changes: 10 additions & 0 deletions lib/benchmark.js
@@ -0,0 +1,10 @@
'use strict';
const { ObjectAssign } = primordials;
const { Suite } = require('internal/benchmark/runner');
const { emitExperimentalWarning } = require('internal/util');

emitExperimentalWarning('The benchmark module');

ObjectAssign(module.exports, {
  Suite,
});