Commit 5d12fd9

joyeecheung authored and addaleax committed

doc: add benchmark/README.md and fix guide

* Write a new benchmark/README.md describing the benchmark directory
  layout and common API.
* Fix the moved benchmarking guide accordingly, add tips about how to
  get the help text from the benchmarking tools.

PR-URL: #11237
Fixes: #11190
Reviewed-By: James M Snell <[email protected]>
Reviewed-By: Andreas Madsen <[email protected]>

1 parent 22a6edd commit 5d12fd9

2 files changed: +278 −22 lines


benchmark/README.md (+246, new file)

@@ -0,0 +1,246 @@
# Node.js Core Benchmarks

This folder contains code and data used to measure performance
of different Node.js implementations and different ways of
writing JavaScript run by the built-in JavaScript engine.

For a detailed guide on how to write and run benchmarks in this
directory, see [the guide on benchmarks](../doc/guides/writing-and-running-benchmarks.md).

## Table of Contents

* [Benchmark directories](#benchmark-directories)
* [Common API](#common-api)

## Benchmark Directories

<table>
  <thead>
    <tr>
      <th>Directory</th>
      <th>Purpose</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>arrays</td>
      <td>
        Benchmarks for various operations on array-like objects,
        including <code>Array</code>, <code>Buffer</code>, and typed arrays.
      </td>
    </tr>
    <tr>
      <td>assert</td>
      <td>Benchmarks for the <code>assert</code> subsystem.</td>
    </tr>
    <tr>
      <td>buffers</td>
      <td>Benchmarks for the <code>buffer</code> subsystem.</td>
    </tr>
    <tr>
      <td>child_process</td>
      <td>Benchmarks for the <code>child_process</code> subsystem.</td>
    </tr>
    <tr>
      <td>crypto</td>
      <td>Benchmarks for the <code>crypto</code> subsystem.</td>
    </tr>
    <tr>
      <td>dgram</td>
      <td>Benchmarks for the <code>dgram</code> subsystem.</td>
    </tr>
    <tr>
      <td>domain</td>
      <td>Benchmarks for the <code>domain</code> subsystem.</td>
    </tr>
    <tr>
      <td>es</td>
      <td>
        Benchmarks for various new ECMAScript features and their
        pre-ES2015 counterparts.
      </td>
    </tr>
    <tr>
      <td>events</td>
      <td>Benchmarks for the <code>events</code> subsystem.</td>
    </tr>
    <tr>
      <td>fixtures</td>
      <td>
        Benchmark fixtures used in various benchmarks throughout
        the benchmark suite.
      </td>
    </tr>
    <tr>
      <td>fs</td>
      <td>Benchmarks for the <code>fs</code> subsystem.</td>
    </tr>
    <tr>
      <td>http</td>
      <td>Benchmarks for the <code>http</code> subsystem.</td>
    </tr>
    <tr>
      <td>misc</td>
      <td>
        Miscellaneous benchmarks and benchmarks for shared
        internal modules.
      </td>
    </tr>
    <tr>
      <td>module</td>
      <td>Benchmarks for the <code>module</code> subsystem.</td>
    </tr>
    <tr>
      <td>net</td>
      <td>Benchmarks for the <code>net</code> subsystem.</td>
    </tr>
    <tr>
      <td>path</td>
      <td>Benchmarks for the <code>path</code> subsystem.</td>
    </tr>
    <tr>
      <td>process</td>
      <td>Benchmarks for the <code>process</code> subsystem.</td>
    </tr>
    <tr>
      <td>querystring</td>
      <td>Benchmarks for the <code>querystring</code> subsystem.</td>
    </tr>
    <tr>
      <td>streams</td>
      <td>Benchmarks for the <code>streams</code> subsystem.</td>
    </tr>
    <tr>
      <td>string_decoder</td>
      <td>Benchmarks for the <code>string_decoder</code> subsystem.</td>
    </tr>
    <tr>
      <td>timers</td>
      <td>
        Benchmarks for the <code>timers</code> subsystem, including
        <code>setTimeout</code>, <code>setInterval</code>, etc.
      </td>
    </tr>
    <tr>
      <td>tls</td>
      <td>Benchmarks for the <code>tls</code> subsystem.</td>
    </tr>
    <tr>
      <td>url</td>
      <td>
        Benchmarks for the <code>url</code> subsystem, including the legacy
        <code>url</code> implementation and the WHATWG URL implementation.
      </td>
    </tr>
    <tr>
      <td>util</td>
      <td>Benchmarks for the <code>util</code> subsystem.</td>
    </tr>
    <tr>
      <td>vm</td>
      <td>Benchmarks for the <code>vm</code> subsystem.</td>
    </tr>
  </tbody>
</table>

### Other Top-level files

The top-level files include common dependencies of the benchmarks
and the tools for launching benchmarks and visualizing their output.
The actual benchmark scripts should be placed in their corresponding
directories.

* `_benchmark_progress.js`: implements the progress bar displayed
  when running `compare.js`
* `_cli.js`: parses the command line arguments passed to `compare.js`,
  `run.js` and `scatter.js`
* `_cli.R`: parses the command line arguments passed to `compare.R`
* `_http-benchmarkers.js`: selects and runs external tools for benchmarking
  the `http` subsystem.
* `common.js`: see [Common API](#common-api).
* `compare.js`: command line tool for comparing performance between different
  Node.js binaries.
* `compare.R`: R script for statistically analyzing the output of
  `compare.js`
* `run.js`: command line tool for running individual benchmark suite(s).
* `scatter.js`: command line tool for comparing the performance
  between different parameters in benchmark configurations,
  for example to analyze the time complexity.
* `scatter.R`: R script for visualizing the output of `scatter.js` with
  scatter plots.
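
Each of these launcher scripts prints its usage text when invoked without
arguments, which is the quickest way to see the options it accepts:

```console
$ node benchmark/run.js
$ node benchmark/compare.js
$ node benchmark/scatter.js
```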

## Common API

The common.js module is used by benchmarks for consistency across repeated
tasks. It has a number of helpful functions and properties to help with
writing benchmarks.

### createBenchmark(fn, configs[, options])

See [the guide on writing benchmarks](../doc/guides/writing-and-running-benchmarks.md#basics-of-a-benchmark).
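
For orientation, a minimal benchmark built on `createBenchmark` usually
follows the shape sketched below; the configuration key `n` and the timed
body are placeholders, and the linked guide is the authoritative walkthrough.

```js
'use strict';
const common = require('../common.js');

// One run is executed for every combination of the configuration values.
const bench = common.createBenchmark(main, {
  n: [1e6],  // iteration count (placeholder value)
});

function main(conf) {
  const n = +conf.n;
  bench.start();                 // start timing
  for (let i = 0; i < n; i++) {
    // the operation being measured goes here
  }
  bench.end(n);                  // stop timing and report n operations
}
```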

### default\_http\_benchmarker

The default benchmarker used to run HTTP benchmarks.
See [the guide on writing HTTP benchmarks](../doc/guides/writing-and-running-benchmarks.md#creating-an-http-benchmark).
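
The benchmarker used for a particular run can also be selected on the
command line, as the guide shows, for example:

```console
$ node benchmark/http/simple.js benchmarker=autocannon
```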

### PORT

The default port used to run HTTP benchmarks.
See [the guide on writing HTTP benchmarks](../doc/guides/writing-and-running-benchmarks.md#creating-an-http-benchmark).
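
Combined with the `http(options, callback)` helper on the object returned by
`createBenchmark`, an HTTP benchmark is roughly sketched below; the
`connections` option name is an assumption here, so check
`_http-benchmarkers.js` for the options that are actually supported.

```js
'use strict';
const common = require('../common.js');
const http = require('http');

const bench = common.createBenchmark(main, {
  connections: [100],  // assumed option: concurrent connections
});

function main(conf) {
  const server = http.createServer((req, res) => {
    res.end('hello world');
  });

  // Listen on the shared benchmark port, then let the configured external
  // benchmarker (wrk or autocannon) drive the server; the helper reports
  // the timing when the run finishes.
  server.listen(common.PORT, () => {
    bench.http({ connections: conf.connections }, () => {
      server.close();
    });
  });
}
```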

### sendResult(data)

Used in special benchmarks that can't use `createBenchmark` and the object
it returns to accomplish what they need. This function reports timing
data to the parent process (usually created by running `compare.js`, `run.js` or
`scatter.js`).

### v8ForceOptimization(method[, ...args])

Force V8 to mark the `method` for optimization with the native function
`%OptimizeFunctionOnNextCall()` and return the optimization status
after that.

It can be used to prevent the benchmark from getting disrupted by the optimizer
kicking in halfway through. However, this could result in a less effective
optimization. In general, only use it if you know what it actually does.
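
As a sketch of the intended use (assuming V8 natives syntax is available to
the benchmark process), a benchmark might warm up its hot function before
the timed loop:

```js
'use strict';
const common = require('../common.js');

const bench = common.createBenchmark(main, { n: [1e6] });

function shout(s) {
  return s.toUpperCase();
}

function main(conf) {
  const n = +conf.n;

  // Mark the hot function for optimization up front so the optimizer does
  // not kick in halfway through the timed loop.
  common.v8ForceOptimization(shout, 'hello');

  bench.start();
  for (let i = 0; i < n; i++)
    shout('hello');
  bench.end(n);
}
```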

doc/guides/writing-and-running-benchmarks.md (+32 −22)

@@ -1,26 +1,34 @@
-# Node.js core benchmark
+# How to Write and Run Benchmarks in Node.js Core
 
-This folder contains benchmarks to measure the performance of the Node.js APIs.
-
-## Table of Content
+## Table of Contents
 
 * [Prerequisites](#prerequisites)
+* [HTTP Benchmark Requirements](#http-benchmark-requirements)
+* [Benchmark Analysis Requirements](#benchmark-analysis-requirements)
 * [Running benchmarks](#running-benchmarks)
-* [Running individual benchmarks](#running-individual-benchmarks)
-* [Running all benchmarks](#running-all-benchmarks)
-* [Comparing node versions](#comparing-node-versions)
-* [Comparing parameters](#comparing-parameters)
+* [Running individual benchmarks](#running-individual-benchmarks)
+* [Running all benchmarks](#running-all-benchmarks)
+* [Comparing Node.js versions](#comparing-nodejs-versions)
+* [Comparing parameters](#comparing-parameters)
 * [Creating a benchmark](#creating-a-benchmark)
+* [Basics of a benchmark](#basics-of-a-benchmark)
+* [Creating an HTTP benchmark](#creating-an-http-benchmark)
 
 ## Prerequisites
 
+Basic Unix tools are required for some benchmarks.
+[Git for Windows][git-for-windows] includes Git Bash and the necessary tools,
+which need to be included in the global Windows `PATH`.
+
+### HTTP Benchmark Requirements
+
 Most of the HTTP benchmarks require a benchmarker to be installed, this can be
 either [`wrk`][wrk] or [`autocannon`][autocannon].
 
-`Autocannon` is a Node script that can be installed using
-`npm install -g autocannon`. It will use the Node executable that is in the
+`Autocannon` is a Node.js script that can be installed using
+`npm install -g autocannon`. It will use the Node.js executable that is in the
 path, hence if you want to compare two HTTP benchmark runs make sure that the
-Node version in the path is not altered.
+Node.js version in the path is not altered.
 
 `wrk` may be available through your preferred package manager. If not, you can
 easily build it [from source][wrk] via `make`.
@@ -34,9 +42,7 @@ benchmarker to be used by providing it as an argument, e. g.:
 
 `node benchmark/http/simple.js benchmarker=autocannon`
 
-Basic Unix tools are required for some benchmarks.
-[Git for Windows][git-for-windows] includes Git Bash and the necessary tools,
-which need to be included in the global Windows `PATH`.
+### Benchmark Analysis Requirements
 
 To analyze the results `R` should be installed. Check your package manager or
 download it from https://www.r-project.org/.
@@ -50,7 +56,6 @@ install.packages("ggplot2")
 install.packages("plyr")
 ```
 
-### CRAN Mirror Issues
 In the event you get a message that you need to select a CRAN mirror first.
 
 You can specify a mirror by adding in the repo parameter.
@@ -108,7 +113,8 @@ buffers/buffer-tostring.js n=10000000 len=1024 arg=false: 3783071.1678948295
 ### Running all benchmarks
 
 Similar to running individual benchmarks, a group of benchmarks can be executed
-by using the `run.js` tool. Again this does not provide the statistical
+by using the `run.js` tool. To see how to use this script,
+run `node benchmark/run.js`. Again this does not provide the statistical
 information to make any conclusions.
 
 ```console
@@ -135,18 +141,19 @@ It is possible to execute more groups by adding extra process arguments.
 $ node benchmark/run.js arrays buffers
 ```
 
-### Comparing node versions
+### Comparing Node.js versions
 
-To compare the effect of a new node version use the `compare.js` tool. This
+To compare the effect of a new Node.js version use the `compare.js` tool. This
 will run each benchmark multiple times, making it possible to calculate
-statistics on the performance measures.
+statistics on the performance measures. To see how to use this script,
+run `node benchmark/compare.js`.
 
 As an example on how to check for a possible performance improvement, the
 [#5134](https://github.com/nodejs/node/pull/5134) pull request will be used as
 an example. This pull request _claims_ to improve the performance of the
 `string_decoder` module.
 
-First build two versions of node, one from the master branch (here called
+First build two versions of Node.js, one from the master branch (here called
 `./node-master`) and another with the pull request applied (here called
 `./node-pr-5135`).
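
For context, a comparison run against the two binaries built above might look
roughly like this; the `--old` and `--new` flags and the `compare.R` pipeline
are assumptions based on the tool descriptions, so confirm them by running
`node benchmark/compare.js` without arguments:

```console
$ node benchmark/compare.js --old ./node-master --new ./node-pr-5135 string_decoder > compare.csv
$ cat compare.csv | Rscript benchmark/compare.R
```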

@@ -219,7 +226,8 @@ It can be useful to compare the performance for different parameters, for
 example to analyze the time complexity.
 
 To do this use the `scatter.js` tool, this will run a benchmark multiple times
-and generate a csv with the results.
+and generate a csv with the results. To see how to use this script,
+run `node benchmark/scatter.js`.
 
 ```console
 $ node benchmark/scatter.js benchmark/string_decoder/string-decoder.js > scatter.csv
@@ -286,6 +294,8 @@ chunk encoding mean confidence.interval
 
 ## Creating a benchmark
 
+### Basics of a benchmark
+
 All benchmarks use the `require('../common.js')` module. This contains the
 `createBenchmark(main, configs[, options])` method which will setup your
 benchmark.
@@ -369,7 +379,7 @@ function main(conf) {
 }
 ```
 
-## Creating HTTP benchmark
+### Creating an HTTP benchmark
 
 The `bench` object returned by `createBenchmark` implements
 `http(options, callback)` method. It can be used to run external tool to
