@@ -27,24 +27,24 @@ either [`wrk`][wrk] or [`autocannon`][autocannon].

`Autocannon` is a Node.js script that can be installed using
`npm install -g autocannon`. It will use the Node.js executable that is in the
- path. Hence if you want to compare two HTTP benchmark runs, make sure that the
+ path. In order to compare two HTTP benchmark runs, make sure that the
Node.js version in the path is not altered.

- `wrk` may be available through your preferred package manager. If not, you can
- easily build it [from source][wrk] via `make`.
+ `wrk` may be available through one of the available package managers. If not, it can
+ be easily built [from source][wrk] via `make`.

By default, `wrk` will be used as the benchmarker. If it is not available,
- `autocannon` will be used in its place. When creating an HTTP benchmark, you can
- specify which benchmarker should be used by providing it as an argument:
+ `autocannon` will be used in its place. When creating an HTTP benchmark, the
+ benchmarker to be used should be specified by providing it as an argument:

`node benchmark/run.js --set benchmarker=autocannon http`

`node benchmark/http/simple.js benchmarker=autocannon`

### Benchmark Analysis Requirements

- To analyze the results, `R` should be installed. Use your package manager or
- download it from https://www.r-project.org/.
+ To analyze the results, `R` should be installed. Use one of the available
+ package managers or download it from https://www.r-project.org/.

The R packages `ggplot2` and `plyr` are also used and can be installed using
the R REPL.
@@ -55,8 +55,8 @@ install.packages("ggplot2")
install.packages("plyr")
```

- In the event you get a message that you need to select a CRAN mirror first, you
- can specify a mirror by adding in the repo parameter.
+ In the event that a message is reported stating that a CRAN mirror must be
+ selected first, specify a mirror by adding in the repo parameter.

If we used the "http://cran.us.r-project.org" mirror, it could look something
like this:
@@ -65,7 +65,7 @@ like this:
install.packages("ggplot2", repo = "http://cran.us.r-project.org")
```

- Of course, use the mirror that suits your location.
+ Of course, use an appropriate mirror based on location.
A list of mirrors is [located here](https://cran.r-project.org/mirrors.html).

## Running benchmarks
@@ -98,7 +98,7 @@ process. This ensures that benchmark results aren't affected by the execution
order due to v8 optimizations. **The last number is the rate of operations
measured in ops/sec (higher is better).**

- Furthermore you can specify a subset of the configurations, by setting them in
+ Furthermore a subset of the configurations can be specified, by setting them in
the process arguments:

```console
@@ -179,9 +179,9 @@ In the output, _improvement_ is the relative improvement of the new version,
hopefully this is positive. _confidence_ tells if there is enough
statistical evidence to validate the _improvement_. If there is enough evidence
then there will be at least one star (`*`), more stars is just better. **However
- if there are no stars, then you shouldn't make any conclusions based on the
- _improvement_.** Sometimes this is fine, for example if you are expecting there
- to be no improvements, then there shouldn't be any stars.
+ if there are no stars, then don't make any conclusions based on the
+ _improvement_.** Sometimes this is fine, for example if no improvements are
+ expected, then there shouldn't be any stars.

**A word of caution:** Statistics is not a foolproof tool. If a benchmark shows
a statistically significant difference, there is a 5% risk that this
@@ -198,9 +198,9 @@ same for both versions. The confidence field will show a star if the p-value
is less than `0.05`._

The `compare.R` tool can also produce a box plot by using the `--plot filename`
- option. In this case there are 48 different benchmark combinations, thus you
- may want to filter the csv file. This can be done while benchmarking using the
- `--set` parameter (e.g. `--set encoding=ascii`) or by filtering results
+ option. In this case there are 48 different benchmark combinations, and there
+ may be a need to filter the csv file. This can be done while benchmarking
+ using the `--set` parameter (e.g. `--set encoding=ascii`) or by filtering results
afterwards using tools such as `sed` or `grep`. In the `sed` case be sure to
keep the first line since that contains the header information.

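For that after-the-fact filtering, a minimal Node.js sketch along these lines would keep the header row while selecting only the rows for one configuration; the file names and the `ascii` match below are hypothetical placeholders, and a `sed` or `grep` one-liner works just as well:

```js
'use strict';
// Illustrative sketch only: keep the CSV header plus the rows for one configuration.
// "compare-results.csv" is a hypothetical name for redirected compare.js output.
const fs = require('fs');

const lines = fs.readFileSync('compare-results.csv', 'utf8').split('\n');
const filtered = [
  lines[0],                                              // header row
  ...lines.slice(1).filter((l) => l.includes('ascii')),  // rows for the chosen configuration
];
fs.writeFileSync('compare-results-ascii.csv', filtered.join('\n'));
```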
@@ -295,7 +295,7 @@ chunk encoding mean confidence.interval
### Basics of a benchmark

All benchmarks use the `require('../common.js')` module. This contains the
- `createBenchmark(main, configs[, options])` method which will setup your
+ `createBenchmark(main, configs[, options])` method which will setup the
benchmark.

The arguments of `createBenchmark` are:
@@ -312,20 +312,20 @@ The arguments of `createBenchmark` are:
`createBenchmark` returns a `bench` object, which is used for timing
the runtime of the benchmark. Run `bench.start()` after the initialization
and `bench.end(n)` when the benchmark is done. `n` is the number of operations
- you performed in the benchmark.
+ performed in the benchmark.

The benchmark script will be run twice:

The first pass will configure the benchmark with the combination of
parameters specified in `configs`, and WILL NOT run the `main` function.
In this pass, no flags except the ones directly passed via commands
- that you run the benchmarks with will be used.
+ when running the benchmarks will be used.

In the second pass, the `main` function will be run, and the process
will be launched with:

- * The flags you've passed into `createBenchmark` (the third argument)
- * The flags in the command that you run this benchmark with
+ * The flags passed into `createBenchmark` (the third argument)
+ * The flags in the command passed when the benchmark was run

Beware that any code outside the `main` function will be run twice
in different processes. This could be troublesome if the code
@@ -346,7 +346,7 @@ const configs = {
};

const options = {
- // Add --expose-internals if you want to require internal modules in main
+ // Add --expose-internals in order to require internal modules in main
flags: ['--zero-fill-buffers']
};

@@ -357,9 +357,9 @@ const bench = common.createBenchmark(main, configs, options);
// in different processes, with different command line arguments.

function main(conf) {
- // You will only get the flags that you have passed to createBenchmark
- // earlier when main is run. If you want to benchmark the internal modules,
- // require them here. For example:
+ // Only flags that have been passed to createBenchmark
+ // earlier when main is run will be in effect.
+ // In order to benchmark the internal modules, require them here. For example:
// const URL = require('internal/url').URL

// Start the timer
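The diffed example stops at the timer here. As a rough sketch of how `bench.start()` and `bench.end(n)` typically bracket the measured work (the single `n` configuration and the loop body below are assumptions for illustration, not the file's actual continuation):

```js
'use strict';
const common = require('../common.js');

// Assumes a single configuration named `n`; real benchmarks define their own.
const bench = common.createBenchmark(main, { n: [1e6] });

function main(conf) {
  // Start the timer once any setup is done.
  bench.start();

  // The work being measured.
  let sum = 0;
  for (let i = 0; i < conf.n; i++) sum += i;

  // End the timer, passing in the number of operations performed.
  bench.end(conf.n);
}
```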