Commit f3f5a89

doc: *.md formatting fixes in the benchmark dir
* Add language specification for the txt code blocks.
* Move the definitions to the bottom.

Ref: #7727
PR-URL: #7727
Reviewed-By: Rich Trott <[email protected]>
Reviewed-By: Michaël Zasso <[email protected]>
Reviewed-By: James M Snell <[email protected]>
1 parent fc11fe8 commit f3f5a89

File tree

1 file changed: +13, -14 lines


benchmark/README.md

+13 -14
````diff
@@ -30,8 +30,6 @@ install.packages("ggplot2")
 install.packages("plyr")
 ```
 
-[wrk]: https://github.com/wg/wrk
-
 ## Running benchmarks
 
 ### Running individual benchmarks
@@ -43,7 +41,7 @@ conclusions about the performance.
 Individual benchmarks can be executed by simply executing the benchmark script
 with node.
 
-```
+```console
 $ node benchmark/buffers/buffer-tostring.js
 
 buffers/buffer-tostring.js n=10000000 len=0 arg=true: 62710590.393305704
@@ -65,7 +63,7 @@ measured in ops/sec (higher is better).**
 Furthermore you can specify a subset of the configurations, by setting them in
 the process arguments:
 
-```
+```console
 $ node benchmark/buffers/buffer-tostring.js len=1024
 
 buffers/buffer-tostring.js n=10000000 len=1024 arg=true: 3498295.68561504
@@ -78,7 +76,7 @@ Similar to running individual benchmarks, a group of benchmarks can be executed
 by using the `run.js` tool. Again this does not provide the statistical
 information to make any conclusions.
 
-```
+```console
 $ node benchmark/run.js arrays
 
 arrays/var-int.js
@@ -98,7 +96,7 @@ arrays/zero-int.js n=25 type=Buffer: 90.49906662339653
 ```
 
 It is possible to execute more groups by adding extra process arguments.
-```
+```console
 $ node benchmark/run.js arrays buffers
 ```
 
@@ -119,13 +117,13 @@ First build two versions of node, one from the master branch (here called
 
 The `compare.js` tool will then produce a csv file with the benchmark results.
 
-```
+```console
 $ node benchmark/compare.js --old ./node-master --new ./node-pr-5134 string_decoder > compare-pr-5134.csv
 ```
 
 For analysing the benchmark results use the `compare.R` tool.
 
-```
+```console
 $ cat compare-pr-5134.csv | Rscript benchmark/compare.R
 
 improvement significant p.value
@@ -159,16 +157,14 @@ _For the statistically minded, the R script performs an [independent/unpaired
 same for both versions. The significant field will show a star if the p-value
 is less than `0.05`._
 
-[t-test]: https://en.wikipedia.org/wiki/Student%27s_t-test#Equal_or_unequal_sample_sizes.2C_unequal_variances
-
 The `compare.R` tool can also produce a box plot by using the `--plot filename`
 option. In this case there are 48 different benchmark combinations, thus you
 may want to filter the csv file. This can be done while benchmarking using the
 `--set` parameter (e.g. `--set encoding=ascii`) or by filtering results
 afterwards using tools such as `sed` or `grep`. In the `sed` case be sure to
 keep the first line since that contains the header information.
 
-```
+```console
 $ cat compare-pr-5134.csv | sed '1p;/encoding=ascii/!d' | Rscript benchmark/compare.R --plot compare-plot.png
 
 improvement significant p.value
@@ -190,15 +186,15 @@ example to analyze the time complexity.
 To do this use the `scatter.js` tool, this will run a benchmark multiple times
 and generate a csv with the results.
 
-```
+```console
 $ node benchmark/scatter.js benchmark/string_decoder/string-decoder.js > scatter.csv
 ```
 
 After generating the csv, a comparison table can be created using the
 `scatter.R` tool. Even more useful it creates an actual scatter plot when using
 the `--plot filename` option.
 
-```
+```console
 $ cat scatter.csv | Rscript benchmark/scatter.R --xaxis chunk --category encoding --plot scatter-plot.png --log
 
 aggregating variable: inlen
@@ -229,7 +225,7 @@ can be solved by filtering. This can be done while benchmarking using the
 afterwards using tools such as `sed` or `grep`. In the `sed` case be
 sure to keep the first line since that contains the header information.
 
-```
+```console
 $ cat scatter.csv | sed -E '1p;/([^,]+, ){3}128,/!d' | Rscript benchmark/scatter.R --xaxis chunk --category encoding --plot scatter-plot.png --log
 
 chunk encoding mean confidence.interval
@@ -290,3 +286,6 @@ function main(conf) {
 bench.end(conf.n);
 }
 ```
+
+[wrk]: https://github.com/wg/wrk
+[t-test]: https://en.wikipedia.org/wiki/Student%27s_t-test#Equal_or_unequal_sample_sizes.2C_unequal_variances
````
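The `sed` recipe the README recommends (`1p` re-prints the first line so the CSV header survives; `/pattern/!d` deletes every other line that does not match) can be tried on a throwaway CSV. The file contents below are made up for illustration and are not real `compare.js` output:

```shell
# Hypothetical miniature CSV standing in for compare.js output.
# '1p' re-prints line 1 (the header); '/encoding=ascii/!d' deletes every
# line that does not mention the encoding=ascii configuration.
printf 'filename, configuration, rate\nx.js, encoding=ascii, 100\nx.js, encoding=utf8, 90\n' \
  | sed '1p;/encoding=ascii/!d'
```

Only the header row and the `encoding=ascii` row remain, so a downstream tool such as `compare.R` would still see valid column names.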
