diff --git a/benchmark/README.md b/benchmark/README.md
new file mode 100644
index 00000000000000..6fd9a97bdfb3bb
--- /dev/null
+++ b/benchmark/README.md
@@ -0,0 +1,246 @@
+# Node.js Core Benchmarks
+
+This folder contains code and data used to measure performance
+of different Node.js implementations and different ways of
+writing JavaScript run by the built-in JavaScript engine.
+
+For a detailed guide on how to write and run benchmarks in this
+directory, see [the guide on benchmarks](../doc/guides/writing-and-running-benchmarks.md).
+
+## Table of Contents
+
+* [Benchmark directories](#benchmark-directories)
+* [Common API](#common-api)
+
+## Benchmark Directories
+
+<table>
+  <tr>
+    <th>Directory</th>
+    <th>Purpose</th>
+  </tr>
+  <tr>
+    <td>arrays</td>
+    <td>
+      Benchmarks for various operations on array-like objects,
+      including <code>Array</code>, <code>Buffer</code>, and typed arrays.
+    </td>
+  </tr>
+  <tr>
+    <td>assert</td>
+    <td>
+      Benchmarks for the <code>assert</code> subsystem.
+    </td>
+  </tr>
+  <tr>
+    <td>buffers</td>
+    <td>
+      Benchmarks for the <code>buffer</code> subsystem.
+    </td>
+  </tr>
+  <tr>
+    <td>child_process</td>
+    <td>
+      Benchmarks for the <code>child_process</code> subsystem.
+    </td>
+  </tr>
+  <tr>
+    <td>crypto</td>
+    <td>
+      Benchmarks for the <code>crypto</code> subsystem.
+    </td>
+  </tr>
+  <tr>
+    <td>dgram</td>
+    <td>
+      Benchmarks for the <code>dgram</code> subsystem.
+    </td>
+  </tr>
+  <tr>
+    <td>domain</td>
+    <td>
+      Benchmarks for the <code>domain</code> subsystem.
+    </td>
+  </tr>
+  <tr>
+    <td>es</td>
+    <td>
+      Benchmarks for various new ECMAScript features and their
+      pre-ES2015 counterparts.
+    </td>
+  </tr>
+  <tr>
+    <td>events</td>
+    <td>
+      Benchmarks for the <code>events</code> subsystem.
+    </td>
+  </tr>
+  <tr>
+    <td>fixtures</td>
+    <td>
+      Benchmark fixtures used in various benchmarks throughout
+      the benchmark suite.
+    </td>
+  </tr>
+  <tr>
+    <td>fs</td>
+    <td>
+      Benchmarks for the <code>fs</code> subsystem.
+    </td>
+  </tr>
+  <tr>
+    <td>http</td>
+    <td>
+      Benchmarks for the <code>http</code> subsystem.
+    </td>
+  </tr>
+  <tr>
+    <td>misc</td>
+    <td>
+      Miscellaneous benchmarks and benchmarks for shared
+      internal modules.
+    </td>
+  </tr>
+  <tr>
+    <td>module</td>
+    <td>
+      Benchmarks for the <code>module</code> subsystem.
+    </td>
+  </tr>
+  <tr>
+    <td>net</td>
+    <td>
+      Benchmarks for the <code>net</code> subsystem.
+    </td>
+  </tr>
+  <tr>
+    <td>path</td>
+    <td>
+      Benchmarks for the <code>path</code> subsystem.
+    </td>
+  </tr>
+  <tr>
+    <td>process</td>
+    <td>
+      Benchmarks for the <code>process</code> subsystem.
+    </td>
+  </tr>
+  <tr>
+    <td>querystring</td>
+    <td>
+      Benchmarks for the <code>querystring</code> subsystem.
+    </td>
+  </tr>
+  <tr>
+    <td>streams</td>
+    <td>
+      Benchmarks for the <code>streams</code> subsystem.
+    </td>
+  </tr>
+  <tr>
+    <td>string_decoder</td>
+    <td>
+      Benchmarks for the <code>string_decoder</code> subsystem.
+    </td>
+  </tr>
+  <tr>
+    <td>timers</td>
+    <td>
+      Benchmarks for the <code>timers</code> subsystem, including
+      <code>setTimeout</code>, <code>setInterval</code>, etc.
+    </td>
+  </tr>
+  <tr>
+    <td>tls</td>
+    <td>
+      Benchmarks for the <code>tls</code> subsystem.
+    </td>
+  </tr>
+  <tr>
+    <td>url</td>
+    <td>
+      Benchmarks for the <code>url</code> subsystem, including the legacy
+      <code>url</code> implementation and the WHATWG URL implementation.
+    </td>
+  </tr>
+  <tr>
+    <td>util</td>
+    <td>
+      Benchmarks for the <code>util</code> subsystem.
+    </td>
+  </tr>
+  <tr>
+    <td>vm</td>
+    <td>
+      Benchmarks for the <code>vm</code> subsystem.
+    </td>
+  </tr>
+</table>
+
+### Other Top-level Files
+
+The top-level files include common dependencies of the benchmarks
+and the tools for launching benchmarks and visualizing their output.
+The actual benchmark scripts should be placed in their corresponding
+directories.
+
+* `_benchmark_progress.js`: implements the progress bar displayed
+  when running `compare.js`.
+* `_cli.js`: parses the command line arguments passed to `compare.js`,
+  `run.js` and `scatter.js`.
+* `_cli.R`: parses the command line arguments passed to `compare.R`.
+* `_http-benchmarkers.js`: selects and runs external tools for benchmarking
+  the `http` subsystem.
+* `common.js`: see [Common API](#common-api).
+* `compare.js`: command line tool for comparing performance between different
+  Node.js binaries.
+* `compare.R`: R script for statistically analyzing the output of
+  `compare.js`.
+* `run.js`: command line tool for running individual benchmark suite(s).
+* `scatter.js`: command line tool for comparing the performance
+  between different parameters in benchmark configurations,
+  for example to analyze the time complexity.
+* `scatter.R`: R script for visualizing the output of `scatter.js` with
+  scatter plots.
+
+## Common API
+
+The `common.js` module is used by benchmarks for consistency across repeated
+tasks. It has a number of helpful functions and properties to help with
+writing benchmarks.
+
+### createBenchmark(fn, configs[, options])
+
+See [the guide on writing benchmarks](../doc/guides/writing-and-running-benchmarks.md#basics-of-a-benchmark).
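+
+As a quick reference, a benchmark built on `createBenchmark` usually follows
+the pattern below (a minimal sketch: the `n` values and the `Buffer.from`
+workload are placeholders, not an actual benchmark from this suite):
+
+```js
+'use strict';
+const common = require('../common.js');
+
+// One benchmark run is started for every combination of the config values.
+const bench = common.createBenchmark(main, {
+  n: [1e5, 1e6],
+});
+
+function main(conf) {
+  const n = +conf.n;
+  bench.start();
+  for (let i = 0; i < n; i++) {
+    // The operation being measured goes here.
+    Buffer.from('hello world');
+  }
+  // Report how many operations were timed.
+  bench.end(n);
+}
+```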
+
+### default\_http\_benchmarker
+
+The default benchmarker used to run HTTP benchmarks.
+See [the guide on writing HTTP benchmarks](../doc/guides/writing-and-running-benchmarks.md#creating-an-http-benchmark).
+
+### PORT
+
+The default port used to run HTTP benchmarks.
+See [the guide on writing HTTP benchmarks](../doc/guides/writing-and-running-benchmarks.md#creating-an-http-benchmark).
+
+### sendResult(data)
+
+Used in special benchmarks that can't use `createBenchmark` and the object
+it returns to accomplish what they need. This function reports timing
+data to the parent process (usually created by running `compare.js`, `run.js`
+or `scatter.js`).
+
+### v8ForceOptimization(method[, ...args])
+
+Forces V8 to mark `method` for optimization with the native function
+`%OptimizeFunctionOnNextCall()` and returns the optimization status
+afterwards.
+
+It can be used to prevent the benchmark from getting disrupted by the optimizer
+kicking in halfway through. However, this could result in a less effective
+optimization. In general, only use it if you know what it actually does.
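+
+For instance, a benchmark could ask for a helper to be optimized up front so
+that the optimizer does not kick in halfway through the timed loop (a sketch:
+the `toUpper` helper is a placeholder for whatever the benchmark exercises):
+
+```js
+'use strict';
+const common = require('../common.js');
+
+const bench = common.createBenchmark(main, { n: [1e6] });
+
+function toUpper(str) {
+  return str.toUpperCase();
+}
+
+function main(conf) {
+  const n = +conf.n;
+  // Warm up and force-optimize the helper before timing starts.
+  common.v8ForceOptimization(toUpper, 'hello');
+  bench.start();
+  for (let i = 0; i < n; i++)
+    toUpper('hello');
+  bench.end(n);
+}
+```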
diff --git a/doc/guides/writing-and-running-benchmarks.md b/doc/guides/writing-and-running-benchmarks.md
index d1233470757f20..a20f321b7c2408 100644
--- a/doc/guides/writing-and-running-benchmarks.md
+++ b/doc/guides/writing-and-running-benchmarks.md
@@ -1,26 +1,34 @@
-# Node.js core benchmark
+# How to Write and Run Benchmarks in Node.js Core
 
-This folder contains benchmarks to measure the performance of the Node.js APIs.
-
-## Table of Content
+## Table of Contents
 
 * [Prerequisites](#prerequisites)
+  * [HTTP Benchmark Requirements](#http-benchmark-requirements)
+  * [Benchmark Analysis Requirements](#benchmark-analysis-requirements)
 * [Running benchmarks](#running-benchmarks)
- * [Running individual benchmarks](#running-individual-benchmarks)
- * [Running all benchmarks](#running-all-benchmarks)
- * [Comparing node versions](#comparing-node-versions)
- * [Comparing parameters](#comparing-parameters)
+  * [Running individual benchmarks](#running-individual-benchmarks)
+  * [Running all benchmarks](#running-all-benchmarks)
+  * [Comparing Node.js versions](#comparing-nodejs-versions)
+  * [Comparing parameters](#comparing-parameters)
 * [Creating a benchmark](#creating-a-benchmark)
+  * [Basics of a benchmark](#basics-of-a-benchmark)
+  * [Creating an HTTP benchmark](#creating-an-http-benchmark)
 
 ## Prerequisites
 
+Basic Unix tools are required for some benchmarks.
+[Git for Windows][git-for-windows] includes Git Bash and the necessary tools,
+which need to be included in the global Windows `PATH`.
+
+### HTTP Benchmark Requirements
+
 Most of the HTTP benchmarks require a benchmarker to be installed; this can be
 either [`wrk`][wrk] or [`autocannon`][autocannon].
 
-`Autocannon` is a Node script that can be installed using
-`npm install -g autocannon`. It will use the Node executable that is in the
+`Autocannon` is a Node.js script that can be installed using
+`npm install -g autocannon`. It will use the Node.js executable that is in the
 path, hence if you want to compare two HTTP benchmark runs make sure that the
-Node version in the path is not altered.
+Node.js version in the path is not altered.
 
 `wrk` may be available through your preferred package manager. If not, you can
 easily build it [from source][wrk] via `make`.
 
@@ -34,9 +42,7 @@ benchmarker to be used by providing it as an argument, e.g.:
 
 `node benchmark/http/simple.js benchmarker=autocannon`
 
-Basic Unix tools are required for some benchmarks.
-[Git for Windows][git-for-windows] includes Git Bash and the necessary tools,
-which need to be included in the global Windows `PATH`.
+### Benchmark Analysis Requirements
 
 To analyze the results, `R` should be installed. Check your package manager or
 download it from https://www.r-project.org/.
 
@@ -50,7 +56,6 @@ install.packages("ggplot2")
 install.packages("plyr")
 ```
 
-### CRAN Mirror Issues
 In the event you get a message that you need to select a CRAN mirror first,
 you can specify a mirror by adding the `repo` parameter.
 
@@ -108,7 +113,8 @@ buffers/buffer-tostring.js n=10000000 len=1024 arg=false: 3783071.1678948295
 ### Running all benchmarks
 
 Similar to running individual benchmarks, a group of benchmarks can be executed
-by using the `run.js` tool. Again this does not provide the statistical
+by using the `run.js` tool. To see how to use this script,
+run `node benchmark/run.js`. Again this does not provide the statistical
 information to make any conclusions.
 
 ```console
 $ node benchmark/run.js
 
@@ -135,18 +141,19 @@ It is possible to execute more groups by adding extra process arguments.
 $ node benchmark/run.js arrays buffers
 ```
 
-### Comparing node versions
+### Comparing Node.js versions
 
-To compare the effect of a new node version use the `compare.js` tool. This
+To compare the effect of a new Node.js version use the `compare.js` tool. This
 will run each benchmark multiple times, making it possible to calculate
-statistics on the performance measures.
+statistics on the performance measures. To see how to use this script,
+run `node benchmark/compare.js`.
 
 As an example of how to check for a possible performance improvement, the
 [#5134](https://github.com/nodejs/node/pull/5134) pull request will be used.
 This pull request _claims_ to improve the performance of the
 `string_decoder` module.
 
-First build two versions of node, one from the master branch (here called
+First build two versions of Node.js, one from the master branch (here called
 `./node-master`) and another with the pull request applied (here called
 `./node-pr-5134`).
 
@@ -219,7 +226,8 @@ It can be useful to compare the performance for different parameters,
 for example to analyze the time complexity.
 
 To do this use the `scatter.js` tool, which will run a benchmark multiple times
-and generate a csv with the results.
+and generate a csv with the results. To see how to use this script,
+run `node benchmark/scatter.js`.
 
 ```console
 $ node benchmark/scatter.js benchmark/string_decoder/string-decoder.js > scatter.csv
 
@@ -286,6 +294,8 @@ chunk encoding mean confidence.interval
 
 ## Creating a benchmark
 
+### Basics of a benchmark
+
 All benchmarks use the `require('../common.js')` module. This contains the
 `createBenchmark(main, configs[, options])` method which will set up your
 benchmark.
 
@@ -369,7 +379,7 @@ function main(conf) {
 }
 ```
 
-## Creating HTTP benchmark
+### Creating an HTTP benchmark
 
 The `bench` object returned by `createBenchmark` implements an
 `http(options, callback)` method. It can be used to run an external tool to