diff --git a/benchmarks.md b/benchmarks.md
index 7ddcef131a..40bf372683 100644
--- a/benchmarks.md
+++ b/benchmarks.md
@@ -6,6 +6,6 @@ It is important to note that the benchmark codes are not written for absolute ma
 
 ![Benchmark results](/assets/benchmarks/benchmarks.svg)
 
-The benchmark data shown above were computed with Julia v1.0.0, SciLua v1.0.0-b12, Rust 1.27.0, Go 1.9, Java 1.8.0_17, Javascript V8 6.2.414.54, Matlab R2018a, Anaconda Python 3.6.3, R 3.5.0, and Octave 4.2.2. C and Fortran are compiled with gcc 7.3.1, taking the best timing from all optimization levels (-O0 through -O3). C, Fortran, Go, Julia, Lua, Python, and Octave use [OpenBLAS](https://github.com/xianyi/OpenBLAS) v0.2.20 for matrix operations; Mathematica uses Intel® MKL. The Python implementations of matrix_statistics and matrix_multiply use [NumPy](https://www.numpy.org/) v1.14.0 and OpenBLAS v0.2.20 functions; the rest are pure Python implementations. Raw benchmark numbers in CSV format are available [here](/assets/benchmarks/benchmarks.csv) and the benchmark source code for each language can be found in the perf. files listed [here](https://github.com/JuliaLang/Microbenchmarks). The plot is generated using this [IJulia benchmarks notebook](/assets/benchmarks/benchmarks.ipynb).
+The vertical axis shows each benchmark time normalized against the C implementation. The benchmark data shown above were computed with Julia v1.0.0, SciLua v1.0.0-b12, Rust 1.27.0, Go 1.9, Java 1.8.0_17, Javascript V8 6.2.414.54, Matlab R2018a, Anaconda Python 3.6.3, R 3.5.0, and Octave 4.2.2. C and Fortran are compiled with gcc 7.3.1, taking the best timing from all optimization levels (-O0 through -O3). C, Fortran, Go, Julia, Lua, Python, and Octave use [OpenBLAS](https://github.com/xianyi/OpenBLAS) v0.2.20 for matrix operations; Mathematica uses Intel® MKL. The Python implementations of matrix_statistics and matrix_multiply use [NumPy](https://www.numpy.org/) v1.14.0 and OpenBLAS v0.2.20 functions; the rest are pure Python implementations. Raw benchmark numbers in CSV format are available [here](/assets/benchmarks/benchmarks.csv) and the benchmark source code for each language can be found in the perf. files listed [here](https://github.com/JuliaLang/Microbenchmarks). The plot is generated using this [IJulia benchmarks notebook](/assets/benchmarks/benchmarks.ipynb). These micro-benchmark results were obtained on a single core (serial execution) on an Intel® Core™ i7-3960X 3.30GHz CPU with 64GB of 1600MHz DDR3 RAM, running openSUSE LEAP 15.0 Linux.
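
The sentence added above says each timing is normalized against the C implementation. For readers who want to see what that means concretely, here is a minimal Julia sketch (not the official notebook linked in the page) that divides each benchmark's best timing by the corresponding C timing. The `benchmarks.csv` column layout (language, benchmark, time in seconds) and the lowercase `"c"` language label are assumptions for illustration, not taken from the source.

```julia
using DelimitedFiles

# Assumed column layout: language, benchmark, time in seconds (hypothetical).
raw = readdlm("benchmarks.csv", ',')

# Keep the best (minimum) timing per (language, benchmark) pair.
best = Dict{Tuple{String,String},Float64}()
for i in 1:size(raw, 1)
    lang, bench, t = string(raw[i, 1]), string(raw[i, 2]), float(raw[i, 3])
    best[(lang, bench)] = min(get(best, (lang, bench), Inf), t)
end

# Normalize against the C timing, so C == 1.0 on the vertical axis.
# Assumes the C rows are labeled "c" in the CSV.
normalized = Dict(k => t / best[("c", k[2])] for (k, t) in best)
```

A language/benchmark pair with `normalized` value 2.0 therefore took twice as long as the C implementation of the same benchmark.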