git clone --depth=1 https://github.com/micl2e2/bench-wrap-libs
cd bench-wrap-libs
# start benchmark
bash bench-all.bash
- A Linux distribution.
- At least the following programs installed:
  - bash
  - coreutils
  - valgrind
  - gnuplot
  - cargo (minimal profile)
  - g++ or clang++
  - JDK, Maven (mvn)
  - go
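Before running the benchmark, it can help to confirm the required tools are on `PATH`. This is a hypothetical helper sketch, not part of the repo:

```shell
# Check each required command-line tool; report any that are missing.
for tool in bash valgrind gnuplot cargo g++ mvn go; do
  command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
done
```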
Note: if anyone finds improper or unfair usage of any of the selected libraries, or notices another counterpart missing from this benchmark, please feel free to open an issue.
Below are the details of this benchmark:
- (Random Samples) The samples' content looks like:

  abaaebdd c a deeccadd cbcbedbaeadecbdbccbdbeeaaa bacecddd cb cdcabcdccdceecca cc aeebeaee bededdbddddad ae d bcaacccaba eccdc cadc aac eddedbada babbd bbb bbbcdd aecd becc ab debb daecbeddaedaaebaccba edd dac d adba c ebba dc aeede bcdde bed b eb ddbdaacbe bda aa d

  which simulates an English article as closely to reality as possible. The samples are generated randomly from a character pool consisting of the letters "abcde" and the ASCII space " ", resulting in 10 files ranging in size from 512KB to 5MB (roughly the size of a plain-text Bible).
- (Same Task) All libraries are assigned the same task: take the samples from standard input, wrap them at an 80-column width limit, then print the result to standard output.
- (Correct Enough) Although the libraries' results differ from one another, they are all correct enough, i.e., no line exceeds the 80-column limit.
- While all libraries are measured in the "time elapsed" benchmark, not all of them are measured in the "memory peak" benchmark, because some libraries' memory allocation mechanisms differ completely from those used by systems programming languages like C/C++ or Rust.
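As a minimal illustration of the task and the correctness criterion described above, here is a coreutils/awk sketch. It is a stand-in, not one of the benchmarked libraries, and the file names are illustrative assumptions:

```shell
# Generate a small random sample from the assumed pool "abcde " (letters plus
# ASCII space), as described above. "sample.txt" is an illustrative name.
tr -dc 'abcde ' </dev/urandom | head -c 4096 > sample.txt

# The shared task: read standard input, wrap at width 80, print to standard output.
fold -s -w 80 < sample.txt > wrapped.txt

# "Correct enough": no output line may exceed 80 columns.
awk 'length($0) > 80 { exit 1 }' wrapped.txt && echo "all lines within 80 columns"
```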
On an i5-3337U/8GB machine, the benchmark takes about 20 minutes, and the result will look similar to: