CoPTS is an extensible benchmarking suite, implemented in Bash and Python, that automates the orchestration of parallel benchmark runs across different container runtimes. Whether you want to evaluate the performance of your containerized applications or compare the capabilities of various container runtimes, CoPTS supports a range of benchmarks and runtimes so you can conduct comprehensive performance testing.
CoPTS currently supports the following benchmarks:
- Bonnie++ (-b)
- Linpack (-l)
- Noploop (-n)
- Stream (-a)
- Sysbench (-s)
- Unixbench (-u)
- Y-Cruncher (-y)
You can execute these benchmarks as Docker containers on OCI-compatible runtimes, including:
- runc (Docker)
- runsc (gVisor)
- crun
- Kata 1.0
- runnc (Nabla)
CoPTS is highly configurable, allowing you to specify:
- The benchmark you want to run.
- The distribution of benchmark runs.
- The container runtime you wish to run on.
CoPTS provides two methods for distributing benchmark runs:
- Parallel: multiple processes create and execute containers in parallel on the same host.
- Serial: each process launches a configurable number of containers one after another.
In addition, you can fine-tune the number of benchmark runs performed within each container to suit your testing needs; see the example below.
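For instance, the following hypothetical invocation (the full option syntax is described later) runs the Noploop benchmark on runc with 4 parallel processes, each serially launching 2 containers, with 10 benchmark runs per container:
./runBenchmarksloop -t "runc" -n "4 2 10"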
One of the core strengths of CoPTS is its extensible design. You can easily add new benchmarks or container runtimes to expand its capabilities and tailor it to your specific requirements. This makes CoPTS a flexible tool that can adapt to your evolving performance testing needs.
CoPTS doesn't just run benchmarks; it also aggregates the key performance indicators they produce into a tabular format, enhancing readability and simplifying analysis. For each run, CoPTS generates x+1 CSV files, where x is the number of parallel processes:
- One file from each of the 'x' parallel processes, aggregating all benchmark iterations within it.
- One file that aggregates the runs from all the parallel processes.
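Because the output is plain CSV, the results can be inspected with standard command-line tools. As a minimal sketch, assuming a hypothetical aggregate file name and that the metric of interest is in the second column, the mean across all runs could be computed with awk:
# Hypothetical file and column; adjust to the CSVs CoPTS actually emits.
awk -F, 'NR > 1 { sum += $2; n++ } END { if (n) printf "mean: %.2f\n", sum / n }' aggregate.csv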
To use CoPTS, you can invoke it with the following syntax:
./runBenchmarksloop -t "runtime1" -l "x y z" -t "runtime2" -n "x y z" -u "x y z" -y "x y z" -b "x y z" -s "x y z isCpu"
Where the options are as follows:
- -e: Remove all images/instances and exit.
- -r: Rebuild all container images for benchmarks mentioned after.
- -t: Define the runtime for the benchmarks.
- -b: Run the Bonnie++ benchmark with repetitions specified as 'x y z'.
- -l: Run the Linpack benchmark with repetitions specified as 'x y z'.
- -n: Run the Noploop benchmark with repetitions specified as 'x y z'.
- -a: Run the STREAM benchmark with repetitions specified as 'x y z'.
- -s: Run the Sysbench benchmark with repetitions specified as 'x y z isCpu'.
- -u: Run the Unixbench benchmark with repetitions specified as 'x y z'.
- -y: Run the Y-Cruncher benchmark with repetitions specified as 'x y z'.
For Sysbench, an extra argument, isCpu, is required, where 'true' will run CPU tests and 'false' will run memory tests.
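For example, the following hypothetical invocations run Sysbench on runc with 10 parallel processes, each launching 2 containers with 5 runs, first in CPU mode and then in memory mode:
./runBenchmarksloop -t "runc" -s "10 2 5 true"
./runBenchmarksloop -t "runc" -s "10 2 5 false"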
As an example, you can run the following command to execute Linpack and Sysbench benchmarks with CoPTS:
./runBenchmarksloop -t "runc" -l "40 1 50" -t "runsc" -s "30 20 20 true"
This command will run Linpack on runc with 40 parallel processes, each launching 1 container with 50 runs, and Sysbench on runsc (gVisor) with 30 parallel processes, each launching 20 containers with 20 CPU test runs each.
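The -r and -e options combine with the flags above in the same way. As a hypothetical example, the first invocation below rebuilds the Linpack image before running it, and the second removes all images and instances:
./runBenchmarksloop -r -t "runc" -l "10 1 20"
./runBenchmarksloop -e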
We welcome contributions to CoPTS. If you have new benchmarks or container runtimes to add, bug fixes, or feature enhancements, please open an issue or submit a pull request.
Disclaimer: CoPTS is a powerful tool for conducting containerized benchmarking and performance testing. Please use it responsibly and consider the impact on your system's resources.