Eventually, this suite will include benchmarks against a variety of models in the swift-models repository. The following benchmarks have been implemented so far:
- Training LeNet against the MNIST dataset
- Performing inference with LeNet using MNIST-sized random images
These benchmarks should provide a baseline for judging performance improvements and regressions in Swift for TensorFlow; a rough sketch of what they exercise follows below.
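As an illustration only, the snippet below builds LeNet from the `ImageClassificationModels` module, runs a single training step, and then an inference-only forward pass. This is a minimal sketch, not the benchmark implementation: random MNIST-shaped tensors stand in for the real MNIST loader so the example stays self-contained, and the batch size is arbitrary rather than the benchmark's actual configuration.

```swift
import TensorFlow
import ImageClassificationModels

// Random MNIST-shaped data (28x28 grayscale images) in place of the real
// dataset loader; the batch size here is arbitrary, not the benchmark's.
let batchSize = 128
let images = Tensor<Float>(randomNormal: [batchSize, 28, 28, 1])
let labels = Tensor<Int32>(zeros: [batchSize])

var model = LeNet()
let optimizer = SGD(for: model, learningRate: 0.1)

// One training step: forward pass, softmax cross-entropy loss,
// backward pass, and a parameter update.
let (loss, gradients) = valueWithGradient(at: model) { model -> Tensor<Float> in
    softmaxCrossEntropy(logits: model(images), labels: labels)
}
optimizer.update(&model, along: gradients)
print("training-step loss: \(loss)")

// Inference-only forward pass over the same MNIST-sized random images.
Context.local.learningPhase = .inference
let predictions = model(images)
print("predictions shape: \(predictions.shape)")
```

The actual benchmarks use the MNIST dataset and fixed iteration counts; see the benchmark sources in swift-models for the authoritative setup.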
To begin, you'll need the latest version of Swift for TensorFlow installed. Make sure you've added the correct version of `swift` to your path.
To run all benchmarks, type the following while in the swift-models directory:
```
swift run -c release Benchmarks
```
To run an individual benchmark, use `--filter`:

```
swift run -c release Benchmarks --filter <name>
```
For more compact output, you can explicitly specify a subset of columns to display:

```
swift run -c release Benchmarks --columns name,median,std
```
To list all benchmarks, run them with 0 iterations:

```
swift run -c release Benchmarks --iterations 0 --warmup-iterations 0 --columns name
```