Merge pull request #65 from onnx/canary
Release v1.0.0 from canary
jeremyfowers authored Dec 6, 2023
2 parents 3ceda8a + 6aae9b5 commit 0df8c8e
Showing 43 changed files with 760 additions and 1,203 deletions.
9 changes: 8 additions & 1 deletion .github/workflows/publish-to-test-pypi.yml
@@ -5,6 +5,7 @@ on:
branches: ["main", "canary"]
tags:
- v*
- RC*
pull_request:
branches: ["main", "canary"]

@@ -33,7 +34,13 @@ jobs:
models=$(turnkey models location --quiet)
turnkey $models/selftest/linear.py
- name: Publish distribution package to PyPI
if: startsWith(github.ref, 'refs/tags')
if: startsWith(github.ref, 'refs/tags/v')
uses: pypa/gh-action-pypi-publish@release/v1
with:
password: ${{ secrets.PYPI_API_TOKEN }}
- name: Publish distribution package to Test PyPI
if: startsWith(github.ref, 'refs/tags/RC')
uses: pypa/gh-action-pypi-publish@release/v1
with:
password: ${{ secrets.TEST_PYPI_API_TOKEN }}
repository_url: https://test.pypi.org/legacy/
3 changes: 0 additions & 3 deletions .github/workflows/test_turnkey.yml
@@ -53,8 +53,6 @@ jobs:
# turnkey examples
# Note: we clear the default cache location prior to each example run
rm -rf ~/.cache/turnkey
python examples/model_api/hello_world.py
rm -rf ~/.cache/turnkey
python examples/files_api/onnx_opset.py --onnx-opset 15
rm -rf ~/.cache/turnkey
turnkey examples/cli/scripts/hello_world.py
@@ -71,7 +69,6 @@ jobs:
cd test/
python cli.py
python analysis.py
python model_api.py
- name: Test example plugins
shell: bash -el {0}
run: |
2 changes: 1 addition & 1 deletion README.md
@@ -8,7 +8,7 @@

We are on a mission to understand and use as many models as possible while leveraging the right toolchain and AI hardware for the job in every scenario.

Evaluating a deep learning model with a familiar toolchain and hardware accelerator is pretty straightforward. Scaling these evaluations to get apples-to-applies insights across a landscape of millions of permutations of models, toolchains, and hardware targets is not straightforward. Not without help, anyways.
Evaluating a deep learning model with a familiar toolchain and hardware accelerator is pretty straightforward. Scaling these evaluations to get apples-to-apples insights across a landscape of millions of permutations of models, toolchains, and hardware targets is not straightforward. Not without help, anyways.

TurnkeyML is a *tools framework* that integrates models, toolchains, and hardware backends to make evaluation and actuation of this landscape as simple as turning a key.

11 changes: 5 additions & 6 deletions docs/code.md
@@ -11,8 +11,8 @@ The TurnkeyML source code has a few major top-level directories:
- `models`: the corpora of models that makes up the TurnkeyML models (see [the models readme](https://github.com/onnx/turnkeyml/blob/main/models/readme.md)).
- Each subdirectory under `models` represents a corpus of models pulled from somewhere on the internet. For example, `models/torch_hub` is a corpus of models from [Torch Hub](https://github.com/pytorch/hub).
- `src/turnkey`: source code for the TurnkeyML tools (see [Benchmarking Tools](#benchmarking-tools) for a description of how the code is used).
- `src/turnkeyml/analyze`: functions for profiling a model script, discovering model instances, and invoking `benchmark_model()` on those instances.
- `src/turnkeyml/run`: implements the runtime and device plugin APIs and the built-in runtimes and devices.
- `src/turnkeyml/analyze`: functions for profiling a model script, discovering model instances, and invoking `build_model()` and/or `BaseRT.benchmark()` on those instances.
- `src/turnkeyml/run`: implements `BaseRT`, an abstract base class that defines TurnkeyML's vendor-agnostic benchmarking functionality. This module also includes the runtime and device plugin APIs and the built-in runtimes and devices.
- `src/turnkeyml/cli`: implements the `turnkey` CLI and reporting tool.
- `src/turnkeyml/common`: functions common to the other modules.
- `src/turnkeyml/version.py`: defines the package version number.
@@ -29,10 +29,9 @@ TurnkeyML provides two main tools, the `turnkey` CLI and benchmarking APIs. Inst
1. The default command for `turnkey` CLI runs the `benchmark_files()` API, which is implemented in [files_api.py](https://github.com/onnx/turnkeyml/blob/main/src/turnkeyml/files_api.py).
- Other CLI commands are also implemented in `cli/`, for example the `report` command is implemented in `cli/report.py`.
1. The `benchmark_files()` API takes in a set of scripts, each of which should invoke at least one model instance, to evaluate and passes each into the `evaluate_script()` function for analysis, which is implemented in [analyze/script.py](https://github.com/onnx/turnkeyml/blob/main/src/turnkeyml/analyze/script.py).
1. `evaluate_script()` uses a profiler to discover the model instances in the script, and passes each into the `benchmark_model()` API, which is defined in [model_api.py](https://github.com/onnx/turnkeyml/blob/main/src/turnkeyml/model_api.py).
1. The `benchmark_model()` API prepares the model for benchmarking (e.g., exporting and optimizing an ONNX file), which creates an instance of a `*Model` class, where `*` can be CPU, GPU, etc. The `*Model` classes are defined in [run/](https://github.com/onnx/turnkeyml/blob/main/src/turnkeyml/run/).
1. The `*Model` classes provide a `.benchmark()` method that benchmarks the model on the device and returns an instance of the `MeasuredPerformance` class, which includes the performance statistics acquired during benchmarking.
1. `benchmark_model()` and the `*Model` classes are built using [`build_model()`](#model-build-tool)
1. `evaluate_script()` uses a profiler to discover the model instances in the script, and passes each into the `build_model()` API, which is defined in [build_api.py](https://github.com/onnx/turnkeyml/blob/main/src/turnkeyml/build_api.py).
1. The `build_model()` API prepares the model for benchmarking (e.g., exporting and optimizing an ONNX file).
1. `evaluate_script()` passes the build into `BaseRT.benchmark()` to benchmark the model on the device and return an instance of the `MeasuredPerformance` class, which includes the performance statistics acquired during benchmarking.
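
For orientation, a minimal sketch of the flow above, assuming `build_model()` accepts the model and its example inputs as keyword arguments; the argument names and the commented-out runtime usage are illustrative, not exact signatures:

```python
# Illustrative sketch only: the discover -> build -> benchmark flow described above.
# Argument names are assumptions, not the exact turnkeyml signatures.
import torch
from turnkeyml import build_model


class DiscoveredModel(torch.nn.Module):
    """Stands in for a model instance discovered by evaluate_script()."""

    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(10, 5)

    def forward(self, x):
        return self.fc(x)


# build_model() prepares the model for benchmarking (e.g., ONNX export and optimization)
state = build_model(model=DiscoveredModel(), inputs={"x": torch.rand(1, 10)})

# A concrete BaseRT subclass (provided by the selected runtime plugin) would then
# benchmark the build and return a MeasuredPerformance instance, e.g.:
# runtime = SomeRuntime(...)   # hypothetical BaseRT subclass
# perf = runtime.benchmark()   # returns MeasuredPerformance
```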

# Model Build Tool

46 changes: 45 additions & 1 deletion docs/contribute.md
@@ -14,6 +14,7 @@ The guidelines document is organized as the following sections:
- [Pull Requests](#pull-requests)
- [Testing](#testing)
- [Versioning](#versioning)
- [Public APIs](#public-apis)


## Contributing a model
@@ -87,7 +88,7 @@ To add a runtime to a plugin:
- `"RuntimeClass": <class_name>`, where `<class_name>` is a unique name for a Python class that inherits `BaseRT` and implements the runtime.
- For example, `"RuntimeClass": ExampleRT` implements the `example` runtime.
- The interface for the runtime class is defined in [Runtime Class](#runtime-class) below.
- (Optional) `"status_stats": List[str]`: a list of keys from the build stats that should be printed out at the end of benchmarking in the CLI's `Status` output. These keys, and corresponding values, must be set in the runtime class using `self.stats.add_build_stat(key, value)`.
- (Optional) `"status_stats": List[str]`: a list of keys from the build stats that should be printed out at the end of benchmarking in the CLI's `Status` output. These keys, and corresponding values, must be set in the runtime class using `self.stats.save_model_eval_stat(key, value)`.
- (Optional) `"requirement_check": Callable`: a callable that runs before each benchmark. This may be used to check whether the device selected is available and functional before each benchmarking run. Exceptions raised during this callable will halt the benchmark of all selected files.

1. Populate the package with the following files (see [Plugin Directory Layout](#plugin-directory-layout)):
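
To make these fields concrete, here is a hedged sketch of a runtime plugin's registration dictionary; the nesting under a `runtimes` key and the example stat key are assumptions, and `ExampleRT` stands in for a real `BaseRT` subclass:

```python
# Illustrative sketch only: a plugin registration using the fields described above.
# The "runtimes" nesting and the stat key are assumptions, not the exact schema.


class ExampleRT:  # in a real plugin this class inherits turnkeyml's BaseRT
    pass


def example_device_available() -> bool:
    # "requirement_check" runs before each benchmark; an exception raised here
    # halts the benchmark of all selected files
    return True


implements = {
    "runtimes": {
        "example": {
            "RuntimeClass": ExampleRT,
            # keys that ExampleRT sets via self.stats.save_model_eval_stat(...)
            "status_stats": ["example_latency_ms"],
            "requirement_check": example_device_available,
        }
    }
}
```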
Expand Down Expand Up @@ -225,3 +226,46 @@ We don't have any fancy testing framework set up yet. If you want to run tests l
## Versioning

We use semantic versioning, as described in [versioning.md](https://github.com/onnx/turnkeyml/blob/main/docs/versioning.md).

## Public APIs

The following public APIs are available for developers. The maintainers aspire to change these as infrequently as possible, and doing so will require an update to the package's major version number.

- From the top-level `__init__.py`:
  - `turnkeycli`: the `main()` function of the `turnkey` CLI
  - `benchmark_files()`: the top-level API called by the CLI's `benchmark` command
  - `build_model()`: API for building a model with a Sequence
  - `load_state()`: API for loading the state of a previous build
  - `turnkeyml.version`: the package version number
- From the `run` module:
  - The `BaseRT` class: abstract base class used in all runtime plugins
- From the `common.filesystem` module:
  - `get_available_builds()`: list the builds in a turnkey cache
  - `make_cache_dir()`: create a turnkey cache
  - `MODELS_DIR`: the location of turnkey's model corpus on disk
  - `Stats`: handle for saving and reading evaluation statistics
  - `Keys`: reserved keys in the evaluation statistics
- From the `common.printing` module:
  - `log_info()`: print an info statement in the style of the turnkey APIs/CLIs
  - `log_warning()`: print a warning statement in the style of the turnkey APIs/CLIs
  - `log_error()`: print an error statement in the style of the turnkey APIs/CLIs
- From the `build.export` module:
  - `onnx_dir()`: location on disk of a build's ONNX files
  - `ExportPlaceholder(Stage)`: build Stage for exporting models to ONNX
  - `OptimizeOnnxModel(Stage)`: build Stage for using ONNX Runtime to optimize an ONNX model
  - `ConvertOnnxToFp16(Stage)`: build Stage for using ONNX ML Tools to downcast an ONNX model to fp16
- From the `build.stage` module:
  - The `Sequence` class: ordered collection of build Stages that define a build flow
  - The `Stage` class: abstract base class that is used to define a model-to-model transformation
- From the `common.build` module:
  - The `State` class: data structure that holds the inputs, outputs, and intermediate values for a Sequence
- From the `common.exceptions` module:
  - `StageError`: exception raised when something goes wrong during a Stage
  - `ModelRuntimeError`: exception raised when something goes wrong running a model in hardware
- From the `run.plugin_helpers` module, everything:
  - `get_python_path()`: returns the Python executable
  - `run_subprocess()`: execute a command in a subprocess
  - `logged_subprocess()`: execute a command in a subprocess while capturing all terminal outputs to a file
  - `CondaError`: exception raised when something goes wrong in a Conda environment created by TurnkeyML
  - `SubprocessError`: exception raised when something goes wrong in a subprocess created by TurnkeyML
  - `HardwareError`: exception raised when something goes wrong in hardware managed by TurnkeyML
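
As a rough usage sketch (not an exact example), a developer might combine a few of these APIs as follows; the `get_available_builds()` argument and the cache path are assumptions, the latter mirroring the default `~/.cache/turnkey` location used in the CI workflows:

```python
# Illustrative sketch only: combining a few of the public APIs listed above.
# Exact signatures are assumptions; consult the modules for the real interfaces.
import os

from turnkeyml.common import filesystem, printing

# Default cache location used elsewhere in this repository (assumed here)
cache_dir = os.path.expanduser("~/.cache/turnkey")

builds = filesystem.get_available_builds(cache_dir)
if builds:
    printing.log_info(f"Found {len(builds)} builds in {cache_dir}")
else:
    printing.log_warning(f"No builds found in {cache_dir}")
```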
1 change: 0 additions & 1 deletion docs/coverage.md
@@ -41,7 +41,6 @@ Name Stmts Miss Branch BrPart Cover Mi
--------------------------------------------------------------------------------------------------------
\turnkeyml\build\__init__.py 0 0 0 0 100%
\turnkeyml\build\onnx_helpers.py 70 34 28 2 45% 15-21, 28-87, 92, 95-100
\turnkeyml\build\quantization_helpers.py 29 20 18 0 19% 13-30, 35, 50-78
\turnkeyml\build\sequences.py 15 1 8 2 87% 62->61, 65
\turnkeyml\build\tensor_helpers.py 47 26 34 4 41% 17-44, 57, 61, 63-74, 78
\turnkeyml\build_api.py 31 9 8 3 64% 68-71, 120-125, 140-147
2 changes: 1 addition & 1 deletion docs/readme.md
@@ -3,7 +3,7 @@
This directory contains documentation for the TurnkeyML project:
- [code.md](https://github.com/onnx/turnkeyml/blob/main/docs/code.md): Code organization for the benchmark and tools.
- [install.md](https://github.com/onnx/turnkeyml/blob/main/docs/install.md): Installation instructions for the tools.
- [tools_user_guide.md](https://github.com/onnx/turnkeyml/blob/main/docs/tools_user_guide.md): User guide for the tools: `turnkey` CLI, `benchmark_files()`, and `benchmark_model()`.
- [tools_user_guide.md](https://github.com/onnx/turnkeyml/blob/main/docs/tools_user_guide.md): User guide for the tools: the `turnkey` CLI and the `benchmark_files()` and `build_model()` APIs.
- [versioning.md](https://github.com/onnx/turnkeyml/blob/main/docs/versioning.md): Defines the semantic versioning rules for the `turnkey` package.

There is more useful documentation available in:
82 changes: 82 additions & 0 deletions docs/release_notes.md
@@ -0,0 +1,82 @@
# Release Notes

This document tracks the major changes in each package release of TurnkeyML.

We are tracking two types of major changes:
- New features that enhance the user and developer experience
- Breaking changes to the CLI or public APIs

If you are creating the release notes for a new version, please see the [template](#template-version-majorminorpatch). Release notes should capture all of the significant changes since the last numbered package release.

# Version 1.0.0

This version focuses on cleaning up technical debt, and most of the changes are not visible to users. It removes cumbersome requirements for developers, removes unused features to streamline the codebase, and clarifies some API naming schemes.

Users, however, will enjoy improved fidelity in their reporting telemetry thanks to the streamlined code.

## Users

### User Improvements

Improvements to the information in `turnkey_stats.yaml` and report CSVs:

- Now reports all model labels, including, but not limited to, the model's OSS license.
- `build_status` and `benchmark_status` now accurately report the status of their respective toolchain phases.
  - Previously, `benchmark_status` was a superset of the status of both build and benchmark.
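
As an illustration, these statuses could be read straight out of a build's `turnkey_stats.yaml`; the file layout beyond the key names mentioned in these notes is an assumption:

```python
# Illustrative sketch only: reading the per-evaluation statuses described above.
# The layout of turnkey_stats.yaml beyond these key names is an assumption.
import yaml

with open("turnkey_stats.yaml") as f:
    stats = yaml.safe_load(f)

for evaluation_id, evaluation in stats.get("evaluations", {}).items():
    print(evaluation_id, evaluation.get("build_status"), evaluation.get("benchmark_status"))
```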

### User Breaking Changes

None.

## Developers

### Developer Improvements

- Build success has been conceptually reworked for Stages/Sequences such that the `SetSuccess` Stage is no longer required at the end of every Sequence.
  - Previously, `build_model()` would only return a `State` object if `state.build_status == successful_build`, which in turn had to be set manually in a Stage.
  - Now, if a Sequence finishes, the underlying toolflow automatically sets `state.build_status = successful_build` on your behalf.

### Developer Breaking Changes

- The `benchmark_model()` API has been removed as there were no known users / use cases. Anyone who wants to run standalone benchmarking can still instantiate any `BaseRT` child class and call `BaseRT.benchmark()`.
- The APIs for saving and loading labels `.txt` files in the cache have been removed since no code was using those APIs. Labels are now saved into `turnkey_stats.yaml` instead.
- The `quantization_samples` argument to the `build_model()` API has been removed.
- The naming scheme of the members of `Stats` has been adjusted for consistency. It used to refer to both builds and benchmarks as "builds", whereas now it uses "evaluations" as a superset of the two.
  - `Stats.add_build_stat()` is now `Stats.save_model_eval_stat()`.
  - `Stats.add_build_sub_stat()` is now `Stats.save_model_eval_sub_stat()`.
  - `Stats.stat_id` is now `Stats.evaluation_id`.
  - The `builds` section of the stats/reports is now `evaluations`.
  - `Stats.save_stat()` is now `Stats.save_model_stat()`.
  - `Stats.build_stats` is now `Stats.evaluation_stats`.
- The `SetSuccess` build stage has been removed because build success has been reworked (see improvements).
- The `logged_subprocess()` API has been moved from the `common.build` module to the `run.plugin_helpers` module.
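
A before/after sketch of the `Stats` rename for plugin authors; the constructor arguments and the stat key are illustrative:

```python
# Illustrative sketch only: migrating to the renamed Stats members listed above.
# Constructor arguments and the stat key are assumptions.
from turnkeyml.common.filesystem import Stats

stats = Stats(cache_dir="~/.cache/turnkey", build_name="my_build")  # illustrative args

# Before v1.0.0:
# stats.add_build_stat("mean_latency_ms", 1.23)

# v1.0.0 and later:
stats.save_model_eval_stat("mean_latency_ms", 1.23)
```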

# Version 0.3.0

This version was used to initialize the repository.

# Template: Version Major.Minor.Patch

Headline statement.



## Users

### User Improvements

List of enhancements specific to users of the tools.

### User Breaking Changes

List of breaking changes specific to users of the tools.

## Developers

### Developer Improvements

List of enhancements specific to developers who build on the tools.

### Developer Breaking Changes

List of breaking changes specific to developers who build on the tools.
