Updated docs
ctuning-admin committed Feb 3, 2024
1 parent dc9cebf commit f2f68a2
Showing 4 changed files with 33 additions and 37 deletions.
7 changes: 5 additions & 2 deletions cm-mlops/script/get-ml-model-gptj/README.md
@@ -110,13 +110,14 @@ ___
* `_pytorch,fp32`
- Environment variables:
- *CM_DOWNLOAD_EXTRA_OPTIONS*: ` --output-document checkpoint.zip`
- *CM_DOWNLOAD_FILENAME*: `checkpoint.zip`
- *CM_UNZIP*: `yes`
- - *CM_DOWNLOAD_CHECKSUM_NOT_USED*: `e677e28aaf03da84584bb3073b7ee315`
- *CM_PACKAGE_URL*: `https://cloud.mlcommons.org/index.php/s/QAZ2oM94MkFtbQx/download`
- - *CM_RCLONE_CONFIG*: `rclone config create mlc-inference s3 provider=LyveCloud access_key_id=0LITLNQMHZALM5AK secret_access_key=YQKYTMBY23TMZHLOYFJKL5CHHS0CWYUC endpoint=s3.us-east-1.lyvecloud.seagate.com`
+ - *CM_RCLONE_CONFIG_CMD*: `rclone config create mlc-inference s3 provider=LyveCloud access_key_id=0LITLNQMHZALM5AK secret_access_key=YQKYTMBY23TMZHLOYFJKL5CHHS0CWYUC endpoint=s3.us-east-1.lyvecloud.seagate.com`
- *CM_RCLONE_URL*: `mlc-inference:mlcommons-inference-wg-s3/gpt-j`
- Workflow:
+ * `_pytorch,fp32,wget`
+   - Workflow:
* `_pytorch,int4,intel`
- Workflow:
* `_pytorch,int8,intel`
@@ -155,11 +156,13 @@ ___

* `_rclone`
- Environment variables:
+ - *CM_DOWNLOAD_FILENAME*: `checkpoint`
- *CM_DOWNLOAD_URL*: `<<<CM_RCLONE_URL>>>`
- Workflow:
* **`_wget`** (default)
- Environment variables:
- *CM_DOWNLOAD_URL*: `<<<CM_PACKAGE_URL>>>`
+ - *CM_DOWNLOAD_FILENAME*: `checkpoint.zip`
- Workflow:

</details>
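The `_rclone` variant above pairs `CM_RCLONE_CONFIG_CMD` (one-time remote setup) with `CM_RCLONE_URL` (the remote path). A minimal Python sketch of how a download helper might combine these variables — illustrative only, not CM's actual download script; the `-P` progress flag and the local-target handling are assumptions:

```python
def build_rclone_commands(env):
    """Build the (setup, copy) shell command strings for an rclone download."""
    config_cmd = env["CM_RCLONE_CONFIG_CMD"]        # one-time remote definition
    remote_path = env["CM_RCLONE_URL"]              # e.g. remote:bucket/prefix
    target = env.get("CM_DOWNLOAD_FILENAME", ".")   # local destination
    copy_cmd = f"rclone copy {remote_path} {target} -P"
    return config_cmd, copy_cmd

# Values taken from the variant table above (credentials elided).
env = {
    "CM_RCLONE_CONFIG_CMD": "rclone config create mlc-inference s3 provider=LyveCloud endpoint=s3.us-east-1.lyvecloud.seagate.com",
    "CM_RCLONE_URL": "mlc-inference:mlcommons-inference-wg-s3/gpt-j",
    "CM_DOWNLOAD_FILENAME": "checkpoint",
}
setup, copy = build_rclone_commands(env)
print(copy)  # rclone copy mlc-inference:mlcommons-inference-wg-s3/gpt-j checkpoint -P
```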
28 changes: 14 additions & 14 deletions cm-mlops/script/get-ml-model-resnet50/README.md
@@ -136,15 +136,15 @@ ___
- *CM_ML_MODEL_FRAMEWORK*: `onnx`
- *CM_ML_MODEL_INPUT_LAYERS*: `input_tensor`
- *CM_ML_MODEL_INPUT_LAYER_NAME*: `input_tensor`
- - *CM_ML_MODEL_OUTPUT_LAYERS*: `softmax_tensor`
- *CM_ML_MODEL_INPUT_SHAPES*: `\"input_tensor\": (BATCH_SIZE, 224, 224, 3)`
+ - *CM_ML_MODEL_OUTPUT_LAYERS*: `softmax_tensor`
- *CM_ML_MODEL_OUTPUT_LAYER_NAME*: `softmax_tensor`
- *CM_ML_MODEL_STARTING_WEIGHTS_FILENAME*: `https://zenodo.org/record/2535873/files/resnet50_v1.pb`
- Workflow:
* `_onnx,from-tf,fp32`
- Environment variables:
- - *CM_PACKAGE_URL*: `https://drive.google.com/uc?id=15wZ_8Vt12cb10IEBsln8wksD1zGwlbOM`
- *CM_DOWNLOAD_FILENAME*: `resnet50_v1_modified.onnx`
+ - *CM_PACKAGE_URL*: `https://drive.google.com/uc?id=15wZ_8Vt12cb10IEBsln8wksD1zGwlbOM`
- Workflow:
* `_onnx,opset-11`
- Environment variables:
@@ -172,25 +172,25 @@ ___
- CM script: [get-generic-python-lib](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/get-generic-python-lib)
* `_tflite,argmax`
- Environment variables:
- - *CM_ML_MODEL_INPUT_SHAPES*: `\"input_tensor 2\": (BATCH_SIZE, 224, 224, 3)`
- - *CM_PACKAGE_URL*: `https://www.dropbox.com/s/cvv2zlfo80h54uz/resnet50_v1.tflite.gz?dl=1`
- *CM_DAE_EXTRACT_DOWNLOADED*: `yes`
- - *CM_ML_MODEL_FILE*: `resnet50_v1.tflite`
- - *CM_EXTRACT_FINAL_ENV_NAME*: `CM_ML_MODEL_FILE_WITH_PATH`
- *CM_DOWNLOAD_FINAL_ENV_NAME*: ``
+ - *CM_EXTRACT_FINAL_ENV_NAME*: `CM_ML_MODEL_FILE_WITH_PATH`
+ - *CM_ML_MODEL_FILE*: `resnet50_v1.tflite`
+ - *CM_ML_MODEL_INPUT_SHAPES*: `\"input_tensor 2\": (BATCH_SIZE, 224, 224, 3)`
+ - *CM_PACKAGE_URL*: `https://www.dropbox.com/s/cvv2zlfo80h54uz/resnet50_v1.tflite.gz?dl=1`
- Workflow:
* `_tflite,int8,no-argmax`
- Environment variables:
+ - *CM_DOWNLOAD_FINAL_ENV_NAME*: `CM_ML_MODEL_FILE_WITH_PATH`
+ - *CM_ML_MODEL_FILE*: `resnet50_quant_full_mlperf_edgetpu.tflite`
- *CM_ML_MODEL_INPUT_SHAPES*: `\"input_tensor 2\": (BATCH_SIZE, 224, 224, 3)`
- *CM_PACKAGE_URL*: `https://zenodo.org/record/8234946/files/resnet50_quant_full_mlperf_edgetpu.tflite?download=1`
- - *CM_ML_MODEL_FILE*: `resnet50_quant_full_mlperf_edgetpu.tflite`
- - *CM_DOWNLOAD_FINAL_ENV_NAME*: `CM_ML_MODEL_FILE_WITH_PATH`
- Workflow:
* `_tflite,no-argmax`
- Environment variables:
+ - *CM_ML_MODEL_FILE*: `resnet50_v1.no-argmax.tflite`
- *CM_ML_MODEL_INPUT_SHAPES*: `\"input_tensor 2\": (BATCH_SIZE, 224, 224, 3)`
- *CM_PACKAGE_URL*: `https://www.dropbox.com/s/vhuqo0wc39lky0a/resnet50_v1.no-argmax.tflite?dl=1`
- - *CM_ML_MODEL_FILE*: `resnet50_v1.no-argmax.tflite`
- Workflow:

</details>
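Several values in these tables reference other variables with the `<<<NAME>>>` syntax (e.g. `CM_ML_MODEL_STARTING_WEIGHTS_FILENAME: <<<CM_PACKAGE_URL>>>`). A toy sketch of that substitution — the assumed semantics, not CM's actual resolver:

```python
import re

def expand_placeholders(value, env):
    """Replace every <<<NAME>>> in value with env[NAME] (empty string if unset)."""
    return re.sub(r"<<<(\w+)>>>", lambda m: env.get(m.group(1), ""), value)

env = {"CM_PACKAGE_URL": "https://zenodo.org/record/2535873/files/resnet50_v1.pb"}
url = expand_placeholders("<<<CM_PACKAGE_URL>>>", env)
print(url)  # https://zenodo.org/record/2535873/files/resnet50_v1.pb
```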
@@ -211,8 +211,8 @@ ___
- *CM_ML_MODEL_FRAMEWORK*: `onnx`
- *CM_ML_MODEL_INPUT_LAYERS*: `input_tensor:0`
- *CM_ML_MODEL_INPUT_LAYER_NAME*: `input_tensor:0`
- - *CM_ML_MODEL_OUTPUT_LAYERS*: `softmax_tensor:0`
- *CM_ML_MODEL_INPUT_SHAPES*: `\"input_tensor:0\": (BATCH_SIZE, 3, 224, 224)`
+ - *CM_ML_MODEL_OUTPUT_LAYERS*: `softmax_tensor:0`
- *CM_ML_MODEL_OUTPUT_LAYER_NAME*: `softmax_tensor:0`
- *CM_ML_MODEL_STARTING_WEIGHTS_FILENAME*: `<<<CM_PACKAGE_URL>>>`
- *CM_ML_MODEL_VER*: `1.5`
@@ -221,23 +221,23 @@ ___
- Environment variables:
- *CM_ML_MODEL_DATA_LAYOUT*: `NCHW`
- *CM_ML_MODEL_FRAMEWORK*: `pytorch`
+ - *CM_ML_MODEL_GIVEN_CHANNEL_MEANS*: `?`
- *CM_ML_MODEL_INPUT_LAYER_NAME*: `input_tensor:0`
+ - *CM_ML_MODEL_INPUT_SHAPES*: `\"input_tensor:0\": [BATCH_SIZE, 3, 224, 224]`
- *CM_ML_MODEL_OUTPUT_LAYERS*: `output`
- *CM_ML_MODEL_OUTPUT_LAYER_NAME*: `?`
- - *CM_ML_MODEL_INPUT_SHAPES*: `\"input_tensor:0\": [BATCH_SIZE, 3, 224, 224]`
- - *CM_ML_MODEL_GIVEN_CHANNEL_MEANS*: `?`
- *CM_ML_STARTING_WEIGHTS_FILENAME*: `<<<CM_PACKAGE_URL>>>`
- Workflow:
* `_tensorflow`
- Aliases: `_tf`
- Environment variables:
- - *CM_ML_MODEL_INPUT_SHAPES*: `\"input_tensor:0\": (BATCH_SIZE, 3, 224, 224)`
- *CM_ML_MODEL_ACCURACY*: `76.456`
- *CM_ML_MODEL_DATA_LAYOUT*: `NHWC`
- *CM_ML_MODEL_FRAMEWORK*: `tensorflow`
- *CM_ML_MODEL_GIVEN_CHANNEL_MEANS*: `123.68 116.78 103.94`
- *CM_ML_MODEL_INPUT_LAYERS*: `input_tensor`
- *CM_ML_MODEL_INPUT_LAYER_NAME*: `input_tensor`
+ - *CM_ML_MODEL_INPUT_SHAPES*: `\"input_tensor:0\": (BATCH_SIZE, 3, 224, 224)`
- *CM_ML_MODEL_NORMALIZE_DATA*: `0`
- *CM_ML_MODEL_OUTPUT_LAYERS*: `softmax_tensor`
- *CM_ML_MODEL_OUTPUT_LAYER_NAME*: `softmax_tensor`
@@ -247,13 +247,13 @@ ___
- Workflow:
* `_tflite`
- Environment variables:
- - *CM_ML_MODEL_INPUT_SHAPES*: `\"input_tensor 2\": (BATCH_SIZE, 224, 224, 3)`
- *CM_ML_MODEL_ACCURACY*: `76.456`
- *CM_ML_MODEL_DATA_LAYOUT*: `NHWC`
- *CM_ML_MODEL_FRAMEWORK*: `tflite`
- *CM_ML_MODEL_GIVEN_CHANNEL_MEANS*: `123.68 116.78 103.94`
- *CM_ML_MODEL_INPUT_LAYERS*: `input_tensor`
- *CM_ML_MODEL_INPUT_LAYER_NAME*: `input_tensor`
+ - *CM_ML_MODEL_INPUT_SHAPES*: `\"input_tensor 2\": (BATCH_SIZE, 224, 224, 3)`
- *CM_ML_MODEL_NORMALIZE_DATA*: `0`
- *CM_ML_MODEL_OUTPUT_LAYERS*: `softmax_tensor`
- *CM_ML_MODEL_OUTPUT_LAYER_NAME*: `softmax_tensor`
2 changes: 1 addition & 1 deletion cm-mlops/script/get-ml-model-stable-diffusion/README.md
@@ -145,7 +145,7 @@ ___
- Workflow:
* `_rclone`
- Environment variables:
- - *CM_RCLONE_CONFIG*: `rclone config create mlc-inference s3 provider=LyveCloud access_key_id=0LITLNQMHZALM5AK secret_access_key=YQKYTMBY23TMZHLOYFJKL5CHHS0CWYUC endpoint=s3.us-east-1.lyvecloud.seagate.com`
+ - *CM_RCLONE_CONFIG_CMD*: `rclone config create mlc-inference s3 provider=LyveCloud access_key_id=0LITLNQMHZALM5AK secret_access_key=YQKYTMBY23TMZHLOYFJKL5CHHS0CWYUC endpoint=s3.us-east-1.lyvecloud.seagate.com`
- *CM_DOWNLOAD_TOOL*: `rclone`
- Workflow:
* `_wget`
33 changes: 13 additions & 20 deletions cm-mlops/script/run-mlperf-inference-app/README.md
@@ -28,20 +28,13 @@

### About

- This portable CM (CK2) script provides a unified and portable interface to the MLPerf inference benchmark
- modularized by other [portable CM scripts](https://github.com/mlcommons/ck/blob/master/docs/list_of_scripts.md)
- being developed by the open [MLCommons taskforce on automation and reproducibility](https://github.com/mlcommons/ck/blob/master/docs/mlperf-education-workgroup.md).
+ This is a ready-to-use CM automation recipe that provides a unified and portable interface to the MLPerf inference benchmark
+ assembled from other [portable CM scripts](https://github.com/mlcommons/ck/blob/master/docs/list_of_scripts.md)
+ being developed by the open [MLCommons taskforce on automation and reproducibility](https://github.com/mlcommons/ck/blob/master/docs/taskforce.md).

- It is a higher-level wrapper that automatically generates the command line for the [universal MLPerf inference script](../app-mlperf-inference)
+ This automation recipe automatically generates the command line for the [universal MLPerf inference script](../app-mlperf-inference)
to run MLPerf scenarios for a given ML task, model, runtime and device, and prepare and validate submissions.

- Check these [tutorials](https://github.com/mlcommons/ck/blob/master/docs/tutorials/sc22-scc-mlperf.md) from the Student Cluster Competition
- at Supercomputing'22 to understand how to use this script to run the MLPerf inference benchmark and automate submissions.
-
- See the development roadmap [here](https://github.com/mlcommons/ck/issues/536).
-
- See extension projects to enable collaborative benchmarking, design space exploration and optimization of ML and AI Systems [here](https://github.com/mlcommons/ck/issues/627).


See extra [notes](README-extra.md) from the authors and contributors.
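The command-line generation described above can be pictured with a short sketch. Everything here is illustrative: the tag set and the `--env.*` flag names mirror common CM usage but are not taken from this script's documented interface:

```python
def build_mlperf_cmd(model, device, scenario, backend):
    """Assemble a command line for the universal MLPerf inference script.

    Hypothetical sketch -- the real wrapper derives many more options
    (submission mode, dataset paths, etc.) before delegating to
    app-mlperf-inference.
    """
    tags = ["app", "mlperf", "inference", "generic", f"_{backend}", f"_{model}"]
    env = {
        "CM_MLPERF_LOADGEN_SCENARIO": scenario,
        "CM_MLPERF_DEVICE": device,
    }
    parts = ["cm", "run", "script", "--tags=" + ",".join(tags)]
    parts += [f"--env.{k}={v}" for k, v in sorted(env.items())]
    return " ".join(parts)

cmd = build_mlperf_cmd("resnet50", "cpu", "Offline", "onnxruntime")
print(cmd)
```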

@@ -50,7 +43,7 @@ See extra [notes](README-extra.md) from the authors and contributors.
* Category: *Modular MLPerf inference benchmark pipeline.*
* CM GitHub repository: *[mlcommons@ck](https://github.com/mlcommons/ck/tree/master/cm-mlops)*
* GitHub directory for this script: *[GitHub](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/run-mlperf-inference-app)*
- * CM meta description for this script: *[_cm.json](_cm.json)*
+ * CM meta description for this script: *[_cm.yaml](_cm.yaml)*
* CM "database" tags to find this script: *run,common,generate-run-cmds,run-mlperf,vision,mlcommons,mlperf,inference,reference*
* Output cached? *False*
___
@@ -156,23 +149,23 @@ ___

* `_r2.1`
- Environment variables:
- - *CM_RUN_MLPERF_INFERENCE_APP_DEFAULTS*: `r2.1_default`
- *CM_MLPERF_INFERENCE_VERSION*: `2.1`
+ - *CM_RUN_MLPERF_INFERENCE_APP_DEFAULTS*: `r2.1_default`
- Workflow:
* `_r3.0`
- Environment variables:
- - *CM_RUN_MLPERF_INFERENCE_APP_DEFAULTS*: `r3.0_default`
- *CM_MLPERF_INFERENCE_VERSION*: `3.0`
+ - *CM_RUN_MLPERF_INFERENCE_APP_DEFAULTS*: `r3.0_default`
- Workflow:
* `_r3.1`
- Environment variables:
- - *CM_RUN_MLPERF_INFERENCE_APP_DEFAULTS*: `r3.1_default`
- *CM_MLPERF_INFERENCE_VERSION*: `3.1`
+ - *CM_RUN_MLPERF_INFERENCE_APP_DEFAULTS*: `r3.1_default`
- Workflow:
* **`_r4.0`** (default)
- Environment variables:
- - *CM_RUN_MLPERF_INFERENCE_APP_DEFAULTS*: `r4.0_default`
- *CM_MLPERF_INFERENCE_VERSION*: `4.0`
+ - *CM_RUN_MLPERF_INFERENCE_APP_DEFAULTS*: `r4.0_default`
- Workflow:

</details>
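Each `_rX.Y` variation above sets the same pair of variables, so the pattern reduces to a tiny mapping. Sketch only — the real values live in the script's CM meta:

```python
def version_defaults(variation):
    """Derive the env vars a version variation sets, per the table above."""
    ver = variation.removeprefix("_r")   # "_r4.0" -> "4.0"
    return {
        "CM_MLPERF_INFERENCE_VERSION": ver,
        "CM_RUN_MLPERF_INFERENCE_APP_DEFAULTS": f"r{ver}_default",
    }

print(version_defaults("_r4.0"))
# {'CM_MLPERF_INFERENCE_VERSION': '4.0', 'CM_RUN_MLPERF_INFERENCE_APP_DEFAULTS': 'r4.0_default'}
```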
@@ -371,7 +364,7 @@ ___
<details>
<summary>Click here to expand this section.</summary>

- 1. ***Read "deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/run-mlperf-inference-app/_cm.json)***
+ 1. ***Read "deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/run-mlperf-inference-app/_cm.yaml)***
* detect,os
- CM script: [detect-os](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/detect-os)
* detect,cpu
@@ -385,11 +378,11 @@ ___
* get,sut,description
- CM script: [get-mlperf-inference-sut-description](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/get-mlperf-inference-sut-description)
1. ***Run "preprocess" function from [customize.py](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/run-mlperf-inference-app/customize.py)***
1. Read "prehook_deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/run-mlperf-inference-app/_cm.json)
1. Read "prehook_deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/run-mlperf-inference-app/_cm.yaml)
1. ***Run native script if exists***
1. Read "posthook_deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/run-mlperf-inference-app/_cm.json)
1. Read "posthook_deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/run-mlperf-inference-app/_cm.yaml)
1. ***Run "postrocess" function from [customize.py](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/run-mlperf-inference-app/customize.py)***
1. Read "post_deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/run-mlperf-inference-app/_cm.json)
1. Read "post_deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/run-mlperf-inference-app/_cm.yaml)
</details>
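The numbered steps above define a fixed phase order (deps → preprocess → prehook_deps → native script → posthook_deps → postprocess → post_deps). A schematic sketch of that control flow — `detect,os` comes from the list above, the other dependency name is a placeholder, and the real logic lives in CM's script automation:

```python
def run_script(meta, preprocess, run_native, postprocess, run_dep):
    """Execute one CM script in the documented phase order (sketch)."""
    for d in meta.get("deps", []):           # 1. resolve "deps"
        run_dep(d)
    preprocess()                             # 2. "preprocess" from customize.py
    for d in meta.get("prehook_deps", []):   # 3. "prehook_deps"
        run_dep(d)
    run_native()                             # 4. native run script, if present
    for d in meta.get("posthook_deps", []):  # 5. "posthook_deps"
        run_dep(d)
    postprocess()                            # 6. "postprocess" from customize.py
    for d in meta.get("post_deps", []):      # 7. "post_deps"
        run_dep(d)

trace = []
run_script(
    {"deps": ["detect,os"], "post_deps": ["some,cleanup"]},  # placeholder post_dep
    preprocess=lambda: trace.append("preprocess"),
    run_native=lambda: trace.append("native"),
    postprocess=lambda: trace.append("postprocess"),
    run_dep=trace.append,
)
print(trace)  # ['detect,os', 'preprocess', 'native', 'postprocess', 'some,cleanup']
```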

___
