📻 (AST) Audio data classification optimization and data pre-process (#762)

## Describe your changes
For issue #735, I added an example of AST model optimization to the huggingface examples, using Olive data configs so that the optimization is script-free.

## Checklist before requesting a review
- [ ] Add unit tests for this change.
- [ ] Make sure all tests can pass.
- [ ] Update documents if necessary.
- [ ] Lint and apply fixes to your code by running `lintrunner -a`
- [ ] Is this a user-facing change? If yes, give a description of this change to be included in the release notes.

## (Optional) Issue link
Showing 4 changed files with 207 additions and 1 deletion.
@@ -0,0 +1,28 @@
# AST Optimization
This folder contains examples of AST (Audio Spectrogram Transformer) optimization using Olive workflows.

- Model: https://huggingface.co/MIT/ast-finetuned-speech-commands-v2
- Dataset: https://huggingface.co/datasets/speech_commands

### Run example using config

The `ast.json` config targets CPU optimization: it quantizes the model and tunes the inference configuration for better performance.

First, install the packages required by the passes:
```sh
python -m olive.workflows.run --config ast.json --setup
```

Then, optimize the model:
```sh
python -m olive.workflows.run --config ast.json
```

or run it directly from Python:
```python
from olive.workflows import run as olive_run
olive_run("ast.json")
```

After running the above command, the model candidates and corresponding configs are saved in the output directory. You can then select the best model and config from the candidates and run the model with the selected config.
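Once you have picked a candidate, the ONNX model produces a `logits` tensor of shape `(batch_size, num_labels)` (see the `io_config` in `ast.json`). A minimal post-processing sketch for mapping logits to a class name is shown below; the `id2label` map and the logit values here are made up for illustration, not taken from the real model:

```python
import numpy as np

def postprocess(logits: np.ndarray, id2label: dict) -> str:
    """Map a (1, num_labels) logits array to the top class name via softmax + argmax."""
    # Subtract the max for numerical stability before exponentiating
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)
    return id2label[int(probs.argmax(axis=-1)[0])]

# Toy example with a hypothetical 3-class label map
id2label = {0: "yes", 1: "no", 2: "_silence_"}
logits = np.array([[0.1, 2.5, -1.0]])
print(postprocess(logits, id2label))  # -> "no"
```

In practice you would obtain `logits` from an `onnxruntime` session fed with the feature-extracted audio, then apply a step like this to read off the predicted keyword.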
@@ -0,0 +1,106 @@
{
    "input_model": {
        "type": "PyTorchModel",
        "config": {
            "hf_config": {
                "model_class": "ASTForAudioClassification",
                "model_name": "MIT/ast-finetuned-speech-commands-v2",
                "task": "audio-classification",
                "dataset": {
                    "data_name": "speech_commands",
                    "subset": "v0.02",
                    "split": "validation",
                    "input_cols": ["audio"],
                    "label_cols": ["label"],
                    "max_samples": 100,
                    "batch_size": 1,
                    "component_kwargs": {
                        "pre_process_data": {
                            "labels_to_filter": ["_silence_"]
                        }
                    }
                }
            },
            "io_config": {
                "input_names": ["input_values"],
                "output_names": ["logits"],
                "dynamic_axes": {
                    "input_values": {"0": "batch_size", "1": "max_length", "2": "num_mel_bins"},
                    "logits": {"0": "batch_size"}
                }
            }
        }
    },
    "evaluators": {
        "common_evaluator": {
            "metrics": [
                {
                    "name": "accuracy",
                    "type": "accuracy",
                    "backend": "huggingface_metrics",
                    "sub_types": [
                        {"name": "accuracy", "priority": 1, "goal": {"type": "max-degradation", "value": 0.05}},
                        {"name": "f1", "metric_config": {"compute_params": {"average": "macro"}}}
                    ]
                },
                {
                    "name": "latency",
                    "type": "latency",
                    "sub_types": [
                        {"name": "avg", "priority": 2, "goal": {"type": "percent-min-improvement", "value": 5}},
                        {"name": "max"},
                        {"name": "min"}
                    ]
                }
            ]
        }
    },
    "passes": {
        "conversion": {
            "type": "OnnxConversion"
        },
        "transformers_optimization": {
            "type": "OrtTransformersOptimization",
            "disable_search": true,
            "config": {
                "model_type": "vit"
            }
        },
        "quantization": {
            "type": "OnnxQuantization",
            "disable_search": true,
            "config": {
                "quant_mode": "static",
                "quant_preprocess": true,
                "per_channel": false,
                "reduce_range": false,
                "data_config": "__input_model_data_config__"
            }
        },
        "perf_tuning": {
            "type": "OrtPerfTuning",
            "config": {
                "data_config": "__input_model_data_config__"
            }
        }
    },
    "engine": {
        "search_strategy": {
            "execution_order": "joint",
            "search_algorithm": "tpe",
            "search_algorithm_config": {
                "num_samples": 3,
                "seed": 0
            }
        },
        "evaluator": "common_evaluator",
        "execution_providers": ["CPUExecutionProvider"],
        "cache_dir": "cache",
        "output_dir": "models/ast_cpu"
    }
}
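The `labels_to_filter` entry under `component_kwargs.pre_process_data` drops samples whose label matches one of the listed values before evaluation (here, the `_silence_` class of the speech_commands dataset). A minimal sketch of the effect follows; the sample records are made up, and this is not Olive's actual implementation:

```python
# Hypothetical evaluation samples; each record pairs an audio file with a label.
samples = [
    {"audio": "a.wav", "label": "yes"},
    {"audio": "b.wav", "label": "_silence_"},
    {"audio": "c.wav", "label": "no"},
]

# Mirrors "labels_to_filter": ["_silence_"] from the dataset config above.
labels_to_filter = ["_silence_"]

# Keep only samples whose label is not in the filter list.
filtered = [s for s in samples if s["label"] not in labels_to_filter]
print(len(filtered))  # -> 2
```

Filtering out `_silence_` keeps the accuracy and f1 metrics focused on the actual keyword classes.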