
[microTVM] Rework evaluate_model_accuracy into a more generic helper function #12539

Merged 2 commits into apache:main on Aug 23, 2022

Conversation

guberti (Member) commented Aug 22, 2022

Currently, evaluate_model_accuracy addresses only a very specific use case: measuring the average accuracy and runtime of a host-driven AOT model. This prevents users from implementing workarounds for bugs like #12538, and means we redundantly return both the model predictions and the accuracy.

This PR changes evaluate_model_accuracy in python/tvm/micro/testing/evaluation.py into predict_labels_aot. This function has the type signature:

def predict_labels_aot(session, aot_executor, input_data, runs_per_sample=1):

Unlike evaluate_model_accuracy, predict_labels_aot does not take the true labels of the data as input. Instead, it returns an iterator of (prediction, runtime) tuples, leaving users free to compute accuracy, runtime statistics, or any other metric themselves (see the usage sketch below).

cc @alanmacd @gromero @mehrdadh

@guberti guberti changed the title [microTVM] Return median of model runtimes by default, instead of mean [microTVM] Rework evaluate_model_accuracy into a more generic helper function Aug 22, 2022
@mehrdadh mehrdadh merged commit 5cef6bf into apache:main Aug 23, 2022
xinetzone pushed a commit to daobook/tvm that referenced this pull request on Nov 25, 2022:

[microTVM] Rework evaluate_model_accuracy into a more generic helper function (apache#12539)

* Add workaround for apache#12538
* Rework evaluate_model_accuracy into predict_labels_aot