
[WIP]add inference mode for op perf benchmark #15453

Closed
wants to merge 2 commits

Conversation

@roywei (Member) commented on Jul 3, 2019

As part of the effort to address #15429, we need to run the op perf benchmarks in inference mode, and the benchmarks also need to work on 1.4.1, where runtime_feature is not available.
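
For context, a minimal sketch of what timing an operator in inference mode versus training mode can look like in MXNet. The helper `benchmark_op` and the choice of `mx.nd.relu` are illustrative, not from this PR; only the standard `mx.nd` and `mx.autograd` APIs are assumed:

```python
import time
import mxnet as mx

def benchmark_op(op, data, runs=100, train_mode=False):
    """Time `op` on `data`; in train mode, also record the graph and run backward."""
    # Warm-up run to exclude one-time initialization costs.
    op(data).wait_to_read()
    start = time.time()
    for _ in range(runs):
        if train_mode:
            data.attach_grad()
            with mx.autograd.record():
                out = op(data)
            out.backward()
        else:
            out = op(data)  # inference: no autograd graph is recorded
    mx.nd.waitall()  # block until all asynchronous kernels finish
    return (time.time() - start) / runs

data = mx.nd.ones((1024, 1024))
print("inference: %.6fs" % benchmark_op(mx.nd.relu, data))
print("training:  %.6fs" % benchmark_op(mx.nd.relu, data, train_mode=True))
```

Timing the two paths separately matters because the training path pays for autograd bookkeeping and the backward pass, which an inference-only benchmark should not include.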

@pengzhao-intel (Contributor) commented

Thanks for the improvement.
Could you paste the output of the inference op benchmarks for both CPU and GPU?

@roywei changed the title from "add inference mode for op perf benchmark" to "[WIP]add inference mode for op perf benchmark" on Jul 8, 2019
@roywei (Member, Author) commented on Jul 8, 2019

@mxnet-label-bot add [performance, pr-work-in-progress]

@marcoabreu added the Performance and pr-work-in-progress (PR is still work in progress) labels on Jul 8, 2019
@roywei closed this on Jul 22, 2019