
add uint8 bn mkldnn implementation #16003

Merged
merged 7 commits into apache:master from ElaineBao:bn-uint8 on Aug 26, 2019

Conversation

ElaineBao
Contributor

Description

Add uint8 batchnorm (MKL-DNN implementation) and test.
@PatricZhao @ZhennanQin

Details

Usage

Check the doc at https://github.com/apache/incubator-mxnet/tree/master/example/quantization/README.md to quantize models and run inference. Quantized BN is used automatically when a BN operator cannot be fused.
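For illustration, here is a minimal sketch of offline quantization through `mxnet.contrib.quantization.quantize_model`; the checkpoint prefix and epoch are placeholders, the excluded layer names are the ones from the diff below, and the real workflow is the example script described in the linked README:

```python
import mxnet as mx
from mxnet.contrib.quantization import quantize_model

# Placeholder checkpoint prefix/epoch -- the example script in the README
# downloads and prepares the FP32 model for you.
sym, arg_params, aux_params = mx.model.load_checkpoint('resnet50_v2', 0)

# Quantize for CPU/MKL-DNN inference. Layers listed in excluded_sym_names stay
# in FP32; any standalone BatchNorm that remains in the quantized graph is
# handled by the uint8 MKL-DNN BN kernel added in this PR.
qsym, qarg_params, qaux_params = quantize_model(
    sym=sym, arg_params=arg_params, aux_params=aux_params,
    ctx=mx.cpu(),
    excluded_sym_names=['resnetv20_flatten0_flatten0',
                        'resnetv20_stage1_batchnorm0_fwd'],  # excluded for accuracy
    calib_mode='none',           # see the README for naive/entropy calibration
    quantized_dtype='auto')

mx.model.save_checkpoint('resnet50_v2-quantized', 0, qsym, qarg_params, qaux_params)
```

Running the resulting symbol with the MKL-DNN backend exercises the new quantized BN operator wherever a BN node could not be fused.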

Performance

In most cases, BN can be fused, so quantized BN is not needed. In resnet50 v2, some of the BN operators are standalone; quantizing these BN operators gives the following results:

| Model | FP32 (Top-1 / Top-5) | Fusion + fp32 bn | Fusion + int8 bn |
| --- | --- | --- | --- |
| Resnet50 v2 | 0.764 / 0.935 | 0.722 / 0.901 | 0.712 / 0.897 |

@ElaineBao ElaineBao requested a review from szha as a code owner August 26, 2019 02:00
```diff
@@ -216,7 +216,7 @@ def save_params(fname, arg_params, aux_params, logger=None):
     if exclude_first_conv:
         excluded_sym_names += ['resnetv10_conv0_fwd']
     elif args.model.find('resnet') != -1 and args.model.find('v2') != -1:
-        excluded_sym_names += ['resnetv20_flatten0_flatten0']
+        excluded_sym_names += ['resnetv20_flatten0_flatten0', 'resnetv20_stage1_batchnorm0_fwd']
```
Contributor
why exclude the first one?

Contributor (Author)
This is for the sake of accuracy: if this layer is not excluded, top-1 accuracy drops to 52.3. The reason for this accuracy drop is under investigation.

Contributor

@xinyu-intel xinyu-intel left a comment

.

Contributor

@ZhennanQin ZhennanQin left a comment

LGTM. Just add a comment to note that the BN layer is excluded for accuracy purposes.

Contributor

@pengzhao-intel pengzhao-intel left a comment

Thanks for the contribution.

LGTM and merging now.

@pengzhao-intel pengzhao-intel merged commit 9410cc4 into apache:master Aug 26, 2019
@ElaineBao ElaineBao deleted the bn-uint8 branch August 29, 2019 04:11