Small fixes related to BF16 fusion_gru and fusion_lstm #33295

Merged: 7 commits, Jun 11, 2021

Conversation

@wozna (Contributor) commented Jun 2, 2021

PR types

Bug fixes

PR changes

OPs

Describe

This PR contains small fixes related to fusion_gru and fusion_lstm:

  • added rewriting of the use_mkldnn attribute in fc_gru_fuse_pass and fc_lstm_fuse_pass, together with tests for it. Thanks to this, the mkldnn_placement_pass that was run twice could be removed from the test paddle/fluid/inference/tests/api/analyzer_lexical_analysis_gru_tester.cc.
  • added fusion_lstm to the list of bf16 operators
  • added the mkldnn_data_type attribute to fusion_lstm, which is necessary to run the model in bfloat16 (a conceptual sketch of this attribute handling follows the list)
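
To illustrate the idea only (this is not the actual Paddle code), here is a minimal, self-contained C++ sketch of what the fused op conceptually ends up carrying: it inherits use_mkldnn from the original ops and gets an mkldnn_data_type attribute so it can run on oneDNN in bfloat16 without an extra mkldnn_placement_pass. The OpDesc struct and its attribute maps below are hypothetical stand-ins, not the framework's classes.

```cpp
#include <iostream>
#include <map>
#include <string>

// Hypothetical stand-in for an op description; the real framework class is richer.
struct OpDesc {
  std::string type;
  std::map<std::string, bool> bool_attrs;
  std::map<std::string, std::string> str_attrs;
};

// Conceptual "gru creator": build the fused op and forward the relevant attributes.
OpDesc CreateFusionGru(bool use_mkldnn, const std::string& mkldnn_data_type) {
  OpDesc fused;
  fused.type = "fusion_gru";
  fused.bool_attrs["use_mkldnn"] = use_mkldnn;
  fused.str_attrs["mkldnn_data_type"] = mkldnn_data_type;  // e.g. "bfloat16"
  return fused;
}

int main() {
  const OpDesc fused = CreateFusionGru(/*use_mkldnn=*/true, "bfloat16");
  std::cout << fused.type
            << " use_mkldnn=" << fused.bool_attrs.at("use_mkldnn")
            << " mkldnn_data_type=" << fused.str_attrs.at("mkldnn_data_type") << "\n";
  return 0;
}
```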

@paddle-bot-old (bot) commented Jun 2, 2021

Thanks for your contribution!
Please wait for the CI results first. See the Paddle CI Manual for details.

@@ -48,7 +48,8 @@ static int BuildFusion(Graph* graph, const std::string& name_scope,
 
   // Create New OpDesc
   auto gru_creater = [&](Node* gru, Node* x, Node* weight_x, Node* weight_h,
-                         Node* bias, Node* hidden, Node* fc_bias) {
+                         Node* bias, Node* hidden, Node* fc_bias,
+                         const bool& use_mkldnn) {
A reviewer (Contributor) commented on this diff:
Maybe passing use_mkldnn by value would be better? Same with lstm_creater.

@wozna (Contributor, Author) replied Jun 2, 2021:

You are right, for such small types, it is better to pass by value.
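
For illustration only (not the Paddle source), a tiny standalone example of the two signatures being discussed; Node here is a placeholder for the framework's graph node type:

```cpp
#include <iostream>

struct Node {};  // placeholder, not the framework's ir::Node

int main() {
  Node gru_node;

  // Original form: the bool is passed by const reference.
  auto creator_by_ref = [](Node* /*gru*/, const bool& use_mkldnn) {
    std::cout << "by ref:   use_mkldnn=" << use_mkldnn << "\n";
  };

  // Suggested form: for a small trivially copyable type like bool,
  // passing by value is at least as cheap and avoids the indirection.
  auto creator_by_value = [](Node* /*gru*/, bool use_mkldnn) {
    std::cout << "by value: use_mkldnn=" << use_mkldnn << "\n";
  };

  creator_by_ref(&gru_node, true);
  creator_by_value(&gru_node, true);
  return 0;
}
```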

@lidanqing-intel (Contributor) commented Jun 9, 2021

@wozna Hi, is this PR ready for review? If so, please ask members to review it; last time we informed Baidu that a PR will be merged once it has at least 2 LGTMs. I'm not sure whether it is ready, so if it is, please ask the other members to review. Thanks.

@wozna requested a review from jczaja on June 9, 2021 08:10
@lidanqing-intel (Contributor) commented:

LGTM. Thank you very much for the fix! And thank you for calling and explaining it; that made the reviewing process much faster.

@juncaipeng Could you please review and merge this PR? If you like, we can have a call to speed up the reviewing process.

juncaipeng previously approved these changes Jun 9, 2021

@juncaipeng (Contributor) left a comment:

LGTM

@@ -48,7 +48,8 @@ static int BuildFusion(Graph* graph, const std::string& name_scope,
 
   // Create New OpDesc
   auto gru_creater = [&](Node* gru, Node* x, Node* weight_x, Node* weight_h,
A reviewer (Contributor) commented on this diff:

I found the name "gru_creater" a bit confusing. Maybe you could use "gru_creator"?

@wozna (Author) replied:
Oh yes, it could be a spelling mistake.

jczaja previously approved these changes Jun 9, 2021

@jczaja (Contributor) left a comment:

LGTM

@arlesniak (Contributor) left a comment:

Please double-check the tests, as they passed even with the typo.

Outdated review threads on paddle/fluid/framework/ir/fc_gru_fuse_pass.cc and paddle/fluid/framework/ir/fc_lstm_fuse_pass.cc were marked as resolved.
@lidanqing-intel self-requested a review on June 11, 2021 16:11
@lidanqing-intel (Contributor) left a comment:

LGTM. Thanks a lot

@jczaja merged commit cd95ea8 into PaddlePaddle:develop on Jun 11, 2021
@wozna deleted the bf16_fusion_rnn branch on February 24, 2023 16:07