
Fix to #38126 #39097

Merged
merged 13 commits on Feb 8, 2022

Conversation

@jczaja (Contributor) commented Jan 20, 2022

PR types

Bug fixes

PR changes

OPs

Describe

This PR fixes #38126. In particular, it fixes broadcasting of elementwise operations in oneDNN kernels for the NHWC scenario.

@jczaja added the Intel label on Jan 20, 2022
@paddle-bot-old commented:

Thanks for your contribution!
Please wait for the CI results first. See the Paddle CI Manual for details.

@jczaja changed the title from "Fix to #38126 (WIP)" to "Fix to #38126" on Jan 21, 2022
@lidanqing-intel (Contributor) commented:
@baoachun Please review

const std::string& var_name, const Tensor& tensor,
const framework::OpKernelType& expected_kernel_type) const {
#ifdef PADDLE_WITH_MKLDNN
// When activation is first oneDNN op (there was some non oneDNN op
Contributor commented on this code:
Do all operators need to adapt to NHWC-format input when they are the first mkldnn operator?

@jczaja (Contributor, Author) replied Jan 24, 2022:
Yes. This PR only makes the change needed to fix #38126, but in general a similar change has to be added to the other oneDNN ops.
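For context, the excerpt above is the start of a GetKernelTypeForVar override. Below is a rough sketch of the complete pattern, reconstructed from the way other oneDNN operators in Paddle handle this case; the exact condition and comments in the actual diff may differ.

framework::OpKernelType GetKernelTypeForVar(
    const std::string& var_name, const Tensor& tensor,
    const framework::OpKernelType& expected_kernel_type) const override {
#ifdef PADDLE_WITH_MKLDNN
  // When this op is the first oneDNN op in the graph (its input was produced
  // by a non-oneDNN op), the incoming tensor can still be in NHWC layout, so
  // the kernel type has to be built with kNHWC instead of the tensor's layout.
  if ((expected_kernel_type.data_layout_ == framework::DataLayout::kMKLDNN) &&
      (tensor.layout() != framework::DataLayout::kMKLDNN) &&
      paddle::platform::MKLDNNDeviceContext::tls()
              .get_cur_paddle_data_layout() == framework::DataLayout::kNHWC) {
    return framework::OpKernelType(expected_kernel_type.data_type_,
                                   tensor.place(),
                                   framework::DataLayout::kNHWC);
  }
#endif
  return framework::OpKernelType(expected_kernel_type.data_type_,
                                 tensor.place(), tensor.layout());
}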

Reviewer:
Can this be added to a class that other operators inherit?

@jczaja (Contributor, Author) commented Jan 27, 2022
@baoachun I have a problem with CI 👍
../paddle/pten/core/sparse_coo_tensor.h(58): error: function returning abstract class "pten::SparseCooTensor" is not allowed:
This PR does not touch pten, so please advise how to proceed to get this resolved.

@baoachun (Contributor) commented:
Hi @jczaja, it may be a compilation failure caused by someone else's PR. I have triggered PR-CI-Build. If it still fails, I will contact my colleagues to help solve it.

@jczaja requested a review from jakpiase on February 2, 2022 at 14:47
@jczaja (Contributor, Author) commented Feb 2, 2022
@Silv3S, @piotrekobiIntel, please help with the review.


@Silv3S (Member) commented Feb 3, 2022
LGTM

@jczaja (Contributor, Author) commented Feb 7, 2022
@baoachun Please continue your review

@jczaja (Contributor, Author) commented Feb 7, 2022
@piotrekobiIntel Thanks for the suggestion. Technically yes, but in practice we would need to create another class to hold this code. The code is a bit different for operators with parameters, and it may differ even for parameter-less ops because some input and output variables have different names. So adding another class would be a significant architectural effort, while at least half of the ops would override this code anyway due to those differences. For now I will leave it as it is; once more ops have similar code, we can judge whether a separate class for it is worthwhile.
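For illustration only, here is a hypothetical shape such shared code could take if it were ever factored out: a small mixin class (the name OneDNNNHWCKernelTypeMixin is invented, not from this PR) that an op definition could additionally inherit from, with each op's GetKernelTypeForVar forwarding to the shared method and keeping only its op-specific handling locally.

// Hypothetical mixin, not part of this PR: centralizes the NHWC check from
// the sketch earlier in this thread so ops do not repeat it verbatim.
class OneDNNNHWCKernelTypeMixin {
 protected:
  framework::OpKernelType KernelTypeForNHWCVar(
      const framework::Tensor& tensor,
      const framework::OpKernelType& expected_kernel_type) const {
#ifdef PADDLE_WITH_MKLDNN
    // Input comes from a non-oneDNN op while the session-wide layout is NHWC.
    if ((expected_kernel_type.data_layout_ ==
         framework::DataLayout::kMKLDNN) &&
        (tensor.layout() != framework::DataLayout::kMKLDNN) &&
        paddle::platform::MKLDNNDeviceContext::tls()
                .get_cur_paddle_data_layout() ==
            framework::DataLayout::kNHWC) {
      return framework::OpKernelType(expected_kernel_type.data_type_,
                                     tensor.place(),
                                     framework::DataLayout::kNHWC);
    }
#endif
    return framework::OpKernelType(expected_kernel_type.data_type_,
                                   tensor.place(), tensor.layout());
  }
};

As the comment above notes, ops with parameter-dependent behaviour or differently named inputs and outputs would still override or bypass such a shared method, which is the architectural cost being weighed.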

@baoachun (Contributor) left a comment:
LGTM

@lidanqing-intel (Contributor) left a comment:
LGTM
