MKLDNN: SoftmaxGrad Op #11519
Conversation
```cpp
@@ -99,5 +99,124 @@ inline mkldnn::memory::format GetMKLDNNFormat(const mkldnn::memory memory) {
      memory.get_primitive_desc().desc().data.format);
}

class MKLDNNHandler {
```
Could you give more of an introduction to `MKLDNNHandler`? Was it added specifically for softmax grad? When adding functions to `mkldnn_helper.h`, they should be something common, I suppose.
@tensor-tang You are right that softmax-specific functionality should not go into `MKLDNNHandler`. `MKLDNNHandler` will aggregate common functionality, while primitive-specific functionality for softmax/conv/pool will live in designated derived classes, e.g. `SoftmaxMKLDNNHandler`, `ConvMKLDNNHandler`, etc. I updated the code to add `SoftmaxMKLDNNHandler`. Please take a look.
Force-pushed from 8a571ef to 6a2a282.
Hi, I answered @tensor-tang's question within the relevant discussion. My changes have been updated.
There was a merge conflict, but it is resolved now.
- Added hash function inside the MKLDNN softmax op, used as a handle for storing primitives in a context
- Style fixes to the softmax MKLDNN op
- Fixes after review
- Coding style and style fixes (several rounds)
- Fix to code style check
- Rephrasing a comment
- Fix to broken merge

Fixes after rebase. Conflicts:
- benchmark/fluid/models/machine_translation.py
- cmake/external/mkldnn.cmake
- paddle/fluid/operators/softmax_mkldnn_op.cc

- Bumped the MKL-DNN revision to one that has the softmax backward primitive
- Added selection of the MKLDNN softmax grad operator
- First reuse of softmax backward
- Reworked reuse for softmax
- Fix to crash in reworked reuse
- Clang-format fixes (several rounds)
- Improved softmax MKLDNN reuse mechanism
- Fix to broken merge
Force-pushed from be4a58c to 98f3ad3.
LGTM! Thanks very much!
With this PR, machine translation speeds up significantly when MKLDNN is used (use_mkldnn=True should be set in the fc layer of machine_translation.py).