[MKL-DNN] MKL-DNN specific Tensor modification #15429
Conversation
Force-pushed from 9f0da40 to 5500a75
@panyx0718 To have CI pass I need your approval, as this PR modifies the tensor.h file. Could you please take a look and comment/approve?
Do you have any benchmarks of the transpose op before and after this PR?
@luotao1 This PR does not improve performance; it is about making the Transpose MKL-DNN op work.
@luotao1 I updated the PR with some changes to convolution, so performance on ResNet-50 (SKX 8180, batch sizes 1 and 128) is restored to the original level.
Force-pushed from c7b6c76 to 043d94f
@luotao1 @panyx0718 Do you need any more information to have this PR reviewed?
mkldnn::memory::format format_ = mkldnn::memory::format::format_undef;
mutable mkldnn::memory::primitive_desc mem_pd_;
#endif
I think there are too many MKL-DNN-specific things in the base Tensor. An MKLDNNTensor would be more readable for adding the library. I think this needs @panyx0718's approval.
@tensor-tang What do you think about #11040 (comment)?
That's just where the current version comes from, but it is not a reason to keep adding more. primitive_desc contains more info, such as dims, format and data_type, which duplicates many concepts in the tensor itself.
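For context, a minimal sketch of how a memory primitive descriptor is built with the MKL-DNN 0.x C++ API (the API used in the snippets above); the helper name and dims are illustrative assumptions, but they show the dims/data_type/format duplication this comment refers to:

```cpp
#include <mkldnn.hpp>

// Illustrative only: a primitive_desc already bakes in dims, data_type and
// format, which overlap with what Tensor itself tracks (dims, type, layout).
mkldnn::memory::primitive_desc MakeExamplePrimDesc() {
  mkldnn::engine cpu_engine(mkldnn::engine::cpu, 0);
  mkldnn::memory::dims dims = {8, 3, 224, 224};  // N, C, H, W (assumed values)
  mkldnn::memory::desc md(dims, mkldnn::memory::data_type::f32,
                          mkldnn::memory::format::nchw);
  return mkldnn::memory::primitive_desc(md, cpu_engine);
}
```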
paddle/fluid/framework/tensor.h
Outdated
inline void set_format(const mkldnn::memory::format format) {
  format_ = format;
void CreateMemoryPrimDescFromDims(
- Please unify the naming style, e.g. setMKLDNNMemPrimDesc vs. CreateMemoryPrimDescFromDims.
- MemoryPrimDesc specifies the data_type, but the tensor is not aware of it unless the data is fetched.
- It will be done.
- Yes, this is correct. data() / mutable_data() returns a pointer to the allocation; the op has to ask separately for the layout and/or the memory primitive descriptor. Perhaps it would be good to extend data() / mutable_data() so they are aware of layouts and return converted data. Having a data() function that can return a pointer to data in Paddle layout would definitely help with implementing unit tests, so they can operate on a blocked layout and then get the result in Paddle layout (NCHW or other) for comparison with the reference. As PaddlePaddle supports many devices (CPU, GPU, MKL-DNN, ...), I would like someone more familiar with the other devices to drive a change to the interface.
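A rough sketch of the two-step access pattern described above, assuming the accessors discussed in this thread (data<T>() plus the stored primitive descriptor); this is not the PR's exact code:

```cpp
#include <mkldnn.hpp>
#include "paddle/fluid/framework/tensor.h"

// Hypothetical MKL-DNN op kernel usage: data() only hands back the raw
// allocation, so the layout has to be queried separately.
void UseInput(const paddle::framework::Tensor& in) {
  const float* ptr = in.data<float>();
  // get_prim_desc() is the accessor under discussion here (assumed name).
  mkldnn::memory::primitive_desc pd = in.get_prim_desc();
  mkldnn::memory src(pd, const_cast<float*>(ptr));
  // ... feed `src` into an MKL-DNN primitive ...
  (void)src;
}
```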
@sneaxiy Can you help take a look and see if there is a cleaner way?
So what are your suggestions on how to create an MKLDNNTensor so that it matches the current design?
paddle/fluid/framework/tensor.h
Outdated
inline void set_format(const mkldnn::memory::format format) {
  format_ = format;
void create_prim_desc_from_dims(
Does this need to be a method of Tensor? Can it be an independent function that creates the prim_desc elsewhere and then calls set_prim_desc_?
@panyx0718 I moved the body of create_prim_desc_from_dims into a utils file, e.g. mkldnn_utils.h, and left a thin wrapper that only calls the moved-out function. Is this good enough, or do you suggest removing create_prim_desc_from_dims() entirely?
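A minimal sketch of the thin-wrapper arrangement described here, assuming a free function in mkldnn_utils.h and a forwarding method on Tensor; names, signatures and defaults are assumptions based on this thread, not the merged code:

```cpp
#include <mkldnn.hpp>
#include <vector>

// mkldnn_utils.h (sketch): the free function owns the real logic.
inline mkldnn::memory::primitive_desc create_prim_desc_from_dims(
    const std::vector<int>& dims, mkldnn::memory::format fmt,
    mkldnn::memory::data_type dtype = mkldnn::memory::data_type::f32) {
  mkldnn::memory::desc md(dims, dtype, fmt);
  mkldnn::engine cpu_engine(mkldnn::engine::cpu, 0);
  return mkldnn::memory::primitive_desc(md, cpu_engine);
}

// tensor.h (sketch): Tensor would keep only a thin forwarding wrapper, e.g.
//
//   void create_prim_desc_from_dims(const std::vector<int>& dims,
//                                   mkldnn::memory::format fmt) {
//     mem_pd_ = ::create_prim_desc_from_dims(dims, fmt);
//   }
```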
How about MKLDNNTensor -> LoDTensor -> Tensor?
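For illustration, a sketch of the inheritance chain being suggested (purely hypothetical; as the reply below notes, this route was not taken in the PR):

```cpp
#include <mkldnn.hpp>
#include "paddle/fluid/framework/lod_tensor.h"

namespace paddle {
namespace framework {

// LoDTensor already derives from Tensor; layering MKLDNNTensor on top
// would keep format_/mem_pd_ out of the base Tensor entirely.
class MKLDNNTensor : public LoDTensor {
 public:
  void set_mkldnn_prim_desc(const mkldnn::memory::primitive_desc& pd) {
    mem_pd_ = pd;
  }

 private:
  mkldnn::memory::primitive_desc mem_pd_;
};

}  // namespace framework
}  // namespace paddle
```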
test=develop
- TransposeMKLDNNHandler::AcquireSrcMemory was reimplemented
- Added nchw and nc format setting for the sake of compatibility; fixed unit tests
- Workaround for the problem with 5D data in conv
- Added 3D and 1D MKL-DNN formats for tensor name handles
test=develop
- Fix to UTs
test=develop
- Conv fp32 op was updated; cosmetic fixes
test=develop
- Tensor MKL-DNN cosmetics
test=develop
- Moved most of the MKL-DNN specific code from Tensor to MKL-DNN utils
Force-pushed from 04a5d2f to f3e47bb
test=develop
@tensor-tang I haven't tried to implement MKLDNNTensor -> LoDTensor -> Tensor: looking into ops that explicitly check whether a Tensor is a LoDTensor, etc., I suspected it would require a lot of changes. So I updated the PR with the original concept (no MKLDNNTensor), but I moved most of the MKL-DNN functionality out to mkldnn_utils.h, i.e. Tensor contains only thin wrappers around the MKL-DNN-specific functions.
paddle/fluid/framework/tensor.h
Outdated
inline void set_format(const mkldnn::memory::format format) {
  format_ = format;
inline mkldnn::memory::primitive_desc get_prim_desc() const {
This method and the ones above should be called mkldnn_prim_desc to make their usage more obvious.
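For illustration, the kind of renaming being asked for; a sketch only, the actual signatures in the PR may differ:

```cpp
#include <mkldnn.hpp>

// Sketch of the renamed accessors (illustrative, not the merged code).
class TensorSketch {
 public:
  inline mkldnn::memory::primitive_desc mkldnn_prim_desc() const {
    return mem_pd_;
  }
  inline void set_mkldnn_prim_desc(const mkldnn::memory::primitive_desc& pd) {
    mem_pd_ = pd;
  }

 private:
  mkldnn::memory::primitive_desc mem_pd_;
};
```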
@panyx0718 Ok, methods renamed as suggested.
@panyx0718, @tensor-tang, @luotao1 I updated the PR following your suggestions. Please review.
@tensor-tang @luotao1 Any comments? Seems OK to me.
test=develop
Force-pushed from 269e605 to 8e8c934
@panyx0718, @luotao1, @tensor-tang I adjusted the changes per your suggestions. Please take a look. CI cannot pass without @panyx0718's approval.
* Revert "[MKL-DNN] Fix to crash of Transformer when mkldnn is to be used (#16233)". This reverts commit 13816dd, apart from enabling transformer for MKL-DNN.
* Revert "- MKL-DNN pooling updated to set_prim_desc". This reverts commit c63f6b2. Conflicts: paddle/fluid/operators/mkldnn/concat_mkldnn_op.cc
* Revert "[MKL-DNN] MKL-DNN specific Tensor modification (#15429)". test=develop. This reverts commit dec9cf5.
* - concat compilation fix; lint, test=develop
  - Lint fixes, test=develop
  - Lint fixes, test=develop
  - Fix Transpose MKLDNN op, test=develop
This is the first of a series of PRs on Tensor modifications for MKL-DNN operations, replacing the storage of the MKL-DNN format with storage of the MKL-DNN primitive descriptor.
The idea: currently the MKL-DNN format is stored in the Tensor. The format alone is not good enough for the Transpose MKL-DNN op, as it is ambiguous in terms of identifying the actual MKL-DNN layout. Moreover, it requires recreating primitive descriptors for a given tensor.
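As a minimal sketch of what storing the full primitive descriptor buys (MKL-DNN 0.x API; the function below is illustrative, not the PR's exact code): an op can read back dims, data type and format from the stored descriptor without rebuilding anything from a bare format enum.

```cpp
#include <mkldnn.hpp>

// Illustrative: everything about the MKL-DNN memory can be recovered from
// the stored primitive descriptor, instead of being reconstructed from a
// format enum plus the Tensor's dims on every use.
void InspectStoredPrimDesc(const mkldnn::memory::primitive_desc& pd) {
  mkldnn::memory::desc md = pd.desc();
  int ndims = md.data.ndims;
  auto fmt = static_cast<mkldnn::memory::format>(md.data.format);
  auto dtype = static_cast<mkldnn::memory::data_type>(md.data.data_type);
  (void)ndims; (void)fmt; (void)dtype;
}
```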
Advantages:
In this PR: