
fix dropout bug in backward when input is 1d tensor #26837

Merged: 4 commits, Sep 2, 2020

Conversation

@huangjun12 (Contributor) commented Aug 31, 2020

PR types

Bug fixes

PR changes

OPs

Describe

fix dropout op bug in backward when input is 1d tensor

@paddle-bot-old commented:

Thanks for your contribution!
Please wait for the CI results first. See the Paddle CI Manual for details.

@paddle-bot-old commented Aug 31, 2020:

✅ This PR's description meets the template requirements!
Please wait for other CI results.

```diff
 if len(drop_axes) > len(input_shape):
     raise ValueError(
-        "length of axis should not greater than dimensions of x:{}, but get length of drop axes: {}".
+        "length of axis should not greater than dimensions of x:{}, but get length of axis: {}".
         format(len(input_shape), max(drop_axes)))
```

A reviewer commented:

be greater than

@huangjun12 (Contributor, Author) replied: Done

```diff
-if max(drop_axes) > len(input_shape) - 1:
-    raise ValueError("axis value should less than dimensions of x:{}, but get drop_axes value:{} " \
+if min(drop_axes) < 0 or max(drop_axes) > len(input_shape) - 1:
+    raise ValueError("axis value should greater equal than 0 and less than dimensions of x:{}, but get axis value:{} " \
```

A reviewer commented:

be greater than

@huangjun12 (Contributor, Author) replied: Done
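The two hunks above tighten dropout's axis validation: the length of `axis` may not exceed the rank of `x`, and every axis value must lie in `[0, rank - 1]`. A minimal, self-contained sketch of the combined checks (the helper name `check_drop_axes` is hypothetical, not Paddle's actual code, and the messages use the reviewer's suggested "be greater than" wording):

```python
# Hypothetical sketch of the axis validation discussed above; the names
# check_drop_axes / input_shape / drop_axes are illustrative only.
def check_drop_axes(input_shape, drop_axes):
    if len(drop_axes) > len(input_shape):
        raise ValueError(
            "length of axis should not be greater than dimensions of x:{}, "
            "but get length of axis: {}".format(len(input_shape), len(drop_axes)))
    if min(drop_axes) < 0 or max(drop_axes) > len(input_shape) - 1:
        raise ValueError(
            "axis value should be greater than or equal to 0 and less than "
            "dimensions of x:{}, but get axis value: {}".format(
                len(input_shape), max(drop_axes)))

# A 1D input only accepts axis [0]; [1], [-1], and [0, 1] all raise.
check_drop_axes([2000], [0])
```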

```diff
-auto dY = EigenMatrix<T>::Reshape(*grad_y, 1);
+auto M = EigenVector<uint8_t>::Flatten(*mask);
+auto dX = EigenVector<T>::Flatten(*grad_x);
+auto dY = EigenVector<T>::Flatten(*grad_y);
```

A reviewer commented:

Should a unit test with a 1D input be added to ensure correctness? Was this issue never caught before because the 1D case was never tested?

@huangjun12 (Contributor, Author) replied: Done
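The C++ hunk above replaces a matrix reshape with flattening, which works for tensors of any rank, including the previously broken rank-1 case. A NumPy-only analogy of the fixed backward pass (a sketch, not Paddle's kernel; scaling conventions such as upscale-in-train are ignored here):

```python
import numpy as np

# Flatten grad and mask, multiply elementwise, restore the original shape.
# Flattening works for any rank, which is the essence of the 1D fix.
def dropout_backward(grad_y, mask):
    return (grad_y.reshape(-1) * mask.reshape(-1)).reshape(grad_y.shape)

grad_y = np.ones(6, dtype="float32")                # 1D gradient
mask = np.array([1, 0, 1, 1, 0, 1], dtype="uint8")  # dropout mask
dx = dropout_backward(grad_y, mask)
assert dx.shape == (6,)   # rank-1 shape preserved
assert dx.sum() == 4      # dropped entries contribute zero gradient
```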

@XiaoguangHu01 (Contributor) previously approved these changes Sep 1, 2020:

LGTM

```python
class TestDropoutOpInput1d(OpTest):
    def setUp(self):
        self.op_type = "dropout"
        self.inputs = {'X': np.random.random((2000)).astype("float32")}
```

A reviewer commented:

When writing a shape, a 1-tuple needs a trailing comma.
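The point about the 1-tuple can be shown directly: in Python `(2000)` is just the integer 2000, while `(2000,)` is a one-element tuple. NumPy happens to accept a bare int as a size, which is why the test ran anyway:

```python
import numpy as np

assert (2000) == 2000                    # parentheses alone don't make a tuple
assert isinstance((2000,), tuple)        # the trailing comma does
assert not isinstance((2000), tuple)

a = np.random.random((2000))             # int size: still accepted by NumPy
b = np.random.random((2000,))            # explicit 1-tuple shape (clearer)
assert a.shape == b.shape == (2000,)
```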

@iclementine left a comment:

LGTM

@iclementine merged commit e480168 into PaddlePaddle:develop Sep 2, 2020

huangjun12 added a commit to huangjun12/Paddle that referenced this pull request Sep 3, 2020:

* fix dropout bug in backward when input is 1d tensor, test=develop
* add test case and refine error message, test=develop
* refine error message, test=develop

iclementine pushed a commit that referenced this pull request Sep 4, 2020 (same commit messages).
3 participants