fix dropout bug in backward when input is 1d tensor #26837
Conversation
Thanks for your contribution!
✅ This PR's description meets the template requirements!
  .format(len(input_shape), max(drop_axes)))
  if len(drop_axes) > len(input_shape):
      raise ValueError(
-         "length of axis should not greater than dimensions of x:{}, but get length of drop axes: {}".
+         "length of axis should not greater than dimensions of x:{}, but get length of axis: {}".
be greater than
Done
-     if max(drop_axes) > len(input_shape) - 1:
-         raise ValueError("axis value should less than dimensions of x:{}, but get drop_axes value:{} " \
+     if min(drop_axes) < 0 or max(drop_axes) > len(input_shape) - 1:
+         raise ValueError("axis value should greater equal than 0 and less than dimensions of x:{}, but get axis value:{} " \
be greater than
Done
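Putting the two review fixes together, a minimal sketch of what the corrected validation might look like (the helper name `_check_dropout_axes` is hypothetical; `drop_axes` and `input_shape` follow the diff above):

```python
def _check_dropout_axes(drop_axes, input_shape):
    # axis list must not be longer than the rank of x
    if len(drop_axes) > len(input_shape):
        raise ValueError(
            "length of axis should not be greater than dimensions of x:{}, "
            "but get length of axis: {}".format(
                len(input_shape), len(drop_axes)))
    # every axis value must lie in [0, rank(x) - 1]
    if min(drop_axes) < 0 or max(drop_axes) > len(input_shape) - 1:
        raise ValueError(
            "axis value should be greater than or equal to 0 and less than "
            "dimensions of x:{}, but get axis value: {}".format(
                len(input_shape), max(drop_axes)))
```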
  auto dY = EigenMatrix<T>::Reshape(*grad_y, 1);
  auto M = EigenVector<uint8_t>::Flatten(*mask);
  auto dX = EigenVector<T>::Flatten(*grad_x);
  auto dY = EigenVector<T>::Flatten(*grad_y);
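For reference, the gradient this kernel computes is purely elementwise, which is why flattening all three tensors handles a 1d input where the old `EigenMatrix<T>::Reshape(*grad_y, 1)` path did not. A rough NumPy sketch of that computation (the `upscale_in_train` scaling branch is an assumption about the attribute in use):

```python
import numpy as np

def dropout_grad_reference(grad_y, mask, dropout_prob, upscale_in_train=False):
    # Flatten both tensors, mirroring EigenVector<T>::Flatten in the kernel;
    # the elementwise product works for any rank, including 1d inputs.
    dy = np.asarray(grad_y, dtype=np.float32).ravel()
    m = np.asarray(mask).ravel().astype(np.float32)
    dx = dy * m
    if upscale_in_train and dropout_prob < 1.0:
        # assumed "upscale_in_train" scaling; the default mode needs no rescale
        dx = dx / (1.0 - dropout_prob)
    return dx.reshape(np.shape(grad_y))  # restore the original shape of x
```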
Could a unit test with a 1D input be added to verify correctness? Was this bug never caught before because the 1D case was never tested?
Done
LGTM
  class TestDropoutOpInput1d(OpTest):
      def setUp(self):
          self.op_type = "dropout"
          self.inputs = {'X': np.random.random((2000)).astype("float32")}
When specifying a shape, a 1-tuple needs a trailing comma, i.e. `(2000, )` rather than `(2000)`.
LGTM
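For context, a sketch of what the full 1d test case might look like once the shape is written with the trailing comma; the attrs, expected outputs, and check methods are assumed to follow the pattern of the existing dropout OpTest cases:

```python
import numpy as np
from op_test import OpTest  # test harness used by the existing dropout unit tests


class TestDropoutOpInput1d(OpTest):
    def setUp(self):
        self.op_type = "dropout"
        # (2000, ) with the trailing comma is a 1-tuple shape; (2000) is just the int 2000
        self.inputs = {'X': np.random.random((2000, )).astype("float32")}
        self.attrs = {'dropout_prob': 0.0, 'fixed_seed': True, 'is_test': False}
        self.outputs = {
            'Out': self.inputs['X'],
            'Mask': np.ones((2000, )).astype('uint8')
        }

    def test_check_output(self):
        self.check_output()

    def test_check_grad_normal(self):
        self.check_grad(['X'], 'Out')
```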
* fix dropout bug in backward when input is 1d tensor, test=develop
* add test case and refine error message, test=develop
* refine error message, test=develop
PR types
Bug fixes
PR changes
OPs
Describe
fix dropout op bug in backward when input is 1d tensor
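For illustration, a minimal repro sketch of the fixed scenario, assuming the 2.0-style dygraph API (`paddle.nn.functional.dropout`); before this fix, the backward pass through dropout on a 1d tensor failed in the kernel path shown above:

```python
import numpy as np
import paddle
import paddle.nn.functional as F

# Build a 1d input that requires a gradient.
x = paddle.to_tensor(
    np.random.random((2000, )).astype("float32"), stop_gradient=False)
y = F.dropout(x, p=0.5)    # forward on a 1d tensor
loss = paddle.sum(y)
loss.backward()            # backward previously broke for 1d inputs
print(x.gradient().shape)  # (2000,), same shape as x
```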