Add Matmul op #26411
Conversation
Thanks for your contribution!
✅ This PR's description meets the template requirements!
@@ -5095,7 +5095,65 @@ def matmul(x, y, transpose_x=False, transpose_y=False, alpha=1.0, name=None):
    y = fluid.layers.data(name='y', shape=[3, 2], dtype='float32')
    out = fluid.layers.matmul(x, y, True, True)
    """
-    return paddle.matmul(x, y, transpose_x, transpose_y, alpha, name)
+    attrs = {
Please add a deprecated decorator to fluid.layers.nn.matmul.
OK, fixed.
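The requested deprecation could look something like the sketch below. This is a hypothetical, generic warning-emitting decorator for illustration only; the names `deprecated` and `update_to` are assumptions, and Paddle's own deprecation utility may differ.

```python
import functools
import warnings


def deprecated(update_to):
    """Hypothetical decorator: warn callers that a function has moved."""
    def wrapper(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            warnings.warn(
                "{} is deprecated, use {} instead".format(fn.__name__, update_to),
                DeprecationWarning,
                stacklevel=2,
            )
            return fn(*args, **kwargs)
        return inner
    return wrapper


@deprecated(update_to="paddle.matmul")
def matmul(x, y):
    # Placeholder body for illustration; the real function forwards
    # to the new paddle.matmul API.
    return x * y
```

Calling the decorated function still returns the old result, but emits a `DeprecationWarning` pointing users at the replacement API.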
python/paddle/tensor/linalg.py
Outdated
-    Currently, the input tensors' rank can be any, but when the rank of any
-    inputs is bigger than 3, this two inputs' rank should be equal.
+    Currently, the input tensors' rank can be any, `matmul` can be used to
rank -> number of dimensions
OK, fixed.
python/paddle/tensor/linalg.py
Outdated
-    Also note that if the raw tensor :math:`x` or :math:`y` is rank-1 and
-    nontransposed, the prepended or appended dimension :math:`1` will be
-    removed after matrix multiplication.
+    are transposed. If the tensor is rank-1 of shape, the transpose is invalid.
rank -> ndim
OK, fixed.
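The dimension-removal behavior under discussion matches NumPy's `np.matmul` convention, which the docstring's wording mirrors, so NumPy can illustrate it (a sketch of the semantics, not the op's implementation):

```python
import numpy as np

v = np.ones(3)       # ndim-1 vector
m = np.ones((3, 4))  # matrix

# For the multiply, v is treated as shape (1, 3); the prepended 1 is
# then removed from the result: (1, 4) -> (4,)
out = np.matmul(v, m)
print(out.shape)  # (4,)
```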
+      the matrix-vector product is obtained.
+    - If both arguments are at least 1-dimensional and at least one argument
+      is N-dimensional (where N > 2), then a batched matrix multiply is obtained.
+      If the first argument is 1-dimensional, a 1 is prepended to its dimension
The later `if` items are missing the `-` bullet marker.
This is a single paragraph; I've added line breaks.
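The cases enumerated in the docstring can be checked against NumPy, whose `np.matmul` follows the same rules the text describes (shown as an illustration of the documented semantics, not of Paddle's kernel):

```python
import numpy as np

# Matrix-vector product: (3, 4) @ (4,) -> (3,)
assert np.matmul(np.ones((3, 4)), np.ones(4)).shape == (3,)

# Batched matrix multiply: (5, 3, 4) @ (5, 4, 2) -> (5, 3, 2)
assert np.matmul(np.ones((5, 3, 4)), np.ones((5, 4, 2))).shape == (5, 3, 2)

# 1-dimensional first argument: a 1 is prepended, the batched multiply
# runs, and the prepended 1 is removed: (4,) @ (5, 4, 2) -> (5, 2)
assert np.matmul(np.ones(4), np.ones((5, 4, 2))).shape == (5, 2)
```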
python/paddle/tensor/linalg.py
Outdated
-        x (Variable): The input variable which is a Tensor or LoDTensor.
-        y (Variable): The input variable which is a Tensor or LoDTensor.
+        x (Tensor): The input tensor which is a Tensor or LoDTensor.
+        y (Tensor): The input tensor which is a Tensor or LoDTensor.
no LoDTensor.
Fixed.
lgtm
LGTM
PR types
Others
PR changes
OPs
Describe
Add matmul op to support the broadcast rule.
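The broadcast rule this PR adds can be illustrated with NumPy, whose `np.matmul` broadcasts the leading batch dimensions in the same way the new docstring describes (a sketch of the intended semantics under that assumption):

```python
import numpy as np

x = np.ones((2, 1, 3, 4))
y = np.ones((5, 4, 6))

# Batch dims (2, 1) and (5,) broadcast to (2, 5); the trailing matrix
# dims multiply as (3, 4) @ (4, 6) -> (3, 6).
out = np.matmul(x, y)
print(out.shape)  # (2, 5, 3, 6)
```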