Det & Slogdet #34992
Conversation
…e determinant value.
Thanks for your contribution!
if (tid < numel) {
  out[tid] = static_cast<T>(1);
}
}
This CUDA kernel can be replaced with set_constant.
This part will be removed later.
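For reference, a minimal sketch of the suggested replacement, assuming Paddle's math::SetConstant functor from paddle/fluid/operators/math/math_function.h (the helper name FillOnes is illustrative, not from the PR):

#include "paddle/fluid/operators/math/math_function.h"

// Hypothetical sketch: replace the hand-written fill kernel with the
// SetConstant functor, which works for CPU and GPU device contexts alike,
// so no dedicated CUDA kernel is needed.
template <typename DeviceContext, typename T>
void FillOnes(const DeviceContext& dev_ctx, framework::Tensor* out) {
  math::SetConstant<DeviceContext, T> set_constant;
  set_constant(dev_ctx, out, static_cast<T>(1));  // fill every element with 1
}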
auto input_dim_size = input->dims().size();

std::vector<int64_t> res_in = vectorize(framework::stride(input->dims()));
paddle::framework::Tensor input_stride_tensor;
Line 21 already declares using Tensor = framework::Tensor; so just use Tensor here.
Done.
auto output_dim_size = output->dims().size();

int batch_count = 1;
for (int i = 0; i < input->dims().size() - 2; i++) {
Line 39 already declares auto input_dim_size = input->dims().size(); — reuse it here.
Done.
auto end_idx = input_vec.begin() + (i + 1) * rank * rank;
std::vector<T> sub_vec(begin_idx,
                       end_idx);  // get every square matrix data
Eigen::MatrixXf matrix(rank, rank);
Eigen::MatrixXf only supports fp32 computation, but the kernel is registered for more types than fp32. You could handle this the way fp16_type_traits does:
https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/operators/amp/fp16_type_traits.h
Done.
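A minimal sketch of the templated-matrix idea, assuming a plain alias on the kernel's scalar type T (the alias name EigenMatrix is illustrative; fp16 would still need a trait such as the one in fp16_type_traits.h):

// Hypothetical sketch: let the Eigen matrix type follow the kernel's
// scalar type T instead of hard-coding float via Eigen::MatrixXf.
template <typename T>
using EigenMatrix =
    Eigen::Matrix<T, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor>;

// Inside the kernel, EigenMatrix<T> matrix(rank, rank); then works for
// float and double alike.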
Quite curious whether we really have to copy the data just to use Eigen::MatrixXf::determinant. Have you seen https://eigen.tuxfamily.org/dox/group__TutorialMapClass.html ? I am not sure whether the link helps, but I think we can give it a try.
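A minimal sketch of the Map idea from that tutorial, assuming the input buffer stores each matrix row-major and contiguously (the function name Determinant is illustrative):

#include <Eigen/Dense>

// Hypothetical sketch: view the existing buffer as a rank x rank matrix
// with Eigen::Map, computing the determinant without any copy.
template <typename T>
T Determinant(const T* data, int rank) {
  using Matrix =
      Eigen::Matrix<T, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor>;
  Eigen::Map<const Matrix> matrix(data, rank, rank);
  return matrix.determinant();
}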
auto* output = context.Output<framework::Tensor>("Out");

int batch_count = 1;
for (int i = 0; i < input->dims().size() - 2; i++) {
Add a comment here explaining why 2 is subtracted.
Also add a PADDLE_ENFORCE to make sure input->dims().size() >= 2.
Both are done.
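A minimal sketch of what the two suggestions might look like together (the error wording is illustrative):

// Hypothetical sketch: validate the rank, then document why 2 is
// subtracted when counting batch dimensions.
PADDLE_ENFORCE_GE(input->dims().size(), 2,
                  platform::errors::InvalidArgument(
                      "Input must be at least a 2-D matrix, but got %d-D.",
                      input->dims().size()));
// The last two dimensions hold the (n, n) square matrix; every
// dimension before them is a batch dimension, hence the "- 2".
int batch_count = 1;
for (int i = 0; i < input->dims().size() - 2; i++) {
  batch_count *= input->dims()[i];
}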
auto end_idx = input_vec.begin() + (i + 1) * rank * rank;
std::vector<T> sub_vec(begin_idx,
                       end_idx);  // get every square matrix data
Eigen::MatrixXf matrix(rank, rank);
Same issue as above.
Eigen::MatrixXf matrix(rank, rank);
for (int i = 0; i < rank; ++i) {
  for (int j = 0; j < rank; ++j) {
    matrix(i, j) = sub_vec[rank * i + j];
Try filling the matrix a row at a time instead of element-wise.
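A minimal sketch of a row-wise fill, assuming sub_vec stores the matrix row-major (Eigen's default MatrixXf storage is column-major, so the assignment goes through .row()):

// Hypothetical sketch: copy one full row per iteration rather than one
// element per iteration.
for (int i = 0; i < rank; ++i) {
  matrix.row(i) = Eigen::Map<const Eigen::RowVectorXf>(
      sub_vec.data() + i * rank, rank);
}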
int batch_count = 1;
for (int i = 0; i < input->dims().size() - 2; i++) {
  batch_count *= input_dim[i];
}
This logic is called in many places; please factor it into a helper function.
Done.
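A minimal sketch of such a helper (the name GetBatchCount is illustrative, not from the PR):

// Hypothetical helper wrapping the repeated batch-count computation.
static inline int64_t GetBatchCount(const framework::DDim& dims) {
  int64_t batch_count = 1;
  // Every dimension except the trailing (n, n) matrix is a batch dim.
  for (int i = 0; i < dims.size() - 2; ++i) {
    batch_count *= dims[i];
  }
  return batch_count;
}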
auto* output = context.Output<framework::Tensor>("Out");

int batch_count = 1;
for (int i = 0; i < input->dims().size() - 2; i++) {
Also add a PADDLE_ENFORCE to make sure input->dims().size() >= 2.
def init_data(self):
    self.case = np.random.randn(3, 3, 3, 3).astype('float32')
    self.inputs = {'Input': self.case}
    self.target = np.array(np.linalg.slogdet(self.inputs['Input']))
Same as above
Done.
void InferShape(framework::InferShapeContext *ctx) const override {
  OP_INOUT_CHECK(ctx->HasInput("Input"), "Input", "Input",
                 "DeterminantGradOp");
The backward pass also needs the value of Out, so please add:
OP_INOUT_CHECK(ctx->HasInput("Out"), "Input", "Out",
               "DeterminantGradOp");
Done.
protected:
 void Apply(GradOpPtr<T> grad_op) const override {
   grad_op->SetType("determinant_grad");
   grad_op->SetInput("Input", this->Input("Input"));
The backward pass also needs the value of Out, so please add:
grad_op->SetInput("Out", this->Output("Out"));
Done.
void InferShape(framework::InferShapeContext *ctx) const override {
  OP_INOUT_CHECK(ctx->HasInput("Input"), "Input", "Input",
                 "SlogDeterminantGradOp");
Same as above.
Done.
protected:
 void Apply(GradOpPtr<T> grad_op) const override {
   grad_op->SetType("slogdeterminant_grad");
   grad_op->SetInput("Input", this->Input("Input"));
Same as above.
Done.
ops::SlogDeterminantGradOpMaker<paddle::imperative::OpBase>);

REGISTER_OPERATOR(slogdeterminant_grad,
                  ops::DeterminantGradOp)  // reuse det grad op
This should be ops::SlogDeterminantGradOp.
DeterminantGradOp is reused here for now.
Done.
void InferShape(framework::InferShapeContext *ctx) const override {
  OP_INOUT_CHECK(ctx->HasInput("Input"), "Input", "Input", "determinant");
  OP_INOUT_CHECK(ctx->HasOutput("Out"), "Output", "Out", "determinant");
Shouldn't this at least do shape inference at compile time?
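A minimal sketch of compile-time shape inference, assuming the output drops the trailing (n, n) matrix dimensions (that shape contract is an assumption, not taken from the PR):

// Hypothetical sketch: infer the output shape from the input shape by
// stripping the trailing (n, n) matrix dimensions.
void InferShape(framework::InferShapeContext *ctx) const override {
  OP_INOUT_CHECK(ctx->HasInput("Input"), "Input", "Input", "determinant");
  OP_INOUT_CHECK(ctx->HasOutput("Out"), "Output", "Out", "determinant");
  auto in_dims = ctx->GetInputDim("Input");
  auto out_dims = framework::vectorize(in_dims);
  out_dims.erase(out_dims.end() - 2, out_dims.end());  // drop (n, n)
  ctx->SetOutputDim("Out", framework::make_ddim(out_dims));
}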
LGTM
LGTM for skip_check_grad_ci
20b5c87
LGTM for API change
Add new API: paddle.linalg.det & paddle.linalg.slogdet (alias: paddle.det & paddle.slogdet)
PR types
New features
PR changes
APIs
Describe
Add new API: paddle.linalg.det & paddle.linalg.slogdet
API alias: paddle.det & paddle.slogdet
Example:
paddle.det():
Calculates the determinant value of a square matrix or batches of square matrices.
Supports float and double input.
Args:
    x (Tensor): the input matrix of size (n, n), or the batch of matrices of size (*, n, n), where * is one or more batch dimensions.
Returns:
    y (Tensor): the determinant value of a square matrix or batches of square matrices.
paddle.slogdet():
Calculates the sign and natural logarithm of the absolute value of the determinant of a square matrix or batches of square matrices.
The determinant can be computed as ``sign * exp(logabsdet)``.
Supports float and double input.
Note that for matrices with a zero determinant, this returns (0, -inf).
Args:
    x (Tensor): the batch of matrices of size :math:`(*, n, n)`, where :math:`*` is one or more batch dimensions.
Returns:
    y (Tensor): a tensor containing the sign of the determinant and the natural logarithm of the absolute value of the determinant, respectively.