
Det &Slogdet #34992

Merged
merged 22 commits, Sep 22, 2021
Conversation

@huangxu96 (Contributor) commented Aug 18, 2021

PR types

New features

PR changes

APIs

Describe

Add new API: paddle.linalg.det & paddle.linalg.slogdet

API alias: paddle.det & paddle.slogdet

Example:

paddle.det():

Calculates the determinant value of a square matrix or of batches of square matrices.

Supports inputs of float and double types.

Args:
    x (Tensor): the input matrix of size (n, n), or a batch of matrices of size
        (*, n, n) where * is one or more batch dimensions.

Returns:
    y (Tensor): the determinant values of the square matrix or batch of square matrices.

Example:
    .. code-block:: python

        import paddle

        x = paddle.randn([3, 3, 3])
        A = paddle.det(x)
        print(A)
        # [ 0.02547996,  2.52317095, -6.15900707]
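As a cross-check of the batched semantics described above, the same shape behavior can be sketched with NumPy's np.linalg.det (an analogy for illustration, not Paddle's implementation):

```python
import numpy as np

# A batch of three 3x3 matrices, mirroring paddle.randn([3, 3, 3]).
x = np.random.default_rng(0).standard_normal((3, 3, 3))

# The batched determinant computes one value per trailing (n, n) matrix,
# so a (3, 3, 3) input yields a result of shape (3,).
d = np.linalg.det(x)
assert d.shape == (3,)
```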

paddle.slogdet():

Calculates the sign and the natural logarithm of the absolute value of the determinant of a square matrix or of batches of square matrices.
The determinant can be recovered as ``sign * exp(logabsdet)``.

Supports inputs of float and double types.

Note that for matrices whose determinant is zero, this returns (0, -inf).

Args:
    x (Tensor): the input matrix of size :math:`(n, n)`, or a batch of matrices of size
        :math:`(*, n, n)` where :math:`*` is one or more batch dimensions.

Returns:
    y (Tensor): a tensor containing the signs of the determinants and the natural logarithms
        of the absolute values of the determinants, respectively.

Example:
    .. code-block:: python

        import paddle

        x = paddle.randn([3, 3, 3])
        A = paddle.slogdet(x)
        print(A)
        # [[ 1.        ,  1.        , -1.        ],
        #  [-0.98610914, -0.43010661, -0.10872950]]
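The relationship stated above, determinant = sign * exp(logabsdet), can be checked with NumPy's np.linalg.slogdet (an illustration of the semantics, not Paddle's implementation):

```python
import numpy as np

x = np.random.default_rng(1).standard_normal((3, 3, 3))

# slogdet returns the sign and the log of |det| for each matrix in the batch.
sign, logabsdet = np.linalg.slogdet(x)

# The determinant is recovered as sign * exp(logabsdet).
assert np.allclose(sign * np.exp(logabsdet), np.linalg.det(x))
```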

@paddle-bot-old commented:

Thanks for your contribution!
Please wait for the result of CI first. See the Paddle CI Manual for details.

@huangxu96 huangxu96 changed the title Slogdet Det &Slogdet Sep 8, 2021
if (tid < numel) {
out[tid] = static_cast<T>(1);
}
}
Contributor: This CUDA kernel can be replaced with set_constant.

Contributor (author): This part will be removed later.

auto input_dim_size = input->dims().size();

std::vector<int64_t> res_in = vectorize(framework::stride(input->dims()));
paddle::framework::Tensor input_stride_tensor;
Contributor: line 21 already declares using Tensor = framework::Tensor;

Contributor (author): Done.

auto output_dim_size = output->dims().size();

int batch_count = 1;
for (int i = 0; i < input->dims().size() - 2; i++) {
Contributor: line 39 already declares auto input_dim_size = input->dims().size();

Contributor (author): Done.

auto end_idx = input_vec.begin() + (i + 1) * rank * rank;
std::vector<T> sub_vec(begin_idx,
end_idx); // get every square matrix data
Eigen::MatrixXf matrix(rank, rank);
Contributor: Eigen::MatrixXf appears to support only fp32 computation, but the registered types are not limited to fp32; you can handle the other types following this approach:
https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/operators/amp/fp16_type_traits.h

Contributor (author): Done.

Collaborator: I am quite curious whether copying the data is the only way to use Eigen::MatrixXf::determinant. Have you seen https://eigen.tuxfamily.org/dox/group__TutorialMapClass.html ? I am not sure whether the link will help, but I think we can give it a try.
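The reviewer's point is that Eigen::Map can wrap an existing buffer instead of copying it element by element. A NumPy analogy of the same no-copy idea (an analogy only, not the Eigen API):

```python
import numpy as np

# Raw contiguous buffer holding two 3x3 matrices back to back.
flat = np.arange(18, dtype=np.float64)

# reshape returns a view over the same memory -- no element-wise copy,
# loosely analogous to wrapping raw data with Eigen::Map.
mats = flat.reshape(2, 3, 3)
assert mats.base is flat  # same underlying buffer

dets = np.linalg.det(mats)  # one determinant per matrix in the batch
```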

auto* output = context.Output<framework::Tensor>("Out");

int batch_count = 1;
for (int i = 0; i < input->dims().size() - 2; i++) {
Contributor: Please add a comment here explaining why 2 is subtracted.

Member: Also add a Paddle Enforce to make sure input->dims().size() >= 2.

Contributor (author): Both are done.
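For context on the `- 2` in the loop bound: the last two dimensions hold each (n, n) square matrix and every leading dimension is a batch dimension, so the batch count is the product of all dims except the last two. A small Python sketch (with a hypothetical helper name):

```python
# Hypothetical helper illustrating the batch-count computation;
# the last two dims are the (n, n) matrix, the rest are batch dims.
def batch_count(shape):
    assert len(shape) >= 2, "input must be at least a single (n, n) matrix"
    count = 1
    for d in shape[:-2]:
        count *= d
    return count

assert batch_count((4, 4)) == 1         # a single matrix, no batch dims
assert batch_count((2, 5, 3, 3)) == 10  # 2 * 5 batched 3x3 matrices
```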

auto end_idx = input_vec.begin() + (i + 1) * rank * rank;
std::vector<T> sub_vec(begin_idx,
end_idx); // get every square matrix data
Eigen::MatrixXf matrix(rank, rank);
Contributor: Same issue as above.

Eigen::MatrixXf matrix(rank, rank);
for (int i = 0; i < rank; ++i) {
for (int j = 0; j < rank; ++j) {
matrix(i, j) = sub_vec[rank * i + j];
Contributor: Consider filling the matrix row by row instead of element by element.

int batch_count = 1;
for (int i = 0; i < input->dims().size() - 2; i++) {
batch_count *= input_dim[i];
}
Contributor: This computation is repeated in many places; please wrap it in a helper function.

Contributor (author): Done.

auto* output = context.Output<framework::Tensor>("Out");

int batch_count = 1;
for (int i = 0; i < input->dims().size() - 2; i++) {
Member: Also add a Paddle Enforce to make sure input->dims().size() >= 2.

def init_data(self):
self.case = np.random.randn(3, 3, 3, 3).astype('float32')
self.inputs = {'Input': self.case}
self.target = np.array(np.linalg.slogdet(self.inputs['Input']))
Member: Same as above.

Contributor (author): Done.


void InferShape(framework::InferShapeContext *ctx) const override {
OP_INOUT_CHECK(ctx->HasInput("Input"), "Input", "Input",
"DeterminantGradOp");
Contributor: The backward pass also needs the value of Out, so please add:

OP_INOUT_CHECK(ctx->HasInput("Out"), "Input", "Out",
                   "DeterminantGradOp");
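For context on why the backward op needs Out: the gradient of the determinant is d det(X)/dX = det(X) * inv(X)^T, which reuses the forward output det(X). A NumPy sketch checking this identity against finite differences (an illustration of the math, not the operator code):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4))

# Analytic gradient of det(X) with respect to X: det(X) * inv(X)^T.
det = np.linalg.det(X)
analytic = det * np.linalg.inv(X).T

# Central finite differences, entry by entry.
eps = 1e-6
numeric = np.zeros_like(X)
for i in range(4):
    for j in range(4):
        Xp = X.copy(); Xp[i, j] += eps
        Xm = X.copy(); Xm[i, j] -= eps
        numeric[i, j] = (np.linalg.det(Xp) - np.linalg.det(Xm)) / (2 * eps)

assert np.allclose(analytic, numeric, atol=1e-4)
```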

Contributor (author): Done.


protected:
void Apply(GradOpPtr<T> grad_op) const override {
grad_op->SetType("determinant_grad");
grad_op->SetInput("Input", this->Input("Input"));
Contributor: The backward pass also needs the value of Out, so please add:

grad_op->SetInput("Out", this->Output("Out"));

Contributor (author): Done.


void InferShape(framework::InferShapeContext *ctx) const override {
OP_INOUT_CHECK(ctx->HasInput("Input"), "Input", "Input",
"SlogDeterminantGradOp");
Contributor: Same as above.

Contributor (author): Done.

protected:
void Apply(GradOpPtr<T> grad_op) const override {
grad_op->SetType("slogdeterminant_grad");
grad_op->SetInput("Input", this->Input("Input"));
Contributor: Same as above.

Contributor (author): Done.

ops::SlogDeterminantGradOpMaker<paddle::imperative::OpBase>);

REGISTER_OPERATOR(slogdeterminant_grad,
ops::DeterminantGradOp) // reuse det grad op
Contributor: This should be ops::SlogDeterminantGradOp.

Contributor (author): DeterminantGradOp is reused here for now.

Contributor (author): Done.


void InferShape(framework::InferShapeContext *ctx) const override {
OP_INOUT_CHECK(ctx->HasInput("Input"), "Input", "Input", "determinant");
OP_INOUT_CHECK(ctx->HasOutput("Out"), "Output", "Out", "determinant");
Collaborator: Shouldn't we at least do infer shape at compile time?

zhhsplendid (Member) previously approved these changes Sep 18, 2021:

LGTM

zhangting2020 (Contributor) previously approved these changes Sep 18, 2021:

LGTM for skip_check_grad_ci

lanxianghit (Contributor) reviewed:

LGTM for API change

@zhhsplendid zhhsplendid merged commit 9ce45dd into PaddlePaddle:develop Sep 22, 2021
huangxu96 added a commit to huangxu96/Paddle that referenced this pull request Sep 24, 2021
Add new API: paddle.linalg.det & paddle.linalg.slogdet

API alias: paddle.det & paddle.slogdet
ghost pushed a commit to piotrekobi/Paddle that referenced this pull request Sep 24, 2021
Add new API: paddle.linalg.det & paddle.linalg.slogdet

API alias: paddle.det & paddle.slogdet
zhhsplendid pushed a commit to zhhsplendid/Paddle that referenced this pull request Sep 26, 2021
Add new API: paddle.linalg.det & paddle.linalg.slogdet

API alias: paddle.det & paddle.slogdet
lanxianghit pushed a commit that referenced this pull request Sep 26, 2021
This PR added det and slogdet API to release/2.2
It is cherry-pick from #34992 and #36013
AnnaTrainingG pushed a commit to AnnaTrainingG/Paddle that referenced this pull request Sep 29, 2021
Add new API: paddle.linalg.det & paddle.linalg.slogdet

API alias: paddle.det & paddle.slogdet
9 participants