
[phi] Transfer lgamma, kldiv_loss, isclose, cumprod kernels into phi and pass the tests of these four kernels #39770

Merged: 19 commits merged into PaddlePaddle:develop on Mar 15, 2022

Conversation

@2742195759 (Contributor) commented on Feb 21, 2022

PR types

Others

PR changes

Others

Describe

[phi] Transfer lgamma, kldiv_loss, isclose, cumprod kernels into phi and pass the tests of these four kernels

@paddle-bot-old commented:

Thanks for your contribution!
Please wait for the result of CI first. See the Paddle CI Manual for details.

// limitations under the License.

#include <unsupported/Eigen/SpecialFunctions>
#include "paddle/fluid/operators/elementwise/elementwise_op_impl.cu.h"
Contributor: You can use the header file under pten here.

std::vector<const DenseTensor*> ins = {&x};
std::vector<DenseTensor*> outs = {out};
auto functor = CudaLgammaFunctor<T>();
paddle::operators::LaunchSameDimsElementwiseCudaKernel<T>(
Contributor: Take a look at how this function is implemented; it actually calls the elementwise component under Pten.
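For reference, a minimal sketch of what calling the phi elementwise component directly might look like, assuming the helper phi::funcs::ElementwiseKernel from paddle/phi/kernels/funcs/elementwise_base.h and the CudaLgammaFunctor shown above:

#include "paddle/phi/kernels/funcs/elementwise_base.h"

template <typename T, typename Context>
void LgammaKernel(const Context& dev_ctx, const DenseTensor& x, DenseTensor* out) {
  dev_ctx.template Alloc<T>(out);              // allocate the output on the kernel's device
  std::vector<const DenseTensor*> ins = {&x};  // single input
  std::vector<DenseTensor*> outs = {out};      // single output
  // Launch the same-dims elementwise kernel provided by phi instead of the
  // fluid-side LaunchSameDimsElementwiseCudaKernel wrapper.
  phi::funcs::ElementwiseKernel<T>(dev_ctx, ins, &outs, CudaLgammaFunctor<T>());
}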

@@ -12,7 +12,12 @@
// See the License for the specific language governing permissions and
// limitations under the License.

#include "paddle/fluid/operators/lgamma_op.h"
#include <unsupported/Eigen/SpecialFunctions>
Contributor: Is this header file redundant?

#include "paddle/fluid/framework/infershape_utils.h"
#include "paddle/fluid/framework/op_registry.h"
#include "paddle/fluid/framework/operator.h"
#include "paddle/fluid/platform/for_range.h"
Contributor: Is this header file redundant?

Comment on lines 38 to 39
const DenseTensor* d_out = &d_out_tmp;
const DenseTensor* x = &x_tmp;
Contributor: These operations seem redundant. Rename the parameters to drop the tmp suffix and use the . operator directly in the function body.
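A sketch of the suggested cleanup; the kernel name is assumed from the lgamma grad declaration shown later in this thread. The parameters keep their plain names and the body uses the . operator instead of re-binding them through pointers:

template <typename T, typename Context>
void LgammaGradKernel(const Context& dev_ctx,
                      const DenseTensor& d_out,  // was d_out_tmp, re-bound via pointer
                      const DenseTensor& x,      // was x_tmp, re-bound via pointer
                      DenseTensor* d_x) {
  // Use the references directly, e.g. d_out.numel(), x.data<T>(), ...
}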

@2742195759 changed the title from "Transfer lgamma" to "[phi] Transfer lgamma, kldiv_loss, isclose, cumprod kernels into phi and pass the tests of these four kernels" on Feb 25, 2022
@@ -156,6 +160,10 @@ class ScalarBase {
CopyScalar(other, this);
}

// NOTE(xiongkun): some op need to judge the dtype of the Scalar, we expose a
// interface.
bool is_dype(DataType d) const { return d == dtype_; }
Contributor: Keep this consistent with the other interfaces and use camelCase naming.

Contributor Author: done


#include "paddle/phi/kernels/cumprod_grad_kernel.h"

#include "paddle/fluid/platform/for_range.h"
Contributor: for_range has already been migrated under phi.

Contributor Author: done
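For reference, a hedged sketch of the migrated usage, assuming the helper now lives at paddle/phi/kernels/funcs/for_range.h as phi::funcs::ForRange (the functor name below is only a placeholder):

#include "paddle/phi/kernels/funcs/for_range.h"

// Iterate a per-element functor over [0, numel) on the current device context.
phi::funcs::ForRange<Context> for_range(dev_ctx, x.numel());
for_range(SomeElementwiseFunctor<T>(/* placeholder: real functor arguments go here */));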

#include <cstdint>
#include <type_traits>

#include "paddle/fluid/platform/for_range.h"
Contributor: Same as above.

Contributor Author: done

#include "paddle/fluid/platform/for_range.h"
#include "paddle/phi/backends/cpu/cpu_context.h"
#include "paddle/phi/core/kernel_registry.h"
#include "paddle/phi/kernels/cumprod_kernel.h"
Contributor: This header file can be placed at the top.
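A sketch of the suggested ordering, with the kernel's own header on top; the for_range path assumes the phi location mentioned in the earlier comment:

#include "paddle/phi/kernels/cumprod_kernel.h"

#include "paddle/phi/backends/cpu/cpu_context.h"
#include "paddle/phi/core/kernel_registry.h"
#include "paddle/phi/kernels/funcs/for_range.h"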

// limitations under the License.

#include "paddle/phi/kernels/lgamma_kernel.h"
#include "paddle/fluid/platform/for_range.h"
Contributor: Same as above.

Contributor Author: done

#include "paddle/phi/kernels/cumprod_kernel.h"
#include "paddle/phi/kernels/funcs/cumprod.h"
#include "paddle/fluid/operators/math/inclusive_scan.h"
#include "paddle/fluid/platform/for_range.h"
Contributor: Same as above.

Contributor Author: done

#include <string>

#include "paddle/fluid/framework/data_type.h"
#include "paddle/fluid/framework/eigen.h"
Contributor: Is this header file actually used?

Contributor Author: done


#pragma once
#include <string>
#include "paddle/fluid/framework/eigen.h"
Contributor: Use the eigen under phi.

Contributor Author: done
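For reference, a hedged sketch of the switch, assuming phi's eigen helpers live at paddle/phi/kernels/funcs/eigen/common.h:

// Replace the fluid header
//   #include "paddle/fluid/framework/eigen.h"
// with its phi counterpart:
#include "paddle/phi/kernels/funcs/eigen/common.h"

// and use the phi templates, e.g.
// auto x_flat = phi::EigenVector<T>::Flatten(x);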


#pragma once
#include <string>
#include "paddle/fluid/framework/eigen.h"
Contributor: Same as above.

Contributor Author: done

// limitations under the License.

#pragma once
#include "paddle/fluid/platform/for_range.h"
Contributor: Same as above.

Contributor Author: done

@2742195759 marked this pull request as draft on March 9, 2022 06:24
@2742195759 marked this pull request as ready for review on March 9, 2022 06:25
@2742195759 marked this pull request as draft on March 9, 2022 06:31
@2742195759 marked this pull request as ready for review on March 9, 2022 06:32
@@ -12,7 +12,9 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the Licnse. */

#include "paddle/fluid/operators/kldiv_loss_op.h"
#include <string>
#include "paddle/fluid/framework/eigen.h"
Contributor: Does the NPU kernel even need eigen?

// NOTE(xiongkun): why we put definition here?
// test_custom_op can't include enforce.h, because enforce.h includes gflags.
// so we decouple the include dependence of enforce.h by link.
void ThrowErrorIfNotScalar(int num) {
Contributor: The function name could be adjusted a bit. Judging from its behavior, it mainly throws an error saying a Tensor cannot be converted to a Scalar, so a name like ThrowTensorConvertError would fit better.

@@ -156,6 +155,10 @@ class ScalarBase {
CopyScalar(other, this);
}

// NOTE(xiongkun): some op need to judge the dtype of the Scalar, we expose a
// interface.
bool IsDtype(DataType d) const { return d == dtype_; }
Contributor: Scalar already has a dtype() interface for getting the dtype, which can be used for this check as well. IsDtype is not a common kind of class interface, so unless it is strictly necessary we suggest removing it.
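A minimal sketch of the alternative the reviewer describes, comparing against the existing dtype() accessor instead of adding IsDtype:

// Instead of:  if (scalar.IsDtype(DataType::INT64)) { ... }
// the existing accessor already covers the check:
if (scalar.dtype() == DataType::INT64) {
  // handle an int64 Scalar here
}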


#include <cstdint>
#include <type_traits>
#include "paddle/fluid/platform/for_range.h"
Contributor: Include for_range.h from under phi.

#include "paddle/phi/kernels/funcs/cumprod.h"

namespace phi {
using Tensor = DenseTensor;
Contributor: The alias "using Tensor = DenseTensor;" can be removed; this style is currently discouraged under the phi directory.

@@ -0,0 +1,31 @@

Contributor: There is an extra blank line.

@@ -0,0 +1,30 @@

Contributor: There is an extra blank line.

const DenseTensor& label,
const std::string& reduction,
DenseTensor* out);
} // phi
Contributor: The closing comment should be // namespace phi.

const DenseTensor& d_out,
const DenseTensor& x,
DenseTensor* d_x);
} // pten
Contributor: The closing comment should be // namespace phi.

void LgammaKernel(const Context& dev_ctx,
const DenseTensor& x,
DenseTensor* out);
} // pten
Contributor: The closing comment should be // namespace phi.

#include "paddle/fluid/operators/math/inclusive_scan.h"
#include "paddle/phi/backends/gpu/gpu_context.h"
#include "paddle/phi/core/kernel_registry.h"
#include "paddle/phi/kernels/cumprod_kernel.h"
Contributor: Put cumprod_kernel.h at the top.

#include "paddle/phi/backends/gpu/gpu_context.h"
#include "paddle/phi/core/kernel_registry.h"
#include "paddle/phi/kernels/impl/kldiv_loss_grad_kernel_impl.h"
#include "paddle/phi/kernels/kldiv_loss_grad_kernel.h"
Contributor: Put kldiv_loss_grad_kernel.h at the top.

#include "paddle/phi/backends/gpu/gpu_context.h"
#include "paddle/phi/core/kernel_registry.h"
#include "paddle/phi/kernels/impl/kldiv_loss_kernel_impl.h"
#include "paddle/phi/kernels/kldiv_loss_kernel.h"
Contributor: Same as above.

#include "paddle/phi/backends/gpu/gpu_context.h"
#include "paddle/phi/core/kernel_registry.h"
#include "paddle/phi/kernels/impl/lgamma_grad_kernel_impl.h"
#include "paddle/phi/kernels/lgamma_grad_kernel.h"
Contributor: Same as above.

#include "paddle/phi/backends/gpu/gpu_context.h"
#include "paddle/phi/core/kernel_registry.h"
#include "paddle/phi/kernels/funcs/elementwise_base.h"
#include "paddle/phi/kernels/lgamma_kernel.h"
Contributor: Same as above.

std::string atol_map_str = (ctx.HasInput("Atol") ? "Atol" : "atol");
return KernelSignature("isclose",
{"Input", "Other"},
{rtol_map_str, atol_map_str, "equal_nan"},
Contributor: Under the current design for inference, the strings in the KernelSignature have to be the original variable-name literals, i.e. "Rtol" or "rtol"; passing rtol_map_str here will break that parsing, so this still needs to be written as plain if/else branches that each return a signature directly.
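A hedged sketch of the plain if/else form the reviewer asks for; the mapping-function name, file path, and the "Out" output name are assumptions, and each branch returns a KernelSignature built only from literal variable-name strings:

// e.g. in paddle/phi/ops/compat/isclose_sig.cc (path assumed)
KernelSignature IscloseOpArgumentMapping(const ArgumentMappingContext& ctx) {
  if (ctx.HasInput("Rtol") && ctx.HasInput("Atol")) {
    return KernelSignature(
        "isclose", {"Input", "Other"}, {"Rtol", "Atol", "equal_nan"}, {"Out"});
  } else if (ctx.HasInput("Rtol")) {
    return KernelSignature(
        "isclose", {"Input", "Other"}, {"Rtol", "atol", "equal_nan"}, {"Out"});
  } else if (ctx.HasInput("Atol")) {
    return KernelSignature(
        "isclose", {"Input", "Other"}, {"rtol", "Atol", "equal_nan"}, {"Out"});
  } else {
    return KernelSignature(
        "isclose", {"Input", "Other"}, {"rtol", "atol", "equal_nan"}, {"Out"});
  }
}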

@XiaoguangHu01 (Contributor) left a comment: LGTM

@Aurelius84 merged commit 6422362 into PaddlePaddle:develop on Mar 15, 2022
@2742195759 deleted the transfer_lgamma branch on March 16, 2022 07:33