
Fix incorrect parameter replacing of PIR constant_folding_pass caused by stack op #68152

Merged

merged 4 commits into PaddlePaddle:develop on Sep 26, 2024
Conversation

lszxb
Contributor

@lszxb lszxb commented Sep 11, 2024

PR Category

Performance Optimization

PR Types

Improvements

Description

Currently, the PIR constant_folding_pass decides whether each folded constant in the PIR program becomes a builtin.constant, which lives on CPU, or a builtin.parameter, which lives on GPU. In some situations this decision is wrong, placing on GPU a constant that should stay on CPU, which leads to unnecessary D2H synchronization.

The following code

    out = paddle.full([mask.shape[0], 1], 1)

is translated to the following PIR:

    (%7) = "pd_op.full" () {dtype:(pd_op.DataType)int32,persistable:[false],place:(pd_op.Place)Place(cpu),shape:(pd_op.IntArray)[],stop_gradient:[false],value:(Float)1} : () -> builtin.tensor<i32>
    (%8) = "pd_op.full" () {dtype:(pd_op.DataType)float32,place:(pd_op.Place)Place(cpu),shape:(pd_op.IntArray)[1],stop_gradient:[true],value:(Double)1} : () -> builtin.tensor<1xf32>
    (%9) = "builtin.combine" (%6, %7) {} : (builtin.tensor<i32>, builtin.tensor<i32>) -> vec[builtin.tensor<i32>,builtin.tensor<i32>]
    (%10) = "pd_op.stack" (%9) {axis:(Int32)0,stop_gradient:[true]} : (vec[builtin.tensor<i32>,builtin.tensor<i32>]) -> builtin.tensor<2xi32>
    (%11) = "pd_op.full_with_tensor" (%8, %10) {dtype:(pd_op.DataType)int64,persistable:[false],stop_gradient:[false]} : (builtin.tensor<1xf32>, builtin.tensor<2xi32>) -> builtin.tensor<-1x1xi64>

In the current version, the first constant "1" (%7), which is only used as part of the shape, becomes a builtin.parameter and is therefore placed on GPU, even though the shape computation is consumed on the host. This PR fixes this problem.
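To illustrate the placement decision at a high level, here is a toy Python model of the fixed rule. This is not Paddle's actual implementation; the op names and the "used only by host-side shape computation" heuristic are simplified assumptions for illustration only.

```python
# Toy model of the placement decision in a constant-folding pass.
# NOT Paddle's real code; op names and the rule below are simplified
# assumptions for illustration.

# Ops whose inputs are shape data and are consumed on the host (CPU).
SHAPE_CONSUMER_OPS = {"pd_op.stack", "builtin.combine"}

def folded_constant_place(value_users):
    """Decide where a folded constant should live.

    value_users: names of the ops that consume the constant's value.
    If every user is a host-side shape computation, keep the constant
    on CPU (builtin.constant); otherwise promote it to a GPU-resident
    builtin.parameter.
    """
    if value_users and all(op in SHAPE_CONSUMER_OPS for op in value_users):
        return "builtin.constant (CPU)"
    return "builtin.parameter (GPU)"

# %7 in the listing above feeds builtin.combine -> pd_op.stack,
# i.e. it is only used to build a shape, so it should stay on CPU.
print(folded_constant_place(["builtin.combine"]))  # builtin.constant (CPU)
print(folded_constant_place(["pd_op.matmul"]))     # builtin.parameter (GPU)
```

Under this model, %7 stays a CPU-side builtin.constant and no D2H synchronization is needed when pd_op.stack reads it.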


paddle-bot bot commented Sep 11, 2024

Your PR has been submitted. Thanks for your contribution!
Please wait for the CI results first. See the Paddle CI Manual for details.

@yuanlehome
Contributor

Hi, please merge develop and then push another commit.

@lszxb
Contributor Author

lszxb commented Sep 24, 2024

OK, I have merged it.

@yuanlehome
Contributor

yuanlehome commented Sep 25, 2024


PR-CI-Codestyle-Check is failing. Please run pre-commit run --files paddle/fluid/pir/transforms/general/constant_folding_pass.cc

@yuanlehome
Contributor

Please open this link and sign the CLA: https://cla-assistant.io/PaddlePaddle/Paddle?pullRequest=68152

@lszxb
Contributor Author

lszxb commented Sep 25, 2024

OK, I have signed it.

@yuanlehome yuanlehome closed this Sep 26, 2024
@yuanlehome yuanlehome reopened this Sep 26, 2024
@yuanlehome yuanlehome merged commit 8245f8c into PaddlePaddle:develop Sep 26, 2024
26 of 27 checks passed
@lszxb lszxb deleted the fix_pir_constant_folding_pass_stack_op branch September 26, 2024 03:16
@luotao1
Contributor

luotao1 commented Sep 27, 2024

hi, @lszxb

  • Thank you very much for your contribution to PaddlePaddle. We run a PFCC organization that continuously contributes to PaddlePaddle through regular technical-knowledge sharing and developer-led tasks; see the description on the https://github.com/luotao1 homepage for details.
  • If you are interested in PFCC, please send an email to ext_paddle_oss@baidu.com and we will invite you to join.

Labels
contributor External developers
3 participants