[PRIM][PIR]Migrate prim rules #57554
Conversation
Your PR was submitted successfully. Thank you for your contribution to this open source project!
LGTM
LGTM for backward
LGTM for check_dygraph
LGTM
* fix bugs of generating Op::Build when Op has optional tensor
* add default constructor for IrMetaTensor
* fix bugs
* polish guard
* pir support prim gelu and rsqrt
* support prim bwd ops
* migrate vjp rules of cast, add, multiply, elementwise_pow
* add cast as primitive op
* fix bugs in elementwise_pow_grad
* add test for cast_grad
* add test for elementwise_add_grad
* add test for elementwise_mul_grad
* add test for elementwise_pow_grad
* fix bugs
* fix bugs
* support pir prim backward ops
* refine
* fix bug
* migrate layer_norm custom vjp rules to pir
* fix bugs in ir_backward
* fix backward, scope, and concat_grad prim
* add layer_norm fwd decompose logic
* fix pow
* change _use_new_ir_api to in_pir_mode
* add _static_guard
* fix
* fix executor cuda700 error caused by full and full_like
* refine
* add vjp rules
* fix bugs
* add scope
* add test
* add add op prim rules

---------

Co-authored-by: YuanRisheng <yuanrisheng@baidu.com>
Co-authored-by: cyber-pioneer <chenzhuo@tju.edu.cn>
Co-authored-by: Charles-hit <wanghao107@baidu.com>
Co-authored-by: zhangbo9674 <zhangbo54@baidu.com>
PR types
New features
PR changes
Others
Description
Pcard-66975
Bug fixes:

`full`: the `full` API calls the `fill_constant` API. Under the old IR, the `place` parameter was left as an undefined place and the actual place was chosen at execution time. The new IR unified this with the dynamic graph interface and obtained the place via `_current_expected_place`, which returns a GPU place here, yet the device a static graph really expects is only known at execution time. This PR changes the place of `full` back to an undefined place, as in the old IR, so it is inferred automatically during execution; otherwise, when run on the CPU executor, the `full` op could steer downstream ops into selecting CUDA kernels, producing a CUDA 700 error.
`full_like`: in dynamic graph the `place` argument is simply `x.place`, but under the new IR a `Value` has no place attribute, so `_current_expected_place` was passed instead; this may disagree with the device the executor actually runs on and likewise trigger CUDA 700. This PR changes it to an undefined place so the device is inferred from the actual device during execution. As a follow-up, it may be worth removing the `place` parameter from the pybind interface entirely and passing x's place directly when the C++ API is called internally.
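A minimal, self-contained sketch of the deferred-place idea described above. Every name here (`Place`, `_current_expected_place`, the dict-based op records) is a simplified stand-in for Paddle internals, not the real API:

```python
# Sketch of the place-selection fix; all names are stand-ins, not Paddle's API.

class Place:
    """Device placement; kind=None models the 'undefined place' sentinel."""
    def __init__(self, kind=None):
        self.kind = kind

def _current_expected_place():
    # In dygraph this matches the device in use, but at static-graph build
    # time it can return a GPU place even though the executor may later run
    # the program on CPU.
    return Place("gpu")

def full_before_fix(shape, value):
    # Place pinned at build time: a CPU executor can then be steered toward
    # CUDA kernels for downstream ops, surfacing as a CUDA 700 error.
    return {"op": "fill_constant", "shape": shape, "value": value,
            "place": _current_expected_place()}

def full_after_fix(shape, value):
    # Place left undefined, as in the old IR: the executor infers the device
    # at execution time. full_like receives the analogous change.
    return {"op": "fill_constant", "shape": shape, "value": value,
            "place": Place()}

print(full_before_fix([2, 2], 0.0)["place"].kind)  # gpu (pinned too early)
print(full_after_fix([2, 2], 0.0)["place"].kind)   # None (deferred)
```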
`concat_grad`: the composite op's unit test core-dumped because its decomposition rule did not account for the case where some of the outputs are null pointers.
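A hedged sketch of the shape of that guard. The helper below is hypothetical, using plain lists as stand-in tensors; it only illustrates skipping dead gradient slots, not Paddle's actual decomposition rule:

```python
# Hypothetical illustration of the concat_grad decomposition guard.

def concat_grad_decomp(out_grad, sections, stop_gradients):
    """Split out_grad back into per-input grads, skipping dead outputs."""
    grads, offset = [], 0
    for size, stop in zip(sections, stop_gradients):
        if stop:
            # Inputs that need no gradient get a null output slot. The
            # pre-fix rule assumed every slot was live and dereferenced it,
            # which is what core-dumped the unit test.
            grads.append(None)
        else:
            grads.append(out_grad[offset:offset + size])
        offset += size
    return grads

print(concat_grad_decomp(list(range(6)), [2, 2, 2], [False, True, False]))
# [[0, 1], None, [4, 5]]
```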
Renamed `grad` to `grad_value`.