
[Hackathon No.82] Support fp16 #1151

Closed
wants to merge 11 commits into from

Conversation

Zheng-Bicheng
Collaborator

No description provided.

@Zheng-Bicheng
Collaborator Author

Zheng-Bicheng commented Sep 28, 2023

Added

  • Paddle2ONNX: removed the incorrect error message for FP16 (done)
  • BN layer: supports FP32->FP16 conversion of OP parameters via Cast (done)
  • BN layer: FP32->FP16 by upgrading the OP version (done)
  • Mul layer: supports FP32->FP16 conversion of OP parameters via Cast (done)
  • Pool: added FP16 support (done)
  • Scale layer: added FP16 support (done)

Fixed

  • Min OP version issue: support int64 by raising the OP version (done)
  • Fixed the issue where the Matmul layer's output was always FP32
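The Cast-based FP32->FP16 handling of OP parameters mentioned above can be sketched as follows. This is a minimal NumPy sketch of what a Cast to FP16 does to weights, not the actual Paddle2ONNX implementation; the function name and the overflow check are illustrative assumptions.

```python
import numpy as np

def cast_fp32_to_fp16(params: np.ndarray) -> np.ndarray:
    """Mimic an ONNX Cast(to=FLOAT16) applied to FP32 OP parameters.

    FP32 values outside the FP16 range overflow to +/-inf, which is
    one reason a converter may prefer upgrading the OP version to one
    that accepts FP16 natively instead of casting the weights.
    """
    fp16 = params.astype(np.float16)
    # Only flag overflow for values that were finite in FP32.
    if not np.all(np.isfinite(fp16[np.isfinite(params)])):
        raise ValueError("FP32 parameter overflows the FP16 range")
    return fp16

scale = np.array([1.0, 0.5, 2.0], dtype=np.float32)
print(cast_fp32_to_fp16(scale).dtype)  # float16
```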

@Zheng-Bicheng
Collaborator Author

@jiangjiajun

* Fixed a bug where Pool did not support FP16
* Fixed a bug where Matmul output was always FP32
* Supported dynamic modification of Min's OP version
* Added Scale OP support for FP16
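The dynamic selection of Min's OP version could look roughly like this. This is an illustrative sketch, not code from the PR: the function name is hypothetical, and it assumes ONNX extended Min to integer types such as int64 at opset 12.

```python
def min_opset_for(input_dtype: str) -> int:
    """Pick the ONNX opset version to emit for a Min node.

    Assumption: earlier opsets of ONNX Min only accept float tensors,
    while opset 12 extended it to integer types (e.g. int64), so
    integer inputs force the higher opset version.
    """
    integer_types = {"int8", "int16", "int32", "int64",
                     "uint8", "uint16", "uint32", "uint64"}
    return 12 if input_dtype in integer_types else 8

print(min_opset_for("int64"))    # 12
print(min_opset_for("float32"))  # 8
```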
@jiangjiajun
Collaborator

@Zheng-Bicheng Please split this PR into two: one that removes the legacy code and supports Paddle 2.5, and another that supports fp16.

@Zheng-Bicheng Zheng-Bicheng deleted the support_fp16 branch January 29, 2024 02:35