How can I analyze a model's GPU memory usage in Paddle? #38193
In PyTorch, a model's GPU memory usage can be analyzed with torch.cuda.memory_allocated(), torch.cuda.max_memory_allocated(), and torch.cuda.memory_reserved(); see the article "PyTorch显存机制分析" for details.
However, I could not find corresponding APIs in Paddle. Does Paddle provide equivalent APIs?
Thanks!

Comments
Hi, Paddle does not currently provide this kind of dynamic GPU memory profiling tool. You can use paddle.summary to get information about a model's memory footprint: https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/api/paddle/summary_cn.html#summary
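For reference, a minimal sketch of what paddle.summary reports (using the built-in LeNet from paddle.vision.models as an example); note that it gives a static, layer-by-layer size estimate, not runtime allocator statistics:

```python
import paddle
from paddle.vision.models import LeNet

# Print a layer-by-layer summary of output shapes and parameter counts,
# plus an estimated total size (params + forward/backward activations) in MB.
model = LeNet()
info = paddle.summary(model, input_size=(1, 1, 28, 28))

# paddle.summary also returns the totals as a dict.
print(info)  # e.g. {'total_params': 61610, 'trainable_params': 61610}
```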
@joey12300, thanks! To implement this feature (dynamic GPU memory profiling) in Paddle, would CUDA programming be required?
Yes, you would probably need to obtain this information through some CUDA interfaces, which means modifying the underlying framework.
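As an interim workaround at the device level (not Paddle's internal allocator), the CUDA driver can be queried through NVML from Python. A hedged sketch, assuming the pynvml package is installed:

```python
import pynvml

# Query total / used / free device memory via NVML. This reflects the whole
# process (and any other processes) on GPU 0, not Paddle's per-tensor allocations.
pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
print(f"total={mem.total}  used={mem.used}  free={mem.free}  (bytes)")
pynvml.nvmlShutdown()
```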
@joey12300, thanks, understood!
Hi, GPU-memory-related APIs are already on our development roadmap, stay tuned~
@geoexploring Hi, the APIs you mentioned are now all supported in Paddle and will be released in Paddle 2.3; see PR #38657 for details. Thanks for your interest in Paddle. If you run into any problems while using them, or while implementing this functionality yourself, feel free to report back.
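For illustration, a minimal sketch of how such statistics can be read in a CUDA build of Paddle >= 2.3, assuming the paddle.device.cuda memory-statistics functions (the exact names and semantics are defined by the PR above):

```python
import paddle

# Allocate some GPU tensors, then read the allocator statistics.
x = paddle.randn([1024, 1024], dtype='float32')
y = paddle.matmul(x, x)

print(paddle.device.cuda.memory_allocated())      # bytes currently occupied by tensors
print(paddle.device.cuda.max_memory_allocated())  # peak bytes occupied by tensors
print(paddle.device.cuda.memory_reserved())       # bytes reserved by Paddle's allocator
print(paddle.device.cuda.max_memory_reserved())   # peak bytes reserved by the allocator
```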