Print memory peak message for UT #42092
Conversation
Your PR has been submitted successfully. Thank you for your contribution to the open-source project!
@@ -539,7 +536,7 @@ inline void retry_sleep(unsigned milliseconds) {
        ::paddle::platform::details::ExternalApiType<               \
            __CUDA_STATUS_TYPE__>::kSuccess;                        \
    while (UNLIKELY(__cond__ != __success_type__) && retry_count < 5) { \
-     paddle::platform::retry_sleep(FLAGS_gpu_allocator_retry_time);    \
+     paddle::platform::retry_sleep(10000);                         \
The PADDLE_RETRY_CUDA_SUCCESS macro in enforce.h reuses FLAGS_gpu_allocator_retry_time, which is defined in allocator_facade.cc. This macro is used in many places, including the phi operator library, so it introduces a dependency on allocator_facade. By design, however, allocator_facade should depend on the phi operator library, not the other way around. This both risks link errors and obstructs the eventual full decoupling of phi from fluid. After discussing with @zhhsplendid @chenwhql, we decided to replace FLAGS_gpu_allocator_retry_time in PADDLE_RETRY_CUDA_SUCCESS with the fixed default value 10000.
LGTM
PR types
Others
PR changes
Others
Describe
Print the peak GPU memory usage when a unit test finishes running:
This information lets the CI pipeline monitor the GPU memory usage of unit tests, and also helps RDs check memory usage when running unit tests locally. We may later use it on CI to block newly added unit tests that consume too much GPU memory.
Whether the message is printed is controlled by the FLAGS_enable_gpu_memory_usage_log environment variable. It is disabled globally by default, but is enabled automatically in test_runner.py and paddle_gtest_main.cc.