chore(docs): add usage in supervised_training_steps #1661
Conversation
Should we remove the docstring line "non_blocking (bool, optional): if True and this copy is between CPU and GPU, the copy may occur asynchronously with respect to the host. For other cases, this argument has no effect.", or leave it as is for simplicity?
Thanks a lot @ydcjeff !
Left a nit comment to add the import.
As for non_blocking on TPU, we can keep the docstring and just adapt the text for TPU. Do you know whether this argument works on TPU?
Asked in pytorch/xla#2791
Thanks @ydcjeff ! I would expect everything to be non-blocking in xla until an op needs to sync and output a result. My intuition is that this parameter is not taken into account...
Yes, I think so too, since the xla docs don't mention it.
So non_blocking is already taken care of by xla; the current docstring seems OK.
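For reference, a minimal sketch (not part of this PR) of how non_blocking is typically forwarded in ignite: convert_tensor calls tensor.to(device, non_blocking=...) under the hood, so on CUDA the host-to-device copy can overlap with computation when the source is in pinned memory, while on XLA devices execution is lazy anyway and the flag has no practical effect.

```python
import torch
from ignite.utils import convert_tensor

# Illustrative batch; convert_tensor handles tensors, sequences and mappings.
batch = (torch.rand(4, 3), torch.randint(0, 2, (4,)))

# Moves both tensors to the target device; non_blocking is passed through
# to tensor.to(device, non_blocking=True).
x, y = convert_tensor(batch, device=torch.device("cpu"), non_blocking=True)
```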
Follow-up of #1589
Description: add usage examples in supervised_training_steps
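Below is a sketch of the kind of usage example being added to the docstrings; the model, optimizer, and loss here are illustrative placeholders, not code from the PR.

```python
import torch
import torch.nn as nn
from ignite.engine import Engine, supervised_training_step

# Illustrative model, optimizer and loss; any torch modules work here.
model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Build the update function and wrap it in an Engine.
update_fn = supervised_training_step(
    model, optimizer, loss_fn, device="cpu", non_blocking=False
)
trainer = Engine(update_fn)
```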
Check list: