
chore(docs): add usage in supervised_training_steps #1661

Merged: 4 commits into pytorch:master, Feb 22, 2021

Conversation

@ydcjeff (Contributor) commented Feb 21, 2021

Follow-up of #1589

Description: add usage examples in supervised_training_steps

Check list:

  • New tests are added (if a new feature is added)
  • New doc strings: description and/or example code are in RST format
  • Documentation is updated (if required)
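For context, the functions touched here (supervised_training_step and its amp/apex/tpu variants) build an update function that is then passed to an Engine. A minimal sketch of the kind of usage example being added to the docstrings, assuming the ignite.engine API of that release, might look like this (not the exact text added in the PR):

```python
# Hedged sketch: assumes supervised_training_step(model, optimizer, loss_fn,
# device=...) returns an update function compatible with Engine, as exposed
# by ignite.engine.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

from ignite.engine import Engine, supervised_training_step

model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Build the default supervised update step and wrap it in an Engine.
update_fn = supervised_training_step(model, optimizer, loss_fn, device="cpu")
trainer = Engine(update_fn)

# Tiny synthetic dataset so the sketch runs end to end.
data = TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,)))
trainer.run(DataLoader(data, batch_size=16), max_epochs=2)
```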

@ydcjeff (Contributor, Author) commented Feb 21, 2021

Should we remove the non_blocking argument from supervised_training_step_tpu, as per

non_blocking (bool, optional): if True and this copy is between CPU and GPU, the copy may occur asynchronously
    with respect to the host. For other cases, this argument has no effect.

or leave it as is for simplicity?
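For reference, the quoted behaviour is plain PyTorch and easy to demonstrate; the sketch below assumes a CUDA device is available and uses pinned host memory, which is the only case where the flag can actually overlap the copy:

```python
# Illustration of the quoted Tensor.to docstring: non_blocking only matters
# for CPU<->GPU copies from pinned (page-locked) host memory; elsewhere the
# flag is accepted but has no effect.
import torch

if torch.cuda.is_available():
    host = torch.randn(1024, 1024).pin_memory()
    # The copy may overlap with host-side work because the source is pinned.
    dev = host.to("cuda", non_blocking=True)
    torch.cuda.synchronize()  # make sure the async copy finished before use
else:
    # CPU-only (and, per the discussion below, likely TPU/xla): no effect.
    t = torch.randn(8).to("cpu", non_blocking=True)
```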

@vfdev-5 (Collaborator) left a comment


Thanks a lot @ydcjeff!
I left a nit comment to add an import.
As for non_blocking on TPU, we can keep the docstring and just adapt the text for TPU. Do you know if this argument works for TPU?

(Two review threads on ignite/engine/__init__.py, both resolved.)
@ydcjeff (Contributor, Author) commented Feb 22, 2021

> Do you know if this argument works for TPU?

Asked in pytorch/xla#2791

@vfdev-5 (Collaborator) commented Feb 22, 2021

> > Do you know if this argument works for TPU?
>
> Asked in pytorch/xla#2791

Thanks @ydcjeff! I would expect that everything in xla is non-blocking until an op that needs to sync and output a result. My intuition is that this parameter is not taken into account...
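A rough illustration of that intuition, assuming the torch_xla API (xm.xla_device, xm.mark_step) and assuming the flag is simply ignored on the xla device path:

```python
# Hedged sketch of xla's lazy execution model: ops are recorded and only
# executed when a step is marked or a value is materialized, so non_blocking
# on .to() is unlikely to change anything. Assumes torch_xla is installed.
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()
x = torch.randn(4, 4).to(device, non_blocking=True)  # flag presumably a no-op here
y = (x @ x).sum()   # still lazy: this only extends the pending graph
xm.mark_step()      # compiles/executes the recorded graph on the device
print(y.item())     # .item() forces a sync and copies the result to host
```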

@ydcjeff (Contributor, Author) commented Feb 22, 2021

> > > Do you know if this argument works for TPU?
> >
> > Asked in pytorch/xla#2791
>
> Thanks @ydcjeff! I would expect that everything in xla is non-blocking until an op that needs to sync and output a result. My intuition is that this parameter is not taken into account...

Yes, I think so too, since the xla docs don't mention non_blocking.
But since I am not sure, I filed a question just to confirm. Let's wait for the answer.

@ydcjeff (Contributor, Author) commented Feb 22, 2021

So non_blocking is already taken care of by xla; the current docstring seems OK.
Ready to go?
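A short sketch of why keeping the argument is harmless on TPU: batch preparation in ignite roughly forwards non_blocking to Tensor.to via ignite.utils.convert_tensor, and xla is free to ignore it. The helper below is a simplified stand-in for illustration, not the code in the repo:

```python
# Simplified stand-in for ignite's default batch preparation, to show that the
# flag is merely forwarded; the actual implementation lives in ignite/engine.
from ignite.utils import convert_tensor

def prepare_batch(batch, device=None, non_blocking=False):
    """Move an (x, y) batch to `device`, forwarding non_blocking to Tensor.to."""
    x, y = batch
    return (
        convert_tensor(x, device=device, non_blocking=non_blocking),
        convert_tensor(y, device=device, non_blocking=non_blocking),
    )
```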

@vfdev-5 merged commit 442bd08 into pytorch:master on Feb 22, 2021
@ydcjeff deleted the supervised-training-steps-examples branch on February 22, 2021, 15:08