TPU Example / Support #952
❓ Questions/Help/Support

I would like to use Ignite with TPUs and was wondering whether any examples are currently available that use TPUs with Ignite, or whether Ignite has limitations that would prevent easy TPU compatibility.
@ryanwongsa unfortunately, there are no examples yet of using Ignite with TPUs. I do not think there are any limitations, as the PyTorch XLA API is quite similar to the GPU one. But yes, it would be a good idea to add an example with a Colab link.
@ryanwongsa please check out the added example, or the notebook in Colab, and feel free to give us any feedback.
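For anyone landing here, a minimal single-core sketch of the pattern being discussed, assuming `torch_xla` is installed; the model, dataset, and hyperparameters below are placeholders, not taken from the linked example:

```python
import torch
import torch.nn as nn
import torch_xla.core.xla_model as xm
from torch.utils.data import DataLoader, TensorDataset
from ignite.engine import Engine, Events

device = xm.xla_device()  # acquire the TPU core as an XLA device

model = nn.Linear(10, 2).to(device)  # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

# placeholder dataset
data = TensorDataset(torch.randn(256, 10), torch.randint(0, 2, (256,)))
loader = DataLoader(data, batch_size=32)

def train_step(engine, batch):
    model.train()
    optimizer.zero_grad()
    x, y = batch[0].to(device), batch[1].to(device)
    loss = criterion(model(x), y)
    loss.backward()
    # barrier=True marks the step boundary so XLA compiles and runs the graph
    xm.optimizer_step(optimizer, barrier=True)
    return loss.item()

trainer = Engine(train_step)

@trainer.on(Events.EPOCH_COMPLETED)
def log_epoch(engine):
    print(f"Epoch {engine.state.epoch}: loss={engine.state.output:.4f}")

trainer.run(loader, max_epochs=2)
```

The only TPU-specific pieces are `xm.xla_device()` and `xm.optimizer_step(...)`; the Ignite `Engine` itself is unchanged.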
Thanks. I am not sure if it is suitable for inclusion in an example notebook, but there are currently Kaggle kernels which use TPU training on 8 cores, such as bert-multi-lingual-tpu-training-8-cores, and I was wondering what the best approach would be to do the same with Ignite. The torch_xla …
@ryanwongsa let me check this kernel with Ignite.
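As a rough outline of the 8-core case, one can follow the usual `torch_xla` multiprocessing pattern: spawn one process per core with `xmp.spawn`, build the Ignite engine inside each process, shard the data with a `DistributedSampler`, and let `xm.optimizer_step` all-reduce the gradients. A sketch, with all models, data, and hyperparameters purely illustrative (and the loader wrapper subject to your `torch_xla` version):

```python
import torch
import torch.nn as nn
import torch_xla.core.xla_model as xm
import torch_xla.distributed.xla_multiprocessing as xmp
import torch_xla.distributed.parallel_loader as pl
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler
from ignite.engine import Engine

def _mp_fn(index):
    device = xm.xla_device()  # each spawned process owns one core

    model = nn.Linear(10, 2).to(device)  # placeholder model
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    criterion = nn.CrossEntropyLoss()

    data = TensorDataset(torch.randn(1024, 10), torch.randint(0, 2, (1024,)))
    # shard the dataset so each core sees a distinct slice
    sampler = DistributedSampler(
        data, num_replicas=xm.xrt_world_size(), rank=xm.get_ordinal())
    loader = DataLoader(data, batch_size=32, sampler=sampler)

    def train_step(engine, batch):
        model.train()
        optimizer.zero_grad()
        x, y = batch  # MpDeviceLoader already placed the batch on the device
        loss = criterion(model(x), y)
        loss.backward()
        # without barrier=True, xm.optimizer_step also all-reduces gradients
        xm.optimizer_step(optimizer)
        return loss.item()

    trainer = Engine(train_step)
    # MpDeviceLoader asynchronously feeds batches to the XLA device
    trainer.run(pl.MpDeviceLoader(loader, device), max_epochs=2)

if __name__ == "__main__":
    xmp.spawn(_mp_fn, args=(), nprocs=8)
```

This is the same structure the Kaggle kernel uses, just with the raw training loop replaced by an Ignite `Engine` per process.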