Thanks for creating this @astefanutti!
/remove-label lifecycle/needs-triage
/area runtime
What would you like to be added?
Support ROCm PyTorch distributed training runtime.
Why is this needed?
PyTorch has advertised support for AMD ROCm on AMD Instinct and Radeon GPUs since version 2.0.
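One practical consequence is that ROCm builds of PyTorch expose AMD GPUs through the same `torch.cuda` API (via HIP), so existing distributed training code typically runs unchanged; the build flavor can be distinguished through `torch.version.hip` versus `torch.version.cuda`. A minimal sketch (the guard for a missing `torch` install is only there to keep the example self-contained):

```python
import importlib.util

def torch_backend() -> str:
    """Report which accelerator backend this PyTorch build targets.

    ROCm builds of PyTorch reuse the torch.cuda API via HIP, so
    torch.cuda.is_available() returns True on AMD GPUs as well; the
    build flavor is distinguished by torch.version.hip / .cuda.
    """
    if importlib.util.find_spec("torch") is None:
        return "torch-not-installed"
    import torch
    if torch.version.hip is not None:
        return "rocm"
    if torch.version.cuda is not None:
        return "cuda"
    return "cpu"
```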
The latest generation of AMD Instinct accelerators, such as the MI300X, makes it possible to run state-of-the-art large-scale training jobs, as demonstrated in https://developers.redhat.com/articles/2024/10/03/amd-gpus-model-training-openshift-ai.
It would be great to add a ROCm PyTorch distributed training runtime alongside the NVIDIA one introduced by #2328.
More generally, it would be useful to define how support for multiple accelerators is managed across the training runtimes.
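To make the multi-accelerator question concrete, one possible approach is dispatching on the Kubernetes extended-resource name a job requests. The runtime names and the selection logic below are purely illustrative assumptions, not the project's actual implementation:

```python
# Hypothetical sketch: map the GPU extended-resource name requested by a
# training job to a per-vendor PyTorch training runtime. Only the
# resource names are real Kubernetes conventions; the runtime names are
# made up for illustration.
ACCELERATOR_RUNTIMES = {
    "nvidia.com/gpu": "torch-distributed-cuda",
    "amd.com/gpu": "torch-distributed-rocm",
}

def select_runtime(resources: dict) -> str:
    """Pick a training runtime based on the GPU resources requested."""
    for resource, runtime in ACCELERATOR_RUNTIMES.items():
        if resources.get(resource, 0) > 0:
            return runtime
    # No accelerator requested: fall back to a CPU-only runtime.
    return "torch-distributed-cpu"
```

A scheme like this keeps a single user-facing API while letting each runtime pull the right vendor image and device plugin settings.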
Love this feature?
Give it a 👍. We prioritize the features with the most 👍.