Integrate AutoTrain Advanced with the Ray framework for distributed fine-tuning of LLMs #626
Unanswered
ducanh-ho2296 asked this question in Q&A
I'm currently using AutoTrain Advanced from Hugging Face to automatically fine-tune large language models for question-answering tasks. I'd like to use the Ray framework to distribute the fine-tuning process across multiple workers or machines to improve efficiency and scalability.
Specifically:

1. Can AutoTrain Advanced be integrated with Ray for parallelized training across multiple workers?
2. Are there any known compatibility issues or considerations when using AutoTrain Advanced with Ray?
3. Are there any recommended best practices or examples for integrating AutoTrain Advanced with Ray for distributed fine-tuning? A sketch of the kind of setup I have in mind follows below.
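For context, here is a rough, untested sketch of what I imagine the integration could look like, using Ray Train's `TorchTrainer` around a plain `transformers` `Trainer` loop rather than AutoTrain Advanced itself (the model name, dataset slice, and hyperparameters are placeholders for my actual setup, and I'm not sure AutoTrain Advanced exposes an equivalent hook):

```python
# A rough sketch, not tested: distribute a transformers fine-tuning loop
# with Ray Train. This does not call AutoTrain Advanced directly; "gpt2"
# and the SQuAD slice are placeholders for my actual model and QA data.
import ray.train.huggingface.transformers
from ray.train import ScalingConfig
from ray.train.torch import TorchTrainer


def train_func():
    from datasets import load_dataset
    from transformers import (
        AutoModelForCausalLM,
        AutoTokenizer,
        DataCollatorForLanguageModeling,
        Trainer,
        TrainingArguments,
    )

    model_name = "gpt2"  # placeholder model
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # Placeholder QA data: concatenate question + context for causal LM tuning.
    dataset = load_dataset("squad", split="train[:1%]")

    def tokenize(batch):
        texts = [q + " " + c for q, c in zip(batch["question"], batch["context"])]
        return tokenizer(texts, truncation=True, max_length=128)

    tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(
            output_dir="out",
            per_device_train_batch_size=2,
            num_train_epochs=1,
            report_to="none",
        ),
        train_dataset=tokenized,
        # mlm=False makes the collator copy input_ids into labels.
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )

    # Report metrics/checkpoints to Ray and set up distributed execution.
    trainer.add_callback(ray.train.huggingface.transformers.RayTrainReportCallback())
    trainer = ray.train.huggingface.transformers.prepare_trainer(trainer)
    trainer.train()


# Run the same training function on two GPU workers.
ray_trainer = TorchTrainer(
    train_func,
    scaling_config=ScalingConfig(num_workers=2, use_gpu=True),
)
result = ray_trainer.fit()
```

If wrapping the training loop like this isn't feasible with AutoTrain Advanced, would launching the autotrain CLI from Ray tasks be a reasonable alternative?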
Any insights or guidance on how to integrate these two frameworks effectively would be greatly appreciated.
Thank you!