
MVP example for multiple GPU training #42

Closed
pdogra89 opened this issue Jan 24, 2020 · 3 comments
@pdogra89

- Additional refactoring iteration to unify single-GPU/multi-GPU training?

@Nic-Ma Nic-Ma assigned Nic-Ma and madil90 and unassigned wyli, ericspod and Nic-Ma Jan 25, 2020
@Nic-Ma
Contributor

Nic-Ma commented Jan 25, 2020

Hi @madil90 ,

As you are an expert on deep learning performance tuning and have done many investigations into PyTorch, could you please help develop this task?
The minimal requirements are:

  1. Wenqi extracts a pure Python example from the notebook example (PR already committed).
  2. You can submit a PR to add multi-GPU logic to the training and evaluation (still in development) examples.
  3. Please note: be careful not to save the parallel (wrapped) model to file; save the underlying module's weights instead.

Please feel free to contact me if you have any questions.
Thanks in advance.
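The caveat in item 3 refers to PyTorch's `nn.DataParallel` wrapper, which prefixes every parameter name with `module.`, so a checkpoint of the wrapper does not load cleanly into a plain single-device model. A minimal sketch of the safe pattern (the model and filename here are illustrative, not from the MONAI examples):

```python
import torch
import torch.nn as nn

# Illustrative tiny model standing in for a real training network.
model = nn.Linear(4, 2)

# Wrap in DataParallel only when multiple GPUs are available;
# on a single-GPU or CPU machine the plain model is used as-is.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model).cuda()

# ... training loop would go here ...

# Save the underlying module's weights, not the DataParallel wrapper,
# so the checkpoint has no "module." prefixes and loads on any device.
to_save = model.module if isinstance(model, nn.DataParallel) else model
torch.save(to_save.state_dict(), "checkpoint.pt")

# The saved state dict contains plain parameter names.
state = torch.load("checkpoint.pt")
assert all(not key.startswith("module.") for key in state)
```

Loading such a checkpoint into an unwrapped model then works with a plain `model.load_state_dict(state)`, regardless of how many GPUs were used for training.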

@pdogra89
Author

Eric will create a work-in-progress PR. Eric and Adil to align; we don't want two independent pull requests.

@wyli
Contributor

wyli commented Feb 7, 2020

Resolved via #49 and #51.

@wyli wyli closed this as completed Feb 7, 2020