
[GraphBolt][CUDA] Inplace pin memory for Graph and TorchFeatureStore #6962

Merged

mfbalin merged 12 commits from the gb_cuda_inplace_pin_memory_ branch into dmlc:master on Jan 18, 2024

Conversation


@mfbalin mfbalin commented Jan 17, 2024

Description

The torch pin_memory method creates a copy of the tensor. When working with large datasets or multi-GPU training, we don't want copies to be made. So, this PR makes the pin_memory_() method in-place by using cudaHostRegister.
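As context, here is a minimal sketch of the in-place pinning mechanism, assuming a raw host pointer and byte size; the helper names and error handling are illustrative, not the actual GraphBolt API:

```cpp
// Sketch of in-place pinning via cudaHostRegister (helper names are
// hypothetical, not the actual GraphBolt code). Unlike torch's
// pin_memory(), which allocates fresh pinned memory and copies into it,
// this page-locks the tensor's existing allocation, so no copy is made.
#include <cuda_runtime.h>
#include <stdexcept>

inline void CheckCuda(cudaError_t err) {
  if (err != cudaSuccess) throw std::runtime_error(cudaGetErrorString(err));
}

void PinMemoryInplace(void* data, size_t num_bytes) {
  // Register the existing host range as page-locked, making it
  // usable for fast asynchronous host-to-device transfers.
  CheckCuda(cudaHostRegister(data, num_bytes, cudaHostRegisterDefault));
}

void UnpinMemoryInplace(void* data) {
  // Must be paired with the register call, before the allocation is freed.
  CheckCuda(cudaHostUnregister(data));
}
```

Because the same physical pages back the tensor before and after pinning, no extra copy of the dataset is held in memory, which is what keeps the memory footprint flat in the multi-GPU measurements discussed below.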

Checklist

Please feel free to remove inapplicable items for your PR.

  • The PR title starts with [$CATEGORY] (such as [NN], [Model], [Doc], [Feature])
  • I've leveraged the tools to beautify the Python and C++ code.
  • The PR is complete and small; read the Google eng practice (a CL equals a PR) to understand more about small PRs. In DGL, we consider PRs with fewer than 200 lines of core code change to be small (examples, tests, and documentation can be exempted).
  • All changes have test coverage
  • Code is well-documented
  • To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
  • Related issue is referred in this PR
  • If the PR is for a new model/paper, I've updated the example index here.

Changes


dgl-bot commented Jan 17, 2024

To trigger regression tests:

  • @dgl-bot run [instance-type] [which tests] [compare-with-branch];
    For example: @dgl-bot run g4dn.4xlarge all dmlc/master or @dgl-bot run c5.9xlarge kernel,api dmlc/master

dgl-bot commented Jan 17, 2024

Commit e33ca34, Build 1: ❌ CI test failed in Stage [Lint Check]. (Report path: link | Full logs path: link)

dgl-bot commented Jan 17, 2024

Commit a5b3e9d, Build 2: ❌ CI test failed in Stage [Lint Check]. (Report path: link | Full logs path: link)

dgl-bot commented Jan 17, 2024

Commit abf466c, Build 4: ⚪️ CI test cancelled due to overrun. (Report path: link | Full logs path: link)

dgl-bot commented Jan 17, 2024

Commit 3fef383, Build 3: ⚪️ CI test cancelled due to overrun. (Report path: link | Full logs path: link)

dgl-bot commented Jan 17, 2024

Commit 93c8a18, Build 5: ⚪️ CI test cancelled due to overrun. (Report path: link | Full logs path: link)

dgl-bot commented Jan 17, 2024

Commit cdfb60c, Build 6: ⚪️ CI test cancelled due to overrun. (Report path: link | Full logs path: link)

dgl-bot commented Jan 17, 2024

Commit b7ca852, Build 7: ⚪️ CI test cancelled due to overrun. (Report path: link | Full logs path: link)

dgl-bot commented Jan 17, 2024

Commit a28a3c8, Build 8: ✅ CI test succeeded. (Report path: link | Full logs path: link)


mfbalin commented Jan 17, 2024

@frozenbugs, I measured the memory consumption of the multi-GPU example in #6961. Without this PR, the consumption grows as more GPUs are used. With this PR, adding more GPUs does not significantly change the memory consumption. The tests pass as well. The multi-GPU example also seems to terminate cleanly.

@TristonC

We saw better scale-up performance on a single DGX node from 1 GPU to 8 GPUs with this PR.


mfbalin commented Jan 17, 2024

> We saw better scale-up performance on a single DGX node from 1 GPU to 8 GPUs with this PR.

Why does this PR lead to better performance? I thought it would only help lower the memory usage.

@TristonC

> We saw better scale-up performance on a single DGX node from 1 GPU to 8 GPUs with this PR.

> Why does this PR lead to better performance? I thought it would only help lower the memory usage.

We will find out whether it is just this PR or a combined effect with other PRs.

dgl-bot commented Jan 18, 2024

Commit 1c88ff7, Build 9: ✅ CI test succeeded. (Report path: link | Full logs path: link)

dgl-bot commented Jan 18, 2024

Commit cf6a022, Build 10: ⚪️ CI test cancelled due to overrun. (Report path: link | Full logs path: link)

@mfbalin mfbalin requested a review from czkkkkkk January 18, 2024 03:01
dgl-bot commented Jan 18, 2024

Commit 04372f7, Build 11: ✅ CI test succeeded. (Report path: link | Full logs path: link)

@mfbalin mfbalin merged commit c864c91 into dmlc:master Jan 18, 2024
2 checks passed
@mfbalin mfbalin deleted the gb_cuda_inplace_pin_memory_ branch January 18, 2024 06:51