
[GraphBolt][CUDA] Eliminate synchronization for overlap_graph_fetch. #7709

Merged
merged 5 commits into from
Aug 16, 2024

Conversation

mfbalin
Collaborator

@mfbalin mfbalin commented Aug 16, 2024

Description

When fetching the in-subgraph in the pinned-memory case, we eliminate the CPU-GPU synchronization.

With the asynchronous PRs, the GPU graph cache now benefits both the neighbor sampler and the layer neighbor sampler.
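As a rough illustration of the idea (not DGL's actual API; `fetch_insubgraph` and `compute_on_host` are hypothetical names), eliminating the synchronization means the fetch is issued asynchronously and only awaited when its result is needed, so the transfer overlaps with other work instead of blocking the caller:

```python
# Sketch of overlapping an asynchronous fetch with independent work.
# The names and timings are stand-ins, not GraphBolt internals.
import time
from concurrent.futures import ThreadPoolExecutor

def fetch_insubgraph(seeds):
    # Stand-in for an asynchronous fetch from pinned memory.
    time.sleep(0.3)
    return [s * 2 for s in seeds]

def compute_on_host(batch):
    # Independent work that can proceed while the fetch is in flight.
    time.sleep(0.3)
    return sum(batch)

start = time.monotonic()
with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(fetch_insubgraph, [1, 2, 3])  # issue fetch, do not wait
    host_result = compute_on_host([4, 5, 6])           # overlaps with the fetch
    subgraph = future.result()                          # synchronize only here
elapsed = time.monotonic() - start
# elapsed is close to one 0.3 s step, not the 0.6 s sum of both steps
```

The point of the PR is analogous: the sampler no longer pays a blocking synchronization at fetch time, which is what lets the GPU graph cache help both samplers.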

Checklist

Please feel free to remove inapplicable items for your PR.

  • The PR title starts with [$CATEGORY] (such as [NN], [Model], [Doc], [Feature])
  • I've leveraged the tools to beautify the Python and C++ code.
  • The PR is complete and small; read the Google engineering practice (a CL is equivalent to a PR) to understand more about small PRs. In DGL, we consider PRs with fewer than 200 lines of core code change to be small (examples, tests, and documentation may be exempted).
  • All changes have test coverage
  • Code is well-documented
  • To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
  • The related issue is referenced in this PR
  • If the PR is for a new model/paper, I've updated the example index here.

Changes

@mfbalin mfbalin added the expedited if it doesn't affect the main path approve first to unblock related projects, and review later label Aug 16, 2024
@mfbalin mfbalin requested a review from frozenbugs August 16, 2024 01:41
@dgl-bot
Collaborator

dgl-bot commented Aug 16, 2024

To trigger regression tests:

  • @dgl-bot run [instance-type] [which tests] [compare-with-branch];
    For example: @dgl-bot run g4dn.4xlarge all dmlc/master or @dgl-bot run c5.9xlarge kernel,api dmlc/master

@dgl-bot
Collaborator

dgl-bot commented Aug 16, 2024

Commit ID: 2280ecc

Build ID: 1

Status: ⚪️ CI test cancelled due to overrun.

Report path: link

Full logs path: link

@dgl-bot
Collaborator

dgl-bot commented Aug 16, 2024

Commit ID: 3bbd8ee

Build ID: 2

Status: ✅ CI test succeeded.

Report path: link

Full logs path: link

@mfbalin mfbalin removed the expedited if it doesn't affect the main path approve first to unblock related projects, and review later label Aug 16, 2024
@mfbalin mfbalin marked this pull request as draft August 16, 2024 03:03
@mfbalin mfbalin marked this pull request as ready for review August 16, 2024 21:04
@mfbalin mfbalin added the expedited if it doesn't affect the main path approve first to unblock related projects, and review later label Aug 16, 2024
@dgl-bot
Collaborator

dgl-bot commented Aug 16, 2024

Commit ID: b303543fa328b90f4f8ddb7275e08369527124d7

Build ID: 3

Status: ⚪️ CI test cancelled due to overrun.

Report path: link

Full logs path: link

@dgl-bot
Collaborator

dgl-bot commented Aug 16, 2024

Commit ID: 8a6730e

Build ID: 4

Status: ⚪️ CI test cancelled due to overrun.

Report path: link

Full logs path: link

@dgl-bot
Collaborator

dgl-bot commented Aug 16, 2024

Commit ID: 8a6730e

Build ID: 5

Status: ⚪️ CI test cancelled due to overrun.

Report path: link

Full logs path: link

@dgl-bot
Collaborator

dgl-bot commented Aug 16, 2024

Commit ID: 128dd37

Build ID: 6

Status: ✅ CI test succeeded.

Report path: link

Full logs path: link

@mfbalin mfbalin merged commit 2521081 into dmlc:master Aug 16, 2024
2 checks passed
@mfbalin mfbalin deleted the gb_cuda_async_overlap_graph_fetch branch August 16, 2024 21:42
@frozenbugs
Collaborator

LGTM.
