
Explicitly setting torch.cuda.device at init_model #1056

Merged 1 commit into open-mmlab:master on Dec 1, 2021

Conversation

aldakata
Contributor

Thanks for your contribution; we appreciate it a lot. Following these instructions will make your pull request healthier and help it get feedback more quickly. If you do not understand some items, don't worry, just open the pull request and seek help from the maintainers.

Motivation

#1055

Modification

Explicitly set the CUDA device before calling model.to(device), as sketched below.
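A minimal sketch of the idea, assuming an init_model-style helper that accepts a device string such as 'cuda:1'; the helper name, signature, and usage below are illustrative, not the actual mmdet3d code:

```python
import torch
import torch.nn as nn


def move_model_to_device(model: nn.Module, device: str = 'cuda:0') -> nn.Module:
    """Illustrative helper: select the target GPU before moving the model.

    Making the requested GPU the *current* CUDA device before
    ``model.to(device)`` helps ensure that buffers created by custom CUDA
    ops land on the same GPU as the model weights instead of the default
    'cuda:0', which is the kind of multi-GPU crash reported in #1055.
    """
    if device.startswith('cuda') and torch.cuda.is_available():
        # Explicit device selection, as described in this PR.
        torch.cuda.set_device(torch.device(device))
    return model.to(device)


# Usage (requires at least two GPUs):
# model = move_model_to_device(nn.Linear(4, 4), device='cuda:1')
```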

Checklist

  1. Pre-commit or other linting tools are used to fix potential lint issues.
  2. The modification is covered by complete unit tests. If not, please add more unit tests to ensure correctness.
  3. If the modification has potential influence on downstream projects, this PR should be tested with downstream projects.
  4. The documentation has been modified accordingly, such as docstrings or example tutorials.

@CLAassistant

CLAassistant commented Nov 17, 2021

CLA assistant check
All committers have signed the CLA.

@ZwwWayne
Collaborator

Hi @aldakata,
Could you sign the CLA so that we can merge this PR?

@aldakata
Contributor Author

Signed!

@ZwwWayne ZwwWayne requested a review from wHao-Wu November 24, 2021 09:30
@ZwwWayne
Collaborator

It seems that this issue originates in CenterPoint; we will fix the root cause in CenterPoint rather than in the inference API.

@ZwwWayne ZwwWayne merged commit 3fd5ea2 into open-mmlab:master Dec 1, 2021
@wHao-Wu
Contributor

wHao-Wu commented Dec 1, 2021

Hi @aldakata,
Thanks for your feedback and contribution. We found that other spconv-based models are also affected by this problem. We will fix the spconv issue directly in the ops in the future.

@wHao-Wu wHao-Wu linked an issue Dec 7, 2021 that may be closed by this pull request
@OpenMMLab-Coodinator

Hi @aldakata, first of all, we want to express our gratitude for your significant PR to the MMDet3d project. Your contribution is highly appreciated, and we are grateful for your efforts in helping improve this open-source project during your personal time. We believe that many developers will benefit from your PR.

We would also like to invite you to join our Special Interest Group (SIG) private channel on Discord, where you can share your experiences and ideas and build connections with like-minded peers. To join the SIG channel, simply message the moderator OpenMMLab on Discord, or briefly share your open-source contributions in the #introductions channel and we will assist you. We look forward to seeing you there! Join us: https://discord.gg/raweFPmdzG

If you are Chinese or use WeChat, you are welcome to join our community on WeChat. You can add our assistant: openmmlabwx. Please add "mmsig + GitHub ID" as a remark when adding the assistant. :)
Thank you again for your contribution ❤


Successfully merging this pull request may close these issues.

Dual GPU cuda core dumped.