
Update probe to always use cpu for loading models #6128

Merged 1 commit into main from probe-should-always-use-cpu on Apr 3, 2024

Conversation

brandonrising (Collaborator)
Summary

Currently, the base model probe class does not specify a map location for torch loads. Because of this, if a model saved with GPU tensors is imported, torch will attempt to load it onto CUDA whether or not a CUDA device exists on the machine.
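A minimal sketch of the idea behind the fix, assuming the probe reads checkpoints with torch.load; the function name here is hypothetical, not the exact InvokeAI probe code:

```python
import torch

def read_checkpoint_meta(path: str) -> dict:
    # map_location="cpu" remaps any tensors saved on CUDA devices to CPU
    # during deserialization, so probing works on machines without a GPU
    # (e.g. macOS), instead of failing when torch tries to restore them to CUDA.
    return torch.load(path, map_location="cpu")
```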

QA Instructions

Attempt to install this model on a device with no GPU.

Prior to this change (on macOS): [screenshot, 2024-04-03 3:19 PM]

After this change (on macOS): [screenshot, 2024-04-03 3:20 PM]

Merge Plan

Can be merged when approved

github-actions bot added the python (PRs that change python files) and backend (PRs that change backend files) labels on Apr 3, 2024
psychedelicious enabled auto-merge (rebase) on April 3, 2024 at 20:13
psychedelicious merged commit 51ca59c into main on Apr 3, 2024
14 checks passed
psychedelicious deleted the probe-should-always-use-cpu branch on April 3, 2024 at 20:34