
Evaluator device placement #193

Merged
merged 8 commits into from
Jul 20, 2022

Conversation

lvwerra
Member

@lvwerra lvwerra commented Jul 20, 2022

This PR adds device placement for the pipeline in the evaluator. There are two mechanisms:

  1. The user can pass a device to Evaluator.compute.
  2. If no device is passed, we attempt to infer whether a GPU is available and, if so, device 0 is used.

Alternatively, a user can always use a custom configuration and initialize the pipeline before passing it to the evaluator. If an already initialized pipeline is passed, the device argument has no effect.
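The inference logic described above can be sketched as follows. This is a hypothetical standalone sketch of what a helper like `Evaluator._infer_device` might do, not the PR's exact implementation; the function name `infer_device` and the exact framework checks are assumptions.

```python
def infer_device() -> int:
    """Return 0 if a GPU is visible to torch or tensorflow, else -1 (CPU)."""
    try:
        import torch
        # torch installed: use GPU 0 if CUDA is available
        device = 0 if torch.cuda.is_available() else -1
    except ImportError:
        try:
            import tensorflow as tf
            # fall back to tensorflow's view of physical GPUs
            device = 0 if len(tf.config.list_physical_devices("GPU")) > 0 else -1
        except ImportError:
            # neither framework installed: default to CPU
            device = -1
    return device
```

The returned integer matches the `device` convention of transformers pipelines, where -1 means CPU and a non-negative integer selects a GPU.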

@lvwerra lvwerra requested review from ola13 and lewtun July 20, 2022 13:52
@HuggingFaceDocBuilderDev

HuggingFaceDocBuilderDev commented Jul 20, 2022

The documentation is not available anymore as the PR was closed or merged.

Contributor

@ola13 ola13 left a comment


LGTM ✅

Member

@lewtun lewtun left a comment


Thanks for adding this killer feature @lvwerra - finally evaluations can go brum brum brum 🏎️ !

I've left a few nits and a question about the device variable being set when neither torch nor tf is installed (edge case). Otherwise, this LGTM!

src/evaluate/evaluator/base.py (outdated, resolved)
src/evaluate/evaluator/base.py (outdated, resolved)
    except ImportError:
        device = -1

    if device == -1:
Member


In the rare case that neither torch nor tensorflow is installed, this will error out because device is never assigned a value. Would it make sense to have a warning in that case?

Member Author


See comment in test

Member


Ah sorry I missed that - all good then!

src/evaluate/evaluator/image_classification.py (outdated, resolved)
src/evaluate/evaluator/image_classification.py (outdated, resolved)
src/evaluate/evaluator/question_answering.py (outdated, resolved)
src/evaluate/evaluator/question_answering.py (outdated, resolved)
src/evaluate/evaluator/question_answering.py (outdated, resolved)
src/evaluate/evaluator/text_classification.py (outdated, resolved)
    # neither pt nor tf is available
    pt_available = False
    tf_available = False
    self.assertEqual(Evaluator._infer_device(), -1)
Member


Interesting, I wonder how this test passes, since my understanding is that device is never explicitly defined when neither torch nor tf is installed 🤔

Member Author


If neither tf nor torch is installed, then the second except sets it to -1:

    except ImportError:
        device = -1
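The control flow being discussed can be demonstrated in isolation. The sketch below uses deliberately nonexistent module names (hypothetical stand-ins for torch and tensorflow) so that both imports fail, showing that the inner except always assigns device when neither framework is available:

```python
# Stand-in module names that do not exist, so both imports raise ImportError.
try:
    import _no_torch_here  # stands in for torch
    device = 0
except ImportError:
    try:
        import _no_tf_here  # stands in for tensorflow
        device = 0
    except ImportError:
        # reached when both imports fail: device is still assigned
        device = -1

print(device)  # -1
```

Because the assignment lives in the innermost except clause, there is no code path that leaves device undefined.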

Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
@lvwerra lvwerra merged commit cb4c343 into main Jul 20, 2022
@lvwerra lvwerra deleted the evaluator-device-placement branch July 20, 2022 16:13