Update support for HumanEval #2550

Merged
11 commits merged into mosaicml:dev on Sep 25, 2023
Conversation

@mcarbin (Contributor) commented Sep 20, 2023

What does this PR do?

Updated the support for HumanEval to compute the pass@k metric with n samples. Specifically, this separates n and k, which were previously coupled to the same value. Now we can compute pass@k with n > k, as is done in implementations in other work.
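
For context, the standard unbiased pass@k estimator from the HumanEval paper (Chen et al., 2021) treats each problem as n generated samples of which c pass the unit tests; decoupling n from k lets n > k tighten the estimate. A minimal sketch of that estimator (illustrative only, not necessarily the exact code in this PR):

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate for one problem: 1 - C(n - c, k) / C(n, k).

    n: total samples generated, c: samples that passed, k: draw budget (k <= n).
    """
    if n - c < k:
        # Every size-k subset must include at least one passing sample.
        return 1.0
    # Numerically stable product form of 1 - C(n - c, k) / C(n, k).
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))
```

For example, pass_at_k(n=10, c=3, k=1) evaluates to 0.3 (exactly c / n when k = 1), while larger k exercises the full combinatorial form.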

What issue(s) does this change relate to?

N/A

Before submitting

  • Have you read the contributor guidelines?
  • Did you update any related docs and document your change?
  • Did you update any related tests and add any new tests related to your change? (see testing)
  • Did you run the tests locally to make sure they pass?
  • Did you run pre-commit on your change? (see the pre-commit section of prerequisites)

@mcarbin requested a review from a team as a code owner, September 20, 2023 03:18
@mvpatel2000 (Contributor) left a comment

Can you please add a unit test?

@mcarbin self-assigned this, Sep 20, 2023
@mcarbin (Contributor, Author) commented Sep 20, 2023

Sure, @mvpatel2000. Would it be acceptable to use the previous tests (#2301), update them (set different values of pass_at_k), and post the output of the tests?

@mvpatel2000 (Contributor) replied:

Yep, should be fine to update previous tests. No need to paste output -- CI/CD will run it automatically, so as long as it passes it should be fine.
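
For illustration, a hypothetical parametrized test along the lines discussed here, exercising the pass_at_k sketch above; the helper name and the expected values are assumptions, not the PR's actual tests:

```python
import pytest

# Cases assume the illustrative pass_at_k estimator sketched earlier.
@pytest.mark.parametrize('n,c,k,expected', [
    (10, 10, 1, 1.0),  # all samples pass, so pass@1 is certain
    (10, 0, 5, 0.0),   # no samples pass, so pass@5 is zero
    (2, 1, 2, 1.0),    # k == n with one pass: the passing sample is always drawn
    (4, 2, 1, 0.5),    # half the samples pass, so pass@1 equals c / n
])
def test_pass_at_k(n, c, k, expected):
    assert pass_at_k(n, c, k) == pytest.approx(expected)
```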

composer/datasets/in_context_learning_evaluation.py (review thread outdated, resolved)
composer/metrics/nlp.py (review thread outdated, resolved)
@mcarbin (Contributor, Author) commented Sep 22, 2023

@mvpatel2000 @dakinggg, open to any more input. Otherwise, the latest revision is passing tests and I've resolved the other asks.

@mvpatel2000 (Contributor) left a comment

LGTM, but will wait for Daniel to approve since he has more context

composer/datasets/in_context_learning_evaluation.py (review thread outdated, resolved)
@dakinggg (Contributor) left a comment

LGTM

@mcarbin merged commit d3d3a4e into mosaicml:dev on Sep 25, 2023
17 checks passed