
Properly handle empty metric_names passed to Trainer._filter_metrics #2700

Merged: 6 commits merged into mosaicml:dev on Nov 9, 2023

Conversation

@irenedea (Contributor) commented on Nov 8, 2023

What does this PR do?

Fixes a `KeyError: 'continuation_indices'` raised when `use_train_metrics: false` is set.
Manual test runs:
no fix: empty-eval-gFgK8Q
with fix: empty-eval-with-fix-wpQHSK

Changes
(1) Previously, if `metric_names` was an empty list, `_filter_metrics` returned the metrics without filtering. This is incorrect: an empty list should signify no metrics. This PR updates the filtering to handle an empty list correctly.
(2) `evaluator.metric_names` should default to `None`, not an empty list, so that it can be properly populated with defaults later.

What issue(s) does this change relate to?

Before submitting

  • Have you read the contributor guidelines?
  • Is this change a documentation change or typo fix? If so, skip the rest of this checklist.
  • Was this change discussed/approved in a GitHub issue first? It is much more likely to be merged if so.
  • Did you update any related docs and document your change?
  • Did you update any related tests and add any new tests related to your change? (see testing)
  • Did you run the tests locally to make sure they pass?
  • Did you run pre-commit on your change? (see the pre-commit section of prerequisites)

@irenedea irenedea requested a review from dakinggg November 8, 2023 22:25
@irenedea irenedea marked this pull request as draft November 8, 2023 22:55
@irenedea (Contributor, Author) commented on Nov 8, 2023

TODO: see why it's causing a test failure in console logging

@irenedea irenedea marked this pull request as ready for review November 9, 2023 01:26
@mvpatel2000 (Contributor) left a comment:


  1. Can you add a unit test for this?
  2. The PR changes the effective public API by changing the behavior of what happens when you don't specify metric_names. I think that's not intended -- we should still default to all metrics if none are specified (putting aside the concerns of the current API design of deepcopying metrics etc...). What if we did a milder change -- keep behavior as-is for None, and instead filter all if user list is []?

Review thread on composer/trainer/trainer.py (outdated; resolved)
@irenedea (Contributor, Author) commented on Nov 9, 2023

@mvpatel2000 Actually I think it does preserve the original intended behavior. What happens around the callsites for _filter_metrics is the following:
(1) Upon instantiation of an evaluator, `evaluator.metric_names` is `None` by default. (Previously it defaulted to an empty list, so it was impossible to differentiate between the default and an explicitly empty list.)
(2) ensure_evaluator(evaluator, default_metrics=model_metric_names) is called for each evaluator. I made the change that this should populate the evaluator with the defaults (aka all metrics) if not set yet. It seemed appropriate given the call parameters.
(3) evaluator.metric_names is now populated with defaults and used in _filter_metrics
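The flow described in points (1)-(3) can be sketched roughly like this. The class and helper names mirror the comment above but are illustrative, not the actual Composer source:

```python
from typing import List, Optional

class Evaluator:
    """Illustrative stand-in for Composer's Evaluator."""
    def __init__(self, label: str, metric_names: Optional[List[str]] = None):
        # (1) Default to None so "not specified" is distinguishable
        #     from an explicitly empty list.
        self.label = label
        self.metric_names = metric_names

def ensure_evaluator(evaluator: Evaluator,
                     default_metric_names: List[str]) -> Evaluator:
    # (2) Populate the defaults (all model metrics) only if the user
    #     never set metric_names; an explicit [] is left alone.
    if evaluator.metric_names is None:
        evaluator.metric_names = list(default_metric_names)
    return evaluator
```

After `ensure_evaluator` runs, `evaluator.metric_names` is always a concrete list, so step (3) — passing it to `_filter_metrics` — never sees an ambiguous value.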

@irenedea
Copy link
Contributor Author

irenedea commented Nov 9, 2023

Hm, on second thought, it would be cleaner not to populate the defaults in `ensure_evaluator`, so we only need to edit `_filter_metrics` and the evaluator init. The key point is that `evaluator.metric_names` was never `None` previously, so we never actually differentiated between an empty list and the defaults.

edit: okay, I made this change

@mvpatel2000 (Contributor) left a comment:


LGTM! Thanks for hunting this down :)

@irenedea irenedea enabled auto-merge (squash) November 9, 2023 16:19
@irenedea irenedea merged commit 6f29ad6 into mosaicml:dev Nov 9, 2023
16 checks passed