Properly handle empty metric_names passed to Trainer._filter_metrics #2700
Conversation
TODO: see why it's causing a test failure in console logging
- Can you add a unit test for this?
- The PR changes the effective public API by changing the behavior of what happens when you don't specify metric_names. I think that's not intended -- we should still default to all metrics if none are specified (putting aside the concerns of the current API design of deepcopying metrics etc...). What if we did a milder change -- keep behavior as-is for `None`, and instead filter all if the user list is `[]`? (See the sketch after this list.)
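A minimal sketch of the semantics being proposed here, not the actual Composer source -- in the real `Trainer._filter_metrics` the values are torchmetrics `Metric` instances and the name-matching rule may differ; the regex matching below is an assumption:

```python
import re
from typing import Dict, List, Optional


def _filter_metrics(metrics: Dict[str, object], metric_names: Optional[List[str]]) -> Dict[str, object]:
    """Hypothetical filter: None keeps everything, an explicit list filters."""
    if metric_names is None:
        # No filter specified: default to all metrics.
        return dict(metrics)
    # An explicit list filters; an empty list therefore yields no metrics.
    return {
        name: metric
        for name, metric in metrics.items()
        if any(re.search(pattern, name) for pattern in metric_names)
    }
```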
@mvpatel2000 Actually I think it does preserve the original intended behavior. What happens around the callsites for …
Hm, though on second thought, it could be nicer to just not populate the defaults in ensure_evaluator, so we only need to edit _filter_metrics and the evaluator init. The biggest thing was that evaluator.metric_names was never None previously, so we never actually differentiated between an empty list and defaults. edit: okay, I made this change.
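A hedged sketch of that direction (a simplified stand-in for `composer.core.Evaluator`, not the actual diff): defaulting `metric_names` to `None` keeps "use defaults" distinguishable from an explicit "no metrics" until filtering actually runs.

```python
from typing import Any, List, Optional


class Evaluator:
    """Simplified stand-in for composer.core.Evaluator (illustrative only)."""

    def __init__(self, *, label: str, dataloader: Any, metric_names: Optional[List[str]] = None):
        self.label = label
        self.dataloader = dataloader
        # Previously this was eagerly populated with default metric names (so it
        # was never None), which made an explicit [] indistinguishable from the
        # default. Leaving it as None defers default-population to filter time.
        self.metric_names = metric_names
```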
LGTM! Thanks for hunting this down :)
What does this PR do?
Addresses an error when using `use_train_metrics: false`: `KeyError: 'continuation_indices'`
Manual test runs:
no fix: empty-eval-gFgK8Q
with fix: empty-eval-with-fix-wpQHSK
Changes
(1) Previously, if `metric_names` was an empty list, we returned `metrics` without filtering. This is incorrect, as an empty list should signify no metrics. We update `_filter_metrics` to handle an empty list.

(2) `evaluator.metric_names` should default to `None`, not an empty list, so we can properly populate it with defaults later.
What issue(s) does this change relate to?
Before submitting
- Did you run `pre-commit` on your change? (see the `pre-commit` section of prerequisites)