Fix bug in trainer.eval and add test cases for test_console_logger #1937
Conversation
LGTM, pending a final manual test
Can we add a test ensuring that calling eval doesn't add a new evaluator to state and leave it there? I think it works, but I would prefer a unit test in test_trainer_eval.py verifying it.
Also, to verify Mihir's question, maybe we can add a test that 1) creates a trainer with an evaluator, 2) calls eval with a passed-in evaluator, and 3) verifies that the original evaluator remains present on state.evaluators.
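The three steps above could be modeled with a small self-contained sketch. This is not Composer's actual Trainer/State API — the classes and names below are hypothetical stand-ins, just to illustrate the invariant the test should pin down: an evaluator passed to eval is used for that call only and is removed again afterwards.

```python
# Hypothetical toy model of the suggested test; Trainer/State here are
# illustrative stand-ins, not Composer's real classes.

class State:
    def __init__(self, evaluators):
        self.evaluators = list(evaluators)

class Trainer:
    def __init__(self, evaluators=()):
        self.state = State(evaluators)

    def eval(self, eval_dataloader=None):
        if eval_dataloader is not None:
            # Register an ad-hoc evaluator for the duration of this call...
            self.state.evaluators.append(("eval", eval_dataloader))
            try:
                self._run_eval()
            finally:
                # ...and restore state.evaluators afterwards, so repeated
                # eval() calls do not accumulate evaluators on state.
                self.state.evaluators.pop()
        else:
            self._run_eval()

    def _run_eval(self):
        pass  # evaluation loop elided

# The three steps the review asks for:
trainer = Trainer(evaluators=[("val", "val_dl")])       # 1) trainer with an evaluator
trainer.eval(eval_dataloader="adhoc_dl")                # 2) eval with a passed-in dataloader
assert trainer.state.evaluators == [("val", "val_dl")]  # 3) original evaluator still on state
```

The try/finally is the design point: even if evaluation raises, state.evaluators is restored to what the trainer was constructed with.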
Sure
Ok, added the unit test. Good to merge?
LGTM!
🙌 |
What does this PR do?
Hot fix for a bug caused by `ConsoleLogger`, due to `trainer.eval` (when called with an `eval_dataloader` specified) not adding `evaluators` to `state.evaluators`.
Adds test cases to `test_console_logger` for the following use cases:
- `trainer.eval(eval_dataloader=...)`
- `trainer.fit(eval_dataloader=...)`
Tests
Results:
What issue(s) does this change relate to?
Fixes CO-1732.