Wandb logger #926
Conversation
FYI, just found this on their site: https://github.com/wandb/gitbook/blob/master/library/frameworks/pytorch/ignite.md
@fdlm thanks for the PR! Could you please provide an MNIST example of using WandB, like https://github.com/pytorch/ignite/blob/master/examples/contrib/mnist/mnist_with_neptune_logger.py
@sdesrozis could you please check the MNIST example by running it locally? Thanks!
```python
criterion = nn.CrossEntropyLoss()
trainer = create_supervised_trainer(model, optimizer, criterion, device=device)

if sys.version_info > (3,):
```
Maybe we can remove GPU logging, as W&B does it by itself.
Good point, will fix.
```python
    },
)

def iteration(engine):
```
We can use `global_step_from_engine` instead of `iteration`:

`global_step_transform=global_step_from_engine(trainer)`
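For reference, a minimal sketch of how that suggestion would look with the handler API used by the other contrib loggers (the `trainer` engine and the `output_transform` are illustrative, and import paths may differ between ignite versions):

```python
from ignite.contrib.handlers import global_step_from_engine
from ignite.contrib.handlers.wandb_logger import OutputHandler, WandBLogger
from ignite.engine import Events

# Constructor kwargs are assumed to be forwarded to wandb.init,
# as the other contrib loggers do for their backends.
wandb_logger = WandBLogger(project="pytorch-ignite-mnist")

# Log the training loss every iteration; the global step is resolved from
# the trainer's state rather than a hand-written iteration(engine) helper.
wandb_logger.attach(
    trainer,
    log_handler=OutputHandler(
        tag="training",
        output_transform=lambda loss: {"loss": loss},
        global_step_transform=global_step_from_engine(trainer),
    ),
    event_name=Events.ITERATION_COMPLETED,
)
```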
The problem is that `global_step_from_engine` will use the iteration number for handlers called on `Events.ITERATION_COMPLETED`, but the epoch number for handlers called on `Events.EPOCH_COMPLETED`. W&B does not allow logging events with a smaller step than a previously logged one, so I need to make sure to always use the iteration number.
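A sketch of the behavior being described (assuming the `trainer` from the example above; `global_step_from_engine` resolves the step from the event a handler fires on):

```python
from ignite.contrib.handlers import global_step_from_engine
from ignite.engine import Events

step_fn = global_step_from_engine(trainer)

step_fn(None, Events.ITERATION_COMPLETED)  # -> trainer.state.iteration, e.g. 100
step_fn(None, Events.EPOCH_COMPLETED)      # -> trainer.state.epoch, e.g. 1

# Once W&B has received a point at step=100, a later call such as
# wandb.log({...}, step=1) is dropped: steps must increase monotonically.
```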
OK, I see what you mean. So we cannot log any epoch numbers at all if we log iterations?
Anyway, we can do that `iteration(engine)` in a simpler way:

`global_step_transform=lambda *_: trainer.state.iteration`
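In context, that would replace the hand-written `iteration(engine)` helper (a sketch reusing the `wandb_logger` and `trainer` names from above):

```python
wandb_logger.attach(
    trainer,
    log_handler=OutputHandler(
        tag="training",
        output_transform=lambda loss: {"loss": loss},
        # Always report the trainer's iteration count, whatever event the
        # handler fires on, so the W&B step never decreases.
        global_step_transform=lambda *_: trainer.state.iteration,
    ),
    event_name=Events.ITERATION_COMPLETED,
)
```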
Exactly, at least not directly. The workaround (as described in the W&B docs) is to log the epoch as a "metric", and then select it as the x-axis in the W&B web interface. I'm not sure how to facilitate this in the logger here.
Thanks for the simpler `global_step_transform`, will add it!
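For completeness, that workaround would look roughly like this with the plain wandb API (the handler below is illustrative, not part of this PR):

```python
import wandb
from ignite.engine import Events

@trainer.on(Events.EPOCH_COMPLETED)
def log_epoch_as_metric(engine):
    # Log the epoch number as an ordinary metric keyed to the iteration
    # step; in the W&B web UI it can then be selected as the x-axis.
    wandb.log({"epoch": engine.state.epoch}, step=engine.state.iteration)
```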
@fdlm thanks for the update!
W&B is just an awesome logger!! It works perfectly. The data in the cloud on W&B is amazing. Very, very nice job :)
Very good. We can merge the PR once the conversations are resolved.
Fixes #865
Description:
Implements logging to Weights & Biases.
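A minimal sketch of the intended usage (the `trainer` and `evaluator` engines and the metric names are assumed for illustration; the constructor is assumed to forward its kwargs to wandb.init):

```python
from ignite.contrib.handlers.wandb_logger import OutputHandler, WandBLogger
from ignite.engine import Events

wandb_logger = WandBLogger(project="pytorch-ignite-mnist")

# Log validation metrics after each evaluation run, keyed to the trainer's
# iteration count so the W&B step stays monotonic (see discussion above).
wandb_logger.attach(
    evaluator,
    log_handler=OutputHandler(
        tag="validation",
        metric_names=["accuracy", "nll"],
        global_step_transform=lambda *_: trainer.state.iteration,
    ),
    event_name=Events.COMPLETED,
)
```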
Potential problems:
Check list: