Add wandb log #92
base: dev
Conversation
Thanks @Sentz98!
I left some comments on the code. During the next week I will test the code's functionality; if it works well, we can merge once the changes are approved.
"output_folder": "results", | ||
"experiment_name": "micromind_exp", | ||
"opt": "adam", # this is ignored if you are overriding the configure_optimizers | ||
"lr": 0.001, # this is ignored if you are overriding the configure_optimizers | ||
"debug": False, | ||
"log_wandb": False, | ||
"wandb_resume": "auto", # ["allow", "must", "never", "auto" or None] |
I'm not sure what the best way to do this is, but we need to better define the effect of each field of default_cfg. Maybe a long comment ("""xx""") after the closing bracket?
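A rough sketch of what such a trailing comment could look like (field names are taken from the diff above; the descriptions themselves are assumptions, not taken from the PR):

default_cfg = {
    "output_folder": "results",
    "experiment_name": "micromind_exp",
    "opt": "adam",  # ignored if configure_optimizers is overridden
    "lr": 0.001,  # ignored if configure_optimizers is overridden
    "debug": False,
    "log_wandb": False,
    "wandb_resume": "auto",  # ["allow", "must", "never", "auto" or None]
}
"""Default configuration.

output_folder    -- directory where checkpoints and logs are written.
experiment_name  -- name of the run; also reused as the wandb run name/id.
opt, lr          -- default optimizer and learning rate; ignored when
                    configure_optimizers is overridden.
debug            -- run in debug mode.
log_wandb        -- if True, metrics are also logged to Weights & Biases.
wandb_resume     -- resume policy forwarded to wandb.init.
"""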
name=self.hparams.experiment_name,
resume=self.hparams.wandb_resume,
id=self.hparams.experiment_name,
config=self.hparams,
Here we can check whether the configuration provides any extra arguments to be passed to the init and, if so, forward them (check the usage of the ** operator).
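A minimal sketch of what this could look like, assuming the extra arguments live under a hypothetical wandb_extra entry in the config (the key name and the helper are illustrative, not part of the PR):

import wandb

def init_wandb(hparams, project="micromind"):
    # Collect any user-supplied extra arguments and forward them to
    # wandb.init with the ** operator.
    extra = getattr(hparams, "wandb_extra", None) or {}
    return wandb.init(
        project=project,
        name=hparams.experiment_name,
        id=hparams.experiment_name,
        resume=hparams.wandb_resume,
        config=hparams,
        **extra,  # e.g. {"entity": "my-team", "tags": ["baseline"]}
    )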
@@ -574,6 +594,10 @@ def train(
else:
    val_metrics = train_metrics.update({"val_loss": loss_epoch / (idx + 1)})

if self.hparams.log_wandb:  # wandb log
I might be wrong, but it would be nicer if we split the train and valid panes inside the wandb logging, so that something like val/{metric_name} is logged instead of val_{metric_name}. Of course this change only applies to the .log call and should not be propagated to the entire core.
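A possible sketch of how the .log call alone could be adapted, without renaming metrics anywhere else in the core (the helper name is illustrative):

def namespaced(metrics):
    # Rewrite "train_xxx" / "val_xxx" keys into "train/xxx" / "val/xxx" so
    # that wandb groups them into separate panes; other keys are untouched.
    out = {}
    for key, value in metrics.items():
        for prefix in ("train_", "val_"):
            if key.startswith(prefix):
                out[prefix[:-1] + "/" + key[len(prefix):]] = value
                break
        else:
            out[key] = value
    return out

# wandb.log(namespaced({"train_loss": 0.31, "val_loss": 0.42, "lr": 1e-3}))
# would log {"train/loss": 0.31, "val/loss": 0.42, "lr": 1e-3}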
Add wandb logging; by default it is not active.
To start logging, just set the hyperparameter 'log_wandb' = True.
During the training loop it automatically logs the train and validation metrics, and the learning rate if there is an lr_scheduler.
It starts a wandb project named 'micromind' by default, and each run is named after the experiment. It automatically resumes the log if the experiment restarts after a crash.
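For illustration, enabling the logger would then look roughly like this (only the log_wandb and experiment_name fields come from the PR; the rest of the setup is assumed):

hparams = default_cfg.copy()
hparams["log_wandb"] = True            # turn on wandb logging
hparams["experiment_name"] = "my_exp"  # also used as the wandb run name and id
# ... build the MicroMind instance with these hparams and train as usual;
# metrics and, if an lr_scheduler is present, the learning rate are logged.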