WeightsHistHandler should plot all weights (incl those without grad) #2328
Assume you have a partially frozen model; it makes sense to only trace the evolution of the weights that are updated.
I see what you mean. Ultimately, we would like to have an optional list of weights to plot? Something like:

```python
from ignite.contrib.handlers import TensorboardLogger
from ignite.contrib.handlers.tensorboard_logger import WeightsHistHandler
from ignite.engine import Events


class FixedWeightsHistHandler(WeightsHistHandler):
    def __init__(self, model, tag=None, names_whitelist=None):
        super().__init__(model, tag=tag)
        self.names_whitelist = names_whitelist

    def __call__(self, engine, logger, event_name):
        if not isinstance(logger, TensorboardLogger):
            raise RuntimeError("Handler 'WeightsHistHandler' works only with TensorboardLogger")

        global_step = engine.state.get_event_attrib_value(event_name)
        tag_prefix = f"{self.tag}/" if self.tag else ""
        for name, p in self.model.named_parameters():
            # No `p.grad is None` check here, so frozen weights are logged too;
            # only the optional whitelist filters what gets plotted.
            if self.names_whitelist is not None and name not in self.names_whitelist:
                continue
            name = name.replace(".", "/")
            logger.writer.add_histogram(
                tag=f"{tag_prefix}weights/{name}",
                values=p.data.detach().cpu().numpy(),
                global_step=global_step,
            )


tb_logger = TensorboardLogger(log_dir="experiments/tb_logs")
# ...
tb_logger.attach(
    trainer,
    event_name=Events.ITERATION_COMPLETED,
    log_handler=FixedWeightsHistHandler(
        model,
        names_whitelist=[n for n, _ in model.named_parameters() if "conv" in n],
    ),
)
```
@DhDeepLIT I thought you wanted to avoid the case of non-existent grads and log the weights in any case, i.e. drop this check:

```python
if p.grad is None:
    continue
```

Or are you now talking about …
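For context, here is a minimal runnable sketch (the two-layer model is hypothetical) of why that check hides frozen weights: parameters frozen with `requires_grad_(False)` never receive a `.grad`, so `p.grad is None` stays true for them and the handler skips them.

```python
import torch
import torch.nn as nn

# A tiny model with a frozen first layer (hypothetical example)
model = nn.Sequential(nn.Linear(4, 8), nn.Linear(8, 2))
for p in model[0].parameters():
    p.requires_grad_(False)  # freeze: autograd will never populate .grad

model(torch.randn(3, 4)).sum().backward()

for name, p in model.named_parameters():
    # Frozen params print True here, so the `if p.grad is None: continue`
    # check silently drops them from the histogram log.
    print(name, p.grad is None)
```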
Yes, that is my need, but on your side, what was THE thing you wanted to do?
BTW, your proposed patch is fine for my needs; the whitelist is a great idea too.
FYI: `zero_grad` does not set grads to None by default, it just zeros them :)
Maybe it should be the other way around, though: …
I agree that a non-exact match for `names_whitelist` could be helpful...
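For illustration, a small sketch of what a non-exact match could look like, using a hypothetical predicate helper rather than ignite's actual API:

```python
# Hypothetical helper: build a predicate from substrings, so the whitelist
# becomes a callable instead of a list of exact parameter names.
def make_name_filter(substrings):
    def predicate(name):
        return any(s in name for s in substrings)
    return predicate

conv_filter = make_name_filter(["conv", "bn"])

# Inside FixedWeightsHistHandler.__call__, the exact-match test
#     self.names_whitelist is not None and name not in self.names_whitelist
# would become
#     self.names_whitelist is not None and not self.names_whitelist(name)
print(conv_filter("features.conv1.weight"))  # True
print(conv_filter("classifier.weight"))      # False
```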
There is an option `set_to_none=True`: https://pytorch.org/docs/stable/generated/torch.optim.Optimizer.zero_grad.html
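A quick, self-contained illustration of the difference (any small model works; the `set_to_none` flag is available in recent PyTorch versions):

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

model(torch.randn(1, 4)).sum().backward()
opt.zero_grad(set_to_none=False)  # grads are zeroed in place, tensors kept
print(model.weight.grad is None)  # False: a zero-filled grad tensor remains

model(torch.randn(1, 4)).sum().backward()
opt.zero_grad(set_to_none=True)   # grads are dropped entirely
print(model.weight.grad is None)  # True: the `p.grad is None` check now triggers
```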
Hi @vfdev-5 |
* Remove unnecessary code in BaseOutputHandler. Closes #2438
* Add ReduceLROnPlateauScheduler. Closes #1754
* Fix indentation issue
* Fix another indentation issue
* Fix PEP8 related issues
* Fix other PEP8 related issues
* Fix hopefully the last PEP8 related issue
* Fix hopefully the last PEP8 related issue
* Remove ReduceLROnPlateau's specific params and add link to it; also fix bug in min_lr check
* Fix state_dict bug and add a test
* Update docs
* Add doctest and fix typo
* Add feature FixedWeightsHistHandler. Closes #2328
* Move FixedWeightsHistHandler's job to WeightsHistHandler. Closes #2328
* Enable whitelist to be callable
* autopep8 fix
* Refactor constructor
* Change whitelist to be List[str]
* Add whitelist callable type
* Fix bug in MNIST tensorboard example
* Fix docstring
* Update ignite/contrib/handlers/tensorboard_logger.py

Co-authored-by: vfdev <vfdev.5@gmail.com>
Co-authored-by: sadra-barikbin <sadra-barikbin@users.noreply.github.com>
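Per the commit list above ("Change whitelist to be List[str]", "Add whitelist callable type"), the merged `WeightsHistHandler` accepts a `whitelist` argument. A sketch of how it might be used, with `trainer` and `model` assumed to exist as in the earlier snippet; check the exact signature against the ignite docs:

```python
from ignite.contrib.handlers import TensorboardLogger
from ignite.contrib.handlers.tensorboard_logger import WeightsHistHandler
from ignite.engine import Events

tb_logger = TensorboardLogger(log_dir="experiments/tb_logs")

# As a list of parameter names...
tb_logger.attach(
    trainer,
    event_name=Events.ITERATION_COMPLETED,
    log_handler=WeightsHistHandler(model, whitelist=["fc.weight", "fc.bias"]),
)

# ...or as a callable predicate over (name, parameter)
tb_logger.attach(
    trainer,
    event_name=Events.ITERATION_COMPLETED,
    log_handler=WeightsHistHandler(model, whitelist=lambda name, p: "conv" in name),
)
```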
🚀 Feature
Currently, `WeightsHistHandler` does not log weights without grad: see `ignite/contrib/handlers/tensorboard_logger.py`, lines 434 to 435 at a168476, where parameters with `p.grad is None` are skipped.
Let's add an option to enable logging all weights. Maybe we can even do it without an option and just log everything?
Meanwhile, here is a workaround for TensorboardLogger:
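(The snippet below is a reconstruction sketch, not the author's original code: a plain function attached as the log handler, logging every parameter with no grad check; `trainer` and `model` are assumed to exist.)

```python
from ignite.contrib.handlers import TensorboardLogger
from ignite.engine import Events

tb_logger = TensorboardLogger(log_dir="experiments/tb_logs")

def log_all_weights(engine, logger, event_name):
    # Log every parameter's histogram, with no `p.grad is None` check,
    # so frozen weights are included as well.
    global_step = engine.state.get_event_attrib_value(event_name)
    for name, p in model.named_parameters():
        logger.writer.add_histogram(
            tag=f"weights/{name.replace('.', '/')}",
            values=p.data.detach().cpu().numpy(),
            global_step=global_step,
        )

tb_logger.attach(
    trainer,
    event_name=Events.ITERATION_COMPLETED,
    log_handler=log_all_weights,
)
```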