
WeightsHistHandler should plot all weights (incl those without grad) #2328

Closed · vfdev-5 opened this issue Nov 16, 2021 · 10 comments · Fixed by #2523 or #2547

Comments

vfdev-5 (Collaborator) commented Nov 16, 2021

🚀 Feature

Currently, WeightsHistHandler does not log weights that have no grad:

if p.grad is None:
continue

Let's add an option to enable logging all weights. Maybe we can even do it without an option and just log everything?

Meanwhile, here is a workaround for TensorboardLogger:

from ignite.contrib.handlers import TensorboardLogger
from ignite.contrib.handlers.tensorboard_logger import WeightsHistHandler

class FixedWeightsHistHandler(WeightsHistHandler):

    def __call__(self, engine, logger, event_name):
        if not isinstance(logger, TensorboardLogger):
            raise RuntimeError("Handler 'WeightsHistHandler' works only with TensorboardLogger")

        global_step = engine.state.get_event_attrib_value(event_name)
        tag_prefix = f"{self.tag}/" if self.tag else ""
        # Log every parameter, including those without a grad
        for name, p in self.model.named_parameters():
            name = name.replace(".", "/")
            logger.writer.add_histogram(
                tag=f"{tag_prefix}weights/{name}",
                values=p.data.detach().cpu().numpy(),
                global_step=global_step,
            )
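
For completeness, a minimal sketch of attaching this workaround (not from the original comment; it assumes a trainer Engine and a model already exist):

from ignite.engine import Events

# Assumes `trainer` (an ignite Engine) and `model` (a torch.nn.Module) exist.
tb_logger = TensorboardLogger(log_dir="experiments/tb_logs")
tb_logger.attach(
    trainer,
    event_name=Events.ITERATION_COMPLETED,
    log_handler=FixedWeightsHistHandler(model),
)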
DhDeepLIT (Contributor) commented
Assume you have a partially frozen model; it makes sense to only trace the evolution of the weights that are updated.
The question is: is checking whether p.grad is not None what we want?
What about weights modified by other computations, like pca_low_rank?
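
To illustrate the frozen-model point, a minimal sketch (hypothetical two-layer model, not from the thread): parameters frozen via requires_grad_(False) never receive a gradient, so p.grad stays None and the current check skips them:

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 4), nn.Linear(4, 2))
model[0].requires_grad_(False)  # freeze the first layer

model(torch.randn(8, 4)).sum().backward()

for name, p in model.named_parameters():
    # Frozen parameters never get a .grad, so the `p.grad is None`
    # check in WeightsHistHandler skips them.
    print(name, p.grad is None)
# 0.weight True, 0.bias True, 1.weight False, 1.bias False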

vfdev-5 (Collaborator, Author) commented Nov 16, 2021

I see what you mean. Ultimately, we would like to have an optional list of weights to plot?
Something like:

from ignite.contrib.handlers import TensorboardLogger
from ignite.contrib.handlers.tensorboard_logger import WeightsHistHandler
from ignite.engine import Events


class FixedWeightsHistHandler(WeightsHistHandler):

    def __init__(self, model, tag=None, names_whitelist=None):
        super().__init__(model, tag=tag)
        self.names_whitelist = names_whitelist

    def __call__(self, engine, logger, event_name):
        if not isinstance(logger, TensorboardLogger):
            raise RuntimeError("Handler 'WeightsHistHandler' works only with TensorboardLogger")

        global_step = engine.state.get_event_attrib_value(event_name)
        tag_prefix = f"{self.tag}/" if self.tag else ""
        for name, p in self.model.named_parameters():
            # Skip any parameter not explicitly whitelisted
            if self.names_whitelist is not None and name not in self.names_whitelist:
                continue

            name = name.replace(".", "/")
            logger.writer.add_histogram(
                tag=f"{tag_prefix}weights/{name}",
                values=p.data.detach().cpu().numpy(),
                global_step=global_step,
            )


tb_logger = TensorboardLogger(log_dir="experiments/tb_logs")
# ...
tb_logger.attach(
    trainer,
    event_name=Events.ITERATION_COMPLETED,
    log_handler=FixedWeightsHistHandler(
        model, names_whitelist=[n for n, _ in model.named_parameters() if "conv" in n]
    ),
)

vfdev-5 (Collaborator, Author) commented Nov 16, 2021

@DhDeepLIT I thought you wanted to avoid the case of non-existent grads and log the weights in any case?

if p.grad is None:
    continue

Or are you now talking about GradsHistHandler?

DhDeepLIT (Contributor) commented

Yes, that is my need, but on your side, what was THE thing you wanted to do?
Consider that the zero_grad() function can also set grads to None.
Thus, checking whether the grad attribute is None may not be the right option.

DhDeepLIT (Contributor) commented

BTW, your proposed patch is fine for my need; the whitelist is a great idea too.

vfdev-5 (Collaborator, Author) commented Nov 16, 2021

FYI: zero_grad does not set grads to None by default but just zeros them :)

https://github.com/pytorch/pytorch/blob/33e9a0b5f6674bd429bda689534e1c987b38cf6e/torch/nn/modules/module.py#L1785-L1794

DhDeepLIT (Contributor) commented

Maybe it should be the other way around, though:
it is easier to filter the weights/grads we want by checking whether a key is present in the module's full name. Say you want to check ALL fc layers: it would be better to check whether 'fc' is in the parameter's name, right?

vfdev-5 (Collaborator, Author) commented Nov 16, 2021

I agree that a non-exact match for names_whitelist could be helpful...
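
A minimal sketch of such a non-exact (substring) match; the should_log helper name is illustrative, not part of ignite:

from typing import List, Optional

def should_log(name: str, names_whitelist: Optional[List[str]]) -> bool:
    # Substring match: log the parameter if any whitelist key appears
    # in its name, or if no whitelist is given at all.
    if names_whitelist is None:
        return True
    return any(key in name for key in names_whitelist)

# should_log("model.fc1.weight", ["fc"]) -> True
# should_log("model.conv1.weight", ["fc"]) -> False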

DhDeepLIT (Contributor) commented

https://pytorch.org/docs/stable/generated/torch.optim.Optimizer.zero_grad.html

There is an option, set_to_none=True.
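
A short sketch of that option in plain PyTorch (not from the thread): after zero_grad(set_to_none=True), even trainable parameters report p.grad is None, so the handler's check would skip them until the next backward():

import torch
import torch.nn as nn

model = nn.Linear(4, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

model(torch.randn(1, 4)).sum().backward()
print(all(p.grad is not None for p in model.parameters()))  # True

opt.zero_grad(set_to_none=True)
# Grads are now None, so `if p.grad is None: continue` would skip
# these trainable weights until the next backward().
print(all(p.grad is None for p in model.parameters()))  # True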

sadra-barikbin (Collaborator) commented

Hi @vfdev-5
I'm back. Assign it to me if you will.

sadra-barikbin added commits to sadra-barikbin/ignite that referenced this issue on Mar 22, Mar 29, and Apr 10, 2022
vfdev-5 added a commit that referenced this issue Apr 11, 2022
* Remove unnecessary code in BaseOutputHandler

Closes #2438

* Add ReduceLROnPlateauScheduler

Closes #1754

* Fix indentation issue

* Fix another indentation issue

* Fix PEP8 related issues

* Fix other PEP8 related issues

* Fix hopefully the last PEP8 related issue

* Fix hopefully the last PEP8 related issue

* Remove ReduceLROnPlateau's specific params and add link to it

Also fix bug in min_lr check

* Fix state_dict bug and add a test

* Update docs

* Add doctest and fix typo

* Add feature FixedWeightsHistHandler

Closes #2328

* Move FixedWeightsHistHandler's job to WeightsHistHandler

Closes #2328

* Enable whitelist to be callable

* autopep8 fix

* Refactor constructor

* Change whitelist to be List[str]

* Add whitelist callable type

* Fix bug in MNIST tensorboard example

Co-authored-by: vfdev <vfdev.5@gmail.com>
Co-authored-by: sadra-barikbin <sadra-barikbin@users.noreply.github.com>
vfdev-5 added a commit that referenced this issue Apr 14, 2022, with the same commit list as above plus:

* Fix docstring

* Update ignite/contrib/handlers/tensorboard_logger.py

Co-authored-by: vfdev <vfdev.5@gmail.com>
Co-authored-by: sadra-barikbin <sadra-barikbin@users.noreply.github.com>
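
For reference, the merged feature (#2523 and #2547) moved the whitelist into WeightsHistHandler itself. A hedged usage sketch based on the commit messages above; exact argument semantics may differ in your ignite version:

from ignite.contrib.handlers import TensorboardLogger
from ignite.contrib.handlers.tensorboard_logger import WeightsHistHandler
from ignite.engine import Events

# Assumes `trainer` and `model` exist, as in the snippets above.
tb_logger = TensorboardLogger(log_dir="experiments/tb_logs")

# whitelist as a list of names (per "Change whitelist to be List[str]")
tb_logger.attach(
    trainer,
    event_name=Events.ITERATION_COMPLETED,
    log_handler=WeightsHistHandler(model, whitelist=["fc"]),
)

# or as a callable (per "Enable whitelist to be callable")
tb_logger.attach(
    trainer,
    event_name=Events.ITERATION_COMPLETED,
    log_handler=WeightsHistHandler(model, whitelist=lambda name, p: "weight" in name),
)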