PyTorch Implementation of Iterated Integrated Attributions


Introduction

This is the official PyTorch implementation of the Iterated Integrated Attributions (IIA) method.

We introduce a novel method for visualizing the predictions of vision models, as well as explanations for a specific target class. The method is built on the concept of iterated integrated attributions.
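IIA builds on the idea of integrated attributions. As background only, and not the repository's code, here is a generic sketch of plain integrated gradients, the quantity that IIA iterates, applied to a toy analytic function:

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline, steps=100):
    """Midpoint-rule approximation of IG_i = (x_i - b_i) * integral_0^1 df/dx_i."""
    alphas = (np.arange(steps) + 0.5) / steps  # midpoints of the integration grid
    total = np.zeros_like(x, dtype=float)
    for a in alphas:
        # accumulate the gradient along the straight path from baseline to x
        total += grad_f(baseline + a * (x - baseline))
    return (x - baseline) * total / steps

# Example: f(x) = sum(x**2), so grad f(x) = 2x.
x = np.array([1.0, 2.0])
baseline = np.zeros_like(x)
attr = integrated_gradients(lambda v: 2 * v, x, baseline)
# Completeness: the attributions sum to f(x) - f(baseline).
```

IIA replaces this single integral with an iterated integration scheme; see the paper for the full method.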

Producing IIA Classification Saliency Maps

Images should be stored in the data/ILSVRC2012_img_val directory. The images to be evaluated and their target classes are listed in the data/pics.txt file, one per line, in the format <file_name> <target_class_number> (e.g. ILSVRC2012_val_00002214.JPEG 153).
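Reading such a file can be sketched as follows (parse_pics_file is a hypothetical helper for illustration, not part of this repository):

```python
def parse_pics_file(lines):
    """Parse lines of the form '<file_name> <target_class_number>'."""
    pairs = []
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines
        file_name, target = line.rsplit(" ", 1)
        pairs.append((file_name, int(target)))
    return pairs

sample = ["ILSVRC2012_val_00002214.JPEG 153"]
# parse_pics_file(sample) -> [("ILSVRC2012_val_00002214.JPEG", 153)]
```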

To generate saliency maps using our method on CNN, run the following command:

python cnn_saliency_map_generator.py

And to produce maps for ViT, run:

python vit_saliency_map_generator.py

By default, CNN saliency maps are generated from the last layer of a ResNet-101 network. You can change the network and layer by modifying the model_name and FEATURE_LAYER_NUMBER variables in the cnn_saliency_map_generator.py script. For ViT, the default is ViT-Base, which can likewise be configured via the model_name variable in the vit_saliency_map_generator.py script.
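For instance (the variable names come from the README; the values below are illustrative assumptions, and the actual defaults are defined inside the scripts):

```python
# cnn_saliency_map_generator.py -- illustrative values:
model_name = "resnet101"       # CNN backbone to explain
FEATURE_LAYER_NUMBER = -1      # which feature layer to attribute (e.g. the last)

# vit_saliency_map_generator.py -- illustrative value:
model_name = "vit_base"        # ViT variant (default: ViT-Base)
```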

The generated saliency maps will be stored in the qualitive_results directory.

ViT model weight files:

Reproducing Segmentation Results

Download the segmentation datasets:

To run image segmentation, download the VOC and COCO datasets to data/VOC and data/COCO, respectively. For ImageNet segmentation, use the dataset provided in the link (Link to download dataset). To run segmentation with ViT, set the chosen_dataset variable in the seg_vit_datasets.py script (the default is COCO) and run:

python seg_vit_datasets.py

To run segmentation with a CNN, substitute the desired dataset for {dataset} (one of imagenet, coco, or voc) in the following command and run it:

python segmentation_cnn_{dataset}.py
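The placeholder expands to one script per dataset; the substitution can be sketched as:

```python
def segmentation_script(dataset):
    """Map a dataset name to the corresponding CNN segmentation command."""
    assert dataset in {"imagenet", "coco", "voc"}, "unsupported dataset"
    return f"python segmentation_cnn_{dataset}.py"

# segmentation_script("voc") -> "python segmentation_cnn_voc.py"
```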

Credits

For comparison, we used code from the following repositories:

Citation

Please cite our work if you use it in your research:

@InProceedings{Barkan_2023_ICCV,
    author    = {Barkan, Oren and Elisha, Yehonatan and Asher, Yuval and Eshel, Amit and Koenigstein, Noam},
    title     = {Visual Explanations via Iterated Integrated Attributions},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {2073-2084}
}
