This repository contains the code for the paper "Understanding the (Extra-)Ordinary: Validating Deep Model Decisions with Prototypical Concept-based Explanations".
Credits for images: pexels.com.
Check out our project page here.
We provide the code for all experiments here.
We are also working on making the implementation as easy to use as possible.
Ensuring both transparency and safety is critical when deploying Deep Neural Networks (DNNs) in high-risk applications, such as medicine. The field of explainable AI (XAI) has proposed various methods to comprehend the decision-making processes of opaque DNNs. However, only a few XAI methods are suitable for ensuring safety in practice, as they heavily rely on repeated, labor-intensive, and possibly biased human assessment. In this work, we present a novel post-hoc concept-based XAI framework that conveys not only instance-wise (local) but also class-wise (global) decision-making strategies via prototypes. What sets our approach apart is the combination of local and global strategies, enabling a clearer understanding of the (dis-)similarities between model decisions and the expected (prototypical) concept use, ultimately reducing the dependence on long-term human assessment. Quantifying the deviation from prototypical behavior not only allows us to associate predictions with specific model sub-strategies, but also to detect outlier behavior. As such, our approach constitutes an intuitive and explainable tool for model validation. We demonstrate the effectiveness of our approach in identifying out-of-distribution samples, spurious model behavior, and data quality issues across three datasets (ImageNet, CUB-200, and CIFAR-10), utilizing VGG, ResNet, and EfficientNet architectures.
We provide a tutorial on how to use the PCX framework on the ImageNet flamingo class. The tutorial is divided into the following steps:
First, clone the repository and install the required packages:
pip install -r requirements.txt
Note: We use Python 3.8.10 for this tutorial.
Second, unzip the flamingo data samples that were retrieved from pexels.com:
unzip datasets/pexels/pexels_imgs.zip -d datasets/pexels/
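As a quick sanity check, you can load one of the unzipped samples and preprocess it for an ImageNet model. This is a minimal sketch assuming standard torchvision ImageNet preprocessing; the exact file names and extensions inside datasets/pexels/ may differ:

```python
from pathlib import Path

from PIL import Image
from torchvision import transforms

# Standard ImageNet preprocessing (assumed; match the transforms used in the tutorials)
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Pick any unzipped flamingo image (extension may differ, e.g. .png)
img_path = next(Path("datasets/pexels").rglob("*.jpg"))
sample = preprocess(Image.open(img_path).convert("RGB")).unsqueeze(0)
print(sample.shape)  # torch.Size([1, 3, 224, 224])
```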
To generate concept-based explanations using the CRP package for the ImageNet flamingo class, please run the Jupyter notebook tutorial_0_concept_explanation.ipynb.
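To get a feel for what the notebook does, the following is a minimal sketch of computing conditional concept relevances with the zennit-crp API. It assumes a torchvision VGG-16, the ImageNet flamingo class (index 130), and layer features.28 (the last convolutional layer) as an illustrative choice; the notebook may use different models and layers. It reuses the preprocessed `sample` tensor from the snippet above:

```python
import torch
from torchvision.models import vgg16
from crp.attribution import CondAttribution
from crp.concepts import ChannelConcept
from zennit.composites import EpsilonPlusFlat

model = vgg16(weights="IMAGENET1K_V1").eval()

attribution = CondAttribution(model)
composite = EpsilonPlusFlat()  # LRP rule composite from zennit

# Condition the explanation on the flamingo class (ImageNet index 130)
conditions = [{"y": [130]}]
layer = "features.28"  # last conv layer of VGG-16 (illustrative choice)

# `sample` is the preprocessed image from the snippet above
attr = attribution(sample.requires_grad_(), conditions, composite,
                   record_layer=[layer])

# Channel-wise (concept-wise) relevances in the recorded layer
cc = ChannelConcept()
rel_c = cc.attribute(attr.relevances[layer], abs_norm=True)
print(torch.argsort(rel_c[0], descending=True)[:5])  # most relevant concept ids
```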
To generate prototypical concept-based explanations (PCX) for the ImageNet flamingo class, please run the Jupyter notebook tutorial_1_pcx_explanation.ipynb.
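Conceptually, PCX summarizes the per-sample concept relevance vectors of a class into prototypes and scores how (un-)typical each prediction is. The sketch below illustrates this idea with a Gaussian Mixture Model, in line with the paper; it is not the repository's exact implementation, and the input file name is hypothetical:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# R: (n_samples, n_concepts) concept relevance vectors for one class,
# e.g. collected with the CRP snippet above (file name is hypothetical)
R = np.load("flamingo_concept_relevances.npy")

# Number of prototypes is a hyperparameter; 8 is an arbitrary choice here
gmm = GaussianMixture(n_components=8, covariance_type="full", random_state=0)
gmm.fit(R)

prototypes = gmm.means_        # each mean encodes a prototypical concept-use strategy
scores = gmm.score_samples(R)  # per-sample log-likelihood under the mixture

# Least prototypical samples: candidates for outliers or data quality issues
print(np.argsort(scores)[:5])
```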
We provide the code for all experiments in the paper here.
Please feel free to cite our work if you use it in your research:
@article{dreyer2023understanding,
  title={Understanding the (Extra-)Ordinary: Validating Deep Model Decisions with Prototypical Concept-based Explanations},
  author={Dreyer, Maximilian and Achtibat, Reduan and Samek, Wojciech and Lapuschkin, Sebastian},
  journal={arXiv preprint arXiv:2311.16681},
  year={2023}
}