boldreams is a suite of tools for training neural encoding models on the Natural Scenes Dataset (NSD). The encoding models we work with are built on a pre-trained visual or text backbone, and a particular focus is on the interpretability and explainability (xAI) of these models.
After downloading the dataset, we need to define some paths as environment variables; these can be found in `/configs/config.py`. Our approach is to create a preprocessed dataset that contains only the visual-cortex voxels. This is done with the `nsdhandling` class; sample usage can be found in `/scripts/make_preprocessed_data.py`.
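A minimal sketch of this step, assuming hypothetical environment-variable and class names (the actual names live in `/configs/config.py` and the `nsdhandling` module, and the real workflow is in `/scripts/make_preprocessed_data.py`):

```python
# Sketch of the preprocessing step; variable and class names are illustrative.
import os

# Paths defined as environment variables (the actual variable names are
# listed in /configs/config.py).
os.environ["NSD_ROOT"] = "/data/nsd"        # raw NSD download
os.environ["PREP_ROOT"] = "/data/nsd_prep"  # preprocessed output

# Hypothetical call into the nsdhandling class: keep only the visual-cortex
# voxels for one subject and write them to the preprocessed dataset.
# from boldreams.nsdhandling import NsdData
# data = NsdData(subject="subj01")
# data.preprocess(output_dir=os.environ["PREP_ROOT"])
```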
Training is done with Lightning and is configured with `.yaml` files. Various examples of such config files can be found in `/configs/`; a simple example is `alexnet.yaml`. Here one can define various training parameters, for example which layers to use for feature extraction (`LAYERS_TO_EXTRACT`) and the percentage of filters to use per layer (`PERCENT_OF_CHANNELS`). A typical training script can be found in `/scripts/training_script.py`, and predictions are relatively straightforward to generate with `/scripts/prediction_script.py`.
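As a rough sketch, a training run reads a config and hands it to a Lightning trainer. The module and data-module names below are hypothetical stand-ins; the real entry point is `/scripts/training_script.py`:

```python
# Sketch of a training run; EncoderModule / NSDDataModule are hypothetical names.
import yaml
import pytorch_lightning as pl

with open("configs/alexnet.yaml") as f:
    cfg = yaml.safe_load(f)

# Keys mentioned in this README (the full schema lives in the example configs):
layers = cfg["LAYERS_TO_EXTRACT"]         # backbone layers used for feature extraction
channel_pct = cfg["PERCENT_OF_CHANNELS"]  # percentage of filters kept per layer

# model = EncoderModule(backbone="alexnet", layers=layers, channel_pct=channel_pct)
# dm = NSDDataModule(subject="subj01")
# trainer = pl.Trainer(max_epochs=50, accelerator="auto")
# trainer.fit(model, datamodule=dm)
```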
We adapt the lucent package to generate dreams with objective functions involving brain regions and voxels. One natural objective is to maximally activate a particular ROI, although any objective function can be defined. In `/scripts/dreams/abstract.py` we show typical usage, with an objective that maximally activates an ROI as well as an objective that promotes diversity among the dreams.
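Conceptually, dreaming amounts to gradient ascent on the input image with respect to the predicted ROI activation. The sketch below is plain PyTorch rather than the adapted lucent API used in the repo, and the `encoder`/`roi_mask` interface is assumed for illustration:

```python
# Conceptual sketch of an ROI-maximization "dream" with an optional diversity term.
import torch

def dream(encoder, roi_mask, steps=256, lr=0.05, diversity_weight=0.0, batch=4):
    """encoder: maps images (B, 3, H, W) -> predicted voxel activations (B, V).
    roi_mask: boolean tensor (V,) selecting the ROI's voxels."""
    imgs = torch.randn(batch, 3, 224, 224, requires_grad=True)
    opt = torch.optim.Adam([imgs], lr=lr)
    for _ in range(steps):
        acts = encoder(imgs)                # (B, V) predicted activations
        roi_obj = acts[:, roi_mask].mean()  # mean activation over the ROI
        loss = -roi_obj                     # ascend on the ROI activation
        if diversity_weight > 0:
            # crude diversity term: penalize pixel-level cosine similarity
            # between the dreams in the batch
            flat = torch.nn.functional.normalize(imgs.flatten(1), dim=1)
            sim = flat @ flat.t()
            off_diag = sim - torch.diag(torch.diag(sim))
            loss = loss + diversity_weight * off_diag.mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return imgs.detach()
```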
Here is an example of the dreams for `subj01` that maximize activation in the face-related areas:
Each column shows the backbone used; for example, `Alexnet-25-False` denotes the AlexNet backbone with 25% of the filters per layer and fine-tuning turned off. The second row shows a word cloud of the top nouns, with similarity scores predicted using CLIP.
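A sketch of how such noun scores can be computed with CLIP; the image path, candidate nouns, and model choice below are purely illustrative:

```python
# Score candidate nouns against a dream image with CLIP (illustrative example).
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("RN50x4", device=device)

image = preprocess(Image.open("dream_faces_subj01.png")).unsqueeze(0).to(device)
nouns = ["face", "person", "smile", "building", "dog"]  # hypothetical candidates
text = clip.tokenize(nouns).to(device)

with torch.no_grad():
    img_feat = model.encode_image(image)
    txt_feat = model.encode_text(text)
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
    scores = (img_feat @ txt_feat.T).squeeze(0)  # cosine similarity per noun

for noun, score in sorted(zip(nouns, scores.tolist()), key=lambda x: -x[1]):
    print(f"{noun}: {score:.3f}")
```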
Here is an example for the same subject, with dreams that maximize activation in the place-related areas:
We see that the CLIP backbone (RN50x4) creates the most elaborate dreams.
Here is an example with the CLIP backbone (RN50x4) and a small diversity term in the objective:
We can also generate dreams for retinotopic eccentricity ROIs:
We use the integrated gradients approach to calculate saliency maps for each ROI. Here are some examples for the faces, bodies, and places ROIs, respectively:
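A minimal sketch of ROI saliency with integrated gradients, using captum here purely for illustration (the repo's own implementation may differ); the `encoder` and `roi_mask` placeholders stand in for a trained encoding model and an ROI's voxel mask:

```python
# Integrated-gradients saliency for an ROI, sketched with captum.
import torch
from captum.attr import IntegratedGradients

# Placeholders: in practice `encoder` is a trained encoding model returning
# predicted voxel activations, and `roi_mask` selects an ROI's voxels.
encoder = lambda x: x.flatten(1)[:, :100]          # dummy stand-in, output (B, V=100)
roi_mask = torch.zeros(100, dtype=torch.bool)
roi_mask[:10] = True

def roi_response(images):
    """Forward function: mean predicted activation over the ROI's voxels."""
    acts = encoder(images)
    return acts[:, roi_mask].mean(dim=1)

stimulus = torch.rand(1, 3, 224, 224)              # image to explain
ig = IntegratedGradients(roi_response)
attributions = ig.attribute(stimulus, baselines=torch.zeros_like(stimulus), n_steps=64)
saliency = attributions.abs().sum(dim=1)           # collapse channels -> (1, H, W) map
```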