We recommend creating a conda environment for running the whole DEPICTER pipeline:
```bash
conda env create -n depicter_env -f environment.yml
```
To activate the environment:
```bash
conda activate depicter_env
```
To use the interactive tool, you also need to install TissUUmaps 3.0. Please follow the instructions to install it on Windows, Linux or macOS.
To install the DEPICTER plugin itself, start TissUUmaps (either from the executable file or from the terminal), click Plugins on the top left, select Add plugin, tick the DEPICTER box and click OK. After restarting TissUUmaps, DEPICTER will appear in the Plugins tab.
The first step is to divide the whole slide images into patches and save them in TIF format under `/path/to/saving`. The images should be stored under `/path/to/images` and their corresponding masks under `/path/to/masks`. Depending on the naming convention used for them, minor modifications to the `extract_patches.py` code might be needed.
The command below is an example run extracting 224 × 224 patches from the second level of the image pyramid, with no overlap, keeping only patches where the mask covers at least 90% of the area.
```bash
python extract_patches.py \
    --slide_path='/path/to/images' \
    --mask_path='/path/to/masks' \
    --level=2 \
    --patch_shape=224 \
    --overlap=0 \
    --mask_th=0.9 \
    --save_path='/path/to/saving'
```
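For reference, a minimal sketch of what such an extraction loop might look like using OpenSlide is shown below. This is an illustration of what the parameters control, not the actual contents of `extract_patches.py`; the function name, the mask format (a binary image aligned to the chosen pyramid level) and the iteration logic are assumptions.

```python
# Hypothetical sketch of a patch-extraction loop, NOT the actual
# extract_patches.py. Assumes OpenSlide-readable slides and binary masks
# already aligned to the chosen pyramid level.
import numpy as np
import openslide
from PIL import Image

def iter_patches(slide_file, mask_file, level=2, patch_shape=224,
                 overlap=0, mask_th=0.9):
    slide = openslide.OpenSlide(slide_file)
    mask = np.array(Image.open(mask_file)) > 0          # binary tissue/tumor mask
    width, height = slide.level_dimensions[level]
    scale = slide.level_downsamples[level]              # factor to level-0 coordinates
    step = patch_shape - overlap
    for y in range(0, height - patch_shape + 1, step):
        for x in range(0, width - patch_shape + 1, step):
            # keep only patches where the mask covers at least mask_th of the area
            if mask[y:y + patch_shape, x:x + patch_shape].mean() < mask_th:
                continue
            # read_region takes the top-left corner in level-0 coordinates
            patch = slide.read_region((int(x * scale), int(y * scale)),
                                      level, (patch_shape, patch_shape)).convert('RGB')
            yield (x, y), patch
```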
Note: networks pretrained on ImageNet, or publicly available pretrained networks such as the one proposed by Ciga et al., 2021 available here, already showed great results with DEPICTER. We therefore recommend this basic approach before trying to train your own model (advanced block explained afterwards).
After the images are divided into patches, we can generate their embeddings. This will produce, among other files, the `[experiment].h5ad` file that will be the input to the DEPICTER plugin in TissUUmaps.
The command below is an example run generating patch embeddings using a pretrained model with the ResNet18 architecture. To use the default weights pretrained on ImageNet instead, include the `--imagenet` argument in place of `--no_imagenet` and `--model_path`.
```bash
python generate_embeddings.py \
    --save_path='/path/to/saving' \
    --architecture='resnet18' \
    --experiment_name='experiment' \
    --no_imagenet \
    --model_path='/path/to/pretrained/model.ckpt' \
    --num_workers=32
```
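Once the script finishes, you can sanity-check the resulting file with anndata. In the sketch below, the coordinates under `obsm['spatial']` match what the plugin reads (see the next section), while storing the embeddings in `.X` is an assumption for illustration:

```python
# Quick sanity check of the generated file; storing the embeddings in .X is
# an assumption, while obsm['spatial'] matches the coordinates the plugin reads.
import anndata as ad

adata = ad.read_h5ad('/path/to/saving/experiment.h5ad')
print(adata.X.shape)               # (n_patches, embedding_dim), e.g. 512 for ResNet18
print(adata.obsm['spatial'][:5])   # per-patch (x, y) coordinates used by TissUUmaps
```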
- Open the image you want to annotate in TissUUmaps by dragging and dropping it.
- Click on the plus (+) sign on the left panel and select the `[experiment].h5ad` file created for the corresponding image.
- Select `/obsm/spatial;0` as X coordinate and `/obsm/spatial;1` as Y coordinate. Click Update view.
- On the top left, select Plugins and DEPICTER. You may need to adjust the Marker size on the top right.
- Place the Positive class (usually cancer) and Negative class seeds either by clicking on the markers or by holding shift and drawing around them.
- Now you have two options:
  - Click Run Seeded Iterative Clustering. Correct and repeat as needed.
  - Based on where the positive seeds ended up in the feature space, hold shift and draw around the markers in the feature space. Click Feature space annotation to complete the rest of the annotations with the negative class.
- When you are happy with the results, they can be downloaded as a CSV containing the (X, Y) coordinates, the DEPICTER parameters and the final class.
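As a minimal sketch of downstream use, the exported CSV can be loaded with pandas; the file name and column names below are hypothetical placeholders and should be adapted to the actual export:

```python
# Minimal sketch for post-processing the exported annotations; the file and
# column names ('x', 'y', 'class') are hypothetical placeholders.
import pandas as pd

df = pd.read_csv('depicter_annotations.csv')
positive = df[df['class'] == 1]    # hypothetical encoding of the positive class
print(f'{len(positive)} of {len(df)} patches annotated as positive')
```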
We used lightly for pretraining self-supervised models on each dataset. You can find the installation instructions here.
Modifying lightly's SimCLR tutorial, `pretrain_simclr.py` contains the hyperparameters used for fine-tuning every model, starting from the previously mentioned model by Ciga et al., 2021. Note that we additionally used the stainlib library for H&E-specific augmentations. The resulting collate function:
```python
collate_fn = ImageCollateFunction(input_size=224,
                                  min_scale=0.25,
                                  vf_prob=0.5,
                                  hf_prob=0.5,
                                  rr_prob=0.5,
                                  hed_thresh=0.3)
```
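For context, a collate function like this would typically be wired into lightly's standard data pipeline as sketched below, following the SimCLR tutorial. The dataset path and batch size are placeholders, and `hed_thresh` comes from our stainlib modification rather than stock lightly:

```python
# Sketch of plugging the collate function into lightly's data pipeline,
# following the SimCLR tutorial; paths and batch size are placeholders.
import torch
from lightly.data import LightlyDataset

dataset = LightlyDataset(input_dir='/path/to/saving')   # folder of extracted patches
dataloader = torch.utils.data.DataLoader(
    dataset,
    batch_size=256,
    shuffle=True,
    collate_fn=collate_fn,    # yields two augmented views per patch for SimCLR
    num_workers=8,
)
```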
The three datasets used for testing DEPICTER are publicly available:
| Dataset | Grand challenge link | Eligible cases (train set) | Evaluated cases |
|---|---|---|---|
| CAMELYON17 | https://camelyon17.grand-challenge.org/ | 100 | 17 |
| ACDC | https://acdc-lunghp.grand-challenge.org/ | 150 | 104 |
| DigestPath | https://digestpath2019.grand-challenge.org/ | 250 | 196 |
Chelebian, E., Avenel, C., Ciompi, F., & Wählby, C. (2024). DEPICTER: Deep representation clustering for histology annotation. Computers in Biology and Medicine, 108026. https://doi.org/10.1016/j.compbiomed.2024.108026
```bibtex
@article{chelebian2024depicter,
  title={DEPICTER: Deep representation clustering for histology annotation},
  author={Chelebian, Eduard and Avenel, Christophe and Ciompi, Francesco and W{\"a}hlby, Carolina},
  journal={Computers in Biology and Medicine},
  pages={108026},
  year={2024},
  publisher={Elsevier}
}
```