- 2024.09.06 🌟 Since our paper has not yet been accepted, we have temporarily open-sourced our dataset CTI, the largest choroid neoplasm imaging dataset on Asian populations to date, on Google Drive.
- 2024.08.26 🌟 We have submitted our paper to Nature Communications; it is under review.
- 2024.06.03 🌟 We have open-sourced the checkpoint of MMCBM.
This is the official repository for A Concept-based Interpretable Model for the Diagnosis of Choroid Neoplasias using Multimodal Data
This study introduces a multimodal concept-based interpretable model (MMCBM) tailored to distinguish uveal melanoma (0.4-0.6 per million in Asians) from hemangioma and metastatic carcinoma, following clinical practice. We collected the largest dataset on Asians to date on choroid neoplasm imaging, encompassing over 750 patients from 2013 to 2019. Our model integrates domain expert insights from radiological reports and differentiates between three types of choroidal tumors, achieving an
To support the development of interpretable models for diagnosing choroidal tumors, we built the Choroid Tri-Modal Imaging (CTI) dataset, an anonymized, multimodal, and annotated collection of medical images from Beijing Tongren Hospital encompassing Fluorescence Angiography (FA), Indocyanine Green Angiography (ICGA), and Ocular Ultrasound (US) images. The construction of this dataset was approved by the Ethics Committee of Beijing Tongren Hospital.
License:
- The CTI dataset may be used for academic research only. Commercial use in any form is prohibited.
- The copyright of the dataset belongs to Beijing Tongren Hospital.
- Without prior approval, you may not distribute, publish, copy, disseminate, or modify CTI in whole or in part.
- You must strictly comply with the above restrictions.
- Create a conda environment from the `environment.yml` file:

  ```shell
  conda env create -f environment.yml
  ```

- Install the pip dependencies from the `requirements.txt` file:

  ```shell
  pip install -r requirements.txt
  ```
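After installing, a quick sanity check can confirm that key dependencies resolve. The package names below are assumptions based on a typical PyTorch/Gradio stack; adjust them to match the actual `requirements.txt`.

```python
# Report which of the expected packages are importable in the active environment.
import importlib.util


def missing_packages(names):
    """Return the subset of module names that cannot be found by the import system."""
    return [n for n in names if importlib.util.find_spec(n) is None]


if __name__ == "__main__":
    required = ["torch", "torchvision", "gradio", "openai"]  # assumed package list
    absent = missing_packages(required)
    if absent:
        print("Missing packages:", ", ".join(absent))
    else:
        print("All key packages found.")
```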
📍 Download

Download the checkpoints from the model checkpoint link and put them in the `work_dir/result` directory. After downloading, the directory structure should look like this:

- backbone: pretrained encoders trained on the same dataset split as MMCBM. Put the backbone in `work_dir/result/Efficientb0_SCLS_attnscls_CrossEntropy_32`.
- MMCBM (optional): the checkpoint of the MMCBM model. Put the MMCBM in `work_dir/result/CAV_m2CBM_sigmoid_C0.1CrossEntropy_32_report_strict_aow_zero_MM_max`.
📍 ⚙️ Configuration

- ChatGPT: fill in the `openai_info` section in `params.py` with your own `api_key` and `api_base` from OpenAI.
- Tencent Translation (optional): fill in the `tencent_info` section in `params.py` with your own `app_id` and `app_key` from Tencent Cloud.
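The two sections might look like the sketch below, assuming they are plain dictionaries; the exact structure in the repository's `params.py` may differ, and the values shown are placeholders.

```python
# Hypothetical credential sections for params.py (placeholder values).
openai_info = {
    "api_key": "sk-...",                          # your OpenAI API key
    "api_base": "https://api.openai.com/v1",      # or a compatible endpoint
}

tencent_info = {                                  # optional, for Tencent Translation
    "app_id": "your-app-id",
    "app_key": "your-app-key",
}
```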
To train the MMCBM, first download our CTI dataset and put it in the `<work dir>/data` directory, then download the pre-trained backbone and put it in the `<work dir>/result/Efficientb0_SCLS_attnscls_CrossEntropy_32` directory. Then run the following command in the terminal:

```shell
python train_CAV_CBM.py --name <dirname of experiment> --backbone <backbone dir> --epoch 200 --lr 1e-3 --device 0 --bz 8 --k 0
```

For five-fold cross-validation, run the following command in the terminal:

```shell
python execute_concept.py --name <dirname of experiment> --bz 8 --backbone <backbone dir> --clip_name cav --device 1,2,3,4,5 --k 0,1,2,3,4,5 --lr 0.001
```
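If you prefer to drive the folds yourself rather than through `execute_concept.py`, the per-fold command can be assembled programmatically. This is a sketch only; the flag names mirror the commands above, and the experiment/backbone names are placeholders.

```python
# Build the training command for a single fold, mirroring the flags shown above.
def fold_command(name, backbone, fold, lr="1e-3", bz=8, device=0):
    """Return the shell command string for one cross-validation fold."""
    return (
        f"python train_CAV_CBM.py --name {name} --backbone {backbone} "
        f"--epoch 200 --lr {lr} --device {device} --bz {bz} --k {fold}"
    )


# One command per fold; these could be run with subprocess.run(cmd, shell=True).
commands = [fold_command("my_experiment", "backbone_dir", k) for k in range(5)]
```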
📍 Web Interface

- Web interface using Gradio (recommended): our web interface is available at Interface Link.
- You can also run the website locally with the following command in the terminal:

  ```shell
  python web_interface.py
  ```

  then open `http://127.0.0.1:7860/` in your browser to access the website.
- We also provide a human evaluation website, used by doctors to diagnose the disease based on the model's predictions.
  - For the blackbox model:

    ```shell
    python web_human_blackbox_evaluation.py
    ```

  - For the MMCBM model:

    ```shell
    python web_human_mmcbm_evaluation.py
    ```

  then open `http://127.0.0.1:7890` in your browser to access the website.
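When scripting against a locally launched interface, it can be useful to wait until the server is actually up before opening it. A minimal sketch, assuming the default port 7860 used above:

```python
# Poll a local URL until it responds with HTTP 200 or the timeout expires.
import time
import urllib.error
import urllib.request


def wait_for_server(url="http://127.0.0.1:7860/", timeout=30.0):
    """Return True once the server answers with HTTP 200, False on timeout."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            time.sleep(0.5)  # server not ready yet; retry shortly
    return False
```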
📍 Command Line

We also provide a script to run model inference from the command line, without Gradio:

```shell
python mmcmb_inference.py
```
If you find our work helpful for your research, please consider citing our work.
```bibtex
@misc{wu2024conceptbasedinterpretablemodeldiagnosis,
      title={A Concept-based Interpretable Model for the Diagnosis of Choroid Neoplasias using Multimodal Data},
      author={Yifan Wu and Yang Liu and Yue Yang and Michael S. Yao and Wenli Yang and Xuehui Shi and Lihong Yang and Dongjun Li and Yueming Liu and James C. Gee and Xuan Yang and Wenbin Wei and Shi Gu},
      year={2024},
      eprint={2403.05606},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2403.05606},
}
```