If you use any methods, data, or code from this repository, please consider citing our paper:
@article{pewton2023dca,
title = {Dermoscopic dark corner artifacts removal: Friend or foe?},
journal = {Computer Methods and Programs in Biomedicine},
volume = {244},
pages = {107986},
year = {2024},
issn = {0169-2607},
doi = {https://doi.org/10.1016/j.cmpb.2023.107986},
author = {Samuel William Pewton and Bill Cassidy and Connah Kendrick and Moi Hoon Yap}
}
If you only require the dark corner artifact masks from these experiments for use with your own dataset, they can be downloaded from the following Kaggle dataset repository:
https://www.kaggle.com/datasets/mmucomputervision/dark-corner-artifact-masks-for-isic-images
- Datasets:
  - ISIC unbalanced dataset (duplicates removed): follow the guide at https://github.com/mmu-dermatology-research/isic_duplicate_removal_strategy and save this dataset within the `Data` directory.
  - Fitzpatrick 17k: follow the guide at https://github.com/mattgroh/fitzpatrick17k and save this dataset within the `Data` directory.
  - DCA Masks: use the "Generate all DCA masks" method at https://github.com/mmu-dermatology-research/dark_corner_artifact_removal and save the results within the `Data` directory as `./Data/DCA_Masks/`. (A quick check of the expected layout is sketched after this list.)
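The steps above assume everything sits under a single `Data` directory. The following minimal sketch (not part of the repository) checks that the expected folders are in place; apart from `DCA_Masks`, the folder names are placeholders and should be changed to match however you named the ISIC and Fitzpatrick 17k folders locally.

```python
# Minimal sanity check for the assumed Data/ layout. Only DCA_Masks is a name
# used elsewhere in this README; the other folder names are placeholders.
from pathlib import Path

expected = [
    Path("Data"),
    Path("Data/DCA_Masks"),        # output of the "Generate all DCA masks" step
    Path("Data/isic_balanced"),    # placeholder name for the ISIC dataset folder
    Path("Data/fitzpatrick17k"),   # placeholder name for the Fitzpatrick 17k folder
]

for folder in expected:
    print(f"{folder}: {'found' if folder.is_dir() else 'MISSING'}")
```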
- Models:
  - Download `EDSR_x4.pb` from https://github.com/Saafke/EDSR_Tensorflow and save it inside the `Models` directory as `./Models/EDSR_x4.pb`. (A loading sketch follows this list.)
- Download
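Once `EDSR_x4.pb` is in place, it can be loaded through OpenCV's `dnn_superres` module (shipped with `opencv-contrib-python`). This is only an illustrative sketch of loading and running the model; the image paths are examples, not paths used by the repository's modules.

```python
# Illustrative only: load the downloaded EDSR x4 model with OpenCV's
# dnn_superres module (requires opencv-contrib-python). Paths are examples.
import cv2

sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("./Models/EDSR_x4.pb")   # model file from the Saafke/EDSR_Tensorflow repo
sr.setModel("edsr", 4)                # algorithm name and upscaling factor

img = cv2.imread("./Data/example_image.jpg")   # placeholder input image
upscaled = sr.upsample(img)                    # 4x super-resolved output
cv2.imwrite("./Data/example_image_x4.png", upscaled)
```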
- Installations:
- Python 3.9.7
- Anaconda 4.11.0
- pandas 1.3.5
- numpy 1.21.5
- scikit-learn 1.0.2
- scikit-image 0.16.2
- Jupyter Notebook
- matplotlib 3.5.0
- OpenCV 4.5.5
- Pillow 8.4.0
- TensorFlow 2.9.0-dev20220203
- TensorFlow-GPU 2.9.0-dev20220203
- CUDA 11.2.1
- cuDNN 8.1
- Keras
- Open the "./Modules/create_balanced_dca_dataset.py" module
- Read through the module docstring carefully, changing filepaths as necessary
- Execute the module
- Train the models: train three InceptionResNetV2 networks, one on each of the training/validation sets, giving a model trained on the clean set, a model trained on the binary DCA set, and a model trained on the realistic DCA set. Refer to the paper for the network hyper-parameters. (A hedged training sketch follows this list.)
- Score the models: score each of the models on each of the individual test sets; this can be done with the model_performance.py module. (A metrics sketch follows this list.)
- Extract the Grad-CAM heatmaps from all images: run the extract_gradcam.ipynb notebook, ensuring that all of the required filepaths are uncommented. (A Grad-CAM sketch follows this list.)
- Calculate the brightness intensities for each of the test set images: modify the base image filepath in the split_intensity.py module to point at the root folder of the extracted heatmaps, then run the script to generate a .csv file of the internal and external brightness measures for each image. Once this is complete, run the calculate_intensity_averages.py module to calculate the averages across all images. (An intensity-split sketch follows this list.)
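For the training step, a minimal Keras sketch of an InceptionResNetV2 classifier is shown below as one possible starting point for the three runs (clean, binary DCA, realistic DCA). The input size, classifier head, optimiser, and epoch count are assumptions; refer to the paper for the hyper-parameters actually used.

```python
# Hedged sketch of an InceptionResNetV2 binary classifier in Keras.
# Head layers, optimiser, and learning rate are assumptions, not the
# paper's exact configuration.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionResNetV2

def build_model(input_shape=(299, 299, 3)):
    base = InceptionResNetV2(include_top=False, weights="imagenet",
                             input_shape=input_shape, pooling="avg")
    x = layers.Dropout(0.3)(base.output)              # assumed regularisation
    out = layers.Dense(1, activation="sigmoid")(x)    # benign/malignant head
    model = models.Model(base.input, out)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
                  loss="binary_crossentropy",
                  metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
    return model

# train_ds / val_ds would be pipelines built from one of the three
# training/validation splits (clean, binary DCA, realistic DCA):
# model = build_model()
# model.fit(train_ds, validation_data=val_ds, epochs=50)
```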
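For the scoring step, the sketch below shows how metrics of the kind reported in the table (Acc, TPR, TNR, F1, AUC, precision) can be computed with scikit-learn. It is not the repository's model_performance.py module, only an illustration; the decision threshold is an assumption.

```python
# Hedged illustration of scoring binary predictions with scikit-learn.
import numpy as np
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             precision_score, roc_auc_score)

def score_predictions(y_true, y_prob, threshold=0.5):
    """y_true: binary labels; y_prob: predicted probabilities for the positive class."""
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "Acc": accuracy_score(y_true, y_pred),
        "TPR": tp / (tp + fn),            # sensitivity / recall
        "TNR": tn / (tn + fp),            # specificity
        "F1": f1_score(y_true, y_pred),
        "AUC": roc_auc_score(y_true, y_prob),
        "Precision": precision_score(y_true, y_pred),
    }

# Example: score_predictions([0, 1, 1, 0], [0.2, 0.9, 0.6, 0.4])
```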
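For the heatmap step, a minimal Grad-CAM sketch in Keras/TensorFlow is shown below, in the spirit of the extract_gradcam.ipynb notebook (which is based on the Rosebrock tutorial cited in the references). The convolutional layer name and the single-sigmoid output are assumptions.

```python
# Hedged Grad-CAM sketch. "conv_7b" is assumed to be the last convolutional
# layer of InceptionResNetV2; adjust if your model differs.
import numpy as np
import tensorflow as tf

def grad_cam(model, image, conv_layer_name="conv_7b"):
    """image: preprocessed array of shape (1, H, W, 3); returns a [0, 1] heatmap."""
    grad_model = tf.keras.models.Model(
        model.inputs,
        [model.get_layer(conv_layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image)
        class_channel = preds[:, 0]                   # single sigmoid output assumed
    grads = tape.gradient(class_channel, conv_out)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))   # global-average-pooled gradients
    cam = tf.reduce_sum(conv_out[0] * weights, axis=-1)
    cam = tf.nn.relu(cam)                             # keep positive contributions only
    cam = cam / (tf.reduce_max(cam) + 1e-8)           # normalise to [0, 1]
    return cam.numpy()
```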
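For the intensity step, the sketch below splits a heatmap's mean brightness into an internal measure (inside a DCA mask) and an external measure (outside it). It mirrors the idea behind split_intensity.py but is not that module; the paths, the mask threshold, and the exact internal/external definition are assumptions.

```python
# Hedged sketch: mean heatmap brightness inside vs. outside a DCA mask.
import cv2
import numpy as np

def split_intensity(heatmap_path, mask_path):
    heatmap = cv2.imread(heatmap_path, cv2.IMREAD_GRAYSCALE).astype(float)
    mask = cv2.imread(mask_path, cv2.IMREAD_GRAYSCALE)
    mask = cv2.resize(mask, (heatmap.shape[1], heatmap.shape[0]))
    inside = mask > 127                                # assumed binary mask convention
    internal = heatmap[inside].mean() if inside.any() else 0.0
    external = heatmap[~inside].mean() if (~inside).any() else 0.0
    return internal, external

# Example (placeholder filenames):
# split_intensity("./heatmaps/ISIC_0000000.png", "./Data/DCA_Masks/ISIC_0000000.png")
```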
Full Model Performances on all individual testing sets:
| Model Used | Test Set | Acc | TPR | TNR | F1 | AUC | Precision (micro-average) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Clean | base-small | 0.59 | 0.86 | 0.32 | 0.68 | 0.63 | 0.56 |
| | ns-small | 0.59 | 0.86 | 0.31 | 0.68 | 0.62 | 0.56 |
| | telea-small | 0.59 | 0.86 | 0.31 | 0.68 | 0.62 | 0.56 |
| | base-medium | 0.57 | 0.91 | 0.24 | 0.68 | 0.64 | 0.54 |
| | ns-medium | 0.62 | 0.88 | 0.36 | 0.70 | 0.68 | 0.58 |
| | telea-medium | 0.62 | 0.87 | 0.36 | 0.69 | 0.68 | 0.58 |
| | base-large | 0.51 | 0.99 | 0.01 | 0.67 | 0.58 | 0.50 |
| | ns-large | 0.64 | 0.85 | 0.44 | 0.70 | 0.71 | 0.60 |
| | telea-large | 0.65 | 0.85 | 0.45 | 0.71 | 0.71 | 0.61 |
| | base-oth | 0.58 | 0.90 | 0.26 | 0.67 | 0.65 | 0.55 |
| | ns-oth | 0.58 | 0.87 | 0.29 | 0.67 | 0.66 | 0.55 |
| | telea-oth | 0.58 | 0.87 | 0.29 | 0.67 | 0.66 | 0.55 |
| Binary DCA | base-small | 0.61 | 0.90 | 0.33 | 0.70 | 0.67 | 0.57 |
| | ns-small | 0.61 | 0.89 | 0.33 | 0.70 | 0.67 | 0.57 |
| | telea-small | 0.61 | 0.89 | 0.33 | 0.70 | 0.67 | 0.57 |
| | base-medium | 0.63 | 0.94 | 0.31 | 0.72 | 0.68 | 0.58 |
| | ns-medium | 0.65 | 0.85 | 0.44 | 0.71 | 0.73 | 0.60 |
| | telea-medium | 0.65 | 0.85 | 0.45 | 0.70 | 0.73 | 0.61 |
| | base-large | 0.55 | 0.96 | 0.13 | 0.68 | 0.62 | 0.53 |
| | ns-large | 0.70 | 0.79 | 0.61 | 0.73 | 0.75 | 0.67 |
| | telea-large | 0.70 | 0.78 | 0.61 | 0.72 | 0.75 | 0.67 |
| | base-oth | 0.60 | 0.83 | 0.36 | 0.67 | 0.67 | 0.57 |
| | ns-oth | 0.60 | 0.82 | 0.39 | 0.67 | 0.68 | 0.57 |
| | telea-oth | 0.60 | 0.82 | 0.39 | 0.67 | 0.68 | 0.57 |
| Realistic DCA | base-small | 0.60 | 0.85 | 0.35 | 0.68 | 0.65 | 0.57 |
| | ns-small | 0.60 | 0.85 | 0.35 | 0.68 | 0.66 | 0.57 |
| | telea-small | 0.60 | 0.84 | 0.36 | 0.68 | 0.66 | 0.57 |
| | base-medium | 0.64 | 0.75 | 0.53 | 0.68 | 0.70 | 0.62 |
| | ns-medium | 0.66 | 0.84 | 0.48 | 0.71 | 0.72 | 0.62 |
| | telea-medium | 0.66 | 0.82 | 0.49 | 0.71 | 0.73 | 0.62 |
| | base-large | 0.60 | 0.39 | 0.80 | 0.49 | 0.63 | 0.66 |
| | ns-large | 0.66 | 0.70 | 0.63 | 0.68 | 0.74 | 0.65 |
| | telea-large | 0.67 | 0.69 | 0.65 | 0.67 | 0.74 | 0.66 |
| | base-oth | 0.58 | 0.81 | 0.35 | 0.66 | 0.65 | 0.55 |
| | ns-oth | 0.58 | 0.79 | 0.37 | 0.65 | 0.65 | 0.56 |
| | telea-oth | 0.58 | 0.79 | 0.37 | 0.65 | 0.65 | 0.56 |
@article{groh2021evaluating,
title = {Evaluating Deep Neural Networks Trained on Clinical Images in Dermatology with the Fitzpatrick 17k Dataset},
author = {Groh, Matthew and Harris, Caleb and Soenksen, Luis and Lau, Felix and Han, Rachel and Kim, Aerin and Koochek, Arash and Badri, Omar},
journal = {arXiv preprint arXiv:2104.09957},
year = {2021}
}
@article{cassidy2021isic,
title = {Analysis of the ISIC Image Datasets: Usage, Benchmarks and Recommendations},
author = {Bill Cassidy and Connah Kendrick and Andrzej Brodzicki and Joanna Jaworek-Korjakowska and Moi Hoon Yap},
journal = {Medical Image Analysis},
year = {2021},
issn = {1361-8415},
doi = {https://doi.org/10.1016/j.media.2021.102305},
url = {https://www.sciencedirect.com/science/article/pii/S1361841521003509}
}
@misc{rosebrock_2020,
title = {Grad-CAM: Visualize class activation maps with Keras, TensorFlow, and Deep Learning},
url = {https://pyimagesearch.com/2020/03/09/grad-cam-visualize-class-activation-maps-with-keras-tensorflow-and-deep-learning/},
journal = {PyImageSearch},
author = {Rosebrock, Adrian},
year = {2020},
month = {3},
note = {[Accessed: 10-03-2022]}
}
@article{scikit-image,
title = {scikit-image: image processing in {P}ython},
author = {van der Walt, {S}t\'efan and {S}ch\"onberger, {J}ohannes {L}. and
{Nunez-Iglesias}, {J}uan and {B}oulogne, {F}ran\c{c}ois and {W}arner,
{J}oshua {D}. and {Y}ager, {N}eil and {G}ouillart, {E}mmanuelle and
{Y}u, {T}ony and the scikit-image contributors},
year = {2014},
month = {6},
keywords = {Image processing, Reproducible research, Education,
Visualization, Open source, Python, Scientific programming},
volume = {2},
pages = {e453},
journal = {PeerJ},
issn = {2167-8359},
url = {https://doi.org/10.7717/peerj.453},
doi = {10.7717/peerj.453}
}
@article{scikit-learn,
title = {Scikit-learn: Machine Learning in {P}ython},
author = {Pedregosa, F. and Varoquaux, G. and Gramfort, A. and Michel, V.
and Thirion, B. and Grisel, O. and Blondel, M. and Prettenhofer, P.
and Weiss, R. and Dubourg, V. and Vanderplas, J. and Passos, A. and
Cournapeau, D. and Brucher, M. and Perrot, M. and Duchesnay, E.},
journal = {Journal of Machine Learning Research},
volume = {12},
pages = {2825--2830},
year = {2011}
}
@inproceedings{lim2017enhanced,
title = {Enhanced deep residual networks for single image super-resolution},
author = {Lim, Bee and Son, Sanghyun and Kim, Heewon and Nah, Seungjun and Mu Lee, Kyoung},
booktitle = {Proceedings of the IEEE conference on computer vision and pattern recognition workshops},
pages = {136--144},
year = {2017}
}