This work was presented at the Uncertainty & Robustness in Deep Learning workshop at ICML 2021. You can view our current arXiv paper here. Note: we later found that some of the normalization settings we used were not appropriate for certain models. Originally, all models used the same normalization values (standard ImageNet normalization). This is a workshop paper and preliminary work done entirely by undergraduates. We appreciate the kind feedback.
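As a minimal sketch of what "the same normalization values" means here: the widely used ImageNet channel statistics applied to every model's inputs, shown below with NumPy (the function name and array layout are illustrative, not taken from our code). Some pretrained checkpoints, e.g. certain ViT and MLP-Mixer releases, instead expect mean and std of 0.5 per channel, which is the kind of mismatch noted above.

```python
import numpy as np

# Standard ImageNet per-channel (RGB) normalization statistics.
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
IMAGENET_STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def normalize(batch: np.ndarray) -> np.ndarray:
    """Normalize an NHWC float batch with values in [0, 1]
    using ImageNet statistics (broadcast over the channel axis)."""
    return (batch - IMAGENET_MEAN) / IMAGENET_STD

# Example: a random batch of two 224x224 RGB images.
batch = np.random.rand(2, 224, 224, 3).astype(np.float32)
out = normalize(batch)
```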
We explored corruption robustness across different Convolutional Neural Networks, Vision Transformer architectures, and the MLP-Mixer.
Coming soon - view the appendix in the meantime!