Official repository for "On Adversarial Training without Perturbing all Examples", Accepted at ICLR 2024.
Paper PDF, Reviews
Poster: poster.pdf
Dependencies:
- python 3.8
- pytorch 1.6.0
- autoattack
- tensorboard
- apex
All our experiments are specified by YAML config files, which can be found in the directory `config/`.
To train on multiple GPUs, make sure to update the value of the config key `train_gpu`, e.g. `train_gpu: [0,1,2,3]`.
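A config file in `config/` might look roughly like the sketch below. Only `train_gpu` is documented above; all other keys and values are hypothetical placeholders, not the repo's actual schema:

```yaml
# Hypothetical config sketch -- only train_gpu is a documented key.
arch: resnet50           # placeholder
epochs: 100              # placeholder
train_gpu: [0, 1, 2, 3]  # GPUs used for training (documented key)
```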
To train, e.g., ESAT on ImageNet-200, run `bash config/imagenet200/weightedincreasing2to10_esat_pgd7_decreasing_entropy/run_all.sh`. This trains 10 models, according to the config files defined at the same location: `config/imagenet200/weightedincreasing2to10_esat_pgd7_decreasing_entropy/*.yaml`.
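The one-config-file-per-model layout suggests that the runner script loops over the YAML files, roughly as sketched below. The `python train.py --config` entry point is an assumption for illustration, not necessarily the script `run_all.sh` actually invokes:

```shell
# Hypothetical dry run: print one training command per config file.
# The real entry point used by run_all.sh may differ.
CFG_DIR=config/imagenet200/weightedincreasing2to10_esat_pgd7_decreasing_entropy
for cfg in "$CFG_DIR"/*.yaml; do
    [ -e "$cfg" ] || continue              # skip if the glob matched nothing
    echo "python train.py --config $cfg"   # replace echo to actually launch
done
```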
After training completes, adversarial robustness is evaluated via AutoAttack. To start the evaluation, call `bash config/imagenet200/weightedincreasing2to10_esat_pgd7_decreasing_entropy/eval_all.sh`.
The training scripts for the Wide-ResNet results (discussed here) can be found in `config/cifar10/wrn70-16_esat_pgd7_decreasing_entropy/`.
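The `pgd7` in the config paths refers to 7-step PGD, and the paper's title describes adversarial training that does not perturb every example. The sketch below is an illustrative stand-in, not the authors' implementation: standard L-inf PGD-7 applied to only a fraction of each batch, with the model, epsilon, and step size as placeholder choices:

```python
# Illustrative sketch (not the authors' code): L-inf PGD with 7 steps,
# applied to only a subset of each batch, in the spirit of adversarial
# training without perturbing all examples. eps/alpha are placeholders.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=7):
    """Return adversarial examples within the L-inf eps ball around x."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()              # ascent step
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project
            x_adv = x_adv.clamp(0, 1)                        # valid pixels
    return x_adv.detach()

def subset_adv_batch(model, x, y, frac=0.5):
    """Perturb only the first `frac` of the batch; the rest stays clean."""
    k = int(frac * x.size(0))
    x_out = x.clone()
    if k > 0:
        x_out[:k] = pgd_attack(model, x[:k], y[:k])
    return x_out
```

Which examples are selected for perturbation (and how the fraction evolves, e.g. "weightedincreasing2to10" / "decreasing_entropy") is determined by the method in the paper; the fixed first-`frac` split here is purely for illustration.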