PyTorch code for our paper: "On Success and Simplicity: A Second Look at Transferable Targeted Attacks".
Zhengyu Zhao, Zhuoran Liu, Martha Larson. NeurIPS 2021.
We demonstrate that conventional, simple iterative attacks can actually achieve even higher targeted transferability than the current state-of-the-art, resource-intensive attacks. The key is to run enough iterations to ensure convergence and to replace the widely used Cross-Entropy (CE) loss with a simpler Logit loss, which avoids the decreasing-gradient problem.
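As a quick illustration of this key change, below is a minimal sketch of the two targeted losses (the function names are ours for illustration, not this repository's API):

```python
import torch
import torch.nn.functional as F

def ce_target_loss(logits, target):
    # Standard targeted Cross-Entropy loss: its gradient shrinks as the
    # target probability approaches 1 (the decreasing-gradient problem).
    return F.cross_entropy(logits, target)

def logit_loss(logits, target):
    # Logit loss: directly maximize the target-class logit. Since the logit
    # is unbounded, the gradient does not vanish over many iterations.
    return -logits.gather(1, target.unsqueeze(1)).squeeze(1).mean()
```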
torch>=1.7.0; torchvision>=0.8.1; tqdm>=4.31.1; pillow>=7.0.0; matplotlib>=3.2.2; numpy>=1.18.1
The 1000 images from the NIPS 2017 ImageNet-Compatible dataset are provided in the folder dataset/images, along with their metadata in dataset/images.csv. More details about this dataset can be found in its official repository.
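A minimal loading sketch follows, assuming the metadata CSV provides ImageId, TrueLabel, and TargetClass columns with 1-indexed labels and that each image is stored as {ImageId}.png (check dataset/images.csv and dataset/images for the exact format):

```python
import csv
from PIL import Image
from torchvision import transforms

# Assumed column names and filename pattern; verify against dataset/images.csv.
to_tensor = transforms.ToTensor()

with open('dataset/images.csv') as f:
    for row in csv.DictReader(f):
        img = to_tensor(Image.open(f"dataset/images/{row['ImageId']}.png").convert('RGB'))
        true_label = int(row['TrueLabel']) - 1      # labels in this CSV are commonly 1-indexed
        target_label = int(row['TargetClass']) - 1  # target class for the targeted attack
```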
We evaluated three simple transferable targeted attacks (CE, Po+Trip, and Logit) in the following six transfer scenarios. Unless specified otherwise, all attacks are integrated with TI, MI, and DI and run for 300 iterations to ensure convergence, under an L∞ perturbation bound of ε=16.
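Before the per-scenario results, the sketch below illustrates this shared setup (TI, MI, DI, 300 iterations, L∞=16, here combined with the Logit loss). The helper names and the DI/TI hyperparameters (resize range, transform probability, kernel size) are assumptions for illustration, not necessarily the exact values used in this repository's code:

```python
import numpy as np
import torch
import torch.nn.functional as F

def gaussian_kernel(kernlen=5, sigma=3.0):
    # TI: Gaussian kernel for gradient smoothing (kernel size/sigma are assumptions).
    x = np.arange(kernlen) - (kernlen - 1) / 2.0
    k1d = np.exp(-x ** 2 / (2 * sigma ** 2))
    k2d = np.outer(k1d, k1d)
    return torch.tensor(k2d / k2d.sum(), dtype=torch.float32)

def di_transform(x, resize=330, p=0.7):
    # DI: random resize-and-pad applied with probability p (sizes/probability are assumptions).
    if torch.rand(1).item() > p:
        return x
    rnd = torch.randint(x.shape[-1], resize + 1, (1,)).item()
    x = F.interpolate(x, size=rnd, mode='nearest')
    pad = resize - rnd
    left, top = torch.randint(0, pad + 1, (2,)).tolist()
    return F.pad(x, [left, pad - left, top, pad - top])

def targeted_attack(model, x, target, eps=16 / 255, alpha=2 / 255, iters=300):
    # Assumes x is in [0, 1] and the model handles its own input normalization.
    kernel = gaussian_kernel().expand(3, 1, -1, -1).to(x.device)
    delta = torch.zeros_like(x, requires_grad=True)
    momentum = torch.zeros_like(x)
    for _ in range(iters):
        logits = model(di_transform(x + delta))
        loss = -logits.gather(1, target.unsqueeze(1)).sum()  # Logit loss (targeted)
        loss.backward()
        grad = delta.grad.detach()
        # TI: smooth the gradient with the Gaussian kernel (depthwise convolution).
        grad = F.conv2d(grad, kernel, padding=kernel.shape[-1] // 2, groups=3)
        # MI: accumulate L1-normalized gradients (decay factor 1.0).
        momentum = momentum + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        # Descend on the loss, then project back into the L-infinity ball and [0, 1].
        delta.data = (delta.data - alpha * momentum.sign()).clamp(-eps, eps)
        delta.data = (x + delta.data).clamp(0, 1) - x
        delta.grad.zero_()
    return (x + delta).detach()
```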