Adversarial Transferability refers to the ability of adversarial examples designed for one model to successfully deceive a different model.
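Conceptually, measuring transferability only requires two models: a white-box surrogate used to craft the perturbation and a held-out target used for evaluation. The sketch below illustrates this with a one-step FGSM attack in PyTorch; the model pair (ResNet-50 surrogate, VGG-16 target), the attack, the perturbation budget, and the random placeholder batch are illustrative assumptions (a recent torchvision with the `weights` API is also assumed), not the setup of any particular paper listed here.

```python
# Minimal illustrative sketch: craft an adversarial example on a white-box
# surrogate model and check whether it also fools an independently trained
# target model. All concrete choices here are assumptions for illustration.
import torch
import torch.nn.functional as F
import torchvision.models as models

surrogate = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
target = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()

def fgsm(model, x, y, eps=8 / 255):
    # One-step FGSM on the surrogate: perturb x in the direction that
    # increases the classification loss, then clip back to the valid range.
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

# Placeholder batch; in practice use real, properly preprocessed images/labels.
x = torch.rand(4, 3, 224, 224)
y = torch.randint(0, 1000, (4,))

x_adv = fgsm(surrogate, x, y)

# The attack "transfers" if the target model, which never exposed its
# gradients to the attacker, is also fooled on the perturbed inputs.
with torch.no_grad():
    transfer_fool_rate = (target(x_adv).argmax(dim=1) != y).float().mean().item()
print(f"Fool rate on the unseen target model: {transfer_fool_rate:.2%}")
```
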
This repo lists relevant papers summarized in our survey paper.
A Survey on Transferability of Adversarial Examples across Deep Neural Networks. Jindong Gu, Xiaojun Jia, Pau de Jorge, Wenqain Yu, Xinwei Liu, Avery Ma, Yuan Xun, Anjun Hu, Ashkan Khakzar, Zhijiang Li, Xiaochun Cao, Philip Torr. Preprint 2023. [pdf]
If you find our paper and repo helpful to your research, please cite the following paper:

```bibtex
@article{gu2023survey_atrans,
  title   = {A Survey on Transferability of Adversarial Examples across Deep Neural Networks},
  author  = {Gu, Jindong and Jia, Xiaojun and de Jorge, Pau and Yu, Wenqain and Liu, Xinwei and Ma, Avery and Xun, Yuan and Hu, Anjun and Khakzar, Ashkan and Li, Zhijiang and Cao, Xiaochun and Torr, Philip},
  journal = {arXiv preprint arXiv:2310.17626},
  year    = {2023}
}
```
Each paper is marked with one or more tags indicating the categories it belongs to, e.g., optimization-based transferability-enhancing approaches.

- Transferable Adversarial Attacks for Object Detection Using Object-Aware Significant Feature Distortion. [pdf] [code]
  Xinlong Ding, Jiansheng Chen, Hongwei Yu, Yu Shang, Yining Qin, Huimin Ma. AAAI, 2023.
- Adv-Diffusion: Imperceptible Adversarial Face Identity Attack via Latent Diffusion Model. [pdf] [code]
  Decheng Liu, Xijun Wang, Chunlei Peng, Nannan Wang, Ruimin Hu, Xinbo Gao. AAAI, 2023.
- Towards Transferable Adversarial Attacks with Centralized Perturbation. [pdf]
  Shangbo Wu, Yu-an Tan, Yajie Wang, Ruinan Ma, Wencong Ma, Yuanzhang Li. AAAI, 2023.
- LRS: Enhancing Adversarial Transferability through Lipschitz Regularized Surrogate. [pdf] [code]
  Tao Wu, Tie Luo, Donald C. Wunsch II. AAAI, 2023.
- FACL-Attack: Frequency-Aware Contrastive Learning for Transferable Adversarial Attacks. [pdf]
  Hunmin Yang, Jongoh Jeong, Kuk-Jin Yoon. AAAI, 2023.
- AGS: Affordable and Generalizable Substitute Training for Transferable Adversarial Attack. [pdf] [code]
  Ruikui Wang, Yuanfang Guo, Yunhong Wang. AAAI, 2023.
- Improving the Adversarial Transferability of Vision Transformers with Virtual Dense Connection. [pdf]
  Jianping Zhang, Yizhan Huang, Zhuoer Xu, Weibin Wu, Michael R. Lyu. AAAI, 2023.
- Making Substitute Models More Bayesian Can Enhance Transferability of Adversarial Examples. [pdf] [code]
  Qizhang Li, Yiwen Guo, Wangmeng Zuo, Hao Chen. ICLR, 2023.
- Diffusion Models for Imperceptible and Transferable Adversarial Attack. [pdf] [code]
  Jianqi Chen, Hao Chen, Keyan Chen, Yilan Zhang, Zhengxia Zou, Zhenwei Shi. arXiv, 2023.
- An Adaptive Model Ensemble Adversarial Attack for Boosting Adversarial Transferability. [pdf] [code]
  Bin Chen, Jiali Yin, Shukai Chen, Bohao Chen, Ximeng Liu. ICCV, 2023.
- T-SEA: Transfer-Based Self-Ensemble Attack on Object Detection. [pdf] [code]
  Hao Huang, Ziyan Chen, Huanran Chen, Yongtao Wang, Kevin Zhang. CVPR, 2023.
- Transferable Adversarial Attack for Both Vision Transformers and Convolutional Networks via Momentum Integrated Gradients. [pdf]
  Wenshuo Ma, Yidong Li, Xiaofeng Jia, Wei Xu. ICCV, 2023.
- LEA2: A Lightweight Ensemble Adversarial Attack via Non-overlapping Vulnerable Frequency Regions. [pdf]
  Yaguan Qian, Shuke He, Chenyu Zhao, Jiaqiang Sha, Wei Wang, Bin Wang. ICCV, 2023.
- Boosting Adversarial Transferability via Gradient Relevance Attack. [pdf] [code]
  Hegui Zhu, Yuchen Ren, Xiaoyan Sui, Lianping Yang, Wuming Jiang. ICCV, 2023.
- Minimizing Maximum Model Discrepancy for Transferable Black-box Targeted Attacks. [pdf] [code]
  Anqi Zhao, Tong Chu, Yahao Liu, Wen Li, Jingjing Li, Lixin Duan. CVPR, 2023.

- Improving the Transferability of Targeted Adversarial Examples through Object-Based Diverse Input. [pdf] [code]
  Junyoung Byun, Seungju Cho, Myung-Joon Kwon, Hee-Seon Kim, Changick Kim. CVPR, 2022.
- Investigating Top-k White-Box and Transferable Black-box Attack. [pdf] [code]
  Chaoning Zhang, Philipp Benz, Adil Karjauv, Jae Won Cho, Kang Zhang, In So Kweon. CVPR, 2022.
- Stochastic Variance Reduced Ensemble Adversarial Attack for Boosting the Adversarial Transferability. [pdf] [code]
  Yifeng Xiong, Jiadong Lin, Min Zhang, John E. Hopcroft, Kun He. CVPR, 2022.
- Cross-Modal Transferable Adversarial Attacks from Images to Videos. [pdf]
  Zhipeng Wei, Jingjing Chen, Zuxuan Wu, Yu-Gang Jiang. CVPR, 2022.
- Boosting the Transferability of Video Adversarial Examples via Temporal Translation. [pdf] [code]
  Zhipeng Wei, Jingjing Chen, Zuxuan Wu, Yu-Gang Jiang. AAAI, 2022.
- Transferable Adversarial Attack based on Integrated Gradients. [pdf]
  Yi Huang, Adams Wai-Kin Kong. arXiv, 2022.
- Making Adversarial Examples More Transferable and Indistinguishable. [pdf] [code]
  Junhua Zou, Yexin Duan, Boyu Li, Wu Zhang, Yu Pan, Zhisong Pan. AAAI, 2022.
- Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation. [pdf] [code]
  Zeyu Qin, Yanbo Fan, Yi Liu, Li Shen, Yong Zhang, Jue Wang, Baoyuan Wu. NeurIPS, 2022.
- Learning to Learn Transferable Attack. [pdf]
  Shuman Fang, Jie Li, Xianming Lin, Rongrong Ji. AAAI, 2022.
- Adversarially Robust Models may not Transfer Better: Sufficient Conditions for Domain Transferability from the View of Regularization. [pdf]
  Xiaojun Xu, Jacky Y Zhang, Evelyn Ma, Hyun Ho Son, Sanmi Koyejo, Bo Li. ICML, 2022.
- Improving Adversarial Transferability via Neuron Attribution-Based Attacks. [pdf] [code]
  Jianping Zhang, Weibin Wu, Jen-tse Huang, Yizhan Huang, Wenxuan Wang, Yuxin Su, Michael R. Lyu. CVPR, 2022.
- On Improving Adversarial Transferability of Vision Transformers. [pdf] [code]
  Muzammal Naseer, Kanchana Ranasinghe, Salman Khan, Fahad Shahbaz Khan, Fatih Porikli. ICLR, 2022.
- LGV: Boosting Adversarial Example Transferability from Large Geometric Vicinity. [pdf] [code]
  Martin Gubri, Maxime Cordy, Mike Papadakis, Yves Le Traon, Koushik Sen. ECCV, 2022.
- Efficient and Transferable Adversarial Examples from Bayesian Neural Networks. [pdf] [code]
  Martin Gubri, Maxime Cordy, Mike Papadakis, Yves Le Traon, Koushik Sen. UAI, 2022.
- Boosting Transferability of Targeted Adversarial Examples via Hierarchical Generative Networks. [pdf] [code]
  Xiao Yang, Yinpeng Dong, Tianyu Pang, Hang Su, Jun Zhu. ECCV, 2022.
- Boosting Black-Box Attack with Partially Transferred Conditional Adversarial Distribution. [pdf] [code]
  Yan Feng, Baoyuan Wu, Yanbo Fan, Li Liu, Zhifeng Li, Shu-Tao Xia. CVPR, 2022.
- Adversarial Pixel Restoration as a Pretext Task for Transferable Perturbations. [pdf] [code]
  Hashmat Shadab Malik, Shahina Kunhimon, Muzammal Naseer, Salman Khan, Fahad Shahbaz Khan. BMVC, 2022.
- Natural Color Fool: Towards Boosting Black-box Unrestricted Attacks. [pdf] [code]
  Shengming Yuan, Qilong Zhang, Lianli Gao, Yaya Cheng, Jingkuan Song. NeurIPS, 2022.

- Admix: Enhancing the Transferability of Adversarial Attacks. [pdf] [code]
  Xiaosen Wang, Xuanran He, Jingdong Wang, Kun He. ICCV, 2021.
- On Generating Transferable Targeted Perturbations. [pdf] [code]
  Muzammal Naseer, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Fatih Porikli. ICCV, 2021.
- Batch Normalization Increases Adversarial Vulnerability and Decreases Adversarial Transferability: A Non-Robust Feature Perspective. [pdf]
  Philipp Benz, Chaoning Zhang, In So Kweon. ICCV, 2021.
- Improving the Transferability of Adversarial Samples with Adversarial Transformations. [pdf]
  Weibin Wu, Yuxin Su, Michael R. Lyu, Irwin King. CVPR, 2021.
- Enhancing the Transferability of Adversarial Attacks Through Variance Tuning. [pdf] [code]
  Xiaosen Wang, Kun He. CVPR, 2021.
- Improving Transferability of Adversarial Patches on Face Recognition With Generative Models. [pdf]
  Zihao Xiao, Xianfeng Gao, Chilin Fu, Yinpeng Dong, Wei Gao, Xiaolu Zhang, Jun Zhou, Jun Zhu. CVPR, 2021.
- On Success and Simplicity: A Second Look at Transferable Targeted Attacks. [pdf] [code]
  Zhengyu Zhao, Zhuoran Liu, Martha Larson. NeurIPS, 2021.
- Learning Transferable Adversarial Perturbations. [pdf]
  Krishna Kanth Nakka, Mathieu Salzmann. NeurIPS, 2021.
- Feature Importance-Aware Transferable Adversarial Attacks. [pdf] [code]
  Zhibo Wang, Hengchang Guo, Zhifei Zhang, Wenxin Liu, Zhan Qin, Kui Ren. ICCV, 2021.
- A Unified Approach to Interpreting and Boosting Adversarial Transferability. [pdf] [code]
  Xin Wang, Jie Ren, Shuyun Lin, Xiangming Zhu, Yisen Wang, Quanshi Zhang. ICLR, 2021.

- Adversarial Examples on Segmentation Models Can be Easy to Transfer. [pdf] [code]
  Jindong Gu, Hengshuang Zhao, Volker Tresp, Philip Torr. arXiv, 2020.
- Improving the Transferability of Adversarial Examples with Resized-Diverse-Inputs, Diversity-Ensemble and Region Fitting. [pdf] [code]
  Junhua Zou, Zhisong Pan, Junyang Qiu, Xin Liu, Ting Rui, Wei Li. ECCV, 2020.
- Regional Homogeneity: Towards Learning Transferable Universal Adversarial Perturbations Against Defenses. [pdf] [code]
  Yingwei Li, Song Bai, Cihang Xie, Zhenyu Liao, Xiaohui Shen, Alan Yuille. ECCV, 2020.
- Towards Transferable Targeted Attack. [pdf] [code]
  Maosen Li, Cheng Deng, Tengjiao Li, Junchi Yan, Xinbo Gao, Heng Huang. CVPR, 2020.
- Boosting the Transferability of Adversarial Samples via Attention. [pdf]
  Weibin Wu, Yuxin Su, Xixian Chen, Shenglin Zhao, Irwin King, Michael R. Lyu, Yu-Wing Tai. CVPR, 2020.
- Transferable Perturbations of Deep Feature Distributions. [pdf]
  Nathan Inkawhich, Kevin J Liang, Lawrence Carin, Yiran Chen. arXiv, 2020.
- Perturbing Across the Feature Hierarchy to Improve Standard and Strict Blackbox Attack Transferability. [pdf]
  Nathan Inkawhich, Kevin Liang, Binghui Wang, Matthew Inkawhich, Lawrence Carin, Yiran Chen. NeurIPS, 2020.
- Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets. [pdf] [code]
  Dongxian Wu, Yisen Wang, Shu-Tao Xia, James Bailey, Xingjun Ma. ICLR, 2020.
- Backpropagating Linearly Improves Transferability of Adversarial Examples. [pdf] [code]
  Yiwen Guo, Qizhang Li, Hao Chen. NeurIPS, 2020.
- CAG: A Real-Time Low-Cost Enhanced-Robustness High-Transferability Content-Aware Adversarial Attack Generator. [pdf]
  Huy Phan, Yi Xie, Siyu Liao, Jie Chen, Bo Yuan. AAAI, 2020.

- Improving Transferability of Adversarial Examples with Input Diversity. [pdf] [code]
  Cihang Xie, Zhishuai Zhang, Yuyin Zhou, Song Bai, Jianyu Wang, Zhou Ren, Alan L. Yuille. CVPR, 2019.
- Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks. [pdf] [code]
  Yinpeng Dong, Tianyu Pang, Hang Su, Jun Zhu. CVPR, 2019.
- Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks. [pdf] [code]
  Jiadong Lin, Chuanbiao Song, Kun He, Liwei Wang, John E. Hopcroft. arXiv, 2019.
- Enhancing Adversarial Example Transferability With an Intermediate Level Attack. [pdf] [code]
  Qian Huang, Isay Katsman, Horace He, Zeqi Gu, Serge Belongie, Ser-Nam Lim. ICCV, 2019.
- FDA: Feature Disruptive Attack. [pdf] [code]
  Aditya Ganeshan, Vivek B.S., R. Venkatesh Babu. ICCV, 2019.
- Feature Space Perturbations Yield More Transferable Adversarial Examples. [pdf] [code]
  Nathan Inkawhich, Wei Wen, Hai (Helen) Li, Yiran Chen. CVPR, 2019.
- Once a MAN: Towards Multi-Target Attack via Learning Multi-Target Adversarial Network Once. [pdf]
  Jiangfan Han, Xiaoyi Dong, Ruimao Zhang, Dongdong Chen, Weiming Zhang, Nenghai Yu, Ping Luo, Xiaogang Wang. ICCV, 2019.
- Cross-Domain Transferability of Adversarial Perturbations. [pdf] [code]
  Muhammad Muzammal Naseer, Salman H. Khan, Muhammad Haris Khan, Fahad Shahbaz Khan, Fatih Porikli. NeurIPS, 2019.

- Task-generalizable Adversarial Attack based on Perceptual Metric. [pdf]
  Muzammal Naseer, Salman H. Khan, Shafin Rahman, Fatih Porikli. arXiv, 2018.
- Boosting Adversarial Attacks with Momentum. [pdf] [code]
  Yinpeng Dong, Fangzhou Liao, Tianyu Pang, Hang Su, Jun Zhu, Xiaolin Hu, Jianguo Li. CVPR, 2018.
- Transferable Adversarial Perturbations. [pdf]
  Wen Zhou, Xin Hou, Yongjun Chen, Mengyun Tang, Xiangqi Huang, Xiang Gan, Yong Yang. ECCV, 2018.
- Transferable Adversarial Attacks for Image and Video Object Detection. [pdf] [code]
  Xingxing Wei, Siyuan Liang, Ning Chen, Xiaochun Cao. arXiv, 2018.
- Generative Adversarial Perturbations. [pdf] [code]
  Omid Poursaeed, Isay Katsman, Bicheng Gao, Serge Belongie. CVPR, 2018.
- Generating Adversarial Examples with Adversarial Networks. [pdf]
  Chaowei Xiao, Bo Li, Jun-Yan Zhu, Warren He, Mingyan Liu, Dawn Song. IJCAI, 2018.

- LOTS about Attacking Deep Features. [pdf]
  Andras Rozsa, Manuel Günther, Terrance E. Boult. IJCB, 2017.
- Delving into Transferable Adversarial Examples and Black-box Attacks. [pdf] [code]
  Yanpei Liu, Xinyun Chen, Chang Liu, Dawn Song. ICLR, 2017.

Please contact us (jindong.gu@outlook.com) if
- you would like to add your paper to this repo,
- you find any mistakes in this repo, or
- you have any suggestions for this repo.