A curated list of papers on adversarial examples. Inspired by awesome-deep-vision, awesome-adversarial-machine-learning, awesome-deep-learning-papers, and awesome-architecture-search.
Please feel free to open a pull request or an issue to add papers.
- Adversarial examples in the physical world (ICLR2017 Workshop)
- DeepFool: a simple and accurate method to fool deep neural networks (CVPR2016) The idea is close to the original one: loop until the predicted label changes.
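That loop has a closed form in the simplest case, a binary linear classifier, where each DeepFool step is the minimal L2 move onto the decision boundary. A minimal sketch of that case (the function name and the `overshoot` constant here are illustrative, not taken from the paper's code):

```python
import numpy as np

def deepfool_linear(x, w, b, overshoot=0.02, max_iter=50):
    """DeepFool sketch for a binary linear classifier sign(w.x + b):
    repeatedly take the minimal L2 step toward the decision boundary,
    slightly overshot, until the predicted label changes."""
    x_adv = x.astype(float).copy()
    orig = np.sign(np.dot(w, x_adv) + b)
    for _ in range(max_iter):
        f = np.dot(w, x_adv) + b
        if np.sign(f) != orig:          # predicted label changed: stop
            break
        # closest point on the hyperplane w.x + b = 0, with overshoot
        x_adv = x_adv - (1 + overshoot) * (f / np.dot(w, w)) * w
    return x_adv
```

For a linear model one overshot step already crosses the boundary; for deep networks the paper re-linearizes the classifier at every iteration, which is what the loop stands in for.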
- Learning with a strong adversary (rejected by ICLR2016?) Apply the spirit of GAN to optimization.
- Decision-based Adversarial Attacks: Reliable Attacks Against Black-box Machine Learning Models (ICLR2018) [code]
- The limitations of deep learning in adversarial settings (EuroS&P2016) (European Symposium on Security & Privacy) Propose the saliency map attack (JSMA); does not use a loss function.
- Generating Natural Adversarial Examples (ICLR2018)
- Simple Black-Box Adversarial Perturbations for Deep Networks (CVPR17 Workshop) One-pixel attack.
- Boosting Adversarial Attacks with Momentum (CVPR2018 Spotlight)
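The momentum method (MI-FGSM) accumulates a velocity of L1-normalized gradients across iterations instead of taking each gradient step independently. A toy sketch on a logistic-regression model with an analytic input gradient (the model and the hyperparameter defaults are illustrative):

```python
import numpy as np

def mi_fgsm(x, y, w, b, eps, mu=1.0, steps=10):
    """Momentum Iterative FGSM sketch on logistic regression.
    g accumulates L1-normalized gradients; each step moves by
    alpha * sign(g) and is projected back into the L_inf ball."""
    alpha = eps / steps
    g = np.zeros_like(x, dtype=float)
    x_adv = x.astype(float).copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(np.dot(w, x_adv) + b)))
        grad = (p - y) * w                            # dL/dx for cross-entropy
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)
        x_adv = x + np.clip(x_adv + alpha * np.sign(g) - x, -eps, eps)
    return x_adv
```

The decay factor `mu` controls how much past gradients stabilize the update direction, which is the paper's main addition over plain iterative FGSM.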
- Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition (CCS2016) Same approach as the least-likely-class method.
- Adversarial examples for semantic image segmentation (ICLR2017 Workshop) Same approach as the classification case.
- Explaining and Harnessing Adversarial Examples (ICLR2015) Fast Gradient Sign Method
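FGSM perturbs the input by `eps` times the sign of the loss gradient with respect to the input. A minimal sketch on a logistic-regression model, where that gradient is analytic (the model and parameter names are illustrative, not from the paper):

```python
import numpy as np

def fgsm(x, y, w, b, eps):
    """FGSM sketch on logistic regression: x_adv = x + eps * sign(dL/dx).
    For cross-entropy loss with p = sigmoid(w.x + b), dL/dx = (p - y) * w."""
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)
```

The sign operation makes the perturbation exactly `eps` in L_inf norm in one step, which is why FGSM is the standard fast baseline the later iterative attacks build on.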
- U-turn: Crafting Adversarial Queries with Opposite-direction Features (IJCV2022) [Code] Attack image retrieval.
- Ensemble Adversarial Training: Attacks and Defenses (ICLR2018)
- Adversarial Manipulation of Deep Representations (ICLR2016) Attack the intermediate activations.
- Query-efficient Meta Attack to Deep Neural Networks (ICLR2020) Attack image models based on meta-learning.
- Sparse adversarial perturbations for videos (AAAI2019) Focus on sparse adversarial perturbations for videos.
- Black-box adversarial attacks on video recognition models (ACM MM2019) Attack video models in the black-box setting.
- Motion-Excited Sampler: Video Adversarial Attack with Sparked Prior (ECCV2020) Attack videos by directly applying motion maps.
- Exploring the space of adversarial images (IJCNN2016)
- Towards Deep Learning Models Resistant to Adversarial Attacks (ICLR2018)
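This paper's attack is projected gradient descent (PGD): a random start inside the L_inf ball followed by repeated sign-gradient steps, each projected back into the ball. A toy sketch on a logistic-regression model (the seed, step size, and step count are illustrative defaults, not the paper's settings):

```python
import numpy as np

def pgd(x, y, w, b, eps, alpha=None, steps=20, rng=None):
    """PGD sketch on logistic regression: random start in the L_inf ball
    of radius eps, then repeated FGSM-style steps with projection."""
    rng = np.random.default_rng(0) if rng is None else rng
    alpha = eps / 4 if alpha is None else alpha
    x_adv = x + rng.uniform(-eps, eps, size=x.shape)  # random start
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(np.dot(w, x_adv) + b)))
        grad = (p - y) * w                            # dL/dx for cross-entropy
        x_adv = x_adv + alpha * np.sign(grad)
        x_adv = x + np.clip(x_adv - x, -eps, eps)     # project into the ball
    return x_adv
```

Training against this inner-maximization attack is the paper's defense (PGD adversarial training); the random start is what distinguishes it from plain iterative FGSM.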
- Stochastic Activation Pruning for Robust Adversarial Defense (ICLR2018)
- Mitigating Adversarial Effects Through Randomization (ICLR2018)
- Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples (ICML2018) [Github]
- Adversarial Examples Are Not Bugs, They Are Features (NeurIPS 2019)