Please star or watch this repository to keep track of the latest updates! Contributions are welcome!
NEWS
- [Nov/2024] We released a comprehensive survey of model inversion attacks. Check out the paper.
Outline:
- NEWS
- What is the model inversion attack?
- Survey
- Computer vision domain
- Graph learning domain
- Natural language processing domain
- Tools
- Others
- Related repositories
- Star History
What is the model inversion attack?
A model inversion attack is a privacy attack where the attacker is able to reconstruct the original samples that were used to train the synthetic model from the generated synthetic data set. (Mostly.ai)
The goal of model inversion attacks is to recreate training data or sensitive attributes. (Chen et al., 2021)
In model inversion attacks, a malicious user attempts to recover the private dataset used to train a supervised neural network. A successful model inversion attack should generate realistic and diverse samples that accurately describe each of the classes in the private dataset. (Wang et al., 2021)
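To make the definitions above concrete, below is a minimal sketch of a white-box model inversion attack: gradient ascent on a candidate input so that the released classifier assigns it to a chosen class with high confidence. The model, input shape, step count, and total-variation weight are illustrative placeholders; practical attacks (e.g., the GAN-based ones in the computer vision section) use much stronger image priors.

```python
# Minimal white-box model inversion sketch: optimize an input so that a
# released classifier assigns it to the target class with high confidence.
# All shapes and hyperparameters below are illustrative placeholders.
import torch
import torch.nn.functional as F

def invert_class(model: torch.nn.Module, target_class: int,
                 input_shape=(1, 1, 32, 32), steps: int = 500,
                 lr: float = 0.1, tv_weight: float = 1e-2) -> torch.Tensor:
    """Return a reconstructed input that the model classifies as `target_class`."""
    model.eval()
    x = torch.zeros(input_shape, requires_grad=True)   # blank starting image
    optimizer = torch.optim.Adam([x], lr=lr)
    target = torch.tensor([target_class])
    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(x)
        class_loss = F.cross_entropy(logits, target)    # push towards the target class
        # Total-variation term acts as a weak image prior to keep the result smooth.
        tv = (x[..., 1:, :] - x[..., :-1, :]).abs().mean() + \
             (x[..., :, 1:] - x[..., :, :-1]).abs().mean()
        (class_loss + tv_weight * tv).backward()
        optimizer.step()
        x.data.clamp_(0.0, 1.0)                         # keep pixels in a valid range
    return x.detach()
```

For example, `invert_class(trained_model, target_class=3)` would return a 32x32 reconstruction that the attacked model labels as class 3.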
Survey
- [arXiv 2024] Model Inversion Attacks: A Survey of Approaches and Countermeasures [paper]
- [Phil. Trans. R. Soc. A 2018] Algorithms that remember: model inversion attacks and data protection law [paper]
- [CSF 2023] SoK: Model Inversion Attack Landscape: Taxonomy, Challenges, and Future Roadmap [paper]
- [arXiv 2022] Trustworthy Graph Neural Networks: Aspects, Methods and Trends [paper]
- [arXiv 2022] A Survey of Trustworthy Graph Learning: Reliability, Explainability, and Privacy Protection [paper]
- [arXiv 2022] A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability [paper]
- [arXiv 2022] Federated Learning Attacks Revisited: A Critical Discussion of Gaps, Assumptions, and Evaluation Setups [paper]
- [arXiv 2022] I Know What You Trained Last Summer: A Survey on Stealing Machine Learning Models and Defences [paper]
- [arXiv 2021] Survey: Leakage and Privacy at Inference Time [paper]
- [arXiv 2021] A Review of Confidentiality Threats Against Embedded Neural Network Models [paper]
- [arXiv 2021] Membership Inference Attacks on Machine Learning: A Survey [paper]
- [arXiv 2021] ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models [paper]
- [IEEE Access 2020] Privacy and Security Issues in Deep Learning: A Survey [paper]
- [arXiv 2020] A Survey of Privacy Attacks in Machine Learning [paper]
- [arXiv 2020] Rethinking Privacy Preserving Deep Learning: How to Evaluate and Thwart Privacy Attacks [paper]
- [arXiv 2020] An Overview of Privacy in Machine Learning [paper]
Computer vision domain
- [CVPR 2024] Model Inversion Robustness: Can Transfer Learning Help? [paper] [code]
- [ICLR 2024] Be Careful What You Smooth For: Label Smoothing Can Be a Privacy Shield but Also a Catalyst for Model Inversion Attacks [paper] [code]
- [ICASSP 2023] (black-box) Sparse Black-Box Inversion Attack with Limited Information [paper] [code]
- [TIFS 2023] A GAN-Based Defense Framework Against Model Inversion Attacks [paper]
- [CVPR 2023] (black-box) Breaching FedMD: Image Recovery via Paired-Logits Inversion Attack [paper] [code]
- [AAAI 2023] (white-box) Pseudo Label-Guided Model Inversion Attack via Conditional Generative Adversarial Network [paper] [code]
- [TDSC 2023] (black-box) C2FMI: Coarse-to-Fine Black-box Model Inversion Attack [paper] [code]
- [TDSC 2023] (black-box) Boosting Model Inversion Attacks with Adversarial Examples [paper]
- [CVPR 2023] (black-box) Reinforcement Learning-Based Black-Box Model Inversion Attacks [paper] [code]
- [CVPR 2023] (white-box) Re-thinking Model Inversion Attacks Against Deep Neural Networks [paper] [code]
- [AAAI 2023] (black-box (defense)) Purifier: Defending Data Inference Attacks via Transforming Confidence Scores [paper]
- [CCS 2023] (black-box) Unstoppable Attack: Label-Only Model Inversion via Conditional Diffusion Model [paper]
- [ICML 2022] Plug-In Inversion: Model-Agnostic Inversion for Vision with Data Augmentations [paper]
- [ICML 2022] (white-box) Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks [paper] [code]
- [CVPR 2022] (black-box) Label-Only Model Inversion Attacks via Boundary Repulsion [paper] [code]
- [CVPR 2022] (white-box (defense)) ResSFL: A Resistance Transfer Framework for Defending Model Inversion Attack in Split Federated Learning [paper] [code]
- [KDD 2022] (white-box (defense)) Bilateral Dependency Optimization: Defending Against Model-inversion Attacks [paper] [code]
- [USENIX Security 2022] (holistic risk assessment) ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models [paper] [code]
- [TIFS 2022] (white-box) Model Inversion Attack by Integration of Deep Generative Models: Privacy-Sensitive Face Generation From a Face Recognition System [paper]
- [TIFS 2022] (black-box (defense)) One Parameter Defense—Defending Against Data Inference Attacks via Differential Privacy [paper]
- [WACV 2022] (white-box) Reconstructing Training Data from Diverse ML Models by Ensemble Inversion [paper]
- [ECCV 2022] (white-box) SecretGen: Privacy Recovery on Pre-trained Models via Distribution Discrimination [paper]
- [WPES 2022] (black-box) UnSplit: Data-Oblivious Model Inversion, Model Stealing, and Label Inference Attacks Against Split Learning [paper] [code]
- [NDSS 2022] (white-box) MIRROR: Model Inversion for Deep Learning Network with High Fidelity [paper] [code]
- [S&P 2022] (white-box) Reconstructing Training Data with Informed Adversaries [paper]
- [BMVC 2022] (white-box) Privacy Vulnerability of Split Computing to Data-Free Model Inversion Attacks [paper]
- [NeurIPS 2022] (white-box) Reconstructing Training Data from Trained Neural Networks [paper]
- [NeurIPS 2021] (white-box) Variational Model Inversion Attacks [paper] [code]
- [ICCV 2021] (white-box) Exploiting Explanations for Model Inversion Attacks [paper]
- [ICCV 2021] (white-box) Knowledge-Enriched Distributional Model Inversion Attacks [paper] [code]
- [AAAI 2021] (white-box (defense)) Improving Robustness to Model Inversion Attacks via Mutual Information Regularization [paper]
- [ICLR Workshop 2021] (black-box (defense)) Practical Defences Against Model Inversion Attacks for Split Neural Networks [paper] [code]
- [ICDE 2021] (white-box) Feature Inference Attack on Model Predictions in Vertical Federated Learning [paper] [code]
- [DAC 2021] (black & white-box) PRID: Model Inversion Privacy Attacks in Hyperdimensional Learning Systems [paper]
- [CSR Workshops 2021] (black-box (defense)) Defending Against Model Inversion Attack by Adversarial Examples [paper]
- [ECML PKDD 2021] (black-box) Practical Black Box Model Inversion Attacks Against Neural Nets [paper]
- [APSIPA 2021] (black-box) Model Inversion Attack against a Face Recognition System in a Black-Box Setting [paper]
- [CCS 2021] Unleashing the Tiger: Inference Attacks on Split Learning [paper] [code]
- [CVPR 2020] Dreaming to Distill: Data-free Knowledge Transfer via DeepInversion [paper] [code]
- [CVPR 2020] (white-box) The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks [paper] [code] [video]
- [ICLR 2020] (white-box) Overlearning Reveals Sensitive Attributes [paper]
- [APSIPA ASC 2020] (white-box) Deep Face Recognizer Privacy Attack: Model Inversion Initialization by a Deep Generative Adversarial Data Space Discriminator [paper]
- [USENIX Security 2020] (black-box) Updates-Leak: Data Set Inference and Reconstruction Attacks in Online Learning [paper]
- [IoT-J 2020] (black-box) Attacking and Protecting Data Privacy in Edge-Cloud Collaborative Inference Systems [paper] [code]
- [ECCV Workshop 2020] (black-box) Black-Box Face Recovery from Identity Features [paper]
- [arXiv 2020] (white-box) MixCon: Adjusting the Separability of Data Representations for Harder Data Recovery [paper]
- [Globecom 2020] (white-box (defense)) Privacy Preserving Facial Recognition Against Model Inversion Attacks [paper]
- [Big Data 2020] (white-box (defense)) Broadening Differential Privacy for Deep Learning Against Model Inversion Attacks [paper]
- [AdvML 2020] (metric) Evaluation Indicator for Model Inversion Attack [paper]
- [arXiv 2020] Defending Model Inversion and Membership Inference Attacks via Prediction Purification [paper]
- [arXiv 2019] (black-box) GAMIN: An Adversarial Approach to Black-Box Model Inversion [paper]
- [ACSAC 2019] (black & white-box) Model Inversion Attacks Against Collaborative Inference [paper] [code]
- [CCS 2019] (black-box) Neural Network Inversion in Adversarial Setting via Background Knowledge Alignment [paper] [code]
- [GLSVLSI 2019] (black-box (defense)) MLPrivacyGuard: Defeating Confidence Information based Model Inversion Attacks on Machine Learning Systems [paper]
- [CVPR 2019] A Style-Based Generator Architecture for Generative Adversarial Networks [paper]
- [arXiv 2019] (white-box) An Attack-Based Evaluation Method for Differentially Private Learning Against Model Inversion Attack [paper]
- [CSF 2018] (white-box) Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting [paper]
- [CCS 2017] (white-box) Machine Learning Models that Remember Too Much [paper] [code]
- [PST 2017] (white-box) Model Inversion Attacks for Prediction Systems: Without Knowledge of Non-Sensitive Attributes [paper]
- [NeurIPS 2016] Generating Images with Perceptual Similarity Metrics based on Deep Networks [paper]
- [CVPR 2016] Inverting Visual Representations with Convolutional Networks [paper]
- [CSF 2016] (black & white-box) A Methodology for Formalizing Model-Inversion Attacks [paper]
- [CVPR 2015] Understanding Deep Image Representations by Inverting Them [paper]
- [IJCAI 2015] (white-box (defense)) Regression Model Fitting under Differential Privacy and Model Inversion Attack [paper] [code]
- [CCS 2015] (black & white-box) Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures [paper] [code1] [code2] [code3] [code4]
- [ICLR 2014] Intriguing Properties of Neural Networks [paper]
- [ICLR 2014] Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps [paper]
- [USENIX Security 2014] (black & white-box) Privacy in Pharmacogenetics: An End-to-End Case Study of Personalized Warfarin Dosing [paper]
Graph learning domain
- [SecureComm 2023] (white-box) Model Inversion Attacks on Homogeneous and Heterogeneous Graph Neural Networks [paper]
- [ICML 2023] (white-box) On Strengthening and Defending Graph Reconstruction Attack with Markov Chain Approximation [paper] [code]
- [TKDE 2022] Model Inversion Attacks against Graph Neural Networks [paper]
- [IJIS 2022] Defense Against Membership Inference Attack in Graph Neural Networks Through Graph Perturbation [paper]
- [CCS 2022] Finding MNEMON: Reviving Memories of Node Embeddings [paper]
- [arXiv 2022] Privacy and Transparency in Graph Machine Learning: A Unified Perspective [paper]
- [arXiv 2022] Private Graph Extraction via Feature Explanations [paper]
- [arXiv 2022] Degree-Preserving Randomized Response for Graph Neural Networks under Local Differential Privacy [paper]
- [arXiv 2022] SoK: Differential Privacy on Graph-Structured Data [paper]
- [arXiv 2022] GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation [paper]
- [arXiv 2022] Differentially Private Graph Classification With GNNs [paper]
- [IEEE S&P 2022] Model Stealing Attacks Against Inductive Graph Neural Networks [paper] [code]
- [USENIX Security 2022] Inference Attacks Against Graph Neural Networks [paper] [code]
- [WWW 2022] Learning Privacy-Preserving Graph Convolutional Network with Partially Observed Sensitive Attributes [paper]
- [arXiv 2022] (black & white-box) A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability [paper]
- [arXiv 2021] Node-Level Membership Inference Attacks Against Graph Neural Networks [paper]
- [IJCAI 2021] (white-box) GraphMI: Extracting Private Graph Data from Graph Neural Networks [paper] [code]
- [ICML 2021] DeepWalking Backwards: From Node Embeddings Back to Graphs [paper] [code]
- [ICDE 2021] (black-box) NetFense: Adversarial Defenses against Privacy Attacks on Neural Networks for Graph Data [paper] [code]
- [IJCAI 2021] (white-box) A Survey on Gradient Inversion: Attacks, Defenses and Future Directions [paper]
- [MobiQuitous 2020] Quantifying Privacy Leakage in Graph Embedding [paper] [code]
- [arXiv 2020] (black & white-box) Reducing Risk of Model Inversion Using Privacy-Guided Training [paper]
- [USENIX Security 2021] Stealing Links from Graph Neural Networks [paper] [code]
Natural language processing domain
- [ACL 2024] (black-box) Text Embedding Inversion Security for Multilingual Language Models [paper] [code]
- [ICLR 2024] (black-box) Language Model Inversion [paper] [code]
- [arXiv 2024] (white-box) Do Membership Inference Attacks Work on Large Language Models? [paper]
- [EMNLP 2024] (black-box) Extracting Prompts by Inverting LLM Outputs [paper] [code]
- [ACL 2024] (black-box) Transferable Embedding Inversion Attack: Uncovering Privacy Risks in Text Embeddings without Model Queries [paper]
- [COLM 2024] Effective Prompt Extraction from Language Models [paper]
- [EMNLP 2023] (black-box) Text Embeddings Reveal (Almost) As Much As Text [paper] [code]
- [arXiv 2023] (white-box) Deconstructing Classifiers: Towards A Data Reconstruction Attack Against Text Classification Models [paper]
- [ACL 2023] (black-box) Sentence Embedding Leaks More Information than You Expect: Generative Embedding Inversion Attack to Recover the Whole Sentence [paper] [code]
- [SaTML 2023] (black-box) Model Inversion Attack with Least Information and an In-depth Analysis of its Disparate Vulnerability [paper]
- [NAACL 2022] (white-box) Are Large Pre-Trained Language Models Leaking Your Personal Information? [paper] [code]
- [NeurIPS 2022] (white-box) Recovering Private Text in Federated Learning of Language Models [paper] [code]
- [ACL 2022] (white-box) Canary Extraction in Natural Language Understanding Models [paper]
- [arXiv 2022] (white-box) Text Revealer: Private Text Reconstruction via Model Inversion Attacks against Transformers [paper]
- [arXiv 2022] (black-box) KART: Parameterization of Privacy Leakage Scenarios from Pre-trained Language Models [paper] [code]
- [CEUR Workshop 2021] (black-box) Dataset Reconstruction Attack against Language Models [paper]
- [EMNLP 2021] (white-box) TAG: Gradient Attack on Transformer-based Language Models [paper]
- [CCS 2020] (black & white-box) Information Leakage in Embedding Models [paper]
- [S&P 2020] (black & white-box) Privacy Risks of General-Purpose Language Models [paper]
- [USENIX Security 2021] (black-box) Extracting Training Data from Large Language Models [paper] [code]
- [USENIX Security 2019] The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks [paper]
- [arXiv 2018] Towards Robust and Privacy-preserving Text Representations [paper]
- [arXiv 2018] Privacy-preserving Neural Representations of Text [paper]
- [arXiv 2018] Adversarial Removal of Demographic Attributes from Text Data [paper]
- [NeurIPS 2017] Controllable Invariance through Adversarial Feature Learning [paper]
- [arXiv 2015] Censoring Representations with an Adversary [paper]
Tools
- AIJack: Implementation of algorithms for AI security.
- Privacy-Attacks-in-Machine-Learning: Membership inference, attribute inference, and model inversion attacks implemented using PyTorch.
- ml-attack-framework: Universität des Saarlandes - Privacy Enhancing Technologies 2021 - Semester Project.
- (Trail of Bits) PrivacyRaven [GitHub]
- (TensorFlow) TensorFlow Privacy [GitHub]
- (NUS Data Privacy and Trustworthy Machine Learning Lab) Machine Learning Privacy Meter [GitHub]
- (IQT Labs/Lab 41) CypherCat (archive-only) [GitHub]
- (IBM) Adversarial Robustness Toolbox (ART) [GitHub] (see the usage sketch after this list)
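Since ART ships a model inversion attack, here is a hedged usage sketch assuming ART's MIFace class and its PyTorchClassifier wrapper; the argument names and defaults are written from memory and may differ across ART versions, and the tiny untrained model is only a stand-in for a real attack target.

```python
# Hedged sketch: running ART's MIFace model inversion attack against a PyTorch
# classifier. Treat class/argument names as an outline, not a verified recipe.
import numpy as np
import torch
from art.estimators.classification import PyTorchClassifier
from art.attacks.inference.model_inversion import MIFace

# Stand-in model: in practice, wrap your own trained network here.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))

classifier = PyTorchClassifier(
    model=model,
    loss=torch.nn.CrossEntropyLoss(),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

attack = MIFace(classifier, max_iter=1000, learning_rate=0.1)

# One reconstruction per class: MIFace optimizes the inputs so the classifier
# assigns each requested label with high confidence.
x_init = np.zeros((10, 1, 28, 28), dtype=np.float32)
y_target = np.arange(10)
reconstructions = attack.infer(x_init, y_target)
print(reconstructions.shape)  # expected: (10, 1, 28, 28)
```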
Others
- [arXiv 2023] A Linear Reconstruction Approach for Attribute Inference Attacks against Synthetic Data [paper] [code]
- [USENIX Security 2022] Synthetic Data - Anonymisation Groundhog Day [paper] [code]
- [Blog 2020] Uncovering a model's secrets [blog1] [blog2]
- [Blog 2020] Attacks against Machine Learning Privacy (Part 1): Model Inversion Attacks with the IBM-ART Framework [blog]
- [Slides 2020] ML and DP [slides]
Related repositories
- awesome-ml-privacy-attacks [repo]