A curated list of resources on model inversion attacks (MIA).
Membership Inference, Attribute Inference, and Model Inversion attacks implemented in PyTorch (a minimal membership-inference sketch appears after this list).
Official implementation of "Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective"
[ICML 2023] "On Strengthening and Defending Graph Reconstruction Attack with Markov Chain Approximation"
📄 [Talk] OFFZONE 2022 / ODS Data Halloween 2022: Black-box attacks on ML models using open-source tools
Semester project for Privacy Enhancing Technologies 2021 at Universität des Saarlandes
Official code for the paper: Z. Zhang, X. Wang, J. Huang, and S. Zhang, "Analysis and Utilization of Hidden Information in Model Inversion Attacks," IEEE Transactions on Information Forensics and Security, doi: 10.1109/TIFS.2023.3295942
My attempt to recreate the attack described in "Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures" (Fredrikson et al., 2015) using TensorFlow 2.9.1 (see the inversion sketch at the end of this list)
Implementation of "An Approximate Memory based Defense against Model Inversion Attacks to Neural Networks" and "MIDAS: Model Inversion Defenses Using an Approximate Memory System"
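For orientation, here is a minimal sketch of the simplest membership-inference baseline covered by the PyTorch entry above: thresholding per-example loss, on the intuition that training members tend to have lower loss than non-members. It assumes an arbitrary trained PyTorch classifier `model`; the function names and the threshold are illustrative and not taken from any repository in this list.

```python
# Loss-threshold membership-inference baseline (a sketch; the repositories
# above implement richer attacks). `model` is assumed to be any trained
# nn.Module classifier.
import torch
import torch.nn.functional as F

@torch.no_grad()
def membership_scores(model: torch.nn.Module,
                      x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Per-example cross-entropy loss; lower loss hints at training membership."""
    model.eval()
    return F.cross_entropy(model(x), y, reduction="none")

def threshold_attack(model: torch.nn.Module, x: torch.Tensor,
                     y: torch.Tensor, threshold: float) -> torch.Tensor:
    """Predict 1 (member) where the loss falls below the threshold, else 0."""
    return (membership_scores(model, x, y) < threshold).long()
```

In practice the threshold is calibrated on held-out data, for example to hit a target false-positive rate.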
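And a sketch of confidence-based model inversion in the spirit of Fredrikson et al. (2015), referenced above: gradient descent on the input to maximize the classifier's confidence for a target label. It is written in PyTorch for consistency with the other sketch (the repository above uses TensorFlow); the input shape, step count, and learning rate are illustrative assumptions, and the paper's MI-FACE post-processing (e.g., denoising) is omitted.

```python
# Gradient-based model inversion sketch (Fredrikson et al., 2015 style).
# Minimizing cross-entropy for the target label maximizes the model's
# confidence in it; hyperparameters here are illustrative assumptions.
import torch
import torch.nn.functional as F

def invert_class(model: torch.nn.Module, target_label: int,
                 shape=(1, 1, 32, 32), steps=500, lr=0.1) -> torch.Tensor:
    model.eval()
    x = torch.zeros(shape, requires_grad=True)    # start from a blank image
    target = torch.tensor([target_label])
    opt = torch.optim.SGD([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(model(x), target)  # -log(target confidence)
        loss.backward()
        opt.step()
        with torch.no_grad():
            x.clamp_(0.0, 1.0)                    # keep pixels in [0, 1]
    return x.detach()
```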