
Code for "Differential Privacy Has Disparate Impact on Model Accuracy" NeurIPS'19


ebagdasa/differential-privacy-vs-fairness


The paper examines how differential privacy (specifically, DP-SGD from [1]) impacts model accuracy for underrepresented groups.

Usage

Configure the environment by running: pip install -r requirements.txt

We use Python 3.7 and an Nvidia Titan X GPU.

The script playing.py runs the experiments. It reads the paper's parameters from utils/params.yaml and logs training graphs to TensorBoard. For sentiment prediction we use playing_nlp.py.
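
An example launch is sketched below; the bare invocations are an assumption (check each script's argument parsing), since the experiment parameters live in utils/params.yaml rather than on the command line:

```
# Hypothetical invocation: edit utils/params.yaml to pick the experiment, then:
python playing.py

# Sentiment prediction:
python playing_nlp.py
```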

Datasets:

  1. MNIST (part of PyTorch, via torchvision; see the loading sketch after this list)
  2. Diversity in Faces (obtained from IBM here)
  3. iNaturalist (download from here)
  4. UTKFace (from here)
  5. AAE Twitter corpus (from here)
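
As an aside on item 1, MNIST can be loaded directly through torchvision. This is a minimal sketch using the standard torchvision API (the batch size is arbitrary, and the repo's own data-loading code may differ):

```python
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Download MNIST to ./data and wrap it in a DataLoader.
train_set = datasets.MNIST(root="data", train=True, download=True,
                           transform=transforms.ToTensor())
train_loader = DataLoader(train_set, batch_size=256, shuffle=True)
```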

We use compute_dp_sgd_privacy.py, copied from the public TensorFlow Privacy repo.
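
Assuming the copy matches the upstream TF Privacy script, the (epsilon, delta) guarantee for a training run can be estimated as follows (the flag names are those of the upstream script and the values are placeholders; adjust both if the local copy differs):

```
python compute_dp_sgd_privacy.py --N=60000 --batch_size=256 \
    --noise_multiplier=1.1 --epochs=60 --delta=1e-5
```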

The DP-FedAvg implementation is taken from a public repo.

Our implementation of DP-SGD is based on the TF Privacy repo and the following papers:

[1] M. Abadi, A. Chu, I. Goodfellow, H. B. McMahan, I. Mironov, K. Talwar, and L. Zhang. Deep learning with differential privacy. In CCS, 2016.

[2] H. B. McMahan and G. Andrew. A general approach to adding differential privacy to iterative training procedures. arXiv:1812.06210, 2018.

[3] H. B. McMahan, D. Ramage, K. Talwar, and L. Zhang. Learning differentially private recurrent language models. In ICLR, 2018.
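
To make the mechanism concrete, below is a minimal, illustrative DP-SGD step in PyTorch following [1]. It is a sketch, not this repo's actual implementation: each per-example gradient is clipped to L2 norm at most C, the clipped gradients are summed, Gaussian noise of scale sigma * C is added, and the noisy average is applied as the update.

```python
import torch
import torch.nn.functional as F

def dp_sgd_step(model, batch_x, batch_y, lr=0.1, clip_norm=1.0, noise_multiplier=1.1):
    """One DP-SGD update in the style of Abadi et al. [1] (illustrative only)."""
    params = [p for p in model.parameters() if p.requires_grad]
    grad_sum = [torch.zeros_like(p) for p in params]
    for x, y in zip(batch_x, batch_y):
        # Compute this example's gradient.
        loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        # Clip it to L2 norm <= clip_norm before accumulating.
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(clip_norm / (total_norm + 1e-6), max=1.0)
        for acc, g in zip(grad_sum, grads):
            acc.add_(g * scale)
    # Add Gaussian noise calibrated to the clipping norm, then average and step.
    n = len(batch_x)
    with torch.no_grad():
        for p, acc in zip(params, grad_sum):
            noise = torch.normal(0.0, noise_multiplier * clip_norm, size=p.shape)
            p.add_(-(lr / n) * (acc + noise))
```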
