- Trained a pre-trained ResNet-18 with its backbone weights frozen, added a classifier head, and fine-tuned only the classifier layers (sketched below): `Training_CelebA.ipynb`
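A minimal sketch of that setup, assuming the standard torchvision ResNet-18 and CelebA's 40 binary attributes (the notebook may differ in its details):

```python
# Minimal sketch: freeze a pre-trained ResNet-18 backbone and train
# only a new classification head for CelebA's 40 binary attributes.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")
for p in model.parameters():
    p.requires_grad = False  # freeze the backbone

# New trainable head: one logit per CelebA attribute
model.fc = nn.Linear(model.fc.in_features, 40)

# Only the head's parameters are handed to the optimizer
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.BCEWithLogitsLoss()  # multi-label binary targets
```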
- Evaluated the model on the test data and obtained its accuracy with respect to every attribute (see the sketch below): `inference.py`, `attr acc.png`
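The per-attribute evaluation can be done roughly as follows; the exact loop in `inference.py` may differ, and `model` refers to the classifier from the previous sketch:

```python
# Per-attribute accuracy on the CelebA test split (assumes the dataset
# is already downloaded; pass download=True otherwise).
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

test_set = datasets.CelebA(
    root="data", split="test", target_type="attr",
    transform=transforms.Compose([transforms.Resize((224, 224)),
                                  transforms.ToTensor()]))
loader = DataLoader(test_set, batch_size=128)

correct = torch.zeros(40)
total = 0
model.eval()
with torch.no_grad():
    for x, attrs in loader:
        preds = (model(x) > 0).long()   # [B, 40] binary predictions
        correct += (preds == attrs).sum(dim=0).float()
        total += attrs.size(0)

for name, acc in zip(test_set.attr_names, correct / total):
    print(f"{name}: {acc:.3f}")
```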
- Based on an analysis of the dataset's attribute distribution and the above results, chose "Male" as the spurious attribute and "Bald" as the target attribute (the one the spurious correlation affects); a hypothetical version of that analysis follows.
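Cross-tabulating "Bald" against "Male" on the training split shows how skewed the pairing is (a sketch, not the repository's exact analysis):

```python
# Count how often each (Bald, Male) combination occurs in CelebA.
from torchvision import datasets

train_set = datasets.CelebA(root="data", split="train", target_type="attr")
male = train_set.attr[:, train_set.attr_names.index("Male")]
bald = train_set.attr[:, train_set.attr_names.index("Bald")]

for b in (0, 1):
    for m in (0, 1):
        n = ((bald == b) & (male == m)).sum().item()
        print(f"Bald={b}, Male={m}: {n}")
```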
- Trained a binary "Bald" classifier (a fine-tuned ResNet-18) and measured its bias via Equalized Odds: `Bald_Classifier(Biased).ipynb`, `equal_odds.py`
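Equalized Odds requires the true-positive and false-positive rates for "Bald" to be equal across the two "Male" groups. A minimal sketch of such a check (`equal_odds.py` may compute it differently):

```python
# Report the TPR and FPR gaps between the two groups; Equalized Odds
# holds when both gaps are (near) zero.
import torch

def rates(pred: torch.Tensor, label: torch.Tensor):
    """TPR and FPR for binary 0/1 tensors."""
    tpr = pred[label == 1].float().mean().item()
    fpr = pred[label == 0].float().mean().item()
    return tpr, fpr

def equalized_odds_gap(pred, label, group):
    tpr0, fpr0 = rates(pred[group == 0], label[group == 0])
    tpr1, fpr1 = rates(pred[group == 1], label[group == 1])
    return abs(tpr1 - tpr0), abs(fpr1 - fpr0)

# pred/label: binary "Bald" predictions and labels; group: "Male" (0/1)
# tpr_gap, fpr_gap = equalized_odds_gap(pred, label, group)
```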
- Implemented an adversarial training pipeline to reduce the above bias (sketched below): `adversarial_training.ipynb`
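One common recipe for such a pipeline has an adversary try to predict the protected attribute from the classifier's features, with a gradient-reversal layer pushing the backbone to discard that information. The sketch below assumes that recipe; the notebook's exact setup may differ:

```python
# Adversarial debiasing via gradient reversal: the adversary learns to
# predict "Male" from the features, while reversed gradients train the
# backbone to remove that information.
import torch
import torch.nn as nn
from torchvision import models

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; scales gradients by -lam backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None

backbone = models.resnet18(weights="IMAGENET1K_V1")
backbone.fc = nn.Identity()      # expose the 512-d features
bald_head = nn.Linear(512, 1)    # target head: "Bald"
adv_head = nn.Linear(512, 1)     # adversary head: "Male"
bce = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam([*backbone.parameters(),
                        *bald_head.parameters(),
                        *adv_head.parameters()], lr=1e-4)

def train_step(x, bald_y, male_y, lam=1.0):
    feats = backbone(x)
    task_loss = bce(bald_head(feats).squeeze(1), bald_y.float())
    # Reversed gradients make the backbone *hurt* the adversary
    adv_loss = bce(adv_head(GradReverse.apply(feats, lam)).squeeze(1),
                   male_y.float())
    loss = task_loss + adv_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```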
- Lastly, re-ran the bias check to measure how closely Equalized Odds is now satisfied: `recheck_odds.py`
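Assuming the hypothetical `equalized_odds_gap` helper from the earlier sketch, the recheck amounts to comparing the gaps before and after adversarial training:

```python
# Hypothetical recheck: pred_biased / pred_debiased are the two models'
# "Bald" predictions on the same test set.
tpr_before, fpr_before = equalized_odds_gap(pred_biased, label, group)
tpr_after, fpr_after = equalized_odds_gap(pred_debiased, label, group)
print(f"TPR gap: {tpr_before:.3f} -> {tpr_after:.3f}")
print(f"FPR gap: {fpr_before:.3f} -> {fpr_after:.3f}")
```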
- Conclusion: adversarial training reduced the original bias by 20%.