Oracle Guardian AI Open Source Project is a library of tools for assessing the fairness/bias and privacy of machine learning models and datasets.
Official code of "Discover and Mitigate Unknown Biases with Debiasing Alternate Networks" (ECCV 2022)
[Nature Medicine] The Limits of Fair Medical Imaging AI In Real-World Generalization
CIRCLe: Color Invariant Representation Learning for Unbiased Classification of Skin Lesions
Demographic Bias of Vision-Language Foundation Models in Medical Imaging
[ICCV 2023] Partition-and-Debias: Agnostic Biases Mitigation via a Mixture of Biases-Specific Experts
Code implementation for BiasMitigationRL, a reinforcement learning-based bias mitigation method.
CIRCLe: Color Invariant Representation Learning for Unbiased Classification of Skin Lesions. Mirror of https://github.com/arezou-pakzad/CIRCLe
The code generates simulated research landscapes of any given bias. It can be used to calculate the Machine Learning Gain under different conditions.
Debiaser for Multiple Variables, a model- and data-agnostic method to improve fairness in binary and multi-class classification tasks
Code for the paper The Other Side of Compression: Measuring Bias in Pruned Transformers (IDA23)
Pytorch implementation of 'Explaining text classifiers with counterfactual representations' (Lemberger & Saillenfest, 2024)