This project implements a collaborative agent pipeline to detect and reduce biases in large language model outputs, focusing on improving pronoun inclusivity and fair queer representation.
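A minimal sketch of what a "detect then revise" agent loop for pronoun inclusivity can look like; all names and rules here are hypothetical illustrations, not the project's actual pipeline.

```python
# Illustrative sketch only: a two-stage detect/revise loop for pronoun inclusivity.
# Agent names, patterns, and replacements are hypothetical, not the project's code.
import re

GENDERED_GENERICS = {
    r"\bhe or she\b": "they",
    r"\bhis or her\b": "their",
    r"\bhim or her\b": "them",
}

def detector_agent(text: str) -> list[str]:
    """Return the gendered-generic patterns found in the text."""
    return [pat for pat in GENDERED_GENERICS if re.search(pat, text, re.IGNORECASE)]

def reviser_agent(text: str, findings: list[str]) -> str:
    """Rewrite flagged spans with gender-neutral alternatives."""
    for pat in findings:
        text = re.sub(pat, GENDERED_GENERICS[pat], text, flags=re.IGNORECASE)
    return text

def pipeline(text: str) -> str:
    findings = detector_agent(text)
    return reviser_agent(text, findings) if findings else text

print(pipeline("Every engineer should review his or her pull request."))
# -> "Every engineer should review their pull request."
```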
Oracle Guardian AI Open Source Project is a library of tools for assessing the fairness/bias and privacy of machine learning models and datasets.
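To make "assessing fairness" concrete, here is a from-scratch illustration of one group-fairness metric such toolkits typically report; this is not the Guardian AI API, just the underlying computation.

```python
# Not the Guardian AI API: a from-scratch illustration of statistical parity
# difference, one of the group fairness metrics such toolkits report.
import numpy as np

def statistical_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """P(y_hat = 1 | group == 1) - P(y_hat = 1 | group == 0) for binary arrays."""
    rate_a = y_pred[group == 1].mean()
    rate_b = y_pred[group == 0].mean()
    return float(rate_a - rate_b)

# Toy predictions: the model approves group 1 far more often than group 0.
y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 0, 1, 0])
group  = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])
print(statistical_parity_difference(y_pred, group))  # 0.8 - 0.2 = 0.6
```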
PyTorch implementation of 'Explaining text classifiers with counterfactual representations' (Lemberger & Saillenfest, 2024)
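A generic sketch of the idea of intervening on a latent concept direction to form a counterfactual representation; the paper's exact construction differs, and the concept direction here is an assumed input.

```python
# Generic sketch (not the paper's exact method): flip a text representation
# along an assumed concept direction and compare the classifier's output
# before and after the intervention.
import torch

def counterfactual(z: torch.Tensor, concept_dir: torch.Tensor) -> torch.Tensor:
    """Reflect the representation z across the hyperplane orthogonal to concept_dir."""
    v = concept_dir / concept_dir.norm()
    return z - 2.0 * (z @ v) * v  # flip the component of z along v

classifier_head = torch.nn.Linear(8, 2)  # stands in for the trained classifier's head
z = torch.randn(8)                       # representation of one input text
v = torch.randn(8)                       # assumed concept direction, learned separately

delta = classifier_head(counterfactual(z, v)) - classifier_head(z)
print(delta)  # how much the prediction shifts under the concept flip
```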
Debiaser for Multiple Variables, a model- and data-agnostic method to improve fairness in binary and multi-class classification tasks
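As an illustration of what a data-level, model-agnostic debiaser operates on (not necessarily the exact procedure of the method above), a simple pre-processing rebalance that equalizes every (sensitive group, label) cell:

```python
# Illustrative only, not the Debiaser for Multiple Variables algorithm itself:
# oversample each (sensitive group, label) cell to the size of the largest cell,
# so no combination is under-represented in the training data.
import pandas as pd

def rebalance(df: pd.DataFrame, group_col: str, label_col: str, seed: int = 0) -> pd.DataFrame:
    target = df.groupby([group_col, label_col]).size().max()
    parts = [
        cell.sample(n=target, replace=True, random_state=seed)
        for _, cell in df.groupby([group_col, label_col])
    ]
    return pd.concat(parts).sample(frac=1.0, random_state=seed).reset_index(drop=True)

df = pd.DataFrame({
    "x":     [0.1, 0.4, 0.3, 0.9, 0.7, 0.2, 0.8, 0.5],
    "sex":   ["f", "f", "f", "m", "m", "m", "m", "m"],
    "label": [0,   0,   1,   0,   1,   1,   1,   0],
})
balanced = rebalance(df, group_col="sex", label_col="label")
print(balanced.groupby(["sex", "label"]).size())  # every cell now has the same count
```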
Demographic Bias of Vision-Language Foundation Models in Medical Imaging
[Nature Medicine] The Limits of Fair Medical Imaging AI In Real-World Generalization
[ICCV 2023] Partition-and-Debias: Agnostic Biases Mitigation via a Mixture of Biases-Specific Experts
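A schematic of the mixture-of-experts structure such a partition-and-debias approach builds on; the gating and expert heads below are generic PyTorch modules, not the paper's architecture.

```python
# Schematic only: a gating network mixing several "bias-specific" expert heads.
import torch
import torch.nn as nn

class MixtureOfExperts(nn.Module):
    def __init__(self, feat_dim: int, num_classes: int, num_experts: int = 4):
        super().__init__()
        self.experts = nn.ModuleList([nn.Linear(feat_dim, num_classes) for _ in range(num_experts)])
        self.gate = nn.Linear(feat_dim, num_experts)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.gate(z), dim=-1)                      # (batch, experts)
        expert_logits = torch.stack([e(z) for e in self.experts], dim=1)   # (batch, experts, classes)
        return (weights.unsqueeze(-1) * expert_logits).sum(dim=1)          # weighted mixture

moe = MixtureOfExperts(feat_dim=128, num_classes=10)
print(moe(torch.randn(8, 128)).shape)  # torch.Size([8, 10])
```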
Code for BiasMitigationRL, a reinforcement-learning-based bias mitigation method.
Code for the paper 'The Other Side of Compression: Measuring Bias in Pruned Transformers' (IDA 2023)
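A minimal sketch of the setup that paper studies, not its evaluation code: prune a transformer-style layer by weight magnitude, then re-run whatever bias probe you use before and after.

```python
# Sketch of the measurement setup: magnitude-prune a layer, then compare a
# bias probe on the dense vs. pruned model. The probe below is a hypothetical
# placeholder, not the paper's metric.
import torch
import torch.nn.utils.prune as prune

layer = torch.nn.Linear(768, 768)                          # stands in for one projection in a transformer
prune.l1_unstructured(layer, name="weight", amount=0.5)    # zero out the 50% smallest-magnitude weights
prune.remove(layer, "weight")                              # make the pruning permanent

sparsity = (layer.weight == 0).float().mean().item()
print(f"sparsity after pruning: {sparsity:.0%}")

def bias_probe(model: torch.nn.Module) -> float:
    """Hypothetical placeholder: return a bias score, e.g. a stereotype association gap."""
    raise NotImplementedError

# score_before = bias_probe(dense_model); score_after = bias_probe(pruned_model)
```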
CIRCLe: Color Invariant Representation Learning for Unbiased Classification of Skin Lesions. Mirror of https://github.com/arezou-pakzad/CIRCLe
CIRCLe: Color Invariant Representation Learning for Unbiased Classification of Skin Lesions
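A sketch of the core idea as described by the repositories above, classify the original image while regularizing the feature extractor so a skin-tone-transformed copy maps to nearby features; the exact losses, transforms, and encoder are in those repos, and the toy modules below are stand-ins.

```python
# Sketch of a color-invariance regularizer: task loss on the original image plus
# an MSE penalty pulling original and color-shifted representations together.
import torch
import torch.nn.functional as F

def color_invariance_loss(encoder, head, x, x_color_shifted, y, lam: float = 0.1):
    z = encoder(x)                       # features of the original image
    z_shift = encoder(x_color_shifted)   # features of the color-transformed image
    task_loss = F.cross_entropy(head(z), y)
    invariance = F.mse_loss(z, z_shift)  # keep the two representations close
    return task_loss + lam * invariance

# Toy stand-ins: 4 RGB images of 64x64, a tiny encoder/head, 3 classes.
encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 32))
head = torch.nn.Linear(32, 3)
x = torch.randn(4, 3, 64, 64)
loss = color_invariance_loss(encoder, head, x, x + 0.1 * torch.randn_like(x), torch.tensor([0, 1, 2, 0]))
loss.backward()
```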
Official code of "Discover and Mitigate Unknown Biases with Debiasing Alternate Networks" (ECCV 2022)
The code generates simulated research landscapes with any given level of bias and can be used to calculate the Machine Learning Gain under different conditions.