fairness-ai
Here are 133 public repositories matching this topic...
Examining the susceptibility of fairness measures to various scenarios of data imbalance. (Updated Jan 21, 2023 · Jupyter Notebook)
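As a purely illustrative sketch (not taken from the repo above), the kind of group-fairness measure such a study examines can be computed in a few lines; the group labels, predictions, and two-group setup here are hypothetical. The second call shows why imbalance matters: the minority group's rate rests on very few samples, so the measure becomes a noisy estimate.

```python
def demographic_parity_difference(y_pred, groups):
    """Absolute gap in positive-prediction rates between two groups."""
    rates = {}
    for g in set(groups):
        preds = [p for p, gg in zip(y_pred, groups) if gg == g]
        rates[g] = sum(preds) / len(preds)
    a, b = rates.values()
    return abs(a - b)

# Balanced groups: both groups predict positive at the same rate.
print(demographic_parity_difference([1, 0, 1, 0],
                                    ["A", "A", "B", "B"]))  # 0.0

# Imbalanced groups: group B has a single sample, so its rate (and the
# resulting gap) is dominated by that one observation.
print(demographic_parity_difference([1, 0, 1, 0, 1, 1],
                                    ["A", "A", "A", "A", "A", "B"]))  # 0.4
```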
Codebase for "Fair-GAIN" for fair ML predictions. (Updated Mar 24, 2023 · Python)
FACT-AI (Fairness, Accountability, Confidentiality, and Transparency in AI) project on debiasing word embeddings. (Updated Feb 3, 2020 · Jupyter Notebook)
[Official code] "Exploring the Impact of Temporal Bias in Point-of-Interest Recommendation" (RecSys 22). (Updated Jul 26, 2022 · Jupyter Notebook)
Hmumu classifiers trained to be fair with respect to the invariant mass. (Updated Feb 26, 2019 · Python)
Applies global, local, and performance-based interpretability methods, as well as model-fairness evaluations, to a dataset with protected attributes: traffic violations in Montgomery, Maryland, USA. A fork of a group project from the Data Science for Business Master's degree at HEC Paris. (Updated Nov 10, 2023 · Jupyter Notebook)
Trustworthy AI/ML course by Professor Birhanu Eshete, University of Michigan, Dearborn. (Updated May 10, 2024 · HTML)
Human in the Loop. (Updated Feb 8, 2024)
🔍 In recent years, advances in machine learning (ML) have increased automation across many domains. One challenge has been job-recruitment systems that demonstrated bias against female applicants [4]. This repo investigates techniques for overcoming such bias. (Updated Jul 3, 2022 · Jupyter Notebook)
Affective Bias in Large Pre-trained Language Models. (Updated Sep 9, 2024 · Jupyter Notebook)
Fairness in Digital Image Forgery Detection System. (Updated Oct 20, 2023 · Jupyter Notebook)
FairPy: A Python Library for Machine Learning Fairness. (Updated Mar 1, 2023)
All the material needed to implement FairVIC for bias mitigation in deep neural networks. (Updated Sep 26, 2024 · Jupyter Notebook)
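As a generic sketch of the in-processing family of techniques this repo belongs to (this is not FairVIC's actual objective), bias mitigation during training often augments the task loss with a fairness penalty; the penalty form, the scores, and the weighting coefficient `lam` below are hypothetical.

```python
def fairness_penalty(scores, groups):
    """Squared gap between the mean model scores of two groups."""
    means = {}
    for g in set(groups):
        vals = [s for s, gg in zip(scores, groups) if gg == g]
        means[g] = sum(vals) / len(vals)
    a, b = means.values()
    return (a - b) ** 2

def total_loss(task_loss, scores, groups, lam=1.0):
    # Trade off predictive accuracy against the between-group score gap.
    return task_loss + lam * fairness_penalty(scores, groups)

# Mean score for group A is 0.85, for group B 0.25: penalty = 0.6^2 = 0.36.
print(total_loss(0.25, [0.9, 0.8, 0.2, 0.3], ["A", "A", "B", "B"]))  # 0.61
```

Minimizing such a combined objective pushes the model toward parameters that keep both the task loss and the between-group disparity small, with `lam` controlling the trade-off.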
"Building Fair AI Models" tutorial at PyData Berlin / REVISION 2018. (Updated Nov 20, 2018 · Jupyter Notebook)
Report for INFO4900 independent research under Prof. Dawn Schrader: surveys bias detection and mitigation methods in language models, identifies emerging language-model tasks where existing mechanisms fail, and proposes a novel fairness test plus a framework for updating large language models as societal notions of fairness change. (Updated Jan 22, 2022)