robust-fairness-code

Fairness experiments

A framework for experimenting with robust optimization methods to improve fairness in machine learning models on noisy protected groups.

Code for the experiments in the paper "Robust Optimization for Fairness with Noisy Protected Groups".
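The repository's own API is not reproduced here, but the core idea can be illustrated generically: train a classifier under a fairness constraint via a Lagrangian, and tighten the constraint's slack to account for noise in the observed group labels. The sketch below is a minimal NumPy toy on synthetic data; the noise rate `gamma`, the tolerance `target_slack`, and the simple slack-tightening heuristic are illustrative assumptions, not the paper's full robust (soft-assignment / DRO) method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: the label depends on both features,
# the protected group on the second feature only.
n = 500
X = rng.normal(size=(n, 2))
y = (X[:, 0] + X[:, 1] + 0.5 * rng.normal(size=n) > 0).astype(float)
g_true = (X[:, 1] > 0).astype(int)          # true (unobserved) group
flip = rng.random(n) < 0.2                  # 20% of group labels are flipped
g = np.where(flip, 1 - g_true, g_true)      # observed noisy group labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

# Fairness constraint: |positive-rate gap between groups| <= slack.
# Naive robustification (an assumed baseline, for illustration): under
# symmetric label flips with rate gamma, the gap measured on noisy groups
# shrinks by roughly (1 - 2*gamma), so enforce a proportionally tighter
# slack on the noisy groups to keep the true-group gap within tolerance.
gamma, target_slack = 0.2, 0.10             # assumed noise rate and tolerance
robust_slack = (1.0 - 2.0 * gamma) * target_slack

w, lam, lr = np.zeros(2), 0.0, 0.1
for _ in range(5000):
    p = sigmoid(X @ w)
    m1, m0 = p[g == 1].mean(), p[g == 0].mean()
    sgn = np.sign(m1 - m0)
    pp = (p * (1.0 - p))[:, None]           # sigmoid derivative factor
    grad_gap = sgn * ((X[g == 1] * pp[g == 1]).mean(axis=0)
                      - (X[g == 0] * pp[g == 0]).mean(axis=0))
    grad_loss = X.T @ (p - y) / n           # logistic-loss gradient
    w -= lr * (grad_loss + lam * grad_gap)  # primal descent on the Lagrangian
    lam = max(0.0, lam + lr * (abs(m1 - m0) - robust_slack))  # dual ascent

p = sigmoid(X @ w)
final_gap = abs(p[g == 1].mean() - p[g == 0].mean())
```

The primal-dual loop drives the observed positive-rate gap toward the tightened slack; the tighter slack is what (heuristically) absorbs the group-label noise.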

GitHub
6 stars · 1 watching · 4 forks
Language: Python
Last commit: 4 months ago
Linked from 1 awesome list



Related projects:

taoqi98/fairvfl (7 stars): Code implementing the FairVFL algorithm, with associated data structures and utilities for efficient and accurate fairness-aware model training.
google/ml-fairness-gym (314 stars): An open-source tool for simulating the long-term impacts of machine learning-based decision systems on social environments.
guanghelee/neurips19-certificates-of-robustness (17 stars): A framework for computing tight certificates of adversarial robustness for randomly smoothed classifiers.
algofairness/fairness-comparison (159 stars): Benchmarking tools and data for evaluating fairness-aware machine learning algorithms.
mbilalzafar/fair-classification (190 stars): A Python implementation of fairness mechanisms for classification models to mitigate disparate impact and disparate mistreatment.
fairlearn/fairlearn (1,974 stars): A toolkit for assessing and mitigating unfairness in AI systems, helping developers ensure their models do not disproportionately harm certain groups of people.
megantosh/fairness_measures_code (38 stars): Implementations of measures that quantify discrimination in data.
ucsb-nlp-chang/fairness-reprogramming (15 stars): A method for improving machine learning model fairness without retraining the entire network.
tensorflow/fairness-indicators (343 stars): An evaluation toolkit for assessing fairness in machine learning models.
litian96/fair_flearn (244 stars): Algorithms for fair resource allocation in federated learning, aiming to promote more inclusive AI systems.
zjelveh/learning-fair-representations (26 stars): An implementation of Zemel et al.'s 2013 algorithm for learning fair representations.
i-gallegos/fair-llm-benchmark (115 stars): Compiles bias evaluation datasets for large language models and provides access to the original data sources.
chenhongge/robusttrees (67 stars): An implementation of decision-tree models robust to adversarial examples, built on the XGBoost framework.
borealisai/advertorch (1,311 stars): A toolbox for researching and evaluating the robustness of machine learning models against adversarial attacks.
koaning/scikit-fairness (29 stars): A Python library of tools and algorithms for fairness in machine learning model development.