fairness-comparison

Fairness comparison tool

A repository of benchmarking tools and datasets for evaluating and comparing fairness-aware machine learning techniques.

GitHub

Stars: 159
Watchers: 17
Forks: 50
Language: HTML
Last commit: almost 2 years ago
Linked from 3 awesome lists


Related projects:

fairlearn/fairlearn: A Python package to assess and improve the fairness of machine learning models (1,948 stars)
tensorflow/fairness-indicators: Tools for evaluating and visualizing fairness in machine learning models (343 stars)
algofairness/blackboxauditing: A software package for auditing and analyzing machine learning models to detect unfair biases (130 stars)
google/ml-fairness-gym: An open-source framework for studying long-term fairness effects in machine learning decision systems (312 stars)
mbilalzafar/fair-classification: A Python implementation of fairness mechanisms for classification models that mitigate disparate impact and disparate mistreatment (189 stars)
alco/benchfella: Tools for comparing and benchmarking small code snippets (516 stars)
borealisai/advertorch: A toolbox for researching and evaluating robustness against adversarial attacks on machine learning models (1,308 stars)
litian96/fair_flearn: Develops and evaluates algorithms for fair resource allocation in federated learning, aiming to promote more inclusive AI systems (243 stars)
ethicalml/xai: An eXplainability toolbox for machine learning that supports data analysis and model evaluation to mitigate bias and improve performance (1,125 stars)
koaning/scikit-fairness: A Python library providing tools and algorithms for fairness in machine learning model development (29 stars)
responsiblyai/responsibly: A toolkit for auditing and mitigating bias in machine learning systems (94 stars)
ayong8/fairsight: A system that uses data visualization and machine learning to support fair, transparent, and explainable decisions (25 stars)
wenshuoguo/robust-fairness-code: A framework for experimenting with robust optimization methods that improve fairness on noisy protected groups (6 stars)
adebayoj/fairml: An auditing toolbox for assessing the fairness of black-box predictive models (360 stars)
i-gallegos/fair-llm-benchmark: Compiles bias evaluation datasets for large language models and provides access to the original data sources (110 stars)
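
These projects all center on the same basic workflow: train a model, then measure how its predictions differ across groups defined by a protected attribute. The sketch below is a minimal illustration of that kind of evaluation using fairlearn (listed above); the synthetic data, feature construction, and protected attribute are invented purely for illustration and are not drawn from the fairness-comparison benchmark or its API.

# Minimal sketch of a group-fairness evaluation using fairlearn; all data
# below is synthetic and hypothetical, used only to show the pattern.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, demographic_parity_difference

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                 # synthetic features
group = rng.integers(0, 2, size=500)          # hypothetical protected attribute
# Labels correlate with the protected attribute so a disparity is visible.
y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)
pred = model.predict(X)

# Accuracy broken down by group, plus a standard disparity metric.
frame = MetricFrame(metrics=accuracy_score, y_true=y, y_pred=pred,
                    sensitive_features=group)
print(frame.by_group)
print("Demographic parity difference:",
      demographic_parity_difference(y, pred, sensitive_features=group))

Benchmark suites such as this repository run the same kind of measurement across many algorithms, datasets, and fairness metrics so their trade-offs can be compared side by side.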