fairness-comparison
Fairness comparison tool
A repository providing benchmarking tools and data for evaluating and comparing fairness-aware machine learning algorithms.
159 stars
17 watching
50 forks
Language: HTML
Last commit: about 2 years ago
Linked from 3 awesome lists
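Benchmarks of this kind typically compare algorithms on group-fairness metrics such as demographic parity. A minimal illustrative sketch (not the repository's actual API; the function name and data are hypothetical):

```python
# Illustrative sketch of a group-fairness metric commonly reported by
# fairness benchmarks. Not the repository's actual API.

def demographic_parity_difference(y_pred, groups):
    """Largest gap in positive-prediction rates across groups."""
    rates = {}
    for g in set(groups):
        preds = [p for p, gg in zip(y_pred, groups) if gg == g]
        rates[g] = sum(preds) / len(preds)  # positive rate for group g
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

# Example: binary predictions for individuals in groups "a" and "b"
preds = [1, 0, 1, 1, 0, 0]
groups = ["a", "a", "a", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))
```

A value of 0 means both groups receive positive predictions at the same rate; larger values indicate greater disparity.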
Related projects:
| Repository | Description | Stars |
|---|---|---|
| | A tool to assess and mitigate unfairness in AI systems, helping developers ensure their models do not disproportionately harm certain groups of people | 1,974 |
| | An evaluation toolkit to assess fairness in machine learning models | 343 |
| | A software package for auditing and analyzing machine learning models to detect unfair biases | 130 |
| | An open-source tool for simulating the long-term impacts of machine learning-based decision systems on social environments | 314 |
| | A Python implementation of fairness mechanisms in classification models to mitigate disparate impact and disparate mistreatment | 190 |
| | Tools for comparing and benchmarking small code snippets | 514 |
| | A toolbox for researching and evaluating robustness against attacks on machine learning models | 1,311 |
| | Develops and evaluates algorithms for fair resource allocation in federated learning, aiming to promote more inclusive AI systems | 244 |
| | An explainability toolbox for machine learning that enables data analysis and model evaluation to mitigate biases and improve performance | 1,135 |
| | A Python library providing tools and algorithms for fairness in machine learning model development | 29 |
| | A toolkit for auditing and mitigating bias in machine learning systems | 96 |
| | A system that uses data visualization and machine learning to help make fair decisions in a transparent and explainable way | 25 |
| | A framework for experimenting with robust optimization methods to improve fairness in machine learning models on noisy protected groups | 6 |
| | An auditing toolbox to assess the fairness of black-box predictive models | 361 |
| | Compiles bias evaluation datasets and provides access to original data sources for large language models | 115 |