fairness_measures_code
Discrimination quantifier
This repository contains implementations of measures to quantify discrimination in data.
38 stars
8 watching
6 forks
Language: Python
Last commit: 11 months ago
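The measures implemented here are group-fairness statistics computed over a binary outcome and a protected-group attribute. As a minimal sketch of what such measures look like (not this repository's actual API; the function names and example data below are purely illustrative), two widely used ones are the mean difference and the disparate impact ratio:

```python
import numpy as np

def mean_difference(outcomes, protected):
    """Mean difference (statistical parity difference):
    d = P(y=1 | s=0) - P(y=1 | s=1).
    d = 0 indicates parity; d > 0 indicates the protected group
    receives favourable outcomes less often."""
    outcomes = np.asarray(outcomes)
    protected = np.asarray(protected, dtype=bool)
    return outcomes[~protected].mean() - outcomes[protected].mean()

def disparate_impact(outcomes, protected):
    """Disparate impact ratio: P(y=1 | s=1) / P(y=1 | s=0).
    Values below 0.8 fail the common "four-fifths rule"."""
    outcomes = np.asarray(outcomes)
    protected = np.asarray(protected, dtype=bool)
    return outcomes[protected].mean() / outcomes[~protected].mean()

# Hypothetical data: y = favourable decision (1) or not (0),
# s = membership in the protected group (1) or not (0).
y = [1, 0, 1, 1, 0, 1, 0, 0]
s = [0, 0, 0, 0, 1, 1, 1, 1]
print(mean_difference(y, s))   # 0.75 - 0.25 = 0.5
print(disparate_impact(y, s))  # 0.25 / 0.75 ≈ 0.33
```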
Related projects:

| Repository | Description | Stars |
|---|---|---|
| | An open-source tool for simulating the long-term impacts of machine learning-based decision systems on social environments | 314 |
| | A Python implementation of fairness mechanisms for classification models that mitigate disparate impact and disparate mistreatment | 190 |
| | Code implementing the FairVFL algorithm, with associated data structures and utilities for efficient, fairness-aware model training | 7 |
| | A collection of common Python coding mistakes and poor practices | 1,716 |
| | A dataset and benchmarking framework for evaluating question-answering models on detecting and mitigating social biases | 92 |
| | A code analysis and automation platform | 111 |
| | Compiles bias-evaluation datasets for large language models and provides access to the original data sources | 115 |
| | Tools and resources for identifying biased language in code and content | 21 |
| | A tool to assess and mitigate unfairness in AI systems, helping developers ensure their models do not disproportionately harm certain groups of people | 1,974 |
| | An evaluation toolkit for assessing fairness in machine learning models | 343 |
| | Implementations of common supervised machine learning evaluation metrics in multiple programming languages | 1,632 |
| | A Python library providing tools and algorithms for fairness in machine learning model development | 29 |
| | A tool that tests software for discriminatory bias in its decision-making processes | 103 |
| | A tool for evaluating and improving the fairness of machine learning models | 57 |
| | An implementation of Zemel et al.'s 2013 algorithm for learning fair representations | 26 |