fairml
Model auditor
An auditing toolbox to assess the fairness of black-box predictive models
361 stars
19 watching
73 forks
Language: Python
last commit: over 4 years ago
Linked from 3 awesome lists
Topics: auditing-predictive-models, discrimination, fairness, model-criticism, prediction-model, toolbox
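As an illustration of the kind of black-box audit fairml targets: a fairness check that only queries a model's prediction function, never its internals. The sketch below computes a demographic parity difference; the model, data, and function names are illustrative assumptions, not fairml's actual API.

```python
# Hypothetical black-box fairness audit sketch (not fairml's API):
# we only call the model's predict function, never inspect its internals.
import numpy as np

def demographic_parity_difference(predict, X, group):
    """Absolute difference in positive-prediction rates between two
    groups, using only black-box access to `predict`."""
    y_hat = np.asarray(predict(X))
    group = np.asarray(group)
    rate_a = y_hat[group == 0].mean()
    rate_b = y_hat[group == 1].mean()
    return abs(rate_a - rate_b)

# Toy black-box model: predicts 1 when the first feature exceeds 0.5.
predict = lambda X: (X[:, 0] > 0.5).astype(int)

X = np.array([[0.9], [0.8], [0.2], [0.1]])
group = np.array([0, 0, 1, 1])  # sensitive attribute per row

print(demographic_parity_difference(predict, X, group))  # → 1.0
```

A value of 0 would mean both groups receive positive predictions at the same rate; here the toy model favors group 0 exclusively, giving the maximum disparity of 1.0.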
Related projects:
| Repository | Description | Stars |
|---|---|---|
| | A software package for auditing and analyzing machine learning models to detect unfair biases | 130 |
| | Toolkit to audit and mitigate biases in machine learning models | 701 |
| | A toolkit for auditing and mitigating bias in machine learning systems | 96 |
| | Provides a Python implementation of fairness mechanisms in classification models to mitigate disparate impact and mistreatment | 190 |
| | An explainability toolbox for machine learning that enables data analysis and model evaluation to mitigate biases and improve performance | 1,135 |
| | Compiles bias evaluation datasets and provides access to original data sources for large language models | 115 |
| | A tool to assess and mitigate unfairness in AI systems, helping developers ensure their models do not disproportionately harm certain groups of people | 1,974 |
| | A tool for detecting bias in machine learning models and mitigating it using various techniques | 86 |
| | Automates code quality checks for Python programs | 1,049 |
| | A curated list of large machine learning models tracked over time | 341 |
| | Evaluates and compares the performance of multimodal large language models on various tasks | 56 |
| | An open-source benchmarking framework for evaluating the cross-style visual capability of large multimodal models | 84 |
| | A tool for evaluating and improving the fairness of machine learning models | 57 |
| | A framework for auditing and improving regression models by analyzing their training data | 8 |
| | A toolbox for researching and evaluating robustness against attacks on machine learning models | 1,311 |