fairmodels
Bias detector — a flexible tool for detecting bias in machine learning models, visualizing it, and mitigating it using various techniques.
86 stars · 7 watching · 16 forks
Language: R
Last commit: over 2 years ago
Topics: explain-classifiers, explainable-ml, fairness, fairness-comparison, fairness-ml, model-evaluation
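The typical workflow follows the DALEX ecosystem: wrap a fitted classifier in an explainer, then run a fairness check against a protected attribute. The sketch below is a minimal example assuming the `german` credit dataset bundled with fairmodels and the `DALEX` package; exact column names (`Risk`, `Sex`) come from that dataset.

```r
library(fairmodels)
library(DALEX)

# German credit data shipped with fairmodels; Risk is the binary target
data("german")
y_numeric <- as.numeric(german$Risk) - 1

# Fit any classifier, e.g. logistic regression
lm_model <- glm(Risk ~ ., data = german, family = binomial(link = "logit"))

# Wrap the model in a DALEX explainer
explainer_lm <- explain(lm_model, data = german[, -1], y = y_numeric)

# Check fairness metrics with respect to the protected attribute Sex,
# treating "male" as the privileged group
fobject <- fairness_check(explainer_lm,
                          protected  = german$Sex,
                          privileged = "male")

print(fobject)  # summary of which fairness metrics pass or fail
plot(fobject)   # bar plot of metric ratios against the privileged group
```

Multiple explainers can be passed to `fairness_check()` at once, which is how the package supports comparing the fairness of competing models side by side.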
Related projects:
| Repository | Description | Stars |
| --- | --- | --- |
| | A dataset and benchmarking framework to evaluate the performance of question answering models on detecting and mitigating social biases. | 92 |
| | Compiles bias evaluation datasets and provides access to original data sources for large language models. | 115 |
| | A set of tools to provide insights into the workings of an ensemble machine learning model. | 230 |
| | Toolkit to audit and mitigate biases in machine learning models. | 701 |
| | An auditing toolbox to assess the fairness of black-box predictive models. | 361 |
| | Provides tools to assess and visualize the importance and effects of features in machine learning models. | 37 |
| | A tool for explaining predictions from machine learning models by attributing them to specific input variables and their interactions. | 82 |
| | A toolkit for auditing and mitigating bias in machine learning systems. | 96 |
| | A tool to help understand and explain the behavior of complex machine learning models. | 1,390 |
| | A tool to assess and mitigate unfairness in AI systems, helping developers ensure their models do not disproportionately harm certain groups of people. | 1,974 |
| | A tool for creating interactive, model-agnostic explanations of machine learning models in R. | 328 |
| | An eXplainability toolbox for machine learning that enables data analysis and model evaluation to mitigate biases and improve performance. | 1,135 |
| | Investigates how bias can be introduced and spread in machine learning models during federated learning, and aims to detect and mitigate this issue. | 11 |
| | Automatically detects and measures bias in visual datasets. | 111 |
| | A software package for auditing and analyzing machine learning models to detect unfair biases. | 130 |