fairmodels — Bias detector
Flexible tool for bias detection, visualization, and mitigation in machine learning models.
86 stars · 7 watching · 16 forks
Language: R
Last commit: over 2 years ago
Topics: explain-classifiers, explainable-ml, fairness, fairness-comparison, fairness-ml, model-evaluation
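fairmodels plugs into the DALEX explainer ecosystem: you wrap a fitted classifier in an explainer, then run a fairness check against a protected attribute. A minimal sketch along the lines of the package's documented workflow (the `german` credit dataset ships with fairmodels; column names and the exact `fairness_check()` output are assumptions, so treat this as illustrative rather than canonical):

```r
library(DALEX)       # creates model explainers
library(fairmodels)  # fairness checks on top of explainers

# German credit data bundled with fairmodels; `Risk` is the binary target
data("german", package = "fairmodels")

# Fit a simple logistic regression classifier
lm_model <- glm(Risk ~ ., data = german, family = binomial(link = "logit"))

# Wrap the model in a DALEX explainer (y must be numeric 0/1)
explainer <- DALEX::explain(
  lm_model,
  data = german[, -1],
  y    = as.numeric(german$Risk) - 1
)

# Check fairness metrics across the protected attribute `Sex`,
# treating "male" as the privileged group
fobject <- fairness_check(
  explainer,
  protected  = german$Sex,
  privileged = "male"
)

print(fobject)  # summary of which fairness metrics pass or fail
plot(fobject)   # bar chart of metric ratios against the privileged group
```

`fairness_check()` accepts multiple explainers at once, which is how the package supports its "fairness-comparison" use case: pass several models and the plot compares their metric ratios side by side.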
Related projects:
| Repository | Description | Stars |
|---|---|---|
| nyu-mll/bbq | A dataset and benchmarking framework for evaluating how well question answering models detect and mitigate social biases. | 92 |
| i-gallegos/fair-llm-benchmark | Compiles bias evaluation datasets and provides access to their original data sources for large language models. | 115 |
| modeloriented/randomforestexplainer | A set of tools providing insight into the workings of ensemble machine learning models. | 230 |
| dssg/aequitas | A toolkit to audit and mitigate biases in machine learning models. | 701 |
| adebayoj/fairml | An auditing toolbox for assessing the fairness of black-box predictive models. | 361 |
| modeloriented/ingredients | Tools to assess and visualize the importance and effects of features in machine learning models. | 37 |
| modeloriented/ibreakdown | Explains predictions from machine learning models by attributing them to specific input variables and their interactions. | 82 |
| responsiblyai/responsibly | A toolkit for auditing and mitigating bias in machine learning systems. | 96 |
| modeloriented/dalex | A tool for understanding and explaining the behavior of complex machine learning models. | 1,390 |
| fairlearn/fairlearn | Assesses and mitigates unfairness in AI systems, helping developers ensure their models do not disproportionately harm certain groups of people. | 1,974 |
| modeloriented/modelstudio | Creates interactive, model-agnostic explanations of machine learning models in R. | 328 |
| ethicalml/xai | An eXplainability toolbox for machine learning that supports data analysis and model evaluation to mitigate biases and improve performance. | 1,135 |
| privacytrustlab/bias_in_fl | Investigates how bias can be introduced and spread in machine learning models during federated learning, and aims to detect and mitigate it. | 11 |
| princetonvisualai/revise-tool | Automatically detects and measures bias in visual datasets. | 111 |
| algofairness/blackboxauditing | A software package for auditing and analyzing machine learning models to detect unfair biases. | 130 |