fairmodels

A flexible tool for detecting bias in machine learning models, visualizing it, and mitigating it with various techniques.
86 stars · 7 watching · 15 forks · Language: R · Last commit: about 2 years ago

Topics: explain-classifiers, explainable-ml, fairness, fairness-comparison, fairness-ml, model-evaluation
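Since fairmodels is an R package built around DALEX explainers, a minimal usage sketch might look like the following. This assumes the DALEX and ranger packages are installed alongside fairmodels; `fairness_check()` and the bundled `adult` dataset come from the fairmodels package itself.

```r
# Minimal sketch of a fairness audit with fairmodels
# (assumes the DALEX and ranger packages are installed).
library(fairmodels)
library(DALEX)
library(ranger)

data("adult")  # census income data bundled with fairmodels

# Train a probability forest predicting whether salary exceeds 50K
rf_model <- ranger(salary ~ ., data = adult, probability = TRUE)

# Wrap the model in a DALEX explainer
explainer <- explain(rf_model,
                     data = adult[, -1],
                     y    = as.numeric(adult$salary) - 1)

# Audit fairness with respect to sex, treating "Male" as the privileged group
fobject <- fairness_check(explainer,
                          protected  = adult$sex,
                          privileged = "Male")

print(fobject)  # pass/fail summary across fairness metrics
plot(fobject)   # visualize metric ratios per subgroup
```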
Related projects:
| Repository | Description | Stars |
| --- | --- | --- |
| nyu-mll/bbq | A dataset and benchmarking framework for evaluating question answering models on social bias detection and mitigation | 87 |
| i-gallegos/fair-llm-benchmark | Compiles bias evaluation datasets and provides access to original data sources for large language models | 110 |
| modeloriented/randomforestexplainer | A set of tools providing insight into the workings of a random forest model | 230 |
| dssg/aequitas | A toolkit to audit and mitigate biases in machine learning models | 694 |
| adebayoj/fairml | An auditing toolbox for assessing the fairness of black-box predictive models | 360 |
| modeloriented/ingredients | Tools to assess and visualize the importance and effects of features in machine learning models | 37 |
| modeloriented/ibreakdown | Explains predictions from machine learning models by attributing them to input variables and their interactions | 81 |
| responsiblyai/responsibly | A toolkit for auditing and mitigating bias in machine learning systems | 94 |
| modeloriented/dalex | Helps users understand and explain the behavior of complex machine learning models | 1,375 |
| fairlearn/fairlearn | A Python package to assess and improve the fairness of machine learning models | 1,948 |
| modeloriented/modelstudio | Creates interactive, model-agnostic explanations of machine learning models in R | 326 |
| ethicalml/xai | An eXplainability toolbox for machine learning that enables data analysis and model evaluation to mitigate bias and improve performance | 1,125 |
| privacytrustlab/bias_in_fl | Analyzes bias propagation in federated learning algorithms to improve group fairness and robustness | 11 |
| princetonvisualai/revise-tool | Automatically detects and measures bias in visual datasets | 111 |
| algofairness/blackboxauditing | A software package for auditing and analyzing machine learning models to detect unfair biases | 130 |