BlackBoxAuditing

A bias auditor: research code for auditing and exploring black-box machine-learning models to detect unfair biases.
130 stars
18 watching
36 forks
Language: Python
Last commit: over 1 year ago

Related projects:
| Repository | Description | Stars |
|---|---|---|
| adebayoj/fairml | An auditing toolbox to assess the fairness of black-box predictive models | 361 |
| dssg/aequitas | A toolkit to audit and mitigate biases in machine learning models | 694 |
| responsiblyai/responsibly | A toolkit for auditing and mitigating bias in machine learning systems | 94 |
| nyu-mll/bbq | A dataset and benchmark for evaluating social biases in question answering models | 87 |
| algofairness/fairness-comparison | Benchmarking tools and data for evaluating fairness-aware machine learning algorithms | 159 |
| privacytrustlab/bias_in_fl | Analysis of bias propagation in federated learning to improve group fairness and robustness | 11 |
| ethicalml/xai | An eXplainability toolbox for machine learning that enables data analysis and model evaluation to mitigate biases and improve performance | 1,125 |
| azure/counterfit | An automation tool that assesses the security of machine learning systems by unifying multiple adversarial frameworks in one platform | 806 |
| modeloriented/fairmodels | A tool for detecting bias in machine learning models and mitigating it with various techniques | 86 |
| i-gallegos/fair-llm-benchmark | Compiles bias evaluation datasets and original data sources for large language models | 110 |
| zimmski/go-mutesting | A mutation testing tool that finds untested source code by introducing small changes and checking the resulting behavior | 643 |
| visionjo/facerec-bias-bfw | A data proxy for evaluating bias in facial recognition systems across demographic groups | 46 |
| privacytrustlab/ml_privacy_meter | An auditing tool to assess the privacy risks of machine learning models | 604 |
| borealisai/advertorch | A toolbox for researching and evaluating the robustness of machine learning models against adversarial attacks | 1,308 |
| albermax/innvestigate | A toolbox for understanding neural network predictions, offering multiple analysis methods behind a common interface | 1,268 |
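To make the purpose of these auditors concrete, here is a minimal sketch of the kind of disparate-impact check such tools automate. This is an illustrative, dependency-free example, not the BlackBoxAuditing API; the function name and data are hypothetical.

```python
# Illustrative sketch (not the BlackBoxAuditing API): a minimal
# disparate-impact check of the kind bias-auditing tools automate.

def disparate_impact(predictions, groups, favorable=1):
    """Ratio of favorable-outcome rates between groups.

    The "80% rule" commonly flags a model when this ratio falls below 0.8.
    """
    rates = {}
    for g in set(groups):
        outcomes = [p for p, gg in zip(predictions, groups) if gg == g]
        rates[g] = sum(1 for p in outcomes if p == favorable) / len(outcomes)
    lo, hi = min(rates.values()), max(rates.values())
    return lo / hi if hi else 1.0

# Toy data: group "b" receives favorable outcomes far less often than "a".
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(round(disparate_impact(preds, groups), 2))  # prints 0.33 (0.25 / 0.75)
```

A full auditor such as BlackBoxAuditing goes further, probing the trained model by perturbing or obscuring features to estimate each feature's influence on such disparities.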