BlackBoxAuditing
Bias auditor
A research software package for auditing and exploring black-box machine learning models to detect unfair biases.
130 stars
18 watching
36 forks
Language: Python
last commit: over 1 year ago

Related projects:
Repository | Description | Stars |
---|---|---|
adebayoj/fairml | An auditing toolbox to assess the fairness of black-box predictive models | 361 |
dssg/aequitas | Toolkit to audit and mitigate biases in machine learning models | 701 |
responsiblyai/responsibly | A toolkit for auditing and mitigating bias in machine learning systems | 96 |
nyu-mll/bbq | A hand-built benchmark dataset for measuring the social biases reflected in the answers of question answering models. | 92 |
algofairness/fairness-comparison | Benchmarking tools and data for evaluating fairness-aware machine learning algorithms | 159 |
privacytrustlab/bias_in_fl | Investigates how bias can be introduced and propagated in models trained with federated learning, and how to detect and mitigate it. | 11 |
ethicalml/xai | An eXplainability toolbox for machine learning that enables data analysis and model evaluation to mitigate biases and improve performance | 1,135 |
azure/counterfit | An automation tool that assesses the security of machine learning systems by bringing together various adversarial frameworks under one platform. | 818 |
modeloriented/fairmodels | A tool for detecting bias in machine learning models and mitigating it using various techniques. | 86 |
i-gallegos/fair-llm-benchmark | Compiles bias evaluation datasets and provides access to original data sources for large language models | 115 |
zimmski/go-mutesting | A mutation testing tool for Go that finds untested source code by introducing small changes and checking whether the test suite catches them. | 650 |
visionjo/facerec-bias-bfw | Provides the Balanced Faces in the Wild (BFW) dataset as a proxy for evaluating bias in facial recognition systems across demographic groups. | 47 |
privacytrustlab/ml_privacy_meter | An auditing tool to assess the privacy risks of machine learning models | 613 |
borealisai/advertorch | A toolbox for researching and evaluating robustness against attacks on machine learning models | 1,311 |
albermax/innvestigate | A toolbox to help understand neural networks' predictions by providing different analysis methods and a common interface. | 1,271 |
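
For a sense of what these auditing tools do under the hood, below is a minimal from-scratch sketch of a black-box influence audit in Python: train a model, then obscure one feature at a time and measure how much held-out accuracy drops. This is an illustration only, not the BlackBoxAuditing package API; the dataset, model choice, and mean-substitution obscuring step are assumptions picked for brevity.

```python
# Illustrative sketch only (not the BlackBoxAuditing API): estimate each feature's
# influence on a black-box model by obscuring it and measuring the accuracy drop.
# Dataset, model, and the mean-substitution obscuring step are assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train the "black box" to be audited; only its predictions are used below.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
baseline = model.score(X_test, y_test)

# Obscure each feature in turn by replacing it with its training mean,
# then record how much test accuracy degrades without it.
influence = {}
for j in range(X_test.shape[1]):
    X_obscured = X_test.copy()
    X_obscured[:, j] = X_train[:, j].mean()
    influence[j] = baseline - model.score(X_obscured, y_test)

# Features whose removal costs the most accuracy have the largest influence
# on the model's decisions, a natural starting point for a fairness audit.
for j, drop in sorted(influence.items(), key=lambda kv: -kv[1])[:5]:
    print(f"feature {j}: accuracy drop {drop:.3f}")
```

Tools such as BlackBoxAuditing and fairml refine this idea so that obscuring one feature does not leak its information through correlated proxy features; the simple sketch above ignores that subtlety.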