BlackBoxAuditing
Bias auditor
A software package for auditing and analyzing machine learning models to detect unfair biases
Research code for auditing and exploring black-box machine-learning models (a minimal sketch of the general idea follows the related-projects table below).
130 stars
18 watching
36 forks
Language: Python
Last commit: over 1 year ago

Related projects:
Repository | Description | Stars |
---|---|---|
adebayoj/fairml | An auditing toolbox to assess the fairness of black-box predictive models | 361 |
dssg/aequitas | A toolkit to audit and mitigate biases in machine learning models | 696 |
responsiblyai/responsibly | A toolkit for auditing and mitigating bias in machine learning systems | 95 |
nyu-mll/bbq | A dataset and benchmarking framework for measuring social biases in the outputs of question-answering models | 91 |
algofairness/fairness-comparison | An online repository providing benchmarking tools and data for evaluating fairness-aware machine learning algorithms | 159 |
privacytrustlab/bias_in_fl | Code for analyzing and mitigating bias propagation in federated learning algorithms to improve group fairness | 11 |
ethicalml/xai | An eXplainability toolbox for machine learning that enables data analysis and model evaluation to mitigate biases and improve performance | 1,129 |
azure/counterfit | An automation tool that assesses the security of machine learning systems by bringing together various adversarial frameworks under one platform. | 809 |
modeloriented/fairmodels | A tool for detecting bias in machine learning models and mitigating it using various techniques. | 86 |
i-gallegos/fair-llm-benchmark | Compiles bias evaluation datasets and provides access to original data sources for large language models | 114 |
zimmski/go-mutesting | A tool to detect untested parts of source code by introducing small changes and testing the resulting behavior. | 649 |
visionjo/facerec-bias-bfw | Provides a dataset that serves as a proxy for evaluating bias in facial recognition systems across demographic groups | 46 |
privacytrustlab/ml_privacy_meter | An auditing tool to assess the privacy risks of machine learning models | 612 |
borealisai/advertorch | A toolbox for researching and evaluating robustness against attacks on machine learning models | 1,310 |
albermax/innvestigate | A toolbox to help understand neural networks' predictions by providing different analysis methods and a common interface. | 1,269 |
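
For orientation, the sketch below illustrates the basic black-box auditing idea the repository description refers to: obscure one feature at a time and measure how much the model's held-out accuracy drops. It is a minimal approximation using scikit-learn with a stand-in dataset and a simple column shuffle; the dataset, model choice, and shuffling step are assumptions for illustration, not the BlackBoxAuditing package's own (more gradual) obscuring procedure or API.

```python
# Minimal, illustrative perturbation audit (not the BlackBoxAuditing API):
# train a model, then shuffle one feature column at a time in the test set
# and record how much held-out accuracy drops. A large drop suggests the
# black-box model relies on that feature, directly or through proxies.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)   # stand-in dataset (assumption)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
baseline = accuracy_score(y_test, model.predict(X_test))

rng = np.random.default_rng(0)
influence = {}
for j in range(X_test.shape[1]):
    obscured = X_test.copy()
    rng.shuffle(obscured[:, j])               # break feature j's link to the labels
    influence[j] = baseline - accuracy_score(y_test, model.predict(obscured))

# Rank features by how much the model's accuracy depends on them.
for j, drop in sorted(influence.items(), key=lambda kv: kv[1], reverse=True)[:5]:
    print(f"feature {j}: accuracy drop {drop:.3f}")
```

Features whose obscuring costs the most accuracy are the ones the audited model leans on; a fairness audit would then compare that influence ranking against protected or proxy attributes.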