responsibly

Bias auditor

A toolkit for auditing and mitigating bias in machine learning systems


GitHub

94 stars
6 watching
21 forks
Language: Python
Last commit: about 1 year ago
Topics: artificial-intelligence, audit, bias, bias-correction, bias-finder, bias-reduction, data-science, ethics, fairness, fairness-ai, fairness-awareness-model, fairness-ml, fairness-testing, machine-bias, machine-learning, natural-language-processing, python

Related projects:

| Repository | Description | Stars |
| --- | --- | --- |
| dssg/aequitas | Toolkit to audit and mitigate biases in machine learning models | 694 |
| algofairness/blackboxauditing | A software package for auditing and analyzing machine learning models to detect unfair biases | 130 |
| ethicalml/xai | An eXplainability toolbox for machine learning that enables data analysis and model evaluation to mitigate biases and improve performance | 1,125 |
| adebayoj/fairml | An auditing toolbox to assess the fairness of black-box predictive models | 361 |
| modeloriented/fairmodels | A tool for detecting bias in machine learning models and mitigating it using various techniques | 86 |
| jphall663/responsible_xai | Guidelines and resources for the development of responsible AI systems | 17 |
| fairlearn/fairlearn | A Python package to assess and improve the fairness of machine learning models | 1,948 |
| nyu-mll/bbq | A dataset and benchmarking framework to evaluate question answering models on detecting and mitigating social biases | 87 |
| princetonvisualai/revise-tool | Automatically detects and measures bias in visual datasets | 111 |
| borealisai/advertorch | A toolbox for researching and evaluating robustness against attacks on machine learning models | 1,308 |
| koaning/scikit-fairness | A Python library providing tools and algorithms for fairness in machine learning model development | 29 |
| andreysharapov/xaience | An online repository providing resources and information on explainable AI, algorithmic fairness, ML security, and related topics | 107 |
| algofairness/fairness-comparison | An online repository providing benchmarking tools and data for evaluating fairness-aware machine learning algorithms | 159 |
| visionjo/facerec-bias-bfw | A data proxy to evaluate bias in facial recognition systems across demographic groups | 46 |
| i-gallegos/fair-llm-benchmark | Compiles bias evaluation datasets and provides access to original data sources for large language models | 110 |