fairml

Model auditor

An auditing toolbox to assess the fairness of black-box predictive models

GitHub

360 stars
19 watching
72 forks
Language: Python
Last commit: over 3 years ago
Linked from 3 awesome lists

Tags: auditing-predictive-models, discrimination, fairness, model-criticism, prediction-model, toolbox

Related projects:

| Repository | Description | Stars |
| --- | --- | --- |
| algofairness/blackboxauditing | A software package for auditing and analyzing machine learning models to detect unfair biases | 130 |
| dssg/aequitas | Toolkit to audit and mitigate biases in machine learning models | 694 |
| responsiblyai/responsibly | A toolkit for auditing and mitigating bias in machine learning systems | 94 |
| mbilalzafar/fair-classification | A Python implementation of fairness mechanisms for classification models to mitigate disparate impact and disparate mistreatment | 189 |
| ethicalml/xai | An explainability toolbox for machine learning that enables data analysis and model evaluation to mitigate biases and improve performance | 1,125 |
| i-gallegos/fair-llm-benchmark | Compiles bias evaluation datasets and provides access to original data sources for large language models | 110 |
| fairlearn/fairlearn | A Python package to assess and improve the fairness of machine learning models | 1,948 |
| modeloriented/fairmodels | A tool for detecting bias in machine learning models and mitigating it using various techniques | 86 |
| klen/pylama | Automates code quality checks for Python programs | 1,050 |
| openbmb/bmlist | A curated list of large machine learning models tracked over time | 341 |
| freedomintelligence/mllm-bench | Evaluates and compares the performance of multimodal large language models on various tasks | 55 |
| aifeg/benchlmm | An open-source benchmarking framework for evaluating the cross-style visual capability of large multimodal models | 83 |
| cgnorthcutt/cleanlab | A tool for finding and fixing label errors in machine learning datasets | 57 |
| seedatnabeel/triage | A framework for auditing and improving regression models by analyzing their training data | 8 |
| borealisai/advertorch | A toolbox for researching and evaluating robustness against attacks on machine learning models | 1,308 |