Fair-LLM-Benchmark

Bias datasets

Compiles bias evaluation datasets for large language models and provides access to the original data sources

GitHub

110 stars
4 watching
9 forks
Language: Python
last commit: about 1 year ago

Related projects:

| Repository | Description | Stars |
| --- | --- | --- |
| dssg/aequitas | Toolkit to audit and mitigate biases in machine learning models | 694 |
| freedomintelligence/mllm-bench | Evaluates and compares the performance of multimodal large language models on various tasks | 55 |
| damo-nlp-sg/m3exam | A benchmark for evaluating large language models in multiple languages and formats | 92 |
| privacytrustlab/bias_in_fl | Analyzes bias propagation in federated learning algorithms to improve group fairness and robustness | 11 |
| mlgroupjlu/llm-eval-survey | A repository of papers and resources for evaluating large language models | 1,433 |
| aifeg/benchlmm | An open-source benchmarking framework for evaluating the cross-style visual capability of large multimodal models | 83 |
| nyu-mll/bbq | A dataset and benchmarking framework for evaluating question answering models on detecting and mitigating social biases | 87 |
| mbilalzafar/fair-classification | A Python implementation of fairness mechanisms in classification models to mitigate disparate impact and mistreatment | 189 |
| qcri/llmebench | A benchmarking framework for large language models | 80 |
| modeloriented/fairmodels | A tool for detecting bias in machine learning models and mitigating it using various techniques | 86 |
| ethicalml/xai | An eXplainability toolbox for machine learning that enables data analysis and model evaluation to mitigate biases and improve performance | 1,125 |
| btschwertfeger/python-cmethods | A collection of bias correction techniques for climate data analysis | 60 |
| adebayoj/fairml | An auditing toolbox to assess the fairness of black-box predictive models | 360 |
| google/ml-fairness-gym | An open-source framework for studying long-term fairness effects in machine learning decision systems | 312 |
| cisco-open/inclusive-language | Tools and resources for identifying biased language in code and content | 21 |