facerec-bias-bfw

Bias evaluator

This project provides a data proxy for evaluating bias in facial recognition systems across demographic groups.

Source code and notebooks to reproduce the experiments and benchmarks on Balanced Faces in the Wild (BFW).
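Evaluating bias across demographic groups typically means scoring face pairs with a single global decision threshold and comparing accuracy per subgroup. The sketch below is illustrative only and does not use the BFW data or this repository's API; the function name, threshold, and sample pairs are all hypothetical.

```python
# Hedged sketch: per-subgroup face-verification accuracy at one global
# threshold -- the kind of disparity check BFW-style benchmarks report.
# All names and data here are illustrative, not from the BFW dataset.

from collections import defaultdict

def subgroup_accuracy(pairs, threshold=0.5):
    """pairs: iterable of (score, is_match, subgroup) tuples.
    Returns {subgroup: accuracy} using one global decision threshold."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for score, is_match, group in pairs:
        predicted_match = score >= threshold  # same cutoff for everyone
        hits[group] += int(predicted_match == is_match)
        totals[group] += 1
    return {g: hits[g] / totals[g] for g in totals}

# Toy example: equal pair counts per subgroup, one false accept for
# "white_male" at this threshold, so accuracies diverge.
pairs = [
    (0.91, True,  "asian_female"), (0.40, False, "asian_female"),
    (0.88, True,  "white_male"),   (0.62, False, "white_male"),
]
print(subgroup_accuracy(pairs))
```

A gap between subgroup accuracies at a shared threshold is the basic signal of verification bias; per-group thresholds are one common mitigation.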

GitHub

46 stars
7 watching
9 forks
Language: Jupyter Notebook
last commit: about 2 years ago
Linked from 1 awesome list

bias, bias-mitigation, classification, computer-vision, data-analysis, data-visualization, dataset, ethnic-diversity, ethnicity-analysis, face-dataset, face-recognition, face-verification, gender-bias, machine-learning, python

Related projects:

| Repository | Description | Stars |
| --- | --- | --- |
| princetonvisualai/revise-tool | Automatically detects and measures bias in visual datasets. | 111 |
| ecmwf-projects/ibicus | Provides tools and methods for bias correction of climate models and evaluation. | 51 |
| rudinger/winogender-schemas | A dataset of manually crafted sentence templates for evaluating gender bias in natural language processing systems. | 68 |
| datamllab/mitigating_gender_bias_in_captioning_system | An investigation into bias in image captioning systems, with a dataset and a new model design to mitigate it. | 13 |
| i-gallegos/fair-llm-benchmark | Compiles bias evaluation datasets and provides access to original data sources for large language models. | 110 |
| nyu-mll/bbq | A dataset and benchmarking framework for evaluating question answering models on detecting and mitigating social biases. | 87 |
| modeloriented/fairmodels | A tool for detecting bias in machine learning models and mitigating it using various techniques. | 86 |
| dssg/aequitas | Toolkit to audit and mitigate biases in machine learning models. | 694 |
| btschwertfeger/python-cmethods | A collection of bias correction techniques for climate data analysis. | 60 |
| jaimeivancervantes/facerecognition | Implements face recognition techniques using PCA, Fisher's LDA, and SVMs. | 39 |
| responsiblyai/responsibly | A toolkit for auditing and mitigating bias in machine learning systems. | 94 |
| algofairness/blackboxauditing | A software package for auditing and analyzing machine learning models to detect unfair biases. | 130 |
| privacytrustlab/bias_in_fl | Analyzes bias propagation in federated learning algorithms to improve group fairness and robustness. | 11 |
| fwang91/imdb-face | A large-scale noise-controlled face recognition dataset designed to study the impact of data noise on recognition accuracy. | 431 |
| aofdev/vue-pwa-rekognition | An application that uses Vue 2 and Amazon Rekognition to analyze faces in images stored on S3. | 41 |