bias_in_FL
Bias analysis tool
Analyzing bias propagation in federated learning algorithms to improve group fairness and robustness
This is the code repository for the paper "Bias Propagation in Federated Learning," accepted at the International Conference on Learning Representations (ICLR) 2023.
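The paper studies how bias at individual clients can propagate into the global model through aggregation. As a rough illustration of the kind of measurement involved, here is a minimal sketch, assuming FedAvg-style weighted averaging and a simple per-group accuracy gap as the fairness metric; the function names and the synthetic setup are illustrative and not taken from this repository.

```python
import numpy as np

def group_accuracy_gap(y_true, y_pred, groups):
    """Gap between best- and worst-off group accuracy (illustrative metric)."""
    accs = [(y_pred[groups == g] == y_true[groups == g]).mean()
            for g in np.unique(groups)]
    return max(accs) - min(accs)

def fedavg(client_params, client_sizes):
    """FedAvg aggregation: average parameters weighted by local dataset size."""
    total = sum(client_sizes)
    return sum(p * (n / total) for p, n in zip(client_params, client_sizes))

# Illustrative use: compare the fairness gap of each local model with the
# gap of the aggregated global model on a shared held-out set.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
true_w = rng.normal(size=5)
y = (X @ true_w > 0).astype(int)
groups = rng.integers(0, 2, size=200)  # binary protected attribute

# Two "clients" with noisy local estimates of the same linear model
client_params = [true_w + rng.normal(scale=s, size=5) for s in (0.5, 2.0)]
client_sizes = [120, 80]

global_w = fedavg(client_params, client_sizes)
for name, w in [("client 0", client_params[0]),
                ("client 1", client_params[1]),
                ("global", global_w)]:
    pred = (X @ w > 0).astype(int)
    print(name, "accuracy gap:", group_accuracy_gap(y, pred, groups))
```

In an actual study one would train client models on heterogeneous local data and track how the group-level gap changes across communication rounds; this toy snippet only shows where the two measurements plug in.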
11 stars
2 watching
2 forks
Language: Python
Last commit: over 1 year ago

Related projects:
Repository | Description | Stars |
---|---|---|
nyu-mll/bbq | A dataset and benchmarking framework for measuring social biases in the outputs of question answering models. | 87 |
privacytrustlab/ml_privacy_meter | An auditing tool to assess the privacy risks of machine learning models. | 604 |
dssg/aequitas | Toolkit to audit and mitigate biases in machine learning models. | 694 |
i-gallegos/fair-llm-benchmark | Compiles bias evaluation datasets for large language models and provides access to the original data sources. | 110 |
algofairness/blackboxauditing | A software package for auditing and analyzing machine learning models to detect unfair biases. | 130 |
ibm/fl-arbitrary-participation | Analyzes federated learning with arbitrary client participation, covering various optimization strategies and datasets. | 4 |
dcalab-unipv/turning-privacy-preserving-mechanisms-against-federated-learning | Presents an attack on federated learning systems that exploits their privacy-preserving mechanisms. | 8 |
ethicalml/xai | An eXplainability toolbox for machine learning that enables data analysis and model evaluation to mitigate biases and improve performance. | 1,125 |
tsingz0/dbe | Implements a federated learning method that reduces domain bias in representation space, enabling more efficient knowledge transfer between clients and the server. | 22 |
litian96/fair_flearn | Develops and evaluates algorithms for fair resource allocation in federated learning, aiming to promote more inclusive AI systems (see the reweighting sketch after this table). | 243 |
mbilalzafar/fair-classification | Provides a Python implementation of fairness mechanisms for classification models that mitigate disparate impact and disparate mistreatment. | 189 |
modeloriented/fairmodels | A tool for detecting bias in machine learning models and mitigating it using various techniques. | 86 |
btschwertfeger/python-cmethods | A collection of bias correction techniques for climate data analysis. | 60 |
jonasgeiping/breaching | A PyTorch framework for analyzing vulnerabilities in federated learning via privacy attacks that attempt to reconstruct user data from shared updates. | 269 |
wizard1203/vhl | A toolkit for federated learning focused on defending against data heterogeneity. | 40 |
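As a companion to the litian96/fair_flearn entry above, here is a minimal sketch of the q-FFL-style reweighting idea that repository builds on: clients with higher loss are given more aggregation weight, with a hyperparameter q trading average performance against uniformity. The exact update in that repository differs, and the names below are illustrative, not its API.

```python
import numpy as np

def qffl_weights(client_losses, q=1.0):
    """Simplified q-FFL-style aggregation weights.

    Clients with higher current loss receive proportionally more weight
    (loss ** q). q = 0 gives uniform weights; larger q pushes the global
    model toward more uniform performance across clients.
    """
    w = np.asarray(client_losses, dtype=float) ** q
    return w / w.sum()

# Example: the struggling client (loss 2.0) dominates the update as q grows.
for q in (0.0, 1.0, 5.0):
    print(q, qffl_weights([0.5, 1.0, 2.0], q=q))
```

The actual q-FFL objective reweights each client's loss as roughly F_k^{q+1}/(q+1) and scales model updates accordingly; this snippet keeps only the core intuition that underperforming clients get amplified.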