# bias_in_FL
Bias detection tool
This project investigates how bias can be introduced and propagated across clients during federated learning, and provides tooling to detect and mitigate it.
This is the code repository for the paper "Bias Propagation in Federated Learning", accepted at the International Conference on Learning Representations (ICLR) 2023.
Related projects:
| Repository | Description | Stars |
|---|---|---|
| nyu-mll/bbq | A dataset and benchmark for measuring the social biases exhibited by question answering models | 92 |
| privacytrustlab/ml_privacy_meter | An auditing tool to assess the privacy risks of machine learning models | 613 |
| dssg/aequitas | A toolkit to audit and mitigate bias in machine learning models | 701 |
| i-gallegos/fair-llm-benchmark | A compilation of bias evaluation datasets for large language models, with links to the original data sources | 115 |
| algofairness/blackboxauditing | A software package for auditing and analyzing machine learning models to detect unfair biases | 130 |
| ibm/fl-arbitrary-participation | Analysis of federated learning with arbitrary client participation, evaluated across several optimization strategies and datasets | 4 |
| dcalab-unipv/turning-privacy-preserving-mechanisms-against-federated-learning | An attack that turns privacy-preserving mechanisms against federated learning systems to compromise client privacy | 8 |
| ethicalml/xai | An eXplainability toolbox for machine learning that supports data analysis and model evaluation to mitigate bias and improve performance | 1,135 |
| tsingz0/dbe | An implementation of a federated learning method that reduces domain bias in representation space, improving knowledge transfer between clients and the server | 22 |
| litian96/fair_flearn | Develops and evaluates algorithms for fair resource allocation in federated learning, aiming for more uniform performance across clients | 244 |
| mbilalzafar/fair-classification | A Python implementation of fairness mechanisms for classification models to mitigate disparate impact and disparate mistreatment | 190 |
| modeloriented/fairmodels | A tool for detecting bias in machine learning models and mitigating it with various techniques | 86 |
| btschwertfeger/python-cmethods | A collection of bias correction techniques for climate data analysis | 60 |
| jonasgeiping/breaching | A PyTorch framework for attacks against privacy in federated learning, such as gradient inversion of shared model updates | 274 |
| wizard1203/vhl | A federated learning toolkit focused on defending against data heterogeneity | 40 |