A3FL

Federated Learning Attack

A framework for mounting adaptive backdoor attacks against federated learning systems

23 stars · 1 watching · 4 forks · Language: Python · Last commit: over 1 year ago
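To make the setting concrete, below is a minimal sketch of the basic backdoor-poisoning step that attacks in this family build on: a malicious client stamps a fixed trigger patch onto part of its local training data, flips those labels to a target class, and trains as usual before sending its update to the server. This is an illustrative assumption, not the repository's code; in particular, the adaptive trigger optimization that gives A3FL its name is omitted, and all names and parameters (`TARGET_CLASS`, `POISON_FRACTION`, the 4x4 patch) are hypothetical.

```python
# Illustrative sketch of a backdoor-poisoned local update in federated
# learning (PyTorch). Not A3FL's implementation; all names are assumptions.
import torch

TARGET_CLASS = 0       # attacker-chosen target label (assumption)
POISON_FRACTION = 0.5  # fraction of each batch to poison (assumption)

def stamp_trigger(images: torch.Tensor) -> torch.Tensor:
    """Overwrite a small corner patch with a bright trigger pattern."""
    images = images.clone()
    images[:, :, -4:, -4:] = 1.0  # 4x4 white patch, bottom-right corner
    return images

def poison_batch(images: torch.Tensor, labels: torch.Tensor):
    """Stamp the trigger on the first POISON_FRACTION of the batch and
    relabel those examples to the attacker's target class."""
    k = int(len(images) * POISON_FRACTION)
    images, labels = images.clone(), labels.clone()
    images[:k] = stamp_trigger(images[:k])
    labels[:k] = TARGET_CLASS
    return images, labels

def malicious_local_update(model, loader, epochs=1, lr=0.01):
    """Run ordinary local SGD on trigger-poisoned batches and return the
    weight delta the client would report to the federated server."""
    initial = {k: v.clone() for k, v in model.state_dict().items()}
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = poison_batch(images, labels)
            opt.zero_grad()
            loss_fn(model(images), labels).backward()
            opt.step()
    return {k: model.state_dict()[k] - initial[k] for k in initial}
```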

Related projects:

| Repository | Description | Stars |
|---|---|---|
| ebagdasa/backdoor_federated_learning | An implementation of backdoor attacks on federated learning, built with Python and PyTorch. | 277 |
| sliencerx/learning-to-attack-federated-learning | A framework for learning how to attack federated learning systems. | 15 |
| haozzh/fedcr | Evaluates various federated learning methods across different models and tasks. | 19 |
| ai-secure/crfl | A framework for robust federated learning against backdoor attacks. | 71 |
| ai-secure/dba | A tool for demonstrating and analyzing attacks on federated learning systems by introducing backdoors into distributed machine learning models. | 179 |
| zhuohangli/ggl | An attack that measures the effectiveness of federated learning privacy defenses by generating leakage from gradients. | 58 |
| ksreenivasan/ood_federated_learning | Investigates vulnerabilities in federated learning systems by introducing new backdoor attacks and exploring defenses against them. | 66 |
| deu30303/feddefender | A PyTorch implementation of an attack-tolerant federated learning system that trains robust local models against malicious adversaries. | 10 |
| fangxiuwen/robust_fl | A robust federated learning framework for handling noisy and heterogeneous clients. | 43 |
| jeremy313/fl-wbc | A defense mechanism against model poisoning attacks in federated learning. | 37 |
| xiyuanyang45/dynamicpfl | A method for personalizing models in federated learning with adaptive differential privacy to improve performance and robustness. | 57 |
| eth-sri/bayes-framework-leakage | Develops and evaluates a framework for detecting attacks on federated learning systems. | 11 |
| git-disl/lockdown | A backdoor defense for federated learning that protects against data poisoning by isolating subspace training and aggregating models with robust consensus fusion. | 18 |
| pengyang7881187/fedrl | Enables multiple agents to learn from heterogeneous environments without sharing their knowledge or data. | 56 |
| dcalab-unipv/turning-privacy-preserving-mechanisms-against-federated-learning | An attack on federated learning systems that compromises their privacy-preserving mechanisms. | 8 |