FL-WBC

Attack defense

A defense mechanism against model poisoning attacks in federated learning

Official implementation of "FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective".
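FL-WBC's client-side defense perturbs local model parameters during training so that a poisoning effect injected by malicious clients cannot quietly persist across aggregation rounds. The sketch below illustrates that general idea only, under assumed details not taken from the repository: a plain NumPy SGD loop where coordinates that a step barely moved (a hypothetical stand-in for the paper's attack-effect criterion) receive small Laplace noise.

```python
import numpy as np

def local_train_with_perturbation(w, X, y, lr=0.1, steps=20,
                                  tol=1e-3, noise_scale=0.01, rng=None):
    """One client's local training pass with a WBC-style perturbation.

    After each SGD step, parameters whose update magnitude falls below
    `tol` (an illustrative heuristic, not FL-WBC's exact criterion) are
    perturbed with Laplace noise, on the intuition that coordinates the
    benign gradient barely moves are where a poisoned update can hide.
    """
    rng = np.random.default_rng() if rng is None else rng
    w = w.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of MSE loss
        delta = -lr * grad
        w += delta
        # Perturb the coordinates this step barely changed.
        idle = np.abs(delta) < tol
        w[idle] += rng.laplace(0.0, noise_scale, size=int(idle.sum()))
    return w
```

Because the noise scale is small, benign convergence is largely preserved while a stationary malicious perturbation is repeatedly disturbed; the actual implementation operates on PyTorch models inside the federated training loop.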

GitHub · 37 stars · 1 watching · 10 forks
Language: Python
Last commit: about 3 years ago
Tags: federated-learning, neurips-2021, poisoning-attack

Related projects:

| Repository | Description | Stars |
| --- | --- | --- |
| yflyl613/fedrec | A PyTorch implementation of an attack and defense mechanism for federated recommendation systems | 21 |
| git-disl/lockdown | A backdoor defense for federated learning algorithms that train models on distributed datasets | 14 |
| jeremy313/soteria | An implementation of a defense against model inversion attacks in federated learning | 55 |
| hfzhang31/a3fl | A framework for attacking federated learning systems with adaptive backdoor attacks | 22 |
| deu30303/feddefender | A PyTorch implementation of an attack-tolerant federated learning system that trains robust local models against malicious clients | 9 |
| inspire-group/modelpoisoning | An implementation of model poisoning attacks in federated learning | 146 |
| sliencerx/learning-to-attack-federated-learning | An implementation of a framework for learning how to attack federated learning systems | 15 |
| wizard1203/vhl | A toolkit for federated learning focused on defending against data heterogeneity | 40 |
| dcalab-unipv/turning-privacy-preserving-mechanisms-against-federated-learning | An attack that compromises the privacy-preserving mechanisms of federated learning systems | 8 |
| jhcknzzm/federated-learning-backdoor | An implementation of Neurotoxin, a federated learning attack that introduces backdoors into models during training | 63 |
| ai-secure/fedgame | An implementation of a game-theoretic defense against backdoor attacks in federated learning | 5 |
| zju-diver/shapleyfl-robust-federated-learning-based-on-shapley-value | An implementation of a robust federated learning method based on Shapley value that defends against data and model poisoning attacks | 19 |
| ai-secure/dba | A tool for demonstrating and analyzing distributed backdoor attacks on federated learning systems | 176 |
| yjw1029/ua-fedrec | An implementation of untargeted attacks on a federated news recommendation system | 17 |
| ai-secure/crfl | A framework for robust federated learning against backdoor attacks | 71 |