FedRec

A PyTorch implementation of attack and defense mechanisms for federated recommendation systems.

[AAAI 2023] Official PyTorch implementation of "Untargeted Attack against Federated Recommendation Systems via Poisonous Item Embeddings and the Defense"


21 stars · 1 watching · 1 fork
Language: Python
Last commit: about 2 years ago
Topics: federated-recommendation, model-poisoning-attack
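In this setting, malicious clients in a federated recommendation system upload poisoned item-embedding updates to degrade everyone's recommendations without targeting any specific item. The snippet below is a minimal, hypothetical NumPy sketch of that general idea, not the paper's actual attack or defense: the invented `malicious_update` collapses all item embeddings toward their centroid, and the invented `defense_filter` drops updates with outlying norms before averaging.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, dim = 50, 8
global_emb = rng.normal(size=(n_items, dim))  # server-side item embedding table

def benign_update(emb, noise=0.01):
    # honest clients send a small, well-behaved embedding update
    return rng.normal(scale=noise, size=emb.shape)

def malicious_update(emb, scale=5.0):
    # untargeted poisoning (illustrative): push every item embedding
    # toward a single point so that items become indistinguishable
    centroid = emb.mean(axis=0, keepdims=True)
    return scale * (centroid - emb)

def fedavg(emb, updates):
    # plain federated averaging of client updates
    return emb + np.mean(updates, axis=0)

def defense_filter(updates, thresh=3.0):
    # simple norm-based filter (illustrative): discard updates whose
    # norm is far larger than the median client update norm
    norms = np.array([np.linalg.norm(u) for u in updates])
    med = np.median(norms)
    return [u for u, n in zip(updates, norms) if n <= thresh * med]

def spread(emb):
    # average per-dimension std: a proxy for item-embedding diversity
    return emb.std(axis=0).mean()

benign = [benign_update(global_emb) for _ in range(9)]
attack = [malicious_update(global_emb)]

poisoned = fedavg(global_emb, benign + attack)
defended = fedavg(global_emb, defense_filter(benign + attack))

print(f"clean spread:    {spread(global_emb):.3f}")
print(f"poisoned spread: {spread(poisoned):.3f}")   # lower: embeddings collapse
print(f"defended spread: {spread(defended):.3f}")   # close to clean
```

With one attacker among ten clients, plain averaging noticeably shrinks the embedding spread, while the norm filter removes the oversized poisoned update and keeps the aggregate close to the clean model.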

Related projects:

| Repository | Description | Stars |
| --- | --- | --- |
| jeremy313/fl-wbc | A defense mechanism against model poisoning attacks in federated learning | 37 |
| yjw1029/ua-fedrec | An implementation of a federated news recommendation system vulnerable to untargeted attacks | 19 |
| deu30303/feddefender | A PyTorch implementation of an attack-tolerant federated learning system that trains robust local models against malicious adversaries | 10 |
| jonasgeiping/breaching | A PyTorch framework for analyzing privacy vulnerabilities of federated learning models, such as data reconstruction from shared gradients | 274 |
| ai-secure/fedgame | An implementation of a game-theoretic defense against backdoor attacks in federated learning | 6 |
| jeremy313/soteria | An implementation of a defense against model inversion attacks in federated learning | 55 |
| ai-secure/dba | A tool for demonstrating and analyzing distributed backdoor attacks on federated learning systems | 179 |
| illidanlab/fedrbn | A PyTorch implementation of federated robustness propagation, which shares adversarial robustness across heterogeneous federated learning users | 26 |
| git-disl/lockdown | A backdoor defense for federated learning that counters data poisoning via isolated subspace training and robust consensus fusion during aggregation | 18 |
| hfzhang31/a3fl | A framework for attacking federated learning systems with adaptive backdoor attacks | 23 |
| utkuozbulak/pytorch-cnn-adversarial-attacks | PyTorch implementations of adversarial attack techniques against convolutional neural networks | 354 |
| jhayes14/adversarial-patch | A PyTorch implementation of adversarial patch attacks on image models | 208 |
| dcalab-unipv/turning-privacy-preserving-mechanisms-against-federated-learning | An attack that turns the privacy-preserving mechanisms of federated learning systems against them | 8 |
| wizard1203/vhl | A federated learning toolkit focused on mitigating data heterogeneity | 40 |
| inspire-group/modelpoisoning | An implementation of model poisoning attacks in federated learning | 146 |