FedRec
Attack & defense
A PyTorch implementation of an attack and defense mechanism for federated recommendation systems.
[AAAI 2023] Official PyTorch implementation of "Untargeted Attack against Federated Recommendation Systems via Poisonous Item Embeddings and the Defense"
21 stars
1 watching
1 fork
Language: Python
Last commit: almost 3 years ago
Topics: federated-recommendation, model-poisoning-attack
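To make the title concrete, below is a minimal, self-contained sketch of one federated aggregation round in which a malicious client uploads a poisoned item-embedding update. This is an illustrative assumption, not this repo's code: every name in it (`NUM_ITEMS`, `benign_update`, `poisonous_update`, the collapse-to-centroid step, the norm-clipping baseline) is hypothetical, and the paper's actual attack and defense are more involved.

```python
import torch

# Hedged sketch, not the repo's actual code: one simulated federated
# aggregation round over a recommender's item-embedding table.
torch.manual_seed(0)
NUM_ITEMS, EMB_DIM, NUM_CLIENTS = 1000, 32, 10  # illustrative sizes

global_item_emb = torch.randn(NUM_ITEMS, EMB_DIM)

def benign_update(emb: torch.Tensor) -> torch.Tensor:
    """Stand-in for an honest client's local training step."""
    return 0.01 * torch.randn_like(emb)

def poisonous_update(emb: torch.Tensor, scale: float = 5.0) -> torch.Tensor:
    """Untargeted poisoning: pull every item embedding toward the global
    centroid so items become hard to tell apart, degrading ranking quality.
    (A simplified stand-in for the paper's attack, which is more involved.)"""
    centroid = emb.mean(dim=0, keepdim=True)
    return scale * (centroid - emb)

updates = [benign_update(global_item_emb) for _ in range(NUM_CLIENTS - 1)]
updates.append(poisonous_update(global_item_emb))  # one malicious client

# Plain FedAvg is vulnerable: the scaled malicious update dominates the mean.
fedavg = torch.stack(updates).mean(dim=0)

# Generic norm-clipping baseline (the repo's defense works differently).
clip = 1.0
clipped = [u * torch.clamp(clip / (u.norm() + 1e-12), max=1.0) for u in updates]
robust = torch.stack(clipped).mean(dim=0)

print(f"FedAvg update norm: {fedavg.norm():.2f}, clipped: {robust.norm():.2f}")
```

The only point of the sketch is that plain federated averaging lets a single scaled malicious update dominate the aggregate, which is why the repo pairs the attack with a dedicated defense.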
Related projects:
| Repository | Description | Stars |
|---|---|---|
| | A defense mechanism against model poisoning attacks in federated learning | 37 |
| | An implementation of a federated news recommendation system vulnerable to untargeted attacks | 19 |
| | A PyTorch implementation of an attack-tolerant federated learning system that trains local models robust to adversarial attacks | 10 |
| | A PyTorch framework for analyzing vulnerabilities in federated learning models and predicting data breaches | 274 |
| | An implementation of a game-theoretic defense against backdoor attacks in federated learning | 6 |
| | An implementation of a defense against model inversion attacks in federated learning | 55 |
| | A tool for demonstrating and analyzing backdoor attacks on federated learning systems | 179 |
| | An implementation of Federated Robustness Propagation in PyTorch to share robustness across heterogeneous federated learning users | 26 |
| | A backdoor defense system for federated learning, designed to protect against data poisoning attacks by isolating subspace training and aggregating models with robust consensus fusion | 18 |
| | A framework for attacking federated learning systems with adaptive backdoor attacks | 23 |
| | A PyTorch implementation of various adversarial attack techniques against convolutional neural networks | 354 |
| | A PyTorch implementation of an adversarial patch system to defend against image attacks | 208 |
| | An attack on federated learning systems that compromises their privacy-preserving mechanisms | 8 |
| | A toolkit for federated learning with a focus on robustness to data heterogeneity | 40 |
| | An implementation of model poisoning attacks in federated learning | 146 |