FL-WBC
Attack defense
A defense mechanism against model poisoning attacks in federated learning
Official implementation of "FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective".
37 stars
1 watching
10 forks
Language: Python
Last commit: over 3 years ago
Topics: federated-learning, neurips-2021, poisoning-attack
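The repository accompanies the NeurIPS 2021 paper, whose central idea is that a benign client can counter model poisoning during its own local training by perturbing the parameter coordinates where an attack effect could persist across rounds. Below is a minimal PyTorch sketch of that kind of client-side step; the function name, the thresholding rule, and all hyperparameters are illustrative assumptions and not the repository's actual API.

```python
import torch

def wbc_style_local_step(model, loss_fn, batch, lr=0.01,
                         noise_scale=1e-4, threshold=1e-5):
    """One local SGD step with a client-side perturbation in the spirit of FL-WBC.

    After the gradient step, coordinates whose update magnitude falls below
    `threshold` receive Laplace noise, so a poisoning effect hiding in those
    near-stationary coordinates cannot persist unchanged across rounds.
    (Illustrative sketch only; names and criteria are assumptions.)
    """
    x, y = batch
    model.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    laplace = torch.distributions.Laplace(0.0, noise_scale)
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                continue
            step = lr * p.grad
            p -= step
            # Coordinates that barely moved this step are where an injected
            # attack effect could survive; add small Laplace noise there.
            stagnant = (step.abs() < threshold).float()
            p += laplace.sample(p.shape).to(p.device) * stagnant
    return loss.item()
```

In a full federated round, each client would run this step over its local batches and send the resulting update to the server as usual; the perturbation only touches the local training loop, which is what makes it a defense "from a client perspective".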
Related projects:
| Repository | Description | Stars |
| --- | --- | --- |
| | A PyTorch implementation of an attack on, and defense of, federated recommendation systems | 21 |
| | A backdoor defense system for federated learning that protects against data poisoning attacks by isolating subspace training and aggregating models with robust consensus fusion | 18 |
| | An implementation of a defense against model inversion attacks in federated learning | 55 |
| | A framework for attacking federated learning systems with adaptive backdoor attacks | 23 |
| | A PyTorch implementation of an attack-tolerant federated learning system that trains local models robust to adversarial attacks | 10 |
| | An implementation of model poisoning attacks in federated learning | 146 |
| | An implementation of a framework for learning how to attack federated learning systems | 15 |
| | A toolkit for federated learning with a focus on defending against data heterogeneity | 40 |
| | An attack on federated learning systems that compromises their privacy-preserving mechanisms | 8 |
| | An implementation of Neurotoxin, a federated learning attack that introduces backdoors into models during training | 65 |
| | An implementation of a game-theoretic defense against backdoor attacks in federated learning | 6 |
| | An implementation of a robust federated learning method based on Shapley values that defends against data and model poisoning attacks | 19 |
| | A tool for demonstrating and analyzing attacks on federated learning systems by introducing backdoors into distributed machine learning models | 179 |
| | An implementation of a federated news recommendation system vulnerable to untargeted attacks | 19 |
| | A framework for robust federated learning against backdoor attacks | 71 |