ModelPoisoning
Model Poisoning Attack Library
An implementation of model poisoning attacks in federated learning.
Code for "Analyzing Federated Learning through an Adversarial Lens" (https://arxiv.org/abs/1811.12470).
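The attack studied in the paper is explicit boosting: a malicious client scales its crafted update by roughly the number of participants so that it survives the server's averaging step. Below is a minimal, illustrative NumPy sketch of that idea; the names (`fedavg_round`, `local_update`, `boost_factor`) are placeholders for this sketch, not this repository's API.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights):
    # Stand-in for a benign client's local training: a small weight delta.
    return -0.01 * rng.standard_normal(weights.shape)

def fedavg_round(weights, num_clients=10, malicious_id=0, boost_factor=10.0):
    # One round of federated averaging with a single malicious client.
    updates = []
    for cid in range(num_clients):
        delta = local_update(weights)
        if cid == malicious_id:
            # Explicit boosting: the adversary scales its update (random
            # here for illustration; targeted in the paper) by roughly
            # num_clients so server-side averaging does not wash it out.
            delta = boost_factor * delta
        updates.append(delta)
    return weights + np.mean(updates, axis=0)

weights = np.zeros(100)
for _ in range(5):
    weights = fedavg_round(weights)
```

In the paper the boosted delta is not noise: it is optimized toward a targeted misclassification while staying close to benign updates so the attack is harder to detect.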
146 stars · 6 watching · 37 forks
Language: Python
Last commit: about 2 years ago

Related projects:
Repository | Description | Stars |
--- | --- | --- |
jeremy313/fl-wbc | A defense mechanism against model poisoning attacks in federated learning | 37 |
lhfowl/robbing_the_fed | This implementation allows an attacker to directly obtain user data from federated learning gradient updates by modifying the shared model architecture. | 23 |
yflyl613/fedrec | A PyTorch implementation of an attack on federated recommendation systems, together with a corresponding defense | 21 |
jonasgeiping/breaching | A PyTorch framework for privacy attacks on federated learning that reconstruct user data from shared model updates | 269 |
ai-secure/dba | An implementation of distributed backdoor attacks (DBA), demonstrating and analyzing how backdoors can be introduced into federated learning models | 176 |
jind11/textfooler | A tool for generating adversarial examples to attack text classification and inference models | 494 |
junyizhu-ai/surrogate_model_extension | A framework for analyzing and exploiting vulnerabilities in federated learning models using surrogate model attacks | 9 |
hfzhang31/a3fl | A framework for attacking federated learning systems with adaptive backdoor attacks | 22 |
eth-sri/bayes-framework-leakage | Develops and evaluates a Bayesian framework for gradient leakage attacks on federated learning systems | 11 |
ftramer/steal-ml | An implementation of model extraction attacks against machine learning models offered by cloud-based services | 344 |
royson/fedl2p | A federated meta-learning approach that learns a personalization strategy for each client | 19 |
sliencerx/learning-to-attack-federated-learning | A framework for learning to attack federated learning systems | 15 |
utkuozbulak/adaptive-segmentation-mask-attack | An implementation of an adversarial example generation method for deep learning segmentation models. | 57 |
dcalab-unipv/turning-privacy-preserving-mechanisms-against-federated-learning | An attack that turns privacy-preserving mechanisms against federated learning systems | 8 |
ebagdasa/backdoor_federated_learning | A framework for injecting backdoors into federated learning, allowing researchers to test and analyze attacks on distributed machine learning models | 271 |