ModelPoisoning

Model Poisoning Attack Library

An implementation of model poisoning attacks in federated learning.

Code for the paper "Analyzing Federated Learning through an Adversarial Lens" (https://arxiv.org/abs/1811.12470).


146 stars · 6 watching · 37 forks
Language: Python
Last commit: over 2 years ago
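The core technique studied in the paper is explicit boosting: a single malicious client scales its adversarial update so that the server's averaging step does not dilute it. The following is a minimal sketch of that idea in plain numpy, not this repository's actual API; the names (`fedavg`, `benign_update`, `malicious_update`) and the choice of boost factor equal to the number of clients are illustrative assumptions.

```python
# Toy sketch of explicit-boosting model poisoning against FedAvg.
# Assumptions: flat parameter vectors as updates, one malicious client,
# boost factor = number of clients. Not this repository's code.
import numpy as np

NUM_CLIENTS = 10
DIM = 5  # toy model with 5 parameters

def fedavg(updates):
    """Server step: average the clients' parameter updates."""
    return np.mean(updates, axis=0)

def benign_update(rng):
    """Stand-in for an honest client's local training step."""
    return rng.normal(0.0, 0.1, DIM)

def malicious_update(target_update, boost=NUM_CLIENTS):
    """Explicit boosting: scale the adversarial update by the number of
    clients so that averaging does not wash it out."""
    return boost * target_update

rng = np.random.default_rng(0)
target = np.ones(DIM)  # direction the attacker wants the global model to move

updates = [benign_update(rng) for _ in range(NUM_CLIENTS - 1)]
updates.append(malicious_update(target))

global_step = fedavg(updates)
print(global_step)  # ~= target plus small noise: the boosted update dominates
```

Because the nine benign updates are small and zero-mean, the averaged global step is approximately the attacker's target direction, which is exactly what boosting by the client count is meant to achieve.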

Related projects:

| Repository | Description | Stars |
| --- | --- | --- |
| jeremy313/fl-wbc | A defense mechanism against model poisoning attacks in federated learning | 37 |
| lhfowl/robbing_the_fed | Lets an attacker obtain user data directly from federated learning gradient updates by modifying the shared model architecture | 23 |
| yflyl613/fedrec | A PyTorch implementation of attack and defense mechanisms for federated recommendation systems | 21 |
| jonasgeiping/breaching | A PyTorch framework for analyzing the vulnerability of federated learning models to data-reconstruction (breaching) attacks | 274 |
| ai-secure/dba | Demonstrates and analyzes distributed backdoor attacks on federated learning systems | 179 |
| jind11/textfooler | A tool for generating adversarial examples that attack text classification and inference models | 496 |
| junyizhu-ai/surrogate_model_extension | A framework for analyzing and exploiting vulnerabilities in federated learning models using surrogate model attacks | 9 |
| hfzhang31/a3fl | A framework for attacking federated learning systems with adaptive backdoor attacks | 23 |
| eth-sri/bayes-framework-leakage | Develops and evaluates a framework for detecting attacks on federated learning systems | 11 |
| ftramer/steal-ml | A tool for extracting machine learning models from cloud-based services via their prediction APIs | 344 |
| royson/fedl2p | Enables personalized federated learning by collaboratively learning the best personalization strategy for each client | 19 |
| sliencerx/learning-to-attack-federated-learning | An implementation of a framework for learning how to attack federated learning systems | 15 |
| utkuozbulak/adaptive-segmentation-mask-attack | An adversarial example generation method for deep learning segmentation models | 58 |
| dcalab-unipv/turning-privacy-preserving-mechanisms-against-federated-learning | An attack that turns privacy-preserving mechanisms against federated learning systems | 8 |
| ebagdasa/backdoor_federated_learning | An implementation of backdoor attacks on federated learning in Python and PyTorch | 277 |