robbing_the_fed
Model inversion attack
This implementation lets a malicious server recover user data directly from federated learning gradient updates by modifying the shared model architecture.
23 stars
2 watching
5 forks
Language: Python
last commit: almost 3 years ago
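The key observation behind such modified-architecture attacks is that a linear layer's gradients encode its input analytically: for `y = Wx + b`, we have `dL/dW = (dL/dy) xᵀ` and `dL/db = dL/dy`, so any row with a nonzero bias gradient divides out to the exact input. A minimal PyTorch sketch of this principle (illustrative only, not the repository's code; the layer sizes and loss are arbitrary assumptions):

```python
# Minimal sketch (NOT the repository's actual code) of why gradients of a
# linear layer leak its input exactly, assuming a batch of size 1.
import torch

torch.manual_seed(0)
x = torch.rand(1, 784)            # one user's input, e.g. a flattened image
layer = torch.nn.Linear(784, 32)  # a layer the attacker controls in the shared model

loss = layer(x).relu().sum()      # any loss that backpropagates through the layer
loss.backward()

grad_W, grad_b = layer.weight.grad, layer.bias.grad
i = grad_b.abs().argmax()         # pick a unit with a nonzero bias gradient
x_rec = grad_W[i] / grad_b[i]     # dL/dW[i] = dL/db[i] * x, so this is x exactly

print(torch.allclose(x_rec, x.squeeze(), atol=1e-5))  # expected: True
```

With larger batches the averaged gradients mix samples together; separating individual inputs back out of the aggregate is what the modified architecture is designed to achieve.

Related projects: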
| Repository | Description | Stars |
| --- | --- | --- |
| hfzhang31/a3fl | A framework for adaptive backdoor attacks against federated learning systems | 22 |
| inspire-group/modelpoisoning | An implementation of model poisoning attacks in federated learning | 146 |
| ml-postech/gradient-inversion-generative-image-prior | An implementation of a gradient inversion method that can reveal sensitive client data in federated learning | 39 |
| sliencerx/learning-to-attack-federated-learning | An implementation of a framework for learning how to attack federated learning systems | 15 |
| eth-sri/bayes-framework-leakage | Develops and evaluates a framework for detecting attacks on federated learning systems | 11 |
| jeremy313/fl-wbc | A defense mechanism against model poisoning attacks in federated learning | 37 |
| jeremy313/soteria | An implementation of a defense against model inversion attacks in federated learning | 55 |
| gwenlegate/guidinglastlayerflpretrain | Investigates transfer learning in federated learning by guiding the last layer with pre-trained models | 7 |
| zhuohangli/ggl | An attack implementation for testing and evaluating the effectiveness of federated learning privacy defenses | 57 |
| ksreenivasan/ood_federated_learning | Investigates vulnerabilities of federated learning systems by introducing new backdoor attacks and exploring defenses against them | 64 |
| jonasgeiping/breaching | A PyTorch framework for analyzing vulnerabilities of federated learning models to privacy attacks such as gradient inversion (an optimization-based inversion is sketched below) | 269 |
| jhcknzzm/federated-learning-backdoor | An implementation of the Neurotoxin attack, which introduces backdoors into models during federated training | 63 |
| gdisag/gradient_disaggregation | An algorithm that breaks secure aggregation protocols in federated learning by recovering individual model updates from aggregated sums | 14 |
| dcalab-unipv/turning-privacy-preserving-mechanisms-against-federated-learning | Presents an attack that compromises the privacy-preserving mechanisms of federated learning systems | 8 |
| ebagdasa/backdoor_federated_learning | An implementation of a framework for backdoor attacks in federated learning, allowing researchers to test and analyze attacks on distributed machine learning models | 271 |
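Several of the projects above (e.g. ml-postech/gradient-inversion-generative-image-prior and jonasgeiping/breaching) study optimization-based gradient inversion rather than the analytic recovery used here: the attacker optimizes dummy data until its gradients match the observed update. A minimal DLG-style sketch, assuming a toy linear model and a known label (the model size, learning rate, and step count are arbitrary choices; this is not code from any listed repository):

```python
# Illustrative sketch of optimization-based gradient inversion,
# not code from any repository listed above.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Linear(20, 5)   # toy shared model
x_true = torch.rand(1, 20)       # the client's private input
y_true = torch.tensor([3])       # the private label (assumed known here)

# Client side: the gradient update the attacker observes.
loss = F.cross_entropy(model(x_true), y_true)
true_grads = torch.autograd.grad(loss, model.parameters())

# Attacker side: optimize dummy data until its gradients match.
x_dummy = torch.rand(1, 20, requires_grad=True)
opt = torch.optim.Adam([x_dummy], lr=0.05)
for _ in range(2000):
    opt.zero_grad()
    dummy_loss = F.cross_entropy(model(x_dummy), y_true)
    dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(),
                                      create_graph=True)
    grad_diff = sum(((dg - tg) ** 2).sum()
                    for dg, tg in zip(dummy_grads, true_grads))
    grad_diff.backward()
    opt.step()

print((x_dummy - x_true).abs().max())  # should shrink toward zero on this toy setup
```

The trade-off between the two approaches is what the analytic attack exploits: optimization-based inversion degrades as models and batches grow, whereas a modified architecture can make the recovery exact by construction.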