GGL
Federated Learning Attacker
An attack implementation for testing and evaluating the effectiveness of federated learning privacy defenses.
A PyTorch implementation of the paper "Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage".
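For intuition, here is a minimal sketch of the gradient-matching idea that gradient leakage attacks build on: the attacker optimizes dummy inputs and labels so that the gradients they induce match the gradients observed from a client. The toy model, data shapes, and optimizer settings below are illustrative assumptions, not GGL's actual configuration (GGL additionally uses a generative prior, which this sketch omits).

```python
# Illustrative gradient-matching sketch (assumed setup, not the GGL pipeline):
# optimize dummy data so its gradients match those observed from a client.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy model
criterion = nn.CrossEntropyLoss()

# Gradients the server observes for one (private) client sample.
x_true = torch.randn(1, 3, 32, 32)
y_true = torch.tensor([3])
true_grads = torch.autograd.grad(criterion(model(x_true), y_true),
                                 model.parameters())

# Attacker's dummy input and soft label, optimized to reproduce the gradients.
x_dummy = torch.randn(1, 3, 32, 32, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)
opt = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    opt.zero_grad()
    # Cross-entropy with soft dummy labels, written out explicitly.
    loss = -(y_dummy.softmax(-1) * model(x_dummy).log_softmax(-1)).sum()
    dummy_grads = torch.autograd.grad(loss, model.parameters(),
                                      create_graph=True)
    # Distance between dummy gradients and the observed client gradients.
    grad_diff = sum(((dg - tg) ** 2).sum()
                    for dg, tg in zip(dummy_grads, true_grads))
    grad_diff.backward()
    return grad_diff

for _ in range(50):
    opt.step(closure)
# x_dummy now approximates the private input x_true.
```

For small models this raw pixel-space search already recovers a close approximation of the input; per the paper, GGL instead searches the latent space of a generative model, which is what makes it effective against the lossy transformations applied by the defenses listed below.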
57 stars
4 watching
15 forks
Language: Jupyter Notebook
Last commit: about 2 years ago

Related projects:
| Repository | Description | Stars |
| --- | --- | --- |
| hfzhang31/a3fl | A framework for attacking federated learning systems with adaptive backdoor attacks. | 22 |
| xiyuanyang45/dynamicpfl | A personalized federated learning method with adaptive differential privacy for improved performance and robustness. | 51 |
| zhuangdizhu/fedgen | An implementation of algorithms for decentralized machine learning in heterogeneous federated learning settings. | 239 |
| jeremy313/soteria | An implementation of a defense against model inversion attacks in federated learning. | 55 |
| shenzebang/centaur-privacy-federated-representation-learning | A PyTorch framework for federated learning with differential privacy. | 13 |
| dcalab-unipv/turning-privacy-preserving-mechanisms-against-federated-learning | An attack on federated learning systems that compromises their privacy-preserving mechanisms. | 8 |
| ai-secure/dba | A tool for demonstrating and analyzing distributed backdoor attacks on federated learning systems. | 177 |
| kenziyuliu/private-cross-silo-fl | An implementation of a cross-silo federated learning framework with differential privacy mechanisms. | 25 |
| pengyang7881187/fedrl | A federated reinforcement learning method that lets multiple agents learn from heterogeneous environments without sharing their knowledge or data. | 54 |
| git-disl/lockdown | A defense against backdoor attacks in federated learning. | 14 |
| yamingguo98/fediir | An implementation of a federated learning algorithm that generalizes to out-of-distribution scenarios using implicit invariant relationships. | 9 |
| ai-secure/fedgame | An implementation of a game-theoretic defense against backdoor attacks in federated learning. | 5 |
| eth-sri/bayes-framework-leakage | A framework for detecting attacks on federated learning systems. | 11 |
| jhcknzzm/federated-learning-backdoor | An implementation of Neurotoxin, a federated learning attack that inserts backdoors into models during training. | 63 |
| idanachituve/pfedgp | An implementation of Personalized Federated Learning with Gaussian Processes. | 32 |