GGL
Gradient Attack Method

An attack method for auditing the effectiveness of federated learning privacy defenses by leaking private training data from shared gradients.

A PyTorch implementation of the paper "Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage".
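The core gradient-leakage idea behind attacks of this kind can be sketched as follows: the attacker optimizes a dummy input so that the gradients it produces match the gradients a victim shared with the server. The sketch below is a minimal, hypothetical toy (a DLG-style gradient-matching loop on a small linear model, assuming the victim's label is known); GGL itself goes further by constraining the search with a generative prior, which this sketch does not implement.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative gradient-matching (DLG-style) toy, NOT the GGL method itself:
# recover a victim's private input by optimizing dummy data whose gradients
# match the gradients the victim shared during federated training.

torch.manual_seed(0)

model = nn.Linear(8, 4)          # toy shared model (hypothetical)
x_true = torch.randn(1, 8)       # victim's private input (unknown to attacker)
y_true = torch.tensor([2])       # victim's label (assumed known here)

# Gradients the victim would share with the server
loss = F.cross_entropy(model(x_true), y_true)
true_grads = torch.autograd.grad(loss, model.parameters())

# Attacker: optimize a dummy input so its gradients match the shared ones
x_dummy = torch.randn(1, 8, requires_grad=True)
opt = torch.optim.Adam([x_dummy], lr=0.1)
for _ in range(500):
    opt.zero_grad()
    dummy_loss = F.cross_entropy(model(x_dummy), y_true)
    dummy_grads = torch.autograd.grad(
        dummy_loss, model.parameters(), create_graph=True
    )
    # Squared distance between dummy and shared gradients
    grad_diff = sum(((dg - tg) ** 2).sum()
                    for dg, tg in zip(dummy_grads, true_grads))
    grad_diff.backward()
    opt.step()

# How close did the attacker get to the private input?
recon_err = (x_dummy.detach() - x_true).norm().item() / x_true.norm().item()
print(f"relative reconstruction error: {recon_err:.3f}")
```

For this toy linear model the gradients determine the input essentially uniquely, so the loop converges to a close reconstruction; defenses such as gradient compression or noise perturbation degrade exactly this matching step, which is what the paper's auditing framework measures.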
58 stars · 4 watching · 15 forks
Language: Jupyter Notebook
Last commit: over 2 years ago

Related projects:
| Repository | Description | Stars |
|---|---|---|
| | A framework for attacking federated learning systems with adaptive backdoor attacks | 23 |
| | A method for personalizing machine learning models in federated learning with adaptive differential privacy, improving performance and robustness | 57 |
| | An implementation of algorithms for decentralized machine learning in heterogeneous federated learning settings | 243 |
| | An implementation of a defense against model inversion attacks in federated learning | 55 |
| | A framework for federated learning with differential privacy using PyTorch | 13 |
| | An attack on federated learning systems that compromises their privacy-preserving mechanisms | 8 |
| | A tool for demonstrating and analyzing backdoor attacks that plant triggers in distributed machine learning models | 179 |
| | An implementation of a cross-silo federated learning framework with differential privacy mechanisms | 25 |
| | Enables multiple agents to learn from heterogeneous environments without sharing their knowledge or data | 56 |
| | A backdoor defense for federated learning that protects against data poisoning via isolated subspace training and robust consensus-fusion aggregation | 18 |
| | An implementation of a federated learning algorithm that generalizes to out-of-distribution scenarios using implicit invariant relationships | 10 |
| | An implementation of a game-theoretic defense against backdoor attacks in federated learning | 6 |
| | A framework for detecting and evaluating attacks on federated learning systems | 11 |
| | An implementation of Neurotoxin, an attack that inserts backdoors into machine learning models during federated training | 65 |
| | An implementation of Personalized Federated Learning with Gaussian Processes in Python | 32 |