Turning-Privacy-preserving-Mechanisms-against-Federated-Learning
Federated Learning Attack
This project presents an attack that turns the privacy-preserving mechanisms of federated learning systems against the systems they are meant to protect.
Official code for the paper "Turning Privacy-preserving Mechanisms against Federated Learning", accepted at the ACM Conference on Computer and Communications Security (CCS) 2023.
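For background on the setting under attack, the sketch below is a minimal FedAvg round in plain NumPy. It is a hypothetical illustration, not the paper's attack: the toy regression task, function names, and parameters are all assumptions. It marks the aggregation step that privacy-preserving mechanisms such as secure aggregation are designed to shield, and which this line of work targets.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(global_weights, data, labels, lr=0.1):
    # One gradient-descent step of least-squares regression on a
    # client's private data (hypothetical local training objective).
    preds = data @ global_weights
    grad = data.T @ (preds - labels) / len(labels)
    return global_weights - lr * grad

def fedavg(updates):
    # Server-side FedAvg: the coordinate-wise mean of client updates.
    return np.mean(updates, axis=0)

global_w = np.zeros(5)
# Three hypothetical clients, each holding private regression data.
clients = [(rng.normal(size=(20, 5)), rng.normal(size=20)) for _ in range(3)]

for _ in range(5):
    updates = [local_update(global_w, X, y) for X, y in clients]
    # Secure aggregation is meant to reveal only the mean of `updates`;
    # an aggregator that subverts it observes each client's update here.
    global_w = fedavg(updates)
```

The per-client updates inside the loop are the sensitive quantity: an attacker who can observe or manipulate them, rather than only their mean, undermines the privacy guarantee the mechanism was supposed to provide.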
8 stars
1 watching
1 fork
Language: Python
last commit: 11 months ago

Related projects:
| Description | Stars |
| --- | --- |
| A backdoor defense system for federated learning that protects against data poisoning attacks by isolating subspace training and aggregating models with robust consensus fusion. | 18 |
| An open-source implementation of a federated learning framework that protects data privacy in embodied agent learning for Vision-and-Language Navigation. | 13 |
| An implementation of a defense against model inversion attacks in federated learning. | 55 |
| A tool for demonstrating and analyzing attacks on federated learning systems by introducing backdoors into distributed machine learning models. | 179 |
| An implementation of backdoor attacks on federated learning frameworks in Python and PyTorch. | 277 |
| An implementation of Neurotoxin, a federated learning attack that introduces backdoors into machine learning models during training. | 65 |
| A PyTorch framework for analyzing vulnerabilities in federated learning models and predicting data breaches. | 274 |
| A simulation of a federated learning setting that preserves individual data privacy. | 365 |
| An implementation of a game-theoretic defense against backdoor attacks in federated learning. | 6 |
| A framework for attacking federated learning systems with adaptive backdoor attacks. | 23 |
| A framework for federated learning that is robust against backdoor attacks. | 71 |
| An attack method that measures the effectiveness of federated learning privacy defenses by inducing gradient leakage. | 58 |
| A defense mechanism against model poisoning attacks in federated learning. | 37 |
| A method for personalizing models in federated learning with adaptive differential privacy, improving performance and robustness. | 57 |
| An investigation of federated learning vulnerabilities that introduces new backdoor attacks and explores defenses against them. | 66 |