Turning-Privacy-preserving-Mechanisms-against-Federated-Learning
Federated Learning Attack
This project presents an attack on federated learning systems that compromises their privacy-preserving mechanisms.
Official code for the paper "Turning Privacy-preserving Mechanisms against Federated Learning", accepted at the ACM Conference on Computer and Communications Security (CCS) 2023.
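For context, the sketch below shows a single FedAvg aggregation round in which one participant submits a crafted update. It is a minimal illustration of the federated setting such attacks operate in, not the paper's method; the toy model, the honest and malicious update rules, and all parameters are hypothetical placeholders.

```python
# Minimal sketch (not the paper's code): one FedAvg round where a single
# client submits a crafted update. All names and values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
dim = 10                        # toy "model": a single weight vector
global_model = np.zeros(dim)

def local_update(model, scale=0.01):
    # Honest client: pretend local SGD produced a small random delta.
    return model + scale * rng.standard_normal(dim)

def malicious_update(model, boost=10.0):
    # Hypothetical attacker: submit a heavily scaled update so it
    # dominates the average (a generic model-poisoning pattern,
    # not the technique from the paper).
    return model + boost * rng.standard_normal(dim)

def fedavg(updates):
    # Server aggregates by averaging client models with equal weights.
    return np.mean(updates, axis=0)

updates = [local_update(global_model) for _ in range(9)]
updates.append(malicious_update(global_model))
global_model = fedavg(updates)
print("aggregated model norm:", np.linalg.norm(global_model))
```

Because plain averaging weights every client equally, a single scaled update can noticeably shift the aggregated model, which is why robust aggregation and the defenses listed below exist.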
8 stars
1 watching
1 fork
Language: Python
last commit: 8 months ago

Related projects:
Repository | Description | Stars |
---|---|---|
git-disl/lockdown | A framework to defend against attacks in federated learning by using isolated subspace training and data poisoning detection | 15 |
eric-ai-lab/fedvln | An open-source implementation of a federated learning framework to protect data privacy in embodied agent learning for Vision-and-Language Navigation. | 13 |
jeremy313/soteria | An implementation of a defense against model inversion attacks in federated learning | 55 |
ai-secure/dba | A tool for demonstrating and analyzing distributed backdoor attacks (DBA) against federated learning systems. | 177 |
ebagdasa/backdoor_federated_learning | An implementation of backdoor attacks in federated learning using PyTorch. | 275 |
jhcknzzm/federated-learning-backdoor | An implementation of a federated learning attack method known as Neurotoxin, which introduces backdoors into machine learning models during the training process. | 64 |
jonasgeiping/breaching | A PyTorch framework for analyzing vulnerabilities in federated learning models by reconstructing user data from shared gradient updates | 271 |
sap-samples/machine-learning-diff-private-federated-learning | Simulates a federated learning setting to preserve individual data privacy | 363 |
ai-secure/fedgame | An implementation of a game-theoretic defense against backdoor attacks in federated learning. | 5 |
hfzhang31/a3fl | A framework for attacking federated learning systems with adaptive backdoor attacks | 23 |
ai-secure/crfl | This project presents a framework for robust federated learning against backdoor attacks. | 71 |
zhuohangli/ggl | An implementation of an attack on federated learning privacy mechanisms using Generative Gradient Leakage | 57 |
jeremy313/fl-wbc | A defense mechanism against model poisoning attacks in federated learning | 37 |
xiyuanyang45/dynamicpfl | A method for personalizing machine learning models in federated learning settings with adaptive differential privacy to improve performance and robustness | 55 |
ksreenivasan/ood_federated_learning | An investigation of vulnerabilities in federated learning systems through new backdoor attacks and methods to defend against them. | 65 |