Turning-Privacy-preserving-Mechanisms-against-Federated-Learning

Federated Learning Attack

This project presents an attack on federated learning systems to compromise their privacy-preserving mechanisms.

Official code for the paper "Turning Privacy-preserving Mechanisms against Federated Learning", accepted at the ACM Conference on Computer and Communications Security (CCS) 2023.
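To illustrate the kind of privacy risk at stake (a generic toy example, not the attack from this paper): when clients share raw, unprotected gradients, an honest-but-curious server can sometimes reconstruct private training data directly. For a single-sample linear layer with squared-error loss, every row of the weight gradient is a scaled copy of the input, so the input's direction is recoverable exactly. All names below are illustrative.

```python
import numpy as np

# Toy gradient-leakage illustration (NOT the paper's attack):
# for y = W @ x with squared-error loss on one sample, the weight
# gradient is the outer product dL/dW = (y - t) x^T, so every nonzero
# row of the gradient is a scaled copy of the private input x.

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))   # server-issued model weights
x = rng.normal(size=5)        # private client sample
t = rng.normal(size=3)        # private client label

# Client computes its update and shares it without any privacy mechanism.
err = W @ x - t               # residual (y - t)
grad_W = np.outer(err, x)     # dL/dW

# Honest-but-curious server: recover x (up to sign and scale) from one row.
row = grad_W[np.argmax(np.abs(err))]
x_hat = row / np.linalg.norm(row)
x_true = x / np.linalg.norm(x)

cos_sim = abs(x_hat @ x_true)
print(f"cosine similarity between recovered and true input: {cos_sim:.6f}")
```

Defenses such as differential privacy or secure aggregation aim to block exactly this kind of leakage; the paper studies how such mechanisms can themselves be exploited.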


8 stars
1 watching
1 fork
Language: Python
Last commit: 10 months ago

Related projects:

| Repository | Description | Stars |
|---|---|---|
| git-disl/lockdown | A backdoor defense for federated learning that protects against poisoning attacks via isolated subspace training and robust consensus fusion during aggregation | 18 |
| eric-ai-lab/fedvln | An open-source federated learning framework that protects data privacy in embodied agent learning for Vision-and-Language Navigation | 13 |
| jeremy313/soteria | An implementation of a defense against model inversion attacks in federated learning | 55 |
| ai-secure/dba | A tool for demonstrating and analyzing distributed backdoor attacks on federated learning systems | 179 |
| ebagdasa/backdoor_federated_learning | An implementation of backdoor attacks on federated learning frameworks in Python and PyTorch | 277 |
| jhcknzzm/federated-learning-backdoor | An implementation of Neurotoxin, a federated learning attack that inserts backdoors into models during training | 65 |
| jonasgeiping/breaching | A PyTorch framework for analyzing privacy-breaching (data reconstruction) attacks on federated learning models | 274 |
| sap-samples/machine-learning-diff-private-federated-learning | Simulates a federated learning setting with differential privacy to preserve individual data privacy | 365 |
| ai-secure/fedgame | An implementation of a game-theoretic defense against backdoor attacks in federated learning | 6 |
| hfzhang31/a3fl | A framework for attacking federated learning systems with adaptive backdoor attacks | 23 |
| ai-secure/crfl | A framework for robust federated learning against backdoor attacks | 71 |
| zhuohangli/ggl | An attack method that measures the effectiveness of federated learning privacy defenses via generative gradient leakage | 58 |
| jeremy313/fl-wbc | A defense mechanism against model poisoning attacks in federated learning | 37 |
| xiyuanyang45/dynamicpfl | A method for personalized federated learning with adaptive differential privacy to improve performance and robustness | 57 |
| ksreenivasan/ood_federated_learning | Investigates vulnerabilities in federated learning by introducing new backdoor attacks and exploring defenses against them | 66 |