Turning-Privacy-preserving-Mechanisms-against-Federated-Learning

Federated Learning Attack

This project presents an attack on federated learning systems to compromise their privacy-preserving mechanisms.

Official code for the paper "Turning Privacy-preserving Mechanisms against Federated Learning," accepted at the ACM Conference on Computer and Communications Security (CCS) 2023.
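To make the setting concrete, the sketch below shows a minimal federated averaging (FedAvg) loop, the basic training protocol that privacy mechanisms such as secure aggregation wrap around. This is not the paper's code; the linear model, client data, and hyperparameters are illustrative assumptions.

```python
# Minimal FedAvg sketch (illustrative only, NOT the paper's implementation).
# Clients run local SGD on a shared linear model; the server averages updates.
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's local gradient steps on a least-squares objective."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fedavg(clients, w, rounds=20):
    """Each round: broadcast w, collect client updates, average them."""
    for _ in range(rounds):
        updates = [local_update(w, X, y) for X, y in clients]
        w = np.mean(updates, axis=0)
    return w

# Synthetic clients sharing the same underlying model (hypothetical data).
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(4):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.01 * rng.normal(size=50)
    clients.append((X, y))

w = fedavg(clients, np.zeros(2))
print(np.round(w, 2))  # converges near true_w = [2.0, -1.0]
```

In a real deployment, the server only sees the (possibly securely aggregated) client updates, which is precisely the interface the attacks and defenses in the projects listed below operate on.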

GitHub stats:

- 8 stars
- 1 watching
- 1 fork
- Language: Python
- Last commit: 8 months ago

Related projects:

| Repository | Description | Stars |
| --- | --- | --- |
| git-disl/lockdown | A backdoor defense system against attacks in federated learning algorithms used for machine learning model training on distributed datasets | 14 |
| eric-ai-lab/fedvln | An open-source implementation of a federated learning framework to protect data privacy in embodied agent learning for Vision-and-Language Navigation | 13 |
| jeremy313/soteria | An implementation of a defense against model inversion attacks in federated learning | 55 |
| ai-secure/dba | A tool for demonstrating and analyzing attacks on federated learning systems by introducing backdoors into distributed machine learning models | 177 |
| ebagdasa/backdoor_federated_learning | An implementation of a framework for backdoors in federated learning, allowing researchers to test and analyze various attacks on distributed machine learning models | 271 |
| jhcknzzm/federated-learning-backdoor | An implementation of a federated learning attack method known as Neurotoxin, which introduces backdoors into machine learning models during the training process | 63 |
| jonasgeiping/breaching | A PyTorch framework for analyzing vulnerabilities in federated learning models and predicting data breaches | 269 |
| sap-samples/machine-learning-diff-private-federated-learning | Simulates a federated learning setting to preserve individual data privacy | 360 |
| ai-secure/fedgame | An implementation of a game-theoretic defense against backdoor attacks in federated learning | 5 |
| hfzhang31/a3fl | A framework for attacking federated learning systems with adaptive backdoor attacks | 22 |
| ai-secure/crfl | A framework for robust federated learning against backdoor attacks | 71 |
| zhuohangli/ggl | An attack implementation to test and evaluate the effectiveness of federated learning privacy defenses | 57 |
| jeremy313/fl-wbc | A defense mechanism against model poisoning attacks in federated learning | 37 |
| xiyuanyang45/dynamicpfl | A method for personalizing machine learning models in federated learning settings with adaptive differential privacy to improve performance and robustness | 51 |
| ksreenivasan/ood_federated_learning | An investigation of vulnerabilities in federated learning systems via new backdoor attacks and methods to defend against them | 64 |