breaching

Federated Breach Analysis

A PyTorch framework for analyzing privacy vulnerabilities in federated learning models by reconstructing user data from shared model updates

Breaching privacy in federated learning scenarios for vision and text
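The core idea behind frameworks like this is gradient inversion: a server that receives per-client gradients can often reconstruct the client's private training data from them. A minimal sketch of the simplest such leak, using a toy single-sample logistic layer in plain Python (this is an illustrative example, not code from the repository; for a layer with a bias term, `grad_w = (p - y) * x` and `grad_b = (p - y)`, so the input is recoverable as `grad_w / grad_b`):

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

random.seed(0)

# Private client data: one sample (x, y) never sent to the server.
x = [random.random() for _ in range(5)]
y = 1.0

# Shared model state (a single logistic unit with bias).
w = [0.1] * 5
b = 0.0

# Client-side: compute the gradient of the cross-entropy loss and "send" it.
p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
grad_w = [(p - y) * xi for xi in x]   # gradient w.r.t. weights
grad_b = p - y                        # gradient w.r.t. bias

# Server-side attack: grad_w is just (p - y) * x, so dividing by grad_b
# recovers the private input exactly.
x_recovered = [gw / grad_b for gw in grad_w]
error = max(abs(xi - xr) for xi, xr in zip(x, x_recovered))
print(error)  # ~0.0
```

Real attacks on deep networks (as implemented in this repository and several of the projects below) generalize this by optimizing a candidate input until its gradients match the observed ones, but the fully connected layer case above shows why raw gradient sharing leaks data.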

GitHub

271 stars
5 watching
60 forks
Language: Python
Last commit: 8 months ago
Topics: decentralized-learning, federated-learning, machine-learning, privacy-audit, pytorch, security

Related projects:

| Repository | Description | Stars |
| --- | --- | --- |
| eth-sri/bayes-framework-leakage | Develops and evaluates a framework for detecting attacks on federated learning systems | 11 |
| dcalab-unipv/turning-privacy-preserving-mechanisms-against-federated-learning | An attack on federated learning systems that compromises their privacy-preserving mechanisms | 8 |
| yflyl613/fedrec | A PyTorch implementation of an attack and defense mechanism for federated recommendation systems | 21 |
| ksreenivasan/ood_federated_learning | Investigates vulnerabilities in federated learning by introducing new backdoor attacks and exploring defenses against them | 65 |
| deu30303/feddefender | A PyTorch implementation of an attack-tolerant federated learning system that trains local models robust to malicious adversaries | 9 |
| jeremy313/soteria | An implementation of a defense against model inversion attacks in federated learning | 55 |
| chandra2thapa/splitfed-when-federated-learning-meets-split-learning | An implementation of federated learning and split learning techniques in PyTorch on the HAM10000 dataset | 131 |
| sap-samples/machine-learning-diff-private-federated-learning | Simulates a federated learning setting to preserve individual data privacy | 363 |
| ebagdasa/backdoor_federated_learning | An implementation of backdoor attacks in federated learning using PyTorch | 275 |
| shenzebang/centaur-privacy-federated-representation-learning | A framework for federated learning with differential privacy using PyTorch | 13 |
| shenzebang/federated-learning-pytorch | A PyTorch-based framework for federated learning experiments | 40 |
| git-disl/lockdown | A framework that defends against attacks in federated learning using isolated subspace training and data poisoning detection | 15 |
| charliedinh/pfedme | An implementation of personalized federated learning with Moreau envelopes and related algorithms in PyTorch | 290 |
| jhcknzzm/federated-learning-backdoor | An implementation of Neurotoxin, a federated learning attack that inserts backdoors into models during training | 64 |
| ai-secure/dba | A tool for demonstrating and analyzing distributed backdoor attacks on federated learning systems | 177 |