breaching

Federated Breach Analysis

A PyTorch framework for analyzing privacy vulnerabilities in federated learning models by simulating data-breach (reconstruction) attacks

Breaching privacy in federated learning scenarios for vision and text
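
A typical analysis simulates a gradient-inversion attack: an adversary who observes a client's shared gradient update optimizes a dummy input until its gradients match the observed ones, thereby reconstructing the private training data. The sketch below illustrates the idea in plain PyTorch with a toy model and random data; it is a minimal, assumed example and does not use this repository's API.

```python
# Minimal gradient-inversion ("breach") sketch in plain PyTorch.
# The model, shapes, and hyperparameters are illustrative assumptions,
# not the breaching library's API.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 10))  # toy client model
loss_fn = nn.CrossEntropyLoss()

# The victim's private batch (used only to produce the shared update).
x_true = torch.randn(1, 3, 32, 32)
y_true = torch.tensor([3])
true_grads = torch.autograd.grad(loss_fn(model(x_true), y_true), model.parameters())

# The attacker sees only `true_grads` and jointly optimizes a dummy
# input and soft label so that their gradients match the observed ones.
x_dummy = torch.randn_like(x_true, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)
opt = torch.optim.Adam([x_dummy, y_dummy], lr=0.1)

for step in range(200):
    opt.zero_grad()
    dummy_loss = torch.sum(
        torch.softmax(y_dummy, dim=-1) * -torch.log_softmax(model(x_dummy), dim=-1)
    )
    dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(), create_graph=True)
    grad_diff = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    grad_diff.backward()
    opt.step()

print("reconstruction error:", torch.norm(x_dummy.detach() - x_true).item())
```

With an over-parameterized model and a single-sample batch, the gradient-matching loss usually drives the dummy input close to the private data, which is exactly the kind of leakage this framework is built to measure.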

GitHub

269 stars · 5 watching · 60 forks
Language: Python
Last commit: 7 months ago
Topics: decentralized-learning, federated-learning, machine-learning, privacy-audit, pytorch, security

Related projects:

Repository | Description | Stars
eth-sri/bayes-framework-leakage | Develops and evaluates a framework for detecting attacks on federated learning systems | 11
dcalab-unipv/turning-privacy-preserving-mechanisms-against-federated-learning | Presents an attack on federated learning systems that compromises their privacy-preserving mechanisms | 8
yflyl613/fedrec | A PyTorch implementation of an attack and defense mechanism against federated recommendation systems | 21
ksreenivasan/ood_federated_learning | Investigates vulnerabilities in federated learning by introducing new backdoor attacks and exploring defenses against them | 64
deu30303/feddefender | A PyTorch implementation of an attack-tolerant federated learning system that trains robust local models against malicious adversaries | 9
jeremy313/soteria | An implementation of a defense against model inversion attacks in federated learning | 55
chandra2thapa/splitfed-when-federated-learning-meets-split-learning | A PyTorch implementation of federated learning and split learning techniques on the HAM10000 dataset | 129
sap-samples/machine-learning-diff-private-federated-learning | Simulates a federated learning setting that preserves individual data privacy | 360
ebagdasa/backdoor_federated_learning | A framework for backdoors in federated learning, allowing researchers to test and analyze attacks on distributed machine learning models | 271
shenzebang/centaur-privacy-federated-representation-learning | A PyTorch framework for federated learning with differential privacy | 13
shenzebang/federated-learning-pytorch | A PyTorch-based framework for federated learning experiments | 40
git-disl/lockdown | A backdoor defense for federated learning algorithms used to train models on distributed datasets | 14
charliedinh/pfedme | A PyTorch implementation of Personalized Federated Learning with Moreau Envelopes and related algorithms for research and experimentation | 289
jhcknzzm/federated-learning-backdoor | An implementation of Neurotoxin, a federated learning attack that introduces backdoors into models during training | 63
ai-secure/dba | A tool for demonstrating and analyzing backdoor attacks on federated learning systems and distributed machine learning models | 176