breaching
Federated Breach Analysis
A PyTorch framework for analyzing privacy vulnerabilities in federated learning models by simulating attacks that reconstruct user data from shared model updates
Breaching privacy in federated learning scenarios for vision and text
274 stars
5 watching
60 forks
Language: Python
last commit: 10 months ago
Topics: decentralized-learning, federated-learning, machine-learning, privacy-audit, pytorch, security
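
For context, the core idea behind "breaching privacy" in this setting is gradient inversion: an attacker who sees the gradient a client reports can reconstruct that client's private input by optimizing a dummy input until its gradient matches. Below is a minimal, hypothetical PyTorch sketch of that idea; all names are illustrative and this is not the repository's actual API.

```python
# Minimal, hypothetical sketch of a gradient inversion ("breaching") attack:
# the attacker only observes the gradient of one private example and recovers
# the input by matching gradients. Illustrative only; not this repo's API.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 10))
loss_fn = nn.CrossEntropyLoss()

# The "victim" client computes a gradient on its private example.
x_true = torch.rand(1, 3, 32, 32)
y_true = torch.tensor([3])
true_grads = torch.autograd.grad(loss_fn(model(x_true), y_true),
                                 model.parameters())

# The attacker reconstructs the input by matching the reported gradient.
x_dummy = torch.rand(1, 3, 32, 32, requires_grad=True)
optimizer = torch.optim.Adam([x_dummy], lr=0.1)
for step in range(200):
    optimizer.zero_grad()
    dummy_grads = torch.autograd.grad(
        loss_fn(model(x_dummy), y_true), model.parameters(),
        create_graph=True,  # needed to backprop through the gradient itself
    )
    grad_diff = sum((dg - tg).pow(2).sum()
                    for dg, tg in zip(dummy_grads, true_grads))
    grad_diff.backward()
    optimizer.step()

print(f"reconstruction error: {(x_dummy - x_true).norm():.4f}")
```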
Related projects:
| Repository | Description | Stars |
|---|---|---|
| | Develops and evaluates a framework for detecting attacks on federated learning systems | 11 |
| | An attack on federated learning systems that compromises their privacy-preserving mechanisms | 8 |
| | A PyTorch implementation of an attack and a defense mechanism for federated recommendation systems | 21 |
| | Investigates vulnerabilities in federated learning systems by introducing new backdoor attacks and exploring defenses against them | 66 |
| | A PyTorch implementation of an attack-tolerant federated learning system that trains local models robust to adversarial clients | 10 |
| | An implementation of a defense against model inversion attacks in federated learning | 55 |
| | An implementation of federated learning and split learning techniques with PyTorch on the HAM10000 dataset | 134 |
| | Simulates a federated learning setting to preserve individual data privacy | 365 |
| | An implementation of backdoor attacks on federated learning frameworks in PyTorch | 277 |
| | A framework for federated learning with differential privacy using PyTorch | 13 |
| | A PyTorch-based framework for federated learning experiments | 40 |
| | A backdoor defense for federated learning that protects against data poisoning by isolating subspace training and aggregating models with robust consensus fusion | 18 |
| | A PyTorch implementation of Personalized Federated Learning with Moreau Envelopes and related algorithms, for research and experimentation | 291 |
| | An implementation of Neurotoxin, a federated learning attack that introduces backdoors into models during training | 65 |
| | A tool for demonstrating and analyzing backdoor attacks on federated learning systems | 179 |