Soteria
Federated Learning Defense
Official implementation of "Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective" (CVPR 2021): a defense against model inversion attacks in federated learning.
55 stars · 2 watching · 9 forks
Language: Jupyter Notebook
Last commit: almost 2 years ago
Topics: cvpr2021, federated-learning, model-inversion-attack, privacy
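Defenses in this family perturb what each client shares with the server so that gradient-based model inversion recovers less of the private input. As a minimal, hypothetical sketch, the snippet below prunes a single layer's gradient by magnitude before it would be uploaded; the paper's actual Soteria criterion ranks elements by their impact on the reconstructed data representation, so `prune_gradient` and its threshold rule here are simplifying assumptions, not the repository's implementation.

```python
import numpy as np

def prune_gradient(grad: np.ndarray, prune_ratio: float = 0.8) -> np.ndarray:
    """Zero out the prune_ratio fraction of gradient entries with the
    smallest magnitude, keeping only the largest ones.

    Illustrative stand-in for a representation-perturbing defense:
    the real Soteria criterion selects elements by their effect on the
    reconstructed representation, not raw magnitude.
    """
    flat = np.abs(grad).ravel()
    k = int(len(flat) * prune_ratio)  # number of entries to zero out
    if k == 0:
        return grad.copy()
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    # keep entries strictly above the threshold, zero the rest
    return np.where(np.abs(grad) > threshold, grad, 0.0)

# Example: prune half of a toy fully connected layer's gradient
g = np.array([[0.1, -2.0],
              [0.05, 1.0]])
print(prune_gradient(g, prune_ratio=0.5))
```

In a federated round, a client would apply this only to the defended layer (typically the fully connected layer closest to the input representation) and send all other gradients unmodified, trading a small accuracy cost for reduced leakage.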
Related projects:
| Repository | Description | Stars |
|---|---|---|
| | A backdoor defense system for federated learning, designed to protect against data poisoning attacks by isolating subspace training and aggregating models with robust consensus fusion. | 18 |
| | This project presents an attack on federated learning systems to compromise their privacy-preserving mechanisms. | 8 |
| | An implementation of a game-theoretic defense against backdoor attacks in federated learning. | 6 |
| | A framework for federated learning with differential privacy using PyTorch. | 13 |
| | A defense mechanism against model poisoning attacks in federated learning. | 37 |
| | Numerical experiments for private federated learning with communication compression algorithms. | 7 |
| | A PyTorch implementation of an attack-tolerant federated learning system that trains robust local models against malicious adversaries. | 10 |
| | An implementation of a robust federated learning method based on Shapley values to defend against various data and model poisoning attacks. | 19 |
| | Researchers investigate vulnerabilities in federated learning systems by introducing new backdoor attacks and exploring methods to defend against them. | 66 |
| | A tool for demonstrating and analyzing attacks on federated learning systems by introducing backdoors into distributed machine learning models. | 179 |
| | Simulates a federated learning setting to preserve individual data privacy. | 365 |
| | Develops techniques to improve the resistance of split learning in federated learning against model inversion attacks. | 19 |
| | Researchers develop an attack method that measures the effectiveness of federated learning privacy defenses by generating leakage from gradients. | 58 |
| | A toolkit for federated learning with a focus on handling data heterogeneity. | 40 |
| | This project presents a framework for robust federated learning against backdoor attacks. | 71 |