FedGame
Federated Defense Library
Official implementation of a game-theoretic defense against backdoor attacks in federated learning, from the paper "FedGame: A Game-Theoretic Defense against Backdoor Attacks in Federated Learning" (NeurIPS 2023).
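At a high level, a game-theoretic defense of this kind treats the server (defender) and malicious clients (attackers) as players: the server chooses how much to trust each client's update, and attackers choose how to poison theirs. The sketch below is a minimal, hypothetical illustration of that weighting idea in plain NumPy, not FedGame's actual algorithm; the scoring heuristic, the weighting rule, and names such as `suspicion_score` and `aggregate` are assumptions made for the example.

```python
import numpy as np

# Hypothetical sketch (NOT FedGame's actual algorithm): a server-side
# aggregation step that down-weights client updates it suspects are
# backdoored. The defender's "strategy" is the weight assigned to each client.

def suspicion_score(update, reference):
    """Cosine distance of a client update from a reference direction
    (here, the mean update); larger means more anomalous."""
    cos = np.dot(update, reference) / (
        np.linalg.norm(update) * np.linalg.norm(reference) + 1e-12
    )
    return 1.0 - cos


def aggregate(client_updates):
    """Weighted averaging where suspicious updates receive lower weight."""
    updates = np.stack(client_updates)              # (n_clients, n_params)
    reference = updates.mean(axis=0)                # crude benign reference
    scores = np.array([suspicion_score(u, reference) for u in updates])
    weights = np.exp(-5.0 * scores)                 # defender's response to scores
    weights /= weights.sum()
    return (weights[:, None] * updates).sum(axis=0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    benign = [rng.normal(0.0, 1.0, 100) for _ in range(9)]
    poisoned = [rng.normal(5.0, 1.0, 100)]          # exaggerated malicious update
    global_update = aggregate(benign + poisoned)
    print(global_update[:5])
```

Here the defender's strategy is simply the weight vector, and the heuristic score stands in for whatever estimate of client maliciousness the real defense computes; consult the paper and this repository's code for FedGame's own formulation.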
6 stars
3 watching
0 forks
Language: Python
last commit: 3 months ago

Related projects:
| Repository | Description | Stars |
|---|---|---|
| ai-secure/dba | A tool for demonstrating and analyzing attacks on federated learning systems by introducing backdoors into distributed machine learning models. | 179 |
| git-disl/lockdown | A backdoor defense for federated learning that protects against data poisoning attacks through isolated subspace training and robust consensus fusion during aggregation. | 18 |
| ebagdasa/backdoor_federated_learning | An implementation of backdoor attacks on federated learning, written in Python with PyTorch. | 277 |
| ai-secure/crfl | A framework for certifiably robust federated learning against backdoor attacks. | 71 |
| jeremy313/soteria | An implementation of a defense against model inversion attacks in federated learning. | 55 |
| dcalab-unipv/turning-privacy-preserving-mechanisms-against-federated-learning | An attack that turns federated learning's privacy-preserving mechanisms against the system itself to compromise client privacy. | 8 |
| hfzhang31/a3fl | A framework for attacking federated learning systems with adaptive backdoor attacks. | 23 |
| yflyl613/fedrec | A PyTorch implementation of attacks on, and defenses for, federated recommendation systems. | 21 |
| deu30303/feddefender | A PyTorch implementation of an attack-tolerant federated learning system that trains local models to stay robust against adversarial clients. | 10 |
| jhcknzzm/federated-learning-backdoor | An implementation of the Neurotoxin attack, which plants durable backdoors in federated learning models during training. | 65 |
| jeremy313/fl-wbc | A defense mechanism against model poisoning attacks in federated learning. | 37 |
| idanachituve/pfedgp | An implementation of Personalized Federated Learning with Gaussian Processes in Python. | 32 |
| eth-sri/bayes-framework-leakage | Develops and evaluates a framework for detecting attacks on federated learning systems. | 11 |
| zhuohangli/ggl | A gradient leakage attack for measuring the effectiveness of privacy defenses in federated learning. | 58 |
| zju-diver/shapleyfl-robust-federated-learning-based-on-shapley-value | An implementation of a robust federated learning method based on Shapley values, defending against various data and model poisoning attacks. | 19 |