FLIP

Backdoor defense framework

A framework for defending against backdoor attacks in federated learning systems (the core defense idea is sketched below the repository details).

[ICLR 2023, Best Paper Award at ECCV’22 AROW Workshop] FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning

GitHub: 48 stars · 3 watching · 2 forks · Language: Python · Last commit: about 1 month ago
Topics: backdoor, backdoor-attacks, backdoor-defense, byzantine, computer-vision, defense, distributed-computing, federated-learning, privacy, python, pytorch, security

Related projects:

| Repository | Description | Stars |
| --- | --- | --- |
| ebagdasa/backdoor_federated_learning | An implementation of backdoor attacks on federated learning frameworks in Python and PyTorch. | 277 |
| hfzhang31/a3fl | A framework for attacking federated learning systems with adaptive backdoor attacks. | 23 |
| ai-secure/crfl | A framework for robust federated learning against backdoor attacks. | 71 |
| ybdai7/chameleon-durable-backdoor | A federated learning implementation that plants durable backdoors in global models by adapting to peer images. | 34 |
| ksreenivasan/ood_federated_learning | Investigates vulnerabilities in federated learning by introducing new backdoor attacks and exploring defenses against them. | 66 |
| git-disl/lockdown | A backdoor defense for federated learning that protects against data poisoning by isolating subspace training and aggregating models with robust consensus fusion (see the aggregation sketch after this table). | 18 |
| nil0x42/phpsploit | A tool that lets attackers remotely execute commands and maintain persistence on compromised web servers via stealthy PHP backdoors. | 2,237 |
| dcalab-unipv/turning-privacy-preserving-mechanisms-against-federated-learning | An attack on federated learning systems that compromises their privacy-preserving mechanisms. | 8 |
| zlijingtao/ressfl | Techniques to improve the resistance of split learning in federated learning against model inversion attacks. | 19 |
| jhcknzzm/federated-learning-backdoor | An implementation of Neurotoxin, a federated learning attack that introduces backdoors into models during training. | 65 |
| ethz-spylab/rlhf_trojan_competition | A competition on detecting backdoors in language models to prevent malicious AI usage. | 109 |
| ganyuwang/vfl-czofo | A unified framework for improving privacy and reducing communication overhead in distributed machine learning. | 12 |
| ai-secure/fedgame | An implementation of a game-theoretic defense against backdoor attacks in federated learning. | 6 |
| jeremy313/soteria | An implementation of a defense against model inversion attacks in federated learning. | 55 |
| tobiabocchi/flipperzero-bruteforce | Automates brute-force attacks on fixed OOK codes used in SubGHz protocols. | 2,057 |