FLIP

Backdoor defense framework

A framework for defending against backdoor attacks in federated learning systems

[ICLR 2023, Best Paper Award at ECCV’22 AROW Workshop] FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning
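
FLIP targets the classic trigger-based threat model: compromised clients stamp a small pixel pattern onto their training images and relabel them to an attacker-chosen class, so the averaged global model misclassifies any triggered input. The sketch below illustrates that threat model in one plain FedAvg round; it is a generic illustration, not FLIP's code, and names such as `stamp_trigger` and `TARGET_CLASS` are assumptions made for the example.

```python
# Minimal sketch of the threat model FLIP defends against: in one FedAvg
# round, a malicious client stamps a trigger patch on its images and
# relabels them to a target class. Illustrative only, not FLIP code.
import copy
import torch
import torch.nn as nn

TARGET_CLASS = 0      # label the attacker wants triggered inputs mapped to
TRIGGER_VALUE = 1.0   # pixel value of the 3x3 trigger patch

def stamp_trigger(images):
    """Stamp a small square trigger in the bottom-right corner."""
    poisoned = images.clone()
    poisoned[:, :, -3:, -3:] = TRIGGER_VALUE
    return poisoned

def local_step(model, x, y, lr=0.1):
    """One SGD step of local training; returns the updated state dict."""
    model = copy.deepcopy(model)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            p -= lr * p.grad
    return model.state_dict()

def fedavg(states):
    """Average client state dicts (plain FedAvg, no defense)."""
    avg = copy.deepcopy(states[0])
    for k in avg:
        avg[k] = torch.stack([s[k].float() for s in states]).mean(dim=0)
    return avg

global_model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

updates = []
for client in range(5):
    x = torch.rand(8, 3, 32, 32)       # stand-in for local client data
    y = torch.randint(0, 10, (8,))
    if client == 0:                    # the malicious client poisons its batch
        x, y = stamp_trigger(x), torch.full_like(y, TARGET_CLASS)
    updates.append(local_step(global_model, x, y))

global_model.load_state_dict(fedavg(updates))  # poisoned update averaged in
```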

GitHub

44 stars
3 watching
2 forks
Language: Python
Last commit: about 1 year ago
Topics: backdoor, backdoor-attacks, backdoor-defense, byzantine, computer-vision, defense, distributed-computing, federated-learning, privacy, python, pytorch, security

Related projects:

| Repository | Description | Stars |
| --- | --- | --- |
| ebagdasa/backdoor_federated_learning | A framework for injecting backdoors into federated learning, letting researchers test and analyze attacks on distributed models. | 271 |
| hfzhang31/a3fl | A framework for mounting adaptive backdoor attacks against federated learning systems. | 22 |
| ai-secure/crfl | A framework for certifiably robust federated learning against backdoor attacks. | 71 |
| ybdai7/chameleon-durable-backdoor | A federated learning implementation that plants durable backdoors in the global model by adapting to peer images. | 32 |
| ksreenivasan/ood_federated_learning | Investigates Federated Learning vulnerabilities by introducing new backdoor attacks and exploring defenses against them. | 64 |
| git-disl/lockdown | A backdoor defense for models trained with federated learning on distributed datasets. | 14 |
| nil0x42/phpsploit | A tool for remotely executing commands and maintaining persistence on compromised web servers via stealthy PHP backdoors. | 2,221 |
| dcalab-unipv/turning-privacy-preserving-mechanisms-against-federated-learning | An attack that turns the privacy-preserving mechanisms of federated learning systems against them. | 8 |
| zlijingtao/ressfl | Techniques that improve the resistance of split federated learning to model inversion attacks. | 20 |
| jhcknzzm/federated-learning-backdoor | An implementation of Neurotoxin, an attack that plants durable backdoors in models during federated training. | 63 |
| ethz-spylab/rlhf_trojan_competition | A competition on detecting backdoors planted in language models to prevent malicious use. | 107 |
| ganyuwang/vfl-czofo | A unified framework for improving privacy and reducing communication overhead in vertical federated learning. | 11 |
| ai-secure/fedgame | An implementation of a game-theoretic defense against backdoor attacks in federated learning. | 5 |
| jeremy313/soteria | An implementation of a defense against model inversion attacks in federated learning. | 55 |
| tobiabocchi/flipperzero-bruteforce | Automates brute-force attacks on fixed OOK codes used in Sub-GHz protocols. | 2,011 |
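
Several of the listed projects attack or defend the server's aggregation step. A common Byzantine-robust baseline that such defenses are compared against is coordinate-wise median aggregation, sketched below using the same state-dict update format as the FedAvg example above; this is a generic illustration, not code from FLIP or from any repository in the table.

```python
# Coordinate-wise median aggregation: a generic Byzantine-robust baseline,
# not FLIP's defense and not code from any repository listed above.
import torch

def median_aggregate(states):
    """Aggregate client state dicts with a per-coordinate median instead of
    FedAvg's mean; a minority of arbitrarily corrupted updates cannot pull
    any coordinate far from the benign majority."""
    agg = {}
    for key in states[0]:
        stacked = torch.stack([s[key].float() for s in states])
        agg[key] = stacked.median(dim=0).values
    return agg
```

Swapping `fedavg` for `median_aggregate` in the earlier sketch blunts a single malicious client's pull on any one coordinate; stealthier attacks that keep their updates close to benign ones are what motivate the stronger, trigger-aware defenses cataloged above.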