Chameleon-durable-backdoor
Federated learning backdoor system
An implementation of a federated learning attack that plants durable backdoors in the global model by adapting to peer images.
[ICML 2023] Official code implementation of "Chameleon: Adapting to Peer Images for Planting Durable Backdoors in Federated Learning" (https://proceedings.mlr.press/v202/dai23a)
34 stars
1 watching
5 forks
Language: Python
Last commit: 5 months ago

Related projects:
| Repository | Description | Stars |
|---|---|---|
| | This project provides an implementation of backdoor attacks in federated learning frameworks using Python and PyTorch. | 277 |
| | An implementation of the federated learning attack Neurotoxin, which introduces backdoors into machine learning models during training. | 65 |
| | This project presents a framework for robust federated learning against backdoor attacks. | 71 |
| | A backdoor defense system for federated learning, designed to protect against data poisoning attacks by isolating subspace training and aggregating models with robust consensus fusion. | 18 |
| | Researchers investigate vulnerabilities in federated learning systems by introducing new backdoor attacks and exploring methods to defend against them. | 66 |
| | A tool for demonstrating and analyzing attacks on federated learning systems by introducing backdoors into distributed machine learning models. | 179 |
| | A framework for defending against backdoor attacks in federated learning systems. | 48 |
| | This project presents an attack on federated learning systems that compromises their privacy-preserving mechanisms. | 8 |
| | A PyTorch implementation of Federated Class-Incremental Learning for continual learning in computer vision. | 102 |
| | A framework for attacking federated learning systems with adaptive backdoor attacks. | 23 |
| | A method for personalizing machine learning models in federated learning settings with adaptive differential privacy to improve performance and robustness. | 57 |
| | An implementation of a game-theoretic defense against backdoor attacks in federated learning. | 6 |
| | An implementation of Federated Robustness Propagation in PyTorch to share robustness across heterogeneous federated learning users. | 26 |
| | Enables multiple agents to learn from heterogeneous environments without sharing their knowledge or data. | 56 |
| | This project presents an approach to federated learning with partial client participation, optimizing anchor selection to improve model accuracy and convergence. | 2 |
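To make the setting concrete, the sketch below shows one way a backdoor can be planted during federated averaging: a malicious client stamps a fixed trigger onto part of its local data and relabels it with an attacker-chosen target before training. This is a minimal NumPy illustration of generic FL backdoor poisoning, not the Chameleon method or its peer-image adaptation; the toy logistic-regression model, the trigger, and all names here are illustrative assumptions.

```python
# Minimal sketch of FedAvg rounds with one backdoor-poisoning client.
# The logistic-regression model, trigger pattern, and hyperparameters
# are illustrative assumptions, not the Chameleon implementation.
import numpy as np

rng = np.random.default_rng(0)
D, LR, TARGET = 10, 0.1, 1  # feature dim, learning rate, attacker's target label

def local_update(w, X, y, steps=20):
    """Plain SGD on a client's local logistic-regression objective."""
    w = w.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))       # sigmoid predictions
        w -= LR * X.T @ (p - y) / len(y)       # gradient step
    return w

def poison(X, y, frac=0.5):
    """Stamp a fixed trigger on a fraction of samples and relabel them."""
    X, y = X.copy(), y.copy()
    k = int(frac * len(y))
    X[:k, -1] = 5.0    # trigger: large fixed value in the last feature
    y[:k] = TARGET     # attacker-chosen target label
    return X, y

# Four clients with random local data; client 0 is the attacker.
clients = [(rng.normal(size=(64, D)), rng.integers(0, 2, 64).astype(float))
           for _ in range(4)]

w_global = np.zeros(D)
for _ in range(10):                            # federated rounds
    updates = []
    for i, (X, y) in enumerate(clients):
        if i == 0:
            X, y = poison(X, y)                # attacker poisons its data
        updates.append(local_update(w_global, X, y))
    w_global = np.mean(updates, axis=0)        # FedAvg aggregation

# Backdoor check: triggered inputs should be pulled toward TARGET.
X_trig = rng.normal(size=(32, D))
X_trig[:, -1] = 5.0
acc_bd = np.mean((1.0 / (1.0 + np.exp(-X_trig @ w_global)) > 0.5) == TARGET)
print(f"fraction of triggered inputs classified as target: {acc_bd:.2f}")
```

The point the sketch makes is that plain averaging dilutes but does not remove the malicious update, so the trigger survives aggregation across rounds; durability-focused attacks such as Chameleon additionally shape the poisoned update so the backdoor persists even after the attacker stops participating.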