FedAttack
Adversarial attack tool

Source code of FedAttack, an implementation of an adversarial attack method in federated learning.
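To make the setting concrete, below is a minimal, self-contained sketch of what an untargeted model-poisoning attack on federated averaging can look like: one malicious client uploads an amplified, sign-flipped update so the aggregated model drifts away from the benign solution. This is a generic illustration only, not FedAttack's actual method (that is defined in the repository's notebooks), and every function name and parameter here is hypothetical.

```python
# Generic illustration of an untargeted model-poisoning attack on FedAvg.
# NOT FedAttack's actual method; all names and parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def benign_update(weights, lr=0.1):
    """Benign client: one noisy gradient-style step (stand-in for real local training)."""
    grad = rng.normal(size=weights.shape)
    return weights - lr * grad

def malicious_update(weights, scale=5.0):
    """Malicious client: flip and amplify the benign update direction before uploading."""
    delta = benign_update(weights) - weights
    return weights - scale * delta

def fedavg(client_weights):
    """Server: plain federated averaging of the uploaded client models."""
    return np.mean(client_weights, axis=0)

global_weights = np.zeros(10)
for rnd in range(5):
    uploads = [benign_update(global_weights) for _ in range(9)]
    uploads.append(malicious_update(global_weights))  # one attacker among ten clients
    global_weights = fedavg(uploads)
    print(f"round {rnd}: ||w|| = {np.linalg.norm(global_weights):.3f}")
```

In a real evaluation the noisy benign step would be replaced by actual local training, and the attacker's perturbation would be shaped by the specific attack under study; several of the related projects listed below implement defenses against exactly this kind of manipulation.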
11 stars
1 watching
2 forks
Language: Jupyter Notebook
Last commit: almost 3 years ago

Related projects:
Repository | Description | Stars
---|---|---
thunlp/openattack | A Python toolkit for generating adversarial examples to test the robustness of natural language processing models | 689 |
hfzhang31/a3fl | A framework for attacking federated learning systems with adaptive backdoor attacks | 22 |
yjw1029/ua-fedrec | An implementation of an untargeted attack against federated news recommendation systems | 17 |
ai-secure/dba | A tool for demonstrating and analyzing attacks on federated learning systems by introducing backdoors into distributed machine learning models. | 176 |
deu30303/feddefender | A PyTorch implementation of an attack-tolerant federated learning system that trains robust local models in the presence of malicious adversaries. | 9 |
yflyl613/fedrec | A PyTorch implementation of an attack and defense mechanism against Federated Recommendation Systems | 21 |
jeremy313/fl-wbc | A defense mechanism against model poisoning attacks in federated learning | 37 |
jhcknzzm/federated-learning-backdoor | An implementation of a federated learning attack method known as Neurotoxin, which introduces backdoors into machine learning models during the training process. | 63 |
zhuohangli/ggl | An attack implementation to test and evaluate the effectiveness of federated learning privacy defenses. | 57 |
jind11/textfooler | A tool for generating adversarial examples to attack text classification and inference models | 494 |
git-disl/lockdown | A backdoor defense for federated learning algorithms that train machine learning models on distributed datasets. | 14 |
und3rf10w/aggressor-scripts | A collection of Aggressor scripts for Cobalt Strike 3.x used to perform various attacks and techniques | 404 |
jonasgeiping/breaching | A PyTorch framework for analyzing privacy vulnerabilities in federated learning by reconstructing training data from shared model updates | 269 |
ai-secure/fedgame | An implementation of a game-theoretic defense against backdoor attacks in federated learning. | 5 |
mitre/advmlthreatmatrix | A framework to help security analysts understand and prepare for adversarial machine learning attacks on AI systems | 1,050 |