FedAttack

Adversarial attack tool

Source code of FedAttack, an implementation of an adversarial attack method in federated learning.
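For context, the snippet below is a minimal, self-contained sketch of how a poisoning client in a federated averaging round might manipulate its local update before it reaches the server. It is a generic illustration of model poisoning only, not the specific algorithm implemented in this repository; all function names (`local_update`, `malicious_update`, `fedavg`) and the `scale` parameter are hypothetical.

```python
import numpy as np

def local_update(global_weights, data, labels, lr=0.1):
    """One honest least-squares SGD step on a client's local data (illustrative)."""
    preds = data @ global_weights
    grad = data.T @ (preds - labels) / len(labels)
    return global_weights - lr * grad

def malicious_update(global_weights, data, labels, lr=0.1, scale=-5.0):
    """Generic poisoning client: compute the honest update, then flip and
    amplify the delta before reporting it. (Hypothetical; FedAttack's actual
    strategy may differ.)"""
    honest = local_update(global_weights, data, labels, lr)
    delta = honest - global_weights
    return global_weights + scale * delta

def fedavg(updates):
    """Server-side aggregation: plain federated averaging of client models."""
    return np.mean(updates, axis=0)

# Tiny synthetic round with 4 benign clients and 1 attacker.
rng = np.random.default_rng(0)
w_global = np.zeros(3)
true_w = np.array([1.0, -2.0, 0.5])

client_updates = []
for i in range(5):
    X = rng.normal(size=(32, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=32)
    if i == 4:  # the poisoning client
        client_updates.append(malicious_update(w_global, X, y))
    else:
        client_updates.append(local_update(w_global, X, y))

w_global = fedavg(client_updates)
print("aggregated weights after one poisoned round:", w_global)
```

With `scale=-5.0` the attacker pushes the aggregate away from the benign clients' average; real attacks and defenses (several are listed under related projects below) are considerably more subtle about staying undetected.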

GitHub stats:

Stars: 11
Watching: 1
Forks: 2
Language: Jupyter Notebook
Last commit: almost 3 years ago

Related projects:

| Repository | Description | Stars |
|---|---|---|
| thunlp/openattack | A Python toolkit for generating adversarial examples to test the robustness of natural language processing models | 699 |
| hfzhang31/a3fl | A framework for attacking federated learning systems with adaptive backdoor attacks | 23 |
| yjw1029/ua-fedrec | An implementation of an untargeted poisoning attack on federated news recommendation | 19 |
| ai-secure/dba | A tool for demonstrating and analyzing attacks on federated learning systems by introducing backdoors into distributed machine learning models | 179 |
| deu30303/feddefender | A PyTorch implementation of an attack-tolerant federated learning system that trains robust local models against malicious adversaries | 10 |
| yflyl613/fedrec | A PyTorch implementation of attack and defense mechanisms for federated recommendation systems | 21 |
| jeremy313/fl-wbc | A defense mechanism against model poisoning attacks in federated learning | 37 |
| jhcknzzm/federated-learning-backdoor | An implementation of Neurotoxin, a federated learning attack that inserts backdoors into models during training | 65 |
| zhuohangli/ggl | An attack method for auditing federated learning privacy defenses by reconstructing data from gradient leakage | 58 |
| jind11/textfooler | A tool for generating adversarial examples to attack text classification and inference models | 496 |
| git-disl/lockdown | A backdoor defense for federated learning that counters data poisoning via isolated subspace training and robust consensus fusion during aggregation | 18 |
| und3rf10w/aggressor-scripts | A collection of Aggressor scripts for Cobalt Strike 3.x used to perform various attacks and techniques | 404 |
| jonasgeiping/breaching | A PyTorch framework for analyzing privacy attacks that reconstruct user data from gradients in federated learning | 274 |
| ai-secure/fedgame | An implementation of a game-theoretic defense against backdoor attacks in federated learning | 6 |
| mitre/advmlthreatmatrix | A framework to help security analysts understand and prepare for adversarial machine learning attacks on AI systems | 1,056 |