# iba

Attack simulator
This repository provides a setup and framework for investigating irreversible backdoor attacks in Federated Learning systems.
Paper: *IBA: Towards Irreversible Backdoor Attacks in Federated Learning* (poster at NeurIPS 2023)
31 stars · 1 watching · 4 forks
Language: Python
Last commit: about 1 year ago

Related projects:
| Repository | Description | Stars |
|---|---|---|
| ai-secure/dba | Demonstrates and analyzes attacks on federated learning systems by introducing backdoors into distributed machine learning models | 179 |
| ebagdasa/backdoor_federated_learning | A Python/PyTorch implementation of backdoor attacks on federated learning frameworks | 277 |
| hfzhang31/a3fl | A framework for attacking federated learning systems with adaptive backdoor attacks | 23 |
| centerforaisafety/harmbench | A standardized framework for evaluating and improving the robustness of large language models against adversarial attacks | 366 |
| ai-secure/crfl | A framework for robust federated learning against backdoor attacks | 71 |
| eth-sri/bayes-framework-leakage | Develops and evaluates a framework for detecting attacks on federated learning systems | 11 |
| ksreenivasan/ood_federated_learning | Investigates vulnerabilities in federated learning through new backdoor attacks and methods for defending against them | 66 |
| nshalabi/attack-tools | Utilities for simulating adversary behavior in threat intelligence and security analysis | 1,011 |
| 13o-bbr-bbq/machine_learning_security | An open-source project at the intersection of machine learning and security, with tools for detecting vulnerabilities in web applications | 1,987 |
| sbasu7241/aws-threat-simulation-and-detection | Documents the simulation and detection of various AWS attack scenarios using Stratus Red Team and Sumo Logic for logging and analysis | 284 |
| git-disl/lockdown | A backdoor defense for federated learning that protects against data poisoning by isolating subspace training and aggregating models with robust consensus fusion | 18 |
| openbas-platform/openbas | A comprehensive cyber adversary simulation platform for planning and conducting simulated attacks and exercises | 765 |
| ybdai7/chameleon-durable-backdoor | A federated learning implementation that plants durable backdoors in global models by adapting to peer images | 34 |
| mitre/caldera | Automates adversary emulation and incident response using a framework built on the MITRE ATT&CK model | 5,722 |
| airbnb/artificial-adversary | A tool to generate adversarial text examples and test machine learning models against them | 399 |
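As a rough illustration of the attack setting these repositories study, the sketch below shows the classic data-poisoning step a malicious federated client might perform: stamp a pixel trigger onto a fraction of its local training images and relabel them to an attacker-chosen class. This is a minimal, generic example for orientation only; the function name, the pixel-square trigger, and all parameters are illustrative assumptions, not IBA's actual method.

```python
import numpy as np

def poison_batch(images, labels, target_label, poison_frac=0.2,
                 trigger_size=3, rng=None):
    """Illustrative backdoor poisoning sketch (not IBA's actual attack).

    images: (N, H, W) float array in [0, 1]; labels: (N,) int array.
    Stamps a bright square trigger on a random fraction of the images,
    relabels them to target_label, and returns poisoned copies plus the
    indices that were modified.
    """
    rng = np.random.default_rng(rng)
    images, labels = images.copy(), labels.copy()
    n_poison = max(1, int(poison_frac * len(images)))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Trigger: a bright square in the bottom-right corner of each chosen image.
    images[idx, -trigger_size:, -trigger_size:] = 1.0
    labels[idx] = target_label
    return images, labels, idx

# A malicious client would train its local model on the poisoned batch
# before sending its model update to the federated aggregator, so the
# aggregated global model misclassifies any input carrying the trigger.
```

In a full attack, this poisoning runs inside the client's local training loop each round; defenses such as those in crfl and lockdown try to detect or neutralize the resulting malicious updates at aggregation time.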