iba
Attack simulator
This repository provides the experimental setup and framework for investigating irreversible backdoor attacks in Federated Learning systems.
IBA: Towards Irreversible Backdoor Attacks in Federated Learning (Poster at NeurIPS 2023)
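The repository's code is not excerpted on this page, so as a reference point, below is a minimal PyTorch sketch of the baseline backdoor-poisoning step that federated-learning attack frameworks in this space simulate: a malicious client stamps a pixel trigger onto its training images, relabels them to an attacker-chosen class, and scales its model delta so it survives federated averaging. This is an illustrative sketch under those assumptions, not the repository's API or the IBA method itself; all names (`add_trigger`, `malicious_update`, `fedavg_apply`, `scale`, `target_class`) are hypothetical.

```python
# Minimal sketch (NOT this repository's code) of a backdoor-poisoned
# client update in federated averaging. All names are illustrative.
import copy
import torch
import torch.nn as nn

def add_trigger(images, value=1.0):
    """Stamp a 3x3 trigger patch in the bottom-right corner of each image."""
    poisoned = images.clone()
    poisoned[:, :, -3:, -3:] = value
    return poisoned

def malicious_update(global_model, images, labels, target_class=0,
                     scale=5.0, lr=0.01, epochs=2):
    """Train locally on trigger-stamped, relabeled data, then scale the delta."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    x = add_trigger(images)
    y = torch.full_like(labels, target_class)  # relabel to the target class
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    # Scale the update so it is not washed out when averaged with benign ones.
    return {name: scale * (p.detach() - g.detach())
            for (name, p), (_, g) in zip(model.named_parameters(),
                                         global_model.named_parameters())}

def fedavg_apply(global_model, deltas):
    """Server step: average the clients' deltas into the global model."""
    with torch.no_grad():
        for name, p in global_model.named_parameters():
            p += torch.stack([d[name] for d in deltas]).mean(dim=0)

if __name__ == "__main__":
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    images = torch.rand(32, 1, 28, 28)     # stand-in for one client's data
    labels = torch.randint(0, 10, (32,))
    delta = malicious_update(model, images, labels)
    fedavg_apply(model, [delta])           # here only the attacker reports in
```

The `scale` factor is the classic model-replacement trick that keeps a single poisoned update from being averaged away by benign clients; the IBA paper aims further, at backdoors that persist even after such an attacker stops participating, which this baseline does not attempt.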
31 stars
1 watching
4 forks
Language: Python
Last commit: over 1 year ago

Related projects:
| Repository | Description | Stars |
|---|---|---|
| | A tool for demonstrating and analyzing attacks on federated learning systems by introducing backdoors into distributed machine learning models. | 179 |
| | This project provides an implementation of backdoor attacks in federated learning frameworks using Python and PyTorch. | 277 |
| | A framework for attacking federated learning systems with adaptive backdoor attacks. | 23 |
| | A standardized framework for evaluating and improving the robustness of large language models against adversarial attacks. | 366 |
| | This project presents a framework for robust federated learning against backdoor attacks. | 71 |
| | Develops and evaluates a framework for detecting attacks on federated learning systems. | 11 |
| | Researchers investigate vulnerabilities in Federated Learning systems by introducing new backdoor attacks and exploring methods to defend against them. | 66 |
| | Utilities for simulating adversary behavior in the context of threat intelligence and security analysis. | 1,011 |
| | An open-source project that explores the intersection of machine learning and security to develop tools for detecting vulnerabilities in web applications. | 1,987 |
| | This repository documents the simulation and detection of various AWS attack scenarios using Stratus Red Team and SumoLogic for logging and analysis. | 284 |
| | A backdoor defense system for federated learning, designed to protect against data poisoning attacks by isolating subspace training and aggregating models with robust consensus fusion. | 18 |
| | A comprehensive cyber adversary simulation platform for planning and conducting simulated attacks and exercises. | 765 |
| | A federated learning system implementation that enables planting durable backdoors in global models by adapting to peer images. | 34 |
| | Automates adversary emulation and incident response using a framework built on the MITRE ATT&CK model. | 5,722 |
| | A tool to generate adversarial text examples and test machine learning models against them. | 399 |