advertorch

A Toolbox for Adversarial Robustness Research

AdverTorch is a toolbox for researching and evaluating the robustness of machine learning models against adversarial attacks.


1k stars
27 watching
198 forks
Language: Jupyter Notebook
Last commit: about 1 year ago
Linked from 2 awesome lists

Tags: adversarial-attacks, adversarial-example, adversarial-examples, adversarial-learning, adversarial-machine-learning, adversarial-perturbations, benchmarking, machine-learning, pytorch, robustness, security, toolbox
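To illustrate the kind of adversarial example these toolboxes generate and evaluate, here is a minimal, library-free sketch of the fast gradient sign method (FGSM), one of the simplest attacks. AdverTorch's real API is PyTorch-based (e.g. its `GradientSignAttack` class); this NumPy version uses a tiny logistic model whose gradient can be written analytically, and all names below are illustrative rather than advertorch's actual interface.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Perturb input x by eps in the direction that increases the loss.

    Model: p = sigmoid(w @ x + b); loss = binary cross-entropy against y.
    d(loss)/dx = (p - y) * w, so the attack adds eps * sign((p - y) * w)
    and clips back to the valid input range [0, 1].
    """
    p = sigmoid(w @ x + b)
    grad = (p - y) * w
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

# Toy example: a linear classifier that currently predicts class 1.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.9, 0.2])
y = 1.0  # true label

x_adv = fgsm(x, y, w, b, eps=0.1)
print(x_adv)  # each feature nudged by eps against the gradient sign
# The model's confidence in the true label drops after the perturbation:
print(sigmoid(w @ x_adv + b) < sigmoid(w @ x + b))
```

A full toolbox wraps the same idea behind attack objects (bounded perturbation, gradient of a loss, projection back into the valid range) and iterates it, as in PGD, across many models and threat models.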


Related projects:

| Repository | Description | Stars |
| --- | --- | --- |
| robustbench/robustbench | A standardized benchmark for measuring the robustness of machine learning models against adversarial attacks | 667 |
| guanghelee/neurips19-certificates-of-robustness | Tight certificates of adversarial robustness for randomly smoothed classifiers | 17 |
| hendrycks/robustness | Evaluates and benchmarks the robustness of deep learning models against various corruptions and perturbations in computer vision tasks | 1,022 |
| max-andr/provably-robust-boosting | Provides provably robust machine learning models against adversarial attacks | 50 |
| eth-sri/diffai | Trains neural networks to be provably robust against adversarial examples using abstract interpretation techniques | 218 |
| thunlp/openattack | A Python toolkit for generating adversarial examples to test the robustness of natural language processing models | 689 |
| google-research/robustness_metrics | A toolset to evaluate the robustness of machine learning models | 466 |
| edisonleeeee/greatx | A toolbox for graph reliability and robustness against noise, distribution shifts, and attacks | 83 |
| airbnb/artificial-adversary | A tool to generate adversarial text examples and test machine learning models against them | 397 |
| madrylab/robustness | A library for training and evaluating neural networks with a focus on adversarial robustness | 918 |
| advboxes/advbox | A toolbox for generating adversarial examples to test the robustness of machine learning models | 1,388 |
| jind11/textfooler | A tool for generating adversarial examples to attack text classification and inference models | 494 |
| sail-sg/mmcbench | A benchmarking framework designed to evaluate the robustness of large multimodal models against common corruption scenarios | 27 |
| mitre/advmlthreatmatrix | A framework to help security analysts understand and prepare for adversarial machine learning attacks on AI systems | 1,050 |
| jhayes14/adversarial-patch | A PyTorch implementation of the adversarial patch attack against image classifiers | 204 |