AdvBox
Adversarial example generator
A toolbox for generating adversarial examples to test the robustness of machine learning models
Advbox is a toolbox for generating adversarial examples that fool neural networks in PaddlePaddle, PyTorch, Caffe2, MxNet, Keras, and TensorFlow, and it can benchmark the robustness of machine learning models. Advbox also provides a command-line tool that generates adversarial examples with zero coding.
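AdvBox's own API is not shown on this page, but the core idea behind attacks such as FGSM (one of the tags below) can be sketched in a few lines. The following is a minimal, self-contained illustration using NumPy and a toy logistic model with a hand-derived gradient; it is not AdvBox code, and the model, weights, and epsilon value are illustrative assumptions.

```python
import numpy as np

def fgsm(x, grad, eps=0.1):
    """Fast Gradient Sign Method: perturb the input a small step
    in the direction of the sign of the loss gradient, then clip
    back to the valid input range [0, 1]."""
    x_adv = x + eps * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "model": logistic regression with fixed (illustrative) weights.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.5, 0.5, 0.5])  # clean input
y = 1.0                        # true label

# Gradient of the binary cross-entropy loss w.r.t. the input x
# simplifies to (prediction - label) * w for this model.
pred = sigmoid(w @ x)
grad_x = (pred - y) * w

x_adv = fgsm(x, grad_x, eps=0.1)

# The attack raises the loss: the model's confidence in the true
# label drops on the perturbed input.
print(sigmoid(w @ x), sigmoid(w @ x_adv))
```

Real toolboxes like AdvBox obtain `grad_x` automatically from the framework's autograd rather than by hand, but the perturbation step is the same.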
1k stars
56 watching
265 forks
Language: Jupyter Notebook
last commit: over 2 years ago
Linked from 1 awesome list
Tags: adversarial-attacks, adversarial-example, adversarial-examples, deep-learning, deepfool, fgsm, graphpipe, machine-learning, onnx, paddlepaddle, security
Related projects:
| Repository | Description | Stars |
|---|---|---|
| | A tool for generating adversarial examples to attack text classification and inference models | 496 |
| | An online tool allowing users to visualize and generate adversarial examples to deceive neural networks | 130 |
| | An adversarial image optimization tool allowing users to generate images designed to deceive machine learning models | 70 |
| | A toolbox for researching and evaluating robustness against attacks on machine learning models | 1,311 |
| | An approach to create adversarial examples for tree-based ensemble models | 22 |
| | An implementation of an adversarial example generation method for deep learning segmentation models | 58 |
| | A method to create adversarial inputs for deep neural networks, designed to fool their predictions | 359 |
| | PyTorch implementation of various Convolutional Neural Network adversarial attack techniques | 354 |
| | A project that reprograms pre-trained neural networks to work on new tasks by fine-tuning them on smaller datasets | 33 |
| | A Python toolkit for generating adversarial examples to test the robustness of natural language processing models | 699 |
| | A tool to generate adversarial text examples and test machine learning models against them | 399 |
| | An implementation of an adversarial reward learning algorithm for generating human-like visual stories from image sequences | 136 |
| | A standardized benchmark for measuring the robustness of machine learning models against adversarial attacks | 682 |
| | Repurposes pre-trained neural networks for new classification tasks through adversarial reprogramming of their inputs | 6 |
| | Trains neural networks to be provably robust against adversarial examples using abstract interpretation techniques | 219 |