TextFooler

A tool for generating adversarial examples to attack text classification and natural language inference models, implementing "A Model for Natural Language Attack on Text Classification and Inference".
496 stars
15 watching
79 forks
Language: Python
Last commit: about 2 years ago
Topics: adversarial-attacks, bert, bert-model, natural-language-inference, natural-language-processing, text-classification
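The listing above does not include a usage snippet, but the core idea is a greedy, query-based attack: rank words by how much deleting them changes the classifier's confidence, then swap the most important words for near-synonyms until the predicted label flips. Below is a minimal, self-contained sketch of that scheme. The `toy_predict_positive` classifier and the hand-written `SYNONYMS` table are illustrative assumptions, not TextFooler's actual API or data.

```python
# A minimal sketch of a greedy synonym-substitution attack in the spirit of
# TextFooler. The classifier and synonym table are toy stand-ins for
# illustration only, not part of the real implementation.
import math
from typing import Callable, Dict, List

def toy_predict_positive(tokens: List[str]) -> float:
    """Toy target classifier: P(positive sentiment) for a token list."""
    positive = {"great", "good", "wonderful", "enjoyable"}
    negative = {"bad", "awful", "boring", "terrible"}
    score = sum(t in positive for t in tokens) - sum(t in negative for t in tokens)
    return 1.0 / (1.0 + math.exp(-(score - 0.5)))

# Hand-written synonym table standing in for the counter-fitted word
# embeddings TextFooler uses to propose semantically similar replacements.
SYNONYMS: Dict[str, List[str]] = {
    "great": ["fine", "decent"],
    "wonderful": ["pleasant", "agreeable"],
    "enjoyable": ["watchable", "passable"],
}

def attack(tokens: List[str], predict: Callable[[List[str]], float]) -> List[str]:
    """Greedy word-importance attack: flip a positive prediction to negative."""
    orig_prob = predict(tokens)
    # Step 1: importance of each word = confidence drop when it is deleted.
    importance = sorted(
        ((orig_prob - predict(tokens[:i] + tokens[i + 1:]), i)
         for i in range(len(tokens))),
        reverse=True,
    )
    # Step 2: in importance order, take the synonym that hurts confidence most.
    adv = list(tokens)
    for _, i in importance:
        candidates = SYNONYMS.get(adv[i], [])
        if not candidates:
            continue
        best = min(candidates, key=lambda w: predict(adv[:i] + [w] + adv[i + 1:]))
        if predict(adv[:i] + [best] + adv[i + 1:]) < predict(adv):
            adv[i] = best
        if predict(adv) < 0.5 <= orig_prob:  # label flipped: stop early
            break
    return adv

if __name__ == "__main__":
    text = "a great and wonderful film , truly enjoyable".split()
    adv = attack(text, toy_predict_positive)
    print("original   :", " ".join(text), "->", round(toy_predict_positive(text), 3))
    print("adversarial:", " ".join(adv), "->", round(toy_predict_positive(adv), 3))
```

The full tool additionally filters candidate replacements with part-of-speech and sentence-similarity checks so the adversarial text stays grammatical and close in meaning to the original.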
Related projects:
| Repository | Description | Stars |
|---|---|---|
| | A toolbox for generating adversarial examples to test the robustness of machine learning models | 1,389 |
| | A tool to generate adversarial text examples and test machine learning models against them | 399 |
| | An adversarial image optimization tool that lets users generate images designed to deceive machine learning models | 70 |
| | An implementation of an adversarial example generation method for deep learning segmentation models | 58 |
| | A Python toolkit for generating adversarial examples to test the robustness of natural language processing models | 699 |
| | An approach for creating adversarial examples for tree-based ensemble models | 22 |
| | A method for creating adversarial inputs designed to fool the predictions of deep neural networks | 359 |
| | An online tool that lets users visualize and generate adversarial examples to deceive neural networks | 130 |
| | A tool for demonstrating and analyzing attacks on federated learning systems by introducing backdoors into distributed machine learning models | 179 |
| | A PyTorch implementation of various convolutional neural network adversarial attack techniques | 354 |
| | A flexible Bayesian text classifier with backend storage support | 158 |
| | A toolbox for researching and evaluating robustness against attacks on machine learning models | 1,311 |
| | A toolkit for generating and analyzing adversarial triggers in natural language processing models | 295 |
| | A Generative Adversarial Networks implementation for modeling illustrations, trained on a custom dataset of anime faces | 269 |
| | An implementation of model poisoning attacks in federated learning | 146 |