adversarial-robustness-toolbox
ML defense toolkit
A Python library providing tools and techniques to defend machine learning models and applications against a range of attacks, including evasion, poisoning, extraction, and inference.
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
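To make the "evasion" category concrete, here is a minimal from-scratch sketch of the Fast Gradient Sign Method (FGSM), the kind of attack ART both implements and defends against. This is not ART's own API; the model and function names below are local to the example, which perturbs an input along the sign of the loss gradient to flip a logistic-regression prediction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Perturb x by eps in the direction that increases the log-loss.

    For logistic regression, d(log-loss)/dx = (p - y) * w.
    """
    p = sigmoid(x @ w + b)           # model's predicted probability
    grad = (p - y) * w               # gradient of the loss w.r.t. the input
    return x + eps * np.sign(grad)   # adversarial example (L-infinity bounded)

# Toy linear model: class depends only on the first feature
w = np.array([2.0, 0.0])
b = 0.0

x = np.array([0.3, 0.0])  # clean input, confidently classified as class 1
y = 1.0                   # true label

x_adv = fgsm(x, y, w, b, eps=0.5)
p_clean = sigmoid(x @ w + b)      # above 0.5: correct prediction
p_adv = sigmoid(x_adv @ w + b)    # below 0.5: prediction flipped
```

A small, bounded perturbation (here at most 0.5 per feature) is enough to flip the classification; ART's evasion attacks generate such examples against wrapped estimators, and its defenses aim to detect or withstand them.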
5k stars · 99 watching · 1k forks · Language: Python · Last commit: 2 months ago · Linked from 1 awesome list
Topics: adversarial-attacks, adversarial-examples, adversarial-machine-learning, ai, artificial-intelligence, attack, blue-team, evasion, extraction, inference, machine-learning, poisoning, privacy, python, red-team, trusted-ai, trustworthy-ai
Related projects:
| Repository | Description | Stars |
| --- | --- | --- |
| | A toolbox for researching and evaluating robustness against attacks on machine learning models | 1,311 |
| | A Python library for benchmarking machine learning systems' vulnerability to adversarial examples | 6,218 |
| | A toolkit for explaining complex AI models and data-driven insights | 1,641 |
| | Provides a framework for computing tight certificates of adversarial robustness for randomly smoothed classifiers | 17 |
| | A comprehensive toolkit for detecting and mitigating bias in machine learning models and datasets | 2,483 |
| | Trains neural networks to be provably robust against adversarial examples using abstract interpretation techniques | 219 |
| | An adversarial attack framework for large vision-language models | 165 |
| | A framework to help security analysts understand and prepare for adversarial machine learning attacks on AI systems | 1,056 |
| | A PyTorch implementation of an adversarial patch system to defend against image attacks | 208 |
| | A Python toolkit for generating adversarial examples to test the robustness of natural language processing models | 699 |
| | Empowers security professionals to identify risks in generative AI systems by providing a framework for proactive risk assessment and red teaming | 1,977 |
| | A toolbox for graph reliability and robustness against noise, distribution shifts, and attacks | 85 |
| | A tool to generate adversarial text examples and test machine learning models against them | 399 |
| | PyTorch implementation of various convolutional neural network adversarial attack techniques | 354 |
| | Research-focused notebooks on developing robust and secure AI models against adversarial attacks | 1 |