adversarial-robustness-toolbox
Machine learning security toolkit
A Python library providing tools and techniques to evaluate and defend machine learning models and applications against adversarial threats, including evasion, poisoning, extraction, and inference attacks.
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
5k stars
99 watching
1k forks
Language: Python
Last commit: 1 day ago
Linked from 1 awesome list
Topics: adversarial-attacks, adversarial-examples, adversarial-machine-learning, ai, artificial-intelligence, attack, blue-team, evasion, extraction, inference, machine-learning, poisoning, privacy, python, red-team, trusted-ai, trustworthy-ai
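As a quick illustration of how ART is typically used, here is a minimal sketch: wrapping a trained scikit-learn model in an ART estimator and crafting adversarial examples with the Fast Gradient Method evasion attack. Module paths and parameter names follow recent ART releases and may differ across versions; the `eps` value is an arbitrary choice for this example.

```python
# Minimal sketch: wrap a scikit-learn model in an ART estimator and craft
# adversarial examples with the Fast Gradient Method (an evasion attack).
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import SklearnClassifier

# Train an ordinary scikit-learn classifier.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Wrap it for ART; clip_values bounds the valid feature range.
classifier = SklearnClassifier(model=model, clip_values=(X.min(), X.max()))

# Generate adversarial examples (eps is an arbitrary perturbation budget).
attack = FastGradientMethod(estimator=classifier, eps=0.5)
X_adv = attack.generate(x=X)

# Accuracy usually drops noticeably on the perturbed inputs.
print("clean accuracy:      ", (model.predict(X) == y).mean())
print("adversarial accuracy:", (model.predict(X_adv) == y).mean())
```

The same wrap-then-attack pattern applies to the other estimator wrappers ART ships, such as those for PyTorch and TensorFlow models.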
Related projects:
| Repository | Description | Stars |
|---|---|---|
| borealisai/advertorch | A toolbox for researching and evaluating robustness against attacks on machine learning models. | 1,311 |
| cleverhans-lab/cleverhans | A Python library for benchmarking machine learning systems' vulnerability to adversarial examples. | 6,218 |
| trusted-ai/aix360 | A toolkit for explaining complex AI models and data-driven insights. | 1,641 |
| guanghelee/neurips19-certificates-of-robustness | A framework for computing tight certificates of adversarial robustness for randomly smoothed classifiers. | 17 |
| trusted-ai/aif360 | A comprehensive toolkit for detecting and mitigating bias in machine learning models and datasets. | 2,483 |
| eth-sri/diffai | Trains neural networks to be provably robust against adversarial examples using abstract interpretation techniques. | 219 |
| yunqing-me/attackvlm | An adversarial attack framework targeting large vision-language models. | 165 |
| mitre/advmlthreatmatrix | A framework to help security analysts understand and prepare for adversarial machine learning attacks on AI systems. | 1,056 |
| jhayes14/adversarial-patch | A PyTorch implementation of adversarial patch attacks on image classifiers. | 208 |
| thunlp/openattack | A Python toolkit for generating adversarial examples to test the robustness of natural language processing models. | 699 |
| azure/pyrit | A framework that helps security professionals identify risks in generative AI systems through proactive risk assessment and red teaming. | 1,977 |
| edisonleeeee/greatx | A toolbox for graph reliability and robustness against noise, distribution shifts, and attacks. | 85 |
| airbnb/artificial-adversary | A tool to generate adversarial text examples and test machine learning models against them. | 399 |
| utkuozbulak/pytorch-cnn-adversarial-attacks | PyTorch implementations of various adversarial attack techniques against convolutional neural networks. | 354 |
| clementsicard/reliable-and-trustworthy-ai-notebooks | Research-focused notebooks on developing AI models that are robust and secure against adversarial attacks. | 1 |