# robustbench

> Adversarial benchmarking tool

A standardized benchmark for measuring the robustness of machine learning models against adversarial attacks.

RobustBench: a standardized adversarial robustness benchmark [NeurIPS 2021 Benchmarks and Datasets Track]
- Stars: 682
- Watching: 9
- Forks: 98
- Language: Python
- Last commit: 5 months ago
- Linked from 1 awesome list
Topics: adversarial-machine-learning, adversarial-robustness, benchmark, model-zoo
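To make the benchmark's subject concrete, here is a minimal, self-contained sketch of an adversarial attack: the Fast Gradient Sign Method (FGSM) applied to a toy linear classifier in pure NumPy. This is illustrative only; RobustBench itself evaluates full deep models with far stronger attack ensembles (e.g. AutoAttack), and the toy model, weights, and epsilon here are made up for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, eps):
    """Perturb x by eps in the sign of the input gradient of the
    binary cross-entropy loss of the linear model sigmoid(w.x + b)."""
    p = sigmoid(w @ x + b)    # model's predicted probability of class 1
    grad_x = (p - y) * w      # d(loss)/dx in closed form for this model
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=4)        # toy model weights (illustrative)
b = 0.1
x = rng.normal(size=4)        # a "clean" input
y = 1.0                       # its true label

x_adv = fgsm_attack(x, y, w, b, eps=0.25)
clean_p = sigmoid(w @ x + b)
adv_p = sigmoid(w @ x_adv + b)
# Moving along the loss gradient cannot raise the model's
# confidence in the true label, so adv_p <= clean_p.
print(adv_p <= clean_p)
```

A robustness benchmark like RobustBench then asks: over a whole test set, how often does a model still predict correctly after such worst-case perturbations within an epsilon budget?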
Related projects:
| Repository | Description | Stars |
|---|---|---|
| | A toolbox for researching and evaluating robustness against attacks on machine learning models | 1,311 |
| | Provides a framework for computing tight certificates of adversarial robustness for randomly smoothed classifiers | 17 |
| | Evaluates and benchmarks the robustness of deep learning models to various corruptions and perturbations in computer vision tasks | 1,030 |
| | Provides provably robust machine learning models against adversarial attacks | 50 |
| | A platform providing reasonably accurate benchmarking results for JavaScript performance comparisons | 44 |
| | A toolset to evaluate the robustness of machine learning models | 466 |
| | A library for training and evaluating neural networks with a focus on adversarial robustness | 921 |
| | Automated testing of software components to identify vulnerabilities and weaknesses | 1,110 |
| | Trains neural networks to be provably robust against adversarial examples using abstract interpretation techniques | 219 |
| | A Python toolkit for generating adversarial examples to test the robustness of natural language processing models | 699 |
| | A benchmarking framework designed to evaluate the robustness of large multimodal models against common corruption scenarios | 27 |
| | An implementation of Federated Robustness Propagation in PyTorch to share robustness across heterogeneous federated learning users | 26 |
| | A tool for measuring and comparing the performance of PHP code | 1,906 |
| | An implementation of robust decision tree based models against adversarial examples using the XGBoost framework | 67 |
| | A comprehensive benchmarking framework for evaluating the performance and safety of reward models in reinforcement learning | 459 |