# provably-robust-boosting

Robust boosting models: provably robust machine learning models against adversarial attacks.

Paper: Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks [NeurIPS 2019]
- 50 stars · 6 watching · 12 forks
- Language: Python
- Last commit: almost 5 years ago
- Linked from 2 awesome lists
Topics: adversarial-attacks, boosted-decision-stumps, boosted-trees, boosting, provable-defense
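The repository's core idea is certifying boosted decision stumps against L∞-bounded input perturbations. A minimal sketch of that certification step is below; the representation (feature index, threshold, two leaf weights) and all function names are illustrative assumptions, not the repository's actual API. Summing per-stump minima is exact when stumps split on distinct features and otherwise yields a conservative lower bound.

```python
def stump_bounds(x, eps, feature, threshold, w_left, w_right):
    """Min and max output of a single decision stump when x is
    perturbed within an L-infinity ball of radius eps.
    A leaf is reachable if some x' in the ball selects it."""
    vals = []
    if x[feature] - eps <= threshold:  # left leaf reachable
        vals.append(w_left)
    if x[feature] + eps > threshold:   # right leaf reachable
        vals.append(w_right)
    return min(vals), max(vals)

def certified_margin_lower_bound(x, y, eps, stumps):
    """Lower bound on y * f(x') over all ||x' - x||_inf <= eps,
    where f(x) sums the stump outputs and y is in {-1, +1}.
    A positive result certifies the prediction in the whole ball."""
    total = 0.0
    for feature, threshold, w_left, w_right in stumps:
        lo, hi = stump_bounds(x, eps, feature, threshold, w_left, w_right)
        total += lo if y > 0 else -hi
    return total
```

For example, a single stump splitting feature 0 at 0.5 with leaf weights (-1, +1) is certified at x = [1.0] for eps = 0.3 (only the right leaf is reachable) but not for eps = 0.6 (both leaves become reachable, so the worst-case margin turns negative).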
Related projects:
| Repository | Description | Stars |
| --- | --- | --- |
| | A toolbox for researching and evaluating robustness against attacks on machine learning models | 1,311 |
| | Provides a framework for computing tight certificates of adversarial robustness for randomly smoothed classifiers | 17 |
| | Trains neural networks to be provably robust against adversarial examples using abstract interpretation techniques | 219 |
| | A standardized benchmark for measuring the robustness of machine learning models against adversarial attacks | 682 |
| | Evaluates and benchmarks the robustness of deep learning models to various corruptions and perturbations in computer vision tasks | 1,030 |
| | A Python library implementing a machine learning boosting framework with probabilistic prediction capabilities | 1,663 |
| | A package implementing a lightweight gradient boosted decision tree algorithm | 68 |
| | An implementation of robust decision tree based models against adversarial examples using the XGBoost framework | 67 |
| | A suite of algorithms and weak learners for the online learning setting in machine learning | 65 |
| | An adversarial attack framework on large vision-language models | 165 |
| | An approach to create adversarial examples for tree-based ensemble models | 22 |
| | Optimal binning for binary, continuous, and multiclass target types with constraints | 460 |
| | Implements methods to find influential training samples in gradient boosted decision tree ensembles | 67 |
| | An implementation of Federated Robustness Propagation in PyTorch to share robustness across heterogeneous federated learning users | 26 |
| | Combats heterogeneity in federated learning by combining adversarial training with client-wise slack during aggregation | 28 |