Awesome-AI-Security

AI Security Resource List

A curated list of resources on AI security threats and defense strategies


Adversarial examples

Explaining and Harnessing Adversarial Examples
Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples
Delving into Transferable Adversarial Examples and Black-box Attacks
On the (Statistical) Detection of Adversarial Examples
The Space of Transferable Adversarial Examples
Adversarial Attacks on Neural Network Policies
Adversarial Perturbations Against Deep Neural Networks for Malware Classification
Crafting Adversarial Input Sequences for Recurrent Neural Networks
Practical Black-Box Attacks against Machine Learning
Adversarial examples in the physical world
Robust Physical-World Attacks on Deep Learning Models
Can you fool AI with adversarial examples on a visual Turing test?
Synthesizing Robust Adversarial Examples
Defensive Distillation is Not Robust to Adversarial Examples
Vulnerability of machine learning models to adversarial examples
Adversarial Examples for Evaluating Reading Comprehension Systems
Adversarial Examples and Adversarial Training by Ian Goodfellow at Stanford
Tactics of Adversarial Attack on Deep Reinforcement Learning Agents
Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey
Did you hear that? Adversarial Examples Against Automatic Speech Recognition
Adversarial Manipulation of Deep Representations
Exploring the Space of Adversarial Images
Note on Attacking Object Detectors with Adversarial Stickers
Adversarial Patch
LOTS about Attacking Deep Features
Generating Adversarial Malware Examples for Black-Box Attacks Based on GAN
Adversarial Images for Variational Autoencoders
Delving into adversarial attacks on deep policies
Simple Black-Box Adversarial Perturbations for Deep Networks
DeepFool: a simple and accurate method to fool deep neural networks
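
As a concrete companion to the first entry above, here is a minimal sketch of the fast gradient sign method (FGSM) introduced in "Explaining and Harnessing Adversarial Examples". It assumes PyTorch; `model`, `x`, and `y` are placeholders for any differentiable classifier and a batch of labeled inputs scaled to [0, 1].

```python
# Minimal FGSM sketch (after Goodfellow et al.). Placeholders: `model`
# is any differentiable classifier, `x` a batch of inputs in [0, 1],
# `y` the true labels.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction of the sign of the loss gradient, then
    # clip back to the valid input range.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```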

Evasion

Query Strategies for Evading Convex-Inducing Classifiers
Evasion attacks against machine learning at test time
Automatically Evading Classifiers: A Case Study on PDF Malware Classifiers
Looking at the Bag is not Enough to Find the Bomb: An Evasion of Structural Methods for Malicious PDF Files Detection
Generic Black-Box End-to-End Attack against RNNs and Other API Calls Based Malware Classifiers
Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition
Fast Feature Fool: A data independent approach to universal adversarial perturbations
One pixel attack for fooling deep neural networks
Adversarial Generative Nets: Neural Network Attacks on State-of-the-Art Face Recognition
RHMD: Evasion-Resilient Hardware Malware Detectors
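
Most of the evasion attacks above share a simple query-based skeleton: propose a small change to the input, keep it if the target's confidence in the true class drops, and repeat. A hedged sketch, where `predict_proba` stands in for any black-box scoring function (e.g. a scikit-learn model's method of the same name):

```python
# Hedged sketch of score-based black-box evasion. `predict_proba` is an
# assumed black-box that maps a batch of inputs to class probabilities.
import numpy as np

def evade(predict_proba, x, true_class, step=0.05, max_queries=1000, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    best = x.copy()
    best_score = predict_proba(best[None])[0, true_class]
    for _ in range(max_queries):
        candidate = best + step * rng.standard_normal(best.shape)
        score = predict_proba(candidate[None])[0, true_class]
        if score < best_score:      # keep moves that reduce confidence
            best, best_score = candidate, score
        if best_score < 0.5:        # likely crossed the decision boundary
            break
    return best
```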

Poisoning

Poisoning Behavioral Malware Clustering
Efficient Label Contamination Attacks Against Black-Box Learning Models
Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization
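
A toy experiment makes the threat concrete: flipping even a modest fraction of training labels (the crudest poisoning strategy, far weaker than the optimized attacks above) visibly degrades a linear model. A self-contained sketch on an arbitrary synthetic dataset:

```python
# Toy label-flipping poisoning experiment; dataset and model are arbitrary.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
for flip_rate in (0.0, 0.1, 0.3):
    y_poisoned = y_tr.copy()
    idx = rng.choice(len(y_tr), size=int(flip_rate * len(y_tr)), replace=False)
    y_poisoned[idx] ^= 1          # flip 0 <-> 1 on the chosen subset
    acc = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned).score(X_te, y_te)
    print(f"flip_rate={flip_rate:.1f}  test accuracy={acc:.3f}")
```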

Feature selection

Is Feature Selection Secure against Training Data Poisoning?
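
One way to probe the question that paper asks is to train LASSO on clean data, append a handful of crafted points, and compare the selected feature sets. The sketch below is purely illustrative; its poisoning strategy (tying one irrelevant feature to the target) is a crude stand-in for the paper's optimized attacks, and the selected set may or may not shift depending on the data.

```python
# Does LASSO's selected feature set survive a few poisoned points?
# Illustrative only; dataset, alpha, and the attack are all arbitrary.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso

X, y, coef = make_regression(n_samples=200, n_features=30, n_informative=5,
                             coef=True, noise=1.0, random_state=0)

def selected(X, y):
    return set(np.flatnonzero(Lasso(alpha=1.0).fit(X, y).coef_))

clean = selected(X, y)

# Append 10 points that correlate an irrelevant feature with the target.
rng = np.random.default_rng(0)
X_p = rng.standard_normal((10, 30))
irrelevant = next(i for i in range(30) if coef[i] == 0)
y_p = 50.0 * X_p[:, irrelevant]
poisoned = selected(np.vstack([X, X_p]), np.concatenate([y, y_p]))

print("selected before poisoning:", sorted(clean))
print("selected after poisoning :", sorted(poisoned))
```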

Misc

Can Machine Learning Be Secure?
On The Integrity Of Deep Learning Systems In Adversarial Settings
Stealing Machine Learning Models via Prediction APIs
Data Driven Exploratory Attacks on Black Box Classifiers in Adversarial Domains
Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures
A Methodology for Formalizing Model-Inversion Attacks
Adversarial Attacks against Intrusion Detection Systems: Taxonomy, Solutions and Open Issues
Adversarial Data Mining for Cyber Security
High Dimensional Spaces, Deep Learning and Adversarial Examples
Neural Networks in Adversarial Setting and Ill-Conditioned Weight Space
Adversarial Machines
Adversarial Task Allocation
Vulnerability of Deep Reinforcement Learning to Policy Induction Attacks
Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning
Adversarial Robustness: Softmax versus Openmax
DEF CON 25 - Hyrum Anderson - Evading next gen AV using AI
Adversarial Learning for Good: My Talk at #34c3 on Deep Learning Blindspots
Universal adversarial perturbations
Camouflage from face detection - CV Dazzle
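
For one of the attacks above, "Stealing Machine Learning Models via Prediction APIs", the core loop is short enough to sketch: label attacker-chosen queries with the black-box model's answers and fit a local surrogate. The victim here is a stand-in trained locally; a real attack would query a remote prediction API.

```python
# Minimal model-extraction sketch. The "victim" is a locally trained
# stand-in for a remote prediction API.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
victim = LogisticRegression(max_iter=1000).fit(X, y)

# Attacker: label random queries with the victim's predictions.
rng = np.random.default_rng(0)
queries = rng.standard_normal((500, 10))
surrogate = LogisticRegression(max_iter=1000).fit(queries, victim.predict(queries))

# Agreement between surrogate and victim on fresh points.
test = rng.standard_normal((1000, 10))
agreement = (surrogate.predict(test) == victim.predict(test)).mean()
print(f"surrogate/victim agreement: {agreement:.3f}")
```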

Code

CleverHans - Python library to benchmark machine learning systems' vulnerability to adversarial examples
Model extraction attacks on Machine-Learning-as-a-Service platforms
Foolbox - Python toolbox to create adversarial examples
Adversarial Machine Learning Library (Ad-lib)
Deep-pwning
DeepFool
Universal adversarial perturbations
Malware Env for OpenAI Gym
Exploring the Space of Adversarial Images
StringSifter - A machine learning tool that ranks strings based on their relevance for malware analysis
EvadeML - Machine Learning in the Presence of Adversaries
Adversarial Machine Learning - PRA Lab
Adversarial Examples and their implications
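
To give a feel for how these libraries are driven, here is a hedged usage sketch for Foolbox, assuming its 3.x API with a PyTorch backend; the tiny untrained model, random data, and epsilon value are placeholders.

```python
# Hedged Foolbox usage sketch, assuming the Foolbox 3.x API.
# The untrained linear model and random "images" are placeholders.
import foolbox as fb
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10)).eval()
fmodel = fb.PyTorchModel(model, bounds=(0, 1))

images = torch.rand(8, 1, 28, 28)     # placeholder batch in [0, 1]
labels = torch.randint(0, 10, (8,))

attack = fb.attacks.LinfPGD()
raw, clipped, success = attack(fmodel, images, labels, epsilons=0.03)
print("attack success rate:", success.float().mean().item())
```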