AttackVLM
Attack framework
An adversarial attack framework on large vision-language models
[NeurIPS-2023] Annual Conference on Neural Information Processing Systems
161 stars
2 watching
8 forks
Language: Python
last commit: about 1 year ago
Topics: adversarial-attack, deep-generative-model, foundation-models, generative-ai, image-to-text-generation, large-language-models, text-to-image-generation, trustworthy-ai, vision-language-model
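As a rough illustration of the kind of attack the topics above describe, the sketch below perturbs an image with a PGD-style loop so that its CLIP embedding moves toward a chosen target caption. This is a minimal sketch under stated assumptions, not AttackVLM's actual pipeline or API: the surrogate model name, target caption, attack budget, and step count are all illustrative choices.

```python
# Illustrative sketch only: PGD-style perturbation of an image so that its
# CLIP embedding matches a target caption. Model name, epsilon, step size,
# and iteration count are assumptions, not values from the AttackVLM repo.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
model.requires_grad_(False)  # we only need gradients w.r.t. the perturbation
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("clean.png").convert("RGB")      # hypothetical input image
target_caption = "a photo of a cat"                 # hypothetical target text

pixels = processor(images=image, return_tensors="pt")["pixel_values"]
text_inputs = processor(text=[target_caption], return_tensors="pt", padding=True)
with torch.no_grad():
    target_emb = model.get_text_features(**text_inputs)
    target_emb = target_emb / target_emb.norm(dim=-1, keepdim=True)

# Assumed budget; for simplicity the bound is applied in the processor's
# normalized pixel space rather than raw [0, 255] space.
epsilon, alpha, steps = 8 / 255, 1 / 255, 40
delta = torch.zeros_like(pixels, requires_grad=True)

for _ in range(steps):
    img_emb = model.get_image_features(pixel_values=pixels + delta)
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    # Maximize cosine similarity between adversarial image and target text.
    loss = -(img_emb * target_emb).sum()
    loss.backward()
    with torch.no_grad():
        delta -= alpha * delta.grad.sign()
        delta.clamp_(-epsilon, epsilon)
        delta.grad.zero_()

adv_pixels = pixels + delta  # candidate adversarial input for a victim VLM
```

In a transfer-based setting such as the one this repository targets, a perturbation crafted against a surrogate encoder like CLIP would then be fed to the victim vision-language model to test whether the manipulated caption or answer carries over; consult the repository itself for the exact method and settings.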
Related projects:
| Repository | Description | Stars |
|---|---|---|
| ys-zong/foolyourvllms | An attack framework for manipulating the output of large language models and vision-language models | 14 |
| chong-z/tree-ensemble-attack | An approach for crafting adversarial examples against tree-based ensemble models | 22 |
| mitre/advmlthreatmatrix | A framework that helps security analysts understand and prepare for adversarial machine learning attacks on AI systems | 1,050 |
| junyizhu-ai/surrogate_model_extension | A framework for analyzing and exploiting vulnerabilities in federated learning models via surrogate model attacks | 9 |
| jind11/textfooler | A tool for generating adversarial examples that attack text classification and inference models | 494 |
| utkuozbulak/adaptive-segmentation-mask-attack | An implementation of an adversarial example generation method for deep learning segmentation models | 57 |
| hfzhang31/a3fl | A framework for attacking federated learning systems with adaptive backdoor attacks | 22 |
| yunishi3/3d-fcr-alphagan | A generative model for 3D multi-object scenes built on a network architecture inspired by auto-encoders and generative adversarial networks | 103 |
| jeremy313/fl-wbc | A defense mechanism against model poisoning attacks in federated learning | 37 |
| zhuohangli/ggl | An attack implementation for testing and evaluating the effectiveness of federated learning privacy defenses | 57 |
| junyizhu-ai/r-gap | A tool for demonstrating and analyzing gradient-based attacks on private data in machine learning models | 34 |
| max-andr/provably-robust-boosting | Provably robust machine learning models against adversarial attacks | 50 |
| yuliang-liu/monkey | A toolkit for building conversational AI models that process both image and text inputs | 1,825 |
| zlijingtao/ressfl | Techniques for improving the resistance of split learning in federated learning against model inversion attacks | 20 |
| yuxie11/r2d2 | A framework for large-scale cross-modal benchmarks and vision-language tasks in Chinese | 157 |