BenchLMM
Visual Model Benchmark
An open-source benchmarking framework for evaluating cross-style visual capability of large multimodal models
[ECCV 2024] BenchLMM: Benchmarking Cross-style Visual Capability of Large Multimodal Models
84 stars
0 watching
6 forks
Language: Python
Last commit: 6 months ago
Topics: benchmark, cv, dataset, large-language-models, large-multimodal-models
Related projects:
| Repository | Description | Stars |
| --- | --- | --- |
| | A benchmark for evaluating large language models' ability to process multimodal input | 322 |
| | A benchmark for evaluating large language models in multiple languages and formats | 93 |
| | A benchmarking framework for large language models | 81 |
| | A benchmark for evaluating the safety and robustness of vision-language models against adversarial attacks | 72 |
| | An LLM-free benchmark suite for evaluating MLLMs' hallucination across various tasks and dimensions | 98 |
| | Evaluates and compares the performance of multimodal large language models on various tasks | 56 |
| | A benchmark for evaluating machine learning algorithms' performance on large datasets | 1,874 |
| | Evaluates and benchmarks multimodal language models' ability to process visual, acoustic, and textual inputs simultaneously | 15 |
| | An implementation of a multimodal language model with capabilities for comprehension and generation | 585 |
| | A comprehensive benchmark for evaluating multimodal large language models on video analysis tasks | 422 |
| | Measures large language models' understanding across massive multitask Chinese datasets | 87 |
| | Compiles bias evaluation datasets and provides access to original data sources for large language models | 115 |
| | Measures the performance of deep learning models in various deployment scenarios | 1,256 |
| | An evaluation framework for multimodal language models' visual capabilities using image and question benchmarks | 296 |
| | Develops an end-to-end model for multiple visual perception and reasoning tasks using a single encoder, decoder, and large language model | 1,336 |