MMVP

Visual model evaluation

An evaluation framework that measures the visual capabilities of multimodal large language models using paired image-and-question benchmarks.
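The core idea of an image-and-question benchmark is simple: present the model with an image plus a question, compare its answer against a reference, and aggregate into an accuracy score. The sketch below illustrates that loop in Python; the `predict_fn` interface and the benchmark item format are assumptions for illustration, not MMVP's actual API.

```python
# Minimal sketch of an image-question benchmark evaluation loop.
# predict_fn(image, question) -> answer string is a hypothetical model
# interface; real harnesses would load images and call a model backend.

def evaluate(predict_fn, benchmark):
    """Return accuracy of predict_fn over (image, question, answer) items."""
    if not benchmark:
        return 0.0
    correct = 0
    for item in benchmark:
        prediction = predict_fn(item["image"], item["question"])
        # Normalize before comparing, since answers are free-form text.
        if prediction.strip().lower() == item["answer"].strip().lower():
            correct += 1
    return correct / len(benchmark)

# Usage with a stub model that always answers "yes".
stub = lambda image, question: "yes"
benchmark = [
    {"image": "img_001.png", "question": "Is the dog facing left?", "answer": "yes"},
    {"image": "img_002.png", "question": "Is the clock at 3:00?", "answer": "no"},
]
print(evaluate(stub, benchmark))  # 0.5
```

Exact-match scoring like this only works for short closed-form answers; frameworks listed below often add judge models or task-specific metrics for open-ended responses.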

GitHub

288 stars
10 watching
7 forks
Language: Python
last commit: 10 months ago

Related projects:

Repository Description Stars
zhourax/vega Develops a multimodal task and dataset to assess vision-language models' ability to handle interleaved image-text inputs. 33
modelscope/evalscope A framework for efficient large model evaluation and performance benchmarking. 248
yuweihao/mm-vet Evaluates the capabilities of large multimodal models using a set of diverse tasks and metrics. 267
chenllliang/mmevalpro A benchmarking framework for evaluating large multimodal models with rigorous metrics and an efficient evaluation pipeline. 22
multimodal-art-projection/omnibench Evaluates and benchmarks multimodal language models' ability to process visual, acoustic, and textual inputs simultaneously. 14
openbmb/viscpm A family of large multimodal models supporting multimodal conversation and text-to-image generation in multiple languages. 1,089
xverse-ai/xverse-v-13b A large multimodal model for visual question answering, trained on 2.1B image-text pairs and 8.2M instruction sequences. 77
huggingface/evaluate An evaluation framework for machine learning models and datasets, providing standardized metrics and tools for comparing model performance. 2,034
freedomintelligence/mllm-bench Evaluates and compares the performance of multimodal large language models on various tasks. 55
aifeg/benchlmm An open-source benchmarking framework for evaluating the cross-style visual capabilities of large multimodal models. 83
mlo-lab/muvi A software framework for multi-view latent variable modeling with domain-informed structured sparsity. 29
yuliang-liu/monkey A toolkit for building conversational AI models that can process image and text inputs. 1,825
evolvinglmms-lab/lmms-eval Tools and an evaluation suite for large multimodal models. 2,058
microsoft/mm-react An AI-powered system that leverages multimodal reasoning and action to analyze visual data and provide insights. 933
opengvlab/multi-modality-arena An evaluation platform for comparing multi-modality models on visual question-answering tasks. 467