MMEvalPro

Model Evaluator

A benchmarking framework that evaluates Large Multimodal Models (LMMs) with rigorous metrics and an efficient evaluation pipeline.

Source code for MMEvalPro, a more trustworthy and efficient benchmark for evaluating LMMs.
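
MMEvalPro's headline idea is to group each original multiple-choice question with perception and knowledge anchor questions and only credit a model when the whole triplet is answered correctly. The sketch below illustrates that strict scoring rule; it is a minimal illustration, not the repository's actual code, and the field names ("origin", "perception", "knowledge", "prediction", "answer") are assumed for the example.

```python
from typing import Dict, List


def genuine_accuracy(triplets: List[Dict[str, Dict[str, str]]]) -> float:
    """Score question triplets with an all-or-nothing rule.

    Each triplet bundles the original question with its perception and
    knowledge anchor questions; a model earns credit only if all three
    predictions match their gold answers. Field names are illustrative.
    """
    if not triplets:
        return 0.0
    correct = 0
    for triplet in triplets:
        if all(
            triplet[part]["prediction"] == triplet[part]["answer"]
            for part in ("origin", "perception", "knowledge")
        ):
            correct += 1
    return correct / len(triplets)


if __name__ == "__main__":
    demo = [
        {
            "origin": {"prediction": "B", "answer": "B"},
            "perception": {"prediction": "A", "answer": "A"},
            "knowledge": {"prediction": "C", "answer": "C"},
        },
        {
            "origin": {"prediction": "B", "answer": "B"},      # right answer, but...
            "perception": {"prediction": "D", "answer": "A"},  # ...fails the perception anchor
            "knowledge": {"prediction": "C", "answer": "C"},
        },
    ]
    # Plain accuracy on the original questions alone would be 1.0; the
    # triplet rule reports 0.5 because the second answer is not backed
    # by correct perception.
    print(genuine_accuracy(demo))
```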

GitHub

22 stars
1 watching
2 forks
Language: Python
Last commit: about 2 months ago

Related projects:

| Repository | Description | Stars |
| --- | --- | --- |
| evolvinglmms-lab/lmms-eval | Tools and evaluation suite for large multimodal models | 2,058 |
| mshukor/evalign-icl | Evaluating and improving large multimodal models through in-context learning | 20 |
| allenai/olmo-eval | An evaluation framework for large language models | 310 |
| freedomintelligence/mllm-bench | Evaluates and compares the performance of multimodal large language models on various tasks | 55 |
| mlabonne/llm-autoeval | A tool to automate the evaluation of large language models in Google Colab using various benchmarks and custom parameters | 558 |
| huggingface/evaluate | An evaluation framework for machine learning models and datasets, providing standardized metrics and tools for comparing model performance | 2,034 |
| pkunlp-icler/pca-eval | An open-source benchmark and evaluation tool for assessing multimodal large language models' performance in embodied decision-making tasks | 100 |
| edublancas/sklearn-evaluation | A tool for evaluating and visualizing machine learning model performance | 3 |
| maluuba/nlg-eval | A toolset for evaluating and comparing natural language generation models | 1,347 |
| tsb0601/mmvp | An evaluation framework for multimodal language models' visual capabilities using image and question benchmarks | 288 |
| yuweihao/mm-vet | Evaluates the capabilities of large multimodal models using a set of diverse tasks and metrics | 267 |
| esmvalgroup/esmvaltool | A community-developed tool for evaluating climate models and providing diagnostic metrics | 223 |
| open-compass/vlmevalkit | A toolkit for evaluating large vision-language models on various benchmarks and datasets | 1,343 |
| openai/simple-evals | A library for evaluating language models using standardized prompts and benchmarking tests | 1,939 |
| huggingface/lighteval | A toolkit for evaluating Large Language Models across multiple backends | 804 |