MMEvalPro

Source code for MMEvalPro, a benchmarking framework that makes evaluation of Large Multimodal Models (LMMs) more trustworthy and efficient by providing rigorous metrics and a streamlined evaluation pipeline.

GitHub: 22 stars · 1 watching · 2 forks
Language: Python
Last commit: 4 months ago
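
To make the "rigorous metrics" claim concrete: the MMEvalPro paper evaluates each original multiple-choice question together with anchor questions probing perception and knowledge, and reports a Genuine Accuracy that credits a model only when it answers the whole triplet correctly. The sketch below illustrates that scoring idea only; it is not the repository's actual code, and the `Triplet` structure and field names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Triplet:
    """One MMEvalPro-style item: the original question plus two anchor
    questions probing perception and knowledge (field names assumed)."""
    origin_correct: bool
    perception_correct: bool
    knowledge_correct: bool

def genuine_accuracy(triplets: list[Triplet]) -> float:
    """Fraction of triplets answered entirely correctly.

    A model earns credit for an item only if it answers the original
    question AND both anchor questions correctly, which penalizes
    lucky guesses on the original multiple-choice question.
    """
    if not triplets:
        return 0.0
    solved = sum(
        t.origin_correct and t.perception_correct and t.knowledge_correct
        for t in triplets
    )
    return solved / len(triplets)

# Example: 3 items, only the first is fully solved -> genuine accuracy 1/3.
results = [
    Triplet(True, True, True),
    Triplet(True, False, True),   # right answer, but failed perception anchor
    Triplet(False, True, True),
]
print(f"Genuine accuracy: {genuine_accuracy(results):.2f}")  # 0.33
```

Scoring the full triplet rather than the original question alone is what distinguishes this style of metric from plain accuracy, where a model could score well by guessing.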

Related projects:

| Repository | Description | Stars |
|---|---|---|
| evolvinglmms-lab/lmms-eval | Tools and evaluation framework for accelerating the development of large multimodal models by providing an efficient way to assess their performance | 2,164 |
| mshukor/evalign-icl | Evaluating and improving large multimodal models through in-context learning | 21 |
| allenai/olmo-eval | A framework for evaluating language models on NLP tasks | 326 |
| freedomintelligence/mllm-bench | Evaluates and compares the performance of multimodal large language models on various tasks | 56 |
| mlabonne/llm-autoeval | A tool to automate the evaluation of large language models in Google Colab using various benchmarks and custom parameters | 566 |
| huggingface/evaluate | An evaluation framework for machine learning models and datasets, providing standardized metrics and tools for comparing model performance | 2,063 |
| pkunlp-icler/pca-eval | An open-source benchmark and evaluation tool for assessing multimodal large language models' performance in embodied decision-making tasks | 99 |
| edublancas/sklearn-evaluation | A tool for evaluating and visualizing machine learning model performance | 3 |
| maluuba/nlg-eval | A toolset for evaluating and comparing natural language generation models | 1,350 |
| tsb0601/mmvp | An evaluation framework for multimodal language models' visual capabilities using image and question benchmarks | 296 |
| yuweihao/mm-vet | Evaluates the capabilities of large multimodal models using a set of diverse tasks and metrics | 274 |
| esmvalgroup/esmvaltool | A community-developed tool for evaluating climate models and providing diagnostic metrics | 230 |
| open-compass/vlmevalkit | An evaluation toolkit for large vision-language models | 1,514 |
| openai/simple-evals | Evaluates language models using standardized benchmarks and prompting techniques | 2,059 |
| huggingface/lighteval | An all-in-one toolkit for evaluating Large Language Models (LLMs) across multiple backends | 879 |