MMEvalPro
Model Evaluator
Source code for MMEvalPro, a benchmarking framework for more trustworthy and efficient evaluation of Large Multimodal Models (LMMs), providing rigorous metrics and an efficient evaluation pipeline.
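As a rough illustration of the kind of metric such an evaluation pipeline aggregates, here is a minimal sketch (not MMEvalPro's actual API; the record fields and function name are hypothetical) that scores multiple-choice LMM predictions against gold answers and reports per-category accuracy:

```python
# Hypothetical sketch, not MMEvalPro's actual API: score multiple-choice
# predictions from an LMM and report accuracy per benchmark category.
from collections import defaultdict

def accuracy_by_category(records):
    """records: iterable of dicts with 'category', 'prediction', 'answer' keys (hypothetical schema)."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        cat = r["category"]
        total[cat] += 1
        # Count a hit when the predicted option letter matches the gold answer.
        correct[cat] += int(r["prediction"].strip().upper() == r["answer"].strip().upper())
    return {cat: correct[cat] / total[cat] for cat in total}

if __name__ == "__main__":
    demo = [
        {"category": "math", "prediction": "A", "answer": "A"},
        {"category": "math", "prediction": "C", "answer": "B"},
        {"category": "physics", "prediction": "D", "answer": "D"},
    ]
    print(accuracy_by_category(demo))  # {'math': 0.5, 'physics': 1.0}
```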
22 stars
1 watching
2 forks
Language: Python
last commit: 5 months ago

Related projects:
| Description | Stars |
|---|---|
| Tools and evaluation framework for accelerating the development of large multimodal models by providing an efficient way to assess their performance | 2,164 |
| Evaluating and improving large multimodal models through in-context learning | 21 |
| A framework for evaluating language models on NLP tasks | 326 |
| Evaluates and compares the performance of multimodal large language models on various tasks | 56 |
| A tool to automate the evaluation of large language models in Google Colab using various benchmarks and custom parameters. | 566 |
| An evaluation framework for machine learning models and datasets, providing standardized metrics and tools for comparing model performance. | 2,063 |
| An open-source benchmark and evaluation tool for assessing multimodal large language models' performance in embodied decision-making tasks | 99 |
| A tool for evaluating and visualizing machine learning model performance | 3 |
| A toolset for evaluating and comparing natural language generation models | 1,350 |
| An evaluation framework for multimodal language models' visual capabilities using image and question benchmarks. | 296 |
| Evaluates the capabilities of large multimodal models using a set of diverse tasks and metrics | 274 |
| A community-developed tool for evaluating climate models and providing diagnostic metrics. | 230 |
| An evaluation toolkit for large vision-language models | 1,514 |
| Evaluates language models using standardized benchmarks and prompting techniques. | 2,059 |
| An all-in-one toolkit for evaluating Large Language Models (LLMs) across multiple backends. | 879 |