lmms-eval
Model evaluator
An evaluation suite and toolset for accelerating the development of large multimodal models (LMMs).
2k stars
3 watching
150 forks
Language: Python
Last commit: 5 days ago
Linked from 1 awesome list
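To give a sense of how the suite is typically driven, here is a minimal sketch that launches an evaluation run from Python. It assumes the CLI conventions lmms-eval inherits from lm-evaluation-harness; the model wrapper name (llava), the checkpoint, and the task name (mme) are illustrative, so consult the repository README for the exact flags and currently supported models and tasks.

```python
# Minimal sketch: launch an lmms-eval run via its command-line entry point.
# Flag names follow the lm-evaluation-harness conventions that lmms-eval
# inherits; model/task names below are assumptions for illustration.
import subprocess

subprocess.run(
    [
        "python", "-m", "lmms_eval",
        "--model", "llava",                                 # model wrapper (assumed)
        "--model_args", "pretrained=liuhaotian/llava-v1.5-7b",  # checkpoint (assumed)
        "--tasks", "mme",                                   # benchmark task (assumed)
        "--batch_size", "1",
        "--output_path", "./logs/",                         # where results are written
    ],
    check=True,  # raise if the evaluation process exits with an error
)
```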
Related projects:
| Repository | Description | Stars |
| --- | --- | --- |
| freedomintelligence/mllm-bench | Evaluates and compares the performance of multimodal large language models on various tasks. | 55 |
| chenllliang/mmevalpro | A benchmarking framework for evaluating large multimodal models with rigorous metrics and an efficient evaluation pipeline. | 22 |
| mlgroupjlu/llm-eval-survey | A repository of papers and resources for evaluating large language models. | 1,433 |
| allenai/olmo-eval | An evaluation framework for large language models. | 310 |
| mlabonne/llm-autoeval | A tool to automate the evaluation of large language models in Google Colab using various benchmarks and custom parameters. | 558 |
| mshukor/evalign-icl | Evaluates and improves large multimodal models through in-context learning. | 20 |
| open-compass/vlmevalkit | A toolkit for evaluating large vision-language models on various benchmarks and datasets. | 1,343 |
| declare-lab/instruct-eval | An evaluation framework for large language models trained with instruction-tuning methods. | 528 |
| prometheus-eval/prometheus-eval | An open-source framework for language model evaluation using Prometheus and GPT-4. | 796 |
| esmvalgroup/esmvaltool | A community-developed tool for evaluating climate models and providing diagnostic metrics. | 223 |
| h2oai/h2o-llm-eval | An evaluation framework for large language models with an Elo rating system and A/B testing capabilities. | 50 |
| maluuba/nlg-eval | A toolset for evaluating and comparing natural language generation models. | 1,347 |
| huggingface/lighteval | A toolkit for evaluating large language models across multiple backends. | 804 |
| evolvinglmms-lab/longva | A model for long-context transfer from language to vision. | 334 |
| modelscope/evalscope | A framework for efficient large-model evaluation and performance benchmarking. | 248 |