MLLM-Bench

Model evaluator

Evaluates and compares the performance of multimodal large language models (MLLMs) across a variety of tasks

MLLM-Bench: Evaluating Multimodal LLMs with Per-sample Criteria

GitHub: 55 stars, 10 watching, 3 forks
Language: Python
Last commit: about 1 month ago
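
The "per-sample criteria" idea is that each benchmark sample carries its own scoring rubric, and a judge model rates a candidate model's answer against that rubric rather than against one global metric. Below is a minimal Python sketch of this pattern; the Sample fields and the model_answer and judge_score callables are illustrative assumptions, not the repository's actual API.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Sample:
    image_path: str  # multimodal input shown to the candidate model (hypothetical field)
    question: str    # task prompt paired with the image (hypothetical field)
    criteria: str    # per-sample rubric the judge scores against (hypothetical field)


def evaluate(
    samples: List[Sample],
    model_answer: Callable[[Sample], str],        # candidate MLLM (assumed interface)
    judge_score: Callable[[Sample, str], float],  # judge model (assumed interface)
) -> float:
    """Score every sample against its own criteria and average the results."""
    scores = []
    for sample in samples:
        answer = model_answer(sample)               # candidate's response to this sample
        scores.append(judge_score(sample, answer))  # judge rates answer vs. sample.criteria
    return sum(scores) / len(scores)


# Toy usage with stub callables (no real models involved):
if __name__ == "__main__":
    data = [Sample("img1.png", "Describe the chart.", "Mentions both axes and the trend.")]
    print(evaluate(data, lambda s: "stub answer", lambda s, a: 7.5))
```

The same shape also covers pairwise comparison between two candidate models: swap judge_score for a callable that returns a win/tie/loss verdict over both answers and aggregate the verdicts instead of averaging scores.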

Related projects:

evolvinglmms-lab/lmms-eval (2,058 stars): Tools and evaluation suite for large multimodal models
chenllliang/mmevalpro (22 stars): A benchmarking framework for evaluating large multimodal models with rigorous metrics and an efficient evaluation pipeline
mshukor/evalign-icl (20 stars): Evaluating and improving large multimodal models through in-context learning
mlgroupjlu/llm-eval-survey (1,433 stars): A repository of papers and resources for evaluating large language models
ailab-cvc/seed-bench (315 stars): A benchmark for evaluating large language models' ability to process multimodal input
mlabonne/llm-autoeval (558 stars): A tool that automates evaluation of large language models in Google Colab using various benchmarks and custom parameters
open-compass/vlmevalkit (1,343 stars): A toolkit for evaluating large vision-language models on various benchmarks and datasets
junyangwang0410/amber (93 stars): An LLM-free benchmark suite for evaluating hallucination in MLLMs across various tasks and dimensions
felixgithub2017/mmcu (87 stars): Evaluates the semantic understanding capabilities of large Chinese language models using a multi-task dataset
yuweihao/mm-vet (267 stars): Evaluates the capabilities of large multimodal models using a set of diverse tasks and metrics
aifeg/benchlmm (83 stars): An open-source benchmarking framework for evaluating the cross-style visual capabilities of large multimodal models
maluuba/nlg-eval (1,347 stars): A toolset for evaluating and comparing natural language generation models
benhamner/metrics (1,627 stars): Implementations of various supervised machine learning evaluation metrics in multiple programming languages
declare-lab/instruct-eval (528 stars): An evaluation framework for large language models trained with instruction-tuning methods
i-gallegos/fair-llm-benchmark (110 stars): Compiles bias evaluation datasets and provides access to original data sources for large language models