PCA-EVAL
Multimodal model evaluator
An open-source benchmark and evaluation tool for assessing the performance of multimodal large language models in embodied decision-making tasks
[ACL 2024] PCA-Bench: Evaluating Multimodal Large Language Models in Perception-Cognition-Action Chain
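The Perception-Cognition-Action (PCA) chain in the paper title refers to scoring a model's decision at three stages: what it perceives in the scene, how it reasons about it, and which action it finally selects. As a minimal conceptual sketch only (not the repository's actual API; the function name and the `perception`/`cognition`/`action` field names below are hypothetical), a per-dimension accuracy over such annotations could look like this:

```python
# Hypothetical sketch of per-dimension PCA scoring. The field names
# ("perception", "cognition", "action") and exact-match scoring are
# assumptions for illustration, not the repository's actual schema.
from typing import Dict, List

DIMENSIONS = ("perception", "cognition", "action")

def pca_accuracy(predictions: List[Dict[str, str]],
                 references: List[Dict[str, str]]) -> Dict[str, float]:
    """Fraction of examples whose answer matches the reference at each PCA stage."""
    totals = {dim: 0 for dim in DIMENSIONS}
    for pred, ref in zip(predictions, references):
        for dim in DIMENSIONS:
            totals[dim] += int(pred[dim].strip().lower() == ref[dim].strip().lower())
    n = max(len(references), 1)
    return {dim: totals[dim] / n for dim in DIMENSIONS}

if __name__ == "__main__":
    preds = [{"perception": "red light", "cognition": "must stop", "action": "brake"}]
    refs = [{"perception": "red light", "cognition": "traffic law requires stopping", "action": "brake"}]
    print(pca_accuracy(preds, refs))  # {'perception': 1.0, 'cognition': 0.0, 'action': 1.0}
```

In practice a benchmark like this typically uses an LLM-based grader rather than exact string matching for the free-form cognition answers; the sketch only illustrates the idea of scoring each link of the chain separately.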
100 stars
5 watching
3 forks
Language: Jupyter Notebook
Last commit: 8 months ago

Related projects:
| Repository | Description | Stars |
|---|---|---|
| mshukor/evalign-icl | Evaluating and improving large multimodal models through in-context learning | 20 |
| chenllliang/mmevalpro | A benchmarking framework for evaluating large multimodal models with rigorous metrics and an efficient evaluation pipeline | 22 |
| freedomintelligence/mllm-bench | Evaluates and compares the performance of multimodal large language models on various tasks | 55 |
| princeton-nlp/charxiv | An evaluation suite for assessing chart understanding in multimodal large language models | 75 |
| openai/simple-evals | A library for evaluating language models using standardized prompts and benchmark tests | 1,939 |
| edublancas/sklearn-evaluation | A tool for evaluating and visualizing machine learning model performance | 3 |
| allenai/olmo-eval | An evaluation framework for large language models | 310 |
| evolvinglmms-lab/lmms-eval | Tools and an evaluation suite for large multimodal models | 2,058 |
| multimodal-art-projection/omnibench | Evaluates and benchmarks multimodal language models' ability to process visual, acoustic, and textual inputs simultaneously | 14 |
| open-compass/vlmevalkit | A toolkit for evaluating large vision-language models on various benchmarks and datasets | 1,343 |
| huggingface/evaluate | An evaluation framework for machine learning models and datasets, providing standardized metrics and tools for comparing model performance | 2,034 |
| maluuba/nlg-eval | A toolset for evaluating and comparing natural language generation models | 1,347 |
| tatsu-lab/alpaca_eval | An automatic evaluation tool for large language models | 1,526 |
| jpmml/jpmml-evaluator-spark | A library for evaluating predictive models stored in PMML format within Apache Spark | 94 |
| mlgroupjlu/llm-eval-survey | A repository of papers and resources for evaluating large language models | 1,433 |