evalscope

Model Evaluator

A streamlined and customizable framework for efficient large model evaluation and performance benchmarking.
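For orientation, a minimal sketch of what a run might look like with evalscope's Python API. The TaskConfig/run_task entry points follow the project's documented quick-start, but the model id, dataset name, and sample limit below are illustrative assumptions, not a verified recipe:

```python
# Minimal sketch, assuming evalscope exposes TaskConfig and run_task
# as in its quick-start docs; model and dataset names are placeholders.
from evalscope import TaskConfig, run_task

task_cfg = TaskConfig(
    model='Qwen/Qwen2.5-0.5B-Instruct',  # any model id the framework can load
    datasets=['gsm8k'],                  # benchmark(s) to evaluate against
    limit=5,                             # cap samples per dataset for a quick smoke test
)

run_task(task_cfg=task_cfg)
```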

GitHub

248 stars
7 watching
31 forks
Language: Python
Last commit: 6 days ago
Linked from 1 awesome list

Tags: evaluation, llm, performance, rag, vlm


Related projects:

| Repository | Description | Stars |
| --- | --- | --- |
| huggingface/evaluate | An evaluation framework for machine learning models and datasets, providing standardized metrics and tools for comparing model performance. | 2,034 |
| open-evals/evals | A framework for evaluating OpenAI models and an open-source registry of benchmarks. | 19 |
| tsb0601/mmvp | An evaluation framework for multimodal language models' visual capabilities using image and question benchmarks. | 288 |
| allenai/olmo-eval | An evaluation framework for large language models. | 310 |
| chenllliang/mmevalpro | A benchmarking framework for evaluating large multimodal models with rigorous metrics and an efficient evaluation pipeline. | 22 |
| evolvinglmms-lab/lmms-eval | Tools and an evaluation suite for large multimodal models. | 2,058 |
| flageval-baai/flageval | An evaluation toolkit and platform for assessing large models across various domains. | 300 |
| prometheus-eval/prometheus-eval | An open-source framework for language model evaluation using Prometheus and GPT-4. | 796 |
| openai/simple-evals | A library for evaluating language models using standardized prompts and benchmarking tests. | 1,939 |
| maluuba/nlg-eval | A toolset for evaluating and comparing natural language generation models. | 1,347 |
| declare-lab/instruct-eval | An evaluation framework for large language models trained with instruction-tuning methods. | 528 |
| aiverify-foundation/llm-evals-catalogue | A collaborative catalogue of large language model evaluation frameworks and papers. | 14 |
| relari-ai/continuous-eval | A comprehensive framework for evaluating Large Language Model (LLM) applications and pipelines with customizable metrics. | 446 |
| esmvalgroup/esmvaltool | A community-developed tool for evaluating climate models and providing diagnostic metrics. | 223 |
| mlgroupjlu/llm-eval-survey | A repository of papers and resources for evaluating large language models. | 1,433 |