evalscope

Model evaluator

A streamlined and customizable framework for efficient large model evaluation and performance benchmarking
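
To make the workflow concrete, here is a minimal sketch of scoring a model on a small benchmark subset with EvalScope's Python API. The TaskConfig/run_task names and arguments follow the project's quick-start documentation, but treat them as assumptions: exact names and options may differ between versions.

    # Minimal sketch of an EvalScope run (assumed API, per the project's
    # quick-start docs; names and arguments may vary across versions).
    from evalscope import TaskConfig, run_task

    task_cfg = TaskConfig(
        model='Qwen/Qwen2.5-0.5B-Instruct',  # illustrative model id
        datasets=['gsm8k'],                  # benchmark(s) to evaluate on
        limit=5,                             # cap samples per dataset for a quick smoke test
    )
    run_task(task_cfg=task_cfg)              # runs inference and scoring, then reports per-dataset results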

GitHub

308 stars
7 watching
36 forks
Language: Python
Last commit: about 1 month ago
Linked from 1 awesome list

evaluation, llm, performance, rag, vlm

Related projects:

Repository Description Stars
huggingface/evaluate An evaluation framework for machine learning models and datasets, providing standardized metrics and tools for comparing model performance. 2,063
open-evals/evals A framework for evaluating OpenAI models and an open-source registry of benchmarks. 19
tsb0601/mmvp An evaluation framework for multimodal language models' visual capabilities, using image-and-question benchmarks. 296
allenai/olmo-eval A framework for evaluating language models on NLP tasks. 326
chenllliang/mmevalpro A benchmarking framework for evaluating Large Multimodal Models with rigorous metrics and an efficient evaluation pipeline. 22
evolvinglmms-lab/lmms-eval An evaluation toolkit that accelerates the development of large multimodal models by providing an efficient way to assess their performance. 2,164
flageval-baai/flageval An evaluation toolkit and platform for assessing large models across various domains. 307
prometheus-eval/prometheus-eval An open-source framework for language model evaluation using Prometheus and GPT-4. 820
openai/simple-evals Evaluates language models using standardized benchmarks and prompting techniques. 2,059
maluuba/nlg-eval A toolset for evaluating and comparing natural language generation models. 1,350
declare-lab/instruct-eval An evaluation framework for instruction-tuned large language models. 535
aiverify-foundation/llm-evals-catalogue A collaborative catalogue of LLM evaluation frameworks and papers. 13
relari-ai/continuous-eval A comprehensive framework for evaluating Large Language Model (LLM) applications and pipelines with customizable metrics. 455
esmvalgroup/esmvaltool A community-developed tool for evaluating climate models and providing diagnostic metrics. 230
mlgroupjlu/llm-eval-survey A repository of papers and resources for evaluating large language models. 1,450