MixEval

LLM evaluator

The official evaluation suite and dynamic data release platform for MixEval, a benchmark mixture for evaluating large language models.

GitHub

Stars: 224
Watchers: 1
Forks: 34
Language: Python
Last commit: 14 days ago
Linked from: 1 awesome list

Topics: benchmark, benchmark-mixture, benchmarking-framework, benchmarking-suite, evaluation, evaluation-framework, foundation-models, large-language-model, large-language-models, large-multimodal-models, llm-evaluation, llm-evaluation-framework, llm-inference, mixeval
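
To make the evaluation side concrete, below is a minimal, hypothetical sketch of the kind of scoring loop an LLM evaluation suite runs over a mixed benchmark: rule-based option matching for multiple-choice items and normalized exact match for free-form items. Every name and the sample data are illustrative assumptions; this is not MixEval's actual API, whose real grading pipeline lives in the repository.

```python
# Hypothetical sketch of scoring a small mixed benchmark split.
# Rule-based matching for multiple-choice items, normalized exact match
# for free-form items. Names and sample data are illustrative only.
import re
from dataclasses import dataclass

@dataclass
class Item:
    question: str
    gold: str   # gold answer: option letter for multiple-choice, text otherwise
    kind: str   # "multiple_choice" or "free_form"

def normalize(text: str) -> str:
    """Lowercase and strip punctuation/extra whitespace for lenient matching."""
    return re.sub(r"[^a-z0-9 ]", "", text.lower()).strip()

def score(item: Item, prediction: str) -> float:
    if item.kind == "multiple_choice":
        # Accept a bare option letter anywhere in the model's reply.
        match = re.search(r"\b([A-D])\b", prediction.upper())
        return float(bool(match) and match.group(1) == item.gold.upper())
    # Free-form: normalized exact match against the gold answer.
    return float(normalize(prediction) == normalize(item.gold))

if __name__ == "__main__":
    items = [
        Item("Which planet is largest? A) Mars B) Jupiter C) Venus D) Mercury",
             "B", "multiple_choice"),
        Item("What is the capital of France?", "Paris", "free_form"),
    ]
    predictions = ["The answer is B, Jupiter.", "paris"]
    accuracy = sum(score(i, p) for i, p in zip(items, predictions)) / len(items)
    print(f"accuracy = {accuracy:.2f}")  # -> accuracy = 1.00
```

Production suites typically replace the exact-match rule with more forgiving answer parsing (or a judge model), but the item/score/aggregate structure is the same.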

Related projects:

| Repository | Description | Stars |
|---|---|---|
| huggingface/lighteval | A toolkit for evaluating Large Language Models across multiple backends | 804 |
| mlgroupjlu/llm-eval-survey | A repository of papers and resources for evaluating large language models | 1,433 |
| relari-ai/continuous-eval | A comprehensive framework for evaluating Large Language Model (LLM) applications and pipelines with customizable metrics | 446 |
| wgryc/phasellm | A framework for managing and testing large language models to evaluate their performance and optimize user experiences | 448 |
| prometheus-eval/prometheus-eval | An open-source framework that enables language model evaluation using Prometheus and GPT-4 | 796 |
| evolvinglmms-lab/lmms-eval | Tools and an evaluation suite for large multimodal models | 2,058 |
| freedomintelligence/mllm-bench | Evaluates and compares the performance of multimodal large language models on various tasks | 55 |
| ailab-cvc/seed-bench | A benchmark for evaluating large language models' ability to process multimodal input | 315 |
| mlabonne/llm-autoeval | A tool to automate the evaluation of large language models in Google Colab using various benchmarks and custom parameters | 558 |
| qcri/llmebench | A benchmarking framework for large language models | 80 |
| chenllliang/mmevalpro | A benchmarking framework for evaluating Large Multimodal Models with rigorous metrics and an efficient evaluation pipeline | 22 |
| flageval-baai/flageval | An evaluation toolkit and platform for assessing large models in various domains | 300 |
| victordibia/llmx | An API that provides a unified interface to multiple large language models for chat fine-tuning | 79 |
| h2oai/h2o-llm-eval | An evaluation framework for large language models with an Elo rating system and A/B testing capabilities | 50 |
| hkust-nlp/ceval | An evaluation suite providing multiple-choice questions for foundation models in various disciplines, with tools for assessing model performance | 1,636 |