MixEval
LLM evaluator
The official evaluation suite and dynamic data release platform for MixEval, a benchmark mixture for evaluating large language models.
230 stars
1 watching
37 forks
Language: Python
Last commit: 2 months ago
Linked from 1 awesome list
Topics: benchmark, benchmark-mixture, benchmarking-framework, benchmarking-suite, evaluation, evaluation-framework, foundation-models, large-language-model, large-language-models, large-multimodal-models, llm-evaluation, llm-evaluation-framework, llm-inference, mixeval
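MixEval is a Python package, so an evaluation run is typically launched as a module from the command line. The sketch below is illustrative only: the entry-point module, flag names, model identifier, and benchmark version shown here are assumptions and may not match the repository's actual CLI; check the repository README for the exact interface.

```bash
# Hypothetical MixEval invocation (entry point and flags are assumptions, not the confirmed CLI):
# evaluate an open-source model on the MixEval-Hard benchmark mixture and
# write its responses to a local output directory.
python -m mix_eval.evaluate \
    --model_name llama_3_8b_instruct \
    --benchmark mixeval_hard \
    --version 2024-06-01 \
    --batch_size 16 \
    --output_dir ./mixeval_outputs
```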
Related projects:
| Repository | Description | Stars |
|---|---|---|
| huggingface/lighteval | An all-in-one toolkit for evaluating Large Language Models (LLMs) across multiple backends. | 879 |
| mlgroupjlu/llm-eval-survey | A repository of papers and resources for evaluating large language models. | 1,450 |
| relari-ai/continuous-eval | Provides a comprehensive framework for evaluating Large Language Model (LLM) applications and pipelines with customizable metrics. | 455 |
| wgryc/phasellm | A framework for managing and testing large language models to evaluate their performance and optimize user experiences. | 451 |
| prometheus-eval/prometheus-eval | An open-source framework that enables language model evaluation using Prometheus and GPT-4. | 820 |
| evolvinglmms-lab/lmms-eval | Tools and an evaluation framework that accelerate the development of large multimodal models by providing an efficient way to assess their performance. | 2,164 |
| freedomintelligence/mllm-bench | Evaluates and compares the performance of multimodal large language models on various tasks. | 56 |
| ailab-cvc/seed-bench | A benchmark for evaluating large language models' ability to process multimodal input. | 322 |
| mlabonne/llm-autoeval | A tool to automate the evaluation of large language models in Google Colab using various benchmarks and custom parameters. | 566 |
| qcri/llmebench | A benchmarking framework for large language models. | 81 |
| chenllliang/mmevalpro | A benchmarking framework for evaluating Large Multimodal Models with rigorous metrics and an efficient evaluation pipeline. | 22 |
| flageval-baai/flageval | An evaluation toolkit and platform for assessing large models across various domains. | 307 |
| victordibia/llmx | An API that provides a unified interface to multiple large language models for chat fine-tuning. | 79 |
| h2oai/h2o-llm-eval | An evaluation framework for large language models with an Elo rating system and A/B testing capabilities. | 50 |
| hkust-nlp/ceval | An evaluation suite providing multiple-choice questions for foundation models across various disciplines, with tools for assessing model performance. | 1,650 |