OLMo-Eval
An evaluation pipeline and suite for benchmarking language models on NLP tasks.
326 stars · 6 watching · 39 forks
Language: Python
Last commit: about 1 month ago
Linked from 2 awesome lists
Related projects:
Repository | Description | Stars |
---|---|---|
openai/simple-evals | Evaluates language models using standardized benchmarks and prompting techniques. | 2,059 |
h2oai/h2o-llm-eval | An evaluation framework for large language models with an Elo rating system and A/B testing capabilities (see the Elo sketch after this table). | 50 |
evolvinglmms-lab/lmms-eval | An evaluation framework and toolkit for efficiently assessing the performance of large multimodal models. | 2,164 |
chenllliang/mmevalpro | A benchmarking framework for evaluating large multimodal models with rigorous metrics and an efficient evaluation pipeline. | 22 |
mlabonne/llm-autoeval | A tool that automates evaluation of large language models in Google Colab using various benchmarks and custom parameters. | 566 |
allenai/reward-bench | A comprehensive benchmarking framework for evaluating the performance and safety of reward models in reinforcement learning. | 459 |
huggingface/evaluate | An evaluation framework for machine learning models and datasets, providing standardized metrics and tools for comparing model performance (see the metric sketch after this table). | 2,063 |
declare-lab/instruct-eval | An evaluation framework for large language models trained with instruction-tuning methods. | 535 |
maluuba/nlg-eval | A toolset for evaluating and comparing natural language generation models. | 1,350 |
open-evals/evals | A framework for evaluating OpenAI models and an open-source registry of benchmarks. | 19 |
modelscope/evalscope | A framework for efficiently evaluating and benchmarking large models. | 308 |
tatsu-lab/alpaca_eval | An automatic evaluation tool for large language models. | 1,568 |
prometheus-eval/prometheus-eval | An open-source framework for language model evaluation using Prometheus and GPT-4. | 820 |
mlgroupjlu/llm-eval-survey | A collection of papers and resources on evaluating large language models. | 1,450 |
relari-ai/continuous-eval | A comprehensive framework for evaluating LLM applications and pipelines with customizable metrics. | 455 |
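
h2oai/h2o-llm-eval above ranks models with an Elo rating system over A/B comparisons. The sketch below shows the standard Elo update such a system applies after a single pairwise comparison; the K-factor, starting ratings, and outcome are illustrative assumptions, not values taken from the project.

```python
# Standard Elo update after one pairwise A/B comparison between two models.
# The K-factor and starting ratings are illustrative, not from h2o-llm-eval.
def elo_update(rating_a: float, rating_b: float, score_a: float, k: float = 32.0) -> tuple[float, float]:
    """score_a is 1.0 if model A wins the comparison, 0.5 for a tie, 0.0 if it loses."""
    expected_a = 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))  # expected win probability for A
    new_a = rating_a + k * (score_a - expected_a)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return new_a, new_b

# Two models start at 1500; model A wins the A/B test, so its rating rises by K/2.
print(elo_update(1500.0, 1500.0, score_a=1.0))  # (1516.0, 1484.0)
```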
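
As an example of the standardized-metric APIs these frameworks expose, here is a minimal sketch using huggingface/evaluate; the toy predictions and references are invented for illustration.

```python
# Minimal metric evaluation with huggingface/evaluate (pip install evaluate).
import evaluate

# Load a standardized metric from the Hub and score some toy predictions.
accuracy = evaluate.load("accuracy")
result = accuracy.compute(predictions=[0, 1, 1, 0], references=[0, 1, 0, 0])
print(result)  # {'accuracy': 0.75}
```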