alpaca_eval

Evaluator

An automatic evaluation tool for large language models

An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast.
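In practice, the evaluator is driven from model generations saved as JSON and scored against reference outputs by an LLM annotator. Below is a minimal sketch, assuming the pip-installed `alpaca-eval` package exposing an `alpaca_eval` command, an OpenAI API key in the environment for the default annotator, and the commonly documented `instruction`/`output`/`generator` fields; exact flags and field names may differ across versions.

```python
# Minimal usage sketch (assumptions: `pip install alpaca-eval` provides the
# `alpaca_eval` CLI, OPENAI_API_KEY is set, and model outputs use the
# "instruction"/"output"/"generator" JSON fields; details may vary by version).
import json
import subprocess

# Hypothetical generations from the model being evaluated.
model_outputs = [
    {
        "instruction": "Explain instruction tuning in one sentence.",
        "output": "Instruction tuning fine-tunes a language model on (instruction, response) pairs.",
        "generator": "my-model",
    },
]

# Save generations in the JSON format the evaluator consumes.
with open("outputs.json", "w") as f:
    json.dump(model_outputs, f)

# Invoke the automatic evaluator; it reports a win rate against the reference model.
subprocess.run(["alpaca_eval", "--model_outputs", "outputs.json"], check=True)
```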

GitHub

2k stars
8 watching
245 forks
Language: Jupyter Notebook
Last commit: 2 months ago
Linked from 1 awesome list

deep-learning, evaluation, foundation-models, instruction-following, large-language-models, leaderboard, nlp, rlhf

Related projects:

| Repository | Description | Stars |
|---|---|---|
| declare-lab/instruct-eval | An evaluation framework for large language models trained with instruction tuning methods | 535 |
| allenai/olmo-eval | A framework for evaluating language models on NLP tasks | 326 |
| maluuba/nlg-eval | A toolset for evaluating and comparing natural language generation models | 1,350 |
| edublancas/sklearn-evaluation | A tool for evaluating and visualizing machine learning model performance | 3 |
| openai/simple-evals | Evaluates language models using standardized benchmarks and prompting techniques | 2,059 |
| huggingface/evaluate | An evaluation framework for machine learning models and datasets, providing standardized metrics and tools for comparing model performance | 2,063 |
| h2oai/h2o-llm-eval | An evaluation framework for large language models with an Elo rating system and A/B testing capabilities | 50 |
| obss/jury | A comprehensive toolkit for evaluating NLP experiments, offering automated metrics and efficient computation | 187 |
| pkunlp-icler/pca-eval | An open-source benchmark and evaluation tool for assessing multimodal large language models' performance in embodied decision-making tasks | 99 |
| ccapndave/elm-eexl | An expression parser and evaluator for the Elm language, used to evaluate logical expressions in educational software | 2 |
| nullne/evaluator | An expression evaluator library written in Go | 41 |
| maja42/goval | A Go library for evaluating arbitrary arithmetic, string, and logic expressions with support for variables and custom functions | 160 |
| open-compass/vlmevalkit | An evaluation toolkit for large vision-language models | 1,514 |
| evolvinglmms-lab/lmms-eval | An evaluation framework and toolset that accelerates the development of large multimodal models by providing an efficient way to assess their performance | 2,164 |
| rlancemartin/auto-evaluator | An evaluation tool for question-answering systems using large language models and natural language processing techniques | 1,065 |