instruct-eval

Model evaluator

An evaluation framework for large language models trained with instruction-tuning methods.

This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks.
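The page does not show the repository's own command-line interface, so the snippet below is only a minimal sketch of the kind of held-out evaluation it performs, written with Hugging Face transformers. The model name, example questions, and exact-match scoring loop are illustrative assumptions, not instruct-eval's actual API.

```python
# Minimal sketch (not instruct-eval's actual interface): score an
# instruction-tuned model on a couple of held-out multiple-choice items.
# Model name, data, and the exact-match metric are illustrative assumptions.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

# Hypothetical held-out questions in an MMLU-style format.
examples = [
    {"prompt": "Question: 2 + 2 = ?\nA. 3\nB. 4\nC. 5\nD. 6\nAnswer:", "answer": "B"},
    {"prompt": "Question: The capital of France is?\nA. Rome\nB. Berlin\nC. Paris\nD. Madrid\nAnswer:", "answer": "C"},
]

correct = 0
for ex in examples:
    inputs = tokenizer(ex["prompt"], return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=5)
    prediction = tokenizer.decode(output_ids[0], skip_special_tokens=True).strip()
    correct += prediction.startswith(ex["answer"])

print(f"Exact-match accuracy: {correct / len(examples):.2f}")
```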

GitHub

535 stars
13 watching
43 forks
Language: Python
Last commit: 10 months ago
Linked from 1 awesome list

Tags: instruct-tuning, llm

Related projects:

| Repository | Description | Stars |
|---|---|---|
| tatsu-lab/alpaca_eval | An automatic evaluation tool for large language models | 1,568 |
| allenai/olmo-eval | A framework for evaluating language models on NLP tasks | 326 |
| openai/simple-evals | Evaluates language models using standardized benchmarks and prompting techniques | 2,059 |
| huggingface/evaluate | An evaluation framework for machine learning models and datasets, providing standardized metrics and tools for comparing model performance | 2,063 |
| evolvinglmms-lab/lmms-eval | Tools and an evaluation framework for accelerating the development of large multimodal models by providing an efficient way to assess their performance | 2,164 |
| maluuba/nlg-eval | A toolset for evaluating and comparing natural language generation models | 1,350 |
| edublancas/sklearn-evaluation | A tool for evaluating and visualizing machine learning model performance | 3 |
| freedomintelligence/mllm-bench | Evaluates and compares the performance of multimodal large language models on various tasks | 56 |
| mlabonne/llm-autoeval | A tool to automate the evaluation of large language models in Google Colab using various benchmarks and custom parameters | 566 |
| chenllliang/mmevalpro | A benchmarking framework for evaluating large multimodal models with rigorous metrics and an efficient evaluation pipeline | 22 |
| mshukor/evalign-icl | Evaluating and improving large multimodal models through in-context learning | 21 |
| h2oai/h2o-llm-eval | An evaluation framework for large language models with an Elo rating system and A/B testing capabilities | 50 |
| modelscope/evalscope | A framework for efficiently evaluating and benchmarking large models | 308 |
| johnsnowlabs/langtest | A tool for testing and evaluating large language models with a focus on AI safety and model assessment | 506 |
| mlgroupjlu/llm-eval-survey | A repository of papers and resources for evaluating large language models | 1,450 |