instruct-eval

Model evaluator

An evaluation framework for large language models trained with instruction tuning methods

This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks.
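The held-out evaluation described above is, at its core, a matter of prompting an instruction-tuned model on a benchmark item and scoring the output against a gold label. The sketch below illustrates that idea with Hugging Face Transformers; it is not instruct-eval's actual interface, and the model name, prompt, and gold answer are placeholders chosen for illustration.

```python
# Illustrative sketch only: NOT instruct-eval's real API.
# Shows the general shape of quantitative evaluation of an
# instruction-tuned model on a held-out multiple-choice item.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "google/flan-t5-base"  # assumption: any instruction-tuned seq2seq model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# One held-out item in MMLU-style multiple-choice format.
prompt = (
    "Answer the following multiple-choice question with A, B, C, or D.\n"
    "Question: Which planet is known as the Red Planet?\n"
    "A. Venus\nB. Mars\nC. Jupiter\nD. Saturn\nAnswer:"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=2)
prediction = tokenizer.decode(outputs[0], skip_special_tokens=True).strip()

# Score by exact match against the gold label; benchmark accuracy
# is the mean of these per-item scores.
gold = "B"
print(prediction, "correct" if prediction.startswith(gold) else "incorrect")
```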

GitHub

528 stars
13 watching
42 forks
Language: Python
Last commit: 9 months ago
Linked from 1 awesome list

Tags: instruct-tuning, llm

Related projects:

Repository | Description | Stars
tatsu-lab/alpaca_eval | An automatic evaluation tool for large language models | 1,526
allenai/olmo-eval | An evaluation framework for large language models | 311
openai/simple-evals | A library for evaluating language models using standardized prompts and benchmarking tests | 1,939
huggingface/evaluate | An evaluation framework for machine learning models and datasets, providing standardized metrics and tools for comparing model performance | 2,034
evolvinglmms-lab/lmms-eval | Tools and an evaluation suite for large multimodal models | 2,058
maluuba/nlg-eval | A toolset for evaluating and comparing natural language generation models | 1,349
edublancas/sklearn-evaluation | A tool for evaluating and visualizing machine learning model performance | 3
freedomintelligence/mllm-bench | Evaluates and compares the performance of multimodal large language models on various tasks | 55
mlabonne/llm-autoeval | A tool to automate the evaluation of large language models in Google Colab using various benchmarks and custom parameters | 558
chenllliang/mmevalpro | A benchmarking framework for evaluating large multimodal models with rigorous metrics and an efficient evaluation pipeline | 22
mshukor/evalign-icl | Evaluating and improving large multimodal models through in-context learning | 20
h2oai/h2o-llm-eval | An evaluation framework for large language models with an Elo rating system and A/B testing capabilities | 50
modelscope/evalscope | A framework for efficient large-model evaluation and performance benchmarking | 248
johnsnowlabs/langtest | A tool for testing and evaluating large language models with a focus on AI safety and model assessment | 501
mlgroupjlu/llm-eval-survey | A repository of papers and resources for evaluating large language models | 1,433