lighteval

LLM evaluator

An all-in-one toolkit for evaluating Large Language Models (LLMs) across multiple backends.

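To give a sense of what such an evaluation harness automates, below is a minimal, self-contained sketch of log-likelihood scoring on multiple-choice items, one of the standard evaluation modes toolkits in this space typically support. It does not use lighteval's own API; the model name ("gpt2"), the toy items, and the helper choice_loglikelihood are illustrative placeholders, and it assumes torch and transformers are installed.

```python
# A sketch of the kind of loop an LLM evaluation harness automates:
# score multiple-choice items by comparing the log-likelihood the model
# assigns to each candidate answer. Not lighteval's API; all names and
# items below are illustrative placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any small causal LM works for this sketch
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# Toy benchmark: each item has a prompt, candidate completions, and the gold index.
items = [
    {"prompt": "The capital of France is", "choices": [" Paris", " Berlin"], "gold": 0},
    {"prompt": "Water freezes at", "choices": [" 0 degrees Celsius", " 100 degrees Celsius"], "gold": 0},
]

def choice_loglikelihood(prompt: str, choice: str) -> float:
    """Sum of token log-probabilities assigned to `choice`, conditioned on `prompt`."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + choice, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    log_probs = torch.log_softmax(logits, dim=-1)
    total = 0.0
    # Score only the choice tokens; assumes the prompt tokenizes to the same
    # prefix inside prompt + choice, a simplification that real harnesses handle.
    for pos in range(prompt_ids.shape[1], full_ids.shape[1]):
        token_id = full_ids[0, pos]
        total += log_probs[0, pos - 1, token_id].item()
    return total

correct = 0
for item in items:
    scores = [choice_loglikelihood(item["prompt"], c) for c in item["choices"]]
    correct += int(max(range(len(scores)), key=scores.__getitem__) == item["gold"])

print(f"accuracy: {correct / len(items):.2f}")
```

In practice, a toolkit like lighteval wraps this kind of loop in predefined task definitions, metrics, and backend selection so that results are comparable across models; the snippet only illustrates the underlying mechanics.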

GitHub

879 stars
29 watching
105 forks
Language: Python
Last commit: about 1 month ago
Linked from 2 awesome lists

Topics: evaluation, evaluation-framework, evaluation-metrics, huggingface

Related projects:

Repository | Description | Stars
huggingface/evaluate | An evaluation framework for machine learning models and datasets, providing standardized metrics and tools for comparing model performance | 2,063
mlabonne/llm-autoeval | A tool to automate the evaluation of large language models in Google Colab using various benchmarks and custom parameters | 566
psycoy/mixeval | An evaluation suite and dynamic data release platform for large language models | 230
evolvinglmms-lab/lmms-eval | Tools and an evaluation framework for accelerating the development of large multimodal models by providing an efficient way to assess their performance | 2,164
chenllliang/mmevalpro | A benchmarking framework for evaluating large multimodal models with rigorous metrics and an efficient evaluation pipeline | 22
allenai/olmo-eval | A framework for evaluating language models on NLP tasks | 326
mlgroupjlu/llm-eval-survey | A repository of papers and resources for evaluating large language models | 1,450
flageval-baai/flageval | An evaluation toolkit and platform for assessing large models in various domains | 307
prometheus-eval/prometheus-eval | An open-source framework that enables language model evaluation using Prometheus and GPT-4 | 820
open-compass/vlmevalkit | An evaluation toolkit for large vision-language models | 1,514
huggingface/torchmoji | An open-source implementation of a deep learning model that analyzes sentiment and emotion in text using emojis | 919
maluuba/nlg-eval | A toolset for evaluating and comparing natural language generation models | 1,350
huggingface/pytorch-openai-transformer-lm | An implementation of OpenAI's transformer language model in PyTorch with pre-trained weights and fine-tuning capabilities | 1,511
aiverify-foundation/llm-evals-catalogue | A collaborative catalogue of LLM evaluation frameworks and papers | 13
h2oai/h2o-llm-eval | An evaluation framework for large language models with an Elo rating system and A/B testing capabilities | 50