lighteval
LLM evaluator
An all-in-one toolkit for evaluating Large Language Models (LLMs) across multiple backends.
879 stars
29 watching
105 forks
Language: Python
Last commit: 3 months ago
Linked from 2 awesome lists
Topics: evaluation, evaluation-framework, evaluation-metrics, huggingface
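
As a quick illustration of what "evaluating across multiple backends" looks like in practice, the sketch below launches a single-task run through the Accelerate-backed CLI entry point. It is loosely based on the project's README from around this snapshot: the script name, flag names, and the `suite|task|num_few_shot|truncation_flag` task specification are assumptions that have changed across lighteval releases, so verify them against the installed version.

```python
# Minimal sketch: kick off one lighteval evaluation run via its CLI entry point.
# ASSUMPTION: the script name, flags, and task-spec format below follow the
# lighteval README of this era and may differ in newer releases.
import subprocess

subprocess.run(
    [
        "python", "run_evals_accelerate.py",
        "--model_args", "pretrained=gpt2",           # any Hugging Face model id
        "--tasks", "lighteval|truthfulqa:mc|0|0",    # suite|task|num_few_shot|truncation_flag
        "--output_dir", "./evals/",                  # where result files are written
    ],
    check=True,
)
```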
Related projects:
| Repository | Description | Stars |
| --- | --- | --- |
|  | An evaluation framework for machine learning models and datasets, providing standardized metrics and tools for comparing model performance. | 2,063 |
|  | A tool to automate the evaluation of large language models in Google Colab using various benchmarks and custom parameters. | 566 |
|  | An evaluation suite and dynamic data release platform for large language models. | 230 |
|  | Tools and an evaluation framework for accelerating the development of large multimodal models by providing an efficient way to assess their performance. | 2,164 |
|  | A benchmarking framework for evaluating large multimodal models with rigorous metrics and an efficient evaluation pipeline. | 22 |
|  | A framework for evaluating language models on NLP tasks. | 326 |
|  | A repository of papers and resources for evaluating large language models. | 1,450 |
|  | An evaluation toolkit and platform for assessing large models across various domains. | 307 |
|  | An open-source framework that enables language model evaluation using Prometheus and GPT-4. | 820 |
|  | An evaluation toolkit for large vision-language models. | 1,514 |
|  | An open-source implementation of a deep learning model that analyzes sentiment and emotion in text using emojis. | 919 |
|  | A toolset for evaluating and comparing natural language generation models. | 1,350 |
|  | A PyTorch implementation of OpenAI's transformer language model with pre-trained weights and fine-tuning capabilities. | 1,511 |
|  | A collaborative catalogue of LLM evaluation frameworks and papers. | 13 |
|  | An evaluation framework for large language models with an Elo rating system and A/B testing capabilities. | 50 |