continuous-eval
LLM evaluation framework
Provides a comprehensive framework for evaluating Large Language Model (LLM) applications and pipelines with customizable metrics; a usage sketch follows the project metadata below.
Data-Driven Evaluation for LLM-Powered Applications
455 stars
4 watching
31 forks
Language: Python
last commit: 6 months ago
Linked from 1 awesome list
evaluation-framework · evaluation-metrics · information-retrieval · llm-evaluation · llmops · rag · retrieval-augmented-generation
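A minimal sketch of scoring a single record with continuous-eval's deterministic metrics. The module paths, class names, and keyword arguments below are assumptions based on the project's documentation and may differ between releases; treat this as illustrative rather than a verbatim API reference.

```python
# Sketch only: module paths, metric names, and field keys are assumed
# from continuous-eval's docs and may vary across versions.
from continuous_eval.metrics.retrieval import PrecisionRecallF1
from continuous_eval.metrics.generation.text import DeterministicAnswerCorrectness

# One evaluation record for a RAG pipeline: question, retrieved chunks,
# reference chunks, the generated answer, and reference answers.
datum = {
    "question": "What is the capital of France?",
    "retrieved_context": [
        "Paris is the capital and most populous city of France.",
        "Lyon is a major city in France.",
    ],
    "ground_truth_context": ["Paris is the capital of France."],
    "answer": "Paris",
    "ground_truth_answers": ["Paris"],
}

# Retrieval quality: precision/recall/F1 of retrieved vs. reference context.
retrieval_metric = PrecisionRecallF1()
print(retrieval_metric(
    retrieved_context=datum["retrieved_context"],
    ground_truth_context=datum["ground_truth_context"],
))

# Generation quality: deterministic (non-LLM) answer correctness.
correctness_metric = DeterministicAnswerCorrectness()
print(correctness_metric(
    answer=datum["answer"],
    ground_truth_answers=datum["ground_truth_answers"],
))
```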
Related projects:
| Repository | Description | Stars |
| --- | --- | --- |
| | A repository of papers and resources for evaluating large language models. | 1,450 |
| | A collaborative catalogue of LLM evaluation frameworks and papers | 13 |
| | An evaluation framework for large language models with Elo rating system and A/B testing capabilities | 50 |
| | A framework for managing and testing large language models to evaluate their performance and optimize user experiences. | 451 |
| | A framework for evaluating language models on NLP tasks | 326 |
| | An evaluation suite and dynamic data release platform for large language models | 230 |
| | A high-performance LLM written in Python/Jax for training and inference on Google Cloud TPUs and GPUs. | 1,557 |
| | An open-source toolkit for building and evaluating large language models | 267 |
| | A tool for evaluating the performance of large language model APIs | 678 |
| | A benchmarking framework for large language models | 81 |
| | An open-source framework that enables language model evaluation using Prometheus and GPT4 | 820 |
| | An API that provides a unified interface to multiple large language models for chat fine-tuning | 79 |
| | A flexible RL training framework designed for large language models | 427 |
| | A framework for efficiently evaluating and benchmarking large models | 308 |
| | A tool to automate the evaluation of large language models in Google Colab using various benchmarks and custom parameters. | 566 |