continuous-eval
LLM evaluation framework
Provides a comprehensive framework for evaluating Large Language Model (LLM) applications and pipelines with customizable metrics
Data-Driven Evaluation for LLM-Powered Applications
446 stars
4 watching
29 forks
Language: Python
last commit: 3 months ago
Linked from 1 awesome list
evaluation-framework, evaluation-metrics, information-retrieval, llm-evaluation, llmops, rag, retrieval-augmented-generation
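The customizable metrics and the retrieval/RAG topics above point at the typical workflow: scoring each stage of a retrieval-augmented generation pipeline against ground-truth data. The snippet below is a minimal, hypothetical sketch of such a retrieval metric, not the continuous-eval API itself; `RetrievalDatum` and `precision_recall_f1` are illustrative names, so consult the project's documentation for the actual metric classes and import paths.

```python
# Hypothetical sketch of a data-driven retrieval metric for a RAG pipeline.
# RetrievalDatum and precision_recall_f1 are illustrative names, not the
# continuous-eval API; see the repository's docs for the real metric classes.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class RetrievalDatum:
    question: str
    retrieved_context: List[str]      # chunks returned by the retriever
    ground_truth_context: List[str]   # chunks a correct answer depends on


def precision_recall_f1(datum: RetrievalDatum) -> Dict[str, float]:
    """Score one pipeline run by comparing retrieved chunks against ground truth."""
    retrieved = set(datum.retrieved_context)
    relevant = set(datum.ground_truth_context)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    denom = precision + recall
    return {
        "precision": precision,
        "recall": recall,
        "f1": 2 * precision * recall / denom if denom else 0.0,
    }


if __name__ == "__main__":
    datum = RetrievalDatum(
        question="What is the capital of France?",
        retrieved_context=["Paris is the capital of France.", "France is in Europe."],
        ground_truth_context=["Paris is the capital of France."],
    )
    print(precision_recall_f1(datum))  # precision 0.5, recall 1.0, f1 ~0.67
```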
Related projects:
Repository | Description | Stars |
---|---|---|
mlgroupjlu/llm-eval-survey | A repository of papers and resources for evaluating large language models. | 1,433 |
aiverify-foundation/llm-evals-catalogue | A collaborative catalogue of Large Language Model evaluation frameworks and papers. | 14 |
h2oai/h2o-llm-eval | An evaluation framework for large language models with an Elo rating system and A/B testing capabilities (the Elo update is sketched after this table). | 50 |
wgryc/phasellm | A framework for managing and testing large language models to evaluate their performance and optimize user experiences. | 448 |
allenai/olmo-eval | An evaluation framework for large language models. | 310 |
psycoy/mixeval | An evaluation suite and dynamic data release platform for large language models. | 224 |
ai-hypercomputer/maxtext | A high-performance LLM implementation in Python/JAX for training and inference on Google Cloud TPUs and GPUs. | 1,529 |
aiplanethub/beyondllm | An open-source toolkit for building and evaluating large language models. | 261 |
ray-project/llmperf | A tool for evaluating the performance of large language model APIs. | 641 |
qcri/llmebench | A benchmarking framework for large language models. | 80 |
prometheus-eval/prometheus-eval | An open-source framework that enables language model evaluation using Prometheus and GPT-4. | 796 |
victordibia/llmx | A library providing a unified API to multiple chat fine-tuned large language models. | 79 |
volcengine/verl | A flexible and efficient reinforcement learning framework designed for large language models. | 315 |
modelscope/evalscope | A framework for efficient large model evaluation and performance benchmarking. | 248 |
mlabonne/llm-autoeval | A tool to automate the evaluation of large language models in Google Colab using various benchmarks and custom parameters. | 558 |
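Several of the projects above rank models from pairwise comparisons; h2oai/h2o-llm-eval, for example, advertises an Elo rating system for A/B testing. For reference, here is a minimal sketch of the standard Elo update (the generic formula, not code taken from that repository):

```python
# Standard Elo update for one A/B comparison between two models.
# Generic formula for illustration; not code from h2o-llm-eval.
from typing import Tuple


def elo_update(rating_a: float, rating_b: float, score_a: float, k: float = 32.0) -> Tuple[float, float]:
    """Return updated (rating_a, rating_b); score_a is 1.0 if A wins, 0.0 if B wins, 0.5 for a tie."""
    expected_a = 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))
    expected_b = 1.0 - expected_a
    return (
        rating_a + k * (score_a - expected_a),
        rating_b + k * ((1.0 - score_a) - expected_b),
    )


if __name__ == "__main__":
    # Model A (rated 1000) beats model B (rated 1100) in one head-to-head judgment:
    # the lower-rated winner gains about 20.5 points and B loses the same amount.
    print(elo_update(1000.0, 1100.0, score_a=1.0))
```

With K = 32, a single upset by a 100-point underdog shifts both ratings by roughly 20 points, which is why repeated pairwise judgments converge to a stable ranking.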