evalscope
Model evaluator
A streamlined and customizable framework for efficient large model evaluation and performance benchmarking
308 stars
7 watching
36 forks
Language: Python
last commit: 3 months ago
Linked from 1 awesome list
Topics: evaluation, llm, performance, rag, vlm
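To give a sense of how the framework is used, here is a minimal sketch of an evaluation run via the Python API, based on the project's quick-start. The model ID, dataset names, and `limit` value are illustrative assumptions; check the repository for the current interface.

```python
# Minimal evalscope run, sketched from the project's quick-start docs.
# The model ID, dataset names, and `limit` value below are illustrative
# assumptions; the exact interface may differ between versions.
from evalscope.run import run_task
from evalscope.config import TaskConfig

task_cfg = TaskConfig(
    model='Qwen/Qwen2.5-0.5B-Instruct',  # any ModelScope / Hugging Face model ID
    datasets=['gsm8k', 'arc'],           # built-in benchmark names
    limit=5,                             # cap samples per dataset for a quick smoke test
)

run_task(task_cfg=task_cfg)
```

The project's documentation also exposes the same workflow through an `evalscope eval` command-line entry point, so evaluations can be scripted without writing Python.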
Related projects:
| Repository | Description | Stars |
| --- | --- | --- |
| | An evaluation framework for machine learning models and datasets, providing standardized metrics and tools for comparing model performance. | 2,063 |
| | A framework for evaluating OpenAI models and an open-source registry of benchmarks. | 19 |
| | An evaluation framework for multimodal language models' visual capabilities using image and question benchmarks. | 296 |
| | A framework for evaluating language models on NLP tasks. | 326 |
| | A benchmarking framework for evaluating large multimodal models with rigorous metrics and an efficient evaluation pipeline. | 22 |
| | Tools and an evaluation framework for accelerating large multimodal model development through efficient performance assessment. | 2,164 |
| | An evaluation toolkit and platform for assessing large models across various domains. | 307 |
| | An open-source framework that enables language model evaluation using Prometheus and GPT-4. | 820 |
| | Evaluates language models using standardized benchmarks and prompting techniques. | 2,059 |
| | A toolset for evaluating and comparing natural language generation models. | 1,350 |
| | An evaluation framework for large language models trained with instruction-tuning methods. | 535 |
| | A collaborative catalogue of LLM evaluation frameworks and papers. | 13 |
| | A comprehensive framework for evaluating Large Language Model (LLM) applications and pipelines with customizable metrics. | 455 |
| | A community-developed tool for evaluating climate models and providing diagnostic metrics. | 230 |
| | A repository of papers and resources for evaluating large language models. | 1,450 |