llm-autoeval
Model evaluator
A tool to automate the evaluation of large language models in Google Colab using various benchmarks and custom parameters.
Automatically evaluate your LLMs in Google Colab
566 stars
7 watching
93 forks
Language: Python
Last commit: 11 months ago
Linked from 1 awesome list
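To give a sense of the workflow the description above refers to, here is a minimal sketch of how an automated evaluation run might be parameterized in a Colab notebook. The variable names, benchmark label, and helper function are illustrative assumptions, not llm-autoeval's actual API.

```python
# Illustrative sketch only: the names below are assumptions, not llm-autoeval's real parameters.
# In a Colab notebook, values like these are typically filled in via a form cell and then
# handed to the evaluation backend (e.g., a benchmark harness running on a GPU runtime).

MODEL_ID = "org/model-name"      # hypothetical Hugging Face model identifier
BENCHMARK_SUITE = "general"      # hypothetical name for a group of benchmarks
CUSTOM_PARAMS = {                # custom knobs for the run
    "batch_size": 8,
    "max_new_tokens": 256,
}


def build_eval_config(model_id: str, suite: str, params: dict) -> dict:
    """Assemble a single configuration dict describing one evaluation run."""
    return {"model": model_id, "suite": suite, **params}


if __name__ == "__main__":
    config = build_eval_config(MODEL_ID, BENCHMARK_SUITE, CUSTOM_PARAMS)
    print(config)  # in a real notebook this config would be passed to the evaluation runner
```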
Related projects:
| Repository | Description | Stars |
| --- | --- | --- |
| | A framework for evaluating language models on NLP tasks | 326 |
| | A benchmarking framework for evaluating Large Multimodal Models with rigorous metrics and an efficient evaluation pipeline | 22 |
| | Tools and an evaluation framework that accelerate the development of large multimodal models by providing an efficient way to assess their performance | 2,164 |
| | An all-in-one toolkit for evaluating Large Language Models (LLMs) across multiple backends | 879 |
| | A toolset for evaluating and comparing natural language generation models | 1,350 |
| | A repository of papers and resources for evaluating large language models | 1,450 |
| | Evaluates and compares the performance of multimodal large language models on various tasks | 56 |
| | An evaluation framework for large language models trained with instruction-tuning methods | 535 |
| | Evaluates the legal knowledge of large language models using a custom benchmarking framework | 273 |
| | A tool for evaluating and visualizing machine learning model performance | 3 |
| | An evaluation framework for large language models with an Elo rating system and A/B testing capabilities | 50 |
| | An evaluation framework for machine learning models and datasets, providing standardized metrics and tools for comparing model performance | 2,063 |
| | Automated evaluation of language models on question-answering tasks | 749 |
| | An evaluation tool for question-answering systems using large language models and natural language processing techniques | 1,065 |
| | A multilingual large language model developed by XVERSE Technology Inc. | 50 |