simple-evals
Model Evaluator: evaluates language models using standardized benchmarks and prompting techniques.
2k stars · 28 watching · 177 forks · Language: Python · Last commit: 3 months ago · Linked from 1 awesome list
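The core pattern behind an eval suite like this is simple: sample a completion from the model for each benchmark question, grade it against a reference answer, and aggregate a score. The sketch below illustrates that loop with an exact-match grader; the names (`Example`, `ExactMatchEval`, the stub `sampler`) are illustrative assumptions, not simple-evals' actual API.

```python
import re
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch of a standardized-benchmark eval loop.
# Class and function names are illustrative, not simple-evals' API.

@dataclass
class Example:
    question: str
    answer: str  # reference answer to grade against

def normalize(text: str) -> str:
    """Lowercase and strip punctuation so grading tolerates formatting noise."""
    return re.sub(r"[^a-z0-9 ]", "", text.lower()).strip()

class ExactMatchEval:
    def __init__(self, examples: list[Example]):
        self.examples = examples

    def run(self, sampler: Callable[[str], str]) -> float:
        """Sample one completion per question; return mean exact-match accuracy."""
        correct = 0
        for ex in self.examples:
            completion = sampler(ex.question)
            if normalize(completion) == normalize(ex.answer):
                correct += 1
        return correct / len(self.examples)

if __name__ == "__main__":
    benchmark = [
        Example("What is the capital of France?", "Paris"),
        Example("How many legs does a spider have?", "8"),
    ]

    # Stub sampler standing in for a real model call (e.g., a chat API).
    def sampler(prompt: str) -> str:
        return {"What is the capital of France?": "Paris"}.get(prompt, "unknown")

    print(f"accuracy: {ExactMatchEval(benchmark).run(sampler):.2f}")
```

In practice the sampler would wrap a model API call, and the grader would vary by benchmark (multiple-choice letter matching, numeric comparison, or model-graded rubrics), but the sample-grade-aggregate loop stays the same.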
Related projects:
| Repository | Description | Stars |
| --- | --- | --- |
| | A framework for evaluating OpenAI models and an open-source registry of benchmarks | 19 |
| | A framework for evaluating language models on NLP tasks | 326 |
| | An evaluation framework for machine learning models and datasets, providing standardized metrics and tools for comparing model performance | 2,063 |
| | An evaluation framework for large language models trained with instruction tuning methods | 535 |
| | An automatic evaluation tool for large language models | 1,568 |
| | A benchmarking framework for evaluating large multimodal models with rigorous metrics and an efficient evaluation pipeline | 22 |
| | A toolset for evaluating and comparing natural language generation models | 1,350 |
| | A tool for automatically evaluating RAG models by generating synthetic data and fine-tuning classifiers | 499 |
| | Evaluates foundation models on human-centric tasks with diverse exams and question types | 714 |
| | An evaluation toolkit for large vision-language models | 1,514 |
| | An open-source benchmark and evaluation tool for assessing multimodal large language models on embodied decision-making tasks | 99 |
| | A comprehensive toolkit for evaluating NLP experiments, offering automated metrics and efficient computation | 187 |
| | Tools and an evaluation framework that accelerate development of large multimodal models by streamlining performance assessment | 2,164 |
| | A tool for evaluating and visualizing machine learning model performance | 3 |
| | A Python scripting infrastructure for evaluating and visualizing meteorological model performance | 99 |