helm
ModelEvaluator
A framework for evaluating and comparing language models by analyzing their performance on various tasks.
Holistic Evaluation of Language Models (HELM) is a framework for increasing the transparency of language models (https://arxiv.org/abs/2211.09110). It is also used to evaluate text-to-image models in HEIM (https://arxiv.org/abs/2311.04287) and vision-language models in VHELM (https://arxiv.org/abs/2410.07112).
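A minimal sketch of a typical evaluation run, driving HELM's documented command-line entry points (`helm-run`, `helm-summarize`) from Python via `subprocess`; the run entry, suite name, and instance count are illustrative, and exact flags may vary between releases:

```python
import subprocess

# Illustrative run entry and suite name; not prescribed by HELM itself.
RUN_ENTRY = "mmlu:subject=philosophy,model=openai/gpt2"
SUITE = "my-suite"

# Run a small slice of one scenario (MMLU) against one model.
subprocess.run(
    [
        "helm-run",
        "--run-entries", RUN_ENTRY,
        "--suite", SUITE,
        "--max-eval-instances", "10",  # keep the run small for a smoke test
    ],
    check=True,
)

# Aggregate the raw run outputs into summary tables for the suite.
subprocess.run(["helm-summarize", "--suite", SUITE], check=True)
```

The summarized results can then be browsed locally with `helm-server`.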
2k stars
36 watching
260 forks
Language: Python
Last commit: about 1 month ago
Linked from 1 awesome list
Related projects:
| Repository | Description | Stars |
| --- | --- | --- |
| huggingface/evaluate | An evaluation framework for machine learning models and datasets, providing standardized metrics and tools for comparing model performance (see the sketch after this table). | 2,063 |
| chenllliang/mmevalpro | A benchmarking framework for evaluating large multimodal models with rigorous metrics and an efficient evaluation pipeline. | 22 |
| stanford-crfm/levanter | A framework for training large language models that prioritizes legibility, scalability, and reproducibility. | 527 |
| flageval-baai/flageval | An evaluation toolkit and platform for assessing large models across various domains. | 307 |
| modelscope/evalscope | A framework for efficiently evaluating and benchmarking large models. | 308 |
| declare-lab/instruct-eval | An evaluation framework for large language models trained with instruction-tuning methods. | 535 |
| allenai/olmo-eval | A framework for evaluating language models on NLP tasks. | 326 |
| mshukor/evalign-icl | A framework for evaluating and improving large multimodal models through in-context learning. | 21 |
| openai/simple-evals | A tool that evaluates language models using standardized benchmarks and prompting techniques. | 2,059 |
| esmvalgroup/esmvaltool | A community-developed tool for evaluating climate models and providing diagnostic metrics. | 230 |
| yuweihao/mm-vet | A benchmark that evaluates the capabilities of large multimodal models on a set of diverse tasks and metrics. | 274 |
| modeloriented/dalex | A tool for understanding and explaining the behavior of complex machine learning models. | 1,390 |
| marvinteichmann/convcrf | An implementation of a convolutional Conditional Random Field model for semantic segmentation tasks. | 564 |
| huggingface/lighteval | An all-in-one toolkit for evaluating large language models (LLMs) across multiple backends. | 879 |
| edublancas/sklearn-evaluation | A tool for evaluating and visualizing machine learning model performance. | 3 |
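In contrast to a full benchmarking harness like HELM, huggingface/evaluate (first row above) exposes individual standardized metrics through a small Python API. A minimal sketch, assuming the `evaluate` package is installed; the toy labels are illustrative:

```python
import evaluate

# Load a standardized metric implementation from the Hugging Face Hub.
accuracy = evaluate.load("accuracy")

# Score a toy set of predictions against reference labels.
result = accuracy.compute(predictions=[0, 1, 1, 0], references=[0, 1, 0, 0])
print(result)  # {'accuracy': 0.75}
```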