evaluate

Model Evaluator

An evaluation framework for machine learning models and datasets, providing standardized metrics and tools for comparing model performance.

🤗 Evaluate: A library for easily evaluating machine learning models and datasets.
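As a quick illustration of the library's standardized-metric workflow, the sketch below loads a metric by name and computes it over predictions and references. The metric names ("accuracy", "f1") and the sample values are illustrative assumptions, not taken from this page.

```python
# Minimal usage sketch (assumes the `evaluate` package is installed,
# e.g. via `pip install evaluate`).
import evaluate

# Load a single metric by name and compute it over predictions/references.
accuracy = evaluate.load("accuracy")
print(accuracy.compute(predictions=[0, 1, 1, 0], references=[0, 1, 0, 0]))
# -> {'accuracy': 0.75}

# Combine several metrics so a single compute() call reports all of them,
# which is convenient when comparing models on the same test set.
clf_metrics = evaluate.combine(["accuracy", "f1"])
print(clf_metrics.compute(predictions=[0, 1, 1, 0], references=[0, 1, 0, 0]))
```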

GitHub

2k stars
47 watching
263 forks
Language: Python
Last commit: 4 months ago
Linked from 2 awesome lists

evaluation, machine-learning

Related projects:

Repository | Description | Stars
modelscope/evalscope | A framework for efficiently evaluating and benchmarking large models. | 308
huggingface/lighteval | An all-in-one toolkit for evaluating Large Language Models (LLMs) across multiple backends. | 879
openai/simple-evals | Evaluates language models using standardized benchmarks and prompting techniques. | 2,059
chenllliang/mmevalpro | A benchmarking framework for evaluating Large Multimodal Models by providing rigorous metrics and an efficient evaluation pipeline. | 22
edublancas/sklearn-evaluation | A tool for evaluating and visualizing machine learning model performance. | 3
allenai/olmo-eval | A framework for evaluating language models on NLP tasks. | 326
declare-lab/instruct-eval | An evaluation framework for large language models trained with instruction tuning methods. | 535
open-evals/evals | A framework for evaluating OpenAI models and an open-source registry of benchmarks. | 19
obss/jury | A comprehensive toolkit for evaluating NLP experiments, offering automated metrics and efficient computation. | 187
maluuba/nlg-eval | A toolset for evaluating and comparing natural language generation models. | 1,350
evolvinglmms-lab/lmms-eval | Tools and an evaluation framework for accelerating the development of large multimodal models by providing an efficient way to assess their performance. | 2,164
tsb0601/mmvp | An evaluation framework for multimodal language models' visual capabilities using image and question benchmarks. | 296
tatsu-lab/alpaca_eval | An automatic evaluation tool for large language models. | 1,568
mlabonne/llm-autoeval | A tool to automate the evaluation of large language models in Google Colab using various benchmarks and custom parameters. | 566
stanford-crfm/helm | A framework to evaluate and compare language models by analyzing their performance on various tasks. | 1,981