nlg-eval

Model evaluator

A toolset for evaluating and comparing natural language generation models

Evaluation code for various unsupervised automated metrics for Natural Language Generation.
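In practice, scoring generated text against references looks roughly like the sketch below, which follows the usage pattern shown in the repository's README (after installing the package and running its one-time setup command to download the metric data). The file names and example sentences are placeholders:

```python
# Minimal sketch of nlg-eval usage, following the repository's README.
# File names and example sentences below are placeholders.
from nlgeval import compute_metrics, NLGEval

# Corpus-level scores from files: one hypothesis per line, plus one or more
# reference files aligned line by line with the hypothesis file.
metrics = compute_metrics(hypothesis='hyp.txt',
                          references=['ref1.txt', 'ref2.txt'])

# Object-oriented API: load the (heavy) metric models once, then score
# individual examples; the no_* flags skip the embedding-based metrics.
nlgeval = NLGEval(no_skipthoughts=True, no_glove=True)
scores = nlgeval.compute_individual_metrics(
    ['the cat sat on the mat'],     # list of reference strings
    'a cat is sitting on the mat')  # hypothesis string
print(scores)  # dict of metric name -> value (Bleu_1..4, METEOR, ROUGE_L, CIDEr, ...)
```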

GitHub stats:

1k stars
28 watching
224 forks
Language: Python
Last commit: 5 months ago
Linked from 1 awesome list

Tags: bleu, bleu-score, cider, dialog, dialogue, evaluation, machine-translation, meteor, natural-language-generation, natural-language-processing, nlg, nlp, rouge, rouge-l, skip-thought-vectors, skip-thoughts, task-oriented-dialogue

Related projects:

| Repository | Description | Stars |
| --- | --- | --- |
| mlabonne/llm-autoeval | A tool to automate the evaluation of large language models in Google Colab using various benchmarks and custom parameters | 566 |
| allenai/olmo-eval | A framework for evaluating language models on NLP tasks | 326 |
| chenllliang/mmevalpro | A benchmarking framework for evaluating large multimodal models with rigorous metrics and an efficient evaluation pipeline | 22 |
| edublancas/sklearn-evaluation | A tool for evaluating and visualizing machine learning model performance | 3 |
| obss/jury | A comprehensive toolkit for evaluating NLP experiments, offering automated metrics and efficient computation | 187 |
| evolvinglmms-lab/lmms-eval | Tools and an evaluation framework for accelerating the development of large multimodal models by providing an efficient way to assess their performance | 2,164 |
| declare-lab/instruct-eval | An evaluation framework for large language models trained with instruction-tuning methods | 535 |
| tatsu-lab/alpaca_eval | An automatic evaluation tool for large language models | 1,568 |
| google-research/bleurt | An evaluation metric for Natural Language Generation based on transfer learning | 705 |
| mlgroupjlu/llm-eval-survey | A collection of papers and resources on evaluating large language models | 1,450 |
| freedomintelligence/mllm-bench | Evaluates and compares the performance of multimodal large language models on various tasks | 56 |
| openai/simple-evals | Evaluates language models using standardized benchmarks and prompting techniques | 2,059 |
| neulab/explainaboard | An interactive tool for analyzing and comparing the performance of natural language processing models | 362 |
| huggingface/evaluate | An evaluation framework for machine learning models and datasets, providing standardized metrics and tools for comparing model performance | 2,063 |
| mshukor/evalign-icl | Evaluating and improving large multimodal models through in-context learning | 21 |