nlg-eval
Model evaluator
A toolset for evaluating and comparing natural language generation models: evaluation code for various unsupervised automated metrics for Natural Language Generation (see the usage sketch below).
1k stars
28 watching
224 forks
Language: Python
last commit: 3 months ago
Linked from 1 awesome list
Topics: bleu, bleu-score, cider, dialog, dialogue, evaluation, machine-translation, meteor, natural-language-generation, natural-language-processing, nlg, nlp, rouge, rouge-l, skip-thought-vectors, skip-thoughts, task-oriented-dialogue
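The metrics listed in the topics above can be computed through the package's Python API. A minimal sketch, assuming the `NLGEval` class and `compute_individual_metrics` method shown in the repository README (the exact metrics returned depend on which scorers are enabled):

```python
# Minimal sketch of sentence-level scoring with nlg-eval, assuming the
# NLGEval API from the repository README. The returned dict typically
# contains keys such as Bleu_1..Bleu_4, METEOR, ROUGE_L and CIDEr,
# depending on which scorers are enabled.
from nlgeval import NLGEval

nlgeval = NLGEval()  # loads the underlying scorers/models

references = [
    "the cat sat on the mat",
    "a cat was sitting on the mat",
]
hypothesis = "the cat is on the mat"

scores = nlgeval.compute_individual_metrics(references, hypothesis)
print(scores)
```

The README also documents a corpus-level `compute_metrics` entry point that scores lists of hypotheses against reference sets; check the repository for its exact signature.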
Related projects:
Repository | Description | Stars |
---|---|---|
mlabonne/llm-autoeval | A tool to automate the evaluation of large language models in Google Colab using various benchmarks and custom parameters. | 558 |
allenai/olmo-eval | An evaluation framework for large language models. | 311 |
chenllliang/mmevalpro | A benchmarking framework for evaluating Large Multimodal Models by providing rigorous metrics and an efficient evaluation pipeline. | 22 |
edublancas/sklearn-evaluation | A tool for evaluating and visualizing machine learning model performance. | 3 |
obss/jury | A comprehensive toolkit for evaluating NLP experiments offering automated metrics and efficient computation. | 188 |
evolvinglmms-lab/lmms-eval | Tools and an evaluation suite for large multimodal models. | 2,058 |
declare-lab/instruct-eval | An evaluation framework for large language models trained with instruction-tuning methods. | 528 |
tatsu-lab/alpaca_eval | An automatic evaluation tool for large language models. | 1,526 |
google-research/bleurt | An evaluation metric for Natural Language Generation based on transfer learning. | 698 |
mlgroupjlu/llm-eval-survey | A repository of papers and resources for evaluating large language models. | 1,433 |
freedomintelligence/mllm-bench | Evaluates and compares the performance of multimodal large language models on various tasks. | 55 |
openai/simple-evals | A library for evaluating language models using standardized prompts and benchmarking tests. | 1,939 |
neulab/explainaboard | An interactive tool to analyze and compare the performance of natural language processing models. | 361 |
huggingface/evaluate | An evaluation framework for machine learning models and datasets, providing standardized metrics and tools for comparing model performance (see the sketch after this table). | 2,034 |
mshukor/evalign-icl | Evaluating and improving large multimodal models through in-context learning. | 20 |
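For comparison, huggingface/evaluate (listed above) exposes a similar load-and-compute pattern for standard metrics. A minimal sketch, assuming the `bleu` metric is available through `evaluate.load`:

```python
# Minimal sketch of computing BLEU with huggingface/evaluate.
# Other metrics (e.g. "rouge", "meteor") follow the same load/compute
# pattern, subject to each metric's own input format.
import evaluate

bleu = evaluate.load("bleu")
results = bleu.compute(
    predictions=["the cat is on the mat"],
    references=[["the cat sat on the mat", "a cat was sitting on the mat"]],
)
print(results["bleu"])
```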