ESMValTool

Model evaluator

A community-developed tool for evaluating climate models and providing diagnostic metrics.

ESMValTool: A community diagnostic and performance metrics tool for routine evaluation of Earth system models in CMIP

GitHub

223 stars
32 watching
128 forks
Language: NCL
Last commit: 8 days ago
Linked from 1 awesome list
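
ESMValTool is driven by YAML recipes that describe the input datasets, preprocessing steps, and diagnostics to run; the `esmvaltool run` command executes a recipe end to end. Below is a minimal sketch of invoking the CLI from Python, assuming ESMValTool is installed and configured; the bundled example recipe examples/recipe_python.yml and the error handling shown are illustrative, not part of this listing.

    import subprocess

    # Run an example recipe via the ESMValTool command-line interface.
    # The recipe defines the datasets, preprocessing, and diagnostics;
    # results (plots, logs, provenance) go to the configured output directory.
    result = subprocess.run(
        ["esmvaltool", "run", "examples/recipe_python.yml"],
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    if result.returncode != 0:
        raise RuntimeError(f"esmvaltool failed:\n{result.stderr}")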


Related projects:

Repository | Description | Stars
chenllliang/mmevalpro | A benchmarking framework for evaluating large multimodal models with rigorous metrics and an efficient evaluation pipeline | 22
evolvinglmms-lab/lmms-eval | Tools and an evaluation suite for large multimodal models | 2,058
mshukor/evalign-icl | Evaluating and improving large multimodal models through in-context learning | 20
escomp/cesm | Tools and infrastructure for managing and running the Community Earth System Model | 342
jpmml/jpmml-evaluator-spark | A library for evaluating predictive models stored in PMML format within Apache Spark | 94
dtcenter/metplus | A Python scripting infrastructure for evaluating and visualizing meteorological model performance | 98
open-compass/vlmevalkit | A toolkit for evaluating large vision-language models on various benchmarks and datasets | 1,343
allenai/olmo-eval | An evaluation framework for large language models | 311
huggingface/evaluate | An evaluation framework for machine learning models and datasets, with standardized metrics and tools for comparing model performance | 2,034
edublancas/sklearn-evaluation | A tool for evaluating and visualizing machine learning model performance | 3
declare-lab/instruct-eval | An evaluation framework for instruction-tuned large language models | 528
pcmdi/pcmdi_metrics | Objective comparisons of Earth system models with one another and with available observations | 102
modelscope/evalscope | A framework for efficient large-model evaluation and performance benchmarking | 248
openai/simple-evals | A library for evaluating language models with standardized prompts and benchmark tests | 1,939
maluuba/nlg-eval | A toolset for evaluating and comparing natural language generation models | 1,349