METplus

Model evaluator

Provides a Python scripting infrastructure for evaluating and visualizing meteorological model performance.

Python scripting infrastructure for the MET (Model Evaluation Tools) verification tools.
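
As a rough sketch of how METplus is typically driven: users write an INI-style configuration file selecting which MET wrappers to run and over which times, then pass it to the run_metplus.py entry point with -c. The variable names below follow METplus wrapper conventions but may differ between releases, and a real run also needs input/output paths and field settings that are omitted here.

    # minimal_gridstat.conf -- hypothetical user config (sketch only)
    [config]
    PROCESS_LIST = GridStat          # MET wrapper(s) to run
    LOOP_BY = VALID                  # loop over valid times (vs. init times)
    VALID_TIME_FMT = %Y%m%d%H
    VALID_BEG = 2023060100
    VALID_END = 2023060100
    # ... forecast/observation input dirs, filename templates, and
    # field definitions would be required for an actual run

    # run it (assumes METplus and MET are installed and on PATH)
    run_metplus.py -c minimal_gridstat.conf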

GitHub

98 stars
16 watching
37 forks
Language: Python
Last commit: 8 days ago
Linked from 1 awesome list


Related projects:

Repository | Description | Stars
pcmdi/pcmdi_metrics | Provides objective comparisons of Earth System Models with one another and available observations | 102
metno/pyaerocom | Tools for evaluating climate and air quality models using Earth observation data | 26
chenllliang/mmevalpro | A benchmarking framework for evaluating Large Multimodal Models by providing rigorous metrics and an efficient evaluation pipeline | 22
openai/simple-evals | A library for evaluating language models using standardized prompts and benchmarking tests | 1,939
metoppv/improver | A library of algorithms for meteorological post-processing and verification | 105
openclimatefix/metnet | An implementation of Google Research's MetNet and MetNet-2 weather forecasting models using PyTorch | 242
esmvalgroup/esmvaltool | A community-developed tool for evaluating climate models and providing diagnostic metrics | 223
m3works/metloom | Provides tools and methods for collecting, managing, and analyzing meteorological data from various sources | 16
evolvinglmms-lab/lmms-eval | Tools and evaluation suite for large multimodal models | 2,058
huggingface/evaluate | An evaluation framework for machine learning models and datasets, providing standardized metrics and tools for comparing model performance | 2,034
maluuba/nlg-eval | A toolset for evaluating and comparing natural language generation models | 1,349
unidata/metpy | A collection of Python tools for reading, visualizing, and performing calculations with weather data | 1,260
mpas-dev/mpas-analysis | Provides tools and analysis for understanding the behavior of large-scale climate models like MPAS within the E3SM framework | 55
ecmwf/metview-python | Provides Python bindings to access and manipulate meteorological data using Metview | 127
open-compass/vlmevalkit | A toolkit for evaluating large vision-language models on various benchmarks and datasets | 1,343