Metrics

Evaluation metrics library

Implements common supervised machine learning evaluation metrics in Python, R, Haskell, and MATLAB / Octave.
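To illustrate the kind of metrics such a library provides, here is a minimal pure-Python sketch of two of them: root mean squared error and average precision at k. These implementations are illustrative only and are not taken from the repository's code or API.

```python
import math

def rmse(actual, predicted):
    """Root mean squared error between two equal-length sequences."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def apk(actual, predicted, k=10):
    """Average precision at k: mean precision at each rank where a relevant
    item first appears in the top-k predictions."""
    if not actual:
        return 0.0
    predicted = predicted[:k]
    score, hits = 0.0, 0
    for i, p in enumerate(predicted):
        if p in actual and p not in predicted[:i]:  # ignore duplicate predictions
            hits += 1
            score += hits / (i + 1)
    return score / min(len(actual), k)

print(rmse([3.0, -0.5, 2.0, 7.0], [2.5, 0.0, 2.0, 8.0]))  # ≈ 0.612
print(apk([1, 2, 3], [1, 4, 2], k=3))                      # ≈ 0.556
```

Regression metrics like RMSE compare paired numeric sequences, while ranking metrics like AP@k score an ordered prediction list against a set of relevant items.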

GitHub

2k stars
87 watching
454 forks
Language: Python
Last commit: almost 2 years ago
Linked from 3 awesome lists


Related projects:

| Repository | Description | Stars |
| --- | --- | --- |
| martinkersner/py-img-seg-eval | Python package with metrics and tools for evaluating image segmentation models | 282 |
| statisticianinstilettos/recmetrics | Python library with metrics and diagnostic tools for evaluating recommender systems | 569 |
| enochkan/torch-metrics | Common machine learning evaluation metrics implemented in PyTorch | 110 |
| astrazeneca/rexmex | Comprehensive set of metrics and tools for evaluating recommender systems | 278 |
| mop/bier | Deep metric learning framework using an adversarial auxiliary loss to improve robustness | 39 |
| scikit-learn-contrib/metric-learn | Python library with efficient implementations of supervised and weakly-supervised metric learning algorithms | 1,399 |
| freedomintelligence/mllm-bench | Evaluates and compares multimodal large language models on various tasks | 55 |
| lartpang/pysodmetrics | Metrics for object segmentation and saliency detection in computer vision | 144 |
| pascaldekloe/metrics | Simple, efficient tracking and exposure of performance metrics in Go applications | 28 |
| hashicorp/go-metrics | Go library for exporting performance and runtime metrics to external systems | 1,461 |
| mshukor/evalign-icl | Evaluates and improves large multimodal models through in-context learning | 20 |
| benhamner/machinelearning.jl | Julia library with a consistent API for common machine learning algorithms | 116 |
| szilard/benchm-ml | Benchmark of machine learning algorithms' performance on large datasets | 1,869 |
| beberlei/metrics | Simple metrics library abstracting different data collection backends | 317 |
| i-gallegos/fair-llm-benchmark | Compiles bias evaluation datasets and access to original data sources for large language models | 110 |